CN115965824B - Point cloud data labeling method, point cloud target detection method, equipment and storage medium - Google Patents

Point cloud data labeling method, point cloud target detection method, equipment and storage medium

Info

Publication number: CN115965824B
Application number: CN202310184313.6A
Authority: CN (China)
Other versions: CN115965824A (Chinese)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Prior art keywords: point cloud, size, category, target detection, target
Inventors: 彭祎, 何欣栋, 熊子钰, 任广辉, 姚卯青
Assignee: Anhui Weilai Zhijia Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)

Abstract

The invention relates to the technical field of automatic driving, and in particular to a point cloud data labeling method, a point cloud target detection method, a device, and a storage medium, aiming to solve the problem of detecting targets accurately and reliably. The labeling method obtains the initial labeling category and the size of each target detection frame in the point cloud data, and then corrects the initial labeling category according to the size of the target detection frame. In this way, the labeling accuracy of the target detection frame is ensured, and consequently the accuracy of point cloud target detection. The detection method corrects the initial labeling categories of the target detection frames in the point cloud data to form point cloud training data, uses the point cloud training data to train a preset model for point cloud target detection to obtain a point cloud target detection model, and then uses the point cloud target detection model to perform target detection on point cloud frames. In this way, the accuracy of target detection can be improved.

Description

Point cloud data labeling method, point cloud target detection method, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a point cloud data labeling method, a point cloud target detection method, equipment and a storage medium.
Background
In automatic driving control of a vehicle, a radar is generally used to collect point clouds around the vehicle, and target detection is performed on the point clouds to determine whether other vehicles, pedestrians, etc. exist around the vehicle.
A conventional point cloud target detection method first trains a point cloud target detection model and then uses it to perform target detection on point cloud frames acquired by the radar. When training the point cloud target detection model, the categories of the target detection frames in the point cloud training data must be labeled in advance, which is difficult because point cloud data carries little semantic information. At present, images are mainly used for auxiliary labeling; however, because images lack depth information, pixels in an image cannot be fully matched to the point cloud data in a point cloud frame. This approach therefore cannot guarantee the accuracy of category labeling, which degrades the detection accuracy of the point cloud target detection model, so accurate target detection on point cloud frames cannot be achieved with such a model.
Accordingly, there is a need in the art for a new solution to the above-mentioned problems.
Disclosure of Invention
The present invention has been made to overcome the above drawbacks, and provides a point cloud data labeling method, a point cloud target detection method, a device, and a storage medium that solve, or at least partially solve, the technical problem of how to perform target detection accurately and reliably.
In a first aspect, a method for labeling point cloud data is provided, the method comprising:
acquiring an initial annotation category of a target detection frame in the point cloud data;
acquiring the size of the target detection frame;
and correcting the initial annotation category according to the size to determine the final annotation category of the target detection frame.
In one technical scheme of the point cloud data labeling method, the step of correcting the initial labeling category according to the size to determine the final labeling category of the target detection frame specifically includes:
acquiring a similar size category of the initial labeling category, wherein the size of the target represented by the similar size category is similar to the size of the target represented by the initial labeling category;
acquiring a size boundary value between the target represented by the initial labeling category and the target represented by the similar size category;
and correcting the initial labeling category according to the size of the target detection frame and the size boundary value so as to determine the final labeling category of the target detection frame.
In one technical scheme of the point cloud data labeling method, the step of acquiring the similar size category of the initial labeling category specifically includes:
for each initial labeling category, clustering all target detection frames labeled with the initial labeling category according to the size of each target detection frame labeled with the initial labeling category in the point cloud data, and determining the size range of the target represented by the initial labeling category according to the clustering result;
and respectively acquiring the similar size category of each initial annotation category from all the initial annotation categories according to the size range of the target represented by each initial annotation category.
In one technical scheme of the point cloud data labeling method, the step of "obtaining a size boundary value between the target represented by the initial labeling category and the target represented by the similar size category" specifically includes:
respectively acquiring the size range of the target represented by the initial labeling category and the size range of the target represented by the similar size category;
and acquiring the size boundary value according to the size range.
In one technical scheme of the point cloud data labeling method, the step of correcting the initial labeling category according to the size of the target detection frame and the size boundary value to determine the final labeling category of the target detection frame specifically includes:
according to the size relation between the target represented by the initial labeling category and the target represented by the similar size category, respectively determining a large size category representing the large size target and a small size category representing the small size target from the initial labeling category and the similar size category;
comparing the size of the target detection frame with the size boundary value;
if the size of the target detection frame is larger than the size boundary value, the final labeling category is the large-size category;
otherwise, the final annotation class is the small-size class.
In one technical scheme of the point cloud data labeling method, the step of comparing the size of the target detection frame with the size boundary value specifically includes:
comparing the length and the width of the target detection frame with the length and the width in the size boundary value, respectively;
if the length and the width in the size are respectively larger than the length and the width in the size boundary value, the final labeling category is the large-size category;
otherwise, the final annotation class is the small-size class.
In a second aspect, a method for detecting a point cloud target is provided, the method comprising:
by adopting the point cloud data labeling method provided by the first aspect, correcting the initial labeling category of the target detection frame in the point cloud data to determine the final labeling category of the target detection frame, so as to form point cloud training data;
performing point cloud target detection training on a preset model by adopting the point cloud training data to acquire a point cloud target detection model;
and adopting the point cloud target detection model to carry out target detection on the point cloud frame.
In one technical scheme of the point cloud target detection method, the step of adopting the point cloud target detection model to perform target detection on a point cloud frame specifically includes:
extracting features of the point cloud frame by adopting a three-dimensional sparse convolution network in the point cloud target detection model;
adopting a top view convolution network in the point cloud target detection model to perform feature extraction on the features extracted by the three-dimensional sparse convolution network again;
and adopting a target detection head network in the point cloud target detection model to perform target detection on the characteristics extracted by the top view convolution network.
In a third aspect, a computer device is provided, where the computer device includes a processor and a storage device, where the storage device is adapted to store a plurality of program codes, where the program codes are adapted to be loaded and executed by the processor to perform the method according to any one of the above-mentioned point cloud data labeling or point cloud object detection methods.
In a fourth aspect, a computer readable storage medium is provided, where a plurality of program codes are stored, the program codes are adapted to be loaded and executed by a processor to perform the method according to any one of the above-mentioned point cloud data labeling or point cloud object detection methods.
The technical scheme provided by the invention has at least one or more of the following beneficial effects:
in the technical scheme for implementing the point cloud data labeling method, the initial labeling category and the size of the target detection frame in the point cloud data can be acquired, and then the initial labeling category is corrected according to the size of the target detection frame so as to determine the final labeling category of the target detection frame. By the method, objects with similar sizes but different types can be prevented from being confused, the labeling accuracy of the object detection frame is ensured, and the accuracy of point cloud object detection is further improved.
In the technical scheme for implementing the point cloud target detection, the initial annotation type of the target detection frame in the point cloud data can be corrected by adopting the point cloud data annotation method to determine the final annotation type of the target detection frame, point cloud training data is formed, then point cloud target detection training is carried out on a preset model by adopting the point cloud training data to obtain a point cloud target detection model, and finally target detection is carried out on the point cloud frame by adopting the point cloud target detection model. By the method, the point cloud training data with accurate labeling category can be obtained, so that the trained point cloud target detection model is guaranteed to have higher point cloud target detection capability, and the accuracy of target detection on the point cloud frame by adopting the point cloud target detection model is improved.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are for illustrative purposes only and are not intended to limit the scope of the present invention. Wherein:
FIG. 1 is a flow chart illustrating main steps of a point cloud data labeling method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the main steps of a method for correcting an initial annotation class according to the size of a target detection frame, according to one embodiment of the invention;
FIG. 3 is a flow chart illustrating the main steps of a method for detecting a point cloud object according to an embodiment of the present invention;
FIG. 4 is a flow chart of the main steps of a method for performing object detection on a point cloud frame using a point cloud object detection model according to one embodiment of the present invention;
fig. 5 is a main structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "processor" may include hardware, software, or a combination of both. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. The computer readable storage medium includes any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like.
The embodiment of the point cloud data labeling method provided by the invention is explained below.
Referring to fig. 1, fig. 1 is a schematic flow chart of main steps of a point cloud data labeling method according to an embodiment of the present invention. As shown in fig. 1, the point cloud data labeling method in the embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: and acquiring the initial annotation category of the target detection frame in the point cloud data.
The point cloud data may be data acquired using a radar (e.g., lidar). In the embodiment of the invention, the category of each target detection frame in the point cloud data can be labeled using a conventional data labeling method in the point cloud field to form an initial labeling category. In addition to the category, the position of the target detection frame may also be labeled. The embodiment of the invention does not limit the specific data labeling method. Taking vehicles and vulnerable road users (VRUs) as an example, the initial labeling category of a target detection frame may be vehicle or VRU.
Step S102: the size of the target detection frame is obtained.
Specifically, the size of the target detection frame can be obtained according to the position of the target detection frame. For example, the position of each vertex in the target detection frame is acquired, and then the length and width of the target detection frame are calculated from the position of each vertex, and the length and width are taken as the size of the target detection frame. In addition, the area of the target detection frame may be calculated from the length and width of the target detection frame, and the area may be set as the size of the target detection frame.
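The size computation described above can be sketched as follows; the function name and the axis-aligned vertex layout are illustrative assumptions, not details from the patent:

```python
# Illustrative sketch only: the helper and vertex layout below are
# assumptions, not the patent's described implementation.

def box_size(vertices):
    """Compute (length, width, area) of an axis-aligned detection frame
    from its 2D vertex positions, taking length as the larger extent."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    extent_x = max(xs) - min(xs)
    extent_y = max(ys) - min(ys)
    length, width = max(extent_x, extent_y), min(extent_x, extent_y)
    return length, width, length * width

# A 4 m x 2 m box (roughly a car-sized detection frame)
print(box_size([(0, 0), (4, 0), (4, 2), (0, 2)]))  # (4, 2, 8)
```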
Step S103: and correcting the initial annotation category according to the size to determine the final annotation category of the target detection frame.
In practical applications, many targets are similar in size but different in type: pedestrians and cyclists are similar in size, as are cars and tricycles. Because the targets are similar in size, the sizes of their target detection frames in the point cloud data are also similar, so confusion and misclassification can easily occur during category labeling, for example, a car being labeled as a tricycle. To address this, in the embodiment of the invention, the initial labeling category can be corrected using the size of the target detection frame, so that category mislabeling is prevented and the accuracy of category labeling of the target detection frame is improved.
Based on the methods described in steps S101 to S103, the categories of target detection frames that are similar in size but different in type can be prevented from being mislabeled, ensuring the accuracy of category labeling.
Step S103 is further described below.
In order to improve accuracy of category correction, in the embodiment of the invention, a size boundary value between target detection frames with similar sizes but different types can be obtained, and correction is performed according to the size boundary value. Specifically, in some embodiments of step S103 described above, the initial annotation class may be corrected by the following steps S1031-S1033.
Step S1031: and acquiring the size similar type of the initial labeling type, wherein the size of the target represented by the size similar type is similar to the size of the target represented by the initial labeling type.
In the embodiment of the invention, the sizes of the targets in different categories can be statistically analyzed, and the sizes of the targets are determined to be relatively close. For example, the initial labeling category is walker and the similar size category may be rider.
Step S1032: and acquiring a size boundary value between the target represented by the initial labeling category and the target represented by the size similar category.
The size boundary value may be used to distinguish between an initial labeling category and a similar size category.
Step S1033: and correcting the initial labeling category according to the size and the size boundary value of the target detection frame so as to determine the final labeling category of the target detection frame.
In the embodiment of the invention, the size of the target detection frame can be compared with the size boundary value to determine whether the target detection frame belongs to the initial labeling category or to the similar size category; if it belongs to the initial labeling category, no correction is needed; if it belongs to the similar size category, the label is corrected to the similar size category.
Based on the methods described in the steps S1031 to S1033, the size boundary value can be used to accurately distinguish whether the target detection frame belongs to the initial labeling category or the similar size category, thereby ensuring accuracy of category correction.
The above steps S1031 to S1033 are further described below.
1. Step S1031 will be described.
According to the description of the foregoing embodiments, in the embodiments of the present invention, the sizes of the targets of different categories may be statistically analyzed to determine which targets are relatively close to each other, so as to obtain the size similar category of the initial labeling category. In order to improve the accuracy and efficiency of statistical analysis, the embodiment of the invention can adopt a clustering method to carry out statistical analysis so as to determine the similar size categories. Specifically, in some embodiments of the step S1031, the similar size categories of each initial labeling category may be obtained through the following steps 11 to 12, respectively.
Step 11: and clustering all target detection frames marked with the current initial marking category according to the size of the target detection frame marked with the current initial marking category in the point cloud data according to each initial marking category, and determining the size range of the target represented by the current initial marking category according to the clustering result.
The target detection frames with the approximate sizes can be clustered into a cluster through clustering, then the size range of the cluster is obtained, and the size range is used as the size range of the target represented by the initial labeling category. In the embodiment of the invention, the target detection frames can be clustered by adopting a conventional clustering method in the technical field of data clustering, and the embodiment of the invention is not particularly limited.
Step 12: and respectively acquiring the similar size category of each initial annotation category from all the initial annotation categories according to the size range of the target represented by each initial annotation category.
Specifically, the size ranges of the targets represented by the initial labeling categories may be compared with one another, and categories whose size ranges are close are taken as similar size categories of each other.
Based on the methods described in the steps 11 to 12, the similar size category of each initial labeling category can be rapidly and accurately determined by a clustering method, which is beneficial to improving the accuracy and efficiency of the initial labeling category correction.
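A minimal sketch of steps 11 to 12, under simplifying assumptions: the per-category size range is taken directly as the min/max of the labeled boxes' lengths (a real implementation would cluster first to reject outliers), and two categories count as "similar" when their ranges overlap or lie within an assumed gap tolerance:

```python
# Hedged sketch: the overlap-based similarity test and gap_tol value are
# assumed stand-ins for the clustering-based analysis in the text.

def size_range(sizes):
    """Approximate a category's size range from its labeled boxes'
    sizes (here simply the min/max of the box lengths)."""
    return min(sizes), max(sizes)

def similar_size_categories(ranges, gap_tol=1.0):
    """Pair up categories whose size ranges overlap or nearly touch."""
    pairs = []
    names = list(ranges)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            lo = max(ranges[a][0], ranges[b][0])
            hi = min(ranges[a][1], ranges[b][1])
            if hi + gap_tol >= lo:  # overlapping, or gap below gap_tol
                pairs.append((a, b))
    return pairs

ranges = {
    "pedestrian": size_range([0.5, 0.6, 0.8]),   # box lengths in metres
    "cyclist":    size_range([1.5, 1.7, 1.9]),
    "car":        size_range([4.2, 4.6, 5.0]),
}
print(similar_size_categories(ranges))  # [('pedestrian', 'cyclist')]
```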
2. Step S1032 is explained.
After the size range of the target represented by each initial labeling category is obtained by the method described in the foregoing step 11, the size range may be used to determine the size boundary value between the targets represented by the initial labeling category and the targets represented by the similar size categories. Specifically, in some embodiments of the above step S1032, the size boundary value may be acquired by the following steps 21 to 22.
Step 21: and respectively acquiring the size range of the target represented by the initial labeling category and the size range of the target represented by the similar category.
Step 22: and acquiring a size boundary value according to the size range.
In the embodiment of the invention, a size value lying between the size range of the target represented by the initial labeling category and that of the similar size category can be selected as the size boundary value. The embodiment of the invention does not limit the specific selection method, as long as a value between the two size ranges is chosen. For example, the midpoint between the two size ranges may be selected.
Based on the methods described in the steps 21 to 22, the size boundary values capable of effectively distinguishing the initial labeling category and the similar size categories can be accurately obtained, and the accuracy of correcting the initial labeling category is further improved.
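Taking the midpoint mentioned above as a concrete choice, the boundary-value computation can be sketched as follows; the numeric ranges are illustrative assumptions:

```python
# Hedged sketch: the size boundary value is taken as the midpoint of the
# gap between two categories' size ranges (one assumed concrete choice).

def boundary_value(small_range, large_range):
    """Midpoint between the upper end of the smaller category's size
    range and the lower end of the larger category's size range."""
    return (small_range[1] + large_range[0]) / 2

# Assumed box-length ranges (metres) for pedestrians vs cyclists
print(boundary_value((0.5, 1.0), (1.5, 2.0)))  # 1.25
```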
3. Step S1033 will be described.
According to the foregoing description of the embodiments, in the embodiments of the present invention, the size of the target detection frame and the size boundary value may be compared, so as to complete the category correction. When there are more target classes included in the point cloud data, in order to improve the efficiency of class correction, in some embodiments of step S1033, the initial labeling class may be corrected to determine the final labeling class of the target detection frame through the following steps 31 to 34.
Step 31: and respectively determining a large-size category representing a large-size target and a small-size category representing a small-size target from the initial labeling category and the similar-size category according to the size relation between the target represented by the initial labeling category and the target represented by the similar-size category.
For example, for cars and tricycles, cars may be regarded as the large-size category and tricycles as the small-size category; for cars and buses, buses may be regarded as the large-size category and cars as the small-size category.
Step 32: comparing the size of the target detection frame with a size boundary value;
if the size of the target detection frame is greater than the size boundary value, it indicates that the target detection frame belongs to the large-size category, and therefore, the process goes to step 33; if the size of the target detection frame is less than or equal to the size boundary value, it indicates that the target detection frame belongs to the small-size category, and therefore, the process proceeds to step 34.
As can be seen from the description of the foregoing embodiments, the size of the target detection frame may be the length and the width thereof, and the size of the target detection frame may be compared with the length and the width in the size boundary values, respectively; if the length and the width in the size of the target detection frame are respectively larger than the length and the width in the size boundary value, the target detection frame belongs to a large-size class; otherwise, the target detection frame belongs to the class of small size. By the method, the sizes of different target detection frames can be accurately distinguished, and the accuracy of category correction is improved.
Step 33: the final annotation class is determined to be a large-size class.
Step 34: the final annotation class is determined to be a small-size class.
Based on the methods described in steps 31 to 34, the initial labeling category and the similar size category thereof can be further divided into a group of large size category and small size category, so that even if the target categories contained in the point cloud data are more, the correction of each target category can be completed rapidly, and the efficiency of category correction is further improved.
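Steps 31 to 34 can be sketched as below; the category names and boundary numbers are illustrative assumptions, with the large-size category assigned only when both length and width exceed the boundary, as described above:

```python
# Hedged sketch of steps 31-34: correct a box's label by comparing its
# length and width against the size boundary value. Category names and
# boundary numbers are illustrative assumptions.

def correct_label(box_len, box_wid, boundary, large_cat, small_cat):
    """Assign the large-size category only if BOTH length and width
    exceed the corresponding boundary values; otherwise the small one."""
    b_len, b_wid = boundary
    if box_len > b_len and box_wid > b_wid:
        return large_cat
    return small_cat

boundary = (3.5, 1.5)  # assumed (length, width) boundary: car vs tricycle
print(correct_label(4.5, 1.8, boundary, "car", "tricycle"))  # car
print(correct_label(2.8, 1.2, boundary, "car", "tricycle"))  # tricycle
```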
The following describes an embodiment of a point cloud target detection method provided by the invention.
Referring to fig. 3, fig. 3 is a schematic flow chart of main steps of a point cloud object detection method according to an embodiment of the present invention. As shown in fig. 3, the method for detecting a point cloud target in the embodiment of the invention mainly includes the following steps S201 to S203.
Step S201: and correcting the initial annotation category of the target detection frame in the point cloud data by adopting a point cloud data annotation method so as to determine the final annotation category of the target detection frame and form point cloud training data.
The point cloud data labeling method in this step may be the point cloud data labeling method described in the foregoing method embodiment.
Step S202: and carrying out point cloud target detection training on the preset model by adopting the point cloud training data so as to acquire a point cloud target detection model.
In the embodiment of the invention, a conventional model training method in the technical field of machine learning can be adopted to perform point cloud target detection training on a preset model, and the embodiment of the invention is not particularly limited to the point cloud target detection training. For example, the point cloud training data are input into a preset model, the loss value of the model is calculated through forward propagation, the parameter gradient of the model parameter is calculated according to the loss value, the model parameter is updated according to the parameter gradient through backward propagation, training is stopped until the preset model meets the convergence condition, and the trained model is used as the point cloud target detection model.
Taking vehicle and VRU detection as an example, the method can train to obtain a point cloud target detection model capable of respectively detecting the vehicle and the VRU on the point cloud data.
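The training loop sketched above (forward pass, loss, parameter gradient, update, convergence check) can be illustrated on a toy one-parameter model; the actual preset model, loss, and convergence condition are not specified in the text, so everything below is an assumed stand-in:

```python
# Toy stand-in for the training loop described above: fit a single
# scalar weight w by gradient descent on a squared-error loss. The
# model, data, and convergence rule are illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w, lr = 0.0, 0.05

for step in range(1000):
    # Forward propagation: compute the loss value over the training data
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    if loss < 1e-8:          # convergence condition: stop training
        break
    # Backward propagation: gradient of the loss w.r.t. the parameter w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad           # update the model parameter

print(round(w, 4))  # ~2.0 once converged
```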
Step S203: and adopting a point cloud target detection model to detect the target of the point cloud frame.
The point cloud frame may be a frame of point cloud data acquired by the radar, after each frame of point cloud data acquired by the radar is obtained, each frame of point cloud data is respectively input into a point cloud target detection model, and the point cloud target detection model may respectively detect a target of each frame of point cloud data and output whether each frame of point cloud data contains a category (such as a vehicle) of the target, or may further output information such as a position of the target.
Based on the methods described in the above steps S201 to S203, a point cloud target detection model with a higher point cloud target detection capability may be obtained through training, so that information such as a target class contained in a point cloud frame may be accurately detected by using the point cloud target detection model.
Step S203 is further described below.
In the embodiment of the invention, the point cloud target detection model can comprise a feature extraction network and a target detection head network, wherein the feature extraction network can be used for extracting the features of the point cloud frame, and the target detection head network can be used for carrying out target detection on the features of the point cloud frame. Specifically, the feature extraction network may include a three-dimensional sparse convolution network and a top view convolution network, the three-dimensional sparse convolution network may be used to extract a three-dimensional feature of a point cloud frame and input the three-dimensional feature into the top view convolution network, the top view convolution network may be used to perform feature extraction on the input three-dimensional feature again, obtain a two-dimensional feature of the point cloud frame and input the two-dimensional feature into the target detection head network, and the target detection head network may perform target detection on the two-dimensional feature. In some embodiments, different target detection head networks may be set for different types of targets, each target detection head network being used to detect a different type of target, respectively. Taking a vehicle and a VRU as an example, two target detection head networks may be provided, one for detecting the vehicle and the other for detecting the VRU.
Based on the model structure of the point cloud object detection model, in some embodiments of the step S203, object detection may be performed on a point cloud frame through the following steps S2031 to S2033 shown in fig. 4.
Step S2031: extract features of the point cloud frame using the three-dimensional sparse convolution network in the point cloud target detection model.
Step S2032: use the top view convolution network in the point cloud target detection model to perform feature extraction again on the features extracted by the three-dimensional sparse convolution network.
Step S2033: use the target detection head network in the point cloud target detection model to perform target detection on the features extracted by the top view convolution network.
Based on the method described in steps S2031 to S2033, the features of the point cloud frame can be obtained from both three-dimensional and two-dimensional perspectives. Extracting features from the three-dimensional perspective captures the shape information of the target, while extracting features from the two-dimensional (top view) perspective improves detection efficiency and accurately captures information such as the position of the target. Performing target detection with features that contain both kinds of information can therefore remarkably improve the accuracy and efficiency of target detection.
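As an illustration only (not part of the patent), the two-stage feature extraction and per-class detection heads of steps S2031 to S2033 can be sketched with simple stand-ins: voxelization stands in for the three-dimensional sparse convolution network, z-axis pooling stands in for the top view convolution network, and a per-class threshold stands in for each detection head. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    # Stand-in for the three-dimensional sparse convolution network:
    # map each point to an occupied voxel and keep a per-voxel point
    # count as a toy "3D feature".
    idx = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for v in map(tuple, idx):
        voxels[v] = voxels.get(v, 0) + 1
    return voxels

def to_bev(voxels, grid=(8, 8)):
    # Stand-in for the top view convolution network: collapse the z
    # axis so each (x, y) cell holds the summed voxel features,
    # yielding a two-dimensional (bird's-eye-view) feature map.
    bev = np.zeros(grid)
    for (x, y, _z), count in voxels.items():
        if 0 <= x < grid[0] and 0 <= y < grid[1]:
            bev[x, y] += count
    return bev

def detect(bev, threshold):
    # Stand-in for one target detection head: flag BEV cells whose
    # feature exceeds a class-specific threshold.
    xs, ys = np.nonzero(bev > threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# Toy point cloud frame: 200 random points in a 4 m x 4 m x 4 m volume.
points = np.random.default_rng(0).uniform(0.0, 4.0, size=(200, 3))
bev = to_bev(voxelize(points))
vehicle_hits = detect(bev, threshold=5)  # one head per target type
vru_hits = detect(bev, threshold=2)
```

In a real system each stand-in would be a learned network; the point of the sketch is only the data flow of steps S2031 to S2033: 3D features, then a 2D top-view map, then one head per target type.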
The effect of the point cloud target detection method provided by the invention is illustrated below taking pedestrians and bicycles as examples. As shown in table 1 below, without category correction for pedestrians and bicycles, the detection precision and recall rate for pedestrians are 77.0% and 63.7% respectively, and for bicycles 86.0% and 69.4% respectively. With category correction, the detection precision and recall rate for pedestrians are 79.7% and 63.6%, and for bicycles 86.0% and 72.2%. The detection precision for pedestrians and the recall rate for bicycles are thus both improved.
TABLE 1
Category correction | Pedestrian precision | Pedestrian recall | Bicycle precision | Bicycle recall
No                  | 77.0%                | 63.7%             | 86.0%             | 69.4%
Yes                 | 79.7%                | 63.6%             | 86.0%             | 72.2%
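As an illustration only (not part of the patent text), the size-based correction rule evaluated in table 1, as set out in claim 1, can be sketched in a few lines: a detection box receives the large-size category only if both its length and its width exceed the size boundary values, and otherwise receives the small-size category. The boundary values below are hypothetical, not taken from the patent.

```python
def correct_category(box_size, boundary, large_cat, small_cat):
    # Size-based label correction: the final category is the large-size
    # one only if BOTH the length and the width of the detection box
    # exceed the corresponding boundary values; otherwise it is the
    # small-size category.
    length, width = box_size
    boundary_length, boundary_width = boundary
    if length > boundary_length and width > boundary_width:
        return large_cat
    return small_cat

# Hypothetical (length, width) boundary in metres between pedestrian
# (small-size) and bicycle (large-size) boxes.
boundary = (1.2, 0.7)
label_a = correct_category((1.6, 0.8), boundary, "bicycle", "pedestrian")  # "bicycle"
label_b = correct_category((0.8, 0.6), boundary, "bicycle", "pedestrian")  # "pedestrian"
```

A box initially labeled as a pedestrian but exceeding the boundary in both dimensions would thus be relabeled as a bicycle; corrections of this kind produced the precision and recall gains reported in table 1.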
It should be noted that, although the foregoing embodiments describe the steps in a specific order, those skilled in the art will understand that the steps need not be performed in that order to achieve the effects of the present invention, and may be performed simultaneously (in parallel) or in other orders; solutions with such adjustments are equivalent to those described in the present invention and therefore also fall within its scope of protection.
It will be appreciated by those skilled in the art that all or part of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable storage medium may include any entity or device capable of carrying the computer program code, such as a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, or a software distribution medium. It should be noted that the content included in the computer readable storage medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a computer device.
Referring to fig. 5, fig. 5 is a schematic diagram of the main structure of an embodiment of a computer device according to the present invention. As shown in fig. 5, the computer device in the embodiment of the present invention mainly includes a storage device and a processor. The storage device may be configured to store a program for executing the point cloud data labeling or point cloud target detection method of the above method embodiments, and the processor may be configured to execute the program in the storage device, including, but not limited to, the program for executing the point cloud data labeling or point cloud target detection method of the above method embodiments. For convenience of explanation, only the portions relevant to the embodiments of the present invention are shown; for specific technical details not disclosed, please refer to the method portions of the embodiments of the present invention.
The computer device in the embodiments of the present invention may be a control apparatus formed of various electronic devices. In some possible implementations, the computer device may include a plurality of storage devices and a plurality of processors. The program for executing the point cloud data labeling or point cloud target detection method of the above method embodiments may be divided into a plurality of subprograms, each of which may be loaded and executed by a processor to perform different steps of the method. Specifically, the subprograms may be stored in different storage devices, and each processor may be configured to execute the programs in one or more storage devices, so that the processors, each executing different steps of the method, jointly implement the point cloud data labeling or point cloud target detection method of the above method embodiments.
The plurality of processors may be processors disposed on the same device, for example, the computer device may be a high-performance device composed of a plurality of processors, and the plurality of processors may be processors configured on the high-performance device. In addition, the plurality of processors may be processors disposed on different devices, for example, the computer device may be a server cluster, and the plurality of processors may be processors on different servers in the server cluster.
Further, the invention also provides a computer readable storage medium.
In an embodiment of a computer readable storage medium according to the present invention, the computer readable storage medium may be configured to store a program for performing the point cloud data labeling or point cloud target detection method of the above method embodiments, and the program may be loaded and executed by a processor to implement the method. For convenience of explanation, only the portions relevant to the embodiments of the present invention are shown; for specific technical details not disclosed, please refer to the method portions of the embodiments of the present invention. The computer readable storage medium may be a storage device formed of various electronic devices; optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Thus far, the technical solution of the present invention has been described in connection with one embodiment shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.

Claims (7)

1. A point cloud data labeling method, the method comprising:
acquiring an initial annotation category of a target detection frame in the point cloud data;
acquiring the size of the target detection frame;
acquiring a size-similar category of the initial labeling category, and acquiring a size boundary value between the target represented by the initial labeling category and the target represented by the size-similar category;
correcting the initial labeling category according to the size of the target detection frame and the size boundary value to determine a final labeling category of the target detection frame, wherein the correcting comprises: determining, from the initial labeling category and the size-similar category, a large-size category representing the larger target and a small-size category representing the smaller target, according to the relative sizes of the target represented by the initial labeling category and the target represented by the size-similar category; comparing the size of the target detection frame with the length and the width in the size boundary value respectively; if the length and the width in the size of the target detection frame are respectively larger than the length and the width in the size boundary value, the final labeling category is the large-size category; otherwise, the final labeling category is the small-size category;
wherein the size of the target represented by the size-similar category is close to the size of the target represented by the initial labeling category.
2. The method for labeling point cloud data according to claim 1, wherein the step of acquiring the size-similar category of the initial labeling category specifically comprises:
for each initial labeling category, clustering all target detection frames labeled with the initial labeling category according to the size of each such target detection frame in the point cloud data, and determining the size range of the target represented by the initial labeling category according to the clustering result;
and acquiring, from all the initial labeling categories, the size-similar category of each initial labeling category according to the size range of the target represented by each initial labeling category.
3. The method according to claim 2, wherein the step of acquiring a size boundary value between the target represented by the initial labeling category and the target represented by the size-similar category specifically comprises:
respectively acquiring the size range of the target represented by the initial labeling category and the size range of the target represented by the size-similar category;
and acquiring the size boundary value according to the size range.
4. A method for detecting a point cloud target, the method comprising:
correcting an initial labeling category of a target detection frame in point cloud data by using the point cloud data labeling method of any one of claims 1 to 3 to determine a final labeling category of the target detection frame, so as to form point cloud training data;
performing point cloud target detection training on a preset model by adopting the point cloud training data to acquire a point cloud target detection model;
and adopting the point cloud target detection model to carry out target detection on the point cloud frame.
5. The method for detecting a point cloud object according to claim 4, wherein the step of performing object detection on a point cloud frame by using the point cloud object detection model specifically comprises:
extracting features of the point cloud frame by adopting a three-dimensional sparse convolution network in the point cloud target detection model;
adopting a top view convolution network in the point cloud target detection model to perform feature extraction on the features extracted by the three-dimensional sparse convolution network again;
and adopting a target detection head network in the point cloud target detection model to perform target detection on the characteristics extracted by the top view convolution network.
6. A computer device comprising a processor and a storage device adapted to store a plurality of program codes, characterized in that the program codes are adapted to be loaded and executed by the processor to perform the point cloud data labeling method of any one of claims 1 to 3 or the point cloud target detection method of any one of claims 4 to 5.
7. A computer readable storage medium in which a plurality of program codes are stored, characterized in that the program codes are adapted to be loaded and executed by a processor to perform the point cloud data labeling method of any one of claims 1 to 3 or the point cloud target detection method of any one of claims 4 to 5.
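As an illustration only (not part of the claims), the clustering-based size-range estimation of claim 2 and the boundary-value derivation of claim 3 could be sketched as follows. A minimal one-dimensional k-means stands in for the clustering algorithm, which the claims do not specify; taking the dominant cluster's min/max as the size range and the midpoint between adjacent ranges as the boundary are likewise assumptions, and all numeric values are hypothetical.

```python
def kmeans_1d(values, k=2, iters=20):
    # Minimal 1-D k-means, initialized at evenly spaced centers; a
    # stand-in for whatever clustering algorithm an implementation uses.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def size_range(lengths):
    # Claim 2 sketch: cluster the annotated box lengths of one category
    # and take the dominant cluster's min/max as the category's size
    # range, dropping mislabeled outliers.
    labels = kmeans_1d(lengths)
    counts = {j: labels.count(j) for j in set(labels)}
    dominant = max(counts, key=counts.get)
    cluster = [v for v, l in zip(lengths, labels) if l == dominant]
    return min(cluster), max(cluster)

def boundary(range_a, range_b):
    # Claim 3 sketch: derive the boundary between two size-adjacent
    # categories as the midpoint between their adjacent range edges.
    (lo_a, hi_a), (lo_b, hi_b) = sorted([range_a, range_b])
    return (hi_a + lo_b) / 2

# Hypothetical annotated box lengths (metres) for two categories; the
# 1.7 m "pedestrian" is an outlier the clustering is meant to discard.
pedestrian_lengths = [0.5, 0.6, 0.55, 0.6, 0.5, 1.7]
bicycle_lengths = [1.6, 1.7, 1.8, 1.65, 1.75]
ped_range = size_range(pedestrian_lengths)
bike_range = size_range(bicycle_lengths)
b = boundary(ped_range, bike_range)
```

The boundary value produced this way is what the correction rule of claim 1 compares each detection box against.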
CN202310184313.6A 2023-03-01 2023-03-01 Point cloud data labeling method, point cloud target detection method, equipment and storage medium Active CN115965824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310184313.6A CN115965824B (en) 2023-03-01 2023-03-01 Point cloud data labeling method, point cloud target detection method, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115965824A (en) 2023-04-14
CN115965824B (en) 2023-06-06

Family

ID=85901509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310184313.6A Active CN115965824B (en) 2023-03-01 2023-03-01 Point cloud data labeling method, point cloud target detection method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115965824B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022179164A1 (en) * 2021-02-24 2022-09-01 华为技术有限公司 Point cloud data processing method, training data processing method, and apparatus
CN115311512A (en) * 2022-06-28 2022-11-08 广州文远知行科技有限公司 Data labeling method, device, equipment and storage medium
JP7224682B1 (en) * 2021-08-17 2023-02-20 忠北大学校産学協力団 3D multiple object detection device and method for autonomous driving

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9536311B2 (en) * 2014-09-29 2017-01-03 General Electric Company System and method for component detection
US10936905B2 (en) * 2018-07-06 2021-03-02 Tata Consultancy Services Limited Method and system for automatic object annotation using deep network
CN109188457B (en) * 2018-09-07 2021-06-11 百度在线网络技术(北京)有限公司 Object detection frame generation method, device, equipment, storage medium and vehicle
US11774250B2 (en) * 2019-07-05 2023-10-03 Nvidia Corporation Using high definition maps for generating synthetic sensor data for autonomous vehicles
CN110647835B (en) * 2019-09-18 2023-04-25 合肥中科智驰科技有限公司 Target detection and classification method and system based on 3D point cloud data
WO2021222279A1 (en) * 2020-04-28 2021-11-04 Raven Industries, Inc. Object detection and tracking for automated operation of vehicles and machinery
CN111860493B (en) * 2020-06-12 2024-02-09 北京图森智途科技有限公司 Target detection method and device based on point cloud data
CN115049700A (en) * 2021-03-09 2022-09-13 华为技术有限公司 Target detection method and device
CN114882211B (en) * 2022-03-01 2024-10-01 广州文远知行科技有限公司 Automatic time sequence data labeling method and device, electronic equipment, medium and product
CN115147474B (en) * 2022-07-01 2023-05-02 小米汽车科技有限公司 Method and device for generating point cloud annotation model, electronic equipment and storage medium
CN115457506A (en) * 2022-08-31 2022-12-09 中汽创智科技有限公司 Target detection method, device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022179164A1 (en) * 2021-02-24 2022-09-01 华为技术有限公司 Point cloud data processing method, training data processing method, and apparatus
JP7224682B1 (en) * 2021-08-17 2023-02-20 忠北大学校産学協力団 3D multiple object detection device and method for autonomous driving
CN115311512A (en) * 2022-06-28 2022-11-08 广州文远知行科技有限公司 Data labeling method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Cloud-Assisted Malware Detection and Suppression Framework for Wireless Multimedia System in IoT Based on Dynamic Differential Game; Weiwei Zhou; Bin Yu; China Communications (Issue 02); full text *
Obstacle detection research for intelligent vehicles in field environments based on multi-sensor fusion (in English); 胡劲文; 郑博尹; 王策; 赵春晖; 侯晓磊; 潘泉; 徐钊; Frontiers of Information Technology & Electronic Engineering (Issue 05); full text *

Also Published As

Publication number Publication date
CN115965824A (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN111783844B (en) Deep learning-based target detection model training method, device and storage medium
CN111476234B (en) License plate character shielding recognition method and device, storage medium and intelligent equipment
CN110956081B (en) Method and device for identifying position relationship between vehicle and traffic marking and storage medium
CN113298050B (en) Lane line recognition model training method and device and lane line recognition method and device
WO2013082297A2 (en) Classifying attribute data intervals
CN112132033B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CN111178367A (en) Feature determination device and method for adapting to multiple object sizes
CN104966109A (en) Medical laboratory report image classification method and apparatus
US11256922B1 (en) Semantic representation method and system based on aerial surveillance video and electronic device
CN111429512A (en) Image processing method and device, storage medium and processor
CN115965824B (en) Point cloud data labeling method, point cloud target detection method, equipment and storage medium
CN110826488B (en) Image identification method and device for electronic document and storage equipment
CN113590421A (en) Log template extraction method, program product, and storage medium
CN111984812B (en) Feature extraction model generation method, image retrieval method, device and equipment
CN112989869B (en) Optimization method, device, equipment and storage medium of face quality detection model
CN114359859A (en) Method and device for processing target object with shielding and storage medium
CN118035939B (en) Confidence coefficient acquisition method for perception target and automatic driving planning control method
CN116912634B (en) Training method and device for target tracking model
CN114022501B (en) Automatic detection method and system for arrow corner points, electronic equipment and storage medium
CN117831003A (en) Universal obstacle detection method, readable storage medium and intelligent device
EP4092565A1 (en) Device and method to speed up annotation quality check process
CN118097610A (en) Static obstacle detection method, storage medium and intelligent device
CN116863143A (en) Domain adaptive semantic segmentation method and device and electronic equipment
CN116503695A (en) Training method of target detection model, target detection method and device
CN117671660A (en) License plate training sample generation method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant