CN111931727A - Point cloud data labeling method and device, electronic equipment and storage medium - Google Patents

Point cloud data labeling method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111931727A
CN111931727A (Application CN202011010562.6A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
frame
detection
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011010562.6A
Other languages
Chinese (zh)
Inventor
杨国润
梁曦文
王哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202011010562.6A priority Critical patent/CN111931727A/en
Publication of CN111931727A publication Critical patent/CN111931727A/en
Priority to PCT/CN2021/090660 priority patent/WO2022062397A1/en
Priority to JP2021564869A priority patent/JP2022552753A/en
Priority to KR1020217042834A priority patent/KR20220042313A/en
Priority to US17/529,749 priority patent/US20220122260A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/421Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation by analysing segments intersecting the pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a point cloud data labeling method and device. The method includes: first, performing object recognition on point cloud data to be identified to obtain detection frames of objects in the point cloud data to be identified; then, determining point cloud data to be labeled according to the detection frames of the objects identified in the point cloud data to be identified; then, acquiring manual labeling frames of objects in the point cloud data to be labeled; and finally, determining labeling frames of the objects in the point cloud data to be identified according to the detection frames and the manual labeling frames. By merging the detection frames of objects obtained from automatic labeling of the point cloud data with the manual labeling frames obtained by manually labeling the point cloud data remaining after automatic labeling, the labeling frames of the objects can be determined accurately, the labeling speed is increased, and the labeling cost is reduced.

Description

Point cloud data labeling method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of image processing, in particular to a point cloud data labeling method and device, electronic equipment and a storage medium.
Background
LiDAR-based 3D target detection is a core technology in the field of autonomous driving. In the target detection process, a laser radar is first used to collect points on the outer surfaces of objects in the environment, yielding point cloud data; the point cloud data is then labeled manually to obtain labeling frames of the target objects.
Manually labeling point cloud data incurs high labor cost, and neither the quality nor the quantity of the point cloud labels can be guaranteed, which reduces the detection precision of 3D target detection.
Disclosure of Invention
The embodiment of the application at least provides a point cloud data labeling method and device, which improve the quality and quantity of point cloud labeling so as to improve the detection precision of 3D target detection.
In a first aspect, the present application provides a point cloud data labeling method, including:
carrying out object identification on point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified;
determining point cloud data to be marked according to a detection frame of an object identified in the point cloud data to be identified;
acquiring an artificial labeling frame of an object in point cloud data to be labeled;
and determining a marking frame of an object in the point cloud data to be identified according to the detection frame and the manual marking frame.
In this aspect, the detection frames of objects obtained by automatically labeling the point cloud data are combined with the manual labeling frames obtained by manually labeling the point cloud data remaining after automatic labeling, so that the labeling frames of the objects can be determined accurately, the labeling speed is increased, and the labeling cost is reduced.
In a possible implementation manner, the point cloud data labeling method further includes:
carrying out object identification on point cloud data to be identified to obtain the confidence of a detection frame of the identified object;
determining point cloud data to be marked according to a detection frame of an object identified in the point cloud data to be identified, wherein the determination comprises the following steps:
according to the confidence degree of the detection frame of the identified object, eliminating the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the rest detection frames;
and taking the point cloud data outside the rest detection frames in the point cloud data to be identified as the point cloud data to be marked.
According to the embodiment, the automatic point cloud data labeling result with low recognition accuracy is removed by using the preset confidence threshold, so that the point cloud data labeling quality is improved.
In a possible implementation manner, determining a labeling box of an object in the point cloud data to be identified according to the detection box and the manual labeling box includes:
and determining a marking frame of an object in the point cloud data to be identified according to the remaining detection frames and the manual marking frame.
According to the embodiment, the labeling frame of the object in the point cloud data to be recognized is determined based on the detection frame with higher confidence coefficient, so that the quality of point cloud labeling is improved.
In one possible embodiment, the confidence thresholds of the detection boxes of different classes of objects are different;
according to the confidence degree of the detection frame of the identified object, removing the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the remaining detection frames, and the method comprises the following steps:
and for each detection frame, when the confidence of the detection frame is greater than or equal to the confidence threshold of the detection frame corresponding to the class of the object in the detection frame, determining the detection frame as the rest detection frames.
In a possible implementation manner, the point cloud data labeling method further includes:
and for each detection frame, when the confidence coefficient of the detection frame is smaller than the confidence coefficient threshold value of the detection frame corresponding to the class of the object in the detection frame, rejecting the detection frame.
According to the embodiment, the detection frames with the corresponding object types and low confidence degrees are removed based on the confidence degree threshold value matched with the object types, so that the labeling quality of automatically labeling the point cloud data is improved.
In a possible implementation manner, the determining a labeling box of an object in the point cloud data to be identified according to the remaining detection boxes and the manual labeling box includes:
regarding each remaining detection frame, taking the detection frame and the artificial labeling frame at least partially overlapped with the detection frame as a labeling frame pair under the condition that the artificial labeling frame at least partially overlapped with the detection frame exists;
determining the overlapping degree of the remaining detection frames and the manual marking frames in each marking frame pair, and removing the manual marking frames when the overlapping degree is greater than a preset threshold value;
and taking the rest detection frames and the rest manual marking frames as marking frames of the objects in the point cloud data to be identified.
According to this embodiment, when a detection frame of an object obtained through automatic detection overlaps a manual labeling frame obtained through manual labeling, the manual labeling frame is removed based on the degree of overlap between the two and a preset threshold, which improves the labeling precision of the object.
In a possible implementation, determining the overlapping degree of the remaining detection boxes and the manual labeling boxes in a labeling box pair includes:
determining the intersection between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
determining a union set between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
and determining the overlapping degree between the remaining detection frames and the manual labeling frames in the labeling frame pair based on the union set and the intersection set.
According to the embodiment, the overlapping degree of the detection frame and the manual marking frame of the object can be accurately determined by using the intersection and union of the point cloud data framed by the detection frame of the object and the point cloud data framed by the manual marking frame.
In a possible implementation manner, the performing object recognition on the point cloud data to be recognized to obtain a detection frame of an object in the point cloud data to be recognized includes:
and carrying out object recognition on the point cloud data to be recognized by utilizing the trained neural network, and outputting a detection frame of the recognized object by the neural network.
In a possible implementation manner, the point cloud data labeling method further includes:
the neural network also outputs the confidence of each detection box.
According to this embodiment, object recognition is carried out automatically and the confidence of each detection frame is determined based on the trained neural network, which ensures the precision and speed of object recognition and avoids the instability caused by manual labeling.
In a second aspect, the present application provides a point cloud data labeling apparatus, including:
the object identification module is used for carrying out object identification on the point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified;
the point cloud processing module is used for determining point cloud data to be marked according to a detection frame of an identified object in the point cloud data to be identified;
the marking frame acquisition module is used for acquiring an artificial marking frame of an object in the point cloud data to be marked;
and the marking frame determining module is used for determining the marking frame of the object in the point cloud data to be identified according to the detection frame and the artificial marking frame.
In a third aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a memory and a bus, wherein the memory stores machine readable instructions executable by the processor, when the electronic device runs, the processor and the memory are communicated through the bus, and the machine readable instructions are executed by the processor to execute the steps of the point cloud data labeling method.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the point cloud data annotation method are performed.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments and are incorporated in and constitute a part of the specification will be briefly described below, and the drawings illustrate the embodiments consistent with the present application and together with the description serve to explain the technical solutions of the present application. It is appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope, for those skilled in the art will be able to derive additional related drawings therefrom without the benefit of the inventive faculty.
FIG. 1 is a flowchart illustrating a point cloud data annotation method provided in an embodiment of the present application;
FIG. 2A is a schematic diagram illustrating point cloud data after an object labeling box is screened in the embodiment of the application;
FIG. 2B is a schematic diagram illustrating point cloud data to be annotated in the embodiment of the present application;
FIG. 2C is a schematic diagram illustrating the remaining object labeling boxes obtained by filtering in the embodiment of the present application;
FIG. 2D is a schematic diagram illustrating point cloud data after manual annotation in the embodiment of the present application;
FIG. 2E is a schematic diagram illustrating point cloud data after an artificial mark box and an object mark box are combined in the embodiment of the application;
FIG. 3 is a schematic structural diagram illustrating a point cloud data annotation device according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Aiming at the defects that manually labeling point cloud data is costly and cannot guarantee quality or speed, the application provides a point cloud data labeling method in which the detection frames of objects obtained by automatically labeling the point cloud data are merged with the manual labeling frames obtained by manually labeling the point cloud data remaining after automatic labeling, so that the labeling frames of the objects can be determined accurately, the labeling speed is increased, and the labeling cost is reduced.
The following describes a method, an apparatus, an electronic device, and a storage medium for point cloud data annotation disclosed in the present application with specific embodiments.
As shown in fig. 1, an embodiment of the present application discloses a point cloud data labeling method, which can be applied to a server or a client and is used for performing object recognition on collected point cloud data to be identified and determining the labeling frames of objects. Specifically, the point cloud data labeling method may include the following steps:
s110, carrying out object identification on the point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified.
Here, the trained neural network may be used to perform object recognition on the point cloud data to be recognized, so as to obtain a detection frame of at least one object.
In addition, while the neural network recognizes objects and obtains their detection frames, the confidence corresponding to each object's detection frame can also be obtained. The class of the object corresponding to a detection frame may be, for example, a car, a pedestrian, a cyclist, or a truck. The confidence of detection frames may differ across object classes.
The neural network can be obtained by training point cloud data samples labeled manually. The point cloud data sample comprises sample point cloud data and a detection frame obtained by manually marking the sample point cloud data.
The point cloud data to be identified may be a set of point cloud data obtained by detecting a preset area with a laser radar.
The trained neural network is used to recognize objects automatically and determine the confidence of each detection frame, which ensures the accuracy and speed of object recognition and avoids the instability caused by manual labeling.
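For illustration only (the application does not prescribe any particular data structure), the output of this recognition step can be thought of as a list of detection frames, each carrying a 3D box, an object class and a confidence; the field names and values in the following sketch are assumptions.

```python
import numpy as np

# Illustrative layout (assumed, not defined by this application) for the output of
# step S110: each detection frame carries a 3D box, an object class and a confidence
# produced by the trained neural network.
detections = [
    {"box": np.array([12.3, -4.1, 0.8, 4.2, 1.8, 1.6, 0.1]),  # x, y, z, length, width, height, yaw
     "label": "car", "score": 0.92},
    {"box": np.array([7.0, 2.5, 0.9, 0.6, 0.6, 1.7, 0.0]),
     "label": "pedestrian", "score": 0.55},
]
```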
S120, determining the point cloud data to be marked according to the detection frame of the object identified in the point cloud data to be identified.
And the neural network identifies the object of the point cloud data to be identified to determine the detection frame, and generates the confidence coefficient of each detection frame. Here, the point cloud data to be labeled may be specifically determined by using the following sub-steps:
according to the confidence degree of the detection frame of the identified object, eliminating the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the rest detection frames; and taking the point cloud data outside the rest detection frames in the point cloud data to be identified as the point cloud data to be marked.
And the automatic point cloud data labeling result with lower recognition accuracy is removed by using a preset confidence threshold, so that the point cloud data labeling quality is improved.
Because the neural network has different detection precision for objects of different classes, removing the detection frames of all classes with the same confidence threshold would reduce the precision of the remaining detection frames. Therefore, different confidence thresholds can be set in advance for the detection frames of different classes of objects, according to the detection precision of the neural network for each class.
For example, a confidence threshold of 0.81 is set for a detection frame of a car of the type of the object, a confidence threshold of 0.70 is set for a detection frame of a pedestrian of the type of the object, a confidence threshold of 0.72 is set for a detection frame of a cyclist of the type of the object, and a confidence threshold of 0.83 is set for a detection frame of a passenger car of the type of the object.
The confidence threshold is set based on the object identification precision of the neural network, so that inaccurate detection frames can be effectively removed, the precision of the rest detection frames is ensured, and the precision of the labeling frames of the objects determined based on the rest detection frames can be improved.
After setting different confidence threshold values, according to the confidence of the detected frame of the identified object, the detected frame with the confidence smaller than the confidence threshold value can be removed by the following steps to obtain the remaining detected frames:
and for each detection frame, when the confidence of the detection frame is greater than or equal to the confidence threshold of the detection frame corresponding to the class of the object in the detection frame, determining the detection frame as the rest detection frames. And for each detection frame, when the confidence coefficient of the detection frame is smaller than the confidence coefficient threshold value of the detection frame corresponding to the class of the object in the detection frame, rejecting the detection frame.
Based on the confidence threshold matched with the object type, the detection frame with the corresponding object type and low confidence is removed, and the labeling quality of automatically labeling the point cloud data is improved.
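A minimal sketch of this per-class filtering step follows; the class names, numeric thresholds and field names are illustrative assumptions that echo the examples above, not values fixed by the application.

```python
# Minimal sketch of per-class confidence filtering (assumed field names "label" and "score").
# The thresholds echo the examples above and are illustrative only.
CONFIDENCE_THRESHOLDS = {"car": 0.81, "pedestrian": 0.70, "cyclist": 0.72, "truck": 0.83}

def filter_detections(detections, thresholds=CONFIDENCE_THRESHOLDS):
    """Keep a detection frame only if its confidence reaches the threshold set for
    its object class; reject it otherwise. Returns the remaining detection frames."""
    return [det for det in detections if det["score"] >= thresholds[det["label"]]]
```

With the illustrative detections sketched earlier, `filter_detections` would keep the car (0.92 ≥ 0.81) and reject the pedestrian (0.55 < 0.70).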
The detection frame comprises point cloud data of corresponding objects acquired by the laser radar.
S130, acquiring a manual marking frame of the object in the point cloud data to be marked.
Because the detection frames of automatically labeled objects may miss some objects that should have been labeled, the point cloud data other than the point cloud data framed by the detection frames of objects needs to be labeled manually, and the manual labeling frames are obtained through manual labeling. Together, the detection frames obtained by automatic detection and the manual labeling frames obtained by manual labeling can represent the objects in the point cloud data set comprehensively and accurately.
Here, the manual labeling box may be obtained specifically by the following steps:
the point cloud data to be labeled is sent to a manual labeling terminal, so that an annotator can manually label the point cloud data to be labeled through the manual labeling terminal to obtain manual labeling frames; the manual labeling terminal sends the manual labeling frames to the server or the client, and the server or the client receives them.
The remaining point cloud data except the point cloud data framed by the detection frame of the object obtained by automatic labeling are sent to the manual labeling end to obtain the manual labeling frame of the remaining point cloud data, so that the point cloud data amount of manual labeling is reduced, the cost is reduced, the point cloud data labeling quality is improved, and the point cloud data labeling speed is improved.
The point cloud data framed by an object's detection frame includes the points located inside the detection frame and the points located on its surfaces.
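The following sketch illustrates, under simplifying assumptions, how the point cloud data to be labeled could be extracted: each detection frame is treated as an axis-aligned box given by its minimum and maximum corners, whereas a practical implementation would typically use oriented 3D boxes, and all function names are assumptions.

```python
import numpy as np

def points_in_box(points, box_min, box_max):
    """Boolean mask of the points lying inside the box or on its surfaces.
    Simplified to an axis-aligned box; real detection frames are usually oriented."""
    return np.all((points >= box_min) & (points <= box_max), axis=1)

def point_cloud_to_label(points, remaining_boxes):
    """Return the points outside every remaining detection frame, i.e. the
    point cloud data to be sent to the manual labeling terminal."""
    framed = np.zeros(len(points), dtype=bool)
    for box_min, box_max in remaining_boxes:
        framed |= points_in_box(points, box_min, box_max)
    return points[~framed]
```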
The manual marking frame comprises point cloud data of corresponding objects acquired by the laser radar.
S140, determining a marking frame of the object in the point cloud data to be identified according to the detection frame and the manual marking frame.
Here, the labeling frame of the object in the point cloud data to be identified may be specifically determined according to the remaining detection frames and the manual labeling frame.
And determining a labeling frame of the object in the point cloud data to be identified based on the detection frame with higher confidence coefficient, so that the quality of point cloud labeling is improved.
Here, the remaining detection frames of the object may be directly merged with the manual labeling frame to obtain the labeling frame of the object.
Alternatively, manual labeling frames that overlap the detection frames of objects too much can be removed first, and the remaining detection frames and the remaining manual labeling frames combined as the labeling frames of the objects in the point cloud data to be identified, through the following steps:
First, for each remaining detection frame of an object, it is checked whether a manual labeling frame partially or completely overlapping that detection frame exists. If a manual labeling frame at least partially overlapping the detection frame exists, the detection frame and that manual labeling frame are taken as a labeling frame pair. Then, the degree of overlap between the remaining detection frame and the manual labeling frame in each labeling frame pair is determined, and the manual labeling frame is rejected when the degree of overlap is greater than a preset threshold.
When a detection frame of an object obtained through automatic detection overlaps a manual labeling frame obtained through manual labeling, removing the manual labeling frame based on the degree of overlap between the two improves the labeling precision of the object.
In a specific implementation, the overlapping degree may be determined by the following steps: firstly, determining the intersection between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame; determining a union set between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame; and then, based on the union and the intersection, determining the overlapping degree between the rest detection frames and the manual labeling frames in the labeling frame pair. Specifically, the intersection may be divided by the union, and a quotient obtained by dividing the intersection by the union may be calculated as the overlap degree.
The overlapping degree of the detection frame and the manual marking frame of the object can be accurately determined by using the intersection and union of the point cloud data framed by the detection frame of the object and the point cloud data framed by the manual marking frame.
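A minimal sketch of this overlap computation, assuming each frame has already been reduced to the indices of the points it frames (the function name and this representation are assumptions):

```python
def point_set_overlap(detection_point_ids, manual_point_ids):
    """Degree of overlap between a remaining detection frame and a manual labeling
    frame, computed as the intersection over the union of the point sets framed by
    the two frames."""
    det, man = set(detection_point_ids), set(manual_point_ids)
    union = det | man
    return len(det & man) / len(union) if union else 0.0
```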
In summary, the point cloud data labeling method provided by the present application may specifically include the following steps:
firstly, carrying out object recognition on point cloud data to be recognized by utilizing a pre-trained neural network to obtain at least one object detection frame and a confidence coefficient corresponding to each detection frame.
The point cloud data to be identified may include point cloud data acquired by a data frame of the laser radar.
And step two, determining the confidence threshold of the detection frames corresponding to each class according to the recognition precision of the neural network for objects of that class. Using these confidence thresholds, the detection frames whose confidence is smaller than the corresponding threshold are eliminated from the detection frames of objects obtained in the previous step; the recognition precision of the remaining detection frames is higher, and as shown in fig. 2A, the remaining detection frames 21 are more accurate.
And step three, sending point cloud data except the point cloud data framed by the rest detection frames in the point cloud data to be identified to a manual labeling end as the point cloud data to be labeled so as to perform manual labeling.
Fig. 2B shows the point cloud data to be labeled, and fig. 2C shows the remaining detection frames. The point cloud data to be identified can be obtained by merging the point cloud data in fig. 2B and fig. 2C.
In specific implementation, the image including only the point cloud data to be labeled can be sent to the manual labeling end, and the image labeled with the remaining detection frames can also be sent to the manual labeling end.
Step four, as shown in fig. 2D, the worker performs manual labeling at the manual labeling end to obtain the manual labeling frame 22.
And step five, splicing the remaining detection frames of objects with the manual labeling frames to obtain the complete labeling data, namely the labeling frames of the objects. In this process, because the point cloud filtering may not be perfectly clean, some manual labeling frames may overlap the remaining detection frames; therefore, the degree of overlap needs to be calculated for each overlapping pair of manual labeling frame and detection frame. If the degree of overlap between a manual labeling frame and a detection frame is greater than a preset threshold, that manual labeling frame is removed. After this step, the cleaned manual labeling frames are obtained, and they are then merged with the remaining detection frames to obtain the complete labeling data, i.e., the labeling frames of the objects, as shown by markers 21 and 22 in fig. 2E.
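Under the same assumptions, step five could be sketched as follows; `point_set_overlap` is the helper sketched earlier, each frame is assumed to carry the indices of the points it frames, and the threshold of 0.5 is an illustrative choice rather than a value fixed by the application.

```python
def merge_labels(remaining_detections, manual_boxes, overlap_threshold=0.5):
    """Remove manual labeling frames whose overlap with any remaining detection frame
    exceeds the threshold, then merge the cleaned manual labeling frames with the
    remaining detection frames to form the final labeling frames of the objects."""
    cleaned_manual = []
    for manual in manual_boxes:
        overlaps_detection = any(
            point_set_overlap(det["point_ids"], manual["point_ids"]) > overlap_threshold
            for det in remaining_detections
        )
        if not overlaps_detection:
            cleaned_manual.append(manual)
    return remaining_detections + cleaned_manual
```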
According to the point cloud data labeling method, the object labeling frame is determined by combining the detection frame of the object generated by automatic detection and the manual labeling frame obtained by manual labeling, the labeling cost can be reduced, the object labeling precision and speed can be further improved, and the point cloud labeling result with high quality can be obtained at low cost.
Corresponding to the point cloud data labeling method, the application also discloses a point cloud data labeling device, which is applied to a server or a client, and each module in the device can realize each step in the point cloud data labeling method of each embodiment and can obtain the same beneficial effect, so that the description of the same parts is omitted. Specifically, as shown in fig. 3, the point cloud data labeling apparatus includes:
the object identification module 310 is configured to perform object identification on point cloud data to be identified, so as to obtain a detection frame of an object in the point cloud data to be identified.
The point cloud processing module 320 is configured to determine point cloud data to be labeled according to a detection frame of an object identified in the point cloud data to be identified.
And a labeling frame acquiring module 330, configured to acquire a manual labeling frame of an object in the point cloud data to be labeled.
And a labeling frame determining module 340, configured to determine a labeling frame of an object in the point cloud data to be identified according to the detection frame and the artificial labeling frame.
In some embodiments, the object recognition module 310 is further configured to perform object recognition on the point cloud data to be recognized, so as to obtain a confidence of a detection frame of the recognized object;
the point cloud processing module 320 is configured to, when determining the point cloud data to be labeled according to the detection frame of the object identified in the point cloud data to be identified:
according to the confidence degree of the detection frame of the identified object, eliminating the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the rest detection frames;
and taking the point cloud data outside the rest detection frames in the point cloud data to be identified as the point cloud data to be marked.
In some embodiments, the annotation box determination module 340, when determining the annotation box of the object in the point cloud data to be identified according to the detection box and the manual annotation box, is configured to:
and determining a marking frame of an object in the point cloud data to be identified according to the remaining detection frames and the manual marking frame.
In some embodiments, the confidence thresholds for the detection boxes of different classes of objects are different;
the point cloud processing module 320 is configured to, when removing the detection frame with the confidence coefficient smaller than the confidence coefficient threshold value according to the confidence coefficient of the detection frame of the identified object to obtain the remaining detection frames:
and for each detection frame, when the confidence of the detection frame is greater than or equal to the confidence threshold of the detection frame corresponding to the class of the object in the detection frame, determining the detection frame as the rest detection frames.
In some embodiments, point cloud processing module 320 is further configured to: and for each detection frame, when the confidence coefficient of the detection frame is smaller than the confidence coefficient threshold value of the detection frame corresponding to the class of the object in the detection frame, rejecting the detection frame.
In some embodiments, the annotation box determination module 340, when determining the annotation box of the object in the point cloud data to be identified according to the remaining detection boxes and the manual annotation box, is configured to:
regarding each remaining detection frame, taking the detection frame and the artificial labeling frame at least partially overlapped with the detection frame as a labeling frame pair under the condition that the artificial labeling frame at least partially overlapped with the detection frame exists;
determining the overlapping degree of the remaining detection frames and the manual marking frames in each marking frame pair, and removing the manual marking frames when the overlapping degree is greater than a preset threshold value;
and taking the rest detection frames and the rest manual marking frames as marking frames of the objects in the point cloud data to be identified.
In some embodiments, the labeling box determining module 340, when determining the degree of overlap between the remaining detection boxes in a pair of labeling boxes and the manual labeling box, is configured to:
determining the intersection between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
determining a union set between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
and determining the overlapping degree between the remaining detection frames and the manual labeling frames in the labeling frame pair based on the union set and the intersection set.
In some embodiments, the object identification module 310, when performing object identification on point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified, is configured to:
and carrying out object recognition on the point cloud data to be recognized by utilizing the trained neural network, and outputting a detection frame of the recognized object by the neural network.
The neural network also outputs the confidence of each detection box.
Corresponding to the above point cloud data labeling method, an embodiment of the present application further provides an electronic device 400, and as shown in fig. 4, a schematic structural diagram of the electronic device 400 provided in the embodiment of the present application includes:
a processor 41, a memory 42, and a bus 43. The memory 42 is used for storing execution instructions and includes a memory 421 and an external memory 422. The memory 421, also referred to as an internal memory, is used for temporarily storing the operation data in the processor 41 and the data exchanged with the external memory 422 such as a hard disk; the processor 41 exchanges data with the external memory 422 through the memory 421. When the electronic device 400 operates, the processor 41 communicates with the memory 42 through the bus 43, so that the processor 41 executes the following instructions: carrying out object identification on point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified; determining point cloud data to be marked according to a detection frame of an object identified in the point cloud data to be identified; acquiring an artificial labeling frame of an object in point cloud data to be labeled; and determining a marking frame of an object in the point cloud data to be identified according to the detection frame and the manual marking frame.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the point cloud data annotation method in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the point cloud data labeling method provided in the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the point cloud data labeling method described in the above method embodiments, which may be referred to in the above method embodiments and are not described here again.
The embodiments of the present application also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A point cloud data labeling method is characterized by comprising the following steps:
carrying out object identification on point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified;
determining point cloud data to be marked according to a detection frame of an object identified in the point cloud data to be identified;
acquiring an artificial labeling frame of an object in point cloud data to be labeled;
and determining a marking frame of an object in the point cloud data to be identified according to the detection frame and the manual marking frame.
2. The method of claim 1, further comprising:
carrying out object identification on point cloud data to be identified to obtain the confidence of a detection frame of the identified object;
determining point cloud data to be marked according to a detection frame of an object identified in the point cloud data to be identified, wherein the determination comprises the following steps:
according to the confidence degree of the detection frame of the identified object, eliminating the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the rest detection frames;
and taking the point cloud data outside the rest detection frames in the point cloud data to be identified as the point cloud data to be marked.
3. The method of claim 2, wherein determining a label box of the object in the point cloud data to be identified according to the detection box and the manual label box comprises:
and determining a marking frame of an object in the point cloud data to be identified according to the remaining detection frames and the manual marking frame.
4. The method according to claim 2 or 3, characterized in that the confidence thresholds of the detection boxes of different classes of objects are different;
according to the confidence degree of the detection frame of the identified object, removing the detection frame with the confidence degree smaller than the confidence degree threshold value to obtain the remaining detection frames, and the method comprises the following steps:
and for each detection frame, when the confidence of the detection frame is greater than or equal to the confidence threshold of the detection frame corresponding to the class of the object in the detection frame, determining the detection frame as the rest detection frames.
5. The method of claim 4, further comprising:
and for each detection frame, when the confidence coefficient of the detection frame is smaller than the confidence coefficient threshold value of the detection frame corresponding to the class of the object in the detection frame, rejecting the detection frame.
6. The method of claim 3, wherein determining a labeling box for an object in the point cloud data to be identified according to the remaining detection boxes and the manual labeling box comprises:
regarding each remaining detection frame, taking the detection frame and the artificial labeling frame at least partially overlapped with the detection frame as a labeling frame pair under the condition that the artificial labeling frame at least partially overlapped with the detection frame exists;
determining the overlapping degree of the remaining detection frames and the manual marking frames in each marking frame pair, and removing the manual marking frames when the overlapping degree is greater than a preset threshold value;
and taking the rest detection frames and the rest manual marking frames as marking frames of the objects in the point cloud data to be identified.
7. The method of claim 6, wherein determining the degree of overlap between the remaining detection boxes in a labeling box pair and the manual labeling box comprises:
determining the intersection between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
determining a union set between the point cloud data framed by the rest detection frames in the marking frame pair and the point cloud data framed by the manual marking frame;
and determining the overlapping degree between the remaining detection frames and the manual labeling frames in the labeling frame pair based on the union set and the intersection set.
8. The method according to any one of claims 1 to 3, wherein the performing object recognition on the point cloud data to be recognized to obtain a detection frame of an object in the point cloud data to be recognized comprises:
and carrying out object recognition on the point cloud data to be recognized by utilizing the trained neural network, and outputting a detection frame of the recognized object by the neural network.
9. The method of claim 8, further comprising:
the neural network also outputs the confidence of each detection box.
10. A point cloud data labeling device is characterized by comprising:
the object identification module is used for carrying out object identification on the point cloud data to be identified to obtain a detection frame of an object in the point cloud data to be identified;
the point cloud processing module is used for determining point cloud data to be marked according to a detection frame of an identified object in the point cloud data to be identified;
the marking frame acquisition module is used for acquiring an artificial marking frame of an object in the point cloud data to be marked;
and the marking frame determining module is used for determining the marking frame of the object in the point cloud data to be identified according to the detection frame and the artificial marking frame.
11. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the point cloud data annotation method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the point cloud data annotation method according to one of claims 1 to 9.
CN202011010562.6A 2020-09-23 2020-09-23 Point cloud data labeling method and device, electronic equipment and storage medium Pending CN111931727A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011010562.6A CN111931727A (en) 2020-09-23 2020-09-23 Point cloud data labeling method and device, electronic equipment and storage medium
PCT/CN2021/090660 WO2022062397A1 (en) 2020-09-23 2021-04-28 Point cloud data annotation method and device, electronic equipment, and computer-readable storage medium
JP2021564869A JP2022552753A (en) 2020-09-23 2021-04-28 Point cloud data labeling method, device, electronic device and computer readable storage medium
KR1020217042834A KR20220042313A (en) 2020-09-23 2021-04-28 Point cloud data labeling method, apparatus, electronic device and computer readable storage medium
US17/529,749 US20220122260A1 (en) 2020-09-23 2021-11-18 Method and apparatus for labeling point cloud data, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011010562.6A CN111931727A (en) 2020-09-23 2020-09-23 Point cloud data labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111931727A true CN111931727A (en) 2020-11-13

Family

ID=73335132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011010562.6A Pending CN111931727A (en) 2020-09-23 2020-09-23 Point cloud data labeling method and device, electronic equipment and storage medium

Country Status (5)

Country Link
US (1) US20220122260A1 (en)
JP (1) JP2022552753A (en)
KR (1) KR20220042313A (en)
CN (1) CN111931727A (en)
WO (1) WO2022062397A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230196731A1 (en) * 2021-12-20 2023-06-22 Gm Cruise Holdings Llc System and method for two-stage object detection and classification
CN115375987B (en) * 2022-08-05 2023-09-05 北京百度网讯科技有限公司 Data labeling method and device, electronic equipment and storage medium
CN116612474B (en) * 2023-07-20 2023-11-03 深圳思谋信息科技有限公司 Object detection method, device, computer equipment and computer readable storage medium
CN118587400A (en) * 2024-08-05 2024-09-03 中国交通信息科技集团有限公司杭州分公司 Labeling method, device, equipment and medium for P3D file

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7311310B2 (en) * 2018-10-18 2023-07-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Information processing device, information processing method and program
WO2021081808A1 (en) * 2019-10-30 2021-05-06 深圳市大疆创新科技有限公司 Artificial neural network-based object detection system and method
CN111931727A (en) * 2020-09-23 2020-11-13 深圳市商汤科技有限公司 Point cloud data labeling method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040076335A1 (en) * 2002-10-17 2004-04-22 Changick Kim Method and apparatus for low depth of field image segmentation
CN109635685A (en) * 2018-11-29 2019-04-16 北京市商汤科技开发有限公司 Target object 3D detection method, device, medium and equipment
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110782517A (en) * 2019-10-10 2020-02-11 北京地平线机器人技术研发有限公司 Point cloud marking method and device, storage medium and electronic equipment
CN111401228A (en) * 2020-03-13 2020-07-10 中科创达软件股份有限公司 Video target labeling method and device and electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022062397A1 (en) * 2020-09-23 2022-03-31 深圳市商汤科技有限公司 Point cloud data annotation method and device, electronic equipment, and computer-readable storage medium
WO2022133776A1 (en) * 2020-12-23 2022-06-30 深圳元戎启行科技有限公司 Point cloud annotation method and apparatus, computer device and storage medium
CN112801200A (en) * 2021-02-07 2021-05-14 文远鄂行(湖北)出行科技有限公司 Data packet screening method, device, equipment and storage medium
CN112801200B (en) * 2021-02-07 2024-02-20 文远鄂行(湖北)出行科技有限公司 Data packet screening method, device, equipment and storage medium
CN112990293A (en) * 2021-03-10 2021-06-18 深圳一清创新科技有限公司 Point cloud marking method and device and electronic equipment
CN112990293B (en) * 2021-03-10 2024-03-29 深圳一清创新科技有限公司 Point cloud labeling method and device and electronic equipment
CN114298982A (en) * 2021-12-14 2022-04-08 禾多科技(北京)有限公司 Image annotation method and device, computer equipment and storage medium
CN114549644A (en) * 2022-02-24 2022-05-27 北京百度网讯科技有限公司 Data labeling method and device, electronic equipment and storage medium
CN114723940A (en) * 2022-04-22 2022-07-08 广州文远知行科技有限公司 Method, device and storage medium for labeling picture data based on rules

Also Published As

Publication number Publication date
WO2022062397A1 (en) 2022-03-31
JP2022552753A (en) 2022-12-20
KR20220042313A (en) 2022-04-05
US20220122260A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
CN111931727A (en) Point cloud data labeling method and device, electronic equipment and storage medium
US10217007B2 (en) Detecting method and device of obstacles based on disparity map and automobile driving assistance system
CN111695486B (en) High-precision direction signboard target extraction method based on point cloud
CN111310835B (en) Target object detection method and device
CN109871829B (en) Detection model training method and device based on deep learning
JP6317725B2 (en) System and method for determining clutter in acquired images
CN112489126A (en) Vehicle key point information detection method, vehicle control method and device and vehicle
CN111768450A (en) Automatic detection method and device for line deviation of structured light camera based on speckle pattern
CN113505781B (en) Target detection method, target detection device, electronic equipment and readable storage medium
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN109559342B (en) Method and device for measuring animal body length
CN113792600A (en) Video frame extraction method and system based on deep learning
CN108363942B (en) Cutter identification method, device and equipment based on multi-feature fusion
US20230096532A1 (en) Machine learning system, learning data collection method and storage medium
CN111209847A (en) Violence sorting identification method and device
CN111652145A (en) Formula detection method and device, electronic equipment and storage medium
CN110991357A (en) Answer matching method and device and electronic equipment
CN112380968A (en) Detection method, detection device, electronic equipment and storage medium
CN112818865A (en) Vehicle-mounted field image identification method, identification model establishing method, device, electronic equipment and readable storage medium
CN113420579A (en) Method and device for training and positioning identification code position positioning model and electronic equipment
CN112819953A (en) Three-dimensional reconstruction method, network model training method and device and electronic equipment
CN111950644A (en) Model training sample selection method and device and computer equipment
CN113378871A (en) Data annotation method and device and computing equipment
CN115249261B (en) Image gravity direction acquisition method and device, electronic equipment and storage medium
EP4092565A1 (en) Device and method to speed up annotation quality check process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40038873

Country of ref document: HK