CN113592897A - Point cloud data labeling method and device

Publication number: CN113592897A (granted as CN113592897B)
Application number: CN202010369838.3A
Authority: CN (China)
Legal status: Granted; Active
Original language: Chinese (zh)
Inventors: 袁彬 (Yuan Bin), 侯聪聪 (Hou Congcong), 董维山 (Dong Weishan)
Assignee: Momenta Suzhou Technology Co Ltd
Prior art keywords: frame, target, point cloud data, marked

Classifications

    • G06T7/248: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T7/74: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T2207/30252: Subject of image; vehicle exterior, vicinity of vehicle


Abstract

The embodiment of the invention discloses a point cloud data labeling method and device. The method comprises: acquiring point cloud data frames to be labeled and the acquisition-device pose information corresponding to each frame; processing the frames to be labeled into frames to be displayed and displaying them; after a first selection operation is detected, determining first labeling frame information corresponding to a target to be labeled; determining, as the current superimposed frames, the number of frames indicated by the superimposed frame number information after or before the current display frame; displaying each current superimposed frame superimposed on the current display frame, based on the acquisition-device pose information of the current display frame and of each superimposed frame; after a second selection operation is detected, determining second labeling frame information corresponding to the target to be labeled in the target frame of the current superimposed frames; and determining, based on the first and second labeling frame information, third labeling frame information corresponding to the target in each current superimposed frame between the current display frame and the target frame. This achieves simple, convenient, and effective labeling of point cloud data, reduces the burden on annotators, and improves labeling efficiency.

Description

Point cloud data labeling method and device
Technical Field
The invention relates to the technical field of data annotation, in particular to a method and a device for annotating point cloud data.
Background
Training deep-learning-based 3D object detection algorithms relies on large amounts of labeled 3D lidar point cloud data, that is, point cloud data collected by a lidar. In the related art, labeling of 3D lidar point cloud data is generally done manually. Compared with labeling 2D image data, manually labeling 3D lidar point cloud data involves more complex steps, is slower, and costs more.
When continuous multi-frame 3D lidar point cloud data are labeled manually, annotators must label the point cloud data frame by frame; they must not only attend to each labeling frame but also refer to several frames of point cloud data in order to associate the labeling frames of different frames. In some scenes it must further be ensured that labeling frames associated across different frames correspond to the same physical object and that certain attribute information is consistent, which greatly limits the labeling efficiency of 3D lidar point cloud data.
Therefore, how to provide a convenient method for labeling 3D lidar point cloud data has become an urgent problem to be solved.
Disclosure of Invention
The invention provides a point cloud data labeling method and device, which are used for realizing simple, convenient and effective labeling of point cloud data, reducing the burden of labeling personnel and improving the labeling efficiency. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for labeling point cloud data, where the method includes:
acquiring point cloud data frames to be marked and acquisition equipment pose information corresponding to each point cloud data frame to be marked;
performing preset display processing on the point cloud data frame to be marked to obtain a point cloud data frame to be displayed, and displaying the point cloud data frame to be displayed;
after detecting a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed in the point cloud data frames to be displayed, determining first marking frame information corresponding to the target to be marked based on the first selection operation;
after acquiring the superimposed frame number information, determining, from the point cloud data frames to be marked, the number of point cloud data frames indicated by the superimposed frame number information after or before the current display frame, as the current superimposed frames;
based on the acquisition equipment pose information corresponding to the current display frame and the acquisition equipment pose information corresponding to each current superposition frame, superposing and displaying each current superposition frame on the current display frame to display the motion track information corresponding to the target to be marked;
after a second selection operation triggered by point cloud data corresponding to the target to be marked in the target frame of the current superposition frame is detected, determining second marking frame information corresponding to the target to be marked in the target frame of the current superposition frame based on the second selection operation;
and determining third labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information.
Optionally, the step of performing preset display processing on the point cloud data frame to be marked to obtain the point cloud data frame to be displayed includes:
and carrying out ground point cloud data deletion operation on the point cloud data frame to be marked to obtain the point cloud data frame to be displayed.
Optionally, the step of determining, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame includes:
determining intermediate labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information;
and for each current superimposed frame between the current display frame and the target frame of the current superimposed frames, adjusting the intermediate labeling frame information corresponding to the target to be labeled in that superimposed frame based on the distribution characteristics of the point cloud data corresponding to the target in that superimposed frame, to determine the third labeling frame information corresponding to the target to be labeled in that superimposed frame.
Optionally, after the step of determining, based on the first selected operation, first labeling frame information corresponding to the target to be labeled, the method further includes:
displaying a first labeling frame corresponding to the first labeling frame information in the current display frame;
after the step of determining, based on the second selected operation, second annotation frame information corresponding to the target to be annotated in the target frame of the current overlay frame, the method further includes:
and displaying a second labeling frame corresponding to the second labeling frame information in the target frame of the current superposition frame.
Optionally, after the step of determining, based on the first labeling frame information and the second labeling frame information, third labeling frame information corresponding to the target to be labeled in each current overlay frame between the current display frame and the target frame of the current overlay frame, the method further includes:
if a fine tuning instruction for a marking frame corresponding to information of a marking frame to be adjusted corresponding to a target to be marked in a frame to be adjusted is detected, determining an adjustment direction corresponding to the information of the marking frame corresponding to the target to be marked based on the fine tuning instruction, wherein the frame to be adjusted is: the current display frame or the current superposition frame of the point cloud data corresponding to the target to be marked is included, and the marking frame corresponding to the information of the marking frame to be adjusted comprises: a marking frame corresponding to the first marking frame information, a marking frame corresponding to the second marking frame information or a marking frame corresponding to the third marking frame information corresponding to the target to be marked;
determining target marking frame information corresponding to the target to be marked that meets a preset edge fitting condition, based on the adjustment direction and on the relative positional relationship between the point cloud data corresponding to the target in the frame to be adjusted and the marking frame corresponding to the marking frame information to be adjusted, where the preset edge fitting condition is: the specified edge of the point cloud data corresponding to the target in the adjustment direction coincides with the specified edge, in the adjustment direction, of the marking frame corresponding to the marking frame information to be adjusted.
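Read concretely, the edge fitting condition snaps the box face lying in the adjustment direction onto the outermost point of the target's point cloud in that direction. The following is a minimal sketch of that snap, simplified to an axis-aligned box in numpy (the patent's labeling frames may be oriented; all names are illustrative):

    import numpy as np

    def snap_box_edge(points, center, size, axis, direction):
        """Snap one face of an axis-aligned box onto the outermost target point.

        points: (N, 3) point cloud of the target in the frame to be adjusted
        center: (3,) box center; size: (3,) box extents (length, width, height)
        axis: 0, 1, or 2; direction: +1 or -1, the fine-tuning direction
        """
        # Outermost coordinate of the target's points along the adjustment direction.
        extreme = points[:, axis].max() if direction > 0 else points[:, axis].min()
        # The opposite face stays fixed; only the face in `direction` moves.
        fixed_face = center[axis] - direction * size[axis] / 2.0
        new_center, new_size = center.copy(), size.copy()
        new_size[axis] = abs(extreme - fixed_face)
        new_center[axis] = (extreme + fixed_face) / 2.0
        return new_center, new_size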
Optionally, before the displaying the point cloud data frame to be displayed, the method further includes:
obtaining pre-labeling data corresponding to the point cloud data frame to be labeled, wherein the pre-labeling data comprises: pre-labeling frame information corresponding to point cloud data corresponding to a pre-labeling target in each frame of point cloud data to be labeled;
the step of displaying the point cloud data frame to be displayed comprises the following steps:
and displaying the point cloud data frames to be displayed frame by frame, and correspondingly displaying the pre-marked frames corresponding to the pre-marked frame information corresponding to the point cloud data corresponding to the pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed.
Optionally, after detecting a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed in the point cloud data frame to be displayed, and before determining first marking frame information corresponding to the target to be marked based on the first selection operation, the method further includes:
after detecting a modification operation on a pre-labeling frame corresponding to point cloud data corresponding to a first pre-labeling target in a displayed point cloud data frame to be displayed, modifying the pre-labeling frame corresponding to the point cloud data corresponding to the first pre-labeling target based on the modification operation, wherein the modification operation comprises the following steps: at least one type of operation among delete, split, and merge.
Optionally, the step of displaying the point cloud data frame to be displayed includes:
and displaying the point cloud data frame to be displayed in a two-dimensional overlooking angle.
Optionally, after the step of determining, based on the first selected operation, first labeling frame information corresponding to the target to be labeled, the method further includes:
displaying the target to be marked and a first marking frame corresponding to the first marking frame information in a preset three-dimensional space display form; and/or
Displaying the target to be marked and a first marking frame corresponding to the first marking frame information at a preset two-dimensional non-overlooking angle; and/or
Obtaining a two-dimensional image corresponding to the current display frame; based on first labeling frame information corresponding to the target to be labeled, projecting a first labeling frame corresponding to the first labeling frame information to a two-dimensional image corresponding to a current display frame to obtain a projection frame corresponding to the first labeling frame information corresponding to the target to be labeled; displaying a two-dimensional image corresponding to a current display frame and a projection frame corresponding to first labeling frame information corresponding to the target to be labeled, wherein the two-dimensional image comprises the target to be labeled.
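The projection of the first labeling frame onto the 2D image can be sketched as projecting the box's eight corners through a pinhole camera model. In the sketch below, the intrinsic matrix K, the lidar-to-camera extrinsic, and the corner ordering are assumptions rather than the patent's specification:

    import numpy as np

    def box_corners(center, size, yaw):
        """Eight corners of an oriented 3D labeling box (z-up, yaw about z assumed)."""
        l, w, h = size
        x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2.0
        y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2.0
        z = np.array([-1, -1, -1, -1, 1, 1, 1, 1]) * h / 2.0
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about z
        return (R @ np.vstack([x, y, z])).T + center      # (8, 3) in lidar coordinates

    def project_box(corners_lidar, T_cam_from_lidar, K):
        """Project corners into pixel coordinates; T is a 4x4 extrinsic, K is 3x3."""
        homo = np.hstack([corners_lidar, np.ones((8, 1))])  # (8, 4) homogeneous
        cam = (T_cam_from_lidar @ homo.T)[:3]               # (3, 8) camera coordinates
        uv = (K @ cam) / cam[2]                             # perspective division by depth
        return uv[:2].T                                     # (8, 2) projection-frame corners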
In a second aspect, an embodiment of the present invention provides an apparatus for labeling point cloud data, where the apparatus includes:
the system comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is configured to obtain point cloud data frames to be marked and acquisition equipment pose information corresponding to each point cloud data frame to be marked;
the processing and displaying module is configured to perform preset display processing on the point cloud data frame to be marked to obtain a point cloud data frame to be displayed and display the point cloud data frame to be displayed;
the first determining module is configured to determine first marking frame information corresponding to a target to be marked based on a first selected operation after the first selected operation triggered by point cloud data corresponding to the target to be marked in a current display frame currently displayed in the point cloud data frames to be displayed is detected;
the second determining module is configured to, after the superimposed frame number information is obtained, determine, from the point cloud data frames to be marked, the number of point cloud data frames indicated by the superimposed frame number information after or before the current display frame, as the current superimposed frames;
the superposition display module is configured to superpose and display each current superposition frame on the current display frame based on the acquisition device pose information corresponding to the current display frame and the acquisition device pose information corresponding to each current superposition frame so as to display the motion trail information corresponding to the target to be marked;
the third determination module is configured to determine second marking frame information corresponding to the target to be marked in the target frame of the current superposition frame based on a second selection operation after the second selection operation triggered by the point cloud data corresponding to the target to be marked in the target frame of the current superposition frame is detected;
a fourth determining module, configured to determine, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and a target frame of the current overlay frame.
Optionally, the processing and displaying module is specifically configured to perform ground point cloud data deletion operation on the point cloud data frame to be marked, so as to obtain the point cloud data frame to be displayed.
Optionally, the fourth determining module is specifically configured to determine, based on the first annotation frame information and the second annotation frame information, middle annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame;
and for each current superimposed frame between the current display frame and the target frame of the current superimposed frames, adjust the intermediate labeling frame information corresponding to the target to be labeled in that superimposed frame based on the distribution characteristics of the point cloud data corresponding to the target in that superimposed frame, to determine the third labeling frame information corresponding to the target to be labeled in that superimposed frame.
Optionally, the apparatus further comprises:
a first display module, configured to display a first annotation frame corresponding to the first annotation frame information in the current display frame after the first annotation frame information corresponding to the target to be annotated is determined based on the first selected operation;
the device further comprises:
and the second display module is configured to display a second labeling frame corresponding to the second labeling frame information in the target frame of the current superposition frame after the second labeling frame information corresponding to the target to be labeled in the target frame of the current superposition frame is determined based on the second selected operation.
Optionally, the apparatus further comprises:
a fifth determining module, configured to, after determining, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and a target frame of the current overlay frame, if a fine-tuning instruction for a annotation frame corresponding to the annotation frame information to be adjusted corresponding to the target to be annotated in a frame to be adjusted is detected, determine, based on the fine-tuning instruction, an adjustment direction corresponding to the annotation frame information corresponding to the target to be annotated, where the frame to be adjusted is: the current display frame or the current superposition frame of the point cloud data corresponding to the target to be marked is included, and the marking frame corresponding to the information of the marking frame to be adjusted comprises: a marking frame corresponding to the first marking frame information, a marking frame corresponding to the second marking frame information or a marking frame corresponding to the third marking frame information corresponding to the target to be marked;
a sixth determining module, configured to determine, based on the adjustment direction and on the relative positional relationship between the point cloud data corresponding to the target to be marked in the frame to be adjusted and the marking frame corresponding to the marking frame information to be adjusted, target marking frame information corresponding to the target that meets a preset edge fitting condition, where the preset edge fitting condition is: the specified edge of the point cloud data corresponding to the target in the adjustment direction coincides with the specified edge, in the adjustment direction, of the marking frame corresponding to the marking frame information to be adjusted.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain pre-labeled data corresponding to the point cloud data frame to be labeled before the point cloud data frame to be displayed is displayed, where the pre-labeled data includes: pre-labeling frame information corresponding to point cloud data corresponding to a pre-labeling target in each frame of point cloud data to be labeled;
the processing and displaying module is specifically configured to display the point cloud data frames to be displayed frame by frame, and correspondingly display the pre-marked frames corresponding to the pre-marked frame information corresponding to the point cloud data corresponding to the pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed.
Optionally, the apparatus further comprises:
a modification module configured to, after a first selection operation triggered by point cloud data corresponding to a target to be marked in the currently displayed frame of the point cloud data frames to be displayed is detected, and before first marking frame information corresponding to the target to be marked is determined based on the first selection operation, modify, upon detecting a modification operation for a pre-labeling frame corresponding to point cloud data corresponding to a first pre-labeling target in a displayed point cloud data frame, that pre-labeling frame based on the modification operation, where the modification operation includes: at least one type of operation among delete, split, and merge.
Optionally, the processing and displaying module is specifically configured to display the to-be-displayed point cloud data frame in a two-dimensional top view angle.
Optionally, the apparatus further comprises: the third display module is configured to display the target to be marked and the first marking frame corresponding to the first marking frame information in a preset three-dimensional space display form after the first marking frame information corresponding to the target to be marked is determined based on the first selected operation; and/or
The device further comprises: the fourth display module is configured to display the target to be marked and the first marking frame corresponding to the first marking frame information in a preset two-dimensional non-overlooking angle; and/or
The device further comprises: a third obtaining module configured to obtain a two-dimensional image corresponding to the current display frame;
the projection module is configured to project a first labeling frame corresponding to the first labeling frame information to a two-dimensional image corresponding to a current display frame based on the first labeling frame information corresponding to the target to be labeled, so as to obtain a projection frame corresponding to the first labeling frame information corresponding to the target to be labeled;
and the fifth display module is configured to display a two-dimensional image corresponding to the current display frame and a projection frame corresponding to the first labeling frame information corresponding to the target to be labeled, wherein the two-dimensional image comprises the target to be labeled.
As can be seen from the above, the point cloud data labeling method and device provided by the embodiments of the invention acquire point cloud data frames to be labeled and the acquisition-device pose information corresponding to each frame; perform preset display processing on the frames to be labeled to obtain frames to be displayed, and display them; after detecting a first selection operation triggered by point cloud data corresponding to a target to be labeled in the currently displayed frame, determine first labeling frame information corresponding to the target based on that operation; after obtaining the superimposed frame number information, determine, from the frames to be labeled, the indicated number of frames after or before the current display frame as the current superimposed frames; superimpose each current superimposed frame on the current display frame, based on the acquisition-device pose information of the current display frame and of each superimposed frame, to display the motion track information corresponding to the target; after detecting a second selection operation triggered by point cloud data corresponding to the target in the target frame of the current superimposed frames, determine second labeling frame information corresponding to the target in that target frame based on the second selection operation; and determine, based on the first and second labeling frame information, third labeling frame information corresponding to the target in each current superimposed frame between the current display frame and the target frame.
By applying the embodiments of the invention, after the first labeling frame information corresponding to the target to be labeled is determined, the current superimposed frames (the indicated number of frames after the current display frame) are determined, and each is superimposed on the current display frame based on the acquisition-device pose information of the current display frame and of each superimposed frame, so as to display the motion track information corresponding to the target. This provides the user with an accurate reference for labeling the target and makes labeling more convenient. Moreover, when several similar targets are densely located, the user can still label accurate second labeling frame information based on the target's motion track information, avoiding the interference and labeling errors that densely located similar targets would otherwise cause. This improves the accuracy of the second labeling frame information and, in turn, of the third labeling frame information determined from the first and second labeling frame information; point cloud data are labeled simply and effectively, the burden on annotators is reduced, and labeling efficiency is improved. Of course, not all of the advantages described above need to be achieved at the same time by any one product or method embodying the invention.
The innovation points of the embodiment of the invention comprise:
1. After the first labeling frame information corresponding to the target to be labeled is determined, the method determines the current superimposed frames (the indicated number of frames after the current display frame) and superimposes each of them on the current display frame, based on the acquisition-device pose information of the current display frame and of each superimposed frame, to display the motion track information corresponding to the target. This gives the user an accurate reference for labeling the target and makes labeling more convenient; and when several similar targets are densely located, the user can still label accurate second labeling frame information based on the target's motion track information, avoiding the interference and labeling errors that densely located similar targets would otherwise cause. This improves the accuracy of the second labeling frame information and of the third labeling frame information determined from the first and second labeling frame information; point cloud data are labeled simply and effectively, the burden on annotators is reduced, and labeling efficiency is improved.
2. Ground point cloud data are deleted from the point cloud data frames to be labeled to obtain the frames to be displayed, which are shown to the user for labeling and review. This removes, to a certain extent, the interference of ground points with the labeling process, reduces the labeling difficulty, and improves labeling efficiency.
3. Intermediate labeling frame information is determined from the first and second labeling frame information and is then automatically adjusted, for each current superimposed frame, based on the distribution characteristics of the point cloud data corresponding to the target in that frame, yielding more accurate third labeling frame information for the target in that superimposed frame and improving the user's labeling efficiency and accuracy.
4. A fine-tuning function for labeling frames is provided, so that when a labeled frame does not fit the point cloud data of its target, the user can fine-tune it until its position is accurate.
5. Loading and display of pre-labeled data corresponding to the point cloud data to be labeled are supported, along with a function for correcting erroneous pre-labeling frames. This reduces the user's labeling workload to a certain extent, reduces the data production cost, and improves, to a certain extent, the accuracy of the labeling frames.
6. The point cloud data corresponding to the target to be labeled and the labeling frame corresponding to its labeling frame information are displayed from multiple views, so that the user can check from every angle whether the labeled frame is accurate. The labeling frame is also projected onto the 2D image corresponding to the current display frame, helping the user verify its accuracy and providing a basis for accurate labeling.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flow chart of a point cloud data labeling method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of displaying a current display frame and a current overlay frame in an overlay manner;
FIG. 3A is a schematic diagram illustrating an exemplary back view of a target to be labeled and a label box corresponding to the label box to be adjusted;
FIG. 3B is an exemplary diagram of a result of fine tuning of the label frame corresponding to the label frame to be adjusted in FIG. 3A;
FIGS. 4A and 4B are display examples of a current display frame, a target to be labeled in the current display frame, and a label frame corresponding to label frame information;
Fig. 5 is a schematic structural diagram of a point cloud data labeling apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
The invention provides a point cloud data labeling method and device, which are used for realizing simple, convenient and effective labeling of point cloud data, reducing the burden of labeling personnel and improving the labeling efficiency. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a point cloud data labeling method according to an embodiment of the present invention. The method may comprise the steps of:
s101: and acquiring the point cloud data frames to be marked and the position and posture information of the acquisition equipment corresponding to each point cloud data frame to be marked.
The point cloud data labeling method provided by the embodiment of the invention can be applied to any type of electronic equipment with computing capacity, and the electronic equipment can be a server or a terminal. In an implementation manner, the electronic device may be pre-installed with a preset labeling tool, and the labeling function of the point cloud data labeling method provided by the embodiment of the invention is implemented by the preset labeling tool.
The electronic equipment can firstly obtain point cloud data frames to be marked as point cloud data frames to be marked and obtain the position and attitude information of the acquisition equipment corresponding to each point cloud data frame to be marked. The point cloud data frames to be marked comprise a plurality of continuous point cloud data frames, and the point cloud data frames to be marked are arranged according to the sequence of the acquisition time of the acquisition equipment corresponding to each point cloud data frame to be marked. Each frame of point cloud data to be marked comprises point cloud data acquired by point cloud data acquisition equipment at the same moment. The pose information of the acquisition equipment corresponding to each point cloud data frame to be marked can be as follows: and acquiring pose information of the point cloud data when the corresponding point cloud data frame to be marked is acquired. In one case, the point cloud data collection device may be a laser radar.
In one implementation, the point cloud data frame to be labeled may be: when a test vehicle runs in a target scene, a point cloud data acquisition device such as a laser radar is arranged to acquire a point cloud data frame aiming at the surrounding environment of the test vehicle in the running process.
In one case, the electronic device may display an interface in which the user selects the point cloud data frames to be labeled, with an icon corresponding to each point cloud data frame. The user may select the frames to be labeled with a mouse, a stylus, or a finger, and the electronic device, based on this operation, obtains the frames selected by the user as the point cloud data frames to be labeled, together with the acquisition-device pose information corresponding to each frame.
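For concreteness, the data acquired in S101 can be pictured as one record per frame: the points plus the acquisition-device pose at capture time. A minimal sketch (the field names and the 4x4 pose convention are assumptions, not the patent's):

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class PointCloudFrame:
        """One point cloud data frame to be labeled, with its acquisition pose."""
        timestamp: float    # acquisition time, used to order the frames
        points: np.ndarray  # (N, 3) or (N, 4) lidar points (x, y, z[, intensity])
        pose: np.ndarray    # 4x4 world-from-lidar transform at acquisition time

    def load_sequence(frames):
        """Arrange the frames by acquisition time, as the method requires."""
        return sorted(frames, key=lambda f: f.timestamp)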
S102: and performing preset display processing on the point cloud data frame to be annotated to obtain the point cloud data frame to be displayed, and displaying the point cloud data frame to be displayed.
After the electronic equipment obtains the point cloud data frames to be marked, preset display processing is carried out on each frame of point cloud data frames to be marked so as to delete point cloud data which can interfere with a subsequent marking process in each point cloud data frame to be marked, the point cloud data frames to be displayed are obtained, and the point cloud data frames to be displayed are displayed frame by frame. In one implementation, the electronic device first displays a designated frame in the point cloud data frames to be displayed, and then displays the point cloud data frames to be displayed frame by frame according to a next frame display instruction triggered by a user until the user triggers other marking operations, and executes a corresponding marking process. In another implementation, the electronic device first displays a designated frame in the point cloud data frame to be displayed, and displays the point cloud data frame to be displayed frame by frame based on a preset display time interval under the condition that the user does not trigger other marking operations until the user triggers other marking operations, and executes a corresponding marking process. The designated frame may be a first frame in the point cloud data frame to be displayed, or may be a last frame marked when a marking task is triggered for the latest time in the point cloud data frame to be displayed.
In one implementation, the preset display processing may be display processing at a preset viewing angle. For example, the point cloud data frames to be labeled may be processed for a top-down view, so as to obtain point cloud data frames to be displayed that are viewed from a top-down angle.
In another implementation manner, the S102 may include:
and carrying out ground point cloud data deletion operation on the point cloud data frame to be marked to obtain the point cloud data frame to be displayed.
In this implementation manner, for each point cloud data frame to be marked, the electronic device determines ground point cloud data corresponding to the ground from the point cloud data included in the point cloud data frame to be marked, and deletes the ground point cloud data from the point cloud data included in the point cloud data frame to be marked, so as to obtain the point cloud data frame to be displayed including the remaining point cloud data. The determination method for determining the ground point cloud data corresponding to the ground in the point cloud data included in the point cloud data frame to be annotated can refer to the determination method for the ground point cloud data in the related art, and is not described herein again.
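The patent defers to related-art methods for identifying ground points; RANSAC plane fitting is one common choice. A minimal numpy sketch, assuming the dominant plane in each frame is the ground:

    import numpy as np

    def remove_ground(points, n_iters=100, threshold=0.2, seed=0):
        """Delete ground points by RANSAC-fitting the dominant plane.

        points: (N, 3) lidar points; returns the remaining, non-ground points.
        """
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-8:
                continue  # degenerate (nearly collinear) sample
            normal /= norm
            dist = np.abs((points - sample[0]) @ normal)  # point-to-plane distance
            inliers = dist < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return points[~best_inliers]  # the frame to be displayed keeps non-ground points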
S103: after a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed in a point cloud data frame to be displayed is detected, first marking frame information corresponding to the target to be marked is determined based on the first selection operation.
In this step, while displaying the point cloud data frames to be displayed, the electronic device monitors them in real time, and the user observes whether the currently displayed frame contains point cloud data corresponding to an unlabeled target. For convenience of description, the frame of point cloud data to be displayed that the electronic device is currently displaying is referred to as the current display frame. When the user determines that the current display frame contains point cloud data corresponding to an unlabeled target, the user may trigger a first selection operation for that point cloud data; correspondingly, the point cloud data for which the user triggers the first selection operation is referred to as the point cloud data corresponding to the target to be labeled. After detecting the first selection operation triggered by the point cloud data corresponding to the target to be labeled in the current display frame, the electronic device determines the first labeling frame information corresponding to the target based on the first selection operation.
In one implementation, the first selection operation may be the user entering, through an input device of the electronic device, selection frame information for the point cloud data corresponding to the target to be labeled, that is, the first labeling frame information corresponding to the target; this information includes the position of a specified vertex of the frame and the corresponding length, width, and height. In one case, the first labeling frame information may further include a labeling frame identifier corresponding to the target, which uniquely identifies the target: identifiers for the same target are the same, and identifiers for different targets differ.
The input device corresponding to the electronic device includes, but is not limited to, a mouse, a keyboard, and the like.
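A labeling frame record matching the description above might look as follows; the field names are assumed for illustration:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class LabelBox:
        """First labeling frame information: a specified vertex, extents, and an ID."""
        vertex: np.ndarray  # (3,) position of the specified vertex of the frame
        length: float
        width: float
        height: float
        box_id: str         # uniquely identifies the target; same target, same ID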
For example, when the user determines that the current display frame includes an unlabeled target satisfying a first preset labeling condition, the user may input selection frame information for the point cloud data corresponding to the target through an input device of the electronic device. An unlabeled target satisfying the first preset labeling condition may be a newly appearing target, i.e., one not corresponding to any point cloud data contained in the frames before the current display frame.
For example, the user observes that frames 3 to 10 of the point cloud data frames to be displayed include point cloud data corresponding to a target A, but none of it is labeled; target A can then be regarded as an unlabeled target satisfying the first preset labeling condition. The user may trigger a first selection operation for the point cloud data corresponding to target A in frame 3, and the electronic device determines, based on this operation, first labeling frame information comprising the position of a specified vertex of the frame corresponding to the target and the corresponding length, width, and height.
For another example, the user observes that frames 1 to 30 include point cloud data corresponding to a target B, where the data in frames 10 to 30 is labeled and the data in frames 1 to 10 is not; target B may likewise be regarded as an unlabeled target satisfying the first preset labeling condition.
In another implementation, the first selection operation may be a click by the user, through an input device of the electronic device, on the point cloud data corresponding to the target to be labeled, from which the electronic device determines the first labeling frame information. When the user determines that the current display frame includes an unlabeled target satisfying a second preset labeling condition, the user may click on the corresponding point cloud data so that the electronic device detects the first selection operation. An unlabeled target satisfying the second preset labeling condition may be a target whose labels are missing in intervening frames. For example, the user observes that frames 1 to 20 include point cloud data corresponding to a target C, where the data in frames 1 to 4 is labeled, i.e., each of these frames has labeling frame information for target C (the pre-labeling frame information mentioned later), while the data in frames 5 to 20 is not labeled; target C can then be regarded as a target with labels missing in intervening frames. Correspondingly, the user may trigger a first selection operation for the point cloud data corresponding to target C in frame 4, and the electronic device determines, based on this operation, the pre-labeling frame information corresponding to target C in frame 4 as the first labeling frame information for the target to be labeled.
For another example, the user observes that frames 1 to 30 include point cloud data corresponding to a target D, where the data in frames 1 to 4 and frames 10 to 30 is labeled and the data in frames 5 to 9 is not; target D may likewise be regarded as a target with labels missing in intervening frames.
S104: after the information of the superimposed frame number is obtained, determining the information frame point cloud data frame of the superimposed frame number after or before the current display frame from the point cloud data frame to be marked as the current superimposed frame.
The superimposed frame number information may be entered manually by the user or generated automatically by the electronic device. In one implementation, after triggering the first selection operation for the point cloud data corresponding to the target to be labeled in the current display frame, the user may input the superimposed frame number information based on the observed number of frames to be displayed that contain point cloud data corresponding to the target; after obtaining this information, the electronic device determines, from the point cloud data frames to be labeled, the indicated number of frames after or before the current display frame as the current superimposed frames.
In another implementation, after the user triggers the first selection operation, the electronic device may determine the first labeling frame information corresponding to the target based on that operation. If the electronic device detects that some frame to be labeled (call it the first point cloud data frame to be labeled) contains point cloud data whose labeling frame information carries the same labeling frame identifier as the first labeling frame information, the electronic device may determine the superimposed frame number information from the frame number of the current display frame and the frame number of that first frame to be labeled, and then determine, from the frames to be labeled, the indicated number of frames after or before the current display frame as the current superimposed frames.
Correspondingly, if the first point cloud data frames to be labeled lie after the current display frame and comprise multiple frames, the current superimposed frames are the frames between the current display frame and the earliest such frame; if they lie before the current display frame and comprise multiple frames, the current superimposed frames are the frames between the current display frame and the latest such frame.
For example, the user observes that frames 1 to 30 of the frames to be displayed include point cloud data corresponding to a target B, where the data in frames 10 to 30 is labeled and the data in frames 1 to 10 is not. The user triggers a first selection operation for the point cloud data corresponding to target B in frame 1, and the electronic device determines the first labeling frame information for target B. The electronic device then finds that the frame identifiers in the labeling frame information for target B in frames 10 to 30 match the frame identifier in the first labeling frame information; it determines the superimposed frame number information from frame number 1 and frame number 10, and thus determines frames 2 to 10 from the frames to be labeled as the current superimposed frames.
S105: and based on the acquisition equipment pose information corresponding to the current display frame and the acquisition equipment pose information corresponding to each current superposition frame, superposing and displaying each current superposition frame on the current display frame so as to display the motion trail information corresponding to the target to be marked.
In this step, the electronic device determines, based on the acquisition-device pose information corresponding to the current display frame and to each current superimposed frame, the true relative positional relationship between the point cloud data corresponding to the target to be labeled in the current display frame and in each current superimposed frame; based on this relationship, it displays each current superimposed frame superimposed on the current display frame, thereby displaying the motion track information corresponding to the target. Fig. 2 shows an example of a current display frame and current superimposed frames displayed in superposition.
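The superposition in S105 amounts to re-expressing each current superimposed frame in the display frame's coordinate system via the two acquisition poses. A sketch, assuming each pose is a 4x4 world-from-lidar matrix as above:

    import numpy as np

    def overlay_in_display_frame(display_pose, overlay_points, overlay_pose):
        """Express one superimposed frame's points in the display frame's coordinates.

        display_pose, overlay_pose: 4x4 world-from-lidar acquisition poses
        overlay_points: (N, 3) points of one current superimposed frame
        """
        # p_display = inv(T_display) @ T_overlay @ p_overlay
        T = np.linalg.inv(display_pose) @ overlay_pose
        homo = np.hstack([overlay_points, np.ones((len(overlay_points), 1))])
        return (T @ homo.T).T[:, :3]

Rendering the display frame's own points together with each transformed superimposed frame then traces the target's motion track across frames.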
S106: and after detecting a second selection operation triggered by point cloud data corresponding to the target to be marked in the target frame of the current superposition frame, determining second marking frame information corresponding to the target to be marked in the target frame of the current superposition frame based on the second selection operation.
In this step, after the electronic device displays the current display frame displayed in a superimposed manner and each current superimposed frame, the user can observe the motion track information corresponding to the target to be marked, and the user can accurately determine the end point of the track of the target to be marked based on the motion track information corresponding to the target to be marked, and further, can trigger a second selection operation for the point cloud data corresponding to the target to be marked in the target frame of the current superimposed frame. And after detecting a second selection operation triggered by point cloud data corresponding to the target to be marked in the target frame of the current superposition frame, the electronic equipment determines second marking frame information corresponding to the target to be marked in the target frame of the current superposition frame.
In one implementation, the second selection operation may be an operation that a user inputs selection frame information for point cloud data corresponding to a target to be marked through an input device corresponding to the electronic device, that is, second marking frame information corresponding to the target to be marked, where the second marking frame information corresponding to the target to be marked includes position information of a specified vertex of the frame and corresponding length, width, and height information. In one case, the second annotation frame information corresponding to the target to be annotated may further include: and marking frame identification corresponding to the target to be marked.
In another implementation, the second selection operation may be a click operation performed by the user, through an input device corresponding to the electronic device, on the point cloud data corresponding to the target to be annotated. In one case, after the electronic device detects the second selection operation, preset labeling frame information may be generated with the click position of the second selection operation as its center, and this preset labeling frame information is used as the second labeling frame information corresponding to the target to be labeled. In another case, after detecting the second selection operation, the electronic device may use the click position of the second selection operation to determine the pre-labeling frame information corresponding to the target to be labeled in the target frame of the current superposition frame selected by the second selection operation, and take it as the second labeling frame information corresponding to the target to be labeled in the target frame of the current superposition frame.
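For the first case, a sketch of generating preset labeling frame information centered on the click position is shown below; the preset length, width, and height (a typical vehicle size here) and the encoding of a frame as center, size, and yaw are illustrative assumptions.

```python
def default_box_at_click(click_xyz, preset_lwh=(4.5, 1.8, 1.6), yaw=0.0):
    """Generate preset labeling frame information centered on the click
    position of the second selection operation.

    preset_lwh: assumed default length/width/height of the frame;
    the (center, size, yaw) encoding is likewise an assumption.
    """
    length, width, height = preset_lwh
    return {"center": tuple(click_xyz),
            "size": (length, width, height),
            "yaw": yaw}
```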
In one case, when the current superposition frames are frames after the current display frame, the target frame of the current superposition frames may be the last frame among them; in another case, when the current superposition frames are frames before the current display frame, the target frame may be the first frame among them.
S107: and determining third labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information.
In one implementation, the electronic device determines, based on the first and second labeling frame information, the third labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frames by using a preset interpolation algorithm. The preset interpolation algorithm may be any algorithm in the related art that can generate, by interpolation, labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frames, and details are not repeated here.
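As one plausible instance of such an interpolation algorithm (linear interpolation of frame center, size, and heading; not necessarily the algorithm the patent intends), a sketch follows. Labeling frames are assumed to be dicts with 'center', 'size', and 'yaw' keys, as in the earlier sketch.

```python
import numpy as np

def interpolate_boxes(first_box, second_box, num_between):
    """Interpolate labeling frames for the frames lying between the
    current display frame and the target frame of the superposed frames.

    Center and size are interpolated linearly; yaw is interpolated
    along the shortest angular path.
    """
    boxes = []
    for i in range(1, num_between + 1):
        t = i / (num_between + 1)
        center = (1 - t) * np.asarray(first_box["center"]) + t * np.asarray(second_box["center"])
        size = (1 - t) * np.asarray(first_box["size"]) + t * np.asarray(second_box["size"])
        # wrap the yaw difference into [-pi, pi) before interpolating
        dyaw = (second_box["yaw"] - first_box["yaw"] + np.pi) % (2 * np.pi) - np.pi
        boxes.append({"center": tuple(center),
                      "size": tuple(size),
                      "yaw": first_box["yaw"] + t * dyaw})
    return boxes
```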
By applying the embodiment of the invention, after the first labeling frame information corresponding to the target to be labeled is determined, the current superposed frames indicated by the superposed frame number information after the current display frame are determined, and each current superposed frame is superposed and displayed on the current display frame based on the acquisition device pose information corresponding to the current display frame and to each current superposed frame, so as to display the motion trail information corresponding to the target to be labeled. This provides an accurate reference basis for the user's labeling of the target to be labeled and makes the labeling more convenient. Moreover, even when multiple similar targets to be labeled are densely positioned, a situation that would otherwise interfere with the user's labeling and cause labeling errors, the user can still mark accurate second labeling frame information for the target based on the displayed motion trail information. The accuracy of the second labeling frame information corresponding to the target to be labeled is thus improved, which in turn improves the accuracy of the third labeling frame information determined based on the first and second labeling frame information; the point cloud data is labeled simply and effectively, the burden on labeling personnel is reduced, and labeling efficiency is improved.
In another embodiment of the present invention, the step S107 may include the following steps 011-012:
011: and determining intermediate labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information.
012: and aiming at each current superposition frame between the current display frame and the target frame of the current superposition frames, adjusting the intermediate labeling frame information corresponding to the target to be labeled in the current superposition frame based on the distribution characteristics of the point cloud data corresponding to the target to be labeled in the current superposition frame, and determining the third labeling frame information corresponding to the target to be labeled in the current superposition frame.
In the embodiment of the invention, the electronic device generates, based on the first labeling frame information and the second labeling frame information and using a preset interpolation algorithm, the intermediate labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frames. For each current superposition frame between the current display frame and the target frame of the current superposition frames, it determines the distribution characteristics of the point cloud data corresponding to the target to be labeled in that frame, and then adjusts the intermediate labeling frame information corresponding to the target to be labeled so that the edges of the intermediate labeling frame surround the point cloud data corresponding to the target to be labeled in that frame, thereby obtaining the third labeling frame information corresponding to the target to be labeled in the current superposition frame.
The intermediate labeling frame information generated by interpolation is automatically corrected using the distribution characteristics of the point cloud data corresponding to the target to be labeled in the current superposition frame, yielding third labeling frame information of higher accuracy for the target to be labeled in the current superposition frame, thereby improving the user's labeling efficiency and accuracy.
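A simplified sketch of such a correction step is shown below, reusing the assumed frame encoding: it recenters the interpolated frame on the target's points and grows it to cover their extent. It ignores yaw for brevity; the actual adjustment would work in the frame's own rotated coordinates.

```python
import numpy as np

def refine_box_to_points(box, target_points, margin=0.1):
    """Adjust an interpolated labeling frame so its edges surround the
    point cloud data corresponding to the target to be labeled.

    target_points: (N, 3) points already associated with the target;
    margin: assumed padding so edges do not cut through points.
    Axis-aligned simplification: size is treated as (x, y, z) extents.
    """
    lo = target_points.min(axis=0)
    hi = target_points.max(axis=0)
    center = (lo + hi) / 2.0
    # never shrink below the interpolated size, only grow to cover points
    size = np.maximum(np.asarray(box["size"]), (hi - lo) + 2 * margin)
    return {"center": tuple(center), "size": tuple(size), "yaw": box["yaw"]}
```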
In another embodiment of the present invention, after the S103, the method may further include:
and displaying a first labeling frame corresponding to the first labeling frame information in the current display frame.
In order to facilitate the user's check of whether the first labeling frame information determined for the target to be labeled is accurate and whether the labeling is reasonable, after the first labeling frame information corresponding to the target to be labeled is determined, the electronic device can display the first labeling frame corresponding to the first labeling frame information at the corresponding position in the current display frame, so that the user can check the accuracy of the position of the first labeling frame. Subsequently, if the user finds that the first labeling frame is not sufficiently accurate or reasonable, the length, width, height, position, and so on of the first labeling frame can be adjusted.
In another embodiment of the present invention, after the S105, the method may further include:
and displaying a second labeling frame corresponding to the second labeling frame information in the target frame of the current superposition frame.
In order to facilitate the user's check of whether the second labeling frame information corresponding to the target to be labeled is accurate and whether the labeling is reasonable, after the second labeling frame information corresponding to the target to be labeled is determined, the electronic device can display the second labeling frame corresponding to the second labeling frame information at the corresponding position in the target frame of the current superposition frames, so that the user can check the accuracy of the position of the second labeling frame. Subsequently, if the user finds that the second labeling frame is not sufficiently accurate or reasonable, the length, width, height, position, and so on of the second labeling frame can be adjusted.
In another embodiment of the present invention, after the S107, the method may further include the following steps 021-022:
021: if a fine adjustment instruction for the marking frame corresponding to the to-be-adjusted marking frame information of the target to be marked in a frame to be adjusted is detected, determining an adjustment direction corresponding to the marking frame information of the target to be marked based on the fine adjustment instruction.
Wherein, the frame to be adjusted is: a current display frame or current superposition frame that contains point cloud data corresponding to the target to be marked; and the marking frame corresponding to the to-be-adjusted marking frame information is: the marking frame corresponding to the first, second, or third marking frame information corresponding to the target to be marked.
022: determining target marking frame information corresponding to the target to be marked that meets a preset edge fitting condition, based on the relative position relationship between the point cloud data corresponding to the target to be marked in the frame to be adjusted and the marking frame corresponding to the to-be-adjusted marking frame information, together with the adjustment direction, wherein the preset edge fitting condition is: the specified edge, in the adjustment direction, of the point cloud data corresponding to the target to be marked coincides with the specified edge, in the adjustment direction, of the marking frame corresponding to the to-be-adjusted marking frame information.
In the embodiment of the present invention, after the electronic device determines the third marking frame information corresponding to the target to be marked in each current superposition frame between the current display frame and the target frame of the current superposition frames, it may display the third marking frame corresponding to that information at the corresponding position in each of those frames, so that the user can check whether the generated third marking frame information is accurate and whether the marking is reasonable. During the check, if it is determined that the marking frame information corresponding to the target to be marked in a certain frame needs to be adjusted, a fine adjustment instruction can be triggered for that marking frame information. For clarity of description, in the embodiment of the present invention, the marking frame information that needs to be adjusted is referred to as the to-be-adjusted marking frame information, and the point cloud data frame to be displayed that contains it is referred to as the frame to be adjusted.
When the electronic device detects a fine adjustment instruction for the marking frame corresponding to the to-be-adjusted marking frame information of the target to be marked in the frame to be adjusted (the instruction may carry information indicating the direction in which the marking frame information should be adjusted), it determines the adjustment direction corresponding to the marking frame information based on the fine adjustment instruction. It then determines the position information of the to-be-adjusted marking frame information corresponding to the target to be marked and the position information of the point cloud data corresponding to the target to be marked in the frame to be adjusted, and from these the relative position relationship between the two. Based on the determined relative position relationship and the adjustment direction, it determines the target marking frame information corresponding to the target to be marked that meets the preset edge fitting condition, that is, the marking frame whose specified edge in the adjustment direction coincides with the specified edge, in the adjustment direction, of the point cloud data corresponding to the target to be marked.
For example, if the adjustment direction is rightward, after adjustment the left edge of the point cloud data corresponding to the target to be marked coincides with the left edge of the marking frame corresponding to the to-be-adjusted marking frame information. If the adjustment direction is upward, the lower edge of the point cloud data corresponding to the target to be marked coincides with the lower edge of that marking frame.
The position information of the point cloud data corresponding to the target to be marked in the frame to be adjusted may refer to the position information of a rectangular frame which minimally surrounds the point cloud data corresponding to the target to be marked.
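A minimal, axis-aligned sketch of this edge fitting is given below, reusing the assumed frame encoding; a full implementation would fit edges in the frame's own yaw-rotated coordinates rather than along world axes.

```python
import numpy as np

def fine_tune_box(box, target_points, direction):
    """Shift the marking frame along the adjustment direction until its
    trailing edge coincides with the trailing edge of the target's
    point cloud data (the preset edge fitting condition).

    direction: assumed axis-aligned unit vector, e.g. (1, 0, 0) for
    'adjust rightward'; target_points: (N, 3) points of the target.
    Axis-aligned simplification: size is treated as (x, y, z) extents.
    """
    d = np.asarray(direction, dtype=float)
    axis = int(np.argmax(np.abs(d)))
    half = box["size"][axis] / 2.0
    center = list(box["center"])
    if d[axis] > 0:
        # moving right/up: make the left/lower edges coincide
        center[axis] = target_points[:, axis].min() + half
    else:
        # moving left/down: make the right/upper edges coincide
        center[axis] = target_points[:, axis].max() - half
    return {"center": tuple(center), "size": box["size"], "yaw": box["yaw"]}
```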
In order to help the user check the accuracy of the marked marking frame information, the electronic device can display the point cloud data frame to be displayed from different angles. In one implementation, the point cloud data frame to be displayed may be displayed at a two-dimensional overlooking angle, so that the overall structure of the point cloud data frame to be displayed can be viewed, facilitating determination of the point cloud data corresponding to each target. In another implementation, the point cloud data frame to be displayed can be displayed from a side view angle and a back view angle, so that the distribution of point cloud data in the frame is shown from different angles.
In order to better and comprehensively show the annotation box annotated to the target to be annotated, so that the user can better check whether the annotated annotation box information is suitable, in another embodiment of the present invention, after the S103, the method may further include the following steps:
031: and displaying the target to be marked and the first marking frame corresponding to the first marking frame information in a preset three-dimensional space display form. And/or
032: and displaying the target to be marked and the first marking frame corresponding to the first marking frame information in a preset two-dimensional non-overlooking angle. And/or
033: obtaining a two-dimensional image corresponding to a current display frame; based on first labeling frame information corresponding to a target to be labeled, projecting a first labeling frame corresponding to the first labeling frame information to a two-dimensional image corresponding to a current display frame to obtain a projection frame corresponding to the first labeling frame information corresponding to the target to be labeled; and displaying the two-dimensional image corresponding to the current display frame and the projection frame corresponding to the first labeling frame information corresponding to the target to be labeled.
The two-dimensional image comprises a target to be marked. The two-dimensional image corresponding to the current display frame is an image acquired in the same acquisition period as the current display frame.
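For step 033, the projection can be carried out with the relative pose between the image acquisition device and the point cloud acquisition device plus the camera intrinsics. The sketch below assumes a pinhole camera model, a 4x4 lidar-to-camera extrinsic matrix, and frame corners lying in front of the camera; all of these are assumptions for illustration.

```python
import numpy as np

def box_corners(box):
    """Eight corners of a yaw-rotated 3D marking frame, in point cloud
    (sensor) coordinates, using the assumed (center, size, yaw) encoding."""
    l, w, h = box["size"]
    half = np.array([[sx * l / 2, sy * w / 2, sz * h / 2]
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(box["yaw"]), np.sin(box["yaw"])
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return half @ rot.T + np.asarray(box["center"])

def project_box(box, lidar_to_cam, intrinsics):
    """Project the marking frame into the two-dimensional image.

    lidar_to_cam: assumed 4x4 extrinsic matrix encoding the relative
    pose of the image acquisition device and the point cloud
    acquisition device; intrinsics: assumed 3x3 pinhole camera matrix.
    Returns the (8, 2) pixel coordinates of the projected corners.
    """
    pts = box_corners(box)
    homo = np.hstack([pts, np.ones((8, 1))])
    cam = (lidar_to_cam @ homo.T).T[:, :3]   # camera coordinates
    uv = (intrinsics @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]            # perspective divide; assumes z > 0
```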
The target to be marked and the first marking frame corresponding to the first marking frame information are displayed in a preset three-dimensional space display mode, so that more visual display can be provided for a user, and the user can visually check whether the first marking frame corresponding to the marked first marking frame information is attached to the target to be marked or not and whether the first marking frame is suitable or not.
Correspondingly, in an implementation manner, after the electronic device determines the second marking frame information corresponding to the target to be marked, the target to be marked and the second marking frame corresponding to the second marking frame information can be displayed in a preset three-dimensional space display form, so that a user can visually check whether the second marking frame corresponding to the second marking frame information marked by the user is attached to the target to be marked or not, and whether the second marking frame is suitable or not is determined.
Correspondingly, after the electronic device determines the third marking frame information corresponding to the target to be marked, the target to be marked and the third marking frame corresponding to the third marking frame information can be displayed in a preset three-dimensional space display form, so that a user can visually check whether the third marking frame corresponding to the marked third marking frame information is attached to the target to be marked or not and whether the third marking frame is suitable or not.
The preset two-dimensional non-overlook angle may include a side view angle, a front view angle, a back view angle, and the like of the target to be marked. And displaying the target to be marked and the first marking frame corresponding to the first marking frame information at different angles, so that a user can determine whether the first marking frame corresponding to the marked first marking frame information is attached to the target to be marked or not and is suitable or not from different angles. Correspondingly, the target to be marked and a second marking frame corresponding to the second marking frame information can be displayed in a preset two-dimensional non-overlooking angle; displaying the target to be marked and a third marking frame corresponding to the third marking frame information at a preset two-dimensional non-overlooking angle, so that a user can determine whether a second marking frame corresponding to the marked second marking frame information is attached to the target to be marked or not and whether the second marking frame is proper or not from different angles; and whether a third marking frame corresponding to the third marking frame information is attached to the target to be marked or not is judged to be proper.
In one case, the user may trigger the fine adjustment instruction with reference to the target to be labeled and the marking frame corresponding to the marking frame information displayed at the preset two-dimensional non-overlooking angle. Fig. 3A is an exemplary back-view diagram of a target to be marked and the marking frame corresponding to its to-be-adjusted marking frame information. As shown in Fig. 3A, the user can see that the point cloud data corresponding to the target to be marked is deviated to the left relative to that marking frame. Accordingly, the user can trigger a fine adjustment instruction indicating that the marking frame should be adjusted rightward; after obtaining the instruction, the electronic device adjusts the marking frame rightward based on the relative position relationship between the point cloud data corresponding to the target to be marked in the frame to be adjusted and the marking frame corresponding to the to-be-adjusted marking frame information, together with the adjustment direction, so that the left edge of the marking frame fits the left edge of the point cloud data corresponding to the target to be marked. The adjusted result is illustrated in Fig. 3B.
In an implementation of the present invention, after determining the first labeling frame information corresponding to the target to be labeled, the electronic device may further obtain the two-dimensional image corresponding to the current display frame; then, based on the relative position relationship between the image acquisition device corresponding to the two-dimensional image and the acquisition device corresponding to the current display frame, together with the first labeling frame information corresponding to the target to be labeled, it projects the first labeling frame corresponding to the first labeling frame information into the two-dimensional image corresponding to the current display frame to obtain the projection frame corresponding to the first labeling frame information, and then displays the two-dimensional image corresponding to the current display frame together with that projection frame. The user can thus determine whether the first labeling frame corresponding to the labeled first labeling frame information fits the target to be labeled and is suitable.
Correspondingly, after determining the second labeling frame information corresponding to the target to be labeled, the electronic device may further obtain the two-dimensional image corresponding to the target frame of the current superposition frames; then, based on the relative position relationship between the image acquisition device corresponding to the two-dimensional image and the acquisition device corresponding to that target frame, together with the second labeling frame information corresponding to the target to be labeled, it projects the second labeling frame corresponding to the second labeling frame information into the two-dimensional image corresponding to the target frame of the current superposition frames to obtain the projection frame corresponding to the second labeling frame information, and then displays the two-dimensional image corresponding to the target frame together with that projection frame. The user can thus determine whether the second labeling frame corresponding to the labeled second labeling frame information fits the target to be labeled and is suitable.
After determining the third labeling frame information corresponding to the target to be labeled, the electronic device may further, for each point cloud data frame between the current display frame and the target frame of the current superposition frames, obtain the two-dimensional image corresponding to that point cloud data frame; then, based on the relative position relationship between the image acquisition device corresponding to the two-dimensional image and the acquisition device corresponding to the point cloud data frame, together with the third labeling frame information corresponding to the target to be labeled in that frame, it projects the third labeling frame corresponding to the third labeling frame information into the two-dimensional image to obtain the projection frame corresponding to the third labeling frame information, and then displays the two-dimensional image corresponding to the point cloud data frame together with that projection frame. The user can thus determine whether the third labeling frame corresponding to the labeled third labeling frame information fits the target to be labeled and is suitable.
Fig. 4A and 4B are schematic diagrams illustrating a display example of a current display frame, a target to be annotated in the current display frame, and the annotation frame corresponding to its annotation frame information, provided in an embodiment of the present invention. A two-dimensional picture view is displayed at the upper left corner of the figure, containing the two-dimensional image corresponding to the current display frame and the projection frame corresponding to the first labeling frame information of the target to be labeled. The lower left corner shows a three-dimensional view, containing the target to be labeled and the first labeling frame corresponding to the first labeling frame information displayed in a preset three-dimensional space display form. The middle area shows the point cloud data frame to be displayed at a two-dimensional overlooking angle. The upper right corner shows a two-dimensional back view, containing the target to be labeled and the first labeling frame displayed at a back view angle. The lower right corner shows a two-dimensional side view, containing the target to be labeled and the first labeling frame displayed at a side view angle. Fig. 4A is an exemplary diagram before the ground point cloud data deletion operation is performed on the point cloud data frame to be labeled, and Fig. 4B is an exemplary diagram after the ground point cloud data deletion operation is performed.
In another embodiment of the present invention, before the displaying the to-be-displayed point cloud data frame, the method may further include: and acquiring pre-marked data corresponding to the point cloud data frame to be marked.
Wherein the pre-annotation data comprises: pre-labeling frame information corresponding to point cloud data corresponding to a pre-labeling target in each frame of point cloud data to be labeled;
the step of displaying the point cloud data frame to be displayed may include:
displaying the point cloud data frames to be displayed frame by frame, and correspondingly displaying the pre-marked frames corresponding to the pre-marked frame information corresponding to the point cloud data corresponding to the pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed.
The pre-labeled data corresponding to the point cloud data frames to be labeled may be a detection labeling result obtained by detecting each target in the point cloud data frames to be labeled with a 3D-based target detection algorithm, or a result of the user's historical manual labeling of each target in the point cloud data frames to be labeled. The pre-labeled data corresponding to the point cloud data frames to be labeled may include: the position information of the pre-labeling frame of each target contained in each frame of the pre-labeled point cloud data frames to be labeled, the association relationship marking targets in different frames as the same physical object, and the semantic information corresponding to the point cloud data in each frame of the point cloud data frames to be labeled.
The association relationship between targets marked in different frames as the same physical object can be represented by the frame identification information of the pre-labeling frames corresponding to the targets. For example, targets marked in different frames as the same physical object have the same frame identification information in their corresponding pre-labeling frames: the frame identification information of the pre-labeling frames corresponding to targets of the same physical object is identical, while that of the pre-labeling frames corresponding to targets of different physical objects differs.
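To make the representation concrete, the sketch below shows one possible record for a pre-labeling frame entry; the field names and types are illustrative assumptions, not the patent's data format.

```python
from dataclasses import dataclass

@dataclass
class PreLabeledBox:
    """One pre-labeling frame entry in the pre-labeled data.

    Boxes in different point cloud frames that share the same track_id
    are pre-labeled as the same physical object.
    """
    frame_index: int            # which point cloud data frame the box lives in
    track_id: str               # frame identification information of the box
    center: tuple[float, float, float]
    size: tuple[float, float, float]   # length, width, height
    yaw: float = 0.0
    semantic: str = ""          # optional semantic information of the points
```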
The electronic equipment can obtain corresponding pre-marked data while obtaining the point cloud data frames to be marked, and further correspondingly displays pre-marked frames corresponding to pre-marked frame information corresponding to point cloud data corresponding to pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed when displaying the point cloud data frames to be displayed.
Subsequently, for the displayed point cloud data frames to be displayed and the pre-labeled frames shown in them, the user may determine whether any pre-labeled frame is erroneous, for example, different physical objects labeled as the same physical object, an object that should not have been detected and labeled being mistakenly labeled, or the same physical object labeled as different physical objects, and so on. The user can then correspondingly modify the erroneous pre-labeled frames.
In another embodiment of the present invention, before the S103, the method may further include:
after detecting a modification operation on a pre-labeling frame corresponding to point cloud data corresponding to a first pre-labeling target in a displayed point cloud data frame to be displayed, modifying the pre-labeling frame corresponding to the point cloud data corresponding to the first pre-labeling target based on the modification operation, wherein the modification operation comprises the following steps: at least one type of operation among delete, split, and merge.
For pre-labeled frames that label different physical objects as the same physical object, the user can trigger a modification operation indicating that the pre-labeled frames should be split; after detecting the operation, the electronic device modifies the pre-labeling frames corresponding to the point cloud data corresponding to the first pre-labeling target based on the modification operation, that is, it splits the association relationship between the pre-labeled frames. For example, suppose the point cloud data frames to be displayed comprise frames 1-10, frames 1-5 contain point cloud data corresponding to a target 1, and frames 6-10 contain point cloud data corresponding to a target 2, while in the pre-labeled data the label frame identifier of the pre-labeling frame corresponding to target 1 is the same as that of target 2. The user can select the pre-labeling frame corresponding to target 2 in the 6th frame and trigger a modification operation indicating splitting; the electronic device then changes the label frame identifier of the pre-labeling frames corresponding to target 2 in frames 6-10 to another identifier, where the new identifier is unique and differs from all existing label frame identifiers.
For pre-labeled frames that mistakenly label objects which should not have been detected and labeled, the user can trigger a modification operation indicating deletion: the user directly selects the mistakenly labeled pre-labeled frame and triggers the delete operation, and the electronic device deletes the pre-labeling frame information corresponding to the mistakenly labeled pre-labeled frame based on the modification operation.
For pre-labeled frames that label the same physical object as different physical objects, the user can trigger a modification operation indicating that the pre-labeled frames should be merged: the user directly selects one of the pre-labeled frames and triggers the merge operation, for example, the label frame identifier in the pre-labeling frame information corresponding to the selected pre-labeled frame is modified into the label frame identifier in the pre-labeling frame information corresponding to another pre-labeled frame of the same physical object. The electronic device modifies, based on the modification operation, the label frame identifier in the pre-labeling frame information corresponding to the selected pre-labeled frame and to its associated pre-labeled frames, where the pre-labeled frames associated with the selected one are those whose label frame identifier is the same as that of the selected pre-labeled frame.
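Under the PreLabeledBox sketch above, the three modification operations could look like the following; the identifier scheme is an assumption, and real code would have to guarantee new identifiers never collide with existing ones.

```python
import itertools

_new_ids = (f"track_{i}" for i in itertools.count())  # assumed unique-id source

def split_track(boxes, track_id, from_frame):
    """Split: give the boxes of track_id from from_frame onward a fresh
    identifier, breaking the wrong same-object association."""
    new_id = next(_new_ids)
    for box in boxes:
        if box.track_id == track_id and box.frame_index >= from_frame:
            box.track_id = new_id

def merge_tracks(boxes, selected_id, other_id):
    """Merge: relabel the selected track with the identifier of another
    track that actually belongs to the same physical object."""
    for box in boxes:
        if box.track_id == selected_id:
            box.track_id = other_id

def delete_track(boxes, track_id):
    """Delete: drop every mistakenly pre-labeled box of the track."""
    return [box for box in boxes if box.track_id != track_id]
```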
Corresponding to the above method embodiment, an embodiment of the present invention provides a device for labeling point cloud data, as shown in fig. 5, the device includes:
a first obtaining module 510 configured to obtain point cloud data frames to be labeled and acquisition device pose information corresponding to each point cloud data frame to be labeled;
a processing and displaying module 520, configured to perform preset display processing on the point cloud data frame to be marked, obtain a point cloud data frame to be displayed, and display the point cloud data frame to be displayed;
a first determining module 530, configured to, after detecting a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed in the point cloud data frames to be displayed, determine first marking frame information corresponding to the target to be marked based on the first selection operation;
a second determining module 540, configured to, after obtaining the overlay frame number information, determine from the point cloud data frames to be marked the point cloud data frames of the number indicated by the overlay frame number information after or before the current display frame, as the current overlay frames;
an overlay display module 550 configured to overlay and display each current overlay frame on the current display frame based on the pose information of the capture device corresponding to the current display frame and the pose information of the capture device corresponding to each current overlay frame, so as to display the motion trajectory information corresponding to the target to be marked;
a third determining module 560, configured to, after detecting a second selection operation triggered by point cloud data corresponding to the target to be labeled in the target frame of the current overlay frame, determine, based on the second selection operation, second labeling frame information corresponding to the target to be labeled in the target frame of the current overlay frame;
a fourth determining module 570, configured to determine, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame.
By applying the embodiment of the invention, after the first labeling frame information corresponding to the target to be labeled is determined, the current overlay frames indicated by the overlay frame number information after the current display frame are determined, and each current overlay frame is overlaid and displayed on the current display frame based on the acquisition device pose information corresponding to the current display frame and to each current overlay frame, so as to display the motion trail information corresponding to the target to be labeled. This provides an accurate reference basis for the user's labeling of the target to be labeled and makes the labeling more convenient. Moreover, even when multiple similar targets to be labeled are densely positioned, a situation that would otherwise interfere with the user's labeling and cause labeling errors, the user can still mark accurate second labeling frame information for the target based on the displayed motion trail information. The accuracy of the second labeling frame information corresponding to the target to be labeled is thus improved, which in turn improves the accuracy of the third labeling frame information determined based on the first and second labeling frame information; the point cloud data is labeled simply and effectively, the burden on labeling personnel is reduced, and labeling efficiency is improved.
In another embodiment of the present invention, the processing and displaying module 520 is specifically configured to perform a ground point cloud data deleting operation on the point cloud data frame to be marked, so as to obtain a point cloud data frame to be displayed.
In another embodiment of the present invention, the fourth determining module 570 is specifically configured to determine, based on the first annotation frame information and the second annotation frame information, intermediate annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame;
and aiming at each current overlay frame between the current display frame and the target frame of the current overlay frames, adjusting the intermediate labeling frame information corresponding to the target to be labeled in the current overlay frame based on the distribution characteristics of the point cloud data corresponding to the target to be labeled in the current overlay frame, and determining the third labeling frame information corresponding to the target to be labeled in the current overlay frame.
In another embodiment of the present invention, the apparatus further comprises: a first display module (not shown in the figure), configured to display a first annotation frame corresponding to the first annotation frame information in the current display frame after the first annotation frame information corresponding to the target to be annotated is determined based on the first selected operation;
the device further comprises: a second display module (not shown in the figures), configured to, after determining, based on the second selected operation, second annotation frame information corresponding to the target to be annotated in the target frame of the current overlay frame, display a second annotation frame corresponding to the second annotation frame information in the target frame of the current overlay frame.
In another embodiment of the present invention, the apparatus further comprises:
a fifth determining module (not shown in the drawings), configured to, after determining, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frames, if a fine-tuning instruction for the annotation frame corresponding to the to-be-adjusted annotation frame information of the target to be annotated in a frame to be adjusted is detected, determine, based on the fine-tuning instruction, an adjustment direction corresponding to the annotation frame information of the target to be annotated, where the frame to be adjusted is: a current display frame or current overlay frame that contains point cloud data corresponding to the target to be annotated, and the annotation frame corresponding to the to-be-adjusted annotation frame information includes: the annotation frame corresponding to the first annotation frame information, the annotation frame corresponding to the second annotation frame information, or the annotation frame corresponding to the third annotation frame information corresponding to the target to be annotated;
a sixth determining module (not shown in the drawings), configured to determine, based on the relative position relationship between the point cloud data corresponding to the target to be annotated in the frame to be adjusted and the annotation frame corresponding to the to-be-adjusted annotation frame information, together with the adjustment direction, target annotation frame information corresponding to the target to be annotated that meets a preset edge fitting condition, where the preset edge fitting condition is: the specified edge, in the adjustment direction, of the point cloud data corresponding to the target to be annotated coincides with the specified edge, in the adjustment direction, of the annotation frame corresponding to the to-be-adjusted annotation frame information.
In another embodiment of the present invention, the apparatus further comprises:
a second obtaining module (not shown in the figure), configured to obtain pre-labeled data corresponding to the point cloud data frame to be labeled before the point cloud data frame to be displayed is displayed, where the pre-labeled data includes: pre-labeling frame information corresponding to point cloud data corresponding to a pre-labeling target in each frame of point cloud data to be labeled;
the processing and displaying module 520 is specifically configured to display the point cloud data frames to be displayed frame by frame, and correspondingly display the pre-marked frames corresponding to the pre-marked frame information corresponding to the point cloud data corresponding to the pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed.
In another embodiment of the present invention, the apparatus further comprises:
a modifying module (not shown in the figures), configured to, after detecting a first selection operation triggered by point cloud data corresponding to a target to be annotated in a current display frame currently displayed in the point cloud data frame to be displayed, before determining first annotation frame information corresponding to the target to be annotated based on the first selection operation, and after detecting a modifying operation for a pre-annotation frame corresponding to the point cloud data corresponding to the first pre-annotation target in the displayed point cloud data frame to be displayed, modify the pre-annotation frame corresponding to the point cloud data corresponding to the first pre-annotation target based on the modifying operation, where the modifying operation includes: at least one type of operation among delete, split, and merge.
In another embodiment of the present invention, the processing and displaying module 520 is specifically configured to display the to-be-displayed point cloud data frame in a two-dimensional top view.
In another embodiment of the present invention, the apparatus further comprises: a third display module (not shown in the figures), configured to display, in a preset three-dimensional space display form, the target to be annotated and the first annotation frame corresponding to the first annotation frame information after the first annotation frame information corresponding to the target to be annotated is determined based on the first selected operation; and/or the apparatus further comprises: a fourth display module (not shown in the figures), configured to display the target to be annotated and the first annotation frame corresponding to the first annotation frame information at a preset two-dimensional non-top view angle; and/or
The device further comprises: a third obtaining module (not shown in the figure) configured to obtain a two-dimensional image corresponding to the current display frame; a projection module (not shown in the figure), configured to project, based on first annotation frame information corresponding to the target to be annotated, a first annotation frame corresponding to the first annotation frame information to a two-dimensional image corresponding to a current display frame, so as to obtain a projection frame corresponding to the first annotation frame information corresponding to the target to be annotated; a fifth display module (not shown in the figure), configured to display a two-dimensional image corresponding to the current display frame and a projection frame corresponding to the first annotation frame information corresponding to the target to be annotated, where the two-dimensional image includes the target to be annotated.
The device and system embodiments correspond to the method embodiments, and have the same technical effects as the method embodiments, and specific descriptions refer to the method embodiments. The device embodiment is obtained based on the method embodiment, and for specific description, reference may be made to the method embodiment section, which is not described herein again. Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A point cloud data labeling method is characterized by comprising the following steps:
acquiring point cloud data frames to be marked and acquisition equipment pose information corresponding to each point cloud data frame to be marked;
performing preset display processing on the point cloud data frame to be marked to obtain a point cloud data frame to be displayed, and displaying the point cloud data frame to be displayed;
after detecting a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed in the point cloud data frames to be displayed, determining first marking frame information corresponding to the target to be marked based on the first selection operation;
after acquiring the superimposed frame number information, determining, from the point cloud data frames to be marked, the point cloud data frames of the number indicated by the superimposed frame number information after or before the current display frame, as the current superimposed frames;
based on the acquisition equipment pose information corresponding to the current display frame and the acquisition equipment pose information corresponding to each current superposition frame, superposing and displaying each current superposition frame on the current display frame to display the motion track information corresponding to the target to be marked;
after a second selection operation triggered by point cloud data corresponding to the target to be marked in the target frame of the current superposition frame is detected, determining second marking frame information corresponding to the target to be marked in the target frame of the current superposition frame based on the second selection operation;
and determining third labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information.
2. The method of claim 1, wherein the step of performing the preset display processing on the point cloud data frame to be annotated to obtain the point cloud data frame to be displayed comprises:
and carrying out ground point cloud data deletion operation on the point cloud data frame to be marked to obtain the point cloud data frame to be displayed.
3. The method of claim 1, wherein the step of determining, based on the first annotation frame information and the second annotation frame information, third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame comprises:
determining intermediate labeling frame information corresponding to the target to be labeled in each current superposition frame between the current display frame and the target frame of the current superposition frame based on the first labeling frame information and the second labeling frame information;
and aiming at each current superposition frame between the current display frame and the target frame of the current superposition frames, adjusting the intermediate labeling frame information corresponding to the target to be labeled in the current superposition frame based on the distribution characteristics of the point cloud data corresponding to the target to be labeled in the current superposition frame, and determining the third labeling frame information corresponding to the target to be labeled in the current superposition frame.
4. The method of claim 1, wherein after the step of determining the first annotation box information corresponding to the target to be annotated based on the first selected operation, the method further comprises:
displaying a first labeling frame corresponding to the first labeling frame information in the current display frame;
after the step of determining, based on the second selected operation, second annotation frame information corresponding to the target to be annotated in the target frame of the current overlay frame, the method further includes:
and displaying a second labeling frame corresponding to the second labeling frame information in the target frame of the current superposition frame.
5. The method of claim 4, wherein after the step of determining the third annotation frame information corresponding to the target to be annotated in each current overlay frame between the current display frame and the target frame of the current overlay frame based on the first annotation frame information and the second annotation frame information, the method further comprises:
if a fine tuning instruction for the marking frame corresponding to the to-be-adjusted marking frame information of the target to be marked in a frame to be adjusted is detected, determining an adjustment direction corresponding to the marking frame information of the target to be marked based on the fine tuning instruction, wherein the frame to be adjusted is: a current display frame or current superposition frame that contains point cloud data corresponding to the target to be marked, and the marking frame corresponding to the to-be-adjusted marking frame information comprises: the marking frame corresponding to the first marking frame information, the marking frame corresponding to the second marking frame information, or the marking frame corresponding to the third marking frame information corresponding to the target to be marked;
determining target marking frame information corresponding to the target to be marked that meets a preset edge fitting condition, based on the relative position relationship between the point cloud data corresponding to the target to be marked in the frame to be adjusted and the marking frame corresponding to the to-be-adjusted marking frame information, together with the adjustment direction, wherein the preset edge fitting condition is: the specified edge, in the adjustment direction, of the point cloud data corresponding to the target to be marked coincides with the specified edge, in the adjustment direction, of the marking frame corresponding to the to-be-adjusted marking frame information.
6. The method of any one of claims 1-5, wherein prior to said displaying the frame of point cloud data to be displayed, the method further comprises:
obtaining pre-labeling data corresponding to the point cloud data frame to be labeled, wherein the pre-labeling data comprises: pre-labeling frame information corresponding to point cloud data corresponding to a pre-labeling target in each frame of point cloud data to be labeled;
the step of displaying the point cloud data frame to be displayed comprises the following steps:
and displaying the point cloud data frames to be displayed frame by frame, and correspondingly displaying the pre-marked frames corresponding to the pre-marked frame information corresponding to the point cloud data corresponding to the pre-marked targets in the pre-marked data corresponding to each point cloud data frame to be displayed.
7. The method of claim 6, wherein after detecting a first selection operation triggered by point cloud data corresponding to a target to be marked in a current display frame currently displayed among the point cloud data frames to be displayed, and before determining first marking frame information corresponding to the target to be marked based on the first selection operation, the method further comprises:
after detecting a modification operation on a pre-labeling frame corresponding to point cloud data corresponding to a first pre-labeling target in a displayed point cloud data frame to be displayed, modifying the pre-labeling frame corresponding to the point cloud data corresponding to the first pre-labeling target based on the modification operation, wherein the modification operation comprises the following steps: at least one type of operation among delete, split, and merge.
8. The method of any one of claims 1-7, wherein the step of displaying the frame of point cloud data to be displayed comprises:
and displaying the point cloud data frame to be displayed in a two-dimensional overlooking angle.
9. The method of claim 8, wherein after the step of determining the first annotation box information corresponding to the target to be annotated based on the first selected operation, the method further comprises:
displaying the target to be marked and a first marking frame corresponding to the first marking frame information in a preset three-dimensional space display form; and/or
Displaying the target to be marked and a first marking frame corresponding to the first marking frame information at a preset two-dimensional non-overlooking angle; and/or
Obtaining a two-dimensional image corresponding to the current display frame; based on first labeling frame information corresponding to the target to be labeled, projecting a first labeling frame corresponding to the first labeling frame information to a two-dimensional image corresponding to a current display frame to obtain a projection frame corresponding to the first labeling frame information corresponding to the target to be labeled; displaying a two-dimensional image corresponding to a current display frame and a projection frame corresponding to first labeling frame information corresponding to the target to be labeled, wherein the two-dimensional image comprises the target to be labeled.
10. An apparatus for annotating point cloud data, the apparatus comprising:
a first obtaining module, configured to obtain point cloud data frames to be annotated and the acquisition-device pose information corresponding to each point cloud data frame to be annotated;
a processing and display module, configured to perform preset display processing on the point cloud data frames to be annotated to obtain point cloud data frames to be displayed, and to display the point cloud data frames to be displayed;
a first determining module, configured to, after detecting a first selection operation triggered on point cloud data corresponding to a target to be annotated in the currently displayed frame among the point cloud data frames to be displayed, determine first annotation box information corresponding to the target to be annotated based on the first selection operation;
a second determining module, configured to, after superposition frame count information is obtained, determine from the point cloud data frames to be annotated that number of frames after or before the current display frame as the current superposition frames;
a superposition display module, configured to superpose each current superposition frame on the current display frame, based on the acquisition-device pose information corresponding to the current display frame and to each current superposition frame, so as to display motion trajectory information corresponding to the target to be annotated;
a third determining module, configured to, after detecting a second selection operation triggered on point cloud data corresponding to the target to be annotated in a target frame among the current superposition frames, determine second annotation box information corresponding to the target to be annotated in the target frame based on the second selection operation;
a fourth determining module, configured to determine, based on the first annotation box information and the second annotation box information, third annotation box information corresponding to the target to be annotated in each current superposition frame lying between the current display frame and the target frame.
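Two of these modules lend themselves to short sketches: the superposition display module must bring each superposition frame into the current display frame's coordinate system using the two acquisition-device poses, and the fourth determining module must fill in annotation boxes for the frames lying between the two user-annotated ones. The patent does not fix either computation; the Python below shows one plausible reading, with linear interpolation chosen for the in-between boxes and all names illustrative.

import numpy as np

def to_current_frame(points, pose_overlay, pose_current):
    """Transform a superposition frame's points into the current display
    frame, given 4x4 world-from-sensor poses of the acquisition device, so
    that successive frames can be superposed to reveal the target's motion."""
    relative = np.linalg.inv(pose_current) @ pose_overlay
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (relative @ homo.T).T[:, :3]

def interpolate_boxes(box_a, box_b, n_between):
    """Linearly interpolate center, size and yaw between the first
    annotation box (box_a) and the second (box_b) to obtain third
    annotation box information for the n_between intermediate frames."""
    out = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        dyaw = (box_b["yaw"] - box_a["yaw"] + np.pi) % (2 * np.pi) - np.pi
        out.append({
            "center": (1 - t) * box_a["center"] + t * box_b["center"],
            "size": (1 - t) * box_a["size"] + t * box_b["size"],
            "yaw": box_a["yaw"] + t * dyaw,  # shortest-arc yaw interpolation
        })
    return out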
CN202010369838.3A 2020-04-30 2020-04-30 Point cloud data labeling method and device Active CN113592897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010369838.3A CN113592897B (en) 2020-04-30 2020-04-30 Point cloud data labeling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010369838.3A CN113592897B (en) 2020-04-30 2020-04-30 Point cloud data labeling method and device

Publications (2)

Publication Number Publication Date
CN113592897A (en) 2021-11-02
CN113592897B (en) 2024-03-29

Family

ID=78237762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010369838.3A Active CN113592897B (en) 2020-04-30 2020-04-30 Point cloud data labeling method and device

Country Status (1)

Country Link
CN (1) CN113592897B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154560A (en) * 2018-01-25 2018-06-12 北京小马慧行科技有限公司 Laser point cloud mask method, device and readable storage medium storing program for executing
CN108280886A (en) * 2018-01-25 2018-07-13 北京小马智行科技有限公司 Laser point cloud mask method, device and readable storage medium storing program for executing
WO2020052540A1 (en) * 2018-09-11 2020-03-19 腾讯科技(深圳)有限公司 Object labeling method and apparatus, movement control method and apparatus, device, and storage medium
WO2020064955A1 (en) * 2018-09-26 2020-04-02 Five AI Limited Structure annotation
CN110176078A (en) * 2019-05-26 2019-08-27 初速度(苏州)科技有限公司 A kind of mask method and device of training set data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Du Wanhe; Yang Jingdong; Yang Jinghui: "Estimation of Initial Ground-Plane Coefficients Based on Point Clouds", Electronic Science and Technology (电子科技), no. 09, 15 September 2015 (2015-09-15) *
Jiang Wenting; Gong Xiaojin; Liu Jilin: "Dense Semantic Mapping of Large-Scale Scenes Based on Incremental Computation", Journal of Zhejiang University (Engineering Science) (浙江大学学报(工学版)), no. 02, 15 February 2016 (2016-02-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114792343A (en) * 2022-06-21 2022-07-26 阿里巴巴达摩院(杭州)科技有限公司 Calibration method of image acquisition equipment, and method and device for acquiring image data
CN114792343B (en) * 2022-06-21 2022-09-30 阿里巴巴达摩院(杭州)科技有限公司 Calibration method of image acquisition equipment, method and device for acquiring image data

Also Published As

Publication number Publication date
CN113592897B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
Braun et al. Combining inverse photogrammetry and BIM for automated labeling of construction site images for machine learning
CN108694882B (en) Method, device and equipment for labeling map
US11978243B2 (en) System and method using augmented reality for efficient collection of training data for machine learning
JP6016268B2 (en) Field work support device, method and program
US10393515B2 (en) Three-dimensional scanner and measurement assistance processing method for same
CN108053473A (en) A kind of processing method of interior three-dimensional modeling data
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
US10949983B2 (en) Image processing apparatus, image processing system, image processing method, and computer-readable recording medium
Zollmann et al. Interactive 4D overview and detail visualization in augmented reality
CN112549034B (en) Robot task deployment method, system, equipment and storage medium
CN111192331A (en) External parameter calibration method and device for laser radar and camera
CN102722349B (en) A kind of image processing method based on Geographic Information System and system
US9167290B2 (en) City scene video sharing on digital maps
CN107784038A (en) A kind of mask method of sensing data
JP4242529B2 (en) Related information presentation device and related information presentation method
US20180020203A1 (en) Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium
Stanimirovic et al. [Poster] A Mobile Augmented reality system to assist auto mechanics
CN115424265A (en) Point cloud semantic segmentation and labeling method and system
CN113592897A (en) Point cloud data labeling method and device
CN113838193A (en) Data processing method and device, computer equipment and storage medium
CN112017202B (en) Point cloud labeling method, device and system
CN116978010A (en) Image labeling method and device, storage medium and electronic equipment
JP7334460B2 (en) Work support device and work support method
CN112948605A (en) Point cloud data labeling method, device, equipment and readable storage medium
CN114092753A (en) Method and device for tracking and labeling objects in multi-frame 3D point cloud data and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211125

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

GR01 Patent grant