CN111666137B - Data annotation method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111666137B
CN111666137B
Authority
CN
China
Prior art keywords
point cloud
state
cloud data
frame
task
Prior art date
Legal status
Active
Application number
CN202010338827.9A
Other languages
Chinese (zh)
Other versions
CN111666137A
Inventor
赵宇奇
陈坤杰
韩旭
Current Assignee
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202010338827.9A
Publication of CN111666137A
Application granted
Publication of CN111666137B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

The application relates to a data annotation method and device, computer equipment and a storage medium. The method comprises the following steps: dividing the point cloud data to be labeled into a plurality of tasks; labeling the tasks separately; obtaining the labeling result of each frame in each task; and concatenating the labeling results of the tasks according to the relationship between each frame and its task to obtain the target labeling result of the point cloud data to be labeled. This shortens labeling time overall and improves labeling quality.

Description

Data annotation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of deep learning technologies, and in particular, to a data annotation method, apparatus, computer device, and storage medium.
Background
In recent years, deep learning technology has developed rapidly, and automatic driving, as an important branch of deep learning, attracts more and more researchers. Automatic driving tolerates a very low algorithm error rate, so before a deep learning model is used in real road tests, both the real-time performance of its processing and its high precision must be ensured. Establishing a complete point cloud evaluation dataset is therefore of great significance for deep learning models.
Generally, when a point cloud evaluation dataset is constructed, the point cloud data needs to be labeled; for example, labeled point cloud data is often used in the training and verification stages of a deep learning model. Currently, a common labeling method is to have one annotator label the point cloud data of a whole task. For example, in the Robot Operating System (ROS), the point cloud data of a road test environment in one rosbag unit is treated as one task and labeled by one annotator to construct the point cloud evaluation dataset.
However, the current labeling method suffers from low labeling efficiency and uneven labeling quality.
Disclosure of Invention
In view of the above, it is necessary to provide a data annotation method, apparatus, computer device and storage medium capable of improving annotation efficiency and annotation quality.
A method of data annotation, the method comprising:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
and connecting the labeling results of the tasks in series according to the relationship between the frames and the tasks to obtain a target labeling result of the point cloud data to be labeled.
In one embodiment, the step of concatenating the labeling results of the tasks according to the relationship between the frames and the tasks to obtain the target labeling result of the point cloud data to be labeled includes:
for each frame of point cloud data in each task, determining the prediction state of each tracked object in the next frame according to the current state of each tracked object in the point cloud data of the current frame;
judging whether the point cloud data of the next frame is the data in the current task or not according to the relation between each frame and the task, and acquiring a judgment result;
updating a tracking list according to the prediction state of each tracked object in the next frame and the judgment result to obtain a target labeling result of the point cloud data to be labeled; the tracking list includes an observed state of the tracker for each tracked object.
In one embodiment, the updating the tracking list according to the predicted state of each tracked object in the next frame and the determination result includes:
if the point cloud data of the next frame belongs to the data in the current task, matching the tracked objects in the point cloud data of the current frame with the labeled objects in the point cloud data of the next frame according to the task identification of each tracked object, and determining the target labeled objects which are successfully matched with each tracked object in the point cloud data of the current frame; the target labeling object is a labeling object which is the same as the task identifier of the tracking object in the point cloud data of the next frame;
and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
In one embodiment, the updating the tracking list according to the predicted state of each tracked object in the next frame and the determination result includes:
if the point cloud data of the next frame does not belong to the point cloud data in the current task, pairwise matching is carried out on the prediction state of each tracked object in the next frame and the labeling state of each labeled object, and the tracked object and the target labeled object which are successfully matched are obtained; the marked object is a marked object in the first frame of point cloud data in the new task;
and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
In one embodiment, pairwise pairing the predicted state of each tracked object in the next frame with the labeled state of each labeled object to obtain a successfully paired tracked object and target labeled object includes:
calculating the characteristic distance between the predicted state of each tracked object in the next frame and the labeling state of each labeled object;
and determining the tracking object and the marked object corresponding to the characteristic distance smaller than the preset threshold value as the successfully paired tracking object and target marked object.
In one embodiment, the updating, in the tracking list, the observation state of the tracker of the successfully paired target annotation object by using the predicted state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object includes:
correcting the prediction state of the tracking object in the next frame according to the labeling state and the preset residual error of the successfully paired target labeling object to obtain a correction state; the preset residual error represents an error between a prediction state and a labeling state of the tracked object;
and updating the observation state of the tracker of the successfully paired target labeling object in the tracking list according to the correction state of each tracking object.
In one embodiment, the method further comprises:
determining each tracked object in a first frame of point cloud data in the point cloud data to be labeled;
initializing a tracker and a global tracking identifier of each tracked object;
and establishing the tracking list according to the tracker of the tracked object and the global tracking identifier.
In one embodiment, the updating the tracking list further comprises:
and after the tracked object in the point cloud data of the current frame is successfully matched with the labeled object in the point cloud data of the next frame, assigning the global tracking identification of the tracked object in the point cloud data of the current frame to the successfully matched labeled object.
In one embodiment, the method further comprises:
if a new tracking object appears in the point cloud data of the next frame, adding a tracker of the new tracking object into the tracking list; and/or,
and if no tracking object appears in the point cloud data of the preset number of continuous frames, deleting the tracker of the tracking object which does not appear from the tracking list.
In one embodiment, the method further comprises:
acquiring correction information aiming at a target tracking object in a link frame; the link frame is a first frame in a task to be corrected;
and correcting the identifiers of the target tracking objects in all frames after the link frame in the target labeling result according to the correction information so as to enable the identifiers of the same tracking objects in each task to be consistent.
A data annotation device, comprising:
the system comprises a splitting module, a storage module and a processing module, wherein the splitting module is used for splitting point cloud data to be marked into a plurality of tasks and marking the tasks respectively;
the acquisition module is used for acquiring the labeling result of each frame in each task;
and the serial connection module is used for serially connecting the labeling results of the tasks according to the relation between the frames and the tasks to obtain the target labeling result of the point cloud data to be labeled.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
and connecting the labeling results of the tasks in series according to the relationship between the frames and the tasks to obtain a target labeling result of the point cloud data to be labeled.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
and connecting the labeling results of the tasks in series according to the relationship between the frames and the tasks to obtain a target labeling result of the point cloud data to be labeled.
The data labeling method, device, computer equipment and storage medium divide the point cloud data to be labeled into a plurality of tasks, label the tasks separately to obtain the labeling result of each frame in each task, and concatenate the labeling results of the tasks according to the relationship between each frame and its task to obtain the target labeling result of the point cloud data to be labeled. Because the point cloud data to be labeled is divided into a plurality of tasks that are labeled separately, the data volume of each task is very small compared with that of the complete point cloud data, so the time for labeling each task is greatly reduced, and labeling the tasks simultaneously significantly improves labeling efficiency. In particular, since the time an annotator needs to label one task is short, the annotator can stay continuously focused, improving labeling quality. The labeling results of the tasks are then concatenated to obtain the complete labeling result of the point cloud data to be labeled, so that labeling time is shortened overall and labeling quality is improved.
Drawings
FIG. 1 is a diagram of an application environment of a data annotation process in one embodiment;
FIG. 2 is a flow chart illustrating a data annotation process according to an embodiment;
FIG. 3 is a flow chart diagram illustrating a method of data annotation in one embodiment;
FIG. 4 is a flow chart illustrating a data annotation process according to another embodiment;
FIG. 5 is a schematic diagram showing a flow chart of a data annotation process in another embodiment;
FIG. 6 is a schematic diagram showing a flow chart of a data annotation process in another embodiment;
FIG. 7 is a schematic diagram showing a flowchart of a data annotation process in another embodiment;
FIG. 8 is a flowchart of a data annotation process, according to an embodiment;
FIG. 9 is a block diagram showing the structure of a data annotation device according to an embodiment;
FIG. 10 is a block diagram showing the construction of a data annotation device according to another embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The data annotation method provided by the application can be applied to the application environment shown in fig. 1. The method can be applied to a computer device, which can be a server; its internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for the operation of the operating system and computer program in the nonvolatile storage medium. The database of the computer device is used for storing point cloud data. The network interface of the computer device is used for communicating with external terminals through a network connection. The computer program is executed by the processor to implement a data annotation method.
In one embodiment, as shown in fig. 2, a data annotation method is provided, which is described by taking the method as an example applied to the computer device in fig. 1, and includes the following steps:
s201, dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively.
The point cloud data to be marked can be point cloud data acquired by a laser radar in real time or point cloud data acquired by the laser radar and stored in computer equipment or other storage equipment. The point cloud data to be marked can be road test environment data, traffic data and the like of the vehicle.
In this embodiment, when the computer device obtains a group of point cloud data to be labeled, the group of point cloud data to be labeled may be divided, and a plurality of divided tasks are uploaded to the labeling system and labeled by different labeling personnel. When point cloud data to be marked are divided, the point cloud data to be marked can be divided into a plurality of tasks with the same data size according to a preset data size unit, or the point cloud data to be marked can be divided into a plurality of tasks according to the correlation characteristics among the point cloud data; or, the point cloud data to be labeled may be divided into a plurality of tasks according to the collection time of the point cloud data according to a preset time unit, which is not limited in the embodiment of the present application. After point cloud data to be marked are divided into a plurality of tasks, the tasks are uploaded to a marking system, and a plurality of markers mark the tasks, wherein one marker can mark one task or one marker can mark a plurality of tasks; alternatively, the labeling may be performed automatically, for example, by labeling each task with one or more neural networks.
Take the ROS framework, widely adopted in the current autonomous driving industry, as an example. Since ROS itself is based on a message subscription and publication mechanism, the entire autonomous driving system can be divided into a plurality of independent functional modules according to the required functions during development. Each module only needs to subscribe to the messages it needs, decide how to process them, and publish its own processing results, so each functional module can be debugged and verified independently offline. When all modules are deployed on the vehicle, the functional modules can be quickly integrated on the autonomous vehicle through the ROS framework. For example, an autonomous vehicle running on a real road depends on a plurality of algorithm modules, which may include an environment sensing module, a path planning module, a vehicle control module, and so on; these modules depend on ROS for mutual communication, each module acquiring information from upstream and downstream modules by publishing and subscribing to topics, and real road test environment data is stored in rosbag units, where the road test data may be point cloud data acquired by a lidar. In this embodiment, the point cloud data of one rosbag is divided into a plurality of tasks, each with about 50 frames of laser point cloud data, so that an annotator can stay focused while annotating one task.
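As a concrete illustration of step S201, the following Python sketch splits a sequence of frames into fixed-size tasks and records the frame-task relation used later for concatenation; the 50-frame task size comes from the embodiment above, while the function and field names are assumptions for illustration only:

```python
from typing import List

TASK_SIZE = 50  # about one annotator's span of sustained attention, per the embodiment above

def split_into_tasks(frames: List[dict], task_size: int = TASK_SIZE) -> List[dict]:
    """Split point cloud frames into fixed-size tasks, recording each task's
    start and end frame so the labeling results can be concatenated later."""
    tasks = []
    for task_id, start in enumerate(range(0, len(frames), task_size)):
        tasks.append({
            "task_id": task_id,
            "start_frame": start,                                   # inclusive
            "end_frame": min(start + task_size, len(frames)) - 1,   # inclusive
            "frames": frames[start:start + task_size],
        })
    return tasks
```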
S202, obtaining the labeling result of each frame in each task.
The labeling result may include feature information of some or all obstacles, for example the position, size, orientation, obstacle type, and trackID of an obstacle, where the trackID is an identifier unique to a certain obstacle within the task, and the trackIDs of the same object in different data frames are consistent. For example, if a vehicle appears in frames 4-21 and 30-40, its ID in all of these frames must be kept consistent, so that information such as the continuous speed, acceleration, and motion trend of the object across preceding and following frames can be better analyzed.
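A hypothetical record for one labeled obstacle in one frame might look as follows; the field list mirrors the features named above, but the exact layout is an assumption, not the patent's data format:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LabeledObstacle:
    track_id: int                         # unique within one task; consistent across its frames
    position: Tuple[float, float, float]  # x, y, z of the obstacle in the lidar frame
    size: Tuple[float, float, float]      # length, width, height
    heading: float                        # orientation (yaw) in radians
    obstacle_type: str                    # e.g. "vehicle", "pedestrian"
```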
In this embodiment, the computer device may obtain the labeling result of each data frame of each task in real time, or obtain the complete labeling result of one task after all data frames of one task are labeled.
S203, the labeling results of the tasks are connected in series according to the relation between the frames and the tasks, and the target labeling result of the point cloud data to be labeled is obtained.
The computer equipment can establish a corresponding relation between each frame and the task while dividing the task, wherein the relation between each frame and the task can comprise a corresponding relation between the identifier of each frame and the identifier of the task; alternatively, the relationship between each frame and each task may include a start frame position and an end frame position corresponding to each task, which is not limited in the embodiment of the present application.
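Either form of the relation reduces to a lookup from frame index to task; a minimal sketch, assuming the task records produced by the earlier splitting sketch:

```python
def build_frame_task_map(tasks):
    """Map each absolute frame index to the id of the task that contains it."""
    return {f: t["task_id"]
            for t in tasks
            for f in range(t["start_frame"], t["end_frame"] + 1)}

def same_task(frame_task_map, frame_a: int, frame_b: int) -> bool:
    # used in step S302 to judge whether the next frame is data in the current task
    return frame_task_map[frame_a] == frame_task_map[frame_b]
```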
In this embodiment, because a single task is too short, it is not very reliable to use it directly as a lidar point cloud evaluation dataset for model evaluation; model evaluation therefore usually selects the labeling data of a complete rosbag as the lidar point cloud evaluation dataset, and the tasks of the rosbag then need to be concatenated to form a complete rosbag labeling result. Therefore, after the labeling results of the tasks are obtained, they need to be concatenated according to the relationship between each frame and its task to obtain the target labeling result of the point cloud data to be labeled, where the target labeling result includes the labeling result of each frame in the point cloud data to be labeled. For example, the labeling results of the tasks may be concatenated according to the time order of the frames to obtain the complete labeling result. Optionally, a tracking algorithm may further be used during concatenation to track each object in the point cloud data to be labeled and correct its state, and the identifiers of objects in different tasks may also be corrected to ensure that the identifiers of the same object are consistent across tasks, but this embodiment of the application is not limited thereto.
The data labeling method provided in this embodiment of the application divides the point cloud data to be labeled into a plurality of tasks, labels the tasks separately to obtain the labeling result of each frame in each task, and concatenates the labeling results of the tasks according to the relationship between each frame and its task to obtain the target labeling result of the point cloud data to be labeled. Because the point cloud data to be labeled is divided into tasks that are labeled separately, the data volume of each task is very small compared with that of the complete point cloud data, so the time for labeling each task is greatly reduced, and labeling the tasks simultaneously significantly improves labeling efficiency. In particular, since the time an annotator needs to label one task is short, the annotator can label continuously and intensively, improving labeling quality. The labeling results of the tasks are then concatenated to obtain the complete labeling result of the point cloud data to be labeled, so that labeling time is shortened overall and labeling quality is improved.
Fig. 3 is a schematic view of a flow chart of a data annotation method in an embodiment, which relates to a specific implementation manner of concatenating annotation results of each task, and as shown in fig. 3, step S203 may include the following steps:
s301, determining the prediction state of each tracked object in the next frame according to the current state of each tracked object in the point cloud data of the current frame for each frame of point cloud data in each task.
In this embodiment, the current state of each tracked object in the point cloud data of the current frame may be its labeled state, or may be obtained from the object's state in the previous frame and its predicted state for the current frame. For the first frame of the point cloud data to be labeled, the state of the tracked object in the first frame's labeling result serves as the current state; for each subsequent frame, the current state of each tracked object is obtained from its state in the previous frame and the prediction for the current frame. Once the current state of each tracked object in the current frame is obtained, the state of the tracked object in the next frame can be predicted from it.
For example, suppose $x_t = [p_t, v_t]^T$ represents the state of the tracked object at the current time $t$, where $p_t$ is the position of the tracked object at time $t$ and $v_t$ is its velocity at time $t$. Given the state at time $t$, the state at the next time $t+1$ can be predicted from equations (1) and (2):

$p_{t+1} = p_t + v_t \Delta t + \frac{1}{2} u_{t+1} \Delta t^2$  equation (1);

$v_{t+1} = v_t + u_{t+1} \Delta t$  equation (2);

where $\Delta t$ is the time interval from $t$ to $t+1$ and $u_{t+1}$ is the acceleration.
According to the above formulas, the output variables are all linear combinations of the input variables. Since the relationship is linear, it can be abstracted into a matrix calculation for convenience, as in equation (3):

$\begin{bmatrix} p_{t+1} \\ v_{t+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p_t \\ v_t \end{bmatrix} + \begin{bmatrix} \Delta t^2 / 2 \\ \Delta t \end{bmatrix} u_{t+1}$  equation (3).
in the embodiment of the present application, kalman filtering may be further used to track the target, for example, the first formula of the kalman filtering tracking process, the state prediction formula (4) is used:
Figure BDA0002467787860000094
the above formula is Ft+1I.e. a state transition matrix, representing how the state of the tracked object of the next frame is inferred from the current frame, Bt+1To control the matrix, a control quantity u is representedt+1How to act on the state of the next frame, ut+1Typically acceleration.
Figure BDA0002467787860000095
Indicating that the state is an estimated value rather than a true value, and x is superscripted on the right "-" indicating that the state was inferred from the previous state rather than the best estimate after correction, i.e., the predicted state of the tracked object in the next frame
Figure BDA0002467787860000096
That is, the system utilizes the state of the current frame
Figure BDA0002467787860000097
And (4) predicting.
Since the above formula estimates true values, the influence of noise should be considered. In this embodiment the noise is assumed to follow a zero-mean Gaussian distribution, $Noise \sim Gaussian(0, \sigma)$. For one-dimensional data estimation, only the variance needs to be considered when introducing the influence of noise; once the dimensionality increases, however, a covariance matrix must be introduced to capture the degree to which each dimension deviates from its mean.
In the present embodiment, the uncertainty at each time in the system is given by the covariance matrix $\Sigma$, and this uncertainty is propagated between time instants: each frame conveys not only the state (e.g., position or velocity) of the tracked object in the current frame, but also the uncertainty of that state. The propagation of the uncertainty is expressed through the state transition matrix; since the prediction model itself is not absolutely accurate, a covariance matrix $Q$ is also introduced, and equation (5) expresses how the uncertainty propagates from frame to frame:

$\Sigma^-_{t+1} = F_{t+1} \Sigma_t F_{t+1}^T + Q$  equation (5).
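A minimal Python sketch of this prediction step, combining equations (4) and (5) under a one-dimensional constant-acceleration model with state $x = [p, v]^T$; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def kf_predict(x, P, u, dt, Q):
    """Propagate the state estimate x and its covariance P to the next frame."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])         # state transition matrix F
    B = np.array([0.5 * dt ** 2, dt])  # control vector B for the scalar acceleration u
    x_pred = F @ x + B * u             # equation (4): predicted state
    P_pred = F @ P @ F.T + Q           # equation (5): propagated uncertainty
    return x_pred, P_pred
```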
s302, judging whether the point cloud data of the next frame is the data in the current task or not according to the relation between each frame and the task, and obtaining a judgment result.
In this embodiment, whether the point cloud data of the next frame is data in the current task is judged according to the relationship between each frame and its task. For example, if the task identifier of the point cloud data of the next frame is the same as the task identifier of the point cloud data of the current frame, the point cloud data of the next frame is data in the current task; alternatively, the point cloud data of the next frame may be determined to be data in the current task according to the correspondence between the next frame and the task.
S303, updating the tracking list according to the prediction state and the judgment result of each tracked object in the next frame to obtain a target labeling result of the point cloud data to be labeled; the tracking list includes the observed state of each tracked object's tracker.
In this embodiment, the tracking list may be updated according to the predicted state of each tracked object in the next frame and the judgment result, where updating the tracking list may mean updating the observation states of the trackers in the list. Different judgment results lead to different methods of updating the tracking list.
Optionally, the method for generating the tracking list includes: determining each tracked object in a first frame of point cloud data in point cloud data to be marked; initializing a tracker and a global tracking identifier of each tracked object; and establishing a tracking list according to the tracker of the tracked object and the global tracking identifier.
In this embodiment, the first frame of a group of point cloud data to be labeled is initialized: a tracker is newly created for each tracked object in the first frame and added to the tracking list, and the initial predicted state of each tracker may be the labeled state of the corresponding tracked object in the labeling result of the first frame. A global tracking identifier (globalTrackId) may also be set for each tracked object; the globalTrackId may be the task identifier of the tracked object in the first frame, or a newly assigned identifier, which this embodiment of the application does not limit.
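A sketch of this initialization, reusing the labeled-obstacle record from earlier; the Tracker class is a hypothetical stand-in for whatever tracker structure an implementation uses:

```python
class Tracker:
    def __init__(self, global_track_id, state):
        self.global_track_id = global_track_id  # consistent across the whole rosbag
        self.state = state                      # observation state, seeded from the first-frame label
        self.missed_frames = 0                  # consecutive frames without a successful pairing

def init_tracking_list(first_frame_labels):
    """One tracker per labeled object in the first frame, keyed by globalTrackId
    (here seeded from the first frame's task identifiers, as described above)."""
    return {obj.track_id: Tracker(global_track_id=obj.track_id, state=obj)
            for obj in first_frame_labels}
```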
Further, updating the tracking list further comprises: after the tracked object in the point cloud data of the current frame is successfully matched with the labeled object in the point cloud data of the next frame, the global tracking identification of the tracked object in the point cloud data of the current frame is assigned to the successfully matched labeled object.
In this embodiment, after the tracked object in the point cloud data of the current frame is successfully paired with the labeled object in the point cloud data of the next frame, i.e., when a certain tracker is successfully paired with a certain object in the next frame, the two are the same object and share the same globalTrackId within the rosbag, so the tracker's globalTrackId needs to be assigned to the labeled object it was paired with in the next frame. Replacing the task identifiers of the successfully paired labeled objects with the global tracking identifiers of the tracked objects keeps the identifier of the same tracked object consistent across all tasks, making the complete labeling result of the point cloud data to be labeled more reliable.
Optionally, the trackers in the tracking list may also be updated according to the changing conditions of the tracked object. The method further comprises the following steps: if a new tracking object appears in the point cloud data of the next frame, adding a tracker of the new tracking object into the tracking list; and/or deleting the tracker of the tracking object which does not appear from the tracking list if the tracking object does not appear in the point cloud data of the preset number of continuous frames.
In this embodiment, the tracker of each tracked object is updated using the state of the object it is successfully paired with in the next frame. For a newly appeared tracked object, a tracker needs to be initialized and added to the tracking list; and if a certain object disappears for more than 15 consecutive frames, its tracker needs to be deleted from the tracking list, thereby ensuring the accuracy of the tracking list.
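The maintenance described here might be sketched as follows, with the 15-frame disappearance threshold taken from the embodiment above; how fresh globalTrackIds are allocated for new objects is an assumption:

```python
MAX_MISSED_FRAMES = 15  # per the embodiment above

def maintain_tracking_list(tracking_list, matched_ids, new_objects, next_global_id):
    """Age unmatched trackers, drop long-disappeared ones, add trackers for new objects."""
    for gid in list(tracking_list):
        if gid in matched_ids:
            tracking_list[gid].missed_frames = 0
        else:
            tracking_list[gid].missed_frames += 1
            if tracking_list[gid].missed_frames > MAX_MISSED_FRAMES:
                del tracking_list[gid]  # object gone for more than 15 frames
    for obj in new_objects:             # newly appeared tracked objects
        tracking_list[next_global_id] = Tracker(global_track_id=next_global_id, state=obj)
        next_global_id += 1
    return next_global_id
```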
In one embodiment, as shown in fig. 4, this embodiment relates to a specific implementation of updating the tracking list when the point cloud data of the next frame belongs to the data in the current task. As shown in fig. 4, step S303 may include the steps of:
s401, if the point cloud data of the next frame belongs to the data in the current task, matching the tracked object in the point cloud data of the current frame with the labeled object in the point cloud data of the next frame according to the task identification of each tracked object, and determining a target labeled object successfully matched with each tracked object in the point cloud data of the current frame; and the target labeling object is the labeling object which is the same as the task identifier of the tracking object in the point cloud data of the next frame.
In this embodiment, when the point cloud data of the next frame belongs to the data in the current task, since the task identifiers of the same object within a task are the same, objects in the current frame are directly paired with objects in the next frame that carry the same task identifier, determining the target labeled object in the next frame whose task identifier matches that of the tracked object in the current frame.
For example, assume the whole rosbag has 300 frames divided every 50 frames, so that each 50 frames form one task; one task can be labeled by one annotator, and the ID (identifier) of the same tracked object within each task is consistent. Assume processing has reached the 6th frame (relative to the whole rosbag). The 6th frame is a data frame in the first task, and since every 50 frames form one task, the next frame (the 7th) and the current frame (the 6th) both belong to the first task according to the task division. Tracked objects with the same ID in the 6th and 7th frames are therefore paired directly: because the same task is labeled by the same annotator, the IDs of the same tracked object in different frames of the same task are identical, so the object with the same ID in the 7th frame is the target labeled object. For example, if the ID of tracked object a in the 6th frame is 2, then labeled object b with ID 2 in the 7th frame is the target labeled object successfully paired with tracked object a.
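Because IDs are consistent within one task, the pairing in step S401 reduces to a direct ID lookup; a minimal sketch under that assumption:

```python
def pair_within_task(current_tracked_ids, next_frame_labels):
    """Pair tracked objects with next-frame labels that carry the same task ID."""
    labels_by_id = {obj.track_id: obj for obj in next_frame_labels}
    # e.g. tracked object a with ID 2 pairs with labeled object b with ID 2, as above
    return {tid: labels_by_id[tid]
            for tid in current_tracked_ids
            if tid in labels_by_id}
```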
S402, updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracking object in the next frame and the annotation state of the successfully paired target annotation object.
And the marking state is the marking state of the target marking object obtained from the marking result of the point cloud data of the next frame.
In this embodiment, the computer device updates the observation state of the tracker of each tracked object according to the predicted state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object. For example, the annotation state of the successfully paired target annotation object may be corrected according to the predicted state of each tracked object in the next frame, and the observation state of the tracker of the target annotation object may be updated with the corrected state.
In the data labeling method provided in this embodiment of the application, if the point cloud data of the next frame belongs to the data in the current task, the tracked objects in the point cloud data of the current frame are paired with the labeled objects in the point cloud data of the next frame according to the task identifier of each tracked object, the target labeled object successfully paired with each tracked object in the current frame is determined, and the observation state of the tracker of the successfully paired target labeled object is updated in the tracking list using the predicted state of each tracked object in the next frame and the labeled state of the successfully paired target labeled object. Because the task identifier of the same object within one task is the same, objects of the current frame with the same task identifier as those of the next frame are paired directly, which realizes rapid tracking of target objects within the same task and improves labeling efficiency.
In another embodiment, as shown in fig. 5, this embodiment relates to a specific implementation of updating the tracking list when the point cloud data of the next frame does not belong to the data in the current task. As shown in fig. 5, step S303 may include the steps of:
s501, if the point cloud data of the next frame does not belong to the point cloud data in the current task, pairwise matching is carried out on the prediction state of each tracked object in the next frame and the labeling state of each labeled object, and the tracked object and the target labeled object which are successfully matched are obtained; and the marked object is a marked object in the first frame of point cloud data in the new task.
In this embodiment, if the point cloud data of the next frame does not belong to the point cloud data in the current task, pairwise pairing is performed between the predicted state of each tracked object in the current frame and the labeling state of the labeled object in the next frame, so as to obtain the successfully paired tracked object and target labeled object.
Optionally, pairwise matching the predicted state of each tracked object in the next frame with the labeling state of each labeled object to obtain a successfully-matched tracked object and a successfully-matched target labeled object, including: calculating the characteristic distance between the prediction state of each tracked object in the next frame and the labeling state of each labeled object; and determining the tracking object and the marked object corresponding to the characteristic distance smaller than the preset threshold value as the successfully paired tracking object and target marked object.
For example, assume again that the whole rosbag has 300 frames divided every 50 frames, each 50 frames forming one task labeled by one annotator, with the ID of the same tracked object consistent within each task. Assume the 50th frame (relative to the whole rosbag) has been processed. The 50th frame is a data frame in one task, and since every 50 frames form one task, the next frame (the 51st) and the current frame (the 50th) belong to different tasks according to the task division, so the corresponding target labeled object cannot be found directly by ID. Instead, the predicted states of all tracked objects in the 51st frame are first computed from their current states in the 50th frame to obtain a predicted-state list; then the feature distances between each predicted state in the list and the labeled state of each labeled object in the 51st frame are calculated pairwise, the global maximum matching is obtained with the Hungarian matching algorithm, and each predicted state is matched to its corresponding target labeled object.
In this embodiment, the preset threshold may be set according to actual requirements, for example to 0.2, 0.3, and so on. The feature distance between the predicted state of each tracked object in the next frame and the labeled state of each labeled object in the first frame of the new task is calculated; if a feature distance is smaller than the preset threshold, the corresponding tracked object and labeled object are successfully paired. Optionally, after the feature distances are calculated, maximum matching can be performed with the Hungarian bipartite matching algorithm to obtain the successfully paired tracked objects and target labeled objects.
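A sketch of this cross-task pairing using SciPy's Hungarian solver; the feature-distance function is left abstract and the 0.2 threshold is one of the example values above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

DIST_THRESHOLD = 0.2  # example preset threshold from above

def pair_across_tasks(predicted_states, new_task_labels, feature_distance):
    """Pair predicted states with the new task's first-frame labels."""
    cost = np.array([[feature_distance(p, q) for q in new_task_labels]
                     for p in predicted_states])
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    # keep only pairs whose feature distance is below the preset threshold
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < DIST_THRESHOLD]
```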
S502, updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracking object in the next frame and the annotation state of the successfully paired target annotation object.
In this embodiment, the computer device updates the observation state of the tracker of each tracked object according to the predicted state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object. For example, the annotation state of the successfully paired target annotation object may be corrected according to the predicted state of each tracked object in the next frame, and the observation state of the tracker of the target annotation object may be updated with the corrected state.
In the data labeling method provided in this embodiment of the application, if the point cloud data of the next frame does not belong to the point cloud data in the current task, the predicted state of each tracked object in the next frame is paired pairwise with the labeled state of each labeled object to obtain the successfully paired tracked object and target labeled object, and the observation state of the tracker of the successfully paired target labeled object is updated in the tracking list using the predicted state of each tracked object in the next frame and the labeled state of the successfully paired target labeled object. Pairwise pairing of predicted states with labeled states realizes target tracking across different tasks, so that the labeling results of the tasks can be concatenated into a complete labeling result; and because the concatenation is performed automatically in this way, the concatenation efficiency is high.
Based on the embodiments of fig. 4 and fig. 5, as shown in fig. 6, the step S402 or S502 "updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by using the predicted state of each tracking object in the next frame and the annotation state of the successfully paired target annotation object" may include:
s601, correcting the prediction state of the tracking object in the next frame according to the labeling state and the preset residual error of the successfully matched target labeling object to obtain a correction state; the preset residual error represents an error between a predicted state and an annotated state of the tracked object.
In this embodiment, the observation state of the tracker is not the real state of the tracked object. Suppose the conversion from the real state of the tracked object to its observed state is $h(\cdot)$, also a linear function, so that $y(t) = h[x(t)] + v(t)$, where $v(t)$ represents the observation error; written in matrix form, $Y_t = H X_t + v$. The predicted state is then corrected according to the labeled state using equation (6):

$\hat{x}_t = \hat{x}^-_t + K_t (y_t - H \hat{x}^-_t)$  equation (6);

where $\hat{x}_t$ is the corrected state and $y_t$ is the labeled state of the tracked object in the next frame.

The predicted state $\hat{x}^-_t$ estimated from the state of the tracked object in the previous frame differs from the true labeled state. The term $(y_t - H \hat{x}^-_t)$ on the right side of the equation represents the residual between the actual observed value (the labeled state) and the estimated observed value (the predicted state); multiplying the residual by the coefficient $K$ corrects the estimated value (the predicted state). Here $K$ is the Kalman gain, a weighting matrix for the residual:

$K_t = \Sigma^-_t H^T (H \Sigma^-_t H^T + R)^{-1}$;

where $R$ is the observation covariance, representing the uncertainty of the observed value.

Finally, the noise distribution of the optimal estimate $\hat{x}_t$ can also be updated, using equation (7):

$\Sigma_t = (I - K_t H) \Sigma^-_t$  equation (7);

where $I$ is the identity matrix.
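A minimal Python sketch of this correction step, pairing with the earlier prediction sketch; H and R are the observation matrix and observation covariance, and the function name is an assumption:

```python
import numpy as np

def kf_update(x_pred, P_pred, y, H, R):
    """Correct the predicted state with the labeled (observed) state y."""
    residual = y - H @ x_pred                       # actual minus estimated observation
    S = H @ P_pred @ H.T + R                        # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain: weighting of the residual
    x_new = x_pred + K @ residual                   # equation (6): corrected state
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # equation (7): updated uncertainty
    return x_new, P_new
```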
And S602, updating the observation state of the tracker of the successfully paired target labeled object in the tracking list according to the correction state of each tracked object.
In this embodiment, the observation state of the tracker of the target tagged object is updated by using the correction state of the tracked object, for example, the observation state of the tracker of the target tagged object that is successfully paired in the tracking list may be replaced by using the correction state of each tracked object.
According to the data labeling method provided in this embodiment of the application, the predicted state of the tracked object in the next frame is corrected according to the labeled state of the successfully paired target labeled object and the preset residual to obtain the corrected state, and the observation state of the tracker of the successfully paired target labeled object is updated in the tracking list according to the corrected state of each tracked object. Since the preset residual represents the error between the predicted state and the labeled state of the tracked object, correcting the predicted state with the labeled state of the target labeled object and the preset residual makes the resulting observation state of the tracker more accurate, improving the labeling quality of the point cloud data.
On the basis of the foregoing embodiment, the target labeling result may be further modified, as shown in fig. 7, the method further includes:
S701, acquiring correction information for a target tracked object in a link frame; the link frame is the first frame in the task to be corrected.
In this embodiment, if there is an individual tracking error, it can also be corrected manually by the annotator. For example, assume the id of a certain vehicle in task 2 is #1 and its globalTrackId is #1, while its id in the following task 3 is #8; due to a tracking algorithm error, its globalTrackId is not correctly set to #1 but to #70. The annotator then only needs to change #70 to #1 in the link frame.
S702, according to the correction information, the marks of the target tracking objects in all the frames after the frame is connected in the target labeling result are corrected, so that the marks of the same tracking objects in each task are consistent.
In this embodiment, after the computer device detects the annotator's correction information, it automatically sets the globalTrackId of the objects with id #8 to #1 in task 3 and in all tasks after task 3. That is, the annotator only needs to correct the error at the link frame, and the computer device corrects all other frames in the subsequent tasks automatically.
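A sketch of how such a correction could be propagated (steps S701-S702); the result layout, a list of per-frame label lists carrying a global_track_id field, is an assumption:

```python
def propagate_correction(target_result, link_frame, wrong_gid, correct_gid):
    """Apply one manual fix at the link frame to every frame from there on,
    e.g. changing globalTrackId #70 back to #1 as in the example above."""
    for frame_labels in target_result[link_frame:]:
        for obj in frame_labels:
            if obj.global_track_id == wrong_gid:
                obj.global_track_id = correct_gid
```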
According to the data labeling method provided in this embodiment of the application, the correction information for the target tracked object in the link frame is acquired, and the identifiers of the target tracked object in all frames after the link frame are corrected in the target labeling result according to the correction information, so that the identifiers of the same tracked object in each task are consistent. Further correcting a wrong globalTrackId after the labeling results of the tasks are concatenated improves the quality of the labeled data; and since the annotator only needs to correct the error at the link frame while the computer device automatically corrects the other frames in the subsequent tasks, labeling efficiency is further improved.
Fig. 8 is a flowchart of a data annotation method according to an embodiment, and as shown in fig. 8, the method may include the following steps:
s801, dividing the point cloud data to be marked into a plurality of tasks, and marking the tasks respectively.
S802, obtaining the labeling result of each frame in each task.
And S803, initializing a tracker list.
S804, for each frame of point cloud data in each task, determining the prediction state of each tracked object in the next frame according to the current state of each tracked object in the point cloud data of the current frame.
S805, determining whether the point cloud data of the next frame is the data in the current task according to the relationship between each frame and the task, if yes, executing step 806, and if not, executing step 807.
And S806, according to the task identification of each tracked object, pairing the tracked object in the point cloud data of the current frame with the labeled object in the point cloud data of the next frame, determining the target labeled object successfully paired with each tracked object in the point cloud data of the current frame, and executing S808.
And S807, pairing the predicted state of each tracked object in the next frame with the labeling state of each labeled object to obtain the successfully paired tracked object and target labeled object.
And S808, updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object, and returning to execute the step S804.
According to the data labeling method provided in this embodiment of the application, the point cloud data to be labeled is divided into a plurality of tasks that are labeled separately. Because the data volume of each task is small compared with that of the complete point cloud data to be labeled, the time for labeling each task is greatly reduced, and labeling the tasks simultaneously significantly improves labeling efficiency. In particular, since the time an annotator needs to label one task is short, the annotator can stay continuously focused, improving labeling quality. The labeling results of the tasks are then concatenated to obtain the complete labeling result of the point cloud data to be labeled, so that labeling time is shortened overall and labeling quality is improved.
It should be understood that although the various steps in the flow charts of figs. 2-8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-8 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a data annotation device, including:
the splitting module 11 is configured to split point cloud data to be labeled into a plurality of tasks, and label the tasks respectively;
an obtaining module 12, configured to obtain a labeling result of each frame in each task;
and the concatenation module 13 is configured to concatenate the labeling results of the tasks according to the relationship between each frame and the task, so as to obtain a target labeling result of the point cloud data to be labeled.
In one embodiment, as shown in fig. 10, the concatenation module 13 includes:
a determining unit 131, configured to determine, for each frame of point cloud data in each task, a predicted state of each tracked object in a next frame according to a current state of each tracked object in the point cloud data of a current frame;
a determining unit 132, configured to determine whether the point cloud data of the next frame is data in the current task according to the relationship between each frame and the task, and obtain a determination result;
an updating unit 133, configured to update a tracking list according to the predicted state of each tracked object in the next frame and the determination result, so as to obtain a target labeling result of the point cloud data to be labeled; the tracking list includes an observed state of the tracker for each tracked object.
In one embodiment, the updating unit 133 is configured to, if the point cloud data of the next frame belongs to data in a current task, pair a tracked object in the point cloud data of the current frame with a labeled object in the point cloud data of the next frame according to a task identifier of each tracked object, and determine a target labeled object that is successfully paired with each tracked object in the point cloud data of the current frame; the target labeling object is a labeling object which is the same as the task identifier of the tracking object in the point cloud data of the next frame; and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
In one embodiment, the updating unit 133 is configured to pair the predicted state of each tracked object in the next frame with the labeled state of each labeled object to obtain a successfully paired tracked object and target labeled object if the point cloud data of the next frame does not belong to the point cloud data in the current task; the marked object is a marked object in the first frame of point cloud data in the new task; and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
In one embodiment, the updating unit 133 is configured to calculate a characteristic distance between a predicted state of each of the tracked objects in a next frame and an annotated state of each of the annotated objects; and determining the tracking object and the marked object corresponding to the characteristic distance smaller than the preset threshold value as the successfully paired tracking object and target marked object.
In an embodiment, the updating unit 133 is configured to modify a predicted state of the tracked object in a next frame according to the labeling state and a preset residual error of the successfully paired target labeled object, so as to obtain a modified state; the preset residual error represents an error between a prediction state and a labeling state of the tracked object; and updating the observation state of the tracker of the successfully paired target labeling object in the tracking list according to the correction state of each tracking object.
In one embodiment, the obtaining module 12 is further configured to determine each tracked object in the first frame of point cloud data in the point cloud data to be labeled, initialize a tracker and a global tracking identifier for each tracked object, and establish the tracking list from the trackers and global tracking identifiers of the tracked objects.
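A minimal sketch of this initialization, assuming the first frame's labels arrive as a mapping from per-task identifier to state; the global counter is what lets one physical object keep a single identifier across task boundaries. All names are hypothetical.

```python
import itertools

def init_tracking_list(first_frame_objects, gid_counter=None):
    """first_frame_objects: {task_id: labeled state}. Returns the tracking
    list {global_id: tracker}, one tracker per labeled object."""
    gid_counter = gid_counter or itertools.count(1)
    tracking_list = {}
    for task_id, state in first_frame_objects.items():
        gid = next(gid_counter)  # global tracking identifier
        tracking_list[gid] = {"task_id": task_id, "state": state,
                              "observed": [state], "missed": 0}
    return tracking_list
```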
In an embodiment, the concatenation module 13 is further configured to, after a tracked object in the point cloud data of the current frame is successfully paired with a labeled object in the point cloud data of the next frame, assign the global tracking identifier of the tracked object in the point cloud data of the current frame to the successfully paired labeled object.
In one embodiment, the obtaining module 12 is further configured to add a tracker for a new tracked object to the tracking list if the new tracked object appears in the point cloud data of the next frame; and/or, if a tracked object does not appear in a preset number of consecutive frames of point cloud data, to delete the tracker of that tracked object from the tracking list.
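These lifecycle rules might be sketched as follows, with max_missed standing in for the preset number of consecutive frames; both the threshold and the tracker layout are assumptions.

```python
def manage_lifecycle(tracking_list, matched_gids, unmatched_objects,
                     gid_counter, max_missed=3):
    """matched_gids: global IDs observed in the current frame;
    unmatched_objects: {task_id: state} labels that no tracker claimed."""
    # newly appeared objects spawn new trackers
    for task_id, state in unmatched_objects.items():
        tracking_list[next(gid_counter)] = {"task_id": task_id, "state": state,
                                            "observed": [state], "missed": 0}
    # trackers unseen for max_missed consecutive frames are dropped
    for gid in list(tracking_list):
        if gid in matched_gids:
            tracking_list[gid]["missed"] = 0
        else:
            tracking_list[gid]["missed"] += 1
            if tracking_list[gid]["missed"] >= max_missed:
                del tracking_list[gid]
```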
In one embodiment, the concatenation module 13 is further configured to obtain correction information for a target tracked object in a link frame, the link frame being the first frame in a task to be corrected, and to correct, according to the correction information, the identifier of the target tracked object in all frames after the link frame in the target labeling result, so that the identifiers of the same tracked object are consistent across tasks.
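A minimal sketch of this correction pass, assuming the per-frame results map global identifiers to annotations: an operator's fix at the link frame is replayed over every later frame so that one physical object keeps one identifier across tasks. The function and its arguments are illustrative.

```python
def apply_correction(labels, link_frame, wrong_gid, correct_gid):
    """labels: list of per-frame dicts {global_id: annotation};
    link_frame: index of the first frame of the task being corrected."""
    for frame in labels[link_frame:]:
        if wrong_gid in frame:
            # re-key the annotation; assumes correct_gid is otherwise unused here
            frame[correct_gid] = frame.pop(wrong_gid)
```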
For specific limitations of the data annotation device, reference may be made to the limitations of the data annotation method above, which are not repeated here. Each module in the data annotation device can be realized wholly or partly by software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in a processor of the computer device or be independent of it, or be stored, in software form, in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the data annotation method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device can be a touch layer covering the display screen, a key, trackball, or touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
and connecting the labeling results of the tasks in series according to the relationship between the frames and the tasks to obtain a target labeling result of the point cloud data to be labeled.
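As a sketch of the splitting step above, assuming tasks are fixed-size chunks of consecutive frames (the patent does not fix a chunk size or splitting rule):

```python
def split_into_tasks(frames, frames_per_task=50):
    """Carve a long frame sequence into fixed-size tasks so they can be
    labeled in parallel. Returns the chunks and each frame's task index."""
    tasks = [frames[i:i + frames_per_task]
             for i in range(0, len(frames), frames_per_task)]
    frame_task = [i // frames_per_task for i in range(len(frames))]
    return tasks, frame_task
```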
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
and connecting the labeling results of the tasks in series according to the relationship between the frames and the tasks to obtain a target labeling result of the point cloud data to be labeled.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A method for annotating data, the method comprising:
dividing point cloud data to be marked into a plurality of tasks, and marking the tasks respectively;
acquiring a labeling result of each frame in each task;
according to the relation between each frame and each task, the labeling results of each task are connected in series to obtain a target labeling result of the point cloud data to be labeled;
the step of serially connecting the labeling results of the tasks according to the relationship between the frames and the tasks to obtain the target labeling result of the point cloud data to be labeled comprises the following steps:
for each frame of point cloud data in each task, determining the prediction state of each tracked object in the next frame according to the current state of each tracked object in the point cloud data of the current frame;
judging whether the point cloud data of the next frame is the data in the current task or not according to the relation between each frame and the task, and acquiring a judgment result;
updating a tracking list according to the prediction state of each tracked object in the next frame and the judgment result to obtain a target labeling result of the point cloud data to be labeled; the tracking list includes an observation state of the tracker for each tracked object;
updating a tracking list according to the predicted state of each tracked object in the next frame and the judgment result, wherein the updating comprises:
if the point cloud data of the next frame belongs to the data in the current task, matching the tracked objects in the point cloud data of the current frame with the labeled objects in the point cloud data of the next frame according to the task identification of each tracked object, and determining the target labeled objects which are successfully matched with each tracked object in the point cloud data of the current frame; the target labeling object is a labeling object which is the same as the task identifier of the tracking object in the point cloud data of the next frame;
and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
2. The method according to claim 1, wherein said updating the tracking list according to the predicted state of each of the tracked objects in the next frame and the determination result comprises:
if the point cloud data of the next frame does not belong to the point cloud data in the current task, pairwise matching is carried out on the prediction state of each tracked object in the next frame and the labeling state of each labeled object, and the tracked object and the target labeled object which are successfully matched are obtained; the marked object is a marked object in the first frame of point cloud data in the new task;
and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
3. The method according to claim 2, wherein pairwise pairing the predicted state of each tracked object in the next frame with the labeled state of each labeled object to obtain a successfully paired tracked object and target labeled object comprises:
calculating the characteristic distance between the predicted state of each tracked object in the next frame and the labeling state of each labeled object;
and determining the tracking object and the marked object corresponding to the characteristic distance smaller than the preset threshold value as the successfully paired tracking object and target marked object.
4. The method according to claim 1 or 2, wherein the updating the observation state of the tracker of the successfully paired target annotation object in the tracking list using the predicted state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object comprises:
correcting the prediction state of the tracking object in the next frame according to the labeling state and the preset residual error of the successfully paired target labeling object to obtain a correction state; the preset residual error represents an error between a prediction state and a labeling state of the tracked object;
and updating the observation state of the tracker of the successfully paired target labeling object in the tracking list according to the correction state of each tracking object.
5. The method according to any one of claims 1-3, further comprising:
determining each tracked object in a first frame of point cloud data in the point cloud data to be labeled;
initializing a tracker and a global tracking identifier of each tracked object;
and establishing the tracking list according to the tracker of the tracked object and the global tracking identifier.
6. The method of claim 5, wherein updating the tracking list further comprises:
and after the tracked object in the point cloud data of the current frame is successfully matched with the labeled object in the point cloud data of the next frame, assigning the global tracking identification of the tracked object in the point cloud data of the current frame to the successfully matched labeled object.
7. The method according to any one of claims 1-3, further comprising:
if a new tracking object appears in the point cloud data of the next frame, adding a tracker of the new tracking object into the tracking list; and/or,
if a tracking object does not appear in a preset number of consecutive frames of point cloud data, deleting the tracker of the tracking object that does not appear from the tracking list.
8. The method according to any one of claims 1-3, further comprising:
acquiring correction information aiming at a target tracking object in a link frame; the link frame is a first frame in a task to be corrected;
and correcting the identifiers of the target tracking objects in all frames after the link frame in the target labeling result according to the correction information so as to enable the identifiers of the same tracking objects in each task to be consistent.
9. A data annotation device, comprising:
the system comprises a splitting module, a storage module and a processing module, wherein the splitting module is used for splitting point cloud data to be marked into a plurality of tasks and marking the tasks respectively;
the acquisition module is used for acquiring the labeling result of each frame in each task;
the serial connection module is used for serially connecting the labeling results of the tasks according to the relation between each frame and the task to obtain a target labeling result of the point cloud data to be labeled;
the concatenation module includes:
the determining unit is used for determining the prediction state of each tracked object in the next frame according to the current state of each tracked object in the point cloud data of the current frame for each frame of point cloud data in each task;
the judging unit is used for judging whether the point cloud data of the next frame is the data in the current task or not according to the relation between each frame and the task, and acquiring a judging result;
the updating unit is used for matching the tracked objects in the point cloud data of the current frame with the marked objects in the point cloud data of the next frame according to the task identifiers of the tracked objects if the point cloud data of the next frame belongs to the data in the current task, and determining the target marked objects which are successfully matched with the tracked objects in the point cloud data of the current frame; the target labeling object is a labeling object which is the same as the task identifier of the tracking object in the point cloud data of the next frame; updating the observation state of the tracker of the successfully paired target marking object in a tracking list by adopting the prediction state of each tracking object in the next frame and the marking state of the successfully paired target marking object; the tracking list includes an observed state of the tracker for each tracked object.
10. The apparatus according to claim 9, wherein the updating unit is further configured to pair the predicted state of each tracked object in the next frame with the labeled state of each labeled object if the point cloud data of the next frame does not belong to the point cloud data in the current task, so as to obtain a successfully paired tracked object and target labeled object; the marked object is a marked object in the first frame of point cloud data in the new task; and updating the observation state of the tracker of the successfully paired target annotation object in the tracking list by adopting the prediction state of each tracked object in the next frame and the annotation state of the successfully paired target annotation object.
11. The apparatus according to claim 10, wherein the updating unit is further configured to calculate a characteristic distance between a predicted state of each of the tracked objects in a next frame and an annotated state of each of the annotated objects; and determining the tracking object and the marked object corresponding to the characteristic distance smaller than the preset threshold value as the successfully paired tracking object and target marked object.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202010338827.9A 2020-04-26 2020-04-26 Data annotation method and device, computer equipment and storage medium Active CN111666137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010338827.9A CN111666137B (en) 2020-04-26 2020-04-26 Data annotation method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111666137A CN111666137A (en) 2020-09-15
CN111666137B true CN111666137B (en) 2022-04-05

Family

ID=72382976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010338827.9A Active CN111666137B (en) 2020-04-26 2020-04-26 Data annotation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111666137B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801200B (en) * 2021-02-07 2024-02-20 文远鄂行(湖北)出行科技有限公司 Data packet screening method, device, equipment and storage medium
CN112991389B (en) * 2021-03-24 2024-04-12 深圳一清创新科技有限公司 Target tracking method and device and mobile robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5480914B2 (en) * 2009-12-11 2014-04-23 株式会社トプコン Point cloud data processing device, point cloud data processing method, and point cloud data processing program
CN107818293A (en) * 2016-09-14 2018-03-20 北京百度网讯科技有限公司 Method and apparatus for handling cloud data
CN109509260B (en) * 2017-09-14 2023-05-26 阿波罗智能技术(北京)有限公司 Labeling method, equipment and readable medium of dynamic obstacle point cloud

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446585A (en) * 2018-01-31 2018-08-24 深圳市阿西莫夫科技有限公司 Method for tracking target, device, computer equipment and storage medium
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110750523A (en) * 2019-09-12 2020-02-04 苏宁云计算有限公司 Data annotation method, system, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intelligent recognition method for hull block closure surfaces based on PointNet++; Chen Shangwei, et al.; Ship Engineering (《船舶工程》); 2019-12-31; Vol. 41, No. 12; pp. 138-141 *

Also Published As

Publication number Publication date
CN111666137A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN108921200B (en) Method, apparatus, device and medium for classifying driving scene data
US10748061B2 (en) Simultaneous localization and mapping with reinforcement learning
US20190011550A1 (en) Method and apparatus for determing obstacle speed
Ball et al. OpenRatSLAM: an open source brain-based SLAM system
US20160371394A1 (en) Indoor localization using crowdsourced data
CN108460427B (en) Classification model training method and device and classification method and device
CN111666137B (en) Data annotation method and device, computer equipment and storage medium
CN107742304B (en) Method and device for determining movement track, mobile robot and storage medium
CN110414526B (en) Training method, training device, server and storage medium for semantic segmentation network
Akai et al. Simultaneous pose and reliability estimation using convolutional neural network and Rao–Blackwellized particle filter
CN110162058B (en) AGV planning method and device
Tran et al. Goal-driven long-term trajectory prediction
CN112815948B (en) Method, device, computer equipment and storage medium for identifying yaw mode
Peršić et al. Online multi-sensor calibration based on moving object tracking
Junior et al. A new approach for mobile robot localization based on an online IoT system
US11867795B2 (en) System and method for constructing fused tracks from radar detections
CA2894863A1 (en) Indoor localization using crowdsourced data
Nowicki et al. Leveraging visual place recognition to improve indoor positioning with limited availability of WiFi scans
CN116645612A (en) Forest resource asset determination method and system
Nowicki et al. A multi-user personal indoor localization system employing graph-based optimization
CN110824496A (en) Motion estimation method, motion estimation device, computer equipment and storage medium
CN114046787B (en) Pose optimization method, device and equipment based on sensor and storage medium
Jackermeier et al. Exploring the limits of PDR-based indoor localisation systems under realistic conditions
CN113761367A (en) System, method and device for pushing robot process automation program and computing equipment
CN113609947A (en) Motion trajectory prediction method, motion trajectory prediction device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant