CN113127666A - Continuous frame data labeling system, method and device

Info

Publication number
CN113127666A
CN113127666A (application number CN202010041206.4A)
Authority
CN
China
Prior art keywords
labeling
frame data
continuous frame
result
labeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010041206.4A
Other languages
Chinese (zh)
Other versions
CN113127666B (en)
Inventor
马贤忠
胡皓瑜
江浩
董维山
范一磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd filed Critical Momenta Suzhou Technology Co Ltd
Priority to CN202010041206.4A priority Critical patent/CN113127666B/en
Priority to DE112020003085.7T priority patent/DE112020003085T5/en
Priority to PCT/CN2020/121362 priority patent/WO2021143230A1/en
Publication of CN113127666A publication Critical patent/CN113127666A/en
Application granted granted Critical
Publication of CN113127666B publication Critical patent/CN113127666B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval of still image data
    • G06F16/53 - Querying
    • G06F16/538 - Presentation of query results
    • G06F16/55 - Clustering; Classification
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/587 - using geographical or spatial information, e.g. location
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06V10/945 - User interactive design; Environments; Toolboxes
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a system, a method, and a device for labeling continuous frame data, where the system comprises a cloud and an annotation end. The cloud reads continuous frame data and performs target detection on each frame according to the labeling task to obtain a detection result for each object to be labeled; according to the detection result and the time sequence information among the frames, it establishes an association relation among occurrences of the same object to be labeled across frames as a pre-labeling result; it then generates an extensible pre-labeling file from the pre-labeling result and sends the file together with the continuous frame data to the annotation end. The annotation end receives the continuous frame data and the corresponding pre-labeling file sent by the cloud and, after receiving a correction instruction for the pre-labeling file, corrects it according to the instruction to obtain a target labeling result. By adopting this scheme, the manual time consumed by continuous-frame labeling is shortened, labeling efficiency is improved, and labeling cost is reduced.

Description

Continuous frame data labeling system, method and device
Technical Field
The invention relates to the technical field of automatic driving, in particular to a system, a method and a device for labeling continuous frame data.
Background
In the field of automatic driving, a sensing module takes data from various sensors and information from a high-precision map as input, and accurately senses the surrounding environment of an autonomous vehicle through a series of calculations and processing. Current automatic-driving perception algorithms rely mainly on deep learning, and large labeled data sets are needed to train the models; therefore, generating large amounts of labeled data more quickly and efficiently is key to automatic-driving perception.
Currently, most annotation data, including 2D images and 3D lidar point cloud data, is annotated manually, which is a very slow and inefficient process. It requires a person sitting in front of a computer screen operating labeling tools, marking objects one by one, which is extremely labor-intensive. For lidar data in particular, the complexity and sparsity of the data make annotation errors and missed labels likely, which may even negatively affect neural network training.
Disclosure of Invention
The embodiments of the invention disclose a system, a method, and a device for labeling continuous frame data, which greatly shorten the manual time required for labeling continuous frame data, improve labeling efficiency, and reduce labeling cost.
In a first aspect, an embodiment of the present invention discloses a system for labeling continuous frame data, including a cloud and a labeling end; wherein:
the cloud is configured to: acquiring a labeling task, wherein the labeling task comprises the category, the position and the output file format of an object to be labeled;
the cloud end reads continuous frame data, performs target detection on each frame data in the continuous frame data according to the labeling task, and takes the category and the position of an object to be labeled in each frame data as a detection result;
the cloud end establishes an association relation between the same object to be labeled in each frame data according to the detection result and the time sequence information among the frame data, wherein the association relation is a pre-labeling result of the continuous frame data;
the cloud end generates an extensible pre-labeling file from the pre-labeling result according to the output file format, and sends the pre-labeling file and the continuous frame data to the labeling end;
the annotation end is configured to: receive the continuous frame data and the corresponding pre-labeled file sent by the cloud and, after receiving a correction instruction for the pre-labeled file, correct the pre-labeled file according to the correction instruction and take the corrected labeling result as the target labeling result of the continuous frame data.
In a second aspect, an embodiment of the present invention further provides a method for labeling continuous frame data, which is applied to a cloud, and the method includes:
acquiring a labeling task, wherein the labeling task comprises the category and the position of an object to be labeled;
reading continuous frame data, performing target detection on each frame data in the continuous frame data according to the labeling task, and taking the type and position of an object to be labeled in each frame data as a detection result;
and establishing an association relation between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, wherein the association relation is used as a pre-labeling result of the continuous frame data and is used for correcting at a labeling end according to a correction instruction, and the labeling result corrected at the labeling end is a target labeling result of the continuous frame data.
Optionally, the method further includes:
and correcting the detection result based on a machine learning method to ensure that the same object to be marked has the same size, wherein the machine learning method comprises a Kalman filtering algorithm.
Optionally, the labeling task further includes an output file format;
correspondingly, the method further comprises the following steps:
and generating an extensible pre-labeling file according to the pre-labeling result in the output file format, and sending the pre-labeling file and the continuous frame data to the labeling end for a labeling person to correct at the labeling end.
Optionally, the continuous frame data is a picture or a 3D lidar point cloud.
In a third aspect, an embodiment of the present invention further discloses a device for labeling continuous frame data, which is applied to a cloud, and the device includes:
the annotation task acquisition module is configured to acquire an annotation task, wherein the annotation task comprises the category and the position of an object to be annotated;
the target detection module is configured to read continuous frame data, perform target detection on each frame data in the continuous frame data according to the labeling task, and take the category and the position of an object to be labeled in each frame data as a detection result;
and the association module is configured to establish an association relationship between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, wherein the association relationship is used as a pre-labeling result of the continuous frame data and is used for performing correction at a labeling end according to a correction instruction, and the labeling result after correction at the labeling end is a target labeling result of the continuous frame data.
Optionally, the apparatus further comprises:
and the correction module is configured to correct the detection result based on a machine learning method so that the same object to be labeled has the same size, wherein the machine learning method comprises a Kalman filtering algorithm.
Optionally, the labeling task further includes an output file format;
correspondingly, the device further comprises:
and the file generation module is configured to generate an extensible pre-labeling file from the pre-labeling result according to the output file format, and send the pre-labeling file and the continuous frame data to the labeling end for a labeling person to correct at the labeling end.
Optionally, the continuous frame data is a picture or a 3D lidar point cloud.
In a fourth aspect, an embodiment of the present invention further discloses a method for labeling continuous frame data, which is applied to a labeling end, and the method includes:
acquiring a pre-labeling result of continuous frame data sent by a cloud end;
if a correction instruction for the pre-labeling result is received, correcting the labeling result according to the correction instruction, and taking the corrected labeling result as a target labeling result of the continuous frame data;
wherein the pre-labeling result is obtained as follows: after reading the continuous frame data, the cloud performs target detection on the objects to be labeled in each frame according to the labeling task to obtain a detection result, and establishes, according to the detection result and the time sequence information among the frames, an association relation among the same object to be labeled across frames; the detection result includes the category and the position of the object to be labeled;
the labeling task comprises the category and the position of an object to be labeled.
In a fifth aspect, an embodiment of the present invention further provides a device for labeling continuous frame data, where the device is applied to a labeling end, and the device includes:
the system comprises a pre-labeling result acquisition module, a pre-labeling result acquisition module and a data processing module, wherein the pre-labeling result acquisition module is configured to acquire a pre-labeling result of continuous frame data sent by a cloud end;
the target labeling result generating module is configured to correct the labeling result according to the correction instruction if the correction instruction of the pre-labeling result is received, and take the corrected labeling result as the target labeling result of the continuous frame data;
wherein the pre-labeling result is obtained as follows: after reading the continuous frame data, the cloud performs target detection on the objects to be labeled in each frame according to the labeling task to obtain a detection result, and establishes, according to the detection result and the time sequence information among the frames, an association relation among the same object to be labeled across frames; the detection result includes the category and the position of the object to be labeled;
the labeling task comprises the category and the position of an object to be labeled.
In a sixth aspect, an embodiment of the present invention further provides a cloud server, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the method for labeling continuous frame data applied to the cloud end provided by any embodiment of the invention.
In a seventh aspect, an embodiment of the present invention further provides an annotation terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the labeling method applied to the continuous frame data of the labeling end provided by any embodiment of the invention.
In an eighth aspect, an embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program includes instructions for executing part or all of the steps of the method for labeling continuous frame data applied to a cloud end provided in any embodiment of the present invention.
In a ninth aspect, the present invention further provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing part or all of the steps of the method for labeling continuous frame data applied to a labeling end, provided by any of the embodiments of the present invention.
In a tenth aspect, an embodiment of the present invention further provides a computer program product, which when running on a computer, causes the computer to perform part or all of the steps of the method for annotating continuous frame data applied to a cloud end provided in any embodiment of the present invention.
In an eleventh aspect, the embodiment of the present invention further provides a computer program product, which when run on a computer, causes the computer to execute part or all of the steps of the annotation method applied to the continuous frame data at the annotation end, provided by any embodiment of the present invention.
According to the technical solution provided by the embodiments, target detection is performed on single-frame data and the detection results are associated according to the time sequence information among frames, so that the pre-labeling result of the continuous frame data is obtained. Subsequent manual annotators only need to check for omissions and fill in gaps on the basis of the pre-labeling result through the labeling end. In addition, because function keys are provided at the labeling end, annotators can make corrections conveniently, which improves the labeling efficiency of continuous frame data to a certain extent. In conclusion, by adopting a labeling mode in which the cloud and the labeling end cooperate, the technical solution provided by the embodiments can effectively reduce the workload of manual annotators, reduce labeling cost, and improve labeling speed and accuracy.
The inventive points of the present invention include:
1. On the basis of the prior art, before the continuous frame data is labeled at the labeling end, the technical solution of the embodiments of the invention adds auxiliary labeling links at the cloud, such as target detection on single-frame data and association across the continuous frames. The pre-labeling result obtained after cloud-side auxiliary labeling serves as the basis for review by subsequent annotators, who adjust and correct it through the labeling end. This solves the problem of low manual labeling efficiency in the prior art and is one of the inventive points.
2. Auxiliary function keys are added at the labeling end; an annotator can trigger a correction instruction through these keys, making it convenient to adjust the pre-labeled file. The embodiments of the invention adopt a labeling mode in which the cloud and the labeling end cooperate, effectively improving labeling efficiency and reducing labeling cost.
3. The cloud adopts a preset target detection model when performing target detection on single-frame data, and the model establishes the association between an object to be labeled in each frame and its category and position. The loss function adopted in training the model is a weighted sum of the normalized errors of the position terms of the object to be labeled after sorting them by magnitude, where the weight of each normalized error is w^k, w is a hyperparameter, and k is the rank of that error in the sorted order. This arrangement reduces the number of times an annotator must adjust the auxiliary boxes and the time spent doing so, improving labeling efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a labeling system for continuous frame data according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for annotating continuous frame data applied to a cloud according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for annotating continuous frame data applied to an annotation end according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for annotating continuous frame data applied to a cloud according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a labeling apparatus for continuous frame data applied to a labeling end according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a cloud server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Example one
Referring to fig. 1, fig. 1 is a schematic structural diagram of a system for labeling continuous frame data according to an embodiment of the present invention. The system can be applied to automatic driving, where a large amount of labeled data can be generated more quickly and efficiently to train models. As shown in fig. 1, the system for labeling continuous frame data provided in this embodiment specifically includes a cloud 110 and an annotation end 120; wherein:
the cloud 110 is configured to: acquire a labeling task, where the labeling task includes the category and position of the objects to be labeled and the output file format;
the labeling task is used as prior information of a labeling process, and includes objects to be labeled (such as vehicles, pedestrians, and the like), categories of the objects to be labeled (such as tricycles, buses, cars, and the like), preset sizes, output file formats of labeling files, and the like. The annotation task can be set by the annotation personnel modifying the parameters of the cloud model according to actual requirements, or the annotation task can be sent to the cloud from the annotation end by the annotation personnel. Because the cloud is not limited by computer resources, the continuous frame data can be pre-labeled by using the deep learning algorithm of the cloud, so that the workload of subsequent manual labeling is reduced, and the working efficiency is improved. Specifically, the cloud end specific labeling process is as follows:
the cloud 110 reads the continuous frame data, performs target detection on each frame data in the continuous frame data according to the labeling task, and takes the category and position of the object to be labeled in each frame data as a detection result. The cloud 110 establishes an association relationship between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, wherein the association relationship is a pre-labeling result of continuous frame data; the cloud terminal 110 generates an extensible pre-labeled file from the pre-labeled result according to the output file format, and sends the pre-labeled file and the continuous frame data to the labeling terminal 120;
in this embodiment, the continuous frame data is a sequence of several data of the same type with time sequence and equal interval, and may be a picture or a 3D laser radar point cloud. Especially for 3D laser radar point clouds, in the process of marking the point clouds by using the existing marking technology, the marking speed is low and the cost is high. The labeling system provided by the embodiment can be used as an auxiliary labeling link of the 3D laser radar point cloud. Because the cloud is not limited by computer resources, the cloud is pre-labeled, so that the labeling workload of manual labeling personnel is reduced, the labeling cost is reduced, and the labeling efficiency is improved.
For example, the cloud's target detection on each frame of the continuous frame data may be implemented with a preset target detection model, which establishes the association between an object to be labeled in each frame and its category and position. The category and position of the object to be labeled can thus be obtained from the preset target detection model.
For example, the preset target detection model may be PointRCNN (a point-cloud adaptation of R-CNN, Regions with Convolutional Neural Network features), or the output results of multiple models may be fused; this is not specifically limited here. In this embodiment, the position of the object to be labeled may be calibrated with an auxiliary box in the form of a cuboid. The specific position information of the cuboid can be represented by the coordinates (x, y, z) of its center, its length, width, and height (w, h, d), and its orientation angle θ; that is, the position of the object to be labeled regressed by the preset target detection model consists of seven variables: x, y, z, w, h, d, and θ. These variables may be represented in the form of auxiliary boxes.
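As an illustration, the seven regressed variables can be grouped into one structure. This is a minimal sketch; the class name and field layout are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AuxiliaryBox:
    """Cuboid auxiliary box regressed by the detection model (a sketch)."""
    x: float      # center coordinates of the cuboid
    y: float
    z: float
    w: float      # length, width, and height of the cuboid
    h: float
    d: float
    theta: float  # orientation (heading) angle, assumed to be in radians
```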
It should be noted that the preset target detection model provided in this embodiment is mainly used for identifying the category and position of an object to be labeled. Whether an object belongs to a category in the labeling task can be decided by classification, and the position of the object can be determined by regression. Accordingly, the loss function used to train the preset target detection model generally includes two parts, classification and regression. During training, the regression part of the loss is a weighted sum of the normalized errors of the position terms of the object to be labeled after sorting them by magnitude, where the weight of each normalized error is w^k, w is a hyperparameter, and k is the rank of that error in the sorted order. The reason for this is as follows:
in the prior art, the regression part of the target detection model generally adopts loss functions in the forms of predicted values and truth difference values L1, L2, Smooth L1 of physical quantities such as position (x, y, z), size (w, h, d), and orientation angle (θ), and loss functions in the forms of IoU (Intersection over unit), GIoU (Generalized Intersection over unit), DIoU, and the like of a predicted frame and a real frame, and these loss functions can make the predicted value of the target detection model as close to the true value as possible. However, the loss function adopted at present generally only considers the accuracy of the positions of the predicted frame and the real frame, and does not consider the specific requirements during labeling, that is, the number of times of modifying the auxiliary frame by a labeling person is reduced as much as possible.
By adjusting the weights of the different terms, the loss function adopted in training the preset target detection model of this embodiment drives the result toward a state in which only a few terms carry some deviation and the other terms are close to 0, rather than every term deviating a little. This reduces the number of adjustments an annotator must make to the auxiliary box and the time spent doing so, improving labeling efficiency.
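A minimal sketch of the regression part of such a loss, written here in PyTorch; the normalization constants, the sort direction, and the value of w are assumptions, since the disclosure fixes only the w^k weighting of rank-ordered normalized errors:

```python
import torch

def ranked_regression_loss(pred, target, scale, w=1.5):
    """Weighted sum of sorted normalized errors (a sketch of the described loss).

    pred, target: tensors of shape (..., 7) holding (x, y, z, w, h, d, theta).
    scale:        per-term normalization constants (an assumption, e.g. box
                  dimensions for the position terms), broadcastable to pred.
    w:            the hyperparameter from the disclosure; 1.5 is an arbitrary
                  illustrative value.
    """
    norm_err = torch.abs(pred - target) / scale            # normalized errors
    sorted_err, _ = torch.sort(norm_err, dim=-1, descending=True)
    ranks = torch.arange(norm_err.shape[-1], device=pred.device, dtype=pred.dtype)
    weights = w ** ranks                                   # rank-k term weighted by w^k
    return (sorted_err * weights).sum(dim=-1).mean()
```

With w > 1 and a descending sort, smaller errors receive larger weights and are pushed toward zero, concentrating any residual deviation in a few terms, which matches the stated goal of minimizing how many box parameters an annotator has to touch.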
After obtaining the category and position of the objects to be labeled from the preset target detection model, the cloud 110 can establish an association relation among the same object to be labeled across frames according to the detection result and the time sequence information among frames. The same object in different frames can be represented by the same number. The association is mainly established by tracking the same object: for example, if vehicle 1 appears in the current frame, it must be determined whether vehicle 1 can be detected in the next frame; if so, the association between vehicle 1 in the current frame and in the next frame can be established according to the time sequence information. The specific association may be performed by a machine learning method, such as a Kalman filtering algorithm.
In addition, according to the time sequence information, the same object to be labeled should keep the same length, width, and height while its position and orientation change continuously, so the single-frame results can be checked and corrected by a machine learning method such as a Kalman filtering algorithm. For example, an object missed in some frame of the continuous data can be supplemented: if vehicle 2 exists in the preceding and following frames but is not detected in the middle frame, the method can flag vehicle 2 as a missed detection in that single frame. Similarly, the method can be used to delete false detections in the single-frame results. With this implementation, the tracking of objects to be labeled across continuous frame data is realized.
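The following sketch illustrates one way such association and correction could look. The greedy nearest-center matching, the distance gate, and the median-based size unification are all illustrative assumptions; the disclosure itself only calls for a machine learning method such as Kalman filtering:

```python
import numpy as np

def associate_frames(prev_boxes, curr_boxes, max_dist=2.0):
    """Greedily match boxes between two consecutive frames by center distance.

    Each box is a 7-tuple (x, y, z, w, h, d, theta). Returns (i_prev, j_curr)
    index pairs; max_dist is an assumed gating threshold in meters.
    """
    matches, used = [], set()
    for i, p in enumerate(prev_boxes):
        dists = [np.linalg.norm(np.asarray(c[:3]) - np.asarray(p[:3]))
                 if j not in used else np.inf
                 for j, c in enumerate(curr_boxes)]
        if dists:
            j = int(np.argmin(dists))
            if dists[j] < max_dist:
                matches.append((i, j))
                used.add(j)
    return matches

def unify_track_size(track_boxes):
    """Give one tracked object a single length/width/height over all frames,
    here via the per-dimension median -- an assumed stand-in for a Kalman
    smoother over the size state."""
    boxes = np.asarray(track_boxes, dtype=float)
    boxes[:, 3:6] = np.median(boxes[:, 3:6], axis=0)
    return boxes
```

A missed detection (vehicle 2 present in the neighboring frames but absent in the middle one) could then be filled by interpolating the matched boxes of the surrounding frames, and an unmatched single-frame box with no track before or after it is a candidate false detection to delete.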
In this embodiment, after the association relationship is determined, it may be used as the pre-labeling result of the continuous frame data, and the cloud 110 may generate an extensible pre-labeling file from it according to the output file format in the labeling task and send the file and the continuous frame data to the annotation end 120, so that annotators may correct them at the annotation end 120.
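One possible shape for that extensible file, assuming JSON as the output format specified in the task; the key names and nesting below are purely illustrative:

```python
import json

def write_prelabel_file(timestamps, tracks, path="prelabel.json"):
    """Serialize a pre-labeling result to an extensible JSON file (a sketch).

    timestamps: one timestamp per frame of the continuous data.
    tracks:     maps a persistent object id to {"category": str,
                "boxes": {frame_index: 7-tuple}}; the same id across
                frames encodes the association relation.
    """
    doc = {
        "version": 1,  # explicit version field leaves room for extension
        "frames": [{"index": i, "timestamp": t} for i, t in enumerate(timestamps)],
        "objects": [
            {
                "id": obj_id,
                "category": meta["category"],
                "boxes": {str(fi): list(box) for fi, box in meta["boxes"].items()},
            }
            for obj_id, meta in tracks.items()
        ],
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(doc, f, ensure_ascii=False, indent=2)
```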
the annotation end 120 is configured to: receive the continuous frame data and the corresponding pre-labeled file sent by the cloud 110 and, after receiving a correction instruction for the pre-labeled file, correct the pre-labeled file according to the correction instruction and take the corrected labeling result as the target labeling result of the continuous frame data.
For example, a function key for correcting the pre-labeled file is added at the labeling end; when the key is triggered, the pre-labeled file is corrected. For instance, in vehicle detection the vehicle orientation produced by the cloud's preset target detection model is not necessarily accurate, so a one-key function that flips the orientation by 180° can be added at the labeling end for annotators to check and fix.
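Such a one-key flip could be as small as the following sketch; wrapping the angle into (-π, π] is an assumption about the angle convention:

```python
import math

def flip_heading(theta):
    """Flip a labeled vehicle's orientation by 180 degrees (a sketch of the
    annotation-end function key described above)."""
    theta += math.pi
    return math.atan2(math.sin(theta), math.cos(theta))  # wrap into (-pi, pi]
```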
In addition, when the cloud's preset target detection model is trained, the weights of the different loss terms are adjusted so that only a few terms carry some deviation while the others are close to 0, rather than every term deviating. Therefore, when an annotator modifies the model's detection result at the labeling end, namely the auxiliary box of an object to be labeled, the number of adjustments and the time required are reduced, improving labeling efficiency.
According to the technical solution provided by this embodiment, target detection is performed on single-frame data and the detection results are associated according to the time sequence information among frames, yielding the pre-labeling result of the continuous frame data. Subsequent manual annotators only need to check for omissions and fill in gaps on the basis of the pre-labeling result through the labeling end. Moreover, because function keys are provided at the labeling end, corrections are convenient, which improves the labeling efficiency of continuous frame data to a certain extent. In short, by adopting a labeling mode in which the cloud and the labeling end cooperate, the technical solution provided by this embodiment can effectively reduce the workload of manual annotators, reduce labeling cost, and improve labeling speed and accuracy.
Example two
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for annotating continuous frame data applied to a cloud according to an embodiment of the present invention. The method of this embodiment may be executed by a labeling device for continuous frame data; the device may be implemented in software and/or hardware and may generally be integrated in a cloud server such as Alibaba Cloud or Baidu Cloud. As shown in fig. 2, the method provided in this embodiment specifically includes:
210. Acquire the labeling task.
The labeling task comprises the category and the position of the object to be labeled.
220. Read the continuous frame data, perform target detection on each frame according to the labeling task, and take the category and position of the object to be labeled in each frame as the detection result.
For a specific target detection method, reference may be made to the description of the above embodiments, which are not described herein again.
230. Establish an association relation among the same object to be labeled across frames according to the detection result and the time sequence information among frames; the association relation serves as the pre-labeling result of the continuous frame data and is corrected at the labeling end according to a correction instruction.
In this embodiment, the cloud's target detection and association over continuous frame data constitutes an auxiliary labeling link before labeling at the labeling end. The algorithm of this link runs in the cloud and is free from the limitation of local computer resources. The pre-labeling result obtained after cloud-side auxiliary labeling serves as the basis for review by subsequent annotators, who adjust it on that basis, reducing their workload and improving labeling efficiency and accuracy.
EXAMPLE III
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for annotating continuous frame data applied to an annotation end according to an embodiment of the present invention. The method can be executed by a labeling device for continuous frame data, which can be implemented by software and/or hardware, and can be generally integrated in a labeling terminal. As shown in fig. 3, the method provided in this embodiment specifically includes:
310. Acquire the pre-labeling result of continuous frame data sent by the cloud.
320. If a correction instruction for the pre-labeling result is received, correct the labeling result according to the correction instruction, and take the corrected labeling result as the target labeling result of the continuous frame data.
The pre-labeling result is obtained as follows: after reading the continuous frame data, the cloud performs target detection on the objects to be labeled in each frame according to the labeling task to obtain a detection result, and establishes, according to the detection result and the time sequence information among the frames, an association relation among the same object to be labeled across frames. The detection result includes the category and the position of the object to be labeled.
In this embodiment, auxiliary function keys may be added at the labeling end, for example a key that rotates a vehicle by 180°, to facilitate manual labeling.
In addition, the regression part of the loss function adopted when training the cloud's preset target detection model on single-frame data is a weighted sum of the normalized errors of the position terms of the object to be labeled after sorting them by magnitude, where the weight of each normalized error is w^k, w is a hyperparameter, and k is the rank of that error in the sorted order. With this arrangement, only a few terms of the loss carry some deviation while the others are close to 0, rather than every term deviating; annotators performing manual labeling therefore adjust the auxiliary boxes fewer times and spend less time, which improves labeling efficiency.
In this embodiment, the pre-labeled file sent by the cloud serves as the basis for correction at the labeling end; on this basis, annotators can further check for omissions and fill in gaps. Adopting a labeling mode in which cloud-side pre-labeling cooperates with the labeling end effectively improves labeling efficiency and reduces labeling cost.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a device for annotating continuous frame data applied to a cloud according to an embodiment of the present invention. As shown in fig. 4, the apparatus includes: an annotation task acquisition module 410, a target detection module 420, and an association module 430; wherein:
the annotation task obtaining module 410 is configured to obtain an annotation task, where the annotation task includes a category and a position of an object to be annotated;
a target detection module 420, configured to read continuous frame data, perform target detection on each frame data in the continuous frame data according to the labeling task, and use the category and position of an object to be labeled in each frame data as a detection result;
and the association module 430 is configured to establish an association relationship between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, where the association relationship is used as a pre-labeling result of the continuous frame data and is used for performing modification at a labeling end according to a modification instruction, and the labeling result after modification at the labeling end is a target labeling result of the continuous frame data.
Optionally, the apparatus further comprises:
and the correction module is configured to correct the detection result based on a machine learning method so that the same object to be labeled has the same size, wherein the machine learning method comprises a Kalman filtering algorithm.
Optionally, the labeling task further includes an output file format;
correspondingly, the device further comprises:
and the file generation module is configured to generate an extensible pre-labeling file from the pre-labeling result according to the output file format, and send the pre-labeling file and the continuous frame data to the labeling end for a labeling person to correct at the labeling end.
Optionally, the continuous frame data is a picture or a 3D lidar point cloud.
The continuous frame data labeling device provided by the embodiment of the invention can execute the continuous frame data labeling method applied to the cloud end provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For details of the technology not described in detail in the above embodiments, reference may be made to a method for annotating continuous frame data applied to a cloud end according to any embodiment of the present invention.
EXAMPLE five
Referring to fig. 5, fig. 5 is a schematic structural diagram of an annotating device for continuous frame data at an annotating end according to an embodiment of the present invention. As shown in fig. 5, the device includes: a pre-labeling result obtaining module 510 and a target labeling result generating module 520; wherein:
a pre-annotation result obtaining module 510 configured to obtain a pre-annotation result of continuous frame data sent by the cloud;
a target labeling result generating module 520, configured to, if a correction instruction for the pre-labeling result is received, correct the labeling result according to the correction instruction, and use the corrected labeling result as a target labeling result of the continuous frame data;
wherein the pre-labeling result is obtained as follows: after reading the continuous frame data, the cloud performs target detection on the objects to be labeled in each frame according to the labeling task to obtain a detection result, and establishes, according to the detection result and the time sequence information among the frames, an association relation among the same object to be labeled across frames; the detection result includes the category and the position of the object to be labeled;
the labeling task comprises the category and the position of an object to be labeled.
The continuous frame data labeling device provided by the embodiment of the invention can execute the continuous frame data labeling method applied to the labeling end provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For the technical details not described in detail in the above embodiments, reference may be made to the annotation method applied to the continuous frame data at the annotation end according to any embodiment of the present invention.
EXAMPLE six
Referring to fig. 6, fig. 6 is a schematic structural diagram of a cloud server according to an embodiment of the present invention. As shown in fig. 6, the cloud server may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the method for labeling continuous frame data applied to the cloud end according to any embodiment of the present invention.
The embodiment of the invention also provides another labeling terminal, which comprises a memory for storing executable program codes; a processor coupled to the memory; the processor calls the executable program code stored in the memory to execute the continuous frame data annotation method applied to the annotation terminal provided by any embodiment of the invention.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the method for labeling continuous frame data applied to a cloud end provided by any embodiment of the invention.
The embodiment of the invention also discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the labeling method applied to the continuous frame data of the labeling end provided by any embodiment of the invention.
The embodiment of the invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the method for labeling continuous frame data applied to a cloud end provided by any embodiment of the invention.
The embodiment of the invention also discloses a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the labeling method applied to the continuous frame data of the labeling end provided by any embodiment of the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, or the part thereof that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several requests to cause a computer device (which may be a personal computer, a server, a network device, or the like, and specifically a processor in the computer device) to execute part or all of the steps of the above-described methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
The above detailed description is given to a system, a method and a device for labeling continuous frame data disclosed in the embodiments of the present invention, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A continuous frame data annotation system, characterized by comprising a cloud and an annotation end; wherein:
the cloud is configured to: acquiring a labeling task, wherein the labeling task comprises the category, the position and the output file format of an object to be labeled;
the cloud end reads continuous frame data, performs target detection on each frame data in the continuous frame data according to the labeling task, and takes the category and the position of an object to be labeled in each frame data as a detection result;
the cloud end establishes an association relation between the same object to be labeled in each frame data according to the detection result and the time sequence information among the frame data, wherein the association relation is a pre-labeling result of the continuous frame data;
the cloud end generates an extensible pre-labeling file from the pre-labeling result according to the output file format, and sends the pre-labeling file and the continuous frame data to the labeling end;
the annotation end is configured to: receive the continuous frame data and the corresponding pre-labeled file sent by the cloud and, after receiving a correction instruction for the pre-labeled file, correct the pre-labeled file according to the correction instruction and take the corrected labeling result as the target labeling result of the continuous frame data.
2. A method for labeling continuous frame data is applied to a cloud end, and is characterized by comprising the following steps:
acquiring a labeling task, wherein the labeling task comprises the category and the position of an object to be labeled;
reading continuous frame data, performing target detection on each frame data in the continuous frame data according to the labeling task, and taking the type and position of an object to be labeled in each frame data as a detection result;
and establishing an association relation between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, wherein the association relation is used as a pre-labeling result of the continuous frame data and is used for correcting at a labeling end according to a correction instruction, and the labeling result corrected at the labeling end is a target labeling result of the continuous frame data.
3. The method of claim 2, further comprising:
and correcting the detection result based on a machine learning method to ensure that the same object to be marked has the same size, wherein the machine learning method comprises a Kalman filtering algorithm.
4. The method of claim 2, wherein the annotation task further comprises outputting a file format;
correspondingly, the method further comprises the following steps:
and generating an extensible pre-labeling file according to the pre-labeling result in the output file format, and sending the pre-labeling file and the continuous frame data to the labeling end for a labeling person to correct at the labeling end.
5. The method of any of claims 2-4, wherein the continuous frame data is a picture or a 3D lidar point cloud.
6. A device for labeling continuous frame data, applied to a cloud, characterized by comprising:
the annotation task acquisition module is configured to acquire an annotation task, wherein the annotation task comprises the category and the position of an object to be annotated;
the target detection module is configured to read continuous frame data, perform target detection on each frame data in the continuous frame data according to the labeling task, and take the category and the position of an object to be labeled in each frame data as a detection result;
and the association module is configured to establish an association relationship between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data, wherein the association relationship is used as a pre-labeling result of the continuous frame data and is used for performing correction at a labeling end according to a correction instruction, and the labeling result after correction at the labeling end is a target labeling result of the continuous frame data.
7. The apparatus of claim 6, further comprising:
and the correction module is configured to correct the detection result based on a machine learning method so that the same object to be labeled has the same size, wherein the machine learning method comprises a Kalman filtering algorithm.
8. The apparatus of claim 6, wherein the labeling task further comprises an output file format;
correspondingly, the device further comprises:
and the file generation module is configured to generate an extensible pre-labeling file from the pre-labeling result according to the output file format, and send the pre-labeling file and the continuous frame data to the labeling end for a labeling person to correct at the labeling end.
9. A method for labeling continuous frame data is applied to a labeling end, and is characterized by comprising the following steps:
acquiring a pre-labeling result of continuous frame data sent by a cloud end;
if a correction instruction for the pre-labeling result is received, correcting the labeling result according to the correction instruction, and taking the corrected labeling result as a target labeling result of the continuous frame data;
wherein the pre-labeling result is obtained as follows: after reading the continuous frame data, the cloud performs target detection on the objects to be labeled in each frame according to the labeling task to obtain a detection result, and establishes, according to the detection result and the time sequence information among the frames, an association relation among the same object to be labeled across frames; the detection result includes the category and the position of the object to be labeled;
the labeling task comprises the category and the position of an object to be labeled.
10. A device for labeling continuous frame data, applied to an annotation end, characterized by comprising:
the pre-labeling result acquisition module is configured to acquire a pre-labeling result of continuous frame data sent by a cloud end;
the target labeling result generating module is configured to, if a correction instruction for the pre-labeling result is received, correct the pre-labeling result according to the correction instruction and take the corrected labeling result as the target labeling result of the continuous frame data;
wherein the pre-labeling result is: a result obtained after the cloud end reads the continuous frame data, performs target detection on the object to be labeled in each frame data according to the labeling task to obtain a detection result, and establishes an association relationship between the same object to be labeled in each frame data according to the detection result and the time sequence information between each frame data; the detection result comprises the category and the position of the object to be labeled;
the labeling task comprises the category and the position of an object to be labeled.
CN202010041206.4A 2020-01-15 2020-01-15 Continuous frame data labeling system, method and device Active CN113127666B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010041206.4A CN113127666B (en) 2020-01-15 2020-01-15 Continuous frame data labeling system, method and device
DE112020003085.7T DE112020003085T5 (en) 2020-01-15 2020-10-16 System, method and apparatus for identifying data in consecutive frames
PCT/CN2020/121362 WO2021143230A1 (en) 2020-01-15 2020-10-16 Labeling system, method and apparatus for continuous frame data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010041206.4A CN113127666B (en) 2020-01-15 2020-01-15 Continuous frame data labeling system, method and device

Publications (2)

Publication Number Publication Date
CN113127666A true CN113127666A (en) 2021-07-16
CN113127666B CN113127666B (en) 2022-06-24

Family

ID=76771378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010041206.4A Active CN113127666B (en) 2020-01-15 2020-01-15 Continuous frame data labeling system, method and device

Country Status (3)

Country Link
CN (1) CN113127666B (en)
DE (1) DE112020003085T5 (en)
WO (1) WO2021143230A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827242B (en) * 2022-04-24 2023-10-20 深圳市元征科技股份有限公司 Method, device, equipment and medium for correcting flow control frame
CN116665177B (en) * 2023-07-31 2023-10-13 福思(杭州)智能科技有限公司 Data processing method, device, electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9747648B2 (en) * 2015-01-20 2017-08-29 Kuo-Chun Fang Systems and methods for publishing data on social media websites
US10373372B2 (en) * 2017-10-05 2019-08-06 Applications Mobiles Overview Inc. System and method for object recognition
CN108830466A * 2018-05-31 2018-11-16 长春博立电子科技有限公司 Image content semantic labeling system and method based on a cloud platform
CN109949439B (en) * 2019-04-01 2020-10-30 星觅(上海)科技有限公司 Driving live-action information labeling method and device, electronic equipment and medium
CN110674295A (en) * 2019-09-11 2020-01-10 成都数之联科技有限公司 Data labeling system based on deep learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106385640A (en) * 2016-08-31 2017-02-08 北京旷视科技有限公司 Video marking method and device
CN108491774A * 2018-03-12 2018-09-04 北京地平线机器人技术研发有限公司 Method and apparatus for tracking and labeling multiple targets in a video
CN108986134A * 2018-08-17 2018-12-11 浙江捷尚视觉科技股份有限公司 Semi-automatic video object labeling method based on correlation filtering tracking
CN109145836A (en) * 2018-08-28 2019-01-04 武汉大学 Ship target video detection method based on deep learning network and Kalman filtering
CN110084895A * 2019-04-30 2019-08-02 上海禾赛光电科技有限公司 Method and apparatus for labeling point cloud data
CN110288629A * 2019-06-24 2019-09-27 湖北亿咖通科技有限公司 Automatic target detection labeling method and device based on moving object detection

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067091A (en) * 2022-01-17 2022-02-18 深圳慧拓无限科技有限公司 Multi-source data labeling method and system, electronic equipment and storage medium
CN116681123A (en) * 2023-07-31 2023-09-01 福思(杭州)智能科技有限公司 Perception model training method, device, computer equipment and storage medium
CN116681123B (en) * 2023-07-31 2023-11-14 福思(杭州)智能科技有限公司 Perception model training method, device, computer equipment and storage medium
CN117784162A (en) * 2024-02-26 2024-03-29 安徽蔚来智驾科技有限公司 Target annotation data acquisition method, target tracking method, intelligent device and medium
CN117784162B (en) * 2024-02-26 2024-05-14 安徽蔚来智驾科技有限公司 Target annotation data acquisition method, target tracking method, intelligent device and medium

Also Published As

Publication number Publication date
CN113127666B (en) 2022-06-24
WO2021143230A1 (en) 2021-07-22
DE112020003085T5 (en) 2022-04-07

Similar Documents

Publication Publication Date Title
CN113127666B (en) Continuous frame data labeling system, method and device
CN113139559B (en) Training method of target detection model, and data labeling method and device
CN110827202A (en) Target detection method, target detection device, computer equipment and storage medium
CN113807350A (en) Target detection method, device, equipment and storage medium
CN109726195A Data enhancement method and device
CN111402332B (en) AGV composite map building and navigation positioning method and system based on SLAM
CN114998856B (en) 3D target detection method, device, equipment and medium for multi-camera image
CN111597857A (en) Logistics package detection method, device and equipment and readable storage medium
CN115223166A (en) Picture pre-labeling method, picture labeling method and device, and electronic equipment
CN112671487A (en) Vehicle testing method, server and testing vehicle
CN110717141A (en) Lane line optimization method and device and storage medium
CN117115823A (en) Tamper identification method and device, computer equipment and storage medium
CN109978043B (en) Target detection method and device
CN117079238A (en) Road edge detection method, device, equipment and storage medium
CN115810115A (en) Image and multi-frame millimeter wave radar target fusion method based on image characteristics
CN115082523A (en) Vision-based robot intelligent guiding system and method
CN113903029A (en) Method and device for marking 3D frame in point cloud data
US11420325B2 (en) Method, apparatus and system for controlling a robot, and storage medium
CN112766487A (en) Target detection model updating method and server
CN110619354A (en) Image recognition system and method for unmanned sales counter
CN114779271B (en) Target detection method and device, electronic equipment and storage medium
CN113111677B (en) Bar code reading method, device, equipment and medium
CN115407800B (en) Unmanned aerial vehicle inspection method in agricultural product storage fresh-keeping warehouse
US20240101149A1 (en) Apparatus and method of automatically detecting dynamic object recognition errors in autonomous vehicles
CN117788636A (en) Multi-marking generation method, device, electronic equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211122

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

GR01 Patent grant