CN113407083A - Data labeling method and device, electronic equipment and storage medium

Info

Publication number: CN113407083A
Authority: CN (China)
Prior art keywords: tool, labeling, target, marking, annotation
Legal status: Pending
Application number: CN202110704502.2A
Other languages: Chinese (zh)
Inventor: 牛菜梅
Assignee (current and original): Shanghai Sensetime Technology Development Co Ltd
Application filed by Shanghai Sensetime Technology Development Co Ltd
Priority application: CN202110704502.2A
Publication: CN113407083A
PCT application: PCT/CN2021/126182 (published as WO2022267279A1)

Classifications

    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance (G: Physics; G06: Computing; G06F: Electric digital data processing)
    • G06F 9/451: Execution arrangements for user interfaces
    • G06T 11/20: 2D image generation; drawing from basic elements, e.g. lines or circles
    • G06T 11/60: 2D image generation; editing figures and text; combining figures or text

Abstract

The disclosure relates to a data labeling method and device, an electronic device, and a storage medium. The method is applied to the field of intelligent automobiles and comprises the following steps: acquiring an object to be labeled; presenting, on a user interaction interface, the target labeling tool that corresponds, among a plurality of labeling tools, to the labeling task type of the object to be labeled; and obtaining the labeling information of the object to be labeled in response to a labeling operation executed with the target labeling tool. The embodiments of the disclosure can quickly and accurately provide a labeling tool according to the labeling task type and can determine, for traditional automobile enterprises, the data labeling scheme required during intelligent automobile research and development, thereby filling the data labeling gap that arises when traditional automobile enterprises adopt artificial intelligence technology and reducing the development resources invested in research and development.

Description

Data labeling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data annotation method and apparatus, an electronic device, and a storage medium.
Background
With the practical application of artificial intelligence technology in traditional industrial fields, the automobile industry has also begun to evolve in new directions. Directions such as automatic driving and the intelligent cockpit are gradually entering automobile research and development, and in these directions data annotation is indispensable.
However, traditional vehicle enterprises lack systematic artificial intelligence solutions, so their in-house data annotation schemes are overly scattered: a one-off annotation scheme is often formulated for each individual detection or recognition task. In addition, these enterprises have limited research and development capacity, making it difficult for them to quickly and accurately build the annotation tool platform and data annotation scheme required by their own automatic driving and intelligent cockpit development. A complete, systematic data annotation solution is therefore needed to fill the data annotation gap that arises when traditional vehicle enterprises adopt artificial intelligence technology.
Disclosure of Invention
The present disclosure provides a technical solution for data annotation.
According to one aspect of the disclosure, a data annotation method is provided, which is applied to the field of intelligent automobiles, and comprises the following steps:
acquiring an object to be labeled;
presenting, on a user interaction interface, a target labeling tool that corresponds, among a plurality of labeling tools, to the labeling task type of the object to be labeled;
and obtaining the labeling information of the object to be labeled in response to a labeling operation executed with the target labeling tool.
In a possible implementation manner, presenting, on the user interaction interface, the target labeling tool that corresponds, among the plurality of labeling tools, to the labeling task type of the object to be labeled includes:
receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining the target data field to which the object to be labeled belongs, and the target data field includes the field of intelligent automobiles;
and, according to the determined target data field, determining from a tool set a target labeling tool for labeling the object to be labeled in the target data field, opening the use permission of the target labeling tool, and displaying the tool on a labeling interface for the user to use.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes:
and responding to the selection operation of the user on the plurality of marking tools, and taking the marking tool selected by the user as a target marking tool.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes:
identifying the labeling task type of the object to be labeled to obtain a target labeling task type;
and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In a possible implementation manner, the obtaining, in response to a labeling operation executed based on the target labeling tool, labeling information of the object to be labeled includes:
in response to the position marking operation executed based on the target marking tool, taking the marked position as the position of the marked target object;
and taking preset default attributes as the attributes of the labeled target object.
In a possible implementation manner, the labeling task type of the object to be labeled includes a lane line labeling task, and the target labeling tool corresponding to the lane line labeling task includes a line tool and a lane line attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a line drawing operation based on the line tool, taking an area indicated by a drawn line as a position where a marked lane line is located;
and determining the attribute of the marked lane line in response to the lane line attribute selection operation based on the lane line attribute marking tool.
In one possible implementation manner, the labeling task type of the object to be labeled includes a drivable region labeling task, and the target labeling tool corresponding to the drivable region labeling task includes a polygon tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
and in response to the polygon drawing operation based on the polygon tool, taking the area where the drawn polygon is as the position of the marked travelable area.
In a possible implementation manner, the labeling task type of the object to be labeled includes a target object labeling task, and a target labeling tool corresponding to the target object labeling task includes a rectangular frame tool and a target object attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where a marked target object is located;
and determining the attribute of the target object marked in the objects to be marked in response to the attribute selection operation of the target object attribute marking tool.
In one possible implementation manner, the object to be labeled is a point cloud image, the labeling task type includes a point cloud object labeling task, the target labeling tool corresponding to the point cloud object labeling task includes a plurality of point cloud object framing tools, and the plurality of point cloud object framing tools correspond to different point cloud object attributes;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
determining a target point cloud object framing tool of the plurality of point cloud object framing tools in response to a point cloud object framing tool selection operation;
and responding to the framing operation based on the target point cloud object framing tool, taking the framed area as the position of the marked point cloud object, and taking the point cloud object attribute corresponding to the target point cloud framing tool as the attribute of the marked point cloud object.
In one possible implementation manner, the annotation task type includes a passenger attribute annotation task, and the target annotation tool corresponding to the passenger attribute annotation task includes a rectangular frame tool and a passenger attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as a position where the marked passenger is located;
at least one attribute of the tagged passenger is determined in response to a passenger attribute selection operation based on the passenger attribute tagging tool.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked face;
and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In one possible implementation manner, the annotation task type includes a gesture annotation task, and the target annotation tool corresponding to the gesture annotation task includes a rectangular box tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where the marked hand is located;
and determining key points of the marked hand in response to key point marking operation based on the key point marking tool.
In one possible implementation manner, the tagging task type includes a driving fatigue tagging task, and the target tagging tool corresponding to the driving fatigue tagging task includes a rectangular frame tool and a fatigue attribute tagging tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position of a labeled fatigue detection target;
and determining the attribute of the labeled fatigue detection target in response to an attribute selection operation based on the fatigue attribute labeling tool.
In one possible implementation manner, the type of the labeling task includes a dangerous driving labeling task, and the target labeling tool corresponding to the dangerous driving labeling task includes a rectangular frame tool and a dangerous attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position of the marked dangerous driving detection target;
and determining the attribute of the marked dangerous driving detection target in response to the attribute selection operation based on the dangerous attribute marking tool.
According to an aspect of the disclosure, a data annotation device is provided, which is applied to the field of intelligent automobiles and includes:
the annotation object acquisition unit is used for acquiring an object to be annotated;
the target marking tool display unit is used for presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and the marking information determining unit is used for responding to the marking operation executed based on the target marking tool to obtain the marking information of the object to be marked.
In one possible implementation manner, the target labeling tool presentation unit includes:
the data field determining unit is used for receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises the field of intelligent automobiles;
and the target marking tool display subunit is used for determining a target marking tool for marking the object to be marked in the target data field from the tool set according to the determined target data field, opening the use permission of the target marking tool and displaying the target marking tool on a marking interface for a user to use.
In one possible implementation, the apparatus further includes:
and the first target marking tool determining unit is used for responding to the selection operation of the user on the plurality of marking tools and taking the marking tool selected by the user as the target marking tool.
In one possible implementation, the apparatus further includes:
the second target labeling tool determining unit is used for identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In a possible implementation manner, the annotation information determining unit includes:
a target object position determination unit configured to take the annotated position as a position of the annotated target object in response to a position annotation operation performed based on the target annotation tool;
and the target object attribute determining unit is used for taking preset default attributes as the attributes of the labeled target object.
In a possible implementation manner, the labeling task type of the object to be labeled includes a lane line labeling task, and the target labeling tool corresponding to the lane line labeling task includes a line tool and a lane line attribute labeling tool;
the marking information determining unit is used for responding to the line drawing operation based on the line tool and taking the area indicated by the drawn line as the position of the marked lane line; and determining the attribute of the marked lane line in response to the lane line attribute selection operation based on the lane line attribute marking tool.
In one possible implementation manner, the labeling task type of the object to be labeled includes a drivable region labeling task, and the target labeling tool corresponding to the drivable region labeling task includes a polygon tool;
and the marking information determining unit is used for responding to the polygon drawing operation based on the polygon tool and taking the area where the drawn polygon is as the position of the marked travelable area.
In a possible implementation manner, the labeling task type of the object to be labeled includes a target object labeling task, and a target labeling tool corresponding to the target object labeling task includes a rectangular frame tool and a target object attribute labeling tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked target object; and determining the attribute of the target object marked in the objects to be marked in response to the attribute selection operation of the target object attribute marking tool.
In one possible implementation manner, the object to be labeled is a point cloud image, the labeling task type includes a point cloud object labeling task, the target labeling tool corresponding to the point cloud object labeling task includes a plurality of point cloud object framing tools, and the plurality of point cloud object framing tools correspond to different point cloud object attributes;
the marking information determining unit is used for responding to the point cloud object framing tool selecting operation and determining a target point cloud object framing tool in the point cloud object framing tools; and responding to the framing operation based on the target point cloud object framing tool, taking the framed area as the position of the marked point cloud object, and taking the point cloud object attribute corresponding to the target point cloud framing tool as the attribute of the marked point cloud object.
In one possible implementation manner, the annotation task type includes a passenger attribute annotation task, and the target annotation tool corresponding to the passenger attribute annotation task includes a rectangular frame tool and a passenger attribute annotation tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked passenger; at least one attribute of the tagged passenger is determined in response to a passenger attribute selection operation based on the passenger attribute tagging tool.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the labeling information determining unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the labeled human face; and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In one possible implementation manner, the annotation task type includes a gesture annotation task, and the target annotation tool corresponding to the gesture annotation task includes a rectangular box tool and a key point annotation tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked hand; and determining key points of the marked hand in response to key point marking operation based on the key point marking tool.
In one possible implementation manner, the tagging task type includes a driving fatigue tagging task, and the target tagging tool corresponding to the driving fatigue tagging task includes a rectangular frame tool and a fatigue attribute tagging tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked fatigue detection target; and determining the attribute of the labeled fatigue detection target in response to the key point labeling operation based on the fatigue attribute labeling tool.
In one possible implementation manner, the type of the labeling task includes a dangerous driving labeling task, and the target labeling tool corresponding to the dangerous driving task includes a rectangular frame tool and a dangerous attribute labeling tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked dangerous driving detection target; and determining the attribute of the marked dangerous driving detection target in response to the attribute selection operation based on the dangerous attribute marking tool.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the disclosure, after the object to be labeled is obtained, the target labeling tool that corresponds, among the plurality of labeling tools, to the labeling task type of the object to be labeled may be presented on the user interaction interface, and the user may execute a labeling operation with the target labeling tool to obtain the labeling information of the object to be labeled. A labeling tool can therefore be provided quickly and accurately according to the labeling task type, and the data labeling scheme required during intelligent automobile research and development can be determined for traditional automobile enterprises, thereby filling the data labeling gap that arises when traditional automobile enterprises adopt artificial intelligence technology and reducing the development resources invested in research and development.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow chart of a data annotation method according to an embodiment of the disclosure.
FIG. 2 shows a block diagram of a data annotation device according to an embodiment of the disclosure.
Fig. 3 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
With the continuous integration of artificial intelligence technology into traditional industry, combining artificial intelligence applications with the whole-vehicle system has become inevitable. In the related art, the data annotation schemes required for training computer vision models are scattered and fragmented, and the annotation scheme differs from one recognition task to the next. Constrained by its research and development capacity and scale, the traditional automobile industry as a whole is easily lost among the bewildering annotation schemes of the related art, and a mismatch between the specific application scenario and the annotation method often causes excessive development resources to be invested during research and development, resulting in waste.
To address these practical problems, the embodiments of the present disclosure provide a data annotation scheme that can be applied to annotation tasks in a variety of scenarios to obtain annotation information. It can be widely applied to the annotation requirements of the intelligent automobile field, saving both time and labor.
The data labeling method provided by the embodiments of the disclosure can label samples for neural network training in the intelligent automobile industry: the labeling information obtained can be used to train neural networks, and services such as automatic driving and the intelligent cockpit can then be realized based on the trained networks. Specific annotation scenarios may include, for example: lane line labeling, travelable region labeling, target object labeling, point cloud object labeling, passenger attribute labeling, face labeling, gesture labeling, driving fatigue labeling, and dangerous driving labeling.
In a possible implementation manner, the data annotation method may be performed by an electronic device such as a terminal device or a server. The terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
For convenience of description, in one or more embodiments of the present specification the execution subject of the data labeling method may be an annotation platform, and the implementation of the method is described below taking the annotation platform as the execution subject. It should be understood that running the method on an annotation platform is only an exemplary illustration and should not be construed as limiting the method.
The labeling method of the embodiments of the present disclosure can also be applied to other technical fields involving the various labeling task types; the embodiments of the present disclosure do not specifically limit this. The following description mainly takes the smart car domain as the target domain.
Fig. 1 shows a flowchart of a data annotation method according to an embodiment of the present disclosure, and as shown in fig. 1, the data annotation method includes:
in step S11, an object to be labeled is acquired.
The user can load the object to be labeled through a menu of the user interaction interface, or drag it directly into a designated area of the user interaction interface; loading the object to be labeled completes its acquisition.
The object to be labeled can be one or more images, or a single frame or multiple frames of a video. In the subsequent steps, the labeling operation for target objects is performed on the object to be labeled.
The content displayed in the object to be annotated includes the target objects to be labeled, and the target objects to be labeled can differ between application scenarios.
In step S12, a target annotation tool corresponding to the annotation task type of the object to be annotated in the plurality of annotation tools is presented to the user interaction interface.
After the object to be labeled is obtained, the user can determine the type of the labeling task. Annotation task types can include lane line detection, travelable area segmentation, target detection, point cloud labeling, and the like. A suitable labeling tool is then selected as the target labeling tool according to the determined annotation task type.
Each type of annotation task may correspond to at least one target annotation tool. The target labeling tool here may be a control for labeling a target object in the object to be labeled, such as a line tool, a polygon tool, or a rectangular box tool.
These controls may be presented at preset locations in the user interaction interface.
In step S13, in response to the labeling operation performed based on the target labeling tool, the labeling information of the object to be labeled is obtained.
Using the target labeling tool presented in step S12, the user performs a labeling operation on the object to be labeled obtained in step S11 according to the labeling task, thereby labeling the target object. The labeling operation can be understood as follows: the user uses the target labeling tool to determine points that identify the target object and the connecting lines between those points, determines the position or range of the target object from the points and lines, and labels the object's attributes. After the labeling operation is completed, labeling information such as the position, range, and attributes of the labeled object is obtained.
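For illustration, the labeling information produced in step S13 can be pictured as a simple record of position plus attributes. The following minimal Python sketch shows one such structure; the field names are assumptions chosen for the example, not a data format defined by this disclosure.

```python
# Illustrative sketch only: a minimal container for the labeling information
# described in step S13 (position/range plus attributes). Field names are
# assumptions, not the disclosure's actual data format.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    task_type: str                           # e.g. "lane_line", "face"
    position: list[tuple[float, float]]      # points and their connecting lines
    attributes: dict[str, str] = field(default_factory=dict)

# Example: a lane line annotated by three points with one attribute.
lane = Annotation("lane_line", [(10.0, 5.0), (12.0, 40.0), (14.0, 80.0)],
                  {"type": "white_solid"})
```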
In the embodiments of the disclosure, after the object to be labeled is obtained, the target labeling tool that corresponds, among the plurality of labeling tools, to the labeling task type of the object to be labeled may be presented on the user interaction interface, and the user may execute a labeling operation with the target labeling tool to obtain the labeling information of the object to be labeled. A labeling tool can therefore be provided quickly and accurately according to the labeling task type, and the data labeling scheme required during intelligent automobile research and development can be determined for traditional automobile enterprises, thereby filling the data labeling gap that arises when traditional automobile enterprises adopt artificial intelligence technology and reducing the development resources invested in research and development.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes: and responding to the selection operation of the user on the plurality of marking tools, and taking the marking tool selected by the user as a target marking tool.
After the object to be labeled is obtained, the user can select a target labeling tool from the plurality of labeling tools according to characteristics of the object such as its shape. For example, a rectangular box tool can be selected to frame-select traffic lights, guideboards, and signboards, while a line tool can be selected to label lane lines.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes: identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In this implementation, the target annotation task type can be determined from the application scenario of the annotation task set by the user, or determined from the content of the image or video to be annotated using image recognition technology. For example, if the application scenario set by the user is detecting whether the driver is fatigued, the annotation task type can be determined to be driver fatigue detection. As another example, if the content of the object (image) to be labeled is a road surface, the annotation task type can be determined to be lane line detection.
In a possible implementation manner, a "labeling task" menu may also be displayed in a tab of the user interaction interface, and the annotation task type selected by the user from this menu is received as the target annotation task type.
To adapt to different labeling task types, a correspondence between target labeling tools and labeling task types can be established in advance. Based on this correspondence, the target labeling tool corresponding to the determined target labeling task type can be identified and presented on the user interaction interface.
Illustratively, when the determined target labeling task type is "drivable region segmentation", a polygon tool can be used as the tool for labeling the travelable area.
In the embodiments of the disclosure, by classifying labeling tasks and associating labeling task types with labeling tools, the corresponding target labeling tool can be obtained as soon as the target labeling task type is determined, improving labeling efficiency.
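As a concrete illustration of such a preset correspondence, the sketch below maps labeling task types to labeling tools and looks them up. Every identifier here is a hypothetical name chosen for the example, not a name used by this disclosure.

```python
# Illustrative sketch only: a preset correspondence between labeling task
# types and labeling tools. All identifiers are hypothetical.
TASK_TOOL_MAP = {
    "lane_line":           ["line_tool", "lane_line_attribute_tool"],
    "drivable_region":     ["polygon_tool"],
    "target_object":       ["rect_box_tool", "object_attribute_tool"],
    "point_cloud_object":  ["point_cloud_framing_tool"],
    "passenger_attribute": ["rect_box_tool", "passenger_attribute_tool"],
    "face":                ["rect_box_tool", "keypoint_tool"],
    "gesture":             ["rect_box_tool", "keypoint_tool"],
    "driving_fatigue":     ["rect_box_tool", "fatigue_attribute_tool"],
    "dangerous_driving":   ["rect_box_tool", "danger_attribute_tool"],
}

def tools_for_task(task_type: str) -> list[str]:
    """Look up the target labeling tools for a recognized task type."""
    if task_type not in TASK_TOOL_MAP:
        raise ValueError(f"no labeling tool registered for {task_type!r}")
    return TASK_TOOL_MAP[task_type]
```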
In a possible implementation manner, presenting, on the user interaction interface, the target labeling tool that corresponds, among the plurality of labeling tools, to the labeling task type of the object to be labeled includes: receiving a data field selection instruction input by the user, wherein the instruction determines the target data field to which the object to be labeled belongs, and the target data field includes the field of intelligent automobiles; and, according to the determined target data field, determining from a tool set a target labeling tool for labeling the object to be labeled in the target data field, opening the use permission of the target labeling tool, and displaying the tool on a labeling interface for the user to use.
The data labeling method provided by the disclosure is applicable to labeling tasks in various data fields. A data field may be any field to which artificial intelligence technology is applied, such as intelligent automobiles, intelligent retail, or intelligent security. Because the labeling tools used in a given data field are usually fixed, in this implementation the target labeling tools for labeling the object to be labeled can be determined from the tool set according to the target data field to which the object belongs, improving labeling efficiency.
The target data field can be determined from a data field selection instruction input by the user, so that the user can select the target data field to which the object to be labeled belongs. For example, when the object to be labeled belongs to the smart car domain, the user may select the smart car domain on the user operation interface.
In this implementation, the tool set may include multiple tools, such as a line tool, a polygon tool, a rectangular box tool, and attribute labeling tools. To determine the target labeling tool from the data field, a correspondence between data fields and labeling tools may be established in advance; that is, the labeling tools corresponding to each data field are determined beforehand. After the target data field is determined, the labeling tools corresponding to it can be selected from the tool set as the target labeling tools according to this correspondence.
For the determined target labeling tool, its use permission is opened and the tool is displayed in the labeling interface, so that an annotator can execute the labeling task with it.
In the embodiments of the disclosure, the target data field is determined from the data field selection instruction input by the user, and the target labeling tools for labeling objects in that field are then determined from the tool set for the user to use. The tools required for a labeling task can thus be provided to the user quickly, greatly reducing labeling time and labor cost and improving the convenience of the labeling tools.
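A hypothetical sketch of this field-to-tools step follows: given a data field selection, the tools registered for that field are taken from the full tool set and enabled. All field and tool names below are assumptions for illustration.

```python
# Illustrative sketch only: selecting and "opening" the tool subset for a
# target data field. Field and tool names are assumptions.
DOMAIN_TOOL_MAP = {
    "smart_car":      {"line_tool", "polygon_tool", "rect_box_tool",
                       "keypoint_tool", "point_cloud_framing_tool"},
    "smart_retail":   {"rect_box_tool", "keypoint_tool"},
    "smart_security": {"rect_box_tool", "polygon_tool"},
}

def open_tools_for_domain(domain: str, tool_set: set[str]) -> set[str]:
    """Return the tools from the full tool set enabled for the target field.

    In a real platform the returned tools would be unlocked and rendered
    on the labeling interface for the user.
    """
    return DOMAIN_TOOL_MAP.get(domain, set()) & tool_set
```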
In a possible implementation manner, obtaining the labeling information of the object to be labeled in response to the labeling operation executed with the target labeling tool includes: in response to a position labeling operation executed with the target labeling tool, taking the labeled position as the position of the labeled target object; and taking a preset default attribute as the attribute of the labeled target object.
The position of the target object may be labeled with a polygon tool, a rectangular box tool, or the like; the user determines the position of the labeled target object by operating such a tool.
To reduce labeling time and labor cost, acquiring the position of the target object and assigning its attribute can be merged into a single step. A default attribute is preset, either by a developer or by the annotator, and when the position of the target object is labeled with a tool such as the polygon or rectangular box tool, the default attribute is automatically assigned to the labeled position. This greatly reduces labeling time and labor cost and improves the convenience of the labeling tool.
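The one-step flow can be sketched as follows: finishing the drawing operation immediately yields both the position and the preset default attribute. Names and the record layout are illustrative assumptions.

```python
# Illustrative sketch only: merging position labeling and attribute
# assignment into one step via a preset default attribute.
DEFAULT_ATTRIBUTE = "pedestrian"   # preset by a developer or the annotator

def on_shape_completed(points: list[tuple[float, float]]) -> dict:
    """Build an annotation record the moment a shape is finished."""
    return {
        "position": points,               # vertices of the drawn shape
        "attribute": DEFAULT_ATTRIBUTE,   # assigned without an extra step
    }
```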
Next, possible implementations of the data annotation method according to the embodiments of the present disclosure are exemplarily described according to the types of the plurality of annotation tasks.
In a possible implementation manner, the labeling task type of the object to be labeled includes a lane line labeling task, and the target labeling tool corresponding to the lane line labeling task includes a line tool and a lane line attribute labeling tool; obtaining the labeling information of the object to be labeled in response to the labeling operation executed with the target labeling tool includes: in response to a line drawing operation with the line tool, taking the area indicated by the drawn line as the position of the labeled lane line; and determining the attribute of the labeled lane line in response to a lane line attribute selection operation with the lane line attribute labeling tool.
Illustratively, when labeling an image or video used for lane line detection, the image or video frame to be labeled is displayed in the user interaction interface by loading or importing it. Because the labeled objects are lane lines, a correspondence between the "lane line labeling task" type and the line tool can be preset. Based on this correspondence, when the labeling task type is determined to be the lane line labeling task, the line tool can be presented in the user interaction interface.
The user can select the line tool in the user interaction interface, perform a line drawing operation, and then select the attribute of the line, which characterizes the type of the lane line, for example: white solid line, white dashed line, left lane line, or right lane line. Alternatively, the user may select the line attribute first and then perform the line drawing operation.
During the line drawing operation, the user may take either end of the lane line to be labeled as the starting end; specifically, the midpoint of the starting end across the width of the lane line can serve as the starting point. The starting point's position can be set by clicking the left mouse button, and further points representing the lane line are added along the line in the same way, with each point falling on the lane line. The other end of the lane line serves as the end point; when selecting it, the user can click the right mouse button to finish labeling the lane line.
After the lane line in the image or video is labeled, labeling information such as the position of the lane line (the coordinates identifying each of its points) and its attributes is obtained.
In the embodiments of the disclosure, the labeling information of a lane line is obtained as soon as the labeling of its position is completed, improving the efficiency of lane line labeling.
From a cost-saving perspective, lane lines may also be labeled with the polygon tool. The lane line task then shares the same tool with the other labeling tasks of the present disclosure in which the polygon tool can be used, saving development cost.
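The click sequence described above can be sketched as a small state holder: left clicks append points along the lane line, and a right click adds the end point and emits the record. The event handler names and attribute values are assumptions for illustration.

```python
# Illustrative sketch only: the lane line drawing flow. Left clicks add
# points along the line; a right click adds the end point and finishes.
class LaneLineTool:
    def __init__(self, attribute: str = "white_solid"):
        self.attribute = attribute    # e.g. white_solid, white_dashed, left, right
        self.points: list[tuple[float, float]] = []

    def on_left_click(self, x: float, y: float) -> None:
        self.points.append((x, y))    # a point lying on the lane line

    def on_right_click(self, x: float, y: float) -> dict:
        self.points.append((x, y))    # the end point completes the label
        return {"position": self.points, "attribute": self.attribute}
```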
In one possible implementation manner, the labeling task type of the object to be labeled includes a drivable region labeling task, and the target labeling tool corresponding to the drivable region labeling task includes a polygon tool; obtaining the labeling information of the object to be labeled in response to the labeling operation executed with the target labeling tool includes: in response to a polygon drawing operation with the polygon tool, taking the area covered by the drawn polygon as the position of the labeled travelable area.
Illustratively, when labeling the travelable area of an image or video, the image or video frame to be labeled is displayed in the user interaction interface by loading or importing it. In most cases the travelable region can be represented as a polygon, so a correspondence between the polygon tool and the "drivable region labeling" task type can be preset. Based on this correspondence, when the labeling task type is determined to be drivable region labeling, the polygon tool can be presented in the user interaction interface. In addition, a correspondence between a scene segmentation tool and the "drivable region labeling" task type can be preset, so that the scene segmentation tool is also presented in the interface once that task type is determined.
The scene segmentation tool is a pre-integrated module: the user roughly traces the boundary of the object to be segmented, and the tool then accurately determines the boundary between the object and the background and segments the object. In the embodiments of the present disclosure, the object to be segmented may be the vehicle travelable region.
The attributes of the polygon may be drivable region and non-drivable region, and the user selects one according to the annotation task. If only a single category of region needs to be annotated in a task, for example only drivable regions or only non-drivable regions, the labeled polygon has only one possible attribute; in that case the user does not need to select the polygon's attribute: the default attribute is assigned directly, and the user labels the regions corresponding to it.
During the polygon drawing operation, taking travelable area labeling as an example, the user can choose any point on the boundary of the travelable area as the starting point and then add, in order, a number of points indicating the travelable area; these may be inflection points on the boundary line or any other points on it. Clicking the right mouse button adds the last point, which is automatically connected back to the starting point to form a polygon, completing the drawing operation. The closed area obtained by this operation can be determined as the travelable area.
By this method, each travelable area can be determined on the image while labeling information such as its range (coordinates) and attributes is obtained, improving the efficiency of travelable area labeling.
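As a sketch of the polygon operation, the last point is implicitly connected back to the starting point, and the enclosed area can be computed with the shoelace formula. The record layout and function names are assumptions.

```python
# Illustrative sketch only: closing a drawn polygon into a travelable-area
# record, with the enclosed area computed by the shoelace formula.
def close_polygon(points: list[tuple[float, float]],
                  attribute: str = "travelable") -> dict:
    if len(points) < 3:
        raise ValueError("a polygon needs at least three vertices")
    # The final right click implicitly connects points[-1] back to points[0].
    return {"position": points, "attribute": attribute,
            "area": polygon_area(points)}

def polygon_area(points: list[tuple[float, float]]) -> float:
    """Shoelace formula (assumes the boundary does not self-intersect)."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0
```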
In a possible implementation manner, the labeling task type of the object to be labeled includes a target object labeling task, and a target labeling tool corresponding to the target object labeling task includes a rectangular frame tool and a target object attribute labeling tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where a marked target object is located; and determining the attribute of the target object marked in the objects to be marked in response to the attribute selection operation of the target object attribute marking tool.
In the field of automotive automatic driving, target detection may include, for example, detection of signal lights, traffic signs, road surface markings, pedestrians, and vehicles.
In the embodiments of the present disclosure, the rectangular box tool may be associated in advance with the "target object labeling" task type. After the object to be labeled is loaded and the target labeling task type is determined to be target object labeling, the rectangular box tool can be presented in the user interaction interface. The user may configure the attributes of the rectangular box tool, which may include pedestrian, motor vehicle, non-motor vehicle, and so on. The user can select the box's attribute first and then draw the rectangular box with the tool, or draw the rectangular box first and then configure its attributes.
During rectangular box drawing, taking a box with the pedestrian attribute as an example, the user clicks the left mouse button to create a starting point, frames the pedestrian in the image, and clicks the left mouse button again to finish the labeling operation. When the box is finished, its top, bottom, left, and right sides are all tangent to the framed pedestrian. Taking a rectangular coordinate system as an example: among all points indicating the pedestrian's position, the point with the largest y value lies on the top side of the box, the point with the smallest y value on the bottom side, the point with the largest x value on the right side, and the point with the smallest x value on the left side.
Pedestrians can then be framed one by one in this way, with the box attribute configured as pedestrian, completing the pedestrian labeling.
Other target objects can be labeled by the same method, and their labeling information is obtained by selecting their respective attributes, improving the efficiency of target object labeling.
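The tangency property has a simple worked form: the box sides are the minima and maxima of the object's point coordinates. A sketch with made-up coordinates:

```python
# Illustrative sketch only: a tight box whose four sides pass through the
# object's extreme points, as described above.
def tight_box(points: list[tuple[float, float]]) -> dict:
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return {"left": min(xs), "right": max(xs),   # smallest/largest x
            "bottom": min(ys), "top": max(ys)}   # smallest/largest y

# Worked example with made-up points:
# tight_box([(2, 1), (5, 4), (3, 7)])
#   -> {'left': 2, 'right': 5, 'bottom': 1, 'top': 7}
```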
In one possible implementation manner, the object to be labeled is a point cloud image, the labeling task type includes a point cloud object labeling task, the target labeling tool corresponding to the point cloud object labeling task includes a plurality of point cloud object framing tools, and the plurality of point cloud object framing tools correspond to different point cloud object attributes; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: determining a target point cloud object framing tool of the plurality of point cloud object framing tools in response to a point cloud object framing tool selection operation; and responding to the framing operation based on the target point cloud object framing tool, taking the framed area as the position of the marked point cloud object, and taking the point cloud object attribute corresponding to the target point cloud framing tool as the attribute of the marked point cloud object.
Illustratively, the user loads or imports a point cloud image into the user interaction interface. A point cloud image is a set of points in a coordinate system; it includes the position of each point, and some point cloud images also include color information, reflection intensity information, and so on.
In the embodiments of the present disclosure, a correspondence between the point cloud object framing tools and the "point cloud object labeling task" type may be established in advance. Based on this correspondence, once the target labeling task type is determined to be the point cloud object labeling task, the point cloud object framing tools can be presented in the user interaction interface. The user may then configure the attributes of a framing tool, which may include, for example, person, tree, and vehicle.
During the framing operation, the user clicks the left mouse button to create a starting point, drags out a rectangular frame to frame the target object in the point cloud image, and clicks the left mouse button again to complete the operation. When the framing is finished, the frame is tangent to the framed target object. It can be understood that if the object to be labeled is a 3D point cloud image, the frame is a cuboid, and each of its faces is tangent to the target object.
By this method, each point cloud object in the point cloud image can be framed and its corresponding attribute selected, yielding the labeling information of the point cloud objects and improving the efficiency of point cloud object labeling.
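For a 3D point cloud the same idea extends to a cuboid tangent on every face. The sketch below assumes the selected object's points are given as an (N, 3) array; the record layout and names are illustrative.

```python
# Illustrative sketch only: an axis-aligned 3D box tangent to the framed
# point cloud object on every face, carrying the tool's attribute.
import numpy as np

def frame_point_cloud(points: np.ndarray, tool_attribute: str) -> dict:
    """points: (N, 3) array of the selected object's x, y, z coordinates."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return {
        "box_min": lo.tolist(),       # each face touches an extreme point
        "box_max": hi.tolist(),
        "attribute": tool_attribute,  # e.g. person, tree, vehicle
    }
```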
The data labeling method provided by the disclosure can be used not only for data labeling tasks in the field of automotive automatic driving but also for data labeling tasks in the field of the automotive intelligent cockpit. Several possible implementations are described in detail below.
In one possible implementation manner, the annotation task type includes a passenger attribute annotation task, and the target annotation tool corresponding to the passenger attribute annotation task includes a rectangular frame tool and a passenger attribute annotation tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as a position where the marked passenger is located; at least one attribute of the tagged passenger is determined in response to a passenger attribute selection operation based on the passenger attribute tagging tool.
In the embodiment of the present disclosure, a correspondence between the "rectangular frame tool and passenger attribute labeling tool" and the labeling task type of "passenger attribute labeling task" may be established in advance. Based on this correspondence, when the labeling task type is the passenger attribute labeling task, it can be determined that the target labeling tool corresponding to the passenger attribute labeling task comprises a rectangular frame tool and a passenger attribute labeling tool, and these tools are presented in the user interface. The user may select the rectangular frame tool and then select the attributes of the rectangular frame in the passenger attribute labeling tool, such as riding position, age, mood, wearing a mask, and the like. In addition, in the passenger attribute labeling tool, multiple secondary attributes may be set for each attribute; for example, for the "age" attribute, the secondary attributes may be set to 10-20 years, 20-30 years, and so on.
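For example, the primary and secondary attributes described above can be sketched as a small schema against which selections are validated. This is a minimal illustrative sketch; the attribute names and value ranges are hypothetical examples:

```python
# Primary attributes mapped to their allowed secondary attributes.
PASSENGER_ATTRIBUTE_SCHEMA = {
    "riding_position": ["driver_seat", "front_passenger", "rear_left", "rear_right"],
    "age": ["10-20", "20-30", "30-40"],
    "mood": ["calm", "happy", "angry"],
    "wearing_mask": ["yes", "no"],
}

def validate_selection(attribute: str, value: str) -> bool:
    """Check that a selected secondary attribute belongs to its primary attribute."""
    return value in PASSENGER_ATTRIBUTE_SCHEMA.get(attribute, [])

assert validate_selection("age", "20-30")
assert not validate_selection("age", "unknown")
```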
Then, the user performs a rectangular frame drawing operation to frame the position of the passenger using the rectangular frame tool; for the specific process of the rectangular frame drawing operation, reference may be made to the related description of the present disclosure, which is not repeated here. After framing is completed, the framed passenger lies within the corresponding rectangular frame, which is tangent to the passenger's edges, and the framed passenger is simultaneously assigned the corresponding attributes.
Alternatively, the rectangular frame tool may first be used to frame the passenger, and the attributes of the rectangular frame may be set afterwards.
According to the method, each passenger in the image can be framed and assigned the corresponding attributes, so that the labeling information of the passenger is obtained and the efficiency of passenger attribute labeling is improved.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked face; and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In the embodiment of the present disclosure, a correspondence between the "rectangular frame tool and key point labeling tool" and the labeling task type of "face labeling task" may be established in advance. Based on this correspondence, when the labeling task type is the face labeling task, it can be determined that the target labeling tools corresponding to the face labeling task comprise a rectangular frame tool and a key point labeling tool, and these tools are presented in the user interface. The user can select the rectangular frame tool and frame the position of the face through a rectangular frame drawing operation; for the specific process of the rectangular frame drawing operation, reference may be made to the related description of the present disclosure, which is not repeated here. The target face is framed to determine the position of the face, so that the rectangular frame is tangent to the edges of the face. Then, the key point labeling tool is opened and the number of face key points is set, for example, a 5-point or 106-point face scheme. The key point labeling tool is then used on the framed target face to label the positions of the facial features.
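As an illustration, a face annotation produced by this flow pairs one rectangular frame with a fixed-size key point scheme. The following minimal sketch (hypothetical names and coordinates) shows such a record:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaceAnnotation:
    box: Tuple[float, float, float, float]     # x_min, y_min, x_max, y_max
    scheme: int                                # e.g. 5 or 106 key points
    keypoints: List[Tuple[float, float]] = field(default_factory=list)

    def add_keypoint(self, x: float, y: float) -> None:
        """Append one labeled key point, capped by the chosen scheme."""
        if len(self.keypoints) >= self.scheme:
            raise ValueError(f"scheme allows only {self.scheme} key points")
        self.keypoints.append((x, y))

face = FaceAnnotation(box=(10, 10, 90, 110), scheme=5)
face.add_keypoint(30, 45)  # e.g. one eye position
print(len(face.keypoints), "of", face.scheme)
```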
By the method, the labeling information of the face can be obtained after the position of the face is labeled, and the efficiency of face labeling is improved.
In one possible implementation manner, the annotation task type includes a gesture annotation task, and the target annotation tool corresponding to the gesture annotation task includes a rectangular box tool and a key point annotation tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where the marked hand is located; and determining key points of the marked hand in response to key point marking operation based on the key point marking tool.
In the embodiment of the present disclosure, a correspondence between the "rectangular frame tool and key point labeling tool" and the labeling task type of "gesture labeling task" may be established in advance. Based on this correspondence, when the labeling task type is the gesture labeling task, it can be determined that the target labeling tool corresponding to the gesture labeling task comprises a rectangular frame tool and a key point labeling tool, and these tools are presented in the user interface. The user may select the rectangular frame tool and frame the target object, in this example a target hand, through a rectangular frame drawing operation, so that the rectangular frame is tangent to the edges of the hand and the position of the hand is determined. The framed range extends from the fingertips to the wrist.
Then, the key point labeling tool is opened and the number of hand key points is set, for example, a 21-point hand scheme or a 5-point fingertip scheme. The key point labeling tool is used on the framed target hand to label the joints of the hand. After labeling is completed, the corresponding attributes of the labeled key points (coordinates, point intervals, and the like) are obtained.
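For example, the key point attributes mentioned above (coordinates, point intervals) can be derived directly from the labeled points, as in the following minimal sketch with hypothetical joint coordinates:

```python
import math
from itertools import combinations

def keypoint_attributes(keypoints):
    """Return each key point's coordinates plus pairwise point intervals."""
    intervals = {
        (i, j): math.dist(keypoints[i], keypoints[j])
        for i, j in combinations(range(len(keypoints)), 2)
    }
    return {"coordinates": list(keypoints), "intervals": intervals}

# e.g. three labeled joints on a framed hand
attrs = keypoint_attributes([(12.0, 30.0), (18.0, 34.0), (25.0, 41.0)])
print(attrs["intervals"][(0, 1)])  # distance between the first two joints
```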
By the method, the labeling information of the hand can be obtained after the position of the hand is labeled, and the gesture labeling efficiency is improved.
In one possible implementation manner, the tagging task type includes a driving fatigue tagging task, and the target tagging tool corresponding to the driving fatigue tagging task includes a rectangular frame tool and a fatigue attribute tagging tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as the position of a labeled fatigue detection target; and determining the attribute of the labeled fatigue detection target in response to an attribute selection operation based on the fatigue attribute labeling tool.
In the embodiment of the present disclosure, a correspondence between the "rectangular frame tool and fatigue attribute labeling tool" and the labeling task type of "driving fatigue labeling task" may be established in advance. Based on this correspondence, when the labeling task is the driving fatigue labeling task, it can be determined that the target labeling tool corresponding to the driving fatigue labeling task comprises a rectangular frame tool and a fatigue attribute labeling tool, and these tools are presented in the user interface. The user selects the rectangular frame tool; the objects to be framed can be the eyes and the mouth of a person. The eyes and mouth are framed through the rectangular frame drawing operation, and the attributes of the rectangular frame are then selected, such as the eye state, "uncertain", and the like.
In another implementation, if the definition of the object to be annotated is limited, image segmentation processing may be performed on the object to be annotated (an image or a single video frame). The framed eye and mouth regions in the image are segmented to generate an eye image and a mouth image. The newly generated eye image or mouth image is then loaded into the interactive interface, where the user can select the attributes of the eye image and the mouth image.
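As an illustration, the segmentation step described above amounts to cropping the framed eye and mouth regions into separate images that are then reloaded into the interface. The following minimal sketch assumes the Pillow imaging library; the file names and region boxes are hypothetical examples:

```python
from PIL import Image

def crop_regions(image_path: str, regions: dict) -> dict:
    """Crop each named (left, upper, right, lower) box into its own image."""
    img = Image.open(image_path)
    return {name: img.crop(box) for name, box in regions.items()}

crops = crop_regions(
    "driver_frame.jpg",  # hypothetical single video frame
    {"eyes": (120, 80, 260, 130), "mouth": (150, 170, 230, 220)},
)
crops["eyes"].save("eyes.jpg")   # reload these crops into the interface
crops["mouth"].save("mouth.jpg")
```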
By this method, the position of the fatigue detection target can be labeled and the labeling information of the fatigue detection target obtained, improving the efficiency of the driving fatigue labeling task. When the definition of the object to be labeled is limited, the accuracy of driving fatigue labeling can also be improved.
In one possible implementation manner, the labeling task type includes a dangerous driving labeling task, and the target labeling tool corresponding to the dangerous driving labeling task includes a rectangular frame tool and a dangerous attribute labeling tool; the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes: in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as the position of a labeled dangerous driving detection target; and determining the attribute of the labeled dangerous driving detection target in response to an attribute selection operation based on the dangerous attribute labeling tool.
In the embodiment of the present disclosure, a correspondence between the "rectangular frame tool and dangerous attribute labeling tool" and the labeling task type of "dangerous driving labeling task" may be established in advance. Based on this correspondence, when the labeling task is the dangerous driving labeling task, it can be determined that the target labeling tool corresponding to the dangerous driving labeling task comprises a rectangular frame tool and a dangerous attribute labeling tool, and these tools are presented in the user interface. The user selects the rectangular frame tool and determines the position of the target object of dangerous driving detection, usually a human body, through the rectangular frame drawing operation; after framing is completed, the attributes of the rectangular frame (such as smoking, making a phone call, drinking, and the like) are selected. Alternatively, the user may first set a default attribute for the rectangular frame and then use the rectangular frame to frame the target object (a target human body) in the image, so that the attribute of the drawn rectangular frame is the preset default attribute.
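For example, the two orders of operation described above (selecting attributes after framing, or presetting a default attribute inherited by each drawn frame) can be sketched as follows; all names are hypothetical:

```python
class RectangleTool:
    def __init__(self, default_attribute=None):
        self.default_attribute = default_attribute  # e.g. "smoking"
        self.boxes = []

    def draw(self, box, attribute=None):
        """Record a framed box; fall back to the preset default attribute."""
        self.boxes.append(
            {"box": box, "attribute": attribute or self.default_attribute}
        )

tool = RectangleTool(default_attribute="smoking")
tool.draw((40, 20, 200, 260))                        # inherits "smoking"
tool.draw((50, 30, 210, 270), attribute="drinking")  # explicit selection
print([b["attribute"] for b in tool.boxes])
```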
By the method, the labeling information of the dangerous driving detection target can be obtained after the position of the dangerous driving detection target is labeled, and the efficiency of dangerous driving labeling is improved.
The data annotation method of the embodiments of the present disclosure can also be applied to other annotation scenarios; the annotation concept is the same as in the above implementation manners and, due to space limitations, these scenarios are not introduced one by one. In addition, the same annotation task may have multiple suitable annotation tools. Therefore, in the embodiments of the present disclosure, different annotation schemes can be given and multiple alternative annotation tools provided, so that annotators can adopt or discard tools according to different production conditions when building the tool set, thereby maximizing annotation efficiency and accelerating the research and development process.
This systematic data annotation solution for artificial intelligence deployment in the automobile industry can be used as an overall research and development scheme, supporting the data annotation a traditional automobile enterprise needs when developing intelligent automobiles, or used alone as a combinable scheme, providing data annotation support for the enterprise's artificial intelligence deployment in automatic driving and intelligent cockpits. Through this scheme, the data annotation tools required by a traditional automobile enterprise for artificial intelligence deployment can be determined and a detailed data annotation plan established, which avoids investing excessive development resources during research and development to integrate every annotation tool on the market. In addition, in the process of model training, the development schedule can be arranged explicitly according to business requirements, and the annotation tools most urgently needed by the business can be developed first. Alternatively, a general annotation tool applicable to multiple different annotation tasks can be developed preferentially; for example, if multiple annotation tasks need to label irregular objects, the line tool can be abandoned and only the polygon tool developed to cover all irregular target objects that need to be labeled.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; due to space limitations, the details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a data labeling apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any data labeling method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 2 is a block diagram of a data annotation device according to an embodiment of the disclosure, and as shown in fig. 2, the device 20 is applied to the field of smart cars, and includes:
an annotated object acquisition unit 201, configured to acquire an object to be annotated;
a target labeling tool display unit 202, configured to present, to a user interaction interface, a target labeling tool corresponding to the labeling task type of the object to be labeled in the multiple labeling tools;
a labeling information determining unit 203, configured to obtain labeling information of the object to be labeled in response to a labeling operation performed based on the target labeling tool.
In a possible implementation manner, the target labeling tool display unit 202 includes:
the data field determining unit is used for receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises the field of intelligent automobiles;
and the target marking tool display subunit is used for determining a target marking tool for marking the object to be marked in the target data field from the tool set according to the determined target data field, opening the use permission of the target marking tool and displaying the target marking tool on a marking interface for a user to use.
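As an illustration, the three units of the apparatus can be sketched structurally as follows; the method bodies are placeholders with hypothetical names, not the disclosed implementation:

```python
class DataAnnotationDevice:
    def acquire_object(self, source):
        """Annotated object acquisition unit (201): load the object to be labeled."""
        return source

    def present_target_tools(self, task_type, registry):
        """Target labeling tool display unit (202): pick tools for the task type."""
        return registry.get(task_type, [])

    def determine_annotation(self, tool, operation):
        """Labeling information determining unit (203): apply the labeling operation."""
        return {"tool": tool, "operation": operation}

device = DataAnnotationDevice()
print(device.present_target_tools(
    "lane_line", {"lane_line": ["line tool", "lane line attribute labeling tool"]}
))
```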
In one possible implementation, the apparatus further includes:
and the first target marking tool determining unit is used for responding to the selection operation of the user on the plurality of marking tools and taking the marking tool selected by the user as the target marking tool.
In one possible implementation, the apparatus further includes:
the second target labeling tool determining unit is used for identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In a possible implementation manner, the annotation information determining unit 203 includes:
a target object position determination unit configured to take the annotated position as a position of the annotated target object in response to a position annotation operation performed based on the target annotation tool;
and the target object attribute determining unit is used for taking preset default attributes as the attributes of the labeled target object.
In a possible implementation manner, the labeling task type of the object to be labeled includes a lane marking task, and the target labeling tool corresponding to the lane marking task includes a line tool and a lane line attribute labeling tool;
the marking information determining unit 203 is configured to, in response to a line drawing operation based on the line tool, take an area indicated by a drawn line as a position where a marked lane line is located; and determining the attribute of the marked lane line in response to the lane line attribute selection operation based on the lane line attribute marking tool.
In one possible implementation manner, the labeling task type of the object to be labeled includes a drivable region labeling task, and the target labeling tool corresponding to the drivable region labeling task includes a polygon tool;
the labeling information determining unit 203 is configured to, in response to a polygon drawing operation based on the polygon tool, take a region where the drawn polygon is located as a position of a labeled travelable region.
In a possible implementation manner, the labeling task type of the object to be labeled includes a target object labeling task, and a target labeling tool corresponding to the target object labeling task includes a rectangular frame tool and a target object attribute labeling tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, take a region surrounded by the drawn rectangular frame as a position where a labeled target object is located; and determining the attribute of the target object marked in the objects to be marked in response to the attribute selection operation of the target object attribute marking tool.
In one possible implementation manner, the object to be labeled is a point cloud image, the labeling task type includes a point cloud object labeling task, the target labeling tool corresponding to the point cloud object labeling task includes a plurality of point cloud object framing tools, and the plurality of point cloud object framing tools correspond to different point cloud object attributes;
the labeling information determining unit 203 is configured to determine a target point cloud object framing tool among the plurality of point cloud object framing tools in response to a point cloud object framing tool selection operation; and responding to the framing operation based on the target point cloud object framing tool, taking the framed area as the position of the marked point cloud object, and taking the point cloud object attribute corresponding to the target point cloud framing tool as the attribute of the marked point cloud object.
In one possible implementation manner, the annotation task type includes a passenger attribute annotation task, and the target annotation tool corresponding to the passenger attribute annotation task includes a rectangular frame tool and a passenger attribute annotation tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, use an area surrounded by the drawn rectangular frame as a position where a labeled passenger is located; at least one attribute of the tagged passenger is determined in response to a passenger attribute selection operation based on the passenger attribute tagging tool.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the labeling information determining unit 203 is configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, use a region surrounded by the drawn rectangular frame as a position where the labeled human face is located; and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In one possible implementation manner, the annotation task type includes a gesture annotation task, and the target annotation tool corresponding to the gesture annotation task includes a rectangular box tool and a key point annotation tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, take an area surrounded by the drawn rectangular frame as a position where a labeled hand is located; and determining key points of the marked hand in response to key point marking operation based on the key point marking tool.
In one possible implementation manner, the tagging task type includes a driving fatigue tagging task, and the target tagging tool corresponding to the driving fatigue tagging task includes a rectangular frame tool and a fatigue attribute tagging tool;
the labeling information determining unit 203 is configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, take the region surrounded by the drawn rectangular frame as the position of a labeled fatigue detection target; and determine the attribute of the labeled fatigue detection target in response to an attribute selection operation based on the fatigue attribute labeling tool.
In one possible implementation manner, the labeling task type includes a dangerous driving labeling task, and the target labeling tool corresponding to the dangerous driving labeling task includes a rectangular frame tool and a dangerous attribute labeling tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, use an area surrounded by the drawn rectangular frame as a position of a labeled dangerous driving detection target; and determining the attribute of the marked dangerous driving detection target in response to the attribute selection operation based on the dangerous attribute marking tool.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product, comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code; when the computer readable code runs in a processor of an electronic device, the processor executes the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), FreeBSD(TM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

1. A data annotation method is characterized by being applied to the field of intelligent automobiles and comprising the following steps:
acquiring an object to be marked;
presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and responding to the labeling operation executed based on the target labeling tool to obtain the labeling information of the object to be labeled.
2. The method of claim 1, wherein the presenting, to the user interaction interface, a target annotation tool corresponding to the annotation task type of the object to be annotated in the plurality of annotation tools comprises:
receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises an intelligent automobile field;
and according to the determined target data field, determining a target marking tool for marking the object to be marked in the target data field from the tool set, opening the use permission of the target marking tool, and displaying the use permission on a marking interface for a user to use.
3. The method according to any one of claims 1 or 2, wherein after obtaining the object to be labeled, the method further comprises:
and responding to the selection operation of the user on the plurality of marking tools, and taking the marking tool selected by the user as a target marking tool.
4. The method according to any one of claims 1 or 2, wherein after obtaining the object to be labeled, the method further comprises:
identifying the labeling task type of the object to be labeled to obtain a target labeling task type;
and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
5. The method according to any one of claims 1 to 4, wherein the obtaining the labeling information of the object to be labeled in response to the labeling operation performed based on the target labeling tool comprises:
in response to the position marking operation executed based on the target marking tool, taking the marked position as the position of the marked target object;
and taking preset default attributes as the attributes of the labeled target object.
6. The method according to any one of claims 1 to 5, wherein the labeling task type of the object to be labeled comprises a lane marking task, and the target labeling tool corresponding to the lane marking task comprises a line tool and a lane line attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a line drawing operation based on the line tool, taking an area indicated by a drawn line as a position where a marked lane line is located;
and determining the attribute of the marked lane line in response to the lane line attribute selection operation based on the lane line attribute marking tool.
7. The method according to any one of claims 1 to 6, wherein the labeling task type of the object to be labeled comprises a travelable region labeling task, and the target labeling tool corresponding to the travelable region labeling task comprises a polygon tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
and in response to the polygon drawing operation based on the polygon tool, taking the area where the drawn polygon is as the position of the marked travelable area.
8. The method according to any one of claims 1 to 7, wherein the annotation task type of the object to be annotated comprises a target object annotation task, and the target annotation tool corresponding to the target object annotation task comprises a rectangular box tool and a target object attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where a marked target object is located;
and determining the attribute of the target object marked in the objects to be marked in response to the attribute selection operation of the target object attribute marking tool.
9. The method according to any one of claims 1 to 8, wherein the object to be labeled is a point cloud image, the labeling task type comprises a point cloud object labeling task, the target labeling tool corresponding to the point cloud object labeling task comprises a plurality of point cloud object framing tools, and the plurality of point cloud object framing tools correspond to different point cloud object attributes;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
determining a target point cloud object framing tool of the plurality of point cloud object framing tools in response to a point cloud object framing tool selection operation;
and responding to the framing operation based on the target point cloud object framing tool, taking the framed area as the position of the marked point cloud object, and taking the point cloud object attribute corresponding to the target point cloud framing tool as the attribute of the marked point cloud object.
10. The method according to any one of claims 1 to 9, wherein the annotation task type comprises a passenger attribute annotation task, and the target annotation tool corresponding to the passenger attribute annotation task comprises a rectangular box tool and a passenger attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as a position where the marked passenger is located;
at least one attribute of the tagged passenger is determined in response to a passenger attribute selection operation based on the passenger attribute tagging tool.
11. The method according to any one of claims 1 to 10, wherein the annotation task type comprises a face annotation task, and the target annotation tool corresponding to the face annotation task comprises a rectangular box tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked face;
and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
12. The method according to any one of claims 1 to 11, wherein the annotation task type comprises a gesture annotation task, and the target annotation tool corresponding to the gesture annotation task comprises a rectangular box tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where the marked hand is located;
and determining key points of the marked hand in response to key point marking operation based on the key point marking tool.
13. The method according to any one of claims 1 to 12, wherein the annotation task type comprises a driving fatigue annotation task, and the target annotation tool corresponding to the driving fatigue annotation task comprises a rectangular frame tool and a fatigue attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position of a labeled fatigue detection target;
and determining the attribute of the labeled fatigue detection target in response to an attribute selection operation based on the fatigue attribute labeling tool.
14. The method according to any one of claims 1 to 13, wherein the annotation task type comprises a dangerous driving annotation task, and the target annotation tool corresponding to the dangerous driving annotation task comprises a rectangular frame tool and a dangerous attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position of the marked dangerous driving detection target;
and determining the attribute of the marked dangerous driving detection target in response to the attribute selection operation based on the dangerous attribute marking tool.
15. A data annotation device, comprising:
the annotation object acquisition unit is used for acquiring an object to be annotated;
the target marking tool display unit is used for presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and the marking information determining unit is used for responding to the marking operation executed based on the target marking tool to obtain the marking information of the object to be marked.
16. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 14.
17. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 14.
CN202110704502.2A 2021-06-24 2021-06-24 Data labeling method and device, electronic equipment and storage medium Pending CN113407083A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110704502.2A CN113407083A (en) 2021-06-24 2021-06-24 Data labeling method and device, electronic equipment and storage medium
PCT/CN2021/126182 WO2022267279A1 (en) 2021-06-24 2021-10-25 Data annotation method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110704502.2A CN113407083A (en) 2021-06-24 2021-06-24 Data labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113407083A true CN113407083A (en) 2021-09-17

Family

ID=77682957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110704502.2A Pending CN113407083A (en) 2021-06-24 2021-06-24 Data labeling method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113407083A (en)
WO (1) WO2022267279A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309995A (en) * 2020-01-19 2020-06-19 北京市商汤科技开发有限公司 Labeling method and device, electronic equipment and storage medium
CN112527374A (en) * 2020-12-11 2021-03-19 北京百度网讯科技有限公司 Marking tool generation method, marking method, device, equipment and storage medium
CN113407083A (en) * 2021-06-24 2021-09-17 上海商汤科技开发有限公司 Data labeling method and device, electronic equipment and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156025A (en) * 2015-03-25 2016-11-23 阿里巴巴集团控股有限公司 The management method of a kind of data mark and device
CN108829435A (en) * 2018-06-19 2018-11-16 数据堂(北京)科技股份有限公司 A kind of image labeling method and general image annotation tool
CN109886338A (en) * 2019-02-25 2019-06-14 苏州清研精准汽车科技有限公司 A kind of intelligent automobile test image mask method, device, system and storage medium
CN111857893A (en) * 2019-04-08 2020-10-30 百度在线网络技术(北京)有限公司 Method and device for generating label graph
CN110457494A (en) * 2019-08-01 2019-11-15 新华智云科技有限公司 Data mask method, device, electronic equipment and storage medium
CN112800255A (en) * 2019-11-14 2021-05-14 阿里巴巴集团控股有限公司 Data labeling method, data labeling device, object tracking method, object tracking device, equipment and storage medium
CN111400581A (en) * 2020-03-13 2020-07-10 京东数字科技控股有限公司 System, method and apparatus for annotating samples
CN111598120A (en) * 2020-03-31 2020-08-28 宁波吉利汽车研究开发有限公司 Data labeling method, equipment and device
CN111860304A (en) * 2020-07-17 2020-10-30 北京百度网讯科技有限公司 Image labeling method, electronic device, equipment and storage medium
CN112529055A (en) * 2020-12-02 2021-03-19 博云视觉科技(青岛)有限公司 Image annotation and annotation data set processing method
CN112949437A (en) * 2021-02-21 2021-06-11 深圳市优必选科技股份有限公司 Gesture recognition method, gesture recognition device and intelligent equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022267279A1 (en) * 2021-06-24 2022-12-29 上海商汤科技开发有限公司 Data annotation method and apparatus, and electronic device and storage medium
CN116385459A (en) * 2023-03-08 2023-07-04 阿里巴巴(中国)有限公司 Image segmentation method and device
CN116385459B (en) * 2023-03-08 2024-01-09 阿里巴巴(中国)有限公司 Image segmentation method and device
CN117174261A (en) * 2023-11-03 2023-12-05 神州医疗科技股份有限公司 Multi-type labeling flow integrating system for medical images
CN117174261B (en) * 2023-11-03 2024-03-01 神州医疗科技股份有限公司 Multi-type labeling flow integrating system for medical images

Also Published As

Publication number Publication date
WO2022267279A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
CN112419328B (en) Image processing method and device, electronic equipment and storage medium
CN113486765B (en) Gesture interaction method and device, electronic equipment and storage medium
CN113407083A (en) Data labeling method and device, electronic equipment and storage medium
WO2019095392A1 (en) Method and device for dynamically displaying icon according to background image
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN110865756B (en) Image labeling method, device, equipment and storage medium
CN112907760B (en) Three-dimensional object labeling method and device, tool, electronic equipment and storage medium
CN110781957A (en) Image processing method and device, electronic equipment and storage medium
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN110991491A (en) Image labeling method, device, equipment and storage medium
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN113052328A (en) Deep learning model production system, electronic device, and storage medium
CN112950525A (en) Image detection method and device and electronic equipment
CN109255784B (en) Image processing method and device, electronic equipment and storage medium
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
EP3287747A1 (en) Method and apparatus for controlling a balance car
CN110909203A (en) Video analysis method and device, electronic equipment and storage medium
CN112911239A (en) Video processing method and device, electronic equipment and storage medium
CN114066858A (en) Model training method and device, electronic equipment and storage medium
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
CN113989469A (en) AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium
CN113052874B (en) Target tracking method and device, electronic equipment and storage medium
CN114463212A (en) Image processing method and device, electronic equipment and storage medium
CN114066856A (en) Model training method and device, electronic equipment and storage medium
JP2023510443A (en) Labeling method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40051280; Country of ref document: HK)
RJ01 Rejection of invention patent application after publication (Application publication date: 20210917)