CN111309995A - Labeling method and device, electronic equipment and storage medium - Google Patents

Labeling method and device, electronic equipment and storage medium

Info

Publication number
CN111309995A
Authority
CN
China
Prior art keywords
marking
labeling
tool
task
annotation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010059389.2A
Other languages
Chinese (zh)
Inventor
焦斌斌 (Jiao Binbin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010059389.2A
Publication of CN111309995A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/907: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The present disclosure relates to a labeling method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a labeling task, where the labeling task indicates that a labeling operation is to be performed on a data set to be labeled; acquiring the data set to be labeled and a labeling tool corresponding to the labeling operation; and displaying, through a labeling interface, the labeling tool corresponding to the labeling operation and at least one object to be labeled in the data set to be labeled, so that the at least one object to be labeled can be labeled on the labeling interface and a labeling result corresponding to the at least one object to be labeled can be obtained.

Description

Labeling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a labeling method and apparatus, an electronic device, and a storage medium.
Background
With the popularization of artificial intelligence, data annotation is becoming more and more important. High-quality labeled data improves the accuracy of trained models and helps to improve and optimize artificial-intelligence techniques.
Labeling tools make it possible to produce the labeled data required by artificial-intelligence techniques quickly, conveniently, and efficiently. In the related art, each vendor develops its own labeling tools according to its own requirements. These tools are usually tied to the company's business, and when the business changes, resources must be spent on development again, which results in wasted resources.
Disclosure of Invention
The disclosure provides a labeling method and device, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided an annotation method, including:
acquiring a labeling task, wherein the labeling task indicates that a labeling operation is executed on a data set to be labeled;
acquiring the data set to be marked and a marking tool corresponding to the marking operation;
and displaying a marking tool corresponding to the marking operation and at least one object to be marked in the data set to be marked through a marking interface, so as to mark the at least one object to be marked on the marking interface and obtain a marking result corresponding to the at least one object to be marked.
In one possible implementation manner, the labeling operation includes a plurality of drawing actions, and each drawing action corresponds to one labeling tool;
the displaying of the marking tool corresponding to the marking operation through the marking interface includes:
acquiring an execution sequence of each drawing action in the plurality of drawing actions;
determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action;
and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
In this way, the display mode of the labeling tools is determined based on the execution order of the drawing actions, which makes the labeling tools more convenient to use and improves labeling efficiency.
In a possible implementation manner, the obtaining of the annotation tool corresponding to the annotation operation includes:
responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task;
acquiring a marking tool set corresponding to the task identifier, wherein the marking tool set comprises tool identifiers of all marking tools corresponding to the marking operation;
and calling each marking tool corresponding to the tool identification from a tool library.
In this way, the labeling tools are called according to the task identifier of the labeling task, which enables targeted tool invocation: annotators can complete labeling processes with different requirements, changed business can be supported, development costs are saved, the annotator's selection cost is reduced, and the annotator's operation is made easier.
In a possible implementation manner, the data set to be labeled includes an encrypted data set, and before the acquiring the data set to be labeled, the method further includes:
acquiring an encrypted data set corresponding to the labeling task from a database, wherein the encrypted data set comprises at least one encrypted object to be labeled;
decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked;
and determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked obtained by decrypting the encrypted data set as the data set to be marked.
Storing the objects to be labeled in encrypted form in the database and decrypting them only when they are used improves the security of the objects to be labeled and reduces the risk of data leakage.
In a possible implementation manner, before the acquiring the data set to be labeled and the labeling tool corresponding to the labeling operation, the method further includes:
and displaying related information of the labeling task through a task interface, wherein the related information at least comprises one of a task identifier of the labeling task, the acquisition progress of the data set to be labeled and the acquisition progress of a labeling tool corresponding to the labeling operation.
Displaying the acquisition progress lets the annotator accurately track the progress and the expected waiting time, which makes it easier to plan work and improves labeling efficiency.
In a possible implementation manner, before the obtaining of the annotation tool corresponding to the annotation operation, the method further includes: configuring a task identifier for the labeling task; decomposing the labeling operation corresponding to the labeling task into at least one drawing action; determining a marking tool required to be used for completing the marking task according to the marking tool required to be used for completing each drawing action; generating the marking tool set according to the tool identification of the marking tool required to be used for completing the marking task; and establishing a corresponding relation between the task identifier and the labeling tool set.
By establishing the corresponding relation between the task identifier and the labeling tool set, the labeling tool can be called based on the task identifier, so that a labeler can finish labeling processes with different requirements, the selection cost of the labeler is reduced, and the operation of the labeler is facilitated.
In a possible implementation manner, in a case where an intermediate labeling result of the at least one object to be labeled is obtained after the at least one object to be labeled is labeled on the labeling interface,
the obtaining of the labeling result corresponding to the at least one object to be labeled includes:
adding a labeling identifier for the intermediate labeling result, and associating the labeling identifier with the at least one object to be labeled;
and acquiring the intermediate labeling result associated with the labeling identifier, labeling the intermediate labeling result as an object to be labeled to obtain a final labeling result, and determining the final labeling result as a labeling result corresponding to the at least one object to be labeled.
In this way, one object to be labeled can be labeled multiple times.
In a possible implementation manner, the set of annotation tools further includes an operation specification of each annotation tool;
labeling the at least one object to be labeled through a labeling interface and obtaining a labeling result corresponding to the at least one object to be labeled, including:
acquiring drawing information of a marking tool under the condition of collecting drawing operation based on the marking tool;
and in a case where the drawing information conforms to the operation specification of the labeling tool, processing the object to be labeled according to the drawing information of the labeling tool to obtain a labeling result corresponding to the object to be labeled.
Processing the object to be labeled only when the operation specification is met improves the validity of the labeling.
In a possible implementation manner, the drawing action includes one or more of marking a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text; the object to be labeled includes one or more of a picture, a video, and an audio; and the labeling result includes coordinate information and/or attribute information.
In this way, annotation of pictures, video or audio can be achieved.
According to a second aspect of the present disclosure, there is provided an annotation apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a marking module, wherein the first acquisition module is used for acquiring a marking task, and the marking task indicates that a marking operation is executed on a data set to be marked;
the second acquisition module is used for acquiring the data set to be annotated indicated by the annotation task acquired by the first acquisition module and an annotation tool corresponding to the annotation operation indicated by the annotation task acquired by the first acquisition module;
the first display module is used for displaying the marking tool corresponding to the marking operation acquired by the second acquisition module and the at least one object to be marked in the data set to be marked acquired by the second acquisition module through a marking interface, so that the marking interface marks the at least one object to be marked and obtains a marking result corresponding to the at least one object to be marked.
In one possible implementation manner, the labeling operation includes a plurality of drawing actions, and each drawing action corresponds to one labeling tool;
the first display module is further configured to:
acquiring an execution sequence of each drawing action in the plurality of drawing actions;
determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action;
and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
In a possible implementation manner, the second obtaining module is further configured to:
responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task;
acquiring a marking tool set corresponding to the task identifier, wherein the marking tool set comprises tool identifiers of all marking tools corresponding to the marking operation;
and calling each marking tool corresponding to the tool identification from a tool library.
In a possible implementation manner, the data set to be labeled includes an encrypted data set, and the apparatus further includes:
a third obtaining module, configured to obtain an encrypted data set corresponding to the annotation task from a database, where the encrypted data set includes at least one encrypted object to be annotated;
the decryption module is used for decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked;
and the first determining module is used for determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked, which are obtained by decrypting the encrypted data set, as the data set to be marked.
In one possible implementation, the apparatus further includes:
and the second display module is used for displaying the related information of the annotation task through a task interface, wherein the related information at least comprises one of the task identifier of the annotation task, the acquisition progress of the data set to be annotated and the acquisition progress of the annotation tool corresponding to the annotation operation.
In one possible implementation, the apparatus further includes:
the configuration module is used for configuring a task identifier for the labeling task;
the decomposition module is used for decomposing the labeling operation corresponding to the labeling task into at least one drawing action;
the second determination module is used for determining the marking tools needed to be used for completing the marking tasks according to the marking tools needed to be used for completing each drawing action;
the generating module is used for generating the marking tool set according to the tool identification of the marking tool which is required to be used for completing the marking task;
and the establishing module is used for establishing the corresponding relation between the task identifier and the marking tool set.
In a possible implementation manner, the first display module is further configured to:
under the condition that an intermediate labeling result of the at least one object to be labeled is obtained after the at least one object to be labeled is labeled on the labeling interface, adding a labeling identifier for the intermediate labeling result, and associating the labeling identifier with the at least one object to be labeled;
and acquiring the intermediate labeling result associated with the labeling identifier, labeling the intermediate labeling result as an object to be labeled to obtain a final labeling result, and determining the final labeling result as a labeling result corresponding to the at least one object to be labeled.
In a possible implementation manner, the set of annotation tools further includes an operation specification of each annotation tool;
the first display module is further configured to:
acquiring drawing information of a marking tool under the condition of collecting drawing operation based on the marking tool;
and in a case where the drawing information conforms to the operation specification of the labeling tool, processing the object to be labeled according to the drawing information of the labeling tool to obtain a labeling result corresponding to the object to be labeled.
In a possible implementation manner, the drawing action includes one or more of marking a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text; the object to be labeled includes one or more of a picture, a video, and an audio; and the labeling result includes coordinate information and/or attribute information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, a labeling tool corresponding to the labeling operation indicated by the labeling task can be provided, so that an annotator can perform the labeling operation on the data set to be labeled and thereby complete the labeling task. Because the same labeling operation may recur across different labeling tasks, providing labeling tools on the basis of labeling operations both satisfies the labeling requirements of the labeling tasks and increases the reuse rate of the labeling tools.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of an annotation method according to an embodiment of the disclosure;
FIG. 2 illustrates an architectural diagram of an annotation system in accordance with an embodiment of the disclosure;
FIG. 3 illustrates one example of a task interface according to an embodiment of the present disclosure;
FIG. 4 illustrates one example of an annotator's work folder according to an embodiment of the present disclosure;
FIG. 5a illustrates one example of an annotation interface according to an embodiment of the present disclosure;
FIG. 5b illustrates one example of an annotation interface according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an annotation device in accordance with an embodiment of the disclosure;
FIG. 7 shows a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow chart of an annotation method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
and step S11, acquiring the annotation task.
And the annotation task indicates that annotation operation is executed on the data set to be annotated.
And step S12, acquiring the data set to be labeled and a labeling tool corresponding to the labeling operation.
Step S13, displaying, through a labeling interface, a labeling tool corresponding to the labeling operation and at least one object to be labeled in the data set to be labeled, so as to label the at least one object to be labeled on the labeling interface and obtain a labeling result corresponding to the at least one object to be labeled.
In the embodiment of the disclosure, a labeling tool corresponding to the labeling operation indicated by the labeling task can be provided, so that an annotator can perform the labeling operation on the data set to be labeled and thereby complete the labeling task. Because the same labeling operation may recur across different labeling tasks, providing labeling tools on the basis of labeling operations both satisfies the labeling requirements of the labeling tasks and increases the reuse rate of the labeling tools.
In one possible implementation, the labeling method may be performed by an electronic device such as a terminal device, where the terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. In the embodiments of the present disclosure, the annotation method is described by taking a client installed in a terminal device as an example; the client may be a stand-alone client or a web client, and the present disclosure is not limited in this respect.
FIG. 2 shows an architectural diagram of an annotation system according to an embodiment of the disclosure. As shown in fig. 2, the annotation system includes a server and a client. The server and the client may be arranged in the same device or in different devices. The server can be used to dispatch annotation tasks to annotators, and an annotator can process the annotation tasks dispatched to him or her by logging in to the client. The object to be labeled may include one or more of an unlabeled picture, an unlabeled video, an unlabeled audio, and the like. Taking the example in which the object to be labeled is a picture, the corresponding annotation task may be to segment the foreground and the background in the picture, to outline the human body in the picture, or to label the facial organs in the picture, etc. Taking the example in which the object to be labeled is audio, the corresponding annotation task may be language classification, style classification, etc.
The server is configured with a plurality of tasks, each task corresponding to a plurality of objects to be labeled, for example, one thousand or ten thousand objects to be labeled. To improve labeling efficiency, for each task the server may first split the objects to be labeled corresponding to the task into N data sets to be labeled (N is a positive integer whose value is set as needed, for example according to the number of annotators), then generate N annotation tasks based on the N data sets, so that each annotation task corresponds to one data set to be labeled, and finally assign the N generated annotation tasks to a plurality of annotators for processing. For example, assume that a task A and a task B are configured in the server, where task A is to segment the foreground and the background in pictures, and task B is to label the facial organs in pictures. For task A, the server may split the 1000 objects to be annotated corresponding to task A into 10 data sets to be annotated, each containing 100 objects to be annotated; the server may then generate 10 annotation tasks based on the 10 data sets to be annotated, so that each annotation task corresponds to one data set to be annotated, and finally assign the 10 annotation tasks to annotator C and annotator D. Annotators C and D can each process the annotation tasks assigned to them by logging in to the client. For task B, the server may perform similar processing, which is not described here again.
It should be noted that the process of splitting the plurality of objects to be labeled corresponding to a task into N data sets to be labeled and the process of assigning the annotation tasks to a plurality of annotators may be completed automatically by the server, or may be completed manually by an administrator after logging in to the server. In the automatic case, the server may determine the number of data sets allocated to each annotator according to one of, or a combination of, the annotator's labeling proficiency, labeling speed, and labeling quality score.
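For illustration only, the following Python sketch shows one way a server might perform this splitting and assignment; the function names (split_task, assign_tasks) and the score-based quota scheme are assumptions, not part of the disclosure.

```python
# Illustrative sketch only; names and the scoring scheme are assumptions.
from typing import Dict, List


def split_task(object_ids: List[str], n: int) -> List[List[str]]:
    """Split a task's objects to be annotated into N data sets of roughly equal size."""
    size = max(1, (len(object_ids) + n - 1) // n)
    return [object_ids[i:i + size] for i in range(0, len(object_ids), size)]


def assign_tasks(task_ids: List[str], scores: Dict[str, float]) -> Dict[str, List[str]]:
    """Assign annotation tasks to annotators in proportion to a per-annotator score
    (e.g. a combination of labeling proficiency, speed and quality)."""
    total = sum(scores.values())
    assignment: Dict[str, List[str]] = {}
    cursor = 0
    for name, score in scores.items():
        quota = int(round(len(task_ids) * score / total))
        assignment[name] = task_ids[cursor:cursor + quota]
        cursor += quota
    if cursor < len(task_ids):               # hand any remainder to the last annotator
        assignment[name].extend(task_ids[cursor:])
    return assignment


# Example: task A has 1000 pictures, split into 10 annotation tasks for annotators C and D.
packages = split_task([f"img_{i}" for i in range(1000)], n=10)
tasks = [f"taskA_{i}" for i in range(len(packages))]
print(assign_tasks(tasks, {"annotator_C": 1.0, "annotator_D": 1.0}))
```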
After the annotator logs in to the client, the annotator can view the annotation tasks assigned to him or her in the task interface. FIG. 3 illustrates one example of a task interface according to an embodiment of the present disclosure. As shown in fig. 3, the task interface may display related information of a plurality of annotation tasks, such as the task identifiers and task names of the annotation tasks. The task identifier can be used to distinguish different annotation tasks and may be a combination of one or more of numbers, letters, and symbols, such as CutTool/1 and PolyTool/1 shown in fig. 3. The task name may be used to briefly describe the task. From the related information of the annotation tasks displayed in the task interface, the annotator can learn the basic situation of the annotation tasks, such as the purpose of each annotation task (for example, human body segmentation and face box drawing shown in fig. 3) and the number of annotation tasks (including the number of uncompleted annotation tasks and the number of completed annotation tasks).
In step S11, the annotation task can indicate to perform an annotation operation on the data set to be annotated. After the client side obtains the annotation task from the server side, the annotator can execute the annotation operation on the data set to be annotated corresponding to the annotation task in the client side.
In one example, the annotator can view, in the task interface, the related information of the annotation tasks assigned to him or her, and can select an annotation task to trigger based on that information. In one example, the annotator can trigger an annotation task by clicking the annotation button shown in fig. 3. The annotator can also act on the task identifier or task name of an annotation task through operations such as double-clicking, single-clicking, sliding, or touching, so as to trigger the annotation task. The embodiments of the present disclosure do not limit the trigger operation of the annotation task.
In step S12, a data set to be annotated corresponding to the annotation task and an annotation tool corresponding to the annotation operation indicated by the annotation task may be obtained, so that the annotator can annotate the object to be annotated with the obtained annotation tool.
A labeling tool may represent one drawing action. In one example, the drawing action may include one or more of marking a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text, and the corresponding labeling tool may accordingly be one or more of a point-marking tool, a line drawing tool, a frame drawing tool, a polygon tool, an attribute identification tool, and a text adding tool.
The annotation operation indicated by the annotation task may include one or more drawing actions, one for each annotation tool. Under the condition that the labeling operation comprises a plurality of drawing actions, the labeling operation to be executed on the data to be labeled can be completed through the combination of the drawing actions, and at the moment, the labeling task can be completed through the combination of a plurality of labeling tools.
In one possible implementation, step S12 may include: responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task; acquiring a marking tool set corresponding to the task identifier; calling each marking tool corresponding to the tool identification from a tool library; and acquiring a data set to be marked corresponding to the task identifier from a database. The embodiment of the present disclosure does not limit the sequence of obtaining the annotation tool set and obtaining the data set to be annotated.
The labeling tool set includes tool identifiers of each labeling tool corresponding to the labeling operation, the tool identifiers may be used to distinguish different labeling tools, and the tool identifiers may be a combination of one or more of numbers, letters, and symbols, such as LineTool and PolyTool.
In one possible implementation, in an online scenario: the client can call a marking tool corresponding to the marking operation from a tool library of the server according to the tool identification; the database may be set in the server shown in fig. 2, and the client may obtain the data set to be annotated corresponding to the annotation task from the database of the server according to the task identifier of the annotation task.
In one possible implementation, in an offline scenario: a tool library can be pre-installed in a client, and then a marking tool corresponding to marking operation is called from a local tool library according to a tool identifier; the database can be arranged in the client, and the client can acquire the data set to be labeled corresponding to the labeling task from the local database according to the task identifier of the labeling task.
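As a rough illustration of the lookups described in the online and offline scenarios above, the sketch below uses in-memory dictionaries to stand in for the tool library and the database, whichever side they live on; the task and tool identifiers follow the examples in this description (CutTool/1, LineTool, PolyTool), while the dictionary contents are assumptions.

```python
# Minimal sketch of step S12; dictionaries stand in for the tool library and database.
TOOL_LIBRARY = {
    "LineTool": "line drawing tool implementation",
    "PolyTool": "polygon tool implementation",
}

# Correspondence between task identifiers and annotation tool sets (tool identifiers).
TOOL_SETS = {"CutTool/1": ["LineTool"], "PolyTool/1": ["PolyTool"]}

# Database of data sets to be annotated, keyed by task identifier.
DATASETS = {"CutTool/1": ["pic_001.jpg", "pic_002.jpg"]}


def on_task_triggered(task_id: str):
    """In response to the trigger operation, fetch the tool set and the data set."""
    tool_ids = TOOL_SETS[task_id]                 # annotation tool set for this task
    tools = [TOOL_LIBRARY[t] for t in tool_ids]   # call each tool from the tool library
    dataset = DATASETS[task_id]                   # data set to be annotated
    return tools, dataset


print(on_task_triggered("CutTool/1"))
```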
In a possible implementation manner, under the condition that the annotation task is triggered, the client may create a working folder for the annotation task, where the working folder may include a task identifier of the annotation task, a data set to be annotated of the annotation task, and an annotation tool set.
The client can firstly obtain a work folder of the labeling task according to the task identifier of the labeling task, and then obtain a labeling tool set and a data set to be labeled from the work folder. And after the labeling tool set is obtained, calling each labeling tool corresponding to the tool identifier included in the labeling tool set from the database.
In one possible implementation, the client may create a work folder for the annotator, and the work folder for the annotator may include work folders of the annotation tasks assigned to the annotator.
For example, FIG. 4 illustrates one example of an annotator's work folder according to an embodiment of the present disclosure. As shown in fig. 4, the annotator has been assigned annotation tasks with the task identifiers taskId1 and taskId2, and the annotator's work folder "jobs" includes a work folder for annotation task taskId1 (i.e., the annotation task whose task identifier is "taskId1") and a work folder for annotation task taskId2 (i.e., the annotation task whose task identifier is "taskId2"). Taking the work folder of annotation task taskId1 as an example, it includes the task identifier "taskId1", the data set identifier "packageId1_1", and the data set "image.zip". The data set identifier can be used to distinguish different data sets and may be composed of one or more of numbers, letters, and symbols. "packageId1_1" is the data set identifier of the data set to be annotated of annotation task taskId1, and "image.zip" is that data set to be annotated. It should be noted that the data set to be labeled may be a compressed package or may take other forms, for example a folder, and the present disclosure is not limited in this respect.
In one possible implementation, related files such as the annotation tool set of the annotation task may also be stored in the work folder of the annotation task. As shown in fig. 4, the work folder of annotation task taskId1 further includes the related files required for completing the annotation task, such as the annotation tool set "job. In one example, other files, such as reference pictures required by a point-marking tool, may also be included in the related files.
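A minimal sketch of creating such a work folder is given below; the exact nesting of the package directory is an assumption based on fig. 4 and the jobs/taskId/packageId path mentioned later in this description.

```python
# Sketch of creating a work folder for a triggered annotation task (layout assumed).
from pathlib import Path


def create_work_folder(root: Path, task_id: str, package_id: str) -> Path:
    """Create jobs/<task_id>/<package_id>/ and return the package directory."""
    package_dir = root / "jobs" / task_id / package_id
    package_dir.mkdir(parents=True, exist_ok=True)
    # Files placed here by the client:
    #   image.zip  - the data set to be annotated
    #   label.zip  - the annotation results, written when the annotator submits
    # The annotation tool set and other related files of the task are assumed to sit
    # one level up, next to the package directory.
    return package_dir


work_dir = create_work_folder(Path("."), "taskId1", "packageId1_1")
print(work_dir)  # jobs/taskId1/packageId1_1
```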
In a possible implementation manner, before obtaining an annotation tool corresponding to the annotation operation in step S12, the method further includes: configuring a task identifier for the labeling task; decomposing the labeling operation corresponding to the labeling task into at least one drawing action; determining a marking tool required to be used for completing the marking task according to the marking tool required to be used for completing each drawing action; generating the marking tool set according to the tool identification of the marking tool required to be used for completing the marking task; and establishing a corresponding relation between the task identifier and the labeling tool set.
In one example, in the case that the annotation task is to annotate a plurality of polygons in the picture, the annotation task can be completed in one step, that is, annotating the polygons. It can be determined that the annotation operation corresponding to the annotation task includes a drawing action "draw polygon". And the marking tool needed to be used for drawing the polygon is a polygon tool, so that the polygon tool can be determined as the marking tool needed to be used for finishing the marking task, a marking tool set is generated according to the tool identification of the polygon tool, and the corresponding relation between the task identification of the marking task and the marking tool set is established.
In yet another example, in the case that the annotation task is to annotate a plurality of polygons in a picture and describe an item within the polygons, the annotation task needs two steps to be completed, the first step is to annotate the polygons, and the second step is to add descriptions. Therefore, the annotation operation corresponding to the annotation task comprises two drawing actions of drawing a polygon and adding a description. And the marking tool needed to be used for drawing the polygon is a polygon tool, and the marking tool needed to be used for adding description is a character adding tool, so that the polygon tool and the character adding tool can be determined as the marking tool needed to be used for completing the marking task, a marking tool file set is generated according to the tool identification of the polygon tool and the tool identification of the character adding tool, and the corresponding relation between the task identification of the marking task and the marking tool set is established.
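The configuration step described in the two examples above can be illustrated roughly as follows; the action-to-tool mapping and the identifier TextTool are assumptions made for the sake of the example.

```python
# Hedged sketch: decompose an annotation operation into drawing actions, map each action
# to a tool identifier, and record the task-id -> tool-set correspondence.
ACTION_TO_TOOL = {
    "draw polygon": "PolyTool",
    "add text": "TextTool",
    "draw line": "LineTool",
}

TASK_TOOL_SETS = {}  # correspondence between task identifiers and annotation tool sets


def configure_task(task_id: str, drawing_actions: list) -> list:
    tool_set = []
    for action in drawing_actions:
        tool_id = ACTION_TO_TOOL[action]
        if tool_id not in tool_set:          # one tool per drawing action, no duplicates
            tool_set.append(tool_id)
    TASK_TOOL_SETS[task_id] = tool_set       # establish the correspondence
    return tool_set


# "Annotate polygons and describe the item inside each polygon" -> two drawing actions.
print(configure_task("PolyTool/1", ["draw polygon", "add text"]))  # ['PolyTool', 'TextTool']
```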
Under the condition that the corresponding relationship between the task identifier and the annotation tool set is established, the client may obtain, in step S12, the annotation tool set corresponding to the task identifier of the annotation task to be completed according to the corresponding relationship, and then call the annotation tool corresponding to the annotation task from the tool library, thereby implementing the annotation operation on the annotation data set.
The client can collect the annotator's drawing operations performed with each labeling tool and acquire the drawing information of the labeling tool corresponding to each collected drawing operation.
The drawing operation may represent an operation performed by the annotator using a labeling tool, for example, drawing a frame using the "frame drawing tool", marking a point using the "point-marking tool", or drawing a line using the "line drawing tool". The drawing information may be used to locate the drawing operation; for example, the drawing information may be the coordinate information of the four vertices of a box drawn with the "frame drawing tool", the coordinate information of a point marked with the "point-marking tool", or the trajectory information of a line drawn with the "line drawing tool". In one example, the annotation tool of the annotation task includes a "polygon tool", and the client may acquire the drawing information of the "polygon tool" when a drawing operation based on the "polygon tool" is collected. In another example, the annotation tools of the annotation task include a "polygon tool" and a "text attribute annotation tool", and the client may acquire the drawing information of the "polygon tool" when a drawing operation based on the "polygon tool" is collected, and acquire the drawing information of the "text attribute annotation tool" when a drawing operation based on the "text attribute annotation tool" is collected.
In the embodiments of the present disclosure, the drawing operations of the labeling tools may be given operation specifications, so that it can be determined whether a drawing operation performed by the annotator is valid. For example, the operation specification of the "frame drawing tool" may include a minimum size and/or a maximum size of the box; the operation specification of the "filter tool" may include an attribute category; and the operation specification of the "line drawing tool" may include the color and thickness of the line.
In a possible implementation manner, the set of annotation tools further includes an operation specification of each annotation tool. In step S13, labeling the at least one object to be labeled through the labeling interface and obtaining a labeling result corresponding to the at least one object to be labeled includes: acquiring the drawing information of a labeling tool in a case where a drawing operation based on the labeling tool is collected; and, in a case where the drawing information conforms to the operation specification of the labeling tool, processing the object to be labeled according to the drawing information of the labeling tool to obtain a labeling result corresponding to the object to be labeled.
In the case where the drawing information of the labeling tool conforms to the operation specification of the labeling tool, the drawing operation performed by the annotator is valid, and the object to be labeled can be processed according to the drawing information of the labeling tool to obtain a labeling result corresponding to the object to be labeled. In this way, the validity of the labeling can be improved.
In the case where the drawing information of the labeling tool does not conform to the operation specification of the labeling tool, the drawing operation performed by the annotator is invalid; the drawing operation performed by the annotator with the labeling tool can be ignored, cancelled, or deleted, and the object to be labeled is not processed according to the drawing information of the labeling tool. In this way, the validity of the labeling can also be improved.
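As an illustration of checking drawing information against an operation specification, the following sketch validates a box drawn with a frame drawing tool against an assumed minimum/maximum size specification; the field names are assumptions.

```python
# Illustrative validation of drawing information against a tool's operation specification.
def box_conforms(box: dict, spec: dict) -> bool:
    """box: {'x1','y1','x2','y2'}; spec: {'min_size','max_size'} in pixels (assumed fields)."""
    width = abs(box["x2"] - box["x1"])
    height = abs(box["y2"] - box["y1"])
    return (spec["min_size"] <= width <= spec["max_size"]
            and spec["min_size"] <= height <= spec["max_size"])


def handle_drawing(box: dict, spec: dict, obj):
    if box_conforms(box, spec):
        # drawing information is valid: record it as (part of) the annotation result
        return {"object": obj, "bbox": box}
    # otherwise ignore/cancel the drawing operation and leave the object untouched
    return None


spec = {"min_size": 10, "max_size": 2000}
print(handle_drawing({"x1": 0, "y1": 0, "x2": 50, "y2": 40}, spec, "pic_001.jpg"))
print(handle_drawing({"x1": 0, "y1": 0, "x2": 3, "y2": 4}, spec, "pic_001.jpg"))  # None
```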
In a possible implementation manner, the data set to be annotated may include an encrypted data set, and before the obtaining of the data set to be annotated in step S12, the method may further include: acquiring an encrypted data set corresponding to the labeling task from a database, wherein the encrypted data set comprises at least one encrypted object to be labeled; decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked; and determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked obtained by decrypting the encrypted data set as the data set to be marked.
In the embodiment of the present disclosure, the data set to be labeled stored in the database may include encrypted objects to be labeled and/or unencrypted objects to be labeled. Each encrypted object to be marked can be called an encrypted data set, and objects to be marked in the data set to be marked stored in the database, except the encrypted data set, are unencrypted objects to be marked.
The client can acquire the data set to be marked from the database, and can decrypt each encrypted object to be marked in the encrypted data set respectively under the condition that the encrypted data set exists in the data set to be marked, so as to obtain the decrypted object to be marked. Then, each decrypted object to be annotated and the unencrypted object to be annotated in the data set to be annotated, which is obtained from the database, are determined as the data set to be annotated, which is obtained in step S12. The embodiment of the present disclosure does not limit the encryption and decryption method.
Storing the objects to be labeled in encrypted form in the database and decrypting them only when they are used improves the security of the objects to be labeled and reduces the risk of data leakage.
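A hedged sketch of assembling the data set to be annotated from encrypted and unencrypted objects is shown below; since the disclosure does not fix an encryption scheme, a trivial XOR stand-in is used in place of the real decryption method.

```python
# Sketch of preparing the data set to be annotated when it contains an encrypted data set.
from typing import Callable, List


def build_dataset(plain_objects: List[bytes],
                  encrypted_objects: List[bytes],
                  decrypt: Callable[[bytes], bytes]) -> List[bytes]:
    """Decrypt each encrypted object (one decrypted object per encrypted object) and
    merge the result with the unencrypted objects to form the data set to be annotated."""
    decrypted = [decrypt(obj) for obj in encrypted_objects]
    return plain_objects + decrypted


# Toy example with a trivial XOR "cipher" standing in for the real decryption method.
key = 0x5A
toy_decrypt = lambda data: bytes(b ^ key for b in data)
encrypted = [bytes(b ^ key for b in b"secret picture bytes")]
print(build_dataset([b"plain picture bytes"], encrypted, toy_decrypt))
```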
In one possible implementation, before step S12, the method may further include: and displaying the related information of the labeling task through a task interface. The related information of the labeling task at least comprises one of the task identification of the labeling task, the acquisition progress of the data set to be labeled and the acquisition progress of the labeling tool corresponding to the labeling operation.
For example, as shown in fig. 3, the annotator can execute the downloading operation of the corresponding data set to be annotated by triggering the "downloading" control corresponding to the annotation task. In one example, in response to the downloading operation, the client may obtain the data set to be annotated of the annotation task from a database of the server (corresponding to an online scenario) or the local (corresponding to an offline scenario), and the client may determine the progress of obtaining the data set to be annotated as the progress of obtaining the data set to be annotated. In one example, in response to the downloading operation, the client may further obtain an annotation tool corresponding to the annotation operation from a tool library of the server (corresponding to an online scenario) or the local (corresponding to an offline scenario), and the client may determine a progress of obtaining the annotation tool as a progress of obtaining the annotation tool.
When the acquisition progress displayed in the task interface shown in fig. 3 reaches a threshold (which may be set as required, for example 100% or 90%), the annotator may perform the trigger operation of the annotation task by triggering the "annotation" control corresponding to the annotation task, and then jump to the annotation interface described in step S13.
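The progress display and threshold check might look roughly like the following sketch; the byte-based progress measure and the 90% default threshold are example assumptions.

```python
# Small sketch of the acquisition progress shown in the task interface and the threshold
# check that enables the annotation control.
def acquisition_progress(bytes_downloaded: int, bytes_total: int) -> float:
    return 0.0 if bytes_total == 0 else bytes_downloaded / bytes_total


def annotation_control_enabled(dataset_progress: float, tool_progress: float,
                               threshold: float = 0.9) -> bool:
    # The annotation interface can be opened once both acquisitions reach the threshold.
    return dataset_progress >= threshold and tool_progress >= threshold


print(acquisition_progress(45_000_000, 50_000_000))    # 0.9
print(annotation_control_enabled(0.9, 1.0))            # True
print(annotation_control_enabled(0.5, 1.0))            # False
```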
In step S13, after acquiring the to-be-annotated data set and the annotation tool corresponding to the annotation operation, the client may display the to-be-annotated data set and the annotation tool in the annotation interface. The annotator can label the object to be labeled displayed in the labeling interface based on the labeling tool displayed in the labeling interface, so as to obtain a labeling result corresponding to the labeled object to be labeled.
In a possible implementation manner, displaying at least one object to be labeled in the data set to be labeled through a labeling interface includes: displaying one object to be labeled in the data set to be labeled through the labeling interface.
FIG. 5a illustrates one example of an annotation interface according to an embodiment of the present disclosure. Assume that in step S11, the annotator performs a trigger operation on the annotation task CutTool/1 shown in fig. 3 (i.e., the annotation task whose task identifier is "CutTool/1"). In step S12, in response to the trigger operation, the client acquires the data set to be annotated of the annotation task CutTool/1 and the corresponding annotation tool "line drawing tool". In step S13, as shown in fig. 5a, the client may display, through the annotation interface, the annotation tool "line drawing tool" and one picture (i.e., one object to be annotated) in the data set to be annotated. The annotator can use the "line drawing tool" to label the displayed picture.
When one object to be labeled is displayed in the labeling interface of the client, the annotator labels one object to be labeled at a time. In this way, the annotator can stay focused on the annotation, which improves annotation accuracy.
In one possible implementation manner, displaying at least one object to be labeled in the data set to be labeled through a labeling interface includes: displaying a plurality of objects to be labeled in the data set to be labeled through the labeling interface.
When a plurality of objects to be labeled are displayed, the plurality of objects to be labeled may be displayed simultaneously without occlusion, for example side by side, or may be displayed with occlusion, for example stacked.
FIG. 5b illustrates one example of an annotation interface according to an embodiment of the present disclosure. Assume that in step S11, the annotator performs a trigger operation on the annotation task LabelingShapeFilter/1 shown in fig. 3 (i.e., the annotation task whose task identifier is LabelingShapeFilter/1). In step S12, in response to the trigger operation, the client may obtain the data set to be annotated of the annotation task LabelingShapeFilter/1 and the corresponding annotation tool "filter tool". In step S13, as shown in fig. 5b, the client may display, through the annotation interface, the annotation tool "filter tool" and a plurality of pictures (i.e., a plurality of objects to be annotated) in the data set to be annotated. The annotator can use the "filter tool" to label the plurality of displayed pictures.
When the client displays a plurality of objects to be labeled in the labeling interface, the annotator can label a plurality of objects to be labeled at one time. In this way, the annotator's operations can be simplified and labeling efficiency can be improved.
As shown in fig. 5a and 5b, the annotation interface may further include other basic operation controls, such as "previous page" and "next page" controls for changing the displayed object to be annotated. In one example, the annotation interface can also include controls (not shown) such as "zoom" and "drag" to change the presentation of the object to be annotated. The embodiment of the disclosure does not limit the labeling interface.
The annotator can adopt an annotation tool to label the object to be labeled so as to obtain a labeling result corresponding to the labeled object. In one possible implementation, the annotation result includes coordinate information and/or attribute information.
In one example, as shown in fig. 5a, the annotator can click on the "line drawing tool" and then click on the left mouse button and drag to paint the background that needs to be erased in the picture using the "line drawing tool" and/or click on the right mouse button and drag to paint the background that needs to be added in the picture using the "line drawing tool". Of course, in the embodiment of the present disclosure, the thickness and the color of the line painted by the line drawing tool may also be adjusted, which is not limited to this disclosure. After the annotator uses the line drawing tool to complete the partition of the foreground area and the background area, the client can determine the coordinate information of the foreground area and the coordinate information of the background area as the corresponding annotation result of the picture.
In one example, as shown in fig. 5b, assuming the two attribute categories "dog" and "cat", the annotator can click the "filter tool" and then click the left mouse button on a picture to mark it as the "dog" category, or click the right mouse button to mark it as the "cat" category. After the annotator has determined the category of each picture with the "filter tool", the client may determine the attribute category (i.e., attribute information) of each picture as the annotation result corresponding to that picture.
After the labeling of the labeling task is completed (i.e., after each object to be labeled in the data set to be labeled has been labeled), the annotator can submit the labeling results by triggering the "submit" control corresponding to the labeling task. In a possible implementation manner, the annotation result corresponding to each object to be annotated may be stored in the work folder of the annotation task. As shown in fig. 4, the annotation result "label.zip" in the work folder of the annotation task taskId2 includes an annotation result corresponding to each object to be annotated in the data set "image.zip" to be annotated. The zip file may be stored under jobs/taskId/packageId.
In the embodiments of the present disclosure, the work folder of the annotation task may further include format information of the annotation result. The format information is used to indicate the storage format of the annotation result in the work folder; for example, the storage format may be the json format or the html format, and the present disclosure is not limited in this respect. It should be noted that, if it is desired that an annotation task can be carried out on the basis of a previous annotation result, a json file corresponding to each object to be annotated should be stored in the annotation result.
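For illustration, the sketch below writes one json file per object to be annotated under jobs/<taskId>/<packageId>/, as such per-object files might later be packaged into the result archive; the field names (foreground, background, category) are assumptions, not part of the disclosure.

```python
# Sketch of writing annotation results in json format under the task's work folder.
import json
from pathlib import Path

results = {
    "pic_001.jpg": {  # segmentation task: coordinate information of the two regions
        "foreground": [[12, 30], [118, 30], [118, 200], [12, 200]],
        "background": [[0, 0], [255, 0], [255, 255], [0, 255]],
    },
    "pic_002.jpg": {"category": "dog"},  # classification task: attribute information
}

package_dir = Path("jobs") / "taskId1" / "packageId1_1"
package_dir.mkdir(parents=True, exist_ok=True)
for obj_name, result in results.items():
    # one json file per object to be annotated, so a later task can build on it
    (package_dir / f"{obj_name}.json").write_text(json.dumps(result, indent=2))
```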
With reference to the implementations illustrated in figs. 5a and 5b, it can be seen that the technical solution provided by the embodiments of the present application can provide a corresponding labeling tool for each of two or more different labeling requirements, so that the annotator can complete labeling processes with different requirements. That is, during the update iterations of a product, the technical solution provided by the embodiments of the present application can support the changed business when the business changes, without spending excessive resources on developing new labeling software. This saves development cost, facilitates the annotator's operation, and removes the need to frequently maintain the combination and coordination of different labeling tools.
As shown in fig. 3, the task interface may further show the last annotation time of each annotation task; for an annotation task that has not been annotated, "unmarked" may be displayed, and for an annotated annotation task, a specific annotation time, such as January 2, 2019, may be displayed.
In embodiments of the present disclosure, the annotation operation may comprise one or more drawing actions.
When the labeling operation includes one drawing action, which may be marking a point, drawing a line, or drawing a frame, the annotator performs the labeling operation with one labeling tool, and the client can display that labeling tool through the labeling interface.
When the labeling operation includes a plurality of drawing actions, for example two drawing actions of drawing a frame and adding text, or three drawing actions of drawing a frame, identifying an attribute, and adding text, the annotator performs the labeling operation with a plurality of labeling tools, and the client can display the plurality of labeling tools through the labeling interface.
In a possible implementation manner, in a case that a labeling operation includes a plurality of drawing actions, a labeling tool corresponding to the labeling operation is displayed through a labeling interface, including: acquiring the execution sequence of each drawing action in a plurality of drawing actions included in the labeling operation; determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action; and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
Since the annotation operation may include a plurality of drawing actions, and there is a precedence order among these drawing actions, there is also a precedence order in the use of the annotation tools. The client can determine the calling order of each annotation tool according to the execution order of each drawing action, and determine the display mode of the annotation tools according to the calling order of each annotation tool and the calling condition of each annotation tool.
The calling condition of an annotation tool indicates whether the annotation tool has been called or has not been called.
In a possible implementation manner, the client may display all the labeling tools corresponding to the labeling operation in the labeling interface.
In one example, the client may show all the annotation tools corresponding to the annotation operation in the annotation interface, and set each annotation tool to be in a triggerable state.
It is understood that in the case that the annotation tool is in a triggerable state, the annotation tool can be invoked; in the case that the annotation tool is in the non-triggerable state, the annotation tool is not invokable.
In this way, setting all the labeling tools to a triggerable state makes it convenient for the annotator to choose on his or her own which tools to use, which improves flexibility.
In another example, the client may show all the annotation tools corresponding to the annotation operation in the annotation interface and set each tool to a non-triggerable state, and then switch each tool from the non-triggerable state to the triggerable state in turn, according to the calling order of the tools and their calling status.
In the initial state, the client may set all the annotation tools corresponding to the annotation operation to the non-triggerable state, and then switch the tool that is first in the calling order to the triggerable state. Subsequently, whenever any annotation tool is called, the tool that is next in the calling order after the called tool is switched from the non-triggerable state to the triggerable state.
For example, assume that the annotation tools corresponding to the annotation operation include tool A1, tool A2, tool A3, and tool A4, and that the calling order from front to back is A1, A2, A3, A4. The client may show tools A1 to A4 in the annotation interface, set tool A1, which is first in the calling order, to the triggerable state, and set tools A2, A3, and A4 to the non-triggerable state. At this point the annotator can invoke tool A1 in the client but cannot invoke tools A2, A3, or A4. Tool A2 switches from the non-triggerable state to the triggerable state once tool A1 is invoked, tool A3 once tool A2 is invoked, and tool A4 once tool A3 is invoked.
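The sequential-enabling behavior described above can be summarized in a few lines of client-side logic. The following Python sketch is illustrative only; the class name AnnotationToolbar and the method mark_invoked are hypothetical and do not come from the disclosure.

```python
class AnnotationToolbar:
    """Minimal sketch of enabling annotation tools in their calling order."""

    def __init__(self, tools_in_calling_order):
        self.tools = list(tools_in_calling_order)
        # Every tool starts in the non-triggerable state ...
        self.triggerable = {tool: False for tool in self.tools}
        # ... except the tool that is first in the calling order.
        if self.tools:
            self.triggerable[self.tools[0]] = True

    def mark_invoked(self, tool):
        """Record that a tool was invoked and enable the next tool in the calling order."""
        if not self.triggerable[tool]:
            raise RuntimeError(f"{tool} is not triggerable yet")
        index = self.tools.index(tool)
        if index + 1 < len(self.tools):
            self.triggerable[self.tools[index + 1]] = True


toolbar = AnnotationToolbar(["A1", "A2", "A3", "A4"])
toolbar.mark_invoked("A1")
print(toolbar.triggerable)  # {'A1': True, 'A2': True, 'A3': False, 'A4': False}
```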
In one implementation, tools in the non-triggerable state may be displayed differently from tools in the triggerable state. For example, a non-triggerable tool may be distinguished by blurring its icon, increasing its transparency, darkening its background, or a combination of these. The annotator can then see at a glance whether each annotation tool can be triggered, which saves the time spent selecting a tool during annotation.
This also reduces the chance that the annotator misjudges the order in which the annotation tools are to be used, lowers the annotator's selection cost, lets the annotator quickly determine which tool to use next, and improves annotation efficiency.
In a possible implementation, the client may display in the annotation interface only some of the annotation tools corresponding to the annotation operation.
In one example, the client determines the currently callable annotation tool according to the calling order and calling status of the annotation tools, and displays the currently callable tool in the annotation interface.
For example, consider tools A1, A2, A3, and A4 above. The client may present tool A1 in the annotation interface in the initial state, present tool A2 once tool A1 is invoked, present tool A3 once tool A2 is invoked, and present tool A4 once tool A3 is invoked.
It should be noted that once an annotation tool has been invoked, the client may hide it in the annotation interface. This frees up space, and in one implementation the icons, touch entries, and the like of the tools that are not hidden can be displayed enlarged so that the annotator can recognize them more easily. As for the hidden tools, the annotator can make the client display all the annotation tools corresponding to the annotation operation in the annotation interface again through a specified operation (for example, a tool expansion operation or a show-all-tools operation).
In another example, the client may determine the currently callable annotation tool and the subsequently callable annotation tools according to the calling order and calling status of the tools, and display both in the annotation interface. The subsequently callable tools may include the N annotation tools (N being an integer greater than 0) that follow, and are adjacent to, the currently callable tool in the calling order.
For example, consider tools A1, A2, A3, and A4 above. The client may present tools A1, A2, and A3 in the annotation interface in the initial state, and present tools A2, A3, and A4 once tool A1 is invoked.
Displaying only a limited number of annotation tools in this way simplifies the design and layout of the annotation interface and keeps it tidy. In one example, the number of tools displayed (that is, N+1) can be determined by the number of tools that the toolbar of the annotation interface can accommodate.
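Showing only the currently callable tool and the next N tools amounts to sliding a window over the calling order. The sketch below assumes a hypothetical helper visible_tools and a window parameter n_next; neither name appears in the disclosure.

```python
def visible_tools(tools_in_calling_order, invoked_count, n_next=2):
    """Return the currently callable tool plus the next n_next tools in the calling order."""
    start = invoked_count            # index of the currently callable tool
    end = start + 1 + n_next         # plus N subsequently callable tools (N + 1 shown in total)
    return tools_in_calling_order[start:end]


tools = ["A1", "A2", "A3", "A4"]
print(visible_tools(tools, invoked_count=0))  # ['A1', 'A2', 'A3'] in the initial state
print(visible_tools(tools, invoked_count=1))  # ['A2', 'A3', 'A4'] after tool A1 is invoked
```

Here n_next plays the role of N, so the toolbar shows N+1 tools, which matches the example with tools A1 to A4.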
Some objects to be labeled can be labeled multiple times, others cannot, and the number of times that a multi-pass object can be labeled may also be limited. In the embodiments of the present disclosure, an object to be labeled may be an image, video, or audio clip that has not yet been labeled, or one that has already been labeled. A labeling identifier can therefore be set to distinguish whether an object to be labeled can be labeled again. The labeling identifier may be set as needed, and the present disclosure does not limit this.
For example, when an intermediate labeling result of the at least one object to be labeled is obtained after the object is labeled on the labeling interface, obtaining the labeling result corresponding to the at least one object to be labeled may include: adding a labeling identifier to the intermediate labeling result and associating the labeling identifier with the at least one object to be labeled; and acquiring the intermediate labeling result associated with the identifier, labeling the intermediate labeling result as a new object to be labeled to obtain a final labeling result, and determining the final labeling result as the labeling result corresponding to the at least one object to be labeled.
During labeling, whether the object to be labeled has an associated labeling identifier can be detected. If it does, the intermediate labeling result associated with that identifier is acquired and labeled as a new object to be labeled to obtain the final labeling result.
Taking an object to be labeled which needs to be labeled twice as an example.
In the first annotation process, it is detected that there is no associated annotation identifier for the object to be annotated, and the client may obtain an intermediate annotation result of the object to be annotated according to steps S11 to S13 in the embodiment of the present disclosure. Because the object to be labeled needs to be labeled again, the client can add a labeling identifier to the intermediate labeling result of the object to be labeled and associate the labeling identifier with the object to be labeled.
In the second labeling process, it is detected that the object to be labeled has the associated labeling identifier, and the client can obtain the intermediate labeling result associated with the labeling identifier. Because the object to be labeled does not need to be labeled again, the client can determine the intermediate labeling result of the object to be labeled as the final labeling result corresponding to the object to be labeled.
Take the object to be labeled which needs to be labeled for M times (M is an integer greater than 2) as an example.
The first labeling process is the same as the first labeling process of the object labeled twice, described above, and is not repeated here.
In the second labeling process, it is detected that the object to be labeled has the associated labeling identifier, and the client can obtain the intermediate labeling result associated with the labeling identifier. Because the object to be labeled needs to be labeled again, the client can add a labeling identifier to the intermediate labeling result of the object to be labeled and associate the labeling identifier with the object to be labeled.
The third to the (M-1)-th labeling processes are the same as the second labeling process of this object (in which a labeling identifier is added to the new intermediate result) and are not repeated here.
The M-th labeling process is the same as the second labeling process of the object labeled twice (in which the intermediate result becomes the final result) and is not repeated here.
It should be noted that whether an object to be labeled is labeled multiple times, and how many times it can be labeled, may be set as required; the embodiments of the present disclosure do not limit this.
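The multi-pass flow driven by labeling identifiers can be sketched as follows. The record fields (annotation_id, passes, done) and the callback annotate_once are assumptions introduced for illustration; they only stand in for whatever bookkeeping the client actually keeps.

```python
def run_annotation_pass(obj_id, objects, annotate_once, passes_required):
    """Perform one labeling pass on an object, using identifiers to track intermediate results.

    objects maps obj_id -> {"data": ..., "annotation_id": str | None,
                            "result": ..., "passes": int, "done": bool}.
    annotate_once(source) performs a single labeling pass and returns its result.
    """
    record = objects[obj_id]
    if record["done"]:
        # Completion identifier present: the stored result is already final.
        return record["result"]

    # Label the raw object on the first pass, otherwise the stored intermediate result.
    source = record["result"] if record["annotation_id"] else record["data"]
    record["result"] = annotate_once(source)
    record["passes"] += 1

    if record["passes"] < passes_required:
        # Still an intermediate result: attach a labeling identifier to it.
        record["annotation_id"] = f"intermediate-{obj_id}-{record['passes']}"
        return None
    # Final pass: the intermediate result becomes the final labeling result.
    record["annotation_id"] = None
    record["done"] = True
    return record["result"]
```

Calling run_annotation_pass twice with passes_required=2 mirrors the two-pass example above: the first call stores an intermediate result with an identifier, and the second call labels that intermediate result and records it as the final result.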
In a possible implementation, a labeling completion identifier may also be associated with an object to be labeled that does not need to be labeled again. When it is detected that the object to be labeled has an associated labeling completion identifier, its intermediate labeling result can be determined as its final labeling result. When it is detected that the object has no associated labeling completion identifier, steps S11 to S13 may be executed again to label the object.
As shown in Fig. 4, the working folder of the labeling task taskId2 includes the data set to be labeled "image.zip" and an associated labeling completion identifier "commit", which indicates that the objects to be labeled in "image.zip" of task taskId2 are not to be labeled again. The working folder of the labeling task taskId1 does not include a labeling completion identifier for the data set "image.zip", which indicates that the objects to be labeled in the data set "image.zip" of task taskId1 can be labeled again.
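One way to read the folder layout of Fig. 4 is to treat the presence of a completion marker next to the data set as "do not label again". The sketch below assumes the marker is literally a file named commit in the task working folder; this file name is only inferred from the figure description and may differ in practice.

```python
from pathlib import Path

def can_annotate_again(task_folder: str, dataset_name: str = "image.zip") -> bool:
    """Return True if the data set in this task's working folder may still be labeled."""
    folder = Path(task_folder)
    has_dataset = (folder / dataset_name).exists()
    has_completion_marker = (folder / "commit").exists()  # assumed marker file name
    return has_dataset and not has_completion_marker

# Following Fig. 4: taskId1 has no completion marker, taskId2 does.
# can_annotate_again("tasks/taskId1")  -> True
# can_annotate_again("tasks/taskId2")  -> False
```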
It can be understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the underlying principles; owing to space limitations, the details are not repeated in this disclosure. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the exact order in which the steps are executed should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an annotation apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any of the annotation methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding parts of the method sections, which are not repeated here.
FIG. 6 shows a block diagram of an annotation device according to an embodiment of the disclosure. As shown in fig. 6, the labeling device 60 includes:
the first obtaining module 61 is configured to obtain an annotation task, where the annotation task indicates that an annotation operation is performed on a data set to be annotated;
a second obtaining module 62, configured to obtain a data set to be annotated indicated by the annotation task obtained by the first obtaining module 61, and an annotation tool corresponding to an annotation operation indicated by the annotation task obtained by the first obtaining module;
the first display module 63 is configured to display, through a labeling interface, the labeling tool corresponding to the labeling operation acquired by the second acquisition module 62 and at least one object to be labeled in the data set to be labeled acquired by the second acquisition module 62, so that the labeling interface labels the at least one object to be labeled and obtains a labeling result corresponding to the at least one object to be labeled.
In the embodiments of the present disclosure, the annotation tool corresponding to the annotation operation indicated by the annotation task can be provided, so that the annotator can perform the annotation operation on the data set to be annotated and thereby complete the annotation task. Because different annotation tasks may involve the same annotation operations, providing annotation tools on the basis of annotation operations both meets the annotation requirements of the tasks and improves the reuse rate of the annotation tools.
In one possible implementation manner, the labeling operation includes a plurality of drawing actions, and each drawing action corresponds to one labeling tool;
the first display module is further configured to:
acquiring an execution sequence of each drawing action in the plurality of drawing actions;
determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action;
and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
In a possible implementation manner, the second obtaining module is further configured to:
responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task;
acquiring a marking tool set corresponding to the task identifier, wherein the marking tool set comprises tool identifiers of all marking tools corresponding to the marking operation;
and calling each marking tool corresponding to the tool identification from a tool library.
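The flow of the second obtaining module, from task identifier to tool set to tools called out of a tool library, can be sketched as follows. The dictionaries TOOL_SETS and TOOL_LIBRARY are hypothetical stand-ins for whatever storage the client actually uses; no concrete identifiers are taken from the disclosure.

```python
# Hypothetical mapping from a task identifier to the tool identifiers in its tool set.
TOOL_SETS = {
    "taskId1": ["draw_frame", "add_text"],
    "taskId2": ["draw_point", "identify_attribute", "add_text"],
}

# Hypothetical tool library mapping tool identifiers to tool factories.
TOOL_LIBRARY = {
    "draw_frame": lambda: "frame tool",
    "draw_point": lambda: "point tool",
    "identify_attribute": lambda: "attribute tool",
    "add_text": lambda: "text tool",
}

def acquire_annotation_tools(task_id):
    """Acquire the tool set for a task identifier and call each tool from the tool library."""
    tool_ids = TOOL_SETS[task_id]
    return [TOOL_LIBRARY[tool_id]() for tool_id in tool_ids]

print(acquire_annotation_tools("taskId2"))  # ['point tool', 'attribute tool', 'text tool']
```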
In a possible implementation manner, the data set to be labeled includes an encrypted data set, and the apparatus further includes:
a third obtaining module, configured to obtain an encrypted data set corresponding to the annotation task from a database, where the encrypted data set includes at least one encrypted object to be annotated;
the decryption module is used for decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked;
and the first determining module is used for determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked, which are obtained by decrypting the encrypted data set, as the data set to be marked.
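A minimal sketch of the decryption flow handled by the third obtaining module, the decryption module, and the first determining module is shown below. The Fernet cipher from the third-party cryptography package is used purely as an example; the disclosure does not specify any particular encryption scheme.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

def build_dataset_to_annotate(plain_objects, encrypted_objects, key):
    """Decrypt every encrypted object and merge the results with the unencrypted objects."""
    cipher = Fernet(key)
    decrypted = [cipher.decrypt(obj) for obj in encrypted_objects]  # one decrypted object per encrypted object
    return plain_objects + decrypted

key = Fernet.generate_key()
cipher = Fernet(key)
encrypted = [cipher.encrypt(b"object-1"), cipher.encrypt(b"object-2")]
dataset = build_dataset_to_annotate([b"object-0"], encrypted, key)
print(dataset)  # [b'object-0', b'object-1', b'object-2']
```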
In one possible implementation, the apparatus further includes:
and the second display module is used for displaying the related information of the annotation task through a task interface, wherein the related information at least comprises one of the task identifier of the annotation task, the acquisition progress of the data set to be annotated and the acquisition progress of the annotation tool corresponding to the annotation operation.
In one possible implementation, the apparatus further includes:
the configuration module is used for configuring a task identifier for the labeling task;
the decomposition module is used for decomposing the labeling operation corresponding to the labeling task into at least one drawing action;
the second determination module is used for determining the marking tools needed to be used for completing the marking tasks according to the marking tools needed to be used for completing each drawing action;
the generating module is used for generating the marking tool set according to the tool identification of the marking tool which is required to be used for completing the marking task;
and the establishing module is used for establishing the corresponding relation between the task identifier and the marking tool set.
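Taken together, the configuration, decomposition, second determination, generating, and establishing modules amount to: break the annotation operation into drawing actions, map each action to the tool that performs it, and record the correspondence between the task identifier and the resulting tool set. The mapping ACTION_TO_TOOL below is an assumption introduced for illustration.

```python
import uuid

# Assumed mapping from drawing actions to the tool identifiers that perform them.
ACTION_TO_TOOL = {
    "draw_frame": "frame_tool",
    "identify_attribute": "attribute_tool",
    "add_text": "text_tool",
}

def create_annotation_task(drawing_actions, task_tool_sets):
    """Configure a task identifier, derive its tool set, and record the correspondence."""
    task_id = str(uuid.uuid4())                                  # configure a task identifier
    tool_set = [ACTION_TO_TOOL[action] for action in drawing_actions]
    task_tool_sets[task_id] = tool_set                           # task identifier -> tool set
    return task_id

registry = {}
task_id = create_annotation_task(["draw_frame", "add_text"], registry)
print(task_id, registry[task_id])  # e.g. '4f9c...' ['frame_tool', 'text_tool']
```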
In a possible implementation manner, the display module is further configured to:
under the condition that an intermediate labeling result of the at least one object to be labeled is obtained after the at least one object to be labeled is labeled on the labeling interface, adding a labeling identifier for the intermediate labeling result, and associating the labeling identifier with the at least one object to be labeled;
and acquiring the intermediate labeling result associated with the labeling identifier, labeling the intermediate labeling result as an object to be labeled to obtain a final labeling result, and determining the final labeling result as a labeling result corresponding to the at least one object to be labeled.
In a possible implementation manner, the set of annotation tools further includes an operation specification of each annotation tool;
the display module is further configured to:
acquiring drawing information of a marking tool under the condition of collecting drawing operation based on the marking tool;
and under the condition that the drawing information conforms to the operation specification of the marking tool, processing the object to be marked according to the marking information of the marking tool to obtain a marking result corresponding to the object to be marked.
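The check that drawing information conforms to a tool's operation specification might look like the following. The specification format used here (a minimum frame size) is invented for the sake of the example and is not prescribed by the disclosure.

```python
def apply_drawing(obj, tool_spec, drawing_info):
    """Record a drawing on the object only when it conforms to the tool's operation specification.

    tool_spec: e.g. {"min_width": 4, "min_height": 4} for a frame tool (assumed format).
    drawing_info: e.g. {"x": 10, "y": 20, "width": 32, "height": 16}.
    """
    conforms = (drawing_info["width"] >= tool_spec["min_width"]
                and drawing_info["height"] >= tool_spec["min_height"])
    if not conforms:
        return None  # the drawing is rejected when it violates the operation specification
    obj.setdefault("annotations", []).append(drawing_info)  # annotation result for this object
    return obj

image = {"name": "frame_001.jpg"}
print(apply_drawing(image, {"min_width": 4, "min_height": 4},
                    {"x": 10, "y": 20, "width": 32, "height": 16}))
```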
In a possible implementation, the drawing action includes one or more of drawing a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text; the object to be annotated includes one or more of a picture, a video, and an audio clip; and the annotation result includes coordinate information and/or attribute information.
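Since an annotation result may carry coordinate information and/or attribute information, one plausible container for a result is sketched below; the field names are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class AnnotationResult:
    """Annotation result for one object: coordinate information and/or attribute information."""
    object_id: str
    coordinates: Optional[List[float]] = None          # e.g. [x, y, width, height] for a frame
    attributes: Dict[str, str] = field(default_factory=dict)

result = AnnotationResult(
    object_id="img_0001",
    coordinates=[10.0, 20.0, 32.0, 16.0],
    attributes={"category": "pedestrian"},
)
print(result)
```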
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the labeling method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the annotation method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 7, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The Memory 804 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge-coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic Device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random-Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised-in-groove structure having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry can execute computer-readable program instructions to implement aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize custom electronic circuitry, such as Programmable logic circuits, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs).
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A method of labeling, the method comprising:
acquiring a labeling task, wherein the labeling task indicates that a labeling operation is executed on a data set to be labeled;
acquiring the data set to be marked and a marking tool corresponding to the marking operation;
and displaying a marking tool corresponding to the marking operation and at least one object to be marked in the data set to be marked through a marking interface, so as to mark the at least one object to be marked on the marking interface and obtain a marking result corresponding to the at least one object to be marked.
2. The method of claim 1, wherein the annotation operation comprises a plurality of drawing actions, one for each annotation tool;
the displaying of the marking tool corresponding to the marking operation through the marking interface includes:
acquiring an execution sequence of each drawing action in the plurality of drawing actions;
determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action;
and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
3. The method according to claim 1 or 2, wherein the obtaining of the annotation tool corresponding to the annotation operation comprises:
responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task;
acquiring a marking tool set corresponding to the task identifier, wherein the marking tool set comprises tool identifiers of all marking tools corresponding to the marking operation;
and calling each marking tool corresponding to the tool identification from a tool library.
4. The method according to any one of claims 1 to 3, wherein the data set to be labeled comprises an encrypted data set, and before the acquiring the data set to be labeled, the method further comprises:
acquiring an encrypted data set corresponding to the labeling task from a database, wherein the encrypted data set comprises at least one encrypted object to be labeled;
decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked;
and determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked obtained by decrypting the encrypted data set as the data set to be marked.
5. The method according to any one of claims 1 to 4, wherein before the acquiring the data set to be labeled and the labeling tool corresponding to the labeling operation, the method further comprises:
and displaying related information of the labeling task through a task interface, wherein the related information at least comprises one of a task identifier of the labeling task, the acquisition progress of the data set to be labeled and the acquisition progress of a labeling tool corresponding to the labeling operation.
6. The method according to claim 3, wherein before the obtaining of the annotation tool corresponding to the annotation operation, the method further comprises:
configuring a task identifier for the labeling task;
decomposing the labeling operation corresponding to the labeling task into at least one drawing action;
determining a marking tool required to be used for completing the marking task according to the marking tool required to be used for completing each drawing action;
generating the marking tool set according to the tool identification of the marking tool required to be used for completing the marking task;
and establishing a corresponding relation between the task identifier and the labeling tool set.
7. The method according to any one of claims 1 to 6, wherein in a case that an intermediate labeling result of the at least one object to be labeled is obtained after the labeling interface labels the at least one object to be labeled;
the obtaining of the labeling result corresponding to the at least one object to be labeled includes:
adding a labeling identifier for the intermediate labeling result, and associating the labeling identifier with the at least one object to be labeled;
and acquiring the intermediate labeling result associated with the labeling identifier, labeling the intermediate labeling result as an object to be labeled to obtain a final labeling result, and determining the final labeling result as a labeling result corresponding to the at least one object to be labeled.
8. The method of claim 3 or 6, wherein the set of annotation tools further comprises an operating specification for each annotation tool;
labeling the at least one object to be labeled through a labeling interface and obtaining a labeling result corresponding to the at least one object to be labeled, including:
acquiring drawing information of a marking tool under the condition of collecting drawing operation based on the marking tool;
and under the condition that the drawing information conforms to the operation specification of the marking tool, processing the object to be marked according to the marking information of the marking tool to obtain a marking result corresponding to the object to be marked.
9. The method according to claim 2 or 6, wherein the drawing action comprises one or more of drawing a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text; the object to be annotated comprises one or more of a picture, video, and audio; and the annotation result comprises coordinate information and/or attribute information.
10. A marking device, the device comprising:
the system comprises a first acquisition module, a second acquisition module and a marking module, wherein the first acquisition module is used for acquiring a marking task, and the marking task indicates that a marking operation is executed on a data set to be marked;
the second acquisition module is used for acquiring the data set to be annotated indicated by the annotation task acquired by the first acquisition module and an annotation tool corresponding to the annotation operation indicated by the annotation task acquired by the first acquisition module;
the first display module is used for displaying the marking tool corresponding to the marking operation acquired by the second acquisition module and the at least one object to be marked in the data set to be marked acquired by the second acquisition module through a marking interface, so that the marking interface marks the at least one object to be marked and obtains a marking result corresponding to the at least one object to be marked.
11. The apparatus of claim 10, wherein the annotation operation comprises a plurality of drawing actions, one for each annotation tool;
the first display module is further configured to:
acquiring an execution sequence of each drawing action in the plurality of drawing actions;
determining a display mode of a marking tool corresponding to the marking operation according to the execution sequence and the marking tool corresponding to each drawing action;
and displaying the marking tools corresponding to the plurality of drawing actions according to the display mode through the marking interface.
12. The apparatus of claim 10 or 11, wherein the second obtaining module is further configured to:
responding to the triggering operation of the labeling task, and acquiring a task identifier of the labeling task;
acquiring a marking tool set corresponding to the task identifier, wherein the marking tool set comprises tool identifiers of all marking tools corresponding to the marking operation;
and calling each marking tool corresponding to the tool identification from a tool library.
13. The apparatus according to any one of claims 10 to 12, wherein the data set to be labeled comprises an encrypted data set, the apparatus further comprising:
a third obtaining module, configured to obtain an encrypted data set corresponding to the annotation task from a database, where the encrypted data set includes at least one encrypted object to be annotated;
the decryption module is used for decrypting each encrypted object to be marked in the encrypted data set respectively to obtain at least one decrypted object to be marked, wherein each encrypted object to be marked corresponds to one decrypted object to be marked;
and the first determining module is used for determining the objects to be marked in the data set to be marked, except the encrypted data set, and the decrypted objects to be marked, which are obtained by decrypting the encrypted data set, as the data set to be marked.
14. The apparatus of any one of claims 10 to 13, further comprising:
and the second display module is used for displaying the related information of the annotation task through a task interface, wherein the related information at least comprises one of the task identifier of the annotation task, the acquisition progress of the data set to be annotated and the acquisition progress of the annotation tool corresponding to the annotation operation.
15. The apparatus of claim 12, further comprising:
the configuration module is used for configuring a task identifier for the labeling task;
the decomposition module is used for decomposing the labeling operation corresponding to the labeling task into at least one drawing action;
the second determination module is used for determining the marking tools needed to be used for completing the marking tasks according to the marking tools needed to be used for completing each drawing action;
the generating module is used for generating the marking tool set according to the tool identification of the marking tool which is required to be used for completing the marking task;
and the establishing module is used for establishing the corresponding relation between the task identifier and the marking tool set.
16. The apparatus of any one of claims 10 to 15, wherein the display module is further configured to:
under the condition that an intermediate labeling result of the at least one object to be labeled is obtained after the at least one object to be labeled is labeled on the labeling interface, adding a labeling identifier for the intermediate labeling result, and associating the labeling identifier with the at least one object to be labeled;
and acquiring the intermediate labeling result associated with the labeling identifier, labeling the intermediate labeling result as an object to be labeled to obtain a final labeling result, and determining the final labeling result as a labeling result corresponding to the at least one object to be labeled.
17. The apparatus of claim 12 or 15, wherein the set of annotation tools further comprises an operating specification for each annotation tool;
the display module is further configured to:
acquiring drawing information of a marking tool under the condition of collecting drawing operation based on the marking tool;
and under the condition that the drawing information conforms to the operation specification of the marking tool, processing the object to be marked according to the marking information of the marking tool to obtain a marking result corresponding to the object to be marked.
18. The apparatus according to claim 11 or 15, wherein the drawing action includes one or more of drawing a point, drawing a line, drawing a frame, drawing a polygon, identifying an attribute, and adding text; the object to be annotated includes one or more of a picture, video, and audio; and the annotation result includes coordinate information and/or attribute information.
19. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 9.
20. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 9.
CN202010059389.2A 2020-01-19 2020-01-19 Labeling method and device, electronic equipment and storage medium Withdrawn CN111309995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010059389.2A CN111309995A (en) 2020-01-19 2020-01-19 Labeling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010059389.2A CN111309995A (en) 2020-01-19 2020-01-19 Labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111309995A true CN111309995A (en) 2020-06-19

Family

ID=71144918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010059389.2A Withdrawn CN111309995A (en) 2020-01-19 2020-01-19 Labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111309995A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016150328A1 (en) * 2015-03-25 2016-09-29 阿里巴巴集团控股有限公司 Data annotation management method and apparatus
US20170178266A1 (en) * 2015-12-16 2017-06-22 Sap Se Interactive data visualisation of volume datasets with integrated annotation and collaboration functionality
CN108829435A (en) * 2018-06-19 2018-11-16 数据堂(北京)科技股份有限公司 A kind of image labeling method and general image annotation tool
CN110457494A (en) * 2019-08-01 2019-11-15 新华智云科技有限公司 Data mask method, device, electronic equipment and storage medium

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346807A (en) * 2020-11-06 2021-02-09 广州小鹏自动驾驶科技有限公司 Image annotation method and device
CN112686009A (en) * 2020-12-23 2021-04-20 中国人民解放军战略支援部队信息工程大学 Voice marking system and method
CN112732949A (en) * 2021-01-19 2021-04-30 广州虎牙科技有限公司 Service data labeling method and device, computer equipment and storage medium
CN112732949B (en) * 2021-01-19 2023-10-17 广州虎牙科技有限公司 Service data labeling method and device, computer equipment and storage medium
CN113407869A (en) * 2021-06-02 2021-09-17 北京爱笔科技有限公司 Beacon labeling method, device, computer equipment and storage medium
CN113377477A (en) * 2021-06-24 2021-09-10 上海商汤科技开发有限公司 Data labeling method, device, equipment and storage medium
CN113392263A (en) * 2021-06-24 2021-09-14 上海商汤科技开发有限公司 Data labeling method and device, electronic equipment and storage medium
WO2022267279A1 (en) * 2021-06-24 2022-12-29 上海商汤科技开发有限公司 Data annotation method and apparatus, and electronic device and storage medium
CN113377980A (en) * 2021-06-24 2021-09-10 上海商汤科技开发有限公司 Information labeling method and device, electronic equipment and storage medium
CN113420149A (en) * 2021-06-30 2021-09-21 北京百度网讯科技有限公司 Data labeling method and device
CN113449142A (en) * 2021-06-30 2021-09-28 北京百度网讯科技有限公司 Information processing method and device, electronic equipment, storage medium and product
CN113742553A (en) * 2021-09-03 2021-12-03 上海哔哩哔哩科技有限公司 Data processing method and device
CN113742553B (en) * 2021-09-03 2024-03-19 上海哔哩哔哩科技有限公司 Data processing method and device

Similar Documents

Publication Publication Date Title
CN111309995A (en) Labeling method and device, electronic equipment and storage medium
EP3301558A1 (en) Method and device for sharing content
EP2699029B1 (en) Method and device for providing a message function
EP3515046A1 (en) Task management based on instant communication message
JP2020516994A (en) Text editing method, device and electronic device
EP4002107B1 (en) Data binding method, apparatus, and device of mini program, and storage medium
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN106775202B (en) Information transmission method and device
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN111554382B (en) Medical image processing method and device, electronic equipment and storage medium
CN106155542B (en) Picture processing method and device
CN105045504A (en) Image content extraction method and apparatus
CN113065591B (en) Target detection method and device, electronic equipment and storage medium
CN112035031B (en) Note generation method and device, electronic equipment and storage medium
CN114691115A (en) Business process system generation method and device, electronic equipment and storage medium
CN112508020A (en) Labeling method and device, electronic equipment and storage medium
CN112102300B (en) Counting method and device, electronic equipment and storage medium
CN113705653A (en) Model generation method and device, electronic device and storage medium
CN113031781A (en) Augmented reality resource display method and device, electronic equipment and storage medium
CN106775249B (en) Method for setting communication shortcut and electronic equipment
CN110196747B (en) Information processing method and device
CN113392263A (en) Data labeling method and device, electronic equipment and storage medium
CN112565844A (en) Video communication method and device and electronic equipment
CN113778431B (en) Method, apparatus, device, readable storage medium and product for dynamic rendering of applet
CN111273973B (en) Copy and paste method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200619