CN113839953A - Labeling method and device, electronic equipment and storage medium - Google Patents

Labeling method and device, electronic equipment and storage medium

Info

Publication number
CN113839953A
Authority
CN
China
Prior art keywords
image
labeling
annotation
request
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111137177.2A
Other languages
Chinese (zh)
Inventor
孙岳枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd filed Critical Shanghai Sensetime Technology Development Co Ltd
Priority to CN202111137177.2A
Publication of CN113839953A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2455: Query execution

Abstract

The present disclosure relates to a labeling method and apparatus, an electronic device, and a storage medium. The method is applied to a server and includes: receiving a labeling request sent by a client, where the labeling request is used to request labeling of an image to be labeled; requesting, based on the labeling request, an image labeling end to label the image to be labeled; receiving a labeling result of the image to be labeled returned by the image labeling end, and storing the labeling result; and sending the labeling result to the client upon receiving a labeling progress query request for the image to be labeled sent by the client. The embodiments of the disclosure can improve labeling efficiency.

Description

Labeling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a labeling method and apparatus, an electronic device, and a storage medium.
Background
Currently, sample labeling can be divided into manual labeling and assisted labeling. For targets that are complicated to label, purely manual labeling is slow. In a segmentation task, for example, the targets have to be extracted from the pictures by hand; the annotator must concentrate on labeling each picture very finely and repeatedly switch between keyboard and mouse, which is inconvenient. Similarly, in a video tracking task the annotator has to flip back and forth through the frames to label them, which is also very tiring.
Manual labeling, while able to label images accurately, is inefficient. As the number of samples grows rapidly, manual labeling can hardly meet the demand. For such application scenarios, an automatic labeling model running on a computer can be used to assist the annotator: the sample to be labeled is first labeled by the automatic labeling model and then corrected manually, which effectively transfers part of the workload to the computer.
The annotation model is often located on a remote server, so assisted annotation depends heavily on the network environment. If an error occurs while the client downloads the annotation result, the client has to request the annotation model to annotate again, which wastes processing resources. The low annotation efficiency caused by network problems therefore urgently needs to be solved.
Disclosure of Invention
The present disclosure provides a labeling solution to address the low labeling efficiency caused by network problems.
According to an aspect of the present disclosure, there is provided a labeling method applied to a server, including:
receiving a labeling request sent by a client, where the labeling request is used to request labeling of an image to be labeled; requesting, based on the labeling request, an image labeling end to label the image to be labeled; receiving a labeling result of the image to be labeled returned by the image labeling end, and storing the labeling result; and sending the labeling result to the client upon receiving a labeling progress query request for the image to be labeled sent by the client.
In a possible implementation manner, after receiving the labeling request sent by the client, the method further includes: generating a task identifier corresponding to the labeling request and sending the task identifier to the client; and the receiving and storing of the labeling result of the image to be labeled returned by the image labeling end includes: storing the labeling result of the image to be labeled in association with the task identifier.
In a possible implementation manner, the labeling progress query request includes the task identifier, and the sending of the labeling result to the client includes: sending the labeling result stored in association with the task identifier to the client.
In a possible implementation manner, the method further includes: determining the labeling progress of the image labeling end on the image to be labeled, where the labeling progress is one of labeling in progress, labeling succeeded, and labeling failed; and sending the labeling progress to the client upon receiving a labeling progress query request sent by the client.
In a possible implementation manner, the labeling request includes labeling task information, and the labeling task information includes: information of the image to be labeled, a labeling task type, and labeling task parameters; requesting, based on the labeling request, the image labeling end to label the image to be labeled includes: encapsulating the labeling task information into labeling task data in a predetermined data format, sending the labeling task data to the image labeling end, and requesting the image labeling end to label the image to be labeled.
In a possible implementation manner, the server communicates with the image labeling end using an asynchronous communication mechanism.
In a possible implementation manner, the receiving of the labeling progress query request sent by the client includes: receiving the labeling progress query request sent by the client in a polling manner.
According to an aspect of the present disclosure, there is provided a labeling method applied to a client, including: sending a labeling request to a server, where the labeling request is used to request labeling of an image to be labeled, so that the server requests, based on the labeling request, an image labeling end to label the image to be labeled; and sending a labeling progress query request to the server, so as to receive the labeling result returned by the server when the server has stored the labeling result of the image to be labeled returned by the image labeling end.
According to an aspect of the present disclosure, there is provided a labeling apparatus applied to a server, including:
a labeling request receiving module, used for receiving a labeling request sent by a client, where the labeling request is used to request labeling of an image to be labeled; a labeling request module, used for requesting, based on the labeling request, an image labeling end to label the image to be labeled; a labeling result receiving module, used for receiving and storing the labeling result of the image to be labeled returned by the image labeling end; and a labeling result sending module, used for sending the labeling result to the client upon receiving a labeling progress query request for the image to be labeled sent by the client.
In a possible implementation manner, the labeling request receiving module is further used for generating, after receiving the labeling request sent by the client, a task identifier corresponding to the labeling request and sending the task identifier to the client; and the labeling result receiving module is used for storing the labeling result of the image to be labeled in association with the task identifier.
In a possible implementation manner, the labeling progress query request includes the task identifier, and the labeling result sending module is used for sending the labeling result stored in association with the task identifier to the client.
In a possible implementation manner, the apparatus further includes: a labeling progress determining module, used for determining the labeling progress of the image labeling end on the image to be labeled, where the labeling progress is one of labeling in progress, labeling succeeded, and labeling failed; and a labeling progress sending module, used for sending the labeling progress to the client upon receiving a labeling progress query request sent by the client.
In a possible implementation manner, the labeling request includes labeling task information, and the labeling task information includes: information of the image to be labeled, a labeling task type, and labeling task parameters; the labeling request module is used for encapsulating the labeling task information into labeling task data in a predetermined data format, sending the labeling task data to the image labeling end, and requesting the image labeling end to label the image to be labeled.
In a possible implementation manner, the server communicates with the image labeling end using an asynchronous communication mechanism.
In a possible implementation manner, the labeling request receiving module is used for receiving the labeling progress query request sent by the client in a polling manner.
According to an aspect of the present disclosure, there is provided a labeling apparatus applied to a client, including:
a labeling request sending module, used for sending a labeling request to the server, where the labeling request is used to request labeling of an image to be labeled, so that the server requests, based on the labeling request, an image labeling end to label the image to be labeled;
and a progress query request sending module, used for sending a labeling progress query request to the server, so as to receive the labeling result returned by the server when the server has stored the labeling result of the image to be labeled returned by the image labeling end.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, after receiving the labeling request that the client sends to request labeling of an image to be labeled, the server requests the image labeling end to label the image to be labeled, receives the labeling result of the image to be labeled returned by the image labeling end, and stores the labeling result; upon receiving a labeling progress query request sent by the client, the server sends the labeling result to the client. The server thus acts as middleware between the client and the image labeling end: it handles the client's requests and stores the labeling result of the image labeling end, so that whenever a labeling progress query request from the client is received, including after a failed download, the stored labeling result can be sent to the client directly without asking the image labeling end to label the image again. This avoids repeated consumption of processing resources and improves labeling efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an annotation method according to an embodiment of the present disclosure.
FIG. 2 shows a flow chart of another annotation method according to an embodiment of the present disclosure.
Fig. 3 shows a schematic structural diagram of an annotation system according to an embodiment of the present disclosure.
FIG. 4 shows a flow chart of another annotation method according to an embodiment of the present disclosure.
FIG. 5 shows a block diagram of an annotation device in accordance with an embodiment of the disclosure.
FIG. 6 shows a block diagram of another annotation device in accordance with embodiments of the disclosure.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
FIG. 8 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
With its development, artificial intelligence technology has greatly changed our production and daily life, and machine learning is a very critical part of it. A machine learning model that can be put into application needs to go through model building and model training. During model training, a large amount of sample data generally needs to be collected and labeled; each piece of sample data together with its corresponding labeling result forms a group of training samples used to train the established model. Sample labeling is therefore a very critical link in model training.
For example, when training a target recognition model, the frame of the target to be recognized is drawn on the image by manual labeling to serve as the labeling information, and the labeled images are then used to train the target recognition model; this process requires concentration and very fine labeling of the image.
Because manual labeling is time-consuming and labor-intensive, a user with a labeling requirement can currently upload the image to be labeled to an image labeling end, and the image labeling end labels the uploaded image with an integrated labeling model. The image labeling end may integrate several labeling models so as to perform different kinds of image labeling tasks.
However, the annotation model is often located on a remote server, so assisted annotation depends heavily on the network environment. If an error occurs while the client downloads the annotation result, the client has to request the annotation model to annotate again, which wastes processing resources. The low annotation efficiency caused by network problems therefore urgently needs to be solved.
The embodiments of the present disclosure provide a labeling method. In these embodiments, after receiving the labeling request that the client sends to request labeling of an image to be labeled, the server requests the image labeling end to label the image to be labeled, receives the labeling result of the image to be labeled returned by the image labeling end, and stores the labeling result; upon receiving a labeling progress query request sent by the client, the server sends the labeling result to the client. The server thus acts as middleware between the client and the image labeling end: it handles the client's requests and stores the labeling result of the image labeling end, so that when a labeling progress query request from the client is received, the stored labeling result can be sent to the client directly, which avoids repeated labeling caused by network problems and improves labeling efficiency.
In the annotation method provided by the embodiments of the present disclosure, the server side acts as middleware that processes the annotation requests the client sends toward the image labeling end. The method is first described with the server side as the execution subject. The server side may be located on a server and the method executed by an electronic device such as a server; it should be understood that taking the server side as the execution subject is only an example and should not be construed as limiting the method.
Fig. 1 shows a flowchart of a labeling method according to an embodiment of the present disclosure, and as shown in fig. 1, the labeling method is applied to a server, and includes:
in step S11, a labeling request sent by a client is received, where the labeling request is used to request labeling of an image to be labeled;
in the embodiment of the present disclosure, the image to be annotated may be an image, or may also be a video, etc. The image to be labeled can be a training sample used for training a machine model in machine learning.
The annotation request may include an image to be annotated, in the embodiment of the present disclosure, a user may select the image to be annotated at a client to upload, the user may log in a pre-registered target account at the client, and after the client logs in successfully, the user may upload the image to be annotated that needs to be annotated through the client. Before uploading the image to be marked, the user can perform operations such as packaging, naming and the like on the image to be marked according to the specified data specification, and then upload the image to be marked.
In addition, the image to be annotated may be an image stored in a network, for example, an image stored in a network hard disk of a server, and in the case that the image to be annotated is an image located in the network hard disk, the annotation request may include an access address of the image to be annotated in the network hard disk. The user can input the access address of the image to be labeled and then sends a labeling request through the client.
In addition, the annotation request may also include information such as an annotation task type and an annotation task parameter, which may be specifically referred to a possible implementation manner provided in the present disclosure and is not described herein again.
In step S12, based on the labeling request, an image labeling end is requested to label the image to be labeled;
After receiving the labeling request sent by the client, the server requests the image labeling end to label the image to be labeled based on the request. The image labeling end may be located on a server; since the server has strong computing power, it can provide a fast labeling service.
The server side may forward the image to be labeled in the labeling request to the image labeling end together with information such as the labeling task type and the labeling task parameters, and the image labeling end performs the labeling task on the image to be labeled. After finishing the labeling task, the image labeling end returns the labeling result to the server side.
In step S13, the labeling result of the image to be labeled returned by the image labeling end is received and stored;
After receiving the labeling result of the image to be labeled returned by the image labeling end, the server side stores it, for example in a Remote Dictionary Server (Redis) database. Redis is an in-memory database that supports persistence: because memory is volatile, Redis writes the in-memory data to disk to guarantee that the data persists, which is why this is also called a data persistence operation. The labeling result may also be stored in other types of databases, and the present disclosure is not limited in this respect.
For a specific storage manner of the annotation result provided by the present disclosure, reference may be made to possible implementation manners provided by the present disclosure, and details are not described herein.
In step S14, upon receiving the labeling progress query request for the image to be labeled sent by the client, the labeling result is sent to the client.
The client can query the labeling progress of the image to be labeled by sending a labeling progress query request to the server. The labeling progress query request may be triggered manually by the user, that is, the client sends it to the server after receiving the user's operation instruction, or it may be sent automatically at regular intervals, that is, the client sends it to the server periodically according to a preset time interval.
In one possible implementation, the method further includes: determining the labeling progress of the image labeling end on the image to be labeled, where the labeling progress is one of labeling in progress, labeling succeeded, and labeling failed; and sending the labeling progress to the client upon receiving a labeling progress query request sent by the client.
The image labeling end returns the labeling result to the server side when labeling succeeds and returns a failure result when labeling fails; if the server side has received neither, the image to be labeled is still being labeled. In addition, the server side can also actively send a progress query request to the image labeling end to ask about the labeling progress of the image to be labeled. While the image labeling end is labeling the image, it may return a specific progress percentage to the server side, for example 50% complete.
After determining the labeling progress of the image labeling end, the server side sends the progress to the client upon receiving a labeling progress query request from the client. Once the labeling result has been received from the image labeling end, that is, when labeling has succeeded, the labeling result can be sent to the client at the same time.
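As an illustration only (not part of the claimed method), the three progress states named above could be represented and reported on the server side as in the following minimal Python sketch; all names are invented for this example.

    from enum import Enum

    class AnnotationProgress(Enum):
        ANNOTATING = "annotating"   # the image labeling end has not yet returned a result
        SUCCEEDED = "succeeded"     # the labeling result has been received and stored
        FAILED = "failed"           # the image labeling end reported a labeling failure

    def progress_reply(progress, result=None):
        # Build the reply to a labeling progress query; the result is attached only on success
        reply = {"progress": progress.value}
        if progress is AnnotationProgress.SUCCEEDED and result is not None:
            reply["result"] = result
        return reply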
In the embodiments of the present disclosure, after receiving the labeling request that the client sends to request labeling of an image to be labeled, the server requests the image labeling end to label the image to be labeled, receives the labeling result of the image to be labeled returned by the image labeling end, and stores the labeling result; upon receiving a labeling progress query request sent by the client, the server sends the labeling result to the client. The server thus acts as middleware between the client and the image labeling end: it handles the client's requests and stores the labeling result of the image labeling end, so that when a labeling progress query request from the client is received, the stored labeling result can be sent to the client directly, which avoids repeated labeling and improves labeling efficiency.
In a possible implementation manner, after receiving the labeling request sent by the client, the method further includes: generating a task identifier corresponding to the labeling request and sending the task identifier to the client; and the receiving and storing of the labeling result of the image to be labeled returned by the image labeling end includes: storing the labeling result of the image to be labeled in association with the task identifier.
In this implementation, the task identifier (id) of the labeling request serves as a tracking identifier throughout the labeling task flow. After the task identifier is generated, it is returned to the client so that the client can query the progress later.
When the labeling result is stored, it can be stored in association with the task identifier, which makes the subsequent query of the labeling result convenient. Taking storage in a Redis database as an example: Redis is a cache system with an index-key (key) and index-value (value) structure that supports queries by index key; the index key acts as the primary key of a stored object and uniquely identifies an object record, and the index value is the object record corresponding to that primary key, so an object record can be looked up by its primary key in the Redis cache system. The task identifier can therefore be used as the key and the labeling result as the value, and the pair stored as one data record, which realizes the associated storage of the labeling result of the image to be labeled and the task identifier.
In the embodiments of the present disclosure, the task identifier corresponding to the labeling request is generated and sent to the client, and the labeling result of the image to be labeled is then stored in association with the task identifier. The task identifier thus serves as a tracking tag in the labeling task flow, which makes the subsequent query of the labeling result fast and improves labeling efficiency.
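A minimal sketch of this associated storage, assuming a local Redis instance and the redis-py client; the key prefix and function names are illustrative assumptions, not the patent's schema.

    import json
    import uuid
    import redis  # redis-py client, assumed to be installed

    r = redis.Redis(host="localhost", port=6379, db=0)

    def new_task_id():
        # A task identifier generated when the labeling request is received and returned to the client
        return str(uuid.uuid4())

    def store_labeling_result(task_id, result):
        # The task identifier is the index key; the labeling result is the index value
        r.set("annotation:" + task_id, json.dumps(result))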
In a possible implementation manner, the labeling progress query request includes the task identifier, and the sending of the labeling result to the client includes: sending the labeling result stored in association with the task identifier to the client.
After receiving the task identifier, the front end can send the labeling progress query request to the server according to the task identifier, that is, the labeling progress query request can carry the task identifier. After receiving the labeling progress query request, the server parses the task identifier from it, queries the labeling progress by the task identifier, and, if the labeling has succeeded, looks up the labeling result stored in association with the task identifier in the database and sends it to the client.
In the embodiments of the present disclosure, the labeling progress query request carries the task identifier, so the server can send the labeling result stored in association with that identifier to the client. The server thereby, in the form of middleware between the client and the image labeling end, stores and delivers the labeling result, improving the labeling efficiency for the image to be labeled.
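As a companion to the storage sketch above, serving the progress query could look like the following; again the key prefix and field names are assumptions made for illustration.

    import json
    import redis  # redis-py client, assumed to be installed

    r = redis.Redis(host="localhost", port=6379, db=0)

    def handle_progress_query(task_id):
        # The task identifier parsed from the query request is used to look up the stored result
        raw = r.get("annotation:" + task_id)
        if raw is None:
            return {"progress": "annotating"}                      # nothing stored yet
        return {"progress": "succeeded", "result": json.loads(raw)}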
In a possible implementation manner, the receiving of the labeling progress query request sent by the client includes: receiving the labeling progress query request sent by the client in a polling manner.
The client sends the progress query request at regular intervals to ask the server about the labeling progress, so the server can return the labeling result to the client promptly once labeling succeeds, which improves labeling efficiency.
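Seen from the client's side, such polling could be sketched as follows; the server URL, endpoint path and response fields are hypothetical and only illustrate the idea.

    import time
    import requests  # assumed to be installed

    SERVER_URL = "http://server.example/api/annotation"  # hypothetical server endpoint

    def poll_progress(task_id, interval_s=5.0):
        # Query the labeling progress at a fixed interval until a final state is reached
        while True:
            reply = requests.get(SERVER_URL + "/progress",
                                 params={"task_id": task_id}, timeout=10).json()
            if reply.get("progress") in ("succeeded", "failed"):
                return reply
            time.sleep(interval_s)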
In a possible implementation manner, the labeling request includes labeling task information, and the labeling task information includes: information of the image to be labeled, a labeling task type, and labeling task parameters; requesting, based on the labeling request, the image labeling end to label the image to be labeled includes: encapsulating the labeling task information into labeling task data in a predetermined data format, sending the labeling task data to the image labeling end, and requesting the image labeling end to label the image to be labeled.
In this implementation manner, the information of the image to be labeled may be the image itself, i.e., a picture or a video, or the network storage address of the image.
The annotation task type is used for representing different annotation tasks, and at the image annotation end, the annotation task type can be used for identifying an annotation model for executing different annotation tasks, for example, a model for annotating a vehicle in an image, or a model for framing and annotating a face in the image, and the like.
In the client, a selection option of the annotation task type can be displayed, and after the user selects the annotation task type, the annotation task type in the annotation request can be determined. Or, a selection option of the annotation model may be displayed in the client, and after the user selects the annotation model, the type of the annotation task in the annotation request may be determined. The image labeling end can determine a labeling model used when the labeling task is executed according to the type of the labeling task.
The labeling task parameters indicate the parameter information used when the labeling task is executed. For example, when labeling training samples for a tracking model, since the tracking model tracks the position of an object across consecutive frames, the labeling task parameters may be the start position and the end position within the video (the image to be labeled), or the positions of the consecutive video frames. The image labeling end performs the labeling operation on the image to be labeled according to these parameters, for example labeling the video frames between the start position and the end position indicated by the labeling task parameters.
In this implementation manner, the annotation task information may be encapsulated into annotation task data in a predetermined data format, the annotation task data is sent to the image annotation terminal, and the image annotation terminal is requested to annotate the image to be annotated. The predetermined data format may be a format of data supported by the image annotation end, for example, a Json format.
In the embodiments of the present disclosure, the labeling task information is encapsulated into labeling task data in a predetermined data format, the labeling task data is then sent to the image labeling end, and the image labeling end is requested to label the image to be labeled. The server side, acting as middleware, thus converts the client's labeling request into the predetermined data format supported by the image labeling end, so that the client does not need to adapt to the specific request format of the image labeling end, which reduces the coupling with the image labeling end.
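A minimal sketch of this encapsulation step, assuming Json as the predetermined format; the field names are invented for illustration and are not the patent's actual schema.

    import json

    def build_task_data(image_info, task_type, task_params):
        # Encapsulate the labeling task information in the predetermined Json format
        return json.dumps({
            "image_info": image_info,    # an identifier or network storage address of the image
            "task_type": task_type,      # selects the labeling model at the image labeling end
            "task_params": task_params,  # e.g. the start and end frame for a tracking task
        })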
In one possible implementation manner, the server communicates with the image labeling end using an asynchronous communication mechanism.
In an asynchronous communication mechanism, data can be transmitted in units of characters or bytes that form character frames. The character frames are sent frame by frame by the sending end over the transmission line and received frame by frame by the receiving end. The sending end and the receiving end control sending and receiving with their own clocks; the two clock sources are independent of each other and need not be synchronized. In the embodiments of the present disclosure, the server communicates with the image labeling end through an asynchronous communication mechanism, which reduces communication problems caused by network fluctuation and raises the communication success rate, thereby improving the labeling efficiency for the image to be labeled.
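One common reading of an asynchronous mechanism in a web service, offered here only as an interpretation and not as the patent's implementation, is that the server hands the labeling task off without blocking while it waits for the image labeling end; a minimal sketch under that assumption, with a hypothetical endpoint:

    import threading
    import requests  # assumed to be installed

    ANNOTATOR_URL = "http://annotator.example/annotate"  # hypothetical image labeling end endpoint

    def dispatch_labeling_task(task_data):
        # Hand the labeling task off in the background so the client-facing request is not blocked
        def _send():
            try:
                requests.post(ANNOTATOR_URL, json=task_data, timeout=60)
            except requests.RequestException:
                pass  # a real service would record the failure so the progress query can report it
        threading.Thread(target=_send, daemon=True).start()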
The annotation method provided by the embodiments of the present disclosure uses the server as middleware to process the annotation request that the client sends toward the image labeling end. The method is now described with the client as the execution subject. The client may be located in a terminal device, and the method executed by such an electronic device; the terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory.
It is understood that the implementation of the method by the terminal device is only an exemplary illustration, and should not be construed as a limitation of the method.
Fig. 2 is a flowchart illustrating an annotation method according to an embodiment of the disclosure, and as shown in fig. 2, the annotation method applied to a client includes:
In step S21, a labeling request is sent to the server, where the labeling request is used to request labeling of the image to be labeled, so that the server requests, based on the labeling request, an image labeling end to label the image to be labeled;
The labeling request includes: information of the image to be labeled, a labeling task type, and labeling task parameters. The user can select the image information to be labeled, the labeling task type, and the labeling task parameters in the client interface, and after finishing the selection can issue an operation instruction to the client to instruct it to send the labeling request to the server.
In step S22, a request for querying a labeling progress is sent to the server, so as to receive a labeling result returned by the server when the server stores a labeling result of the to-be-labeled image returned by the image labeling end.
After sending the labeling request to the server, the client may send a labeling progress query request to the server to ask about the labeling progress of the image labeling end on the image to be labeled. The labeling progress query request may be triggered manually by the user, that is, sent to the server after the client receives the user's operation instruction, or it may be sent automatically to the server at regular intervals according to a preset time interval.
When the server has stored the labeling result of the image to be labeled returned by the image labeling end, the server sends the labeling result to the client, and the client receives the labeling result returned by the server.
In the embodiments of the present disclosure, the client sends a labeling request to the server, where the labeling request is used to request labeling of an image to be labeled, so that the server requests, based on the labeling request, an image labeling end to label the image to be labeled; the client then sends a labeling progress query request to the server so as to receive the labeling result returned by the server once the server has stored the labeling result of the image to be labeled returned by the image labeling end. The client therefore does not send the labeling request and the labeling progress query request directly to the image labeling end; it sends them to the server, which acts as middleware between the client and the image labeling end, handles the client's requests, and stores the labeling result of the image labeling end, so that when a labeling progress query request from the client is received, the stored labeling result can be sent to the client directly. This avoids repeated labeling after a failed download and improves labeling efficiency.
Please refer to fig. 3, which is a schematic structural diagram of a labeling system provided by the present disclosure, the system includes a client 31, a server 32, an image labeling end 33, and a storage module 34.
The workflow diagram of the labeling system is shown in fig. 4, and includes:
in step S41, the client sends a labeling request to the server, where the labeling request is used to request labeling of the image to be labeled;
in step S42, the server receives a labeling request sent by the client, and requests the image labeling end to label the image to be labeled based on the labeling request;
in step S43, the image annotation end annotates the image to be annotated, and sends the annotation result to the server after the annotation is completed;
when the annotation fails, the image annotation terminal also sends the result of the annotation failure to the server terminal.
In step S44, the server receives and stores the labeling result of the to-be-labeled image returned by the image labeling end;
in step S45, the client sends a request for querying the annotation progress to the server to request querying the annotation progress;
in step S46, the server sends the annotation result stored by the server to the client when receiving the annotation progress query request for the image to be annotated sent by the client.
In the embodiments of the present disclosure, the server acts as middleware between the client and the image labeling end: it handles the client's requests and stores the labeling result of the image labeling end, so that when a labeling progress query request from the client is received, the stored labeling result can be sent to the client directly, which avoids repeated labeling and improves labeling efficiency.
An application scenario of the embodiment of the present disclosure is explained below. In the application scene, the image to be labeled is a video frame, and the labeling task is to label a training sample used for tracking model training. The following provides an exemplary description of the labeling method provided by the present disclosure in this application scenario.
A user selects the video frames to be labeled through the client, selects the labeling task type as tracking-model training-sample labeling, and enters the labeling task parameters: the 11th frame of the video is the start frame, the 100th frame is the end frame, and the position of the tracked object is marked manually in the start frame and the end frame. After the user finishes the selection, the client is triggered to send a labeling request to the server.
After receiving the labeling request, the server generates an id identifying the labeling task, returns the id to the client, converts the data in the labeling request into the Json format, and sends the Json data to the image labeling end. The image labeling end labels the tracked object in the video frames between the 11th frame and the 100th frame and, after the labeling is completed, sends the labeling result to the server side. The server stores the labeling result in association with the id in a Redis database.
The client sends labeling progress query requests to the server in a polling manner, each carrying the id of the labeling task. After receiving such a request, the server parses the id from it to query the labeling progress; if the labeling has succeeded, the server looks up the labeling result stored in association with the id in the Redis database and sends it to the client. Once the client has received the labeling result, the labeling of the image to be labeled is accomplished.
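Purely for illustration, the Json data forwarded to the image labeling end in this scenario might look like the following; every field name and value here is a hypothetical example, not the patent's actual schema.

    import json

    # A hypothetical payload the server side might forward for the scenario above
    task_data = json.dumps({
        "task_id": "TASK-0001",                                   # illustrative identifier
        "image_info": "https://example.com/videos/sample.mp4",    # hypothetical storage address
        "task_type": "tracking_sample_annotation",
        "task_params": {"start_frame": 11, "end_frame": 100},     # frames to be labeled
    })
    print(task_data)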
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; owing to space limitations, the details are not repeated in this disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a labeling apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the labeling methods provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the method sections, and details are not repeated here.
Fig. 5 is a block diagram of an annotation device according to an embodiment of the disclosure, and as shown in fig. 5, the annotation device is applied to a server, and includes:
a labeling request receiving module 51, configured to receive a labeling request sent by a client, where the labeling request is used to request to label an image to be labeled;
a labeling request module 52, configured to request an image labeling end to label the image to be labeled based on the labeling request;
a labeling result receiving module 53, configured to receive and store a labeling result of the to-be-labeled image returned by the image labeling end;
and an annotation result sending module 54, configured to send the annotation result to the client when receiving an annotation progress query request for the image to be annotated sent by the client.
In a possible implementation manner, the labeling request receiving module is further used for generating, after receiving the labeling request sent by the client, a task identifier corresponding to the labeling request and sending the task identifier to the client; and the labeling result receiving module is used for storing the labeling result of the image to be labeled in association with the task identifier.
In a possible implementation manner, the labeling progress query request includes the task identifier, and the labeling result sending module is used for sending the labeling result stored in association with the task identifier to the client.
In a possible implementation manner, the apparatus further includes: a labeling progress determining module, used for determining the labeling progress of the image labeling end on the image to be labeled, where the labeling progress is one of labeling in progress, labeling succeeded, and labeling failed; and a labeling progress sending module, used for sending the labeling progress to the client upon receiving a labeling progress query request sent by the client.
In a possible implementation manner, the labeling request includes labeling task information, and the labeling task information includes: information of the image to be labeled, a labeling task type, and labeling task parameters; the labeling request module is used for encapsulating the labeling task information into labeling task data in a predetermined data format, sending the labeling task data to the image labeling end, and requesting the image labeling end to label the image to be labeled.
In a possible implementation manner, the server communicates with the image labeling end using an asynchronous communication mechanism.
In a possible implementation manner, the labeling request receiving module is used for receiving the labeling progress query request sent by the client in a polling manner.
Fig. 6 is a block diagram of an annotation device according to an embodiment of the disclosure, and as shown in fig. 6, the annotation device is applied to a client, and includes:
a labeling request sending module 61, used for sending a labeling request to the server, where the labeling request is used to request labeling of an image to be labeled, so that the server requests, based on the labeling request, an image labeling end to label the image to be labeled;
a progress query request sending module 62, used for sending a labeling progress query request to the server, so as to receive the labeling result returned by the server when the server has stored the labeling result of the image to be labeled returned by the image labeling end.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various vision-related algorithms, by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining the virtual and the real that matches specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker or sign associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can relate not only to interactive scenarios such as navigation, explanation, reconstruction and virtual effect overlay display related to real scenes or articles, but also to special effect processing related to people, such as interactive scenarios like makeup beautification, body beautification, special effect display and virtual model display. The detection or identification of the relevant features, states and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system from Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A labeling method, applied to a server, comprising:
receiving a labeling request sent by a client, wherein the labeling request is used for requesting to label an image to be labeled;
requesting, based on the labeling request, an image labeling end to label the image to be labeled;
receiving a labeling result of the image to be labeled returned by the image labeling end, and storing the labeling result;
and sending the labeling result to the client in a case that a labeling progress query request for the image to be labeled sent by the client is received.
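By way of illustration only and not as part of the claims, the flow of claim 1 can be pictured with the following minimal Python sketch. It assumes an in-memory dictionary as the result store and a stub standing in for the image labeling end; all function names, field names and the shape of the returned result are assumptions, not details taken from the disclosure.

    # In-memory store of labeling results; a real server would use a database.
    result_store = {}

    def request_image_labeling_end(image_to_label):
        # Stub for the image labeling end; the structure of the result is assumed.
        return {"image": image_to_label, "boxes": [[10, 20, 50, 60]], "labels": ["person"]}

    def on_labeling_request(labeling_request):
        """Receive the labeling request, ask the labeling end, and store the returned result."""
        image = labeling_request["image_to_label"]
        labeling_result = request_image_labeling_end(image)
        result_store[image] = labeling_result
        return "accepted"

    def on_progress_query(progress_query):
        """Return the stored labeling result when the client queries the labeling progress."""
        return result_store.get(progress_query["image_to_label"])

    if __name__ == "__main__":
        on_labeling_request({"image_to_label": "img_001.jpg"})
        print(on_progress_query({"image_to_label": "img_001.jpg"}))

The call to the labeling end is made synchronously here purely for brevity; an asynchronous variant in the spirit of claim 6 is sketched after claim 7 below.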
2. The method according to claim 1, wherein after the receiving the labeling request sent by the client, the method further comprises:
generating a task identifier corresponding to the labeling request, and sending the task identifier to the client;
wherein the receiving and storing the labeling result of the image to be labeled returned by the image labeling end comprises:
storing the labeling result of the image to be labeled in association with the task identifier.
3. The method according to claim 2, wherein the labeling progress query request includes the task identifier; and
the sending the labeling result to the client comprises:
sending the labeling result stored in association with the task identifier to the client.
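Claims 2 and 3 add a task identifier that keys the stored result. A hedged sketch of one way this could be handled, assuming uuid4 identifiers and an in-memory mapping (neither of which the claims prescribe):

    import uuid

    # Labeling results stored in association with the task identifier.
    results_by_task = {}

    def create_task(labeling_request):
        """Generate a task identifier for the labeling request; it is returned to the client."""
        task_id = str(uuid.uuid4())
        results_by_task[task_id] = None      # placeholder until the labeling end responds
        return task_id

    def store_result(task_id, labeling_result):
        """Store the labeling result in association with the task identifier."""
        results_by_task[task_id] = labeling_result

    def query_result(task_id):
        """The progress query carries the task identifier; look up the associated result."""
        return results_by_task.get(task_id)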
4. The method according to any one of claims 1-3, further comprising:
determining a labeling progress of the image labeling end for the image to be labeled, wherein the labeling progress comprises one of: labeling in progress, labeling succeeded, and labeling failed;
and sending the labeling progress to the client in a case that a labeling progress query request sent by the client is received.
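The three progress values of claim 4 could, for example, be modelled as an enumeration; the state names and the fields of the task record below are assumptions chosen for readability.

    from enum import Enum

    class LabelingProgress(Enum):
        LABELING = "labeling"                  # the labeling end is still working
        SUCCEEDED = "labeling_succeeded"       # a labeling result has been stored
        FAILED = "labeling_failed"             # the labeling end reported an error

    def determine_progress(task_record):
        """Map a stored task record onto one of the three progress values."""
        if task_record.get("error"):
            return LabelingProgress.FAILED
        if task_record.get("result") is not None:
            return LabelingProgress.SUCCEEDED
        return LabelingProgress.LABELING

    print(determine_progress({"result": {"boxes": []}}))   # LabelingProgress.SUCCEEDED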
5. The method according to any one of claims 1 to 4, wherein the labeling request includes labeling task information, and the labeling task information includes: information of the image to be labeled, a labeling task type, and labeling task parameters; and
the requesting, based on the labeling request, the image labeling end to label the image to be labeled comprises:
packaging the labeling task information into labeling task data in a preset data format, sending the labeling task data to the image labeling end, and requesting the image labeling end to label the image to be labeled.
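Claim 5 leaves the "preset data format" open. As one possibility, the sketch below packages the image information, the labeling task type and the labeling task parameters into a single JSON document before it is sent to the image labeling end; the field names and example values are assumptions.

    import json

    def package_labeling_task(image_info, task_type, task_params):
        """Package the labeling task information into one JSON payload (an assumed format)."""
        labeling_task_data = {
            "image_info": image_info,     # e.g. image URLs or storage keys
            "task_type": task_type,       # e.g. "detection", "segmentation", "tracking"
            "task_params": task_params,   # e.g. label set, tracking window, thresholds
        }
        return json.dumps(labeling_task_data)

    payload = package_labeling_task(
        image_info=["img_001.jpg", "img_002.jpg"],
        task_type="segmentation",
        task_params={"classes": ["person", "car"]},
    )
    print(payload)   # this string would then be sent to the image labeling end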
6. The method according to any one of claims 1-5, wherein the server communicates with the image labeling end using an asynchronous communication mechanism.
7. The method according to any one of claims 1-6, wherein the receiving the labeling progress query request sent by the client comprises:
receiving the labeling progress query request sent by the client in a polling mode.
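Claims 6 and 7 describe an interaction pattern rather than a concrete implementation. One possible reading, sketched with Python's asyncio (the coroutine names and the simulated delay are assumptions): the server returns immediately after handing the task to the labeling end, and the stored result becomes visible to later polling queries.

    import asyncio

    results_by_task = {}
    background_tasks = set()

    async def call_labeling_end(task_id, labeling_task_data):
        """Asynchronously ask the labeling end to label; the sleep stands in for network I/O."""
        await asyncio.sleep(0.1)
        results_by_task[task_id] = {"task": labeling_task_data, "status": "labeling_succeeded"}

    async def accept_labeling_request(task_id, labeling_task_data):
        """Return at once after scheduling the call to the labeling end (asynchronous mechanism)."""
        task = asyncio.create_task(call_labeling_end(task_id, labeling_task_data))
        background_tasks.add(task)
        task.add_done_callback(background_tasks.discard)
        return {"task_id": task_id, "status": "labeling"}

    async def main():
        print(await accept_labeling_request("t-1", {"image_info": ["img_001.jpg"]}))
        await asyncio.sleep(0.2)     # meanwhile the client keeps polling for the progress
        print(results_by_task["t-1"])

    asyncio.run(main())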
8. A labeling method, applied to a client, comprising:
sending a labeling request to a server, wherein the labeling request is used for requesting to label an image to be labeled, so as to instruct the server to request, based on the labeling request, an image labeling end to label the image to be labeled;
and sending a labeling progress query request to the server, so as to receive a labeling result returned by the server in a case that the server has stored the labeling result of the image to be labeled returned by the image labeling end.
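On the client side (claim 8, together with the polling of claim 7), the flow might look like the sketch below. It assumes the third-party requests library, hypothetical endpoint paths such as /labeling and /progress, and the status strings used above; none of these are specified by the disclosure.

    import time
    import requests   # third-party HTTP client, assumed to be available

    SERVER = "http://labeling-server.example.com"   # hypothetical server address

    def label_image(image_url, poll_interval=2.0, timeout=600.0):
        """Send a labeling request, then poll the server until the labeling result is available."""
        response = requests.post(f"{SERVER}/labeling", json={"image_to_label": image_url})
        task_id = response.json()["task_id"]
        deadline = time.time() + timeout
        while time.time() < deadline:
            progress = requests.get(f"{SERVER}/progress", params={"task_id": task_id}).json()
            if progress["status"] == "labeling_succeeded":
                return progress["labeling_result"]
            if progress["status"] == "labeling_failed":
                raise RuntimeError("labeling failed")
            time.sleep(poll_interval)   # polling mode: ask again after a short wait
        raise TimeoutError("no labeling result before the deadline")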
9. A labeling apparatus, applied to a server, comprising:
a labeling request receiving module, configured to receive a labeling request sent by a client, wherein the labeling request is used for requesting to label an image to be labeled;
a labeling requesting module, configured to request, based on the labeling request, an image labeling end to label the image to be labeled;
a labeling result receiving module, configured to receive and store a labeling result of the image to be labeled returned by the image labeling end;
and a labeling result sending module, configured to send the labeling result to the client in a case that a labeling progress query request for the image to be labeled sent by the client is received.
10. A labeling apparatus, applied to a client, comprising:
a labeling request sending module, configured to send a labeling request to a server, wherein the labeling request is used for requesting to label an image to be labeled, so as to instruct the server to request, based on the labeling request, an image labeling end to label the image to be labeled;
and a progress query request sending module, configured to send a labeling progress query request to the server, so as to receive a labeling result returned by the server in a case that the server has stored the labeling result of the image to be labeled returned by the image labeling end.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
12. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202111137177.2A 2021-09-27 2021-09-27 Labeling method and device, electronic equipment and storage medium Pending CN113839953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137177.2A CN113839953A (en) 2021-09-27 2021-09-27 Labeling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137177.2A CN113839953A (en) 2021-09-27 2021-09-27 Labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113839953A true CN113839953A (en) 2021-12-24

Family

ID=78971024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137177.2A Pending CN113839953A (en) 2021-09-27 2021-09-27 Labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113839953A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015058600A1 (en) * 2013-10-22 2015-04-30 Tencent Technology (Shenzhen) Company Limited Methods and devices for querying and obtaining user identification
CN108875769A (en) * 2018-01-23 2018-11-23 北京迈格威科技有限公司 Data mask method, device and system and storage medium
CN110753198A (en) * 2018-07-24 2020-02-04 杭州海康威视数字技术股份有限公司 Method and apparatus for storing image data
CN111176858A (en) * 2019-11-25 2020-05-19 腾讯云计算(北京)有限责任公司 Data request processing method and device
CN111324905A (en) * 2020-02-17 2020-06-23 平安国际智慧城市科技股份有限公司 Image data labeling method and device, computer equipment and storage medium
CN111554382A (en) * 2020-04-30 2020-08-18 上海商汤智能科技有限公司 Medical image processing method and device, electronic equipment and storage medium
WO2020253636A1 (en) * 2019-06-20 2020-12-24 杭州睿琪软件有限公司 Sample label information verification method and device
CN112131499A (en) * 2020-09-24 2020-12-25 腾讯科技(深圳)有限公司 Image annotation method and device, electronic equipment and storage medium
CN113065054A (en) * 2021-03-31 2021-07-02 北京达佳互联信息技术有限公司 Request processing method and device, electronic equipment and storage medium
CN113139109A (en) * 2021-04-20 2021-07-20 Oppo广东移动通信有限公司 Data labeling method, device, server and storage medium


Similar Documents

Publication Publication Date Title
CN112991553B (en) Information display method and device, electronic equipment and storage medium
KR102013329B1 (en) Method and apparatus for processing data using optical character reader
JP2021034003A (en) Human object recognition method, apparatus, electronic device, storage medium, and program
CN112432637B (en) Positioning method and device, electronic equipment and storage medium
CN113065591B (en) Target detection method and device, electronic equipment and storage medium
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN105763552B (en) Transmission method, device and system in remote control
US20220391446A1 (en) Method and device for data sharing
CN111160047A (en) Data processing method and device and data processing device
CN111158924B (en) Content sharing method and device, electronic equipment and readable storage medium
CN111950397B (en) Text labeling method, device and equipment for image and storage medium
CN113553946A (en) Information prompting method and device, electronic equipment and storage medium
CN113190307A (en) Control adding method, device, equipment and storage medium
CN111552688A (en) Data export method and device and electronic equipment
CN113839953A (en) Labeling method and device, electronic equipment and storage medium
CN114356529A (en) Image processing method and device, electronic equipment and storage medium
CN111666936A (en) Labeling method, labeling device, labeling system, electronic equipment and storage medium
CN112948411B (en) Pose data processing method, interface, device, system, equipment and medium
CN114550265A (en) Image processing method, face recognition method and system
CN114266305A (en) Object identification method and device, electronic equipment and storage medium
CN114648649A (en) Face matching method and device, electronic equipment and storage medium
CN114387622A (en) Animal weight recognition method and device, electronic equipment and storage medium
CN114153540A (en) Pre-training model issuing method and device, electronic equipment and storage medium
CN114490513A (en) File processing method and device, electronic equipment and storage medium
CN114029949A (en) Robot action editing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination