CN113435447B - Image labeling method, device and image labeling system - Google Patents


Info

Publication number: CN113435447B (application CN202110841604.9A)
Authority: CN (China)
Prior art keywords: labeling, image, annotation, result, automatic
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113435447A
Inventors: 陈勇淼, 郑佳俊, 陈翔
Assignee (current and original): Hangzhou Hikvision Digital Technology Co Ltd
Events: application filed by Hangzhou Hikvision Digital Technology Co Ltd; priority to CN202110841604.9A; publication of CN113435447A; application granted; publication of CN113435447B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/103 Formatting, i.e. changing of presentation of documents
    • G06F40/117 Tagging; Marking up; Designating a block; Setting of attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

An image labeling method, an image labeling device, and an image labeling system are provided that can improve the accuracy of image labeling. The image labeling method is applied to a labeling client and comprises the following steps: acquiring an image to be labeled; acquiring a region of interest selected by a user in the image; receiving a first automatic labeling instruction from the user for the region of interest, and sending a labeling request for the region of interest in the image to an automatic labeling module; and receiving a first labeling result for the region of interest from the automatic labeling module.

Description

Image labeling method, device and image labeling system
Technical Field
The present disclosure relates to the field of data labeling technologies, and in particular, to an image labeling method, an image labeling device, and an image labeling system.
Background
In application scenarios such as image annotation, existing schemes automatically annotate an image with an annotation algorithm model. However, such automatic labeling approaches can fail to accurately locate the position of a single object within an image. In other words, the labeling accuracy of current labeling schemes needs to be improved.
Disclosure of Invention
In view of the above, the application provides an image labeling method, an image labeling device and an image labeling system, which can improve the accuracy of image labeling.
According to one aspect of the application, there is provided an image labeling method applied to a labeling client, the method comprising:
acquiring an image to be labeled;
acquiring a region of interest selected by a user in the image;
receiving a first automatic labeling instruction from the user for the region of interest, and sending a labeling request for the region of interest in the image to an automatic labeling module;
and receiving a first labeling result for the region of interest from the automatic labeling module.
In some embodiments, the method further comprises:
taking the first labeling result as the labeling result of the image, and submitting the labeling result of the image; or alternatively
Acquiring a second automatic labeling instruction of a user on the first labeling result, and sending a labeling request of a region corresponding to the first labeling result to an automatic labeling module;
receiving a second automatic labeling result of the region corresponding to the first labeling result from the automatic labeling module;
and taking the second labeling result as the labeling result of the image, and submitting the labeling result of the image.
In some embodiments, the method further comprises:
obtaining a labeling result of an image to be reviewed, wherein the labeling result designates a labeling area in the image to be reviewed;
receiving a third automatic labeling instruction from the user for the image to be reviewed, and sending a labeling request for the labeling area in the image to be reviewed to an automatic labeling module;
receiving a third labeling result for the labeling area from the automatic labeling module;
and taking the third labeling result as the review result of the image, and submitting the review result of the image.
In some embodiments, the acquiring the user-selected region of interest in the image comprises: acquiring a region of interest selected in the image by a first user account;
the receiving the third automatic labeling instruction from the user for the image to be reviewed, and sending a labeling request for the labeling area in the image to be reviewed to an automatic labeling module, comprises: receiving a third automatic labeling instruction from a second user account for the image to be reviewed, and sending a labeling request for the labeling area in the image to be reviewed to the automatic labeling module.
In some embodiments, after obtaining the labeling result of the image to be reviewed, the method further comprises: adjusting the labeling area in the image to be reviewed according to user input from the second user account;
before taking the third labeling result as the review result of the image and submitting the review result of the image, the method further comprises: adjusting the third labeling result according to user input from the second user account.
In some embodiments, when the method does not perform the step of receiving the first automatic labeling instruction from the user for the region of interest and sending a labeling request for the region of interest in the image to an automatic labeling module, the method further comprises:
taking the region of interest itself as the labeling result of the image, and submitting the labeling result of the image.
In some embodiments, the acquiring the user selected region of interest in the image comprises:
acquiring a region of interest selected by the user from partially overlapping graphical objects in the image, wherein the region of interest contains a complete target graphical object and does not contain any complete non-target graphical object.
In some embodiments, the automatic labeling module automatically labels the region of interest based on an unsupervised learning automatic labeling algorithm model.
According to an aspect of the present application, there is provided an annotation client for executing the image annotation method described above.
According to one aspect of the present application, there is provided an image labeling apparatus, the apparatus comprising:
an image acquisition unit, configured to acquire an image to be labeled;
a region selection unit, configured to acquire a region of interest selected by a user in the image;
a labeling management unit, configured to receive a first automatic labeling instruction from the user for the region of interest, send a labeling request for the region of interest in the image to an automatic labeling module, and receive a first labeling result for the region of interest from the automatic labeling module.
According to one aspect of the present application, there is provided an image annotation system comprising: a task management device, a labeling client, and an automatic labeling module;
the task management device is used for sending an annotation task to the annotation client, wherein the annotation task designates an image to be annotated;
the labeling client is used for receiving the labeling task and acquiring the image to be labeled;
the labeling client acquires a region of interest selected by a user in the image;
the labeling client receives a first automatic labeling instruction of a user on the region of interest and sends a labeling request of the region of interest in the image to an automatic labeling module;
the automatic labeling module receives the labeling request, generates a first labeling result of the region of interest, and sends the first labeling result to the labeling client;
the labeling client receives the first labeling result from the automatic labeling module.

According to one aspect of the present application, there is provided an electronic device comprising:
a memory;
a processor;
a program stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the image annotation method described above.
According to an aspect of the present application, there is provided a storage medium storing a program comprising instructions, characterized in that the instructions, when executed by a processor, cause the processor to perform the above-described image annotation method.
In summary, the image labeling scheme obtains a region of interest according to the user's preliminary selection, and the region of interest is then automatically labeled by the automatic labeling module. Because the user preliminarily selects the region of interest, interference from regions outside it on the automatic labeling module is eliminated, so labeling accuracy can be improved.
Drawings
FIG. 1A illustrates a schematic diagram of an image annotation system according to some embodiments of the application;
FIG. 1B illustrates a schematic diagram of an image annotation system according to some embodiments of the application;
FIG. 2A illustrates a flow chart of an image annotation method 200 according to some embodiments of the application;
FIG. 2B shows a schematic diagram of an image according to one embodiment of the present application;
FIG. 2C is a schematic diagram showing a first labeling result in an image to be labeled according to one embodiment of the application;
FIG. 3 illustrates a flow chart of an image annotation method 300 according to some embodiments of the present application;
FIG. 4 illustrates a flow chart of an image annotation method 400 according to some embodiments of the present application;
FIG. 5 illustrates a flow chart of an image annotation method 500 according to some embodiments of the application;
FIG. 6 illustrates a flow chart of an image annotation method 600 according to some embodiments of the application;
FIG. 7 illustrates a flow chart of an image annotation method 700 according to some embodiments of the application;
FIG. 8 illustrates a schematic diagram of an image annotation apparatus 800 according to some embodiments of the present application;
FIG. 9 illustrates a schematic diagram of an electronic device according to some embodiments of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below by referring to the accompanying drawings and examples.
FIG. 1A shows a schematic diagram of an image annotation system according to one embodiment of the application. FIG. 1B shows a schematic diagram of an image annotation system according to one embodiment of the application. The image annotation system of FIGS. 1A and 1B may include one or more annotation clients, such as the annotation clients 111, 112, and 113 shown in the figures. In FIG. 1A, the image annotation system further comprises a task management device 120, an automatic annotation module 130, and a data storage device 140. The task management device 120, the automatic labeling module 130, and the data storage device 140 may each be, for example, a personal computer, a server device, a server cluster, or another electronic device with data processing capability, which is not limited in this application. A labeling client is likewise an electronic device such as a personal computer.
In FIG. 1B, the automatic annotation module 130 can be a stand-alone application in the annotation client or a software component of the annotation client.
The task management device 120 can generate an annotation task and distribute it to an annotation client. The annotation task specifies the image to be annotated. According to the user's selection, the labeling client can generate a labeling result for the image in either a manual labeling mode or an automatic labeling mode. In the automatic labeling mode, the user first selects a region of interest in the image; the region of interest is then automatically labeled by the automatic labeling module 130, and the labeling result is returned to the labeling client. The automatic labeling module 130 may deploy one or more labeling algorithm models and label based on them. For example, the automatic labeling module 130 may automatically label based on an unsupervised learning labeling algorithm model.
In addition, the task management device 120 may also generate a review task and distribute it to an annotation client. The review task may specify a labeled area in the image to be reviewed. According to the user's selection, the labeling client can generate a review result for the image in either a manual review mode or an automatic review mode. In the manual review mode, the user reviews the labeled area and either keeps or adjusts it, yielding the reviewed labeled area (i.e., the review result). In the automatic review mode, the labeled area is automatically labeled again by the automatic labeling module 130 and the labeling result is returned; the labeling client then takes the returned labeling result as the review result.
The task management device 120 can also store the annotation result and the review result to the data storage device 140.
In conclusion, the image annotation system can flexibly select between the manual and automatic annotation modes according to the user's choice, improving the flexibility of image annotation. In particular, in the automatic mode the system does not perform target identification over all areas of the image, but automatically labels only the partial area (i.e., the region of interest) preliminarily selected by the user, which improves both the efficiency and the accuracy of automatic labeling. In addition, the task management device 120 can flexibly send labeling tasks and review tasks to labeling clients, ensuring that the labeling task and the review task for the same image are executed by different labeling clients, which improves task execution efficiency. The image labeling system can also manage the labeling algorithm models uniformly through the automatic labeling module 130, making those models more convenient to manage. In an alternative embodiment of the present application, a user may correspond to a user account registered on the annotation client.
In some embodiments, the task management device 120 is configured to send annotation tasks to annotation clients. The annotation task specifies the image to be annotated.
The labeling client is used for receiving the labeling task and obtaining the image to be labeled.
The labeling client acquires a region of interest selected by a user in the image;
the labeling client receives a first automatic labeling instruction of a user on the region of interest and sends a labeling request of the region of interest in the image to an automatic labeling module;
the automatic labeling module receives the labeling request, generates a first labeling result of the region of interest, and sends the first labeling result to the labeling client;
the labeling client receives the first labeling result from the automatic labeling module.

FIG. 2A illustrates a flow chart of an image annotation method 200 according to some embodiments of the application. The method 200 may be applied, for example, to an annotation client.
As shown in FIG. 2A, in step S201, an image to be labeled is acquired. For example, the annotation client may obtain an annotation task from the task management device. The annotation task may specify the image to be labeled, which may be one image frame or multiple image frames. The labeling task may, for example, contain the image to be labeled itself, or an identifier of that image. Thus, step S201 may obtain the image directly from the annotation task, or fetch it by identifier from the data storage device 140 or another device storing the image.
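The patent describes step S201 abstractly, not a data format. As a minimal illustrative sketch, the following Python models a labeling task that either embeds the image or names it by identifier; the `LabelingTask` fields and the dict-backed storage device are assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelingTask:
    """A labeling task per step S201: it either embeds the image to be
    labeled or carries only an identifier for it (both are assumptions)."""
    image_id: str
    image: Optional[bytes] = None  # inline image data, if embedded in the task

def resolve_image(task: LabelingTask, storage: dict) -> bytes:
    """Return the image to be labeled: taken directly from the task when
    embedded, otherwise fetched from a storage device by identifier."""
    if task.image is not None:
        return task.image
    if task.image_id not in storage:
        raise LookupError(f"image {task.image_id!r} not found in storage")
    return storage[task.image_id]

# The data storage device (140) is modeled here as a plain dict.
store = {"img-001": b"png-bytes"}
assert resolve_image(LabelingTask("img-001"), store) == b"png-bytes"
assert resolve_image(LabelingTask("any", image=b"inline"), store) == b"inline"
```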
In step S202, a region of interest selected by a user in an image is acquired.
For example, FIG. 2B shows a schematic diagram of an image according to one embodiment of the present application. As shown in FIG. 2B, the region 20 is, for example, the region of interest determined in step S202. The region of interest shown in FIG. 2B comprises one frame-selected region, but the region of interest selected in step S202 may also comprise two or more regions.

In step S203, a first automatic labeling instruction from the user for the region of interest is received, and a labeling request for the region of interest in the image is sent to an automatic labeling module. In this way, the automatic labeling module does not need to label all regions of the image, but can automatically identify only a partial region (i.e., the region of interest). Here, the automatic labeling module may, for example, perform automatic labeling using an unsupervised labeling algorithm model trained in a deep learning manner. The automatic labeling module may be an independent application in the labeling client, a software component of the labeling client, or a terminal device or server device independent of the labeling client.
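To illustrate the labeling request of step S203, here is a minimal sketch under the assumption that an image is a list of pixel rows and the region of interest an axis-aligned box; the `Box` type and the request fields are illustrative, not specified by the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    x: int
    y: int
    w: int
    h: int

def crop(image, box):
    """Keep only the pixels inside the region of interest, so that regions
    outside it cannot interfere with the automatic labeling model."""
    return [row[box.x:box.x + box.w] for row in image[box.y:box.y + box.h]]

def make_labeling_request(image_id, roi):
    """The labeling request of step S203 (assumed shape): it names the image
    and the user-selected region of interest to be labeled."""
    return {"image_id": image_id, "roi": (roi.x, roi.y, roi.w, roi.h)}

# A 4x4 toy "image"; the ROI covers its lower-right 2x2 corner.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 2],
       [0, 0, 3, 4]]
roi = Box(x=2, y=2, w=2, h=2)
assert crop(img, roi) == [[1, 2], [3, 4]]
assert make_labeling_request("img-001", roi)["roi"] == (2, 2, 2, 2)
```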
In some embodiments, step S202 acquires a region of interest selected by the user among partially overlapping graphical objects in the image. The region of interest contains the complete target graphical object and does not contain any complete non-target graphical object. For example, two graphical objects in an image may partially overlap; the region of interest selected in step S202 contains the complete target graphical object. In this way, step S202 can exclude interference from non-target graphical objects outside the region of interest. On this basis, after step S203 sends the labeling request to the automatic labeling module, regions of the image other than the region of interest cannot interfere with the module, so the automatic labeling module can output a more accurate labeling result. In some embodiments, the region of interest acquired in step S202 does not need to be the exact circumscribed area of the graphical object; a preliminary area that contains the target graphical object suffices, and its size may be larger than the circumscribed area. Thus, step S202 reduces the difficulty of the user's operation and saves operation time.
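The containment conditions of this embodiment (the region of interest contains the complete target object, contains no complete non-target object, and may be looser than the circumscribed box) reduce to simple box arithmetic. A sketch with all names assumed for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    x: int
    y: int
    w: int
    h: int

def contains(outer: Box, inner: Box) -> bool:
    """True if `inner` lies entirely within `outer`."""
    return (outer.x <= inner.x and outer.y <= inner.y
            and inner.x + inner.w <= outer.x + outer.w
            and inner.y + inner.h <= outer.y + outer.h)

def is_valid_roi(roi: Box, target: Box, non_targets: list) -> bool:
    """Valid per the embodiment: the ROI fully contains the target object's
    box and fully contains no non-target object's box. It only needs to be
    a loose preliminary box, larger than the tight circumscribed box."""
    return contains(roi, target) and not any(contains(roi, o) for o in non_targets)

target = Box(10, 10, 20, 20)        # the object the user wants labeled
overlapping = Box(25, 25, 20, 20)   # a partially overlapping non-target
roi = Box(5, 5, 30, 30)             # loose box drawn by the user
assert is_valid_roi(roi, target, [overlapping])        # non-target only partly inside
assert not is_valid_roi(Box(0, 0, 50, 50), target, [overlapping])  # would swallow both
```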
In step S204, a first labeling result for the region of interest is received from the automatic labeling module. For example, FIG. 2C shows a schematic diagram of displaying the first labeling result in the image to be labeled according to an embodiment of the application. The first labeling result includes regions 30 and 40.
In summary, the method 200 obtains the region of interest according to the user's preliminary selection, and the region of interest is then automatically labeled by the automatic labeling module. Because the user preliminarily selects the region of interest, the method 200 eliminates interference from regions outside it on the automatic labeling module, thereby improving labeling accuracy.
In some embodiments, the labeling client may skip steps S203 and S204 and instead take the region of interest itself as the labeling result of the image and submit it. Thus, according to the user's selection, the image labeling scheme can flexibly choose between the manual labeling mode (taking the region of interest as the labeling result) and the automatic labeling mode (taking the first labeling result as the labeling result), which improves the flexibility of image labeling.
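The choice between the manual and automatic modes can be sketched as a simple dispatch; the stand-in automatic module below is purely illustrative and not the patent's algorithm model:

```python
def label(roi, auto_mode, auto_module):
    """Manual mode: the user's region of interest itself becomes the labeling
    result. Automatic mode: the ROI is sent to the automatic labeling module
    and its first labeling result is used instead."""
    if auto_mode:
        return auto_module(roi)
    return [roi]

# Stand-in for the automatic labeling module: it "tightens" the loose ROI.
def fake_auto_module(roi):
    x, y, w, h = roi
    return [(x + 2, y + 2, w - 4, h - 4)]

assert label((0, 0, 10, 10), auto_mode=False, auto_module=fake_auto_module) == [(0, 0, 10, 10)]
assert label((0, 0, 10, 10), auto_mode=True, auto_module=fake_auto_module) == [(2, 2, 6, 6)]
```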
FIG. 3 illustrates a flow chart of an image annotation method 300 according to some embodiments of the application. The method 300 may be applied, for example, to an annotation client.
As shown in FIG. 3, in step S301, an image to be labeled is acquired.
In step S302, a region of interest selected by a user in an image is acquired.
In step S303, a first automatic labeling instruction of the user on the region of interest is received, and a labeling request for the region of interest in the image is sent to the automatic labeling module. In this way, the automatic labeling module may avoid labeling all regions of the image, but may automatically identify partial regions (i.e., regions of interest) of the image.
In step S304, a first labeling result for the region of interest is received from the automatic labeling module. In some embodiments, after receiving the first labeling result, the labeling client may replace the region of interest with the first labeling result; in other embodiments, it may retain both the first labeling result and the region of interest.

In step S305, the first labeling result is taken as the labeling result of the image, and the labeling result of the image is submitted.
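The two embodiments of step S304, replacing the region of interest with the first labeling result or retaining both, might look as follows; the tuple representation of annotations is an assumption for illustration:

```python
def apply_first_result(annotations, roi, first_result, keep_roi=False):
    """Two embodiments of step S304: either the first labeling result
    replaces the region of interest, or both are retained on the image."""
    updated = [a for a in annotations if a != roi]  # drop the ROI ...
    if keep_roi:
        updated.append(roi)                        # ... unless it is retained
    updated.extend(first_result)
    return updated

roi = ("roi", 0, 0, 10, 10)
result = [("car", 1, 1, 8, 8)]
assert apply_first_result([roi], roi, result) == [("car", 1, 1, 8, 8)]
assert apply_first_result([roi], roi, result, keep_roi=True) == [roi, ("car", 1, 1, 8, 8)]
```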
In summary, the method 300 obtains the region of interest according to the user's preliminary selection and has the automatic labeling module label it automatically. Because the user preliminarily selects the region of interest, interference from regions outside it on the automatic labeling module is eliminated, improving labeling accuracy. In addition, the method 300 can submit the automatic labeling result to a device that manages labeling results, such as the task management device.
FIG. 4 illustrates a flow chart of an image annotation method 400 according to some embodiments of the application. The method 400 may be applied, for example, to an annotation client.
As shown in FIG. 4, in step S401, an image to be labeled is acquired.
In step S402, a region of interest selected by a user in an image is acquired.
In step S403, a first automatic labeling instruction of the user on the region of interest is received, and a labeling request for the region of interest in the image is sent to the automatic labeling module. In this way, the automatic labeling module may avoid labeling all regions of the image, but may automatically identify partial regions (i.e., regions of interest) of the image.
In step S404, a first labeling result of the region of interest from the automatic labeling module is received. In some embodiments, after receiving the first labeling result, the labeling client may replace the region of interest with the first labeling result. In some embodiments, after receiving the first labeling result, the labeling client may retain the first labeling result and the region of interest.
On this basis, the labeling client allows the user to review the first labeling result. For example, when the user determines that the first labeling result can be used directly as the labeling result of the image, the labeling client may take it as the labeling result and submit it. When the user determines that the accuracy of the first labeling result needs to be improved, the method 400 may perform step S405.
In step S405, a second automatic labeling instruction of the user for the first labeling result is obtained, and a labeling request for the region corresponding to the first labeling result is sent to the automatic labeling module. Therefore, the automatic labeling module can automatically label the region corresponding to the first labeling result, so that the accuracy of labeling the image is improved.
In step S406, a second automatic labeling result for the region corresponding to the first labeling result is received from the automatic labeling module.
In step S407, the second labeling result is taken as the labeling result of the image, and the labeling result of the image is submitted.
In summary, the method 400 provides the user with a review operation on the first labeling result. When the accuracy of the first labeling result is insufficient, the method 400 can have the automatic labeling module label the corresponding region again according to the user's second automatic labeling instruction, thereby improving labeling accuracy.
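The re-labeling loop of method 400 can be sketched as below. The shrinking stand-in module and the acceptance predicate model the user's second (or further) automatic labeling instructions; both are assumptions, not part of the patent:

```python
def relabel_until_accepted(roi, auto_module, accepted, max_rounds=5):
    """Method 400 (sketch): run automatic labeling once (the first result);
    while the user judges the accuracy still needs improving, re-send the
    region covered by the previous result for another automatic pass."""
    result = auto_module(roi)
    for _ in range(max_rounds - 1):
        if accepted(result):
            break
        result = auto_module(result)   # second (third, ...) labeling pass
    return result

# Stand-in module: each pass shrinks the box by 2 px on every side.
def shrink(box):
    x, y, w, h = box
    return (x + 2, y + 2, w - 4, h - 4)

tight_enough = lambda b: b[2] <= 12    # the user's acceptance criterion
assert relabel_until_accepted((0, 0, 20, 20), shrink, tight_enough) == (4, 4, 12, 12)
```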
FIG. 5 illustrates a flow chart of an image annotation method 500 according to some embodiments of the application. The method 500 may be applied, for example, to an annotation client.
As shown in FIG. 5, in step S501, a labeling result of an image to be reviewed is acquired. The labeling result designates a labeling area in the image to be reviewed. Here, the labeling client may, for example, obtain a review task, i.e., the labeling result to be reviewed, from the task management device.
In step S502, a third automatic labeling instruction from the user for the image to be reviewed is received, and a labeling request for the labeling area in the image to be reviewed is sent to the automatic labeling module.
In step S503, a third labeling result for the labeling area is received from the automatic labeling module.
In step S504, the third labeling result is taken as the review result of the image, and the review result of the image is submitted.
In summary, the method 500 can receive a review task and, according to the user's selection, have the automatic labeling module label the reviewed image's labeled area again, thereby improving labeling accuracy. In addition, the method 500 may adjust the labeling result according to user input, so that the adjusted result serves as the review result.
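Here is a minimal sketch of the review flow of method 500, assuming a review task is a dict naming the image and its labeled region; all names are illustrative assumptions:

```python
def review(task, auto_module, adjust=None):
    """Method 500 (sketch): re-run automatic labeling on the already-labeled
    region of the image under review; the returned labeling becomes the
    review result. An optional user adjustment is applied before submission."""
    relabeled = auto_module(task["labeled_region"])
    if adjust is not None:
        relabeled = adjust(relabeled)
    return {"image_id": task["image_id"], "review_result": relabeled}

task = {"image_id": "img-007", "labeled_region": (0, 0, 20, 20)}
tighten = lambda box: (box[0] + 1, box[1] + 1, box[2] - 2, box[3] - 2)
assert review(task, tighten) == {"image_id": "img-007", "review_result": (1, 1, 18, 18)}

# With a user adjustment applied after the automatic pass:
nudge = lambda box: (box[0], box[1], box[2] + 1, box[3])
assert review(task, tighten, adjust=nudge)["review_result"] == (1, 1, 19, 18)
```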
FIG. 6 illustrates a flow chart of an image annotation method 600 according to some embodiments of the application. The method 600 may be applied, for example, to an annotation client.
As shown in FIG. 6, in step S601, an image to be labeled is acquired.
In step S602, a region of interest selected in an image by a first user account is acquired. Here, the first user account is used to identify the first user. Step S602 may acquire the region of interest according to the input of the first user.
In step S603, a first automatic labeling instruction of the first user on the region of interest is received, and a labeling request for the region of interest in the image is sent to the automatic labeling module.
In step S604, a first labeling result of the region of interest from the automatic labeling module is received.
In step S605, the first labeling result is taken as the labeling result of the image, and the labeling result of the image is submitted.
In step S606, a labeling result of an image to be reviewed is obtained. The labeling result designates a labeling area in the image to be reviewed. Here, the labeling client may, for example, obtain a review task for the second user account from the task management device, i.e., obtain the labeling result to be reviewed. The second user account is used to identify the second user.
In step S607, the labeling area in the image to be reviewed is adjusted according to user input from the second user account.
In step S608, a third automatic labeling instruction from the second user account for the image to be reviewed is received, and a labeling request for the labeling area in the image to be reviewed is sent to the automatic labeling module. The automatic labeling module can then automatically label the labeling area, improving the labeling accuracy of the image under review. Because step S607 can adjust the labeling area and step S608 can instruct the automatic labeling module to label it automatically, the method 600 combines user input with automatic labeling during review, thereby improving the labeling accuracy of the image under review.
In step S609, a third labeling result for the labeling area is received from the automatic labeling module.
In step S610, the third labeling result is adjusted according to the user input of the second user account.
In step S611, the third labeling result is taken as the review result of the image, and the review result of the image is submitted.
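The labeling flow of steps S602-S605 and the review flow of steps S606-S611 can be sketched as follows. This is an illustrative Python sketch only: the names (`AnnotationClient`, `auto_label`, `LabelResult`) and the region format are assumptions for explanation, not part of the disclosed method.

```python
# Hypothetical sketch of the annotation-client flow in steps S602-S611.
# The AutoLabeler callable and the result type are assumptions.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Region = Tuple[int, int, int, int]  # assumed (x, y, width, height)

@dataclass
class LabelResult:
    regions: List[Region]
    labels: List[str]

class AnnotationClient:
    def __init__(self, auto_label: Callable[[str, Region], LabelResult]):
        # auto_label stands in for the automatic labeling module
        self.auto_label = auto_label

    def run_labeling_task(self, image_id: str, roi: Region) -> LabelResult:
        # S603/S604: forward the user's region of interest to the
        # automatic labeling module and receive the first labeling result.
        first_result = self.auto_label(image_id, roi)
        # S605: the first result is submitted as the image's labeling result.
        return first_result

    def run_review_task(self, image_id: str, labeled_region: Region,
                        adjust: Callable[[Region], Region]) -> LabelResult:
        # S607: the reviewer may first adjust the labeling area.
        adjusted = adjust(labeled_region)
        # S608/S609: request re-labeling of the adjusted area and
        # receive the third labeling result.
        third_result = self.auto_label(image_id, adjusted)
        # S610/S611: further manual adjustment omitted; the result is
        # submitted as the review result.
        return third_result
```

A reviewer-side adjustment is modeled here as a simple callable so the sketch stays independent of any particular user-interface toolkit.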
To sum up, the method 600 supports performing different operations in the same application client according to different user accounts. For example, the method 600 may obtain the region of interest of the image to be annotated according to user input of the first user account and have the automatic annotation module annotate the region of interest. In addition, the method 600 may review the labeling result of the image to be reviewed according to user input of the second user account. In this way, the labeling task and the review task of the same image can be managed by different user accounts, which facilitates fine-grained management of image labeling and further improves labeling accuracy.

FIG. 7 illustrates a flow chart of an image annotation method 700 according to some embodiments of the application. The method 700 may be performed, for example, by an image annotation system.
In step S701, the task management device is used to send an annotation task to the annotation client. The annotation task specifies the image to be annotated.
In step S702, the labeling client is used to receive the labeling task, and obtain an image to be labeled.
In step S703, the region of interest selected by the user in the image is acquired by the labeling client.
In step S704, the labeling client receives a first automatic labeling instruction of the user on the region of interest, and sends a labeling request on the region of interest in the image to the automatic labeling module.
In step S705, the automatic labeling module receives the labeling request, generates a first labeling result for the region of interest, and sends the first labeling result to the labeling client.
In step S706, the labeling client receives the first labeling result from the automatic labeling module.
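The S701-S706 interaction among the task management device, the labeling client, and the automatic labeling module can be illustrated with a minimal single-process sketch. All names (`TaskManager`, `AutoLabelModule`, `LabelingClient`) and the dictionary-based message format are assumptions for illustration only; the patent does not prescribe any particular API or transport.

```python
# Minimal, single-process sketch of the S701-S706 message flow.
# Class names and message shapes are illustrative assumptions.

class TaskManager:
    def send_task(self, image_id: str) -> dict:
        # S701: the labeling task specifies the image to be annotated.
        return {"type": "label", "image_id": image_id}

class AutoLabelModule:
    def handle_request(self, image_id: str, roi) -> dict:
        # S705: generate a first labeling result for the requested ROI.
        return {"image_id": image_id, "roi": roi, "labels": ["object"]}

class LabelingClient:
    def __init__(self, module: AutoLabelModule):
        self.module = module

    def process(self, task: dict, roi) -> dict:
        # S702-S703: receive the task and the user's selected ROI.
        # S704: send an annotation request for the ROI.
        # S706: receive the first labeling result back.
        return self.module.handle_request(task["image_id"], roi)

manager, module = TaskManager(), AutoLabelModule()
client = LabelingClient(module)
result = client.process(manager.send_task("img-001"), (10, 10, 64, 64))
```

In a deployed system the three roles would communicate over a network or inter-process channel; the direct method calls here only make the ordering of the steps explicit.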
In summary, the method 700 may obtain the region of interest according to the user's preliminary selection and then have the automatic labeling module automatically label it. By letting the user pre-select the region of interest, the method 700 eliminates the interference of non-regions of interest on the automatic labeling module, thereby improving labeling accuracy.
Fig. 8 shows a schematic diagram of an image annotation device 800 according to one embodiment of the application. The apparatus 800 may be applied, for example, to an annotation client.
The apparatus 800 includes: an image acquisition unit 801, a region selection unit 802, and an annotation management unit 803.
The image acquisition unit 801 acquires an image to be annotated.
The region selection unit 802 acquires a region of interest selected by a user in the image.
The annotation management unit 803 receives a first automatic annotation instruction of the user on the region of interest and sends an annotation request on the region of interest in the image to an automatic annotation module; and receiving a first labeling result of the region of interest from the automatic labeling module.
In summary, the apparatus 800 may obtain the region of interest according to the user's preliminary selection and then have the automatic labeling module automatically label it. By letting the user pre-select the region of interest, the apparatus 800 eliminates the interference of non-regions of interest on the automatic labeling module, thereby improving labeling accuracy.
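The decomposition of the apparatus 800 into its three units can be sketched as separate classes. This sketch is illustrative only; the method names and data shapes are assumptions, not the patented design.

```python
# Illustrative sketch of apparatus 800's units 801-803.
# Method names and data shapes are assumptions.

class ImageAcquisitionUnit:
    """Unit 801: acquires the image to be annotated."""
    def get_image(self, image_id: str) -> dict:
        return {"id": image_id, "pixels": None}  # placeholder payload

class RegionSelectionUnit:
    """Unit 802: acquires the region of interest selected by the user."""
    def get_roi(self, selection) -> tuple:
        return tuple(selection)  # e.g. (x, y, width, height)

class AnnotationManagementUnit:
    """Unit 803: sends an annotation request to the automatic labeling
    module and receives the first labeling result."""
    def request_labeling(self, auto_label, image: dict, roi: tuple):
        return auto_label(image, roi)
```

Keeping the units separate mirrors the apparatus description: acquisition, selection, and annotation management can evolve independently as long as the annotation management unit can reach an automatic labeling module.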
Fig. 9 illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 9, the electronic device includes one or more processors (CPUs) 902, a communication module 904, a memory 906, a user interface 910, and a communication bus 908 for interconnecting these components.
The processor 902 may receive and transmit data via the communication module 904 to enable network communication and/or local communication.
The user interface 910 includes an output device 912 and an input device 914.
Memory 906 may be a high-speed random access memory such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
Memory 906 stores a set of instructions executable by processor 902, including:
an operating system 916 including programs for handling various basic system services and for performing hardware-related tasks;
applications 918, including various programs for implementing the schemes described above; such programs can implement the processing flows in the above examples and may include, for example, an image labeling method.
In addition, each of the embodiments of the present application may be implemented by a data processing program executed by a data processing apparatus such as a computer. Obviously, the data processing program constitutes the invention. Furthermore, the data processing program is typically stored in a storage medium and is executed either by reading the program directly from the storage medium or by installing or copying the program to a storage device (such as a hard disk and/or memory) of the data processing apparatus. Therefore, such a storage medium also constitutes the present invention. The storage medium may use any type of recording means, such as a paper storage medium (e.g., paper tape), a magnetic storage medium (e.g., floppy disk, hard disk, flash memory), an optical storage medium (e.g., CD-ROM), or a magneto-optical storage medium (e.g., MO).
The present application also discloses a nonvolatile storage medium in which a program is stored. The program comprises instructions that, when executed by a processor, cause an electronic device to perform an image annotation method according to the present application.
In addition, the method steps described herein may be implemented not only by data processing programs but also by hardware, such as logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers. Hardware that can implement the image labeling method described herein may also constitute the present application.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall fall within the scope of protection of the present invention.

Claims (9)

1. An image labeling method, applied to an image labeling system, characterized in that the image labeling system comprises a task management device, at least one labeling client, and an automatic labeling module, wherein the task management device and the labeling client are each electronic devices, and the automatic labeling module is a device independent of the labeling client, or the automatic labeling module is an independent application in the labeling client or a software component of the labeling client;
the image labeling method comprises the following steps, performed on the side of the labeling client:
receiving a labeling task and/or a review task generated and distributed by the task management device, wherein the labeling task designates an image to be labeled, and the review task designates a labeling area in an image to be reviewed;
for the received labeling task:
acquiring an image to be labeled;
acquiring a region of interest selected by a first user account in the image;
receiving a first automatic labeling instruction of the user on the region of interest, and sending a labeling request for the region of interest in the image to the automatic labeling module, so that the automatic labeling module receives the labeling request, generates a first labeling result for the region of interest, and sends the first labeling result to the labeling client;
receiving a first labeling result of the region of interest from the automatic labeling module;
for the received review task:
obtaining a labeling result of an image to be reviewed, wherein the labeling result of the image to be reviewed designates a labeling area in the image to be reviewed;
receiving a third automatic annotation instruction of a second user account on the image to be reviewed, and sending an annotation request for the annotation region in the image to be reviewed to the automatic annotation module, so that the automatic annotation module receives the annotation request, generates a third annotation result for the annotation region in the image to be reviewed, and sends the third annotation result to the labeling client;
receiving a third labeling result of the labeling area from the automatic labeling module; wherein,
the labeling task and the review task of the same image are executed by different labeling clients.
2. The image annotation method of claim 1, wherein the image annotation method further comprises:
taking the first labeling result as the labeling result of the image, and submitting the labeling result of the image; or alternatively
Acquiring a second automatic labeling instruction of a user on the first labeling result, and sending a labeling request of a region corresponding to the first labeling result to an automatic labeling module;
receiving a second labeling result of the region corresponding to the first labeling result in the image to be labeled from the automatic labeling module;
and taking the second labeling result as the labeling result of the image, and submitting the labeling result of the image.
3. The image annotation method of claim 1, wherein the image annotation method further comprises:
and taking the third labeling result as the review result of the image, and submitting the review result of the image.
4. The image annotation method according to claim 2, characterized in that:
after the labeling result of the image to be reviewed is obtained, the image labeling method further comprises: adjusting the labeling area in the image to be reviewed according to the user input of the second user account;
before the third labeling result is taken as the review result of the image and the review result of the image is submitted, the image labeling method further comprises: adjusting the third labeling result according to the user input of the second user account.
5. The image annotation method of claim 1, wherein, in the event that the method does not perform the step of receiving a first automatic annotation instruction of the user on the region of interest and sending an annotation request for the region of interest in the image to the automatic annotation module, the image annotation method further comprises:
and taking the region of interest as a labeling result of the image, and submitting the labeling result of the image.
6. The image annotation method of claim 1, wherein the acquiring the region of interest selected by the user in the image comprises:
and acquiring a region of interest selected by a user from the partially overlapped graphic objects in the image, wherein the region of interest comprises a complete target graphic object and does not comprise a complete non-target graphic object.
7. The image labeling method of claim 1, wherein the automatic labeling module automatically labels the region of interest based on an unsupervised learning automatic labeling algorithm model.
8. An image labeling device, characterized in that the image labeling device comprises:
the image acquisition unit is used for receiving a labeling task and/or a review task generated and distributed by a task management device in an image labeling system, wherein the labeling task designates an image to be labeled, and the review task designates a labeling area in an image to be reviewed, so as to acquire the image to be labeled and/or the labeling result of the image to be reviewed;
the region selection unit is used for acquiring a region of interest selected by the first user account in the image;
the annotation management unit is used for, for the received annotation task: receiving a first automatic labeling instruction of the user on the region of interest, and sending a labeling request for the region of interest in the image to an automatic labeling module in the image labeling system, so that the automatic labeling module receives the labeling request, generates a first labeling result for the region of interest, and sends the first labeling result to the image labeling device; and receiving the first labeling result of the region of interest from the automatic labeling module; and, for the received review task: obtaining a labeling result of an image to be reviewed, wherein the labeling result designates a labeling area in the image to be reviewed; receiving a third automatic annotation instruction of a second user account on the image to be reviewed, and sending an annotation request for the annotation region in the image to be reviewed to the automatic annotation module, so that the automatic annotation module receives the annotation request, generates a third annotation result for the annotation region, and sends the third annotation result to the image labeling device; and receiving the third labeling result of the labeling area from the automatic labeling module;
wherein,
the automatic labeling module is a device independent of the image labeling device, or the automatic labeling module is an independent application in the image labeling device or a software component of the image labeling device;
the labeling task and the review task of the same image are executed by different image labeling devices.
9. An image annotation system, comprising: a task management device, at least one labeling client, and an automatic labeling module, wherein the task management device and the labeling client are each electronic devices, and the automatic labeling module is a device independent of the labeling client, or the automatic labeling module is an independent application in the labeling client or a software component of the labeling client;
the task management device is used for sending a labeling task and/or a review task to the labeling client, wherein the labeling task designates an image to be labeled, and the review task designates a labeling area in an image to be reviewed;
the annotation client is used for:
for the received labeling task:
acquiring the image to be labeled;
acquiring a region of interest selected by a user in the image;
receiving a first automatic labeling instruction of a user on the region of interest, and sending a labeling request of the region of interest in the image to an automatic labeling module;
receiving a first labeling result of the region of interest from the automatic labeling module;
for the received review task:
obtaining a labeling result of an image to be reviewed, wherein the labeling result of the image to be reviewed designates a labeling area in the image to be reviewed;
receiving a third automatic annotation instruction of a user on the image to be reviewed, and sending an annotation request for the annotation region in the image to be reviewed to the automatic annotation module;
receiving a third labeling result of the labeling area from the automatic labeling module;
the automatic labeling module is used for: receiving the labeling request for the region of interest in the image, generating a first labeling result for the region of interest, and sending the first labeling result to the labeling client; and receiving the labeling request for the labeling area in the image to be reviewed, generating a third labeling result, and sending the third labeling result to the labeling client,
wherein,
the labeling task and the review task of the same image are executed by different labeling clients.
CN202110841604.9A 2021-07-26 2021-07-26 Image labeling method, device and image labeling system Active CN113435447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110841604.9A CN113435447B (en) 2021-07-26 2021-07-26 Image labeling method, device and image labeling system


Publications (2)

Publication Number Publication Date
CN113435447A CN113435447A (en) 2021-09-24
CN113435447B true CN113435447B (en) 2023-08-04

Family

ID=77761713


Country Status (1)

Country Link
CN (1) CN113435447B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805959A (en) * 2018-04-27 2018-11-13 淘然视界(杭州)科技有限公司 A kind of image labeling method and system
CN110598032A (en) * 2019-09-25 2019-12-20 京东方科技集团股份有限公司 Image tag generation method, server and terminal equipment
CN111797832A (en) * 2020-07-14 2020-10-20 成都数之联科技有限公司 Automatic generation method and system of image interesting region and image processing method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600165B2 (en) * 2010-02-12 2013-12-03 Xerox Corporation Optical mark classification system and method
WO2015080522A1 (en) * 2013-11-28 2015-06-04 Samsung Electronics Co., Ltd. Method and ultrasound apparatus for marking tumor on ultrasound elastography image
CN108985293A (en) * 2018-06-22 2018-12-11 深源恒际科技有限公司 A kind of image automation mask method and system based on deep learning
CN109949907B (en) * 2019-03-29 2021-07-13 西安交通大学 Cloud-based large pathology image collaborative annotation method and system
CN110675940A (en) * 2019-08-01 2020-01-10 平安科技(深圳)有限公司 Pathological image labeling method and device, computer equipment and storage medium
CN110737785B (en) * 2019-09-10 2022-11-08 华为技术有限公司 Picture labeling method and device
CN110929729B (en) * 2020-02-18 2020-08-04 北京海天瑞声科技股份有限公司 Image annotation method, image annotation device and computer storage medium
CN111695613B (en) * 2020-05-28 2023-01-24 平安科技(深圳)有限公司 Data annotation system, computer-readable storage medium, and electronic device
CN111915585A (en) * 2020-07-29 2020-11-10 深圳市商汤科技有限公司 Image annotation method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant