CN113392263A - Data labeling method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN113392263A
CN113392263A (application CN202110703208.XA)
Authority
CN
China
Prior art keywords
labeling
tool
target
marking
labeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110703208.XA
Other languages
Chinese (zh)
Inventor
牛菜梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd
Priority to CN202110703208.XA
Publication of CN113392263A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval of video data
    • G06F16/71 - Indexing; Data structures therefor; Storage structures
    • G06F16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 - Retrieval using manually generated information, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces

Abstract

The disclosure relates to a data labeling method and device, an electronic device, and a storage medium. The method is applied to the field of intelligent security and includes the following steps: acquiring an object to be labeled in an intelligent security scene; presenting, on a user interaction interface, the target labeling tool that corresponds, among a plurality of labeling tools, to the labeling task type of the object to be labeled; and obtaining labeling information for the object to be labeled in response to a labeling operation performed with the target labeling tool. Embodiments of the disclosure can quickly and accurately provide a labeling tool according to the labeling task type, offering high convenience and improving data labeling efficiency in the intelligent security field. In addition, a systematic data annotation scheme can be provided for intelligent security enterprises, meeting the field's research and development needs for data annotation and reducing the development resources invested in the research and development process.

Description

Data labeling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data annotation method and apparatus, an electronic device, and a storage medium.
Background
With the ever wider application of artificial intelligence technology in the security field, intelligent security brings great convenience to people's work and daily life. For example, intelligent security can automatically detect urban violations that affect the urban environment and appearance, such as illegal construction, road-occupying business operation, and illegal parking.
However, as intelligent security application scenarios increase, the data labeling schemes available from different suppliers are scattered across scenarios; each is too narrow and insufficiently systematic, so a suitable labeling tool cannot be provided to users conveniently and quickly, and convenience is low.
Disclosure of Invention
The present disclosure provides a data annotation technical solution.
According to one aspect of the disclosure, a data annotation method is provided, which is applied to the field of intelligent security and protection, and comprises the following steps:
acquiring an object to be marked in an intelligent security scene;
presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and responding to the labeling operation executed based on the target labeling tool to obtain the labeling information of the object to be labeled.
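The three steps above can be sketched as a minimal flow. All names below (`run_labeling`, `tools_for`, `perform_operation`) are illustrative assumptions introduced for this sketch, not part of the disclosure:

```python
def run_labeling(obj, task_type, tools_for, perform_operation):
    """Minimal sketch of the disclosed three-step flow."""
    # Step S11: acquire the object to be labeled (passed in as `obj`).
    # Step S12: present the target tool(s) for the labeling task type.
    target_tools = tools_for(task_type)
    # Step S13: obtain labeling information from the tool operation.
    return {"object": obj, "tools": target_tools,
            "info": perform_operation(target_tools)}
```

A caller supplies the task-to-tool lookup and the interactive operation, so the sketch stays independent of any concrete user interface.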
In a possible implementation manner, the presenting, to a user interaction interface, a target annotation tool corresponding to an annotation task type of the object to be annotated in the plurality of annotation tools includes:
receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises an intelligent security field;
and according to the determined target data field, determining a target marking tool for marking the object to be marked in the target data field from the tool set, opening the use permission of the target marking tool, and displaying the use permission on a marking interface for a user to use.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes:
and responding to the selection operation of the user on the plurality of marking tools, and taking the marking tool selected by the user as a target marking tool.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes:
identifying the labeling task type of the object to be labeled to obtain a target labeling task type;
and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
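The two selection paths described above, an explicit user selection or a lookup in the preset correspondence between labeling task types and labeling tools, can be sketched as follows; the task and tool names in `TOOL_FOR_TASK` are hypothetical examples:

```python
# Hypothetical preset correspondence between task type and tools.
TOOL_FOR_TASK = {
    "face": ("rectangle", "keypoint"),
    "vehicle": ("rectangle",),
    "license_plate": ("polygon", "text"),
}

def target_tools(task_type=None, user_choice=None):
    """Return the target tool(s): an explicit user selection wins;
    otherwise fall back to the preset correspondence."""
    if user_choice is not None:       # user picked a tool directly
        return (user_choice,)
    if task_type in TOOL_FOR_TASK:    # preset task-type -> tool mapping
        return TOOL_FOR_TASK[task_type]
    raise KeyError(f"no tool registered for task type {task_type!r}")
```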
In a possible implementation manner, the obtaining, in response to a labeling operation executed based on the target labeling tool, labeling information of the object to be labeled includes:
and responding to each step of marking operation executed based on the target marking tool, and displaying a marking result of each step of marking operation for verification of a marking person.
In a possible implementation manner, after obtaining the labeling information of the object to be labeled, the method further includes:
performing security detection by using the neural network obtained by the training of the labeled information;
and marking the image acquired in the security detection as a new object to be marked.
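The label-train-detect loop described above can be sketched as follows; the function names are illustrative placeholders for a real labeling pipeline, trainer, and detector:

```python
def security_loop(unlabeled, label_fn, train_fn, detect_fn, rounds=2):
    """Label objects, train a network on the labels, run security
    detection, and treat captured images as new objects to label."""
    model = None
    for _ in range(rounds):
        labeled = [label_fn(x) for x in unlabeled]   # annotate
        model = train_fn(labeled, model)             # train network
        unlabeled = detect_fn(model)                 # new images from detection
    return model, unlabeled
```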
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area indicated by a drawn rectangular frame as a position where a marked human face is located;
and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
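As a sketch of the face labeling operations above, the following hypothetical helper combines the two steps, a rectangle for the face position and keypoints within it; the record layout and the containment check are assumptions of this sketch:

```python
def label_face(box, keypoints):
    """box = (x1, y1, x2, y2); keypoints = {name: (x, y)}.
    Rejects keypoints that fall outside the drawn rectangle."""
    x1, y1, x2, y2 = box
    for name, (x, y) in keypoints.items():
        if not (x1 <= x <= x2 and y1 <= y <= y2):
            raise ValueError(f"keypoint {name} outside face box")
    return {"task": "face", "box": box, "keypoints": keypoints}
```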
In a possible implementation manner, the labeling task type of the object to be labeled comprises a pedestrian attribute labeling task, and the target labeling tool corresponding to the pedestrian attribute labeling task comprises a rectangular frame tool and a pedestrian attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area indicated by a drawn rectangular frame as a position where a marked pedestrian is located, and displaying the position of the marked pedestrian;
and responding to the attribute selection operation of the pedestrian attribute marking tool, determining the multilevel attribute of the pedestrian marked in the object to be marked, and displaying the multilevel attribute of the marked pedestrian.
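The multilevel pedestrian attributes above can be sketched as a two-level hierarchy; the concrete categories and values below are invented for illustration, since the disclosure does not enumerate them:

```python
# Hypothetical two-level pedestrian attribute hierarchy.
PEDESTRIAN_ATTRIBUTES = {
    "upper_body": {"clothing": ["long_sleeve", "short_sleeve"],
                   "color": ["red", "blue", "other"]},
    "lower_body": {"clothing": ["trousers", "skirt"],
                   "color": ["black", "other"]},
}

def select_attribute(category, attribute, value):
    """Validate a multilevel selection against the preset hierarchy."""
    allowed = PEDESTRIAN_ATTRIBUTES[category][attribute]
    if value not in allowed:
        raise ValueError(f"{value!r} not a valid {category}/{attribute}")
    return (category, attribute, value)
```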
In one possible implementation manner, the annotation task type includes a vehicle annotation task, and the target annotation tool corresponding to the vehicle annotation task includes a rectangular frame tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where the marked vehicle is located;
determining an attribute of the vehicle in response to an attribute selection operation based on the rectangular box tool;
the vehicle attributes include at least one of: motor vehicle, non-motor vehicle, stroller.
In a possible implementation manner, the labeling task type of the object to be labeled comprises a license plate labeling task, and the target labeling tool corresponding to the license plate labeling task comprises a polygon tool and a text labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to polygon drawing operation based on the polygon tool, and taking a region surrounded by drawn polygons as a position where a marked license plate is located;
and responding to text information input by a user based on the text labeling tool, and taking the text information as the text of the license plate labeled in the object to be labeled.
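A hypothetical helper combining the polygon and text operations above; the shoelace-area check for degenerate polygons is an added illustrative safeguard, not something the disclosure specifies:

```python
def label_plate(polygon, text):
    """polygon: list of (x, y) vertices in order; text: plate string.
    Uses the shoelace formula to reject zero-area polygons."""
    n = len(polygon)
    area = abs(sum(polygon[i][0] * polygon[(i + 1) % n][1]
                   - polygon[(i + 1) % n][0] * polygon[i][1]
                   for i in range(n))) / 2
    if n < 3 or area == 0:
        raise ValueError("polygon does not enclose a region")
    return {"task": "license_plate", "polygon": polygon, "text": text}
```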
In a possible implementation manner, the labeling task type of the object to be labeled includes a dangerous target labeling task, and the target labeling tool corresponding to the dangerous target labeling task includes a rectangular frame tool and a dangerous target attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular box drawing operation based on the rectangular box tool, taking an area indicated by a drawn rectangular box as a position where a marked dangerous target is located;
responding to an attribute selection operation of an attribute marking tool for the dangerous target, and determining the attribute of the dangerous target marked in the object to be marked;
the dangerous targets include at least one of: a source of danger, a human body with dangerous motion;
the attributes of the hazard source include at least one of: smoke, fire and garbage;
the attribute of the human body of the dangerous action comprises at least one of the following: squatting, sitting, falling and gathering people.
According to an aspect of the present disclosure, a data annotation device is provided, is applied to the intelligent security field, includes:
the system comprises a marked object acquisition unit, a marking unit and a marking unit, wherein the marked object acquisition unit is used for acquiring an object to be marked in an intelligent security scene;
the target marking tool display unit is used for presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and the marking information determining unit is used for responding to the marking operation executed based on the target marking tool to obtain the marking information of the object to be marked.
In one possible implementation manner, the target labeling tool presentation unit includes:
the data field determining unit is used for receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises an intelligent security field;
and the target marking tool display subunit is used for determining a target marking tool for marking the object to be marked in the target data field from the tool set according to the determined target data field, opening the use permission of the target marking tool and displaying the target marking tool on a marking interface for a user to use.
In a possible implementation manner, after obtaining the object to be labeled, the apparatus further includes:
and the first target marking tool determining unit is used for responding to the selection operation of the user on the plurality of marking tools and taking the marking tool selected by the user as the target marking tool.
In a possible implementation manner, after obtaining the object to be labeled, the apparatus further includes:
the second target labeling tool determining unit is used for identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In a possible implementation manner, the labeling information determining unit is configured to, in response to each step of labeling operation performed based on the target labeling tool, display a labeling result of each step of labeling operation for verification by a labeling person.
In one possible implementation, the apparatus further includes:
the security detection unit is used for performing security detection by utilizing the neural network obtained by the training of the labeling information;
and the new object to be labeled determining unit is used for labeling the image acquired in the security detection as the new object to be labeled.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the labeling information determining unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area indicated by a drawn rectangular frame as the position of a labeled human face; and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In a possible implementation manner, the labeling task type of the object to be labeled comprises a pedestrian attribute labeling task, and the target labeling tool corresponding to the pedestrian attribute labeling task comprises a rectangular frame tool and a pedestrian attribute labeling tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, taking an area indicated by a drawn rectangular frame as a position where a marked pedestrian is located, and displaying the position of the marked pedestrian; and responding to the attribute selection operation of the pedestrian attribute marking tool, determining the multilevel attribute of the pedestrian marked in the object to be marked, and displaying the multilevel attribute of the marked pedestrian.
In one possible implementation manner, the annotation task type includes a vehicle annotation task, and the target annotation tool corresponding to the vehicle annotation task includes a rectangular frame tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area surrounded by the drawn rectangular frame as the position of the marked vehicle; determining an attribute of the vehicle in response to an attribute selection operation based on the rectangular box tool; the vehicle segment attributes include at least one of: automotive, non-automotive, stroller.
In a possible implementation manner, the labeling task type of the object to be labeled comprises a license plate labeling task, and the target labeling tool corresponding to the license plate labeling task comprises a polygon tool and a text labeling tool;
the marking information determining unit is used for responding to the polygon drawing operation based on the polygon tool and taking the area surrounded by the drawn polygons as the position of the marked license plate; and responding to text information input by a user based on the text labeling tool, and taking the text information as the text of the license plate labeled in the object to be labeled.
In a possible implementation manner, the labeling task type of the object to be labeled includes a dangerous target labeling task, and the target labeling tool corresponding to the dangerous target labeling task includes a rectangular frame tool and a dangerous target attribute labeling tool;
the marking information determination unit is used for responding to a rectangular frame drawing operation based on the rectangular frame tool, and taking an area indicated by a drawn rectangular frame as a position where a marked dangerous target is located; responding to an attribute selection operation of an attribute marking tool for the dangerous target, and determining the attribute of the dangerous target marked in the object to be marked;
the dangerous targets include at least one of: a source of danger, a human body with dangerous motion;
the attributes of the hazard source include at least one of: smoke, fire and garbage;
the attribute of the human body of the dangerous action comprises at least one of the following: squatting, sitting, falling and gathering people.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, after an object to be labeled in an intelligent security scene is acquired, a target labeling tool corresponding to a labeling task type of the object to be labeled in a plurality of labeling tools is presented to a user interaction interface, and labeling information of the object to be labeled is obtained in response to a labeling operation executed based on the target labeling tool. Therefore, a labeling tool can be provided quickly and accurately according to the labeling task type, convenience is high, and data labeling efficiency in the intelligent security field is improved. In addition, a data annotation scheme of the system can be provided for intelligent security enterprises, the requirement of the intelligent security field on data annotation in research and development is met, and development resources input in the research and development process are reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow chart of a data annotation method according to an embodiment of the disclosure.
FIG. 2 shows a block diagram of a data annotation device according to an embodiment of the disclosure.
Fig. 3 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, as artificial intelligence technology continues to merge with traditional security, the data labeling schemes required for training computer vision models are scattered, and the labeling scheme differs from one security task to another. Across the security industry, each provider's business is relatively narrow, so its data labeling scheme is often too limited and insufficiently systematic: for example, some businesses may use one or two data labeling tools for scenarios such as face recognition or vehicle recognition, but these do not cover all the data labeling tools needed across the security field for scenarios such as face recognition, license plate detection, and video analysis. As a result, because the labeling method does not match the specific application scenario, enterprises often invest excessive development resources during research and development, which wastes resources.
Based on the above problems in practice, the embodiments of the present disclosure provide a data annotation scheme, which can be applied to annotation tasks in various scenarios and obtain annotation information. The method can be widely applied to the requirements of the intelligent security field on the labels, and saves time and labor.
The data labeling method provided by the embodiment of the disclosure can label samples for neural network training in the field of intelligent security, can train the neural network by using labeling information obtained by labeling, and can realize intelligent security service based on the trained neural network. The specific annotation scenario may include, for example: face recognition, pedestrian attribute recognition, vehicle recognition, license plate recognition, hazard source recognition, action recognition, and the like.
In a possible implementation manner, the data annotation method may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
For convenience of description, in one or more embodiments of this specification, the execution subject of the data labeling method may be an annotation platform, and the implementation of the method is described below taking the annotation platform as the execution subject as an example. It is understood that implementing the method on the annotation platform is only an exemplary illustration and should not be construed as limiting the method.
The labeling method of the embodiment of the present disclosure can also be applied to other technical fields related to various labeling task types, and the embodiment of the present disclosure does not specifically limit this. The following description mainly takes the intelligent security field as a target field.
Fig. 1 shows a flowchart of a data annotation method according to an embodiment of the present disclosure, and as shown in fig. 1, the data annotation method includes:
in step S11, an object to be labeled in the smart security scene is obtained
The user can load the object to be labeled through a menu of the user interaction interface, or drag it directly into a designated area of the interface; loading the object to be labeled completes its acquisition.
The object to be labeled can be one or more images, or a single frame or multiple frames of a video. In the subsequent steps, the labeling operation is performed on the target object within the object to be labeled.
The intelligent security scene may be, for example, a scene in which verification data of a verified object is acquired and verified, after which the relevant authority is granted to the verified object; typical intelligent security scenes include access control systems, parking lot management systems, and the like. The "verified object" may be a person, a car, a pet, etc.; acquisition includes active input by the verified object or active collection by an intelligent device; and the verification data includes biometric data such as a user's face, voice, or fingerprint, a car's license plate, a picture of a pet, etc.
In an intelligent security scene, verification data of a verified object is usually recognized first, and then the recognized verification data is verified, so that in a sample labeling stage, a target object to be labeled is the verification data to be recognized. For example, in the case that the object to be labeled is an image, the verification data to be identified in the image may be: vehicles, license plates, faces, pedestrians, objects, etc. In addition, target objects to be labeled in the objects to be labeled in different application scenes can be different.
In step S12, a target annotation tool corresponding to the annotation task type of the object to be annotated in the plurality of annotation tools is presented to the user interaction interface.
After the object to be labeled is obtained, the user can determine the labeling task type. The labeling task types here may include: a face labeling task, a pedestrian attribute labeling task, a vehicle labeling task, a license plate labeling task, a dangerous target labeling task, and the like. According to the determined labeling task type, a suitable labeling tool is selected as the target labeling tool.
Each type of annotation task may correspond to at least one target annotation tool. The target labeling tool herein may be a control for labeling a target object in an object to be labeled, such as: polygon tools, rectangle box tools, keypoint tools, attribute tools, text annotation tools, and the like.
The controls may be presented in pre-set locations in the user interface.
In step S13, in response to the labeling operation performed based on the target labeling tool, the labeling information of the object to be labeled is obtained.
The user performs a labeling operation with the target labeling tool presented in step S12, according to the labeling task, on the object to be labeled obtained in step S11, and labels the target object. In the labeling operation, the user may use the target labeling tool to place points that identify the target object and the connecting lines between those points, determine the position or range of the labeled object from the points and lines, and label the object's attributes. After the labeling operation is completed, labeling information such as the position, range, and attributes of the labeled object is obtained.
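The points-and-lines description above can be sketched as a hypothetical helper that derives a position (bounding box) and range from user-placed points; the record fields are assumptions of this sketch:

```python
def annotation_from_points(points, attributes=None):
    """points: (x, y) pairs placed with a target tool. Returns the
    position (axis-aligned bounding box), the range (the outline of
    placed points), and any labeled attributes."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return {"position": (min(xs), min(ys), max(xs), max(ys)),
            "range": list(points),
            "attributes": attributes or {}}
```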
In the embodiment of the disclosure, after an object to be labeled in an intelligent security scene is acquired, a target labeling tool corresponding to a labeling task type of the object to be labeled in a plurality of labeling tools is presented to a user interaction interface, and labeling information of the object to be labeled is obtained in response to a labeling operation executed based on the target labeling tool. Therefore, a labeling tool can be provided quickly and accurately according to the labeling task type, convenience is high, and data labeling efficiency in the intelligent security field is improved. In addition, a data annotation scheme of the system can be provided for intelligent security enterprises, the requirement of the intelligent security field on data annotation in research and development is met, and development resources input in the research and development process are reduced.
In a possible implementation manner, the presenting, to a user interaction interface, a target labeling tool corresponding to the labeling task type of the object to be labeled among the plurality of labeling tools includes: receiving a data field selection instruction input by a user, where the data field selection instruction is used to determine a target data field to which the object to be labeled belongs, the target data field including the intelligent security field; and according to the determined target data field, determining, from a tool set, a target labeling tool for labeling the object to be labeled in the target data field, opening the use permission of the target labeling tool, and displaying the tool on a labeling interface for the user to use.
The data labeling method provided by the present disclosure is applicable to labeling tasks in various data fields. A data field may be any field to which artificial intelligence technology is applied, such as intelligent automobiles, intelligent retail, or intelligent security. For a given data field, the labeling tools to be used are often fixed; therefore, in this implementation, the target labeling tool for labeling objects in the target data field can be determined from the tool set according to the target data field to which the object to be labeled belongs, improving labeling efficiency.
The target data field can be determined based on a data field selection instruction input by the user, so that the user can select the target data field to which the object to be labeled belongs. For example, when the object to be labeled belongs to the intelligent security field, the user may select the intelligent security field on the user operation interface.
In this implementation, the tool set may include a plurality of tools, such as a line tool, a polygon tool, a rectangular box tool, and an attribute labeling tool. To determine the target labeling tool according to the data field, a correspondence between data fields and labeling tools may be established in advance; that is, the labeling tool corresponding to each data field is determined beforehand. After the target data field is determined, the labeling tool corresponding to it can then be selected from the tool set as the target labeling tool according to this correspondence.
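As an illustrative sketch, the pre-established correspondence between data fields and labeling tools can be modeled as a simple lookup table. All field names, tool names, and the function below are hypothetical, not part of any real interface:

```python
# Hypothetical correspondence between data fields and labeling tools;
# every name here is illustrative, not from a real product API.
FIELD_TOOLS = {
    "intelligent_security": ["rectangular_box", "polygon", "key_point", "attribute", "text"],
    "intelligent_retail": ["rectangular_box", "attribute"],
    "intelligent_automobile": ["rectangular_box", "polygon", "line"],
}

def target_tools_for_field(field):
    """Return the labeling tools whose use permission should be opened
    for the given target data field; empty list if the field is unknown."""
    return FIELD_TOOLS.get(field, [])
```

On receiving the user's data field selection instruction, the labeling interface would open permission for exactly the tools this lookup returns.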
For the determined target labeling tool, its use permission is opened and the tool is displayed on the labeling interface, so that the annotator can perform the labeling task with it.
In the embodiment of the present disclosure, the target data field is determined based on a data field selection instruction input by the user, and the target labeling tool for labeling the object to be labeled in that field is then determined from the tool set for the user to use. The labeling tool required for the labeling task can thus be provided to the user quickly, greatly reducing labeling time and labor cost and improving the convenience of the labeling tool.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes: in response to a user's selection operation on the plurality of labeling tools, taking the labeling tool selected by the user as the target labeling tool.
After the object to be labeled is obtained, the user can select a target labeling tool from the plurality of labeling tools according to the labeling task type. For example, when the labeling task type is vehicle identification, a rectangular box tool can be selected to box-annotate the target object.
In a possible implementation manner, after the object to be labeled is obtained, the method further includes: identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining, according to a preset correspondence between labeling task types and labeling tools, the labeling tool corresponding to the target labeling task type as the target labeling tool.
In this implementation, the target labeling task type can be determined according to the application scenario of the labeling task set by the user; alternatively, it can be determined from the content of the image or video to be labeled based on image recognition technology. As a first example, if the application scenario set by the user is identifying license plate numbers, the labeling task type can be determined to be license plate labeling. As a second example, when a human face is detected in the object (image) to be labeled, the labeling task type can be determined to be face recognition.
In a possible implementation manner, an "annotation task" menu may also be displayed in a tab of the user interaction interface, and the annotation task type the user selects from that menu is then received as the target annotation task type.
To adapt to different labeling task types, a correspondence between target labeling tools and labeling task types can be established in advance. Based on this correspondence, the target labeling tool corresponding to the obtained target labeling task type can be determined and presented on the user interaction interface.
Illustratively, a correspondence is established between the polygon tool plus the text labeling tool and the license plate labeling task. When the determined target labeling task type is license plate labeling, the polygon tool and the text labeling tool are presented in the user interaction interface. The user can outline the license plate characters with the polygon tool and then label the specific text of the outlined characters with the text labeling tool, completing the labeling of the license plate.
After the labeling task type is determined, the corresponding labeling tool can be obtained directly. This saves the user the time of selecting a tool from many, and the presented tools themselves prompt the user on how to label, simplifying the labeling process.
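The task-type-to-tool correspondence can be sketched the same way as the field correspondence, mirroring the task examples given later in this disclosure. The task and tool names below are hypothetical:

```python
# Hypothetical pre-established correspondence between labeling task types
# and target labeling tools; names are illustrative only.
TASK_TOOLS = {
    "face_labeling": ["rectangular_box", "key_point"],
    "pedestrian_attribute": ["rectangular_box", "attribute"],
    "vehicle_labeling": ["rectangular_box"],
    "license_plate": ["polygon", "text"],
    "danger_target": ["rectangular_box", "attribute"],
}

def present_tools(task_type):
    """Resolve the target labeling tools to present on the interaction interface."""
    if task_type not in TASK_TOOLS:
        raise KeyError("no tools registered for task type: " + task_type)
    return TASK_TOOLS[task_type]
```

For instance, resolving the license plate task yields the polygon and text tools, matching the license plate example above.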
In a possible implementation manner, the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to each step of labeling operation performed based on the target labeling tool, displaying the labeling result of that step for verification by the annotator.
While the user labels the target object, each labeling operation can be presented in the user interaction interface, so that the user can verify the labeling result at every step and modify any erroneous annotation.
Each labeling operation here can be understood as follows: when a user labels with the target labeling tool, multiple operations occur during the process. For example, drawing a polygon to represent the target object requires determining the positions of several points, and determining each point counts as one operation. If, during labeling, the user is not satisfied with the point just placed, only that point needs to be modified; the points determined before it are retained.
It can also be understood that when the user labels the target object with more than one target labeling tool, the use of each tool counts as one operation. For example, in a person attribute labeling task, identifying the target object with the rectangular box tool is one operation, and selecting the target object's attribute with the attribute tool is another. If the user considers the attribute label wrong, only the attribute needs to be modified; the drawn rectangular box is retained.
Verifying each operation improves labeling accuracy, avoids restarting from the first step because a final attribute is wrong or missing, and improves labeling efficiency.
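The step-level correction described above can be sketched as an operation list in which only the faulty step is replaced. This is a minimal illustration; the class and the operation encoding are hypothetical:

```python
class AnnotationSession:
    """Records each labeling operation separately, so a wrong step can be
    corrected without restarting the whole annotation."""

    def __init__(self):
        self.operations = []  # e.g. ("point", (x, y)) or ("attribute", "male")

    def record(self, op):
        # Each recorded result would be displayed for the annotator to verify.
        self.operations.append(op)
        return op

    def amend_last(self, op):
        """Replace only the most recent operation; earlier steps are kept."""
        self.operations[-1] = op
```

Amending the last point of a polygon, for example, leaves every previously placed point intact, exactly as described above.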
In a possible implementation manner, after the labeling information of the object to be labeled is obtained, the method further includes: performing security detection using a neural network trained with the labeling information; and labeling images acquired during the security detection as new objects to be labeled.
After data labeling is performed according to the data labeling method provided by the present disclosure, a neural network for security detection can be trained with the obtained labeling information. The trained neural network can then perform security detection tasks, and the images acquired while performing those tasks can serve as new objects to be labeled. These new objects are labeled in turn; for the specific labeling method, refer to the possible implementations of the data labeling method provided by the present disclosure.
In this way, the labeling link can be connected with the services of the intelligent security application to form a closed loop in which labeled data supports the services and service data flows back for labeling, improving the accuracy of intelligent security detection results.
Next, possible implementations of the data annotation method according to the embodiments of the present disclosure are exemplarily described according to the types of the plurality of annotation tasks.
Most security data comes from video captured by cameras. Considering that developing a video labeling tool is relatively time-consuming, that training annotators for video labeling is harder than for image labeling, that the types a video labeling tool can label are limited, and that combinations of the rectangular box tool and other image labeling tools can accomplish most image labeling tasks on the market, the video can be split into frames to obtain images to be labeled. In summary, the object to be labeled in the present disclosure may be an image, and the following description of the labeling method of the embodiments of the present disclosure takes an image as an example.
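Decoding the frames themselves would normally be done with a video library such as OpenCV; the sketch below (hypothetical function, not from any real API) only computes which frame indices to keep when downsampling a video into images to be labeled:

```python
def frame_indices(total_frames, src_fps, sample_fps):
    """Indices of the frames to keep when sampling a video at sample_fps
    (sample_fps is assumed to be no greater than src_fps)."""
    step = src_fps / sample_fps  # keep one frame out of every `step`
    return [int(i * step) for i in range(int(total_frames / step))]
```

For a 100-frame clip recorded at 25 fps, sampling at 5 fps keeps every fifth frame, giving twenty images to be labeled.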
In one possible implementation manner, the labeling task type includes a face labeling task, and the target labeling tool corresponding to the face labeling task includes a rectangular box tool and a key point labeling tool; the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to a rectangular box drawing operation based on the rectangular box tool, taking the region indicated by the drawn rectangular box as the position of the labeled face; and in response to a key point labeling operation based on the key point labeling tool, determining the facial key points in the labeled face.
In the embodiment of the present disclosure, a correspondence between the rectangular box tool plus the key point labeling tool and the face labeling task may be established in advance. Based on this correspondence, when the labeling task type is the face labeling task, it can be determined that the target labeling tools corresponding to the task include a rectangular box tool and a key point labeling tool, and these tools are presented on the user interface. The user can select the rectangular box tool, click the left mouse button to create a starting point, box-select the face so that the rectangular box is tangent to the edges of the face, and click the left mouse button again to finish drawing the rectangular box, thereby determining the position of the face.
Then, the key point labeling tool is opened and the labeling precision is set, for example the number of facial key points (5, 21, or 106 points). The key point labeling tool is used to mark the positions of the facial features within the drawn rectangular box.
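A face annotation produced this way can be represented as one record combining the rectangular box with the key points. The record layout below is hypothetical; it also checks the precision settings mentioned above:

```python
VALID_FACE_POINT_COUNTS = {5, 21, 106}  # precision settings mentioned above

def face_label(box, keypoints):
    """Combine a face bounding box (x1, y1, x2, y2) with facial key points."""
    if len(keypoints) not in VALID_FACE_POINT_COUNTS:
        raise ValueError("unsupported key point precision")
    x1, y1, x2, y2 = box
    # every key point should fall inside the drawn rectangle
    if not all(x1 <= x <= x2 and y1 <= y <= y2 for x, y in keypoints):
        raise ValueError("key points must lie inside the drawn rectangle")
    return {"box": box, "keypoints": keypoints}
```

Keeping the box and key points in one record reflects that the key points are placed within the face already framed by the rectangle.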
By the method, the labeling information of the face can be obtained after the position of the face is labeled, and the efficiency of face labeling is improved.
In a possible implementation manner, the labeling task type of the object to be labeled includes a pedestrian attribute labeling task, and the target labeling tool corresponding to the pedestrian attribute labeling task includes a rectangular box tool and a pedestrian attribute labeling tool; the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to a rectangular box drawing operation based on the rectangular box tool, taking the region indicated by the drawn rectangular box as the position of the labeled pedestrian, and displaying that position; and in response to an attribute selection operation of the pedestrian attribute labeling tool, determining the multilevel attributes of the pedestrian labeled in the object to be labeled, and displaying those multilevel attributes.
In the embodiment of the present disclosure, a corresponding relationship between a "rectangular box tool and an attribute labeling tool" and a pedestrian attribute labeling task may be established in advance. Based on the corresponding relation, when the type of the labeling task is the pedestrian attribute labeling task, it can be determined that the target labeling tool corresponding to the pedestrian attribute labeling task comprises a rectangular box tool and an attribute labeling tool, and the tools are presented in the user interface.
After selecting the rectangular box tool, the user clicks the left mouse button to create a starting point, box-selects the pedestrian in the image, and clicks the left mouse button again to finish the labeling operation. After the rectangular box is drawn, its top, bottom, left, and right sides are tangent to the pedestrian. Taking a rectangular coordinate system as an example: among the points indicating the pedestrian's position, the point with the largest y value lies on the top side of the rectangular box; the point with the smallest y value lies on the bottom side; the point with the largest x value lies on the right side; and the point with the smallest x value lies on the left side.
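The tangency condition just described is exactly the smallest axis-aligned rectangle over the points indicating the pedestrian's position. A minimal sketch:

```python
def tight_box(points):
    """Smallest axis-aligned rectangle containing all points: the minimum and
    maximum x values fall on the left and right sides, and the minimum and
    maximum y values on the bottom and top sides, as described above."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))  # (left, bottom, right, top)
```

Any point set yields one unique tight box, so a box drawn this way is fully determined by the pedestrian's outline.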
Then, the attribute labeling tool is used to select the pedestrian's attributes. A single level of attributes may be set here, such as male/female. In some scenarios, multi-level attributes may also be set: for example, a primary attribute "age" with secondary attributes such as child, teenager, middle-aged, and elderly. Once the user has selected the attributes, the labeling of the pedestrian is complete.
By the method, the positions and attributes of the pedestrians can be labeled, the labeling information of the pedestrians is obtained, and the efficiency of pedestrian labeling is improved.
In one possible implementation manner, the labeling task type includes a vehicle labeling task, and the target labeling tool corresponding to the vehicle labeling task includes a rectangular box tool; the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to a rectangular box drawing operation based on the rectangular box tool, taking the area enclosed by the drawn rectangular box as the position of the labeled vehicle; and determining an attribute of the vehicle in response to an attribute selection operation based on the rectangular box tool; the vehicle attributes include at least one of: motor vehicle, non-motor vehicle, stroller.
In the embodiment of the present disclosure, the corresponding relationship between the rectangular box tool and the vehicle labeling task may be established in advance. Based on the corresponding relation, when the annotation task type is the vehicle annotation task, it can be determined that the target annotation tool corresponding to the vehicle annotation task comprises a rectangular box tool, and the tools are presented in the user interface.
The user can select the rectangular box tool, click the left mouse button to create a starting point, box-select the vehicle, and click the left mouse button again to finish drawing the rectangular box. During drawing, the rectangular box can be kept tangent to the outline of the vehicle so as to determine the vehicle's position.
Then, attributes are selected for the drawn rectangular box; the attributes may include, for example, motor vehicle, non-motor vehicle, stroller, and the like.
It should be noted that, in the embodiment of the present disclosure, the attribute may also be selected directly with the rectangular box tool: the tool may be provided with a first-level attribute selection control, so that the attribute in the object to be labeled is selected with a rectangular box that carries the control. In addition, the order of the attribute selection and box selection operations is not limited in the embodiments of the present disclosure.
By the method, the vehicle can be marked to obtain marking information of the vehicle, and vehicle marking efficiency is improved.
In a possible implementation manner, the labeling task type of the object to be labeled includes a license plate labeling task, and the target labeling tool corresponding to the license plate labeling task includes a polygon tool and a text labeling tool; the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to a polygon drawing operation based on the polygon tool, taking the region enclosed by the drawn polygon as the position of the labeled license plate; and in response to text information input by the user based on the text labeling tool, taking that text information as the text of the license plate labeled in the object to be labeled.
In the embodiment of the disclosure, the corresponding relationship between the polygon tool and the text marking tool and the license plate marking task can be pre-established. Based on the corresponding relation, when the type of the labeling task is the license plate labeling task, the target labeling tool corresponding to the license plate labeling task can be determined to comprise a polygon tool and a text labeling tool, and the tools are presented in a user interface.
The user uses the polygon tool to trace the boundary of the license plate text region in a certain order (e.g., clockwise); the boundary is defined by points placed with mouse clicks and the connecting lines between them. The user clicks the left mouse button to add a point and the right mouse button to add the last point, which is automatically connected to the starting point to form a polygon. The polygon's boundary is tangent to the edges of the license plate characters, and the position and coordinates of the characters can be obtained from it. The user then labels the specific text of the characters with the text labeling tool.
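The automatic closing of the polygon and the resulting label record can be sketched as follows (the record layout and function name are hypothetical; the plate text is an example):

```python
def plate_label(vertices, text):
    """A license plate label: polygon boundary plus the transcribed text.
    The last clicked vertex is automatically connected back to the start."""
    if len(vertices) < 3:
        raise ValueError("a polygon needs at least three vertices")
    # pair each vertex with the next one, wrapping the last back to the first
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    return {"polygon": vertices, "edges": edges, "text": text}
```

The final edge returned always joins the last clicked point to the starting point, mirroring the right-click behavior described above.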
In the embodiment of the present disclosure, because combinations of Chinese characters and letters have irregular boundaries, a box-drawing tool would segment either more or less than the plate number itself. The present disclosure therefore uses the polygon tool, with which polygons can be drawn freely to segment the characters on the plate, improving the accuracy of the labeling information. For text that is clearly imaged or regularly arranged, a rectangular box tool can be used instead to improve labeling efficiency.
In a possible implementation manner, the labeling task type of the object to be labeled includes a dangerous target labeling task, and the target labeling tool corresponding to the dangerous target labeling task includes a rectangular box tool and a dangerous target attribute labeling tool; the obtaining, in response to a labeling operation performed based on the target labeling tool, labeling information of the object to be labeled includes: in response to a rectangular box drawing operation based on the rectangular box tool, taking the region indicated by the drawn rectangular box as the position of the labeled dangerous target; and in response to an attribute selection operation of the attribute labeling tool for the dangerous target, determining the attribute of the dangerous target labeled in the object to be labeled; the dangerous targets include at least one of: a danger source, a human body performing a dangerous action; the attributes of the danger source include at least one of: smoke, fire, and garbage; the attributes of the human body performing a dangerous action include at least one of: squatting, sitting, falling, and crowd gathering.
In the embodiment of the present disclosure, a correspondence between the rectangular box tool plus the attribute labeling tool and the dangerous target labeling task may be established in advance. Based on this correspondence, when the labeling task type is the dangerous target labeling task, it can be determined that the target labeling tools corresponding to the task include a rectangular box tool and an attribute labeling tool, and these tools are presented on the user interface.
The dangerous target labeling task may include identifying dangerous objects (e.g., smoke and fire) or dangerous behavior (e.g., a pedestrian falling).
For example, the user may select the rectangular box tool, click the left mouse button to create a starting point, box-select smoke and fire burning on the ground, and click the left mouse button again to finish drawing the rectangular box, so that the box is tangent to the outline of the smoke and fire and determines its position and coordinates.
Then, the attribute labeling tool is used to select attributes for the dangerous target in the rectangular box: the attributes of a danger source may be smoke, fire, garbage, and the like, while the attributes of a human body performing a dangerous action include squatting, falling, crowd gathering, and the like.
It should be noted that, in the embodiment of the present disclosure, the position of the dangerous target may be marked with the rectangular box tool first and its attribute selected afterwards; alternatively, a default attribute of the rectangular box may be selected first, and after the box is drawn with the rectangular box tool, its attribute may be modified as needed.
By the method, the dangerous target can be marked, marking information of the dangerous target is obtained, and efficiency of marking the dangerous target is improved.
The data labeling method of the embodiments of the present disclosure can also be applied to other labeling scenarios; the labeling concept is the same as in the implementations above and, for reasons of space, the scenarios are not introduced one by one. In addition, the same labeling task may have several suitable labeling tools. The embodiments of the present disclosure can therefore give different labeling schemes and provide multiple alternative labeling tools, so that annotators can adopt or discard tools according to different production conditions when building the toolset, thereby maximizing labeling efficiency and accelerating research and development.
The present disclosure provides a systematic data labeling solution for the security industry. With this detailed and complete solution, a business team can quickly build the data support team it needs. The detailed labeling processes listed in one or more implementations of the present disclosure can guide a labeling team in training annotators, reducing the time and effort spent rebuilding a process system. Following the tool selection logic set forth in the present disclosure, a development team can reasonably order its tool development work according to actual business conditions, which helps optimize resource allocation, reduce cost, and improve efficiency.
It is understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from their principles and logic; for reasons of space, the details are omitted from the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a data labeling apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any data labeling method provided by the present disclosure. For the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 2 shows a block diagram of a data annotation device according to an embodiment of the present disclosure, and as shown in fig. 2, the device 20 is applied to the field of intelligent security, and includes:
a labeled object obtaining unit 201, configured to obtain an object to be labeled in an intelligent security scene;
a target labeling tool display unit 202, configured to present, to a user interaction interface, a target labeling tool corresponding to the labeling task type of the object to be labeled in the multiple labeling tools;
a labeling information determining unit 203, configured to obtain labeling information of the object to be labeled in response to a labeling operation performed based on the target labeling tool.
In a possible implementation manner, the target labeling tool display unit 202 includes:
the data field determining unit is used for receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises an intelligent security field;
and the target labeling tool display subunit is configured to determine, from the tool set according to the determined target data field, a target labeling tool for labeling the object to be labeled in the target data field, open the use permission of the target labeling tool, and display the tool on a labeling interface for the user to use.
In a possible implementation manner, after obtaining the object to be labeled, the apparatus further includes:
and the first target marking tool determining unit is used for responding to the selection operation of the user on the plurality of marking tools and taking the marking tool selected by the user as the target marking tool.
In a possible implementation manner, after obtaining the object to be labeled, the apparatus further includes:
the second target labeling tool determining unit is used for identifying the labeling task type of the object to be labeled to obtain a target labeling task type; and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
In a possible implementation manner, the labeling information determining unit is configured to, in response to each step of labeling operation performed based on the target labeling tool, display a labeling result of each step of labeling operation for verification by a labeling person.
In one possible implementation, the apparatus further includes:
the security detection unit is used for performing security detection by utilizing the neural network obtained by the training of the labeling information;
and the new object to be labeled determining unit is used for labeling the image acquired in the security detection as the new object to be labeled.
In one possible implementation manner, the annotation task type includes a face annotation task, and the target annotation tool corresponding to the face annotation task includes a rectangular frame tool and a key point annotation tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, take a region indicated by a drawn rectangular frame as a position where a labeled human face is located; and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
In a possible implementation manner, the labeling task type of the object to be labeled includes a pedestrian attribute labeling task, and the target labeling tool corresponding to the pedestrian attribute labeling task includes a rectangular box tool and a pedestrian attribute labeling tool;
the labeling information determining unit 203, configured to, in response to a rectangular frame drawing operation based on the rectangular frame tool, take an area indicated by a drawn rectangular frame as a position where a labeled pedestrian is located, and display the position of the labeled pedestrian; and responding to the attribute selection operation of the pedestrian attribute marking tool, determining the multilevel attribute of the pedestrian marked in the object to be marked, and displaying the multilevel attribute of the marked pedestrian.
In one possible implementation manner, the annotation task type includes a vehicle annotation task, and the target annotation tool corresponding to the vehicle annotation task includes a rectangular frame tool;
the labeling information determining unit 203 is configured to: in response to a rectangular box drawing operation based on the rectangular box tool, take the area enclosed by the drawn rectangular box as the position of the labeled vehicle; and determine an attribute of the vehicle in response to an attribute selection operation based on the rectangular box tool; the vehicle attributes include at least one of: motor vehicle, non-motor vehicle, stroller.
In a possible implementation manner, the labeling task type of the object to be labeled includes a license plate labeling task, and the target labeling tool corresponding to the license plate labeling task includes a polygon tool and a text labeling tool;
the labeling information determining unit 203, configured to: in response to a polygon drawing operation based on the polygon tool, take the region enclosed by the drawn polygon as the position where the labeled license plate is located; and in response to text information input by a user based on the text labeling tool, take the text information as the text of the license plate labeled in the object to be labeled.
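The polygon-plus-text combination above can be sketched as follows; this is an illustrative assumption about the data model (the shoelace area computation and the sample plate text are not from the patent):

```python
def polygon_area(points):
    """Shoelace formula for the area enclosed by the drawn polygon."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def label_license_plate(points, text):
    """Combine the polygon position with the typed plate text."""
    if len(points) < 3:
        raise ValueError("a polygon needs at least three vertices")
    return {"polygon": list(points), "area": polygon_area(points), "text": text}

plate = label_license_plate([(0, 0), (4, 0), (4, 4), (0, 4)], "ABC123")
print(plate["area"])  # 16.0
```

A polygon rather than a rectangle suits license plates because they are often photographed at an angle, so the enclosing quadrilateral is not axis-aligned.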
In a possible implementation manner, the labeling task type of the object to be labeled includes a dangerous target labeling task, and the target labeling tool corresponding to the dangerous target labeling task includes a rectangular frame tool and a dangerous target attribute labeling tool;
the labeling information determining unit 203, configured to: in response to a rectangular box drawing operation based on the rectangular box tool, take the region indicated by the drawn rectangular box as the position where the labeled dangerous target is located; and in response to an attribute selection operation based on the dangerous target attribute labeling tool, determine the attribute of the dangerous target labeled in the object to be labeled;
the dangerous target includes at least one of: a hazard source, a human body performing a dangerous action;
the attributes of the hazard source include at least one of: smoke, fire, and garbage;
the attributes of the human body performing the dangerous action include at least one of: squatting, sitting, falling, and crowd gathering.
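For illustration (not part of the disclosure), the type-dependent attribute lists above suggest a validation step in the labeling tool; the key names below are hypothetical:

```python
# Allowed attributes per dangerous-target type, mirroring the lists above.
DANGEROUS_TARGET_ATTRIBUTES = {
    "hazard_source": {"smoke", "fire", "garbage"},
    "dangerous_action": {"squatting", "sitting", "falling", "crowd_gathering"},
}

def label_dangerous_target(box, target_type, attribute):
    """Validate a (type, attribute) selection and return the annotation record."""
    allowed = DANGEROUS_TARGET_ATTRIBUTES.get(target_type)
    if allowed is None:
        raise ValueError(f"unknown dangerous target type: {target_type!r}")
    if attribute not in allowed:
        raise ValueError(f"{attribute!r} is not valid for {target_type!r}")
    return {"box": box, "type": target_type, "attribute": attribute}

record = label_dangerous_target((5, 5, 50, 90), "dangerous_action", "falling")
print(record["attribute"])  # falling
```

Tying the attribute vocabulary to the target type prevents, for example, labeling a hazard source as "falling".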
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the method embodiments above. For specific implementation, reference may be made to the description of those method embodiments; for brevity, it is not repeated here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code; when the computer readable code runs in a processor of an electronic device, the processor executes the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. It may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open source Unix-like operating system (Linux™), the open source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may be personalized by utilizing state information of the computer-readable program instructions; this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A data labeling method is characterized by being applied to the field of intelligent security and protection and comprising the following steps:
acquiring an object to be marked in an intelligent security scene;
presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and responding to the labeling operation executed based on the target labeling tool to obtain the labeling information of the object to be labeled.
2. The method of claim 1, wherein the presenting, to the user interaction interface, a target annotation tool corresponding to the annotation task type of the object to be annotated in the plurality of annotation tools comprises:
receiving a data field selection instruction input by a user, wherein the data field selection instruction is used for determining a target data field to which an object to be labeled belongs, and the target data field comprises an intelligent security field;
and according to the determined target data field, determining, from the tool set, a target labeling tool for labeling the object to be labeled in the target data field, enabling the use permission of the target labeling tool, and displaying the target labeling tool on a labeling interface for the user to use.
3. The method according to claim 1 or 2, wherein after obtaining the object to be labeled, the method further comprises:
and responding to the selection operation of the user on the plurality of marking tools, and taking the marking tool selected by the user as a target marking tool.
4. The method according to claim 1 or 2, wherein after obtaining the object to be labeled, the method further comprises:
identifying the labeling task type of the object to be labeled to obtain a target labeling task type;
and determining a marking tool having a corresponding relation with the target marking task type as a target marking tool according to the corresponding relation between the preset marking task type and the marking tool.
5. The method according to any one of claims 1 to 4, wherein the obtaining the labeling information of the object to be labeled in response to the labeling operation performed based on the target labeling tool comprises:
and responding to each step of marking operation executed based on the target marking tool, and displaying a marking result of each step of marking operation for verification of a marking person.
6. The method according to any one of claims 1 to 5, wherein after obtaining the labeling information of the object to be labeled, the method further comprises:
performing security detection by using the neural network obtained by the training of the labeled information;
and marking the image acquired in the security detection as a new object to be marked.
7. The method according to any one of claims 1 to 6, wherein the annotation task type comprises a face annotation task, and the target annotation tool corresponding to the face annotation task comprises a rectangular box tool and a key point annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area indicated by a drawn rectangular frame as a position where a marked human face is located;
and determining the key points of the face in the labeled face in response to the key point labeling operation based on the key point labeling tool.
8. The method according to any one of claims 1 to 7, wherein the labeling task type of the object to be labeled comprises a pedestrian attribute labeling task, and the target labeling tool corresponding to the pedestrian attribute labeling task comprises a rectangular box tool and a pedestrian attribute labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area indicated by a drawn rectangular frame as a position where a marked pedestrian is located, and displaying the position of the marked pedestrian;
and responding to the attribute selection operation of the pedestrian attribute marking tool, determining the multilevel attribute of the pedestrian marked in the object to be marked, and displaying the multilevel attribute of the marked pedestrian.
9. The method according to any one of claims 1 to 8, wherein the annotation task type comprises a vehicle annotation task, and the target annotation tool corresponding to the vehicle annotation task comprises a rectangular box tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular frame drawing operation based on the rectangular frame tool, taking an area surrounded by the drawn rectangular frame as a position where the marked vehicle is located;
determining an attribute of the vehicle in response to an attribute selection operation based on the rectangular box tool;
the vehicle attributes include at least one of: motor vehicle, non-motor vehicle, stroller.
10. The method according to any one of claims 1 to 9, wherein the labeling task type of the object to be labeled comprises a license plate labeling task, and the target labeling tool corresponding to the license plate labeling task comprises a polygon tool and a text labeling tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
responding to polygon drawing operation based on the polygon tool, and taking a region surrounded by drawn polygons as a position where a marked license plate is located;
and responding to text information input by a user based on the text labeling tool, and taking the text information as the text of the license plate labeled in the object to be labeled.
11. The method according to any one of claims 1 to 10, wherein the annotation task type of the object to be annotated comprises a dangerous target annotation task, and the target annotation tool corresponding to the dangerous target annotation task comprises a rectangular box tool and a dangerous target attribute annotation tool;
the obtaining of the labeling information of the object to be labeled in response to the labeling operation executed based on the target labeling tool includes:
in response to a rectangular box drawing operation based on the rectangular box tool, taking an area indicated by a drawn rectangular box as a position where a marked dangerous target is located;
and responding to an attribute selection operation based on the dangerous target attribute labeling tool, and determining the attribute of the dangerous target labeled in the object to be labeled;
the dangerous targets include at least one of: a source of danger, a human body with dangerous motion;
the attributes of the hazard source include at least one of: smoke, fire and garbage;
the attribute of the human body of the dangerous action comprises at least one of the following: squatting, sitting, falling and gathering people.
12. A data labeling device, characterized by being applied to the intelligent security field, comprising:
the system comprises a marked object acquisition unit, a marking unit and a marking unit, wherein the marked object acquisition unit is used for acquiring an object to be marked in an intelligent security scene;
the target marking tool display unit is used for presenting a target marking tool corresponding to the marking task type of the object to be marked in the plurality of marking tools to a user interaction interface;
and the marking information determining unit is used for responding to the marking operation executed based on the target marking tool to obtain the marking information of the object to be marked.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 11.
14. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 11.
CN202110703208.XA 2021-06-24 2021-06-24 Data labeling method and device, electronic equipment and storage medium Pending CN113392263A (en)

Publications (1)

Publication Number Publication Date
CN113392263A 2021-09-14


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829435A (en) * 2018-06-19 2018-11-16 数据堂(北京)科技股份有限公司 A kind of image labeling method and general image annotation tool
CN108875020A (en) * 2018-06-20 2018-11-23 第四范式(北京)技术有限公司 For realizing the method, apparatus, equipment and storage medium of mark
CN109068105A (en) * 2018-09-20 2018-12-21 王晖 A kind of prison video monitoring method based on deep learning
CN109492576A (en) * 2018-11-07 2019-03-19 北京旷视科技有限公司 Image-recognizing method, device and electronic equipment
CN109800737A (en) * 2019-02-02 2019-05-24 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and storage medium
CN110210624A (en) * 2018-07-05 2019-09-06 第四范式(北京)技术有限公司 Execute method, apparatus, equipment and the storage medium of machine-learning process
CN110443109A (en) * 2019-06-11 2019-11-12 万翼科技有限公司 Abnormal behaviour monitor processing method, device, computer equipment and storage medium
CN110458226A (en) * 2019-08-08 2019-11-15 上海商汤智能科技有限公司 Image labeling method and device, electronic equipment and storage medium
CN111309995A (en) * 2020-01-19 2020-06-19 北京市商汤科技开发有限公司 Labeling method and device, electronic equipment and storage medium
CN112131499A (en) * 2020-09-24 2020-12-25 腾讯科技(深圳)有限公司 Image annotation method and device, electronic equipment and storage medium
CN112613668A (en) * 2020-12-26 2021-04-06 西安科锐盛创新科技有限公司 Scenic spot dangerous area management and control method based on artificial intelligence

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108829435A (en) * 2018-06-19 2018-11-16 数据堂(北京)科技股份有限公司 A kind of image labeling method and general image annotation tool
CN108875020A (en) * 2018-06-20 2018-11-23 第四范式(北京)技术有限公司 For realizing the method, apparatus, equipment and storage medium of mark
CN110210624A (en) * 2018-07-05 2019-09-06 第四范式(北京)技术有限公司 Execute method, apparatus, equipment and the storage medium of machine-learning process
CN109068105A (en) * 2018-09-20 2018-12-21 王晖 A kind of prison video monitoring method based on deep learning
CN109492576A (en) * 2018-11-07 2019-03-19 北京旷视科技有限公司 Image-recognizing method, device and electronic equipment
CN109800737A (en) * 2019-02-02 2019-05-24 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and storage medium
WO2020155606A1 (en) * 2019-02-02 2020-08-06 深圳市商汤科技有限公司 Facial recognition method and device, electronic equipment and storage medium
CN110443109A (en) * 2019-06-11 2019-11-12 万翼科技有限公司 Abnormal behaviour monitor processing method, device, computer equipment and storage medium
CN110458226A (en) * 2019-08-08 2019-11-15 上海商汤智能科技有限公司 Image labeling method and device, electronic equipment and storage medium
CN111309995A (en) * 2020-01-19 2020-06-19 北京市商汤科技开发有限公司 Labeling method and device, electronic equipment and storage medium
CN112131499A (en) * 2020-09-24 2020-12-25 腾讯科技(深圳)有限公司 Image annotation method and device, electronic equipment and storage medium
CN112613668A (en) * 2020-12-26 2021-04-06 西安科锐盛创新科技有限公司 Scenic spot dangerous area management and control method based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cai Li et al., "A Survey of Data Annotation Research", Journal of Software (《软件学报》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113970992A (en) * 2021-11-15 2022-01-25 上海闪马智能科技有限公司 Image labeling method, device and system, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN113486765B (en) Gesture interaction method and device, electronic equipment and storage medium
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN111884908B (en) Contact person identification display method and device and electronic equipment
CN113407083A (en) Data labeling method and device, electronic equipment and storage medium
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN112907760A (en) Three-dimensional object labeling method and device, tool, electronic equipment and storage medium
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN112417420A (en) Information processing method and device and electronic equipment
CN110909203A (en) Video analysis method and device, electronic equipment and storage medium
US20220270352A1 (en) Methods, apparatuses, devices, storage media and program products for determining performance parameters
CN114066858A (en) Model training method and device, electronic equipment and storage medium
CN112492201A (en) Photographing method and device and electronic equipment
CN113807253A (en) Face recognition method and device, electronic equipment and storage medium
CN114066856A (en) Model training method and device, electronic equipment and storage medium
CN113392263A (en) Data labeling method and device, electronic equipment and storage medium
CN113705653A (en) Model generation method and device, electronic device and storage medium
CN113128437A (en) Identity recognition method and device, electronic equipment and storage medium
CN112508020A (en) Labeling method and device, electronic equipment and storage medium
CN113869295A (en) Object detection method and device, electronic equipment and storage medium
CN106126104B (en) Keyboard simulation method and device
CN114266305A (en) Object identification method and device, electronic equipment and storage medium
CN114519794A (en) Feature point matching method and device, electronic equipment and storage medium
US20170060822A1 (en) Method and device for storing string
CN114387622A (en) Animal weight recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210914