CN112967359A - Data labeling method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN112967359A
Authority
CN
China
Prior art keywords
data
picture
frame
labeling
marking
Prior art date
Legal status
Granted
Application number
CN202110340495.2A
Other languages
Chinese (zh)
Other versions
CN112967359B (en)
Inventor
李炜
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN202110340495.2A
Publication of CN112967359A
Application granted
Publication of CN112967359B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application belongs to the technical field of artificial intelligence and provides a data labeling method, a device, terminal equipment and a storage medium. The method comprises the following steps: displaying a picture to be labeled; if a preset event aimed at the picture to be labeled is detected, determining a data labeling start point and a data labeling end point according to the preset event; generating and displaying a data labeling frame according to the determined start point and end point; and, if a selection instruction for any data labeling frame is received, labeling the selected frame and generating labeling data corresponding to the picture according to the labeled frame. By monitoring preset events on the picture to be labeled, the method effectively determines the data labeling start and end points and automatically generates the corresponding data labeling frame, and, by labeling only the selected frame, it effectively improves the accuracy of attaching labels to data labeling frames.

Description

Data labeling method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a data annotation method, apparatus, terminal device, and storage medium.
Background
Training an AI algorithm model usually requires a large amount of usable data, so data labeling becomes a crucial step after data acquisition. Picture classification and object detection are especially common data-labeling tasks. For object detection, the core early-stage work is, after a large amount of picture data has been collected, to mark the corresponding labels and classifications in the pictures; the labeled results are then used to generate labeling data for model training.
In the existing data labeling process, pictures are labeled manually, so labeling accuracy is low, which in turn reduces the accuracy of the generated labeling data.
Disclosure of Invention
In view of this, embodiments of the present application provide a data annotation method, an apparatus, a terminal device, and a storage medium, so as to solve the prior-art problem of low labeling accuracy caused by manual data labeling.
A first aspect of an embodiment of the present application provides a data annotation method, including:
displaying a picture to be marked;
if a preset event aiming at the picture to be marked is monitored, determining a data marking starting point and a data marking end point according to the preset event;
generating and displaying a data annotation frame according to the determined data annotation starting point and the determined data annotation end point;
and if a selection instruction for any one of the data labeling frames is received, labeling the selected data labeling frame, and generating labeling data corresponding to the picture to be labeled according to the labeled data labeling frame.
Further, if a preset event for the picture to be labeled is monitored, determining a data labeling start point and a data labeling end point according to the preset event, including:
if a click event aimed at the picture to be labeled is detected, obtaining the coordinate point to which the click event points on the picture to obtain the data labeling start point, and then monitoring for a lift event on the picture, wherein the click event selects a coordinate point on the picture and the lift event releases the selection of a coordinate point on the picture;
if the lift event aimed at the picture to be labeled is detected, obtaining the coordinate point released by the lift event on the picture to obtain the data labeling end point.
Further, after obtaining the coordinate point pointed by the click event on the picture to be labeled and obtaining the data labeling starting point, the method further includes:
monitoring a moving event of the picture to be marked, wherein the moving event is used for moving a coordinate point on the picture to be marked;
if the moving event aimed at the picture to be labeled is detected, acquiring the coordinate point to which the moving event points on the picture to obtain a labeling moving point;
and generating a candidate marking frame according to the marking moving point and the data marking starting point.
Further, the generating and displaying a data annotation frame according to the determined data annotation starting point and the determined data annotation end point includes:
and acquiring the marking moving point corresponding to the data marking end point, and setting the candidate marking frame corresponding to the acquired marking moving point as the data marking frame.
Further, after generating and displaying a data annotation frame according to the determined data annotation starting point and the determined data annotation ending point, the method further includes:
acquiring a labeling area of the data labeling frame on the picture to be labeled, and performing area overlapping detection on the acquired labeling area and a displayed labeling area corresponding to the data labeling frame;
and if an overlapping area exists between the acquired labeling area and the displayed labeling area corresponding to the data labeling frame, deleting the data labeling frame corresponding to the acquired labeling area.
Further, if an overlapping area exists between the obtained labeling area and a labeling area corresponding to a displayed data labeling frame, the method further includes:
if the number of overlapping areas is smaller than a first number threshold and the area of the overlapping areas is larger than an area threshold, issuing a labeling-frame replacement prompt;
and if a confirmation instruction for the labeling-frame replacement prompt is received, deleting the displayed data labeling frame corresponding to the overlapping area, and displaying the generated data labeling frame.
Further, the generating of the labeling data corresponding to the picture to be labeled according to the labeled data labeling frame includes:
respectively acquiring the center point coordinate, picture height, picture width and a preset origin coordinate of the picture to be labeled, and acquiring the label information designated by the label marking;
calculating a coordinate conversion rule according to the acquired center point coordinate, picture height, picture width and preset origin coordinate;
acquiring display coordinates of the data marking frame on the picture to be marked, and performing coordinate conversion on the display coordinates according to the coordinate conversion rule;
and correspondingly storing the display coordinates after coordinate conversion corresponding to the same data labeling frame and the label information to obtain the labeling data.
A second aspect of an embodiment of the present application provides a data annotation device, including:
the picture display unit is used for displaying the picture to be marked;
the event monitoring unit is used for determining a data annotation starting point and a data annotation end point according to a preset event if the preset event aiming at the picture to be annotated is monitored;
the marking frame generating unit is used for generating and displaying a data marking frame according to the determined data marking starting point and the determined data marking end point;
and the marking data generating unit is used for labeling the selected data marking frame and generating marking data corresponding to the picture to be marked according to the labeled data marking frame if a selection instruction aiming at any data marking frame is received.
A third aspect of the embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the data annotation method provided in the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present application provides a storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the data annotation method provided by the first aspect.
The data labeling method, the data labeling device, the terminal equipment and the storage medium have the following beneficial effects:
according to the data labeling method provided by the embodiment of the application, the data labeling starting point and the data labeling end point in the picture to be labeled can be effectively determined by monitoring the preset event of the picture to be labeled, the corresponding data labeling frame can be automatically generated in the picture to be labeled based on the determined data labeling starting point and the determined data labeling end point, the generated data labeling frame is displayed, the generated data labeling frame can be effectively checked conveniently by a user, if a selection instruction for any data labeling frame is received, the data labeling frame is subjected to label labeling, the accuracy of label labeling on the data labeling frame is effectively improved, the phenomenon that the accuracy of data labeling is low due to manual data labeling is prevented, and the accuracy of label data generation is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a data annotation method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating an implementation of a data annotation method according to another embodiment of the present application;
FIG. 3 is a flowchart illustrating an implementation of a data annotation method according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a picture A to be labeled provided in the embodiment of FIG. 3;
fig. 5 is a block diagram illustrating a data annotation device according to an embodiment of the present application;
fig. 6 is a block diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The data annotation method according to the embodiment of the present application may be executed by a control device or a terminal (hereinafter referred to as a "mobile terminal").
Referring to fig. 1, fig. 1 shows a flowchart of an implementation of a data annotation method provided in an embodiment of the present application, including:
and step S10, displaying the picture to be annotated.
The picture to be labeled can be displayed in any application with a picture display function; preferably, in this step, the picture to be labeled is displayed on a canvas tool (e.g., an HTML canvas element).
Specifically, in this step, a zoom instruction or a picture deletion instruction from the user may also be received: the displayed picture to be labeled is zoomed based on a received zoom instruction, or a picture area on the displayed picture is deleted based on a received deletion instruction.
Further, in this step, the picture size of the picture to be labeled is obtained and compared with a preset size, and the picture is automatically zoomed according to the comparison result, so that the display size of the picture to be labeled is set automatically according to the user's size requirement.
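The automatic zoom step above can be sketched as a uniform scale factor that fits the picture into a preset display size; the function name and the rule of never enlarging small pictures are illustrative assumptions, not specified by the patent.

```python
def auto_zoom(pic_w, pic_h, max_w, max_h):
    """Return a uniform scale factor that fits a picture of size
    (pic_w, pic_h) inside a preset display area (max_w, max_h).
    Pictures already smaller than the preset size are left at scale 1.0."""
    scale = min(max_w / pic_w, max_h / pic_h)
    return min(scale, 1.0)

# A 4000x3000 picture shown in a 1280x720 viewport is scaled down;
# a 640x480 picture is displayed as-is.
print(auto_zoom(4000, 3000, 1280, 720))  # 0.24
print(auto_zoom(640, 480, 1280, 720))    # 1.0
```

Using `min` of the two per-axis ratios keeps the aspect ratio intact, so the labeling frames drawn later scale consistently in both directions.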
Furthermore, in this step, if a picture moving instruction for the picture to be labeled is received, determining a region to be moved in the picture to be labeled according to the picture moving instruction, and moving the region to be moved according to moving information carried by the picture moving instruction, where the moving information includes a region moving direction and a moving distance.
Step S20, if a preset event for the picture to be labeled is monitored, determining a data labeling start point and a data labeling end point according to the preset event.
The preset event can be set according to requirements, and is used for triggering the determination operation of the data annotation starting point and the data annotation end point on the picture to be annotated.
For example, when an event such as a click operation, a voice instruction, or a touch instruction for the picture to be annotated is monitored, the data annotation starting point and the data annotation ending point on the picture to be annotated are determined according to the event such as the click operation, the voice instruction, or the touch instruction.
And step S30, generating and displaying a data annotation frame according to the determined data annotation starting point and the determined data annotation end point.
Specifically, the coordinate points of the data labeling start point and end point on the picture to be labeled are obtained as a start coordinate and an end coordinate; the horizontal and vertical distances between the start and end coordinates are calculated to obtain the labeling-frame length and width; and a preset figure is drawn from the data labeling start point towards the data labeling end point according to the calculated length and width, yielding the data labeling frame.
Specifically, in this step, the preset figure may be set as required; for example, it may be a rectangle, an ellipse, a triangle, or any polygon.
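For the rectangular case, the box-generation step above (start point, end point, then length and width) can be sketched as follows; the function name and the (x, y, width, height) layout are illustrative assumptions.

```python
def make_annotation_box(start, end):
    """Build a rectangular annotation box from a labeling start point and a
    labeling end point treated as two opposite corners. Returns
    (x, y, width, height) with (x, y) the top-left corner, so the drag
    direction does not matter."""
    (x1, y1), (x2, y2) = start, end
    width, height = abs(x2 - x1), abs(y2 - y1)
    return (min(x1, x2), min(y1, y2), width, height)

# Dragging from (120, 80) down-right to (200, 140), or the reverse,
# yields the same 80x60 box.
print(make_annotation_box((120, 80), (200, 140)))  # (120, 80, 80, 60)
print(make_annotation_box((200, 140), (120, 80)))  # (120, 80, 80, 60)
```

Normalizing to a top-left corner plus width and height means later steps (display, overlap detection, storage) never have to handle "backwards" drags specially.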
Step S40, if a selection instruction for any of the data labeling frames is received, labeling the selected data labeling frame, and generating labeling data corresponding to the to-be-labeled picture according to the labeled data labeling frame.
The selection instruction is used to select the corresponding data labeling frame and set it to an active state; at this moment, all other data labeling frames on the picture to be labeled are unselected, i.e., in an inactive state.
In this step, setting only the selected data labeling frame to the active state effectively prevents labels from being attached to the other, unselected frames by mistake, and thus improves the accuracy of label marking, where label marking attaches the designated label information to the corresponding active data labeling frame.
Specifically, in this step, the generating, according to the data labeling frame marked by the label, labeling data corresponding to the picture to be labeled includes:
respectively acquiring the center point coordinate, picture height, picture width and a preset origin coordinate of the picture to be labeled, and acquiring the label information designated by the label marking, wherein the center point coordinate is the coordinate of the center of the displayed picture and the preset origin can be set as required. In this step, the upper-left corner of the picture to be labeled is set as the preset origin (0, 0), the center point coordinate is (x1, y1), the acquired picture height is picH and the picture width is picW, and the label information includes preset object categories such as pedestrian, vehicle, animal or building.
Calculating a coordinate conversion rule according to the acquired center point coordinate, picture height, picture width and preset origin coordinate. The picture displayed on the canvas tool uses a coordinate system whose origin is the picture center, while the data labeling operation uses a coordinate system whose origin is the picture's top-left vertex; the coordinate conversion rule therefore converts the data labeling frame's coordinates between the two systems and prevents coordinate errors in the generated labeling data;
acquiring the display coordinates of the data labeling frame on the picture to be labeled, and converting the display coordinates according to the coordinate conversion rule, wherein the rule is as follows: denoting a display coordinate in the center-origin system as (x, y), the converted coordinate (x2, y2) is given by x2 = x + picW/2 and y2 = picH/2 - y.
And correspondingly storing the display coordinates after coordinate conversion corresponding to the same data labeling frame and the label information to obtain the labeling data.
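The conversion and storage steps above can be sketched as follows, assuming the center-origin-to-top-left-origin rule just described; the function names and the record layout are illustrative, not from the patent.

```python
def to_top_left_coords(x, y, pic_w, pic_h):
    """Convert a display coordinate from a center-origin system (y pointing
    up) to a top-left-origin system (y pointing down):
        x2 = x + picW / 2
        y2 = picH / 2 - y
    """
    return (x + pic_w / 2, pic_h / 2 - y)

def make_annotation_record(box_corners, label, pic_w, pic_h):
    """Store the converted corner coordinates of one data labeling frame
    together with its label information, giving one entry of the
    labeling data."""
    converted = [to_top_left_coords(x, y, pic_w, pic_h) for x, y in box_corners]
    return {"label": label, "corners": converted}

# The picture center (0, 0) maps to the middle of an 800x600 picture.
rec = make_annotation_record([(0, 0), (100, 50)], "pedestrian", 800, 600)
print(rec["corners"])  # [(400.0, 300.0), (500.0, 250.0)]
```

Note the y-axis flip: in the center-origin system y increases upward, so a point above the center (positive y) lands closer to the top edge (smaller y2) after conversion.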
Further, in this step, after the tag marking is performed on the selected data tagging box, the method further includes:
acquiring the number of the marked data marking frames marked by the labels on the pictures to be marked;
if the number of labeled frames is greater than or equal to a second number threshold, issuing a stop-labeling prompt for the picture to be labeled. The second number threshold can be set as required and is used to detect whether the number of labeled data labeling frames on the picture exceeds it; when the labeled count reaches the threshold, it is judged that the picture carries too many labeled frames, and the stop-labeling prompt is issued to tell the user to stop generating data labeling frames and/or labeling the picture;
specifically, in this step, when the labeling of the data labeling frame is completed, a unique identifier id (feature-timestamp) is assigned to the labeled data labeling frame for counting, so as to obtain the number of the labels m +1, and when any labeled data labeling frame is deleted, the database is deleted according to the identifier id of the currently deleted data labeling frame, and m-1 is counted.
In this embodiment, the data labeling start point and end point in the picture to be labeled can be effectively determined by monitoring preset events on the picture. Based on the determined start and end points, the corresponding data labeling frame is generated automatically in the picture and displayed, which makes it easy for the user to check the generated frames. If a selection instruction for any data labeling frame is received, only the selected frame is labeled, which effectively improves labeling accuracy, avoids the low accuracy of manual data labeling, and improves the accuracy of the generated labeling data.
Referring to fig. 2, fig. 2 is a flowchart illustrating an implementation of a data annotation method according to another embodiment of the present application. With respect to the embodiment corresponding to fig. 1, the data annotation method provided in this embodiment is used to further refine step S20, and includes:
step S21, if a click event aiming at the picture to be marked is monitored, obtaining a coordinate point pointed by the click event on the picture to be marked to obtain the data marking starting point, and monitoring a lift event of the picture to be marked.
The click event is used to select a coordinate point on the picture to be labeled, and the lift event is used to release the selection of a coordinate point on the picture.
Specifically, in this step, whether to trigger acquisition of the data labeling start point is determined by monitoring for a press of the left mouse button: when a left-button press is detected, the coordinate point clicked on the picture is acquired as the data labeling start point, and the mouse lift event is then monitored.
Optionally, in this step, after obtaining the coordinate point pointed by the click event on the picture to be labeled and obtaining the data labeling starting point, the method further includes:
monitoring a moving event on the picture to be labeled, where the moving event moves a coordinate point on the picture; in this step, this is achieved by monitoring the motion state of the user's mouse;
if a moving event aimed at the picture is detected, acquiring the coordinate point to which the moving event points on the picture to obtain a labeling moving point; that is, while the user's mouse is moving, its coordinate point on the picture is acquired in real time as the labeling moving point;
and generating a candidate labeling frame from the labeling moving point and the data labeling start point, which lets the user effectively check the framed picture area in real time. Preferably, in this step, after the candidate labeling frame is generated, the area inside it is filled with a preset color and/or pattern, further helping the user check the framed area on the picture.
Step S22, if the lift event aimed at the picture to be labeled is detected, obtaining the coordinate point released by the lift event on the picture to obtain the data labeling end point.
When the left mouse button is detected switching from the pressed state to the released state, it is judged that a lift event has occurred on the picture to be labeled, and the coordinate point released by the user's mouse on the picture is acquired as the data labeling end point.
Further, in this embodiment, with respect to step S30 in the embodiment of fig. 1, the generating and displaying a data annotation frame according to the determined data annotation starting point and the determined data annotation ending point includes:
and acquiring the labeling moving point corresponding to the data labeling end point, and setting the candidate labeling frame corresponding to that point as the data labeling frame. Because the labeling moving point acquired here is the coordinate point of the lift event, the data labeling frame is set according to the data labeling end point finally chosen by the user.
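The press/move/lift flow above can be sketched as a small state machine: the press fixes the start point, each move event refreshes the candidate frame, and the lift event promotes the current candidate to a final data labeling frame. Class, method, and field names are illustrative assumptions.

```python
class BoxDrawer:
    """Minimal sketch of the click/move/lift labeling flow."""
    def __init__(self):
        self.start = None           # data labeling start point
        self.candidate = None       # current candidate labeling frame
        self.boxes = []             # finalized data labeling frames

    def on_press(self, x, y):
        self.start = (x, y)         # click event: fix the start point

    def on_move(self, x, y):
        if self.start is None:      # ignore moves before any press
            return
        x1, y1 = self.start         # candidate frame as (x, y, w, h)
        self.candidate = (min(x1, x), min(y1, y), abs(x - x1), abs(y - y1))

    def on_lift(self, x, y):
        self.on_move(x, y)          # the lift point is the labeling end point
        if self.candidate is not None:
            self.boxes.append(self.candidate)
        self.start = self.candidate = None

d = BoxDrawer()
d.on_press(10, 10)
d.on_move(50, 30)
d.on_lift(60, 40)
print(d.boxes)  # [(10, 10, 50, 30)]
```

Resetting `start` and `candidate` on lift means a stray move event after the drag ends cannot corrupt the finished frame.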
In this embodiment, whether to trigger acquisition of the data labeling start point is judged by monitoring click events on the picture to be labeled: when a click event is detected, the coordinate point it points to is acquired as the data labeling start point, and by then monitoring the lift event, the coordinate point it releases can be effectively acquired as the data labeling end point.
Referring to fig. 3, fig. 3 is a flowchart illustrating an implementation of a data annotation method according to another embodiment of the present application. With respect to the embodiment corresponding to fig. 1, the data annotation method provided in this embodiment is used to further refine the steps after step S30, and the data annotation method further includes:
step S50, acquiring the labeling area of the data labeling frame on the picture to be labeled, and performing area overlapping detection on the acquired labeling area and the displayed labeling area corresponding to the data labeling frame.
And detecting whether the area overlapping occurs between the currently generated data labeling frame and the displayed data labeling frame or not by performing area overlapping detection on the acquired labeling area and the labeling area corresponding to the displayed data labeling frame.
For example, referring to fig. 4, a schematic structural diagram of a picture A to be labeled provided in this embodiment, picture A includes a currently generated data labeling frame B and displayed data labeling frames C, D and E. In this step, the labeling area of frame B on picture A is obtained, as are the labeling areas of the displayed frames C, D and E, and area-overlap detection is performed between the area of frame B and each of the areas of frames C, D and E.
Step S60, if there is an overlapping area between the obtained labeling area and the labeling area corresponding to the displayed data labeling box, delete the data labeling box corresponding to the obtained labeling area.
If an overlapping area exists between the acquired labeling area and the labeling area of a displayed data labeling frame, it is judged that the currently generated data labeling frame overlaps a displayed one, and the data labeling frame corresponding to the acquired labeling area is deleted. This prevents object-labeling errors caused by overlap between data labeling frames and improves the accuracy of the generated labeling data.
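The area-overlap detection above reduces to axis-aligned rectangle intersection when the frames are rectangles, as in fig. 4; function names and the (x, y, w, h) box layout are illustrative assumptions.

```python
def overlap_area(box_a, box_b):
    """Area of the overlap between two boxes given as (x, y, w, h) with a
    top-left origin; 0 when they do not overlap."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def overlaps_any(new_box, shown_boxes):
    """Region-overlap detection of a newly generated labeling frame against
    every displayed labeling frame; returns the frames it overlaps."""
    return [b for b in shown_boxes if overlap_area(new_box, b) > 0]

shown = [(0, 0, 40, 40), (100, 100, 20, 20)]
print(overlaps_any((30, 30, 20, 20), shown))  # [(0, 0, 40, 40)]
print(overlaps_any((50, 50, 10, 10), shown))  # []
```

Clamping the per-axis extents at zero handles disjoint boxes without a separate "do they intersect" branch.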
Optionally, in this step, if an overlapping area exists between the obtained labeling area and the labeling area corresponding to a displayed data labeling frame, the method further includes:
if the number of the overlapped regions is smaller than a first number threshold and the area of the overlapped regions is larger than an area threshold, sending a mark frame replacement prompt, wherein the first number threshold and the area threshold can be set according to requirements;
specifically, in this step, when the obtained overlap area between the labeling area and the displayed data labeling frame meets a preset prompt condition, a labeling frame replacement prompt is sent to prompt a user whether the displayed data labeling frame needs to be replaced for the currently generated data labeling frame.
If a confirmation instruction for the replacement prompt is received, the displayed data annotation frame corresponding to the overlapping area is deleted and the newly generated frame is displayed. The currently generated frame thereby replaces the displayed one, which simplifies the user's frame-replacement operations on the picture to be annotated.
In this embodiment, region overlap detection between the acquired annotation area and the annotation areas of the displayed data annotation frames determines whether the currently generated frame overlaps a displayed frame. If an overlapping area exists, the frame corresponding to the acquired annotation area is deleted, which prevents object-labeling errors caused by overlapping frames and improves the accuracy of the generated annotation data.
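The overlap check and optional replacement logic described above can be sketched as follows. This is a minimal illustration in Python, assuming axis-aligned boxes stored as top-left corner plus width and height; the threshold values and the use of the largest single overlap for the area condition are assumptions, since the patent leaves the thresholds configurable and does not fix the exact formula.

```python
from dataclasses import dataclass


@dataclass
class Box:
    # Axis-aligned annotation frame: top-left corner plus size, in pixels.
    x: float
    y: float
    w: float
    h: float


def overlap_area(a: Box, b: Box) -> float:
    """Area of the intersection of two axis-aligned boxes (0 if disjoint)."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return dx * dy if dx > 0 and dy > 0 else 0.0


def check_new_box(new: Box, shown: list[Box],
                  count_threshold: int = 2,
                  area_threshold: float = 100.0) -> str:
    """Decide what to do with a freshly drawn frame.

    Returns 'keep' when it overlaps nothing, 'prompt_replace' when the
    replacement condition holds (few but large overlaps), else 'delete'.
    """
    overlaps = [a for a in (overlap_area(new, s) for s in shown) if a > 0]
    if not overlaps:
        return "keep"
    if len(overlaps) < count_threshold and max(overlaps) > area_threshold:
        return "prompt_replace"
    return "delete"
```

In a full tool, 'prompt_replace' would trigger the replacement prompt, and a confirmation would delete the overlapped displayed frame and display the new one.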
Referring to fig. 5, fig. 5 is a block diagram of a data annotation device 100 according to an embodiment of the present disclosure. In this embodiment, the device 100 includes units for executing the steps of the embodiments corresponding to fig. 1, fig. 2 and fig. 3; for details, please refer to those figures and their corresponding descriptions. For convenience of explanation, only the portions relevant to this embodiment are shown. As shown in fig. 5, the device includes a picture display unit 10, an event monitoring unit 11, an annotation frame generating unit 12 and an annotation data generating unit 13, wherein:
and the picture display unit 10 is used for displaying the picture to be marked.
The event monitoring unit 11 is configured to determine a data annotation starting point and a data annotation end point according to a preset event if the preset event for the picture to be annotated is monitored.
The event monitoring unit 11 is further configured to: if a click event for the picture to be annotated is monitored, acquire the coordinate point to which the click event points on the picture to obtain the data annotation starting point, the click event being used to select a coordinate point on the picture; and monitor a lift (release) event for the picture, the lift event being used to deselect a coordinate point on the picture;
if the lift event for the picture to be annotated is monitored, acquire the coordinate point deselected by the lift event on the picture to obtain the data annotation end point.
Further, the event monitoring unit 11 is also configured to: monitor a move event for the picture to be annotated, the move event being used to move a coordinate point on the picture;
if the move event for the picture to be annotated is monitored, acquire the coordinate point to which the move event points on the picture to obtain an annotation moving point;
and generate a candidate annotation frame according to the annotation moving point and the data annotation starting point.
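The press/move/release flow above (the click event sets the starting point, move events update a candidate frame, and the lift event fixes the end point) can be sketched as a small state machine. This is an illustrative sketch under assumptions, not the patented implementation: rendering and event wiring are omitted, and the tuple layout (x, y, width, height) is a choice made for the example.

```python
class AnnotationCanvas:
    """Minimal sketch of the click/move/lift event flow described above."""

    def __init__(self):
        self.start = None      # data annotation starting point
        self.candidate = None  # candidate annotation frame while moving
        self.boxes = []        # displayed data annotation frames

    def on_press(self, x, y):
        # Click event: select the coordinate point as the starting point.
        self.start = (x, y)

    def on_move(self, x, y):
        # Move event: regenerate the candidate frame spanned by the
        # starting point and the current annotation moving point.
        if self.start is None:
            return
        sx, sy = self.start
        self.candidate = (min(sx, x), min(sy, y), abs(x - sx), abs(y - sy))

    def on_release(self, x, y):
        # Lift event: the moving point at release becomes the end point,
        # and the matching candidate frame becomes the data annotation frame.
        if self.start is None:
            return
        self.on_move(x, y)
        self.boxes.append(self.candidate)
        self.start = self.candidate = None
```

A drag from (10, 10) to (50, 60), for example, produces the frame (10, 10, 40, 50).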
The annotation frame generating unit 12 is configured to generate and display a data annotation frame according to the determined data annotation starting point and data annotation end point.
The annotation frame generating unit 12 is further configured to acquire the annotation moving point corresponding to the data annotation end point, and set the candidate annotation frame corresponding to the acquired moving point as the data annotation frame.
The annotation data generating unit 13 is configured to, if a selection instruction for any data annotation frame is received, attach a label to the selected frame and generate annotation data corresponding to the picture to be annotated according to the labeled frame.
The annotation data generating unit 13 is further configured to: acquire the center-point coordinates, picture height, picture width and preset origin coordinates of the picture to be annotated, and acquire the label information to which the attached label points;
calculate a coordinate conversion rule from the acquired center-point coordinates, picture height, picture width and preset origin coordinates;
acquire the display coordinates of the data annotation frame on the picture to be annotated, and convert the display coordinates according to the coordinate conversion rule;
and store the converted display coordinates of each data annotation frame together with its label information to obtain the annotation data.
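One plausible reading of the coordinate conversion described above is sketched below: display coordinates are shifted by the preset origin, re-expressed relative to the picture center, and normalized by picture width and height so the stored labels are resolution independent. The patent does not fix the exact conversion formula or record layout, so both are assumptions made for illustration.

```python
def make_converter(center, pic_w, pic_h, origin):
    """Build a coordinate-conversion rule from the center point, picture
    width/height and preset origin (one assumed formula among many)."""
    cx, cy = center
    ox, oy = origin

    def convert(x, y):
        # Shift by the origin, center, then normalize to picture size.
        return ((x - ox - cx) / pic_w, (y - oy - cy) / pic_h)

    return convert


def build_annotation(boxes_with_labels, center, pic_w, pic_h, origin=(0.0, 0.0)):
    """Convert each frame's display corners and store them with its label,
    producing one annotation record per labeled data annotation frame."""
    convert = make_converter(center, pic_w, pic_h, origin)
    records = []
    for (x, y, w, h), label in boxes_with_labels:
        records.append({
            "label": label,
            "top_left": convert(x, y),
            "bottom_right": convert(x + w, y + h),
        })
    return records
```

With a 640x480 picture and its center as reference, a frame centered on the picture converts to coordinates near (0, 0), independent of the display resolution.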
Optionally, the data annotation device 100 further includes:
an overlap detection unit 14, configured to acquire the annotation area of the data annotation frame on the picture to be annotated, and perform region overlap detection between the acquired annotation area and the annotation areas corresponding to the displayed data annotation frames;
and, if an overlapping area exists between the acquired annotation area and an annotation area corresponding to a displayed frame, delete the data annotation frame corresponding to the acquired annotation area.
Optionally, the overlap detection unit 14 is further configured to: if the number of overlapping regions is smaller than the first number threshold and the area of the overlapping regions is larger than the area threshold, issue an annotation frame replacement prompt;
and, if a confirmation instruction for the replacement prompt is received, delete the displayed data annotation frame corresponding to the overlapping area and display the newly generated frame.
In this embodiment, monitoring the preset event on the picture to be annotated effectively determines the data annotation starting point and end point in the picture, so that the corresponding data annotation frame can be generated automatically from the determined points and displayed, which makes it easy for the user to check the generated frame. If a selection instruction for any data annotation frame is received, the selected frame is labeled, which improves labeling accuracy and avoids the low accuracy of purely manual data annotation, thereby improving the accuracy of data annotation overall.
Fig. 6 is a block diagram of a terminal device 2 according to another embodiment of the present application. As shown in fig. 6, the terminal device 2 of this embodiment includes a processor 20, a memory 21 and a computer program 22, such as a program of a data annotation method, stored in the memory 21 and executable on the processor 20. When executing the computer program 22, the processor 20 implements the steps of the data annotation method embodiments described above, such as S10 to S40 shown in fig. 1, S21 to S22 shown in fig. 2, or S50 to S60 shown in fig. 3. Alternatively, when executing the computer program 22, the processor 20 implements the functions of the units in the embodiment corresponding to fig. 5, for example the functions of units 10 to 14 shown in fig. 5; reference is made to the relevant description of that embodiment, which is not repeated here.
Illustratively, the computer program 22 may be divided into one or more units, which are stored in the memory 21 and executed by the processor 20 to implement the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 22 in the terminal device 2. For example, the computer program 22 may be divided into the picture display unit 10, the event monitoring unit 11, the annotation frame generating unit 12, the annotation data generating unit 13 and the overlap detection unit 14, whose specific functions are as described above.
The terminal device may include, but is not limited to, the processor 20 and the memory 21. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 2 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 20 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or memory of the terminal device 2. The memory 21 may also be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the terminal device 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
An embodiment of the present application also provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the data annotation method provided above.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for annotating data, comprising:
displaying a picture to be annotated;
if a preset event for the picture to be annotated is monitored, determining a data annotation starting point and a data annotation end point according to the preset event;
generating and displaying a data annotation frame according to the determined data annotation starting point and data annotation end point;
and if a selection instruction for any one of the data annotation frames is received, labeling the selected data annotation frame, and generating annotation data corresponding to the picture to be annotated according to the labeled data annotation frame.
2. The data annotation method of claim 1, wherein, if a preset event for the picture to be annotated is monitored, determining a data annotation starting point and a data annotation end point according to the preset event comprises:
if a click event for the picture to be annotated is monitored, acquiring the coordinate point to which the click event points on the picture to obtain the data annotation starting point, the click event being used to select a coordinate point on the picture, and monitoring a lift (release) event for the picture, the lift event being used to deselect a coordinate point on the picture;
if the lift event for the picture to be annotated is monitored, acquiring the coordinate point deselected by the lift event on the picture to obtain the data annotation end point.
3. The data annotation method of claim 2, wherein, after acquiring the coordinate point to which the click event points on the picture to be annotated to obtain the data annotation starting point, the method further comprises:
monitoring a move event for the picture to be annotated, the move event being used to move a coordinate point on the picture;
if the move event for the picture to be annotated is monitored, acquiring the coordinate point to which the move event points on the picture to obtain an annotation moving point;
and generating a candidate annotation frame according to the annotation moving point and the data annotation starting point.
4. The data annotation method of claim 3, wherein generating and displaying a data annotation frame according to the determined data annotation starting point and data annotation end point comprises:
acquiring the annotation moving point corresponding to the data annotation end point, and setting the candidate annotation frame corresponding to the acquired annotation moving point as the data annotation frame.
5. The data annotation method of claim 1, wherein, after generating and displaying a data annotation frame according to the determined data annotation starting point and data annotation end point, the method further comprises:
acquiring the annotation area of the data annotation frame on the picture to be annotated, and performing region overlap detection between the acquired annotation area and annotation areas corresponding to displayed data annotation frames;
and if an overlapping area exists between the acquired annotation area and an annotation area corresponding to a displayed data annotation frame, deleting the data annotation frame corresponding to the acquired annotation area.
6. The data annotation method of claim 5, wherein, if an overlapping area exists between the acquired annotation area and the annotation area corresponding to the displayed data annotation frame, the method further comprises:
if the number of overlapping regions is smaller than a first number threshold and the area of the overlapping regions is larger than an area threshold, issuing an annotation frame replacement prompt;
and if a confirmation instruction for the annotation frame replacement prompt is received, deleting the displayed data annotation frame corresponding to the overlapping area, and displaying the generated data annotation frame.
7. The data annotation method of claim 1, wherein generating annotation data corresponding to the picture to be annotated according to the labeled data annotation frame comprises:
respectively acquiring center-point coordinates, picture height, picture width and preset origin coordinates of the picture to be annotated, and acquiring the label information to which the attached label points;
calculating a coordinate conversion rule according to the acquired center-point coordinates, picture height, picture width and preset origin coordinates;
acquiring the display coordinates of the data annotation frame on the picture to be annotated, and converting the display coordinates according to the coordinate conversion rule;
and correspondingly storing the converted display coordinates and the label information of the same data annotation frame to obtain the annotation data.
8. A data annotation device, comprising:
a picture display unit, configured to display a picture to be annotated;
an event monitoring unit, configured to determine a data annotation starting point and a data annotation end point according to a preset event if the preset event for the picture to be annotated is monitored;
an annotation frame generating unit, configured to generate and display a data annotation frame according to the determined data annotation starting point and data annotation end point;
and an annotation data generating unit, configured to, if a selection instruction for any data annotation frame is received, label the selected data annotation frame and generate annotation data corresponding to the picture to be annotated according to the labeled data annotation frame.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A storage medium storing a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 7 when executed by a processor.
CN202110340495.2A 2021-03-30 2021-03-30 Data labeling method, device, terminal equipment and storage medium Active CN112967359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110340495.2A CN112967359B (en) 2021-03-30 2021-03-30 Data labeling method, device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110340495.2A CN112967359B (en) 2021-03-30 2021-03-30 Data labeling method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112967359A true CN112967359A (en) 2021-06-15
CN112967359B CN112967359B (en) 2023-12-19

Family

ID=76279698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110340495.2A Active CN112967359B (en) 2021-03-30 2021-03-30 Data labeling method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112967359B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492635A (en) * 2018-09-20 2019-03-19 第四范式(北京)技术有限公司 Obtain method, apparatus, equipment and the storage medium of labeled data
CN110865756A (en) * 2019-11-12 2020-03-06 苏州智加科技有限公司 Image labeling method, device, equipment and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115570A (en) * 2023-10-25 2023-11-24 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system
CN117115570B (en) * 2023-10-25 2023-12-29 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system
CN117437382A (en) * 2023-12-19 2024-01-23 成都电科星拓科技有限公司 Updating method and system for data center component
CN117437382B (en) * 2023-12-19 2024-03-19 成都电科星拓科技有限公司 Updating method and system for data center component

Also Published As

Publication number Publication date
CN112967359B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
TWI729331B (en) Image annotation information processing method, device, server and system
CN112967359B (en) Data labeling method, device, terminal equipment and storage medium
US8805081B2 (en) Object detection in an image
CN111309618B (en) Page element positioning method, page testing method and related devices
US9600298B2 (en) Active and efficient monitoring of a graphical user interface
CN111814716A (en) Seal removing method, computer device and readable storage medium
CN112183321A (en) Method and device for optimizing machine learning model, computer equipment and storage medium
KR20180109378A (en) Appartus for saving and managing of object-information for analying image data
CN115469954A (en) Canvas-based image annotation method and Canvas-based image annotation system
CN112116585B (en) Image removal tampering blind detection method, system, device and storage medium
CN113438471A (en) Video processing method and device, electronic equipment and storage medium
CN111292341B (en) Image annotation method, image annotation device and computer storage medium
CN112651315A (en) Information extraction method and device of line graph, computer equipment and storage medium
CN110334576B (en) Hand tracking method and device
CN111768433A (en) Method and device for realizing tracking of moving target and electronic equipment
CN115457119B (en) Bus bar labeling method, device, computer equipment and readable storage medium
US20230401809A1 (en) Image data augmentation device and method
CN114255219B (en) Symptom identification method and device, electronic equipment and storage medium
CN115890004A (en) Laser marking method based on image recognition and related device
CN111832549B (en) Data labeling method and device
CN114155471A (en) Design drawing and object verification method, device, computer equipment and system
CN115035032A (en) Neural network training method, related method, device, terminal and storage medium
CN114494960A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN112819885A (en) Animal identification method, device and equipment based on deep learning and storage medium
CN111124862A (en) Intelligent equipment performance testing method and device and intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant