CN116149521A - Image labeling method, device and system - Google Patents
- Publication number: CN116149521A (application number CN202111326407.XA)
- Authority: CN (China)
- Prior art keywords: frame, annotation, marking, state, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/04812: Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects (G Physics > G06 Computing > G06F Electric digital data processing > G06F3/048 Interaction techniques based on graphical user interfaces [GUI])
- G06F3/0485: Scrolling or panning (G Physics > G06 Computing > G06F Electric digital data processing > G06F3/0484 GUI interaction techniques for the control of specific functions or operations)
Abstract
Embodiments of the invention provide an image annotation method, apparatus, and system. The method comprises: when the image annotation cursor is detected to be located within the region of a history annotation box in a sample image, setting the edit state of that history annotation box to non-editable, where a history annotation box is an annotation box whose annotation has already been completed; and acquiring an annotation operation based on the image annotation cursor, and annotating an annotation box for the target to be annotated based on that operation. Because the electronic device sets any history annotation box under the cursor to a non-editable state, the user cannot select an already-completed history annotation box while annotating the image, which solves the problem that annotation boxes are prone to misoperation during image annotation.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image labeling method, apparatus, and system.
Background
With the development of technology, machine learning models are applied ever more widely; for example, they can be applied to target recognition in images, image classification, character recognition, and other fields. Before a model can process images, it must be trained to ensure that it outputs accurate processing results.
Training a model first requires collecting a large number of sample images, annotating them, and then using the annotated sample images to train the model. In current image annotation methods, when a sample image contains multiple targets, one annotation box must be drawn for each target to mark its position in the image.
When a user wants to edit the annotation box of one of the targets, it is easy to mishandle the boxes of the other targets, especially when the targets overlap.
Disclosure of Invention
Embodiments of the invention aim to provide an image annotation method, apparatus, and system that solve the problem of annotation boxes being prone to misoperation during image annotation. The specific technical solution is as follows:
In a first aspect, an embodiment of the present invention provides an image annotation method, the method including:
when it is detected that the image annotation cursor is located within the region of a history annotation box in a sample image, setting the edit state of the history annotation box to non-editable, where a history annotation box is an annotation box whose annotation has already been completed;
and acquiring an annotation operation based on the image annotation cursor, and annotating an annotation box for the target to be annotated based on the annotation operation.
Optionally, after the step of annotating an annotation box for the target to be annotated based on the annotation operation, the method further includes:
displaying an identifier of the target to be annotated in a preset status bar, and recording the correspondence between the annotation box and the identifier;
and changing the edit state of the annotation box based on the correspondence and a detected edit-state change operation.
Optionally, the step of changing the edit state of the annotation box based on the correspondence and the detected edit-state change operation includes:
when a selection operation on an identifier in the preset status bar is detected, determining the annotation box corresponding to that identifier according to the correspondence;
and changing the edit state of that annotation box according to the selection operation.
Optionally, the step of changing the edit state of the annotation box based on the correspondence and the detected edit-state change operation includes:
when a scrolling operation in the preset status bar is detected, determining the scrolling direction of the scrolling operation;
when the scrolling direction is a first preset direction, determining, according to the correspondence, the annotation box corresponding to the identifier preceding a target identifier, and setting the edit state of that annotation box to editable, where the target identifier is the identifier corresponding to the currently editable annotation box;
when the scrolling direction is a second preset direction, determining, according to the correspondence, the annotation box corresponding to the identifier following the target identifier, and setting the edit state of that annotation box to editable.
Optionally, the step of changing the edit state of the annotation box based on the correspondence and the detected edit-state change operation includes:
when the state-modification cursor is detected to be located in a region where multiple annotation boxes overlap, highlighting the annotation box on the topmost layer;
setting the edit state of the highlighted annotation box to editable, and determining the identifier corresponding to the highlighted annotation box based on the correspondence;
and displaying the identifier corresponding to the highlighted annotation box in a preset state in the preset status bar, where the preset state indicates that the edit state of the corresponding annotation box is editable.
Optionally, before the step of setting the edit state of the highlighted annotation box to editable, the method further includes:
when a scrolling operation is detected, determining the target annotation box indicated by the scrolling operation according to a pre-recorded layer order, where the layer order is determined by the order in which the annotation boxes were annotated;
and highlighting the target annotation box.
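As a sketch of the layer-order lookup described above, the box on the topmost layer under the cursor can be found by filtering the boxes that contain the cursor position and taking the one with the highest layer index (an illustrative example only; the class and field names are assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class AnnotationBox:
    label: str
    x1: float
    y1: float
    x2: float
    y2: float
    layer: int  # assigned in annotation order; higher means more recently annotated

def contains(box: AnnotationBox, x: float, y: float) -> bool:
    """True if the cursor position (x, y) lies inside the box."""
    return box.x1 <= x <= box.x2 and box.y1 <= y <= box.y2

def topmost_box_at(boxes, x, y):
    """Among all boxes whose regions contain the cursor, return the one on
    the highest layer; None if the cursor is outside every box."""
    hits = [b for b in boxes if contains(b, x, y)]
    return max(hits, key=lambda b: b.layer) if hits else None
```

A scroll operation could then step through the `hits` list in layer order instead of always taking the maximum.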
Optionally, the method further includes one of the following:
hiding all annotation boxes other than the highlighted annotation box; or
displaying the annotation boxes other than the highlighted annotation box at a brightness different from that of the highlighted annotation box; or
displaying the annotation boxes other than the highlighted annotation box in a color different from that of the highlighted annotation box.
In a second aspect, an embodiment of the present invention provides an image annotation apparatus, including:
a state-setting module, configured to set the edit state of a history annotation box to non-editable when the image annotation cursor is detected to be located within the region of the history annotation box in a sample image, where a history annotation box is an annotation box whose annotation has already been completed;
and an image annotation module, configured to acquire an annotation operation based on the image annotation cursor and to annotate an annotation box for the target to be annotated based on the annotation operation.
In a third aspect, an embodiment of the present invention provides an image annotation system comprising a plurality of terminals and a server, each terminal being provided with a mouse, where:
each terminal is configured to set the edit state of a history annotation box to non-editable when the image annotation cursor is detected within the region of the history annotation box in a sample image; to acquire an annotation operation based on the image annotation cursor, annotate an annotation box for the target to be annotated based on that operation, and send the annotated sample image to the server; where a history annotation box is an annotation box whose annotation has already been completed, and the display position of the image annotation cursor is determined from operations received via the mouse;
and the server is configured to receive the annotated sample images sent by each terminal and to store them.
The embodiments of the invention have the following beneficial effects:
In the solution provided by the embodiments of the invention, when the electronic device detects that the image annotation cursor is located within the region of a history annotation box in a sample image, it sets the edit state of that history annotation box to non-editable, where a history annotation box is an annotation box whose annotation has already been completed; it then acquires an annotation operation based on the image annotation cursor and annotates an annotation box for the target to be annotated based on that operation. Because the electronic device sets any history annotation box under the cursor to a non-editable state, the user cannot select an already-completed history annotation box while annotating the image, which solves the problem that annotation boxes are prone to misoperation during image annotation. Of course, it is not necessary for any product or method embodying the invention to achieve all of the above advantages at the same time.
Drawings
To illustrate the embodiments of the invention and the technical solutions of the prior art more clearly, the drawings needed for their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an image annotation interface in the related art;
FIG. 2 is a flowchart of an image annotation method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method of editing an annotation box based on the embodiment shown in FIG. 2;
FIG. 4 is a schematic diagram of an image annotation interface based on the embodiment shown in FIG. 2;
FIG. 5 is a specific flowchart of step S302 in the embodiment shown in FIG. 3;
FIG. 6 is a schematic diagram of a preset status bar based on the embodiment shown in FIG. 5;
FIG. 7 is another specific flowchart of step S302 in the embodiment shown in FIG. 3;
FIG. 8 is yet another specific flowchart of step S302 in the embodiment shown in FIG. 3;
FIG. 9 is a flowchart of a manner of determining an annotation box based on the embodiment shown in FIG. 8;
FIG. 10 is a schematic structural diagram of an image annotation apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an image annotation system according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the invention.
In conventional image annotation methods, when a user wants to edit the annotation box of a particular target in a sample image, it is easy to mishandle the boxes of other targets, especially when the targets overlap. As shown in FIG. 1, target 1, target 2, and target 3 overlap, and when one of the annotation boxes is selected for editing, the others are easily selected by mistake.
For example, after the boxes of target 1 and target 2 have been annotated, the existing boxes interfere with annotating the box of target 3, and the user easily selects the box of target 1 or target 2 by mistake. As another example, once the boxes of targets 1, 2, and 3 have all been annotated and the box of target 2 then needs to be adjusted, it must be selected again for editing; because it is occluded by the boxes of target 1 and target 3, it is easy to select one of those boxes instead, which is inconvenient for the user.
To solve the problem that annotation boxes are prone to misoperation during image annotation, embodiments of the invention provide an image annotation method, apparatus, and system, as well as an electronic device, a computer-readable storage medium, and a computer program product. The image annotation method provided by the embodiments of the invention is described first.
The method can be applied to any electronic device that needs to annotate images, for example a computer, tablet computer, or mobile phone, without particular limitation. For clarity of description, it is hereinafter referred to as the electronic device.
As shown in FIG. 2, an image annotation method includes:
S201: when it is detected that the image annotation cursor is located within the region of a history annotation box in a sample image, setting the edit state of the history annotation box to non-editable;
where a history annotation box is an annotation box whose annotation has already been completed.
S202: acquiring an annotation operation based on the image annotation cursor, and annotating an annotation box for the target to be annotated based on the annotation operation.
In the solution provided by this embodiment of the invention, when the electronic device detects that the image annotation cursor is located within the region of a history annotation box in a sample image, it sets the edit state of that history annotation box to non-editable, where a history annotation box is an annotation box whose annotation has already been completed; it then acquires an annotation operation based on the image annotation cursor and annotates an annotation box for the target to be annotated based on that operation. Because any history annotation box under the cursor is non-editable, the user cannot select an already-completed annotation box while annotating, which solves the problem that annotation boxes are prone to misoperation during image annotation.
During annotation of a sample image, the user controls the image annotation cursor, for example by operating a mouse, to annotate an annotation box for the target to be annotated. During this process, whenever the electronic device detects that the cursor is located within the region of a history annotation box in the sample image, it can change the edit state of that history annotation box to non-editable.
There may be one or more history annotation boxes. Regardless of their number, if the annotation cursor is located within the region of a history annotation box, the target to be annotated overlaps an already-annotated target, so the new annotation box will overlap the history annotation box. In that situation, while annotating the new box it is easy to select a previously completed history box by mistake, producing a misoperation. By changing the edit state of the history annotation box to non-editable whenever the image annotation cursor is detected within its region, the electronic device avoids such misoperation.
After setting the history annotation box at the cursor position to the non-editable state, the electronic device can acquire the annotation operation issued by the user via the image annotation cursor and annotate the annotation box of the target to be annotated based on that operation.
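The behaviour of step S201 can be sketched as follows (a minimal illustration under assumed data structures; all names are hypothetical):

```python
def update_edit_states(history_boxes, cursor_x, cursor_y):
    """Step S201 sketch: set every completed (history) annotation box whose
    region contains the image annotation cursor to the non-editable state."""
    for box in history_boxes:
        inside = (box["x1"] <= cursor_x <= box["x2"]
                  and box["y1"] <= cursor_y <= box["y2"])
        if inside:
            # The box under the cursor can no longer be selected or edited,
            # so a new box can be drawn over it without misoperation.
            box["editable"] = False
    return history_boxes
```

This check would run on every cursor-move event, before the annotation operation of step S202 is processed.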
In one embodiment, the user may click in the sample image to start annotating a box; upon detecting the click, the electronic device determines the position indicated by the click and records its coordinates as the first position.
Annotating a box typically requires a start-annotation operation and an end-annotation operation. For example, if the annotation box is rectangular, the user typically marks one corner first and then its diagonally opposite corner, and the rectangle determined by the two corners is the annotation box. The user therefore clicks again in the sample image to finish the box; upon detecting this click, the electronic device determines the indicated position and records its coordinates as the second position. Having determined the first and second positions, the electronic device can determine the annotation box of the target to be annotated from them.
In one embodiment, the electronic device generates the box from the first and second positions according to a preset box-generation mode and takes the result as the annotation box of the target to be annotated. The preset mode can be chosen according to actual requirements, such as the type of target to be annotated in the sample image. For example, for a rectangular box, the electronic device takes the first and second positions as two diagonal corners of the rectangle. For a circular box, the electronic device takes the segment between the first and second positions as the diameter and determines the circle accordingly.
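The two preset box-generation modes described above, a rectangle from diagonal corners and a circle from a diameter, can be sketched as follows (an illustrative example; the function names and dictionary layout are assumptions):

```python
import math

def make_rect_box(first_pos, second_pos):
    """Treat the two recorded click positions as diagonal corners of a
    rectangle; normalise so (x1, y1) is the top-left corner."""
    (x1, y1), (x2, y2) = first_pos, second_pos
    return {"x1": min(x1, x2), "y1": min(y1, y2),
            "x2": max(x1, x2), "y2": max(y1, y2)}

def make_circle_box(first_pos, second_pos):
    """Treat the segment between the two click positions as the diameter
    of a circular annotation box."""
    (x1, y1), (x2, y2) = first_pos, second_pos
    center = ((x1 + x2) / 2, (y1 + y2) / 2)
    radius = math.hypot(x2 - x1, y2 - y1) / 2
    return {"center": center, "radius": radius}
```

Either function would be called once both the start-annotation and end-annotation clicks have been recorded.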
In the solution provided by this embodiment, when the cursor with which the user annotates the image is located within the region of a history annotation box in the sample image, the electronic device sets that box to a non-editable state, so the user cannot select an already-completed history annotation box while annotating, which solves the problem that annotation boxes are prone to misoperation during image annotation.
As shown in FIG. 3, after the step of annotating an annotation box for the target to be annotated based on the annotation operation, the method may further include:
S301: displaying an identifier of the target to be annotated in a preset status bar, and recording the correspondence between the annotation box and the identifier.
After obtaining the annotation box of the target to be annotated based on the annotation operation, the electronic device can display an identifier of the target in the preset status bar. The identifier corresponds one-to-one with the target to be annotated, and the annotation box also corresponds one-to-one with the target, so the identifier corresponds one-to-one with the annotation box. In one embodiment, the identifier may be any text, number, pattern, or the like capable of identifying the target, without particular limitation.
For example, in the image annotation interface shown in FIG. 4, the sample image contains three targets to be annotated. When the user annotates the first annotation box 210, the identifier 230 displayed in the preset status bar 220 may be "target 1", which corresponds uniquely to the first annotation box 210. When the user annotates the second annotation box 211, the identifier 231 displayed in the preset status bar 220 may be "target 2", which corresponds uniquely to the second annotation box 211. When the user annotates the third annotation box 212, the identifier 232 displayed in the preset status bar 220 may be "target 3", which corresponds uniquely to the third annotation box 212.
If the sample image contains more targets to be annotated, the identifiers "target 4", "target 5", and so on are assigned in the order in which the annotation boxes are annotated.
To facilitate later editing of the annotation boxes and viewing or changing of their edit states, the electronic device records the correspondence between each annotation box and its identifier. In one embodiment, the electronic device may record the correspondence between the position and the identifier of each annotation box, for example as shown in the following table:
| Sequence number | Identifier | Annotation box | First position | Second position |
| --- | --- | --- | --- | --- |
| 1 | Target 1 | Box 1 | Corner coordinates a1 | Corner coordinates b1 |
| 2 | Target 2 | Box 2 | Corner coordinates a2 | Corner coordinates b2 |
| … | … | … | … | … |
| n | Target n | Box n | Corner coordinates an | Corner coordinates bn |
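The correspondence table above can be kept in a small registry keyed by identifier, with identifiers assigned in annotation order (an illustrative sketch; the class and method names are assumptions, not part of the patent):

```python
class AnnotationRegistry:
    """Records the one-to-one correspondence between identifiers
    ("target 1", "target 2", ...) and annotation boxes, each box being
    stored as its two recorded corner positions plus an edit state."""

    def __init__(self):
        self._boxes = {}

    def add(self, first_pos, second_pos):
        """Register a newly annotated box; identifiers follow annotation order."""
        label = f"target {len(self._boxes) + 1}"
        self._boxes[label] = {"first": first_pos,
                              "second": second_pos,
                              "editable": True}
        return label

    def box_for(self, label):
        """Look up the box corresponding to an identifier (step S302 lookup)."""
        return self._boxes[label]
```

The status bar would display `label` for each entry, and edit-state changes would go through `box_for`.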
S302: changing the edit state of the annotation box based on the correspondence and a detected edit-state change operation.
Because annotation boxes and identifiers correspond one-to-one, the user can change the edit state of a box by changing the state of its identifier in the status bar. Conversely, when the edit state of a box is changed, the electronic device can change the state of the corresponding identifier in the preset status bar, so that the identifier's state stays consistent with the box's state and the user can easily check the edit state of each box.
When the electronic device detects an edit-state change operation issued by the user, it can determine the object the operation targets. In one case, the object is the preset status bar: the device determines the identifier targeted by the operation, then determines the annotation box corresponding to that identifier, and changes the edit state of that box according to the operation.
In another case, the object targeted by the edit-state change operation is the annotation region in which the boxes are drawn: the device determines the annotation box targeted by the operation and changes its edit state accordingly. Of course, after changing the box's edit state, the device may also change the state of the corresponding identifier so that the two remain consistent.
In this embodiment, after obtaining the annotation box of the target to be annotated based on the annotation operation, the electronic device displays the target's identifier in the preset status bar, records the correspondence between the box and the identifier, and, when an edit-state change operation is detected, changes the edit state of the box according to the operation and the correspondence. The user can therefore select a specific annotation box for editing as needed after annotation is complete.
As shown in fig. 5, the step of changing the edit status of the annotation frame based on the correspondence and the detected edit status change operation may include:
S501, when a selection operation for the mark in the preset status bar is detected, determining a mark box corresponding to the mark according to the corresponding relation;
when the electronic equipment detects the selection operation aiming at the mark in the preset status bar, the electronic equipment indicates that the user wants to change the editing status of the mark frame at the moment, and then the electronic equipment can determine the mark frame corresponding to the mark according to the corresponding relation between the pre-recorded mark frame and the mark. The selection operation may specifically be a selection frame corresponding to the selection identifier, or may be operations such as clicking, double clicking, or long pressing the identifier, which is not specifically limited herein.
The display state of the logo may be changed, for example, the logo may be highlighted, gray displayed, enlarged displayed, or the color of the logo may also be changed, etc., in order to make it clear to the user whether the logo was successfully selected, and is not particularly limited herein.
As shown in fig. 6, the preset status bar displays the identifiers corresponding to the labeling frames in the sample image, and each identifier has a check frame, and when the user wants to select a labeling frame for editing, the user can check the check frame before the identifier corresponding to the labeling frame. For example, the user wants to edit the labeling frame of the target 2 in the sample image, so that the user can check the check frame before the "target 2", and when the electronic device detects the check operation of the user on the check frame, the electronic device can determine that the label selected by the user is the "target 2" according to the check operation of the user, and further determine that the labeling frame corresponding to the "target 2" is the labeling frame of the target 2 according to the corresponding relation between the pre-recorded labeling frame and the label.
S502, changing the editing state of the annotation frame according to the selection operation.
After the labeling frame is determined, the electronic device can change the editing state of the labeling frame according to the current editing state of the labeling frame. Under the condition that the current editing state of the annotation frame is an editable state, the electronic equipment can change the editing state of the annotation frame into an uneditable state when detecting the selection operation of the identifier corresponding to the annotation frame. In another case, the current editing state of the annotation frame is an uneditable state, and when the electronic device detects a selection operation for the identifier corresponding to the annotation frame, the editing state of the annotation frame may be changed to an editable state.
In addition, for the marking frame with the changed editing state, the marking frame can be displayed in different forms according to the editing state, so that a user can conveniently determine the current editing state of the marking frame, for example, when the editing state of the marking frame is changed into a non-editable state, the marking frame can be gray displayed or hidden and displayed; when the edit status of the label frame is changed to an editable status, the label frame may be highlighted, or the label frame may be displayed in a bolded manner, displayed in a specific color, or the like, and is not particularly limited herein.
In this embodiment, the electronic device can detect the user's selection operation on an identifier in the preset status bar, determine the corresponding identifier from the operation, determine the annotation frame corresponding to that identifier according to the correspondence, and change the editing state of the annotation frame. On the one hand, the user can select a specific annotation frame for editing as needed; on the other hand, because the user selects the annotation frame from the preset status bar, direct operation on a plurality of overlapping annotation frames is avoided, which reduces the likelihood of misoperation.
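The identifier-based selection and state toggling described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the names `AnnotationBox` and `toggle_by_identifier`, and the dictionary used as the pre-recorded correspondence, are all hypothetical.

```python
# Hypothetical sketch of selecting an annotation frame via its status-bar
# identifier and flipping its editing state (step S502). Names are assumed.

class AnnotationBox:
    def __init__(self, identifier):
        self.identifier = identifier
        self.editable = False  # for illustration, frames start non-editable

def toggle_by_identifier(boxes, identifier):
    """Flip the editing state of the frame whose identifier was ticked."""
    box = boxes[identifier]          # pre-recorded identifier -> frame mapping
    box.editable = not box.editable  # editable <-> non-editable
    return box.editable

# The user ticks the check box in front of "target 2":
boxes = {f"target {i}": AnnotationBox(f"target {i}") for i in (1, 2, 3)}
toggle_by_identifier(boxes, "target 2")  # "target 2" becomes editable
```

Toggling the same identifier again returns the frame to the non-editable state, matching the two cases described in the text.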
As shown in fig. 7, the step of changing the editing state of the annotation frame based on the correspondence and the detected editing-state-changing operation may include:
S701, when a scrolling operation in the preset status bar is detected, determining the scrolling direction of the scrolling operation;
When the user wants to change the editing state of a certain annotation frame, the corresponding identifier can be selected by scrolling a cursor in the status bar. Upon detecting a scrolling operation, the electronic device can acquire the position of the cursor and determine whether that position is within an annotated area of the sample image or within the area of the status bar. When the position is within the area of the status bar, the electronic device can determine the scrolling direction of the operation, e.g., forward or backward.
S702, when the scrolling direction is a first preset direction, determining the annotation frame corresponding to the identifier preceding a target identifier according to the correspondence, and setting the editing state of that annotation frame to editable;
S703, when the scrolling direction is a second preset direction, determining the annotation frame corresponding to the identifier following the target identifier according to the correspondence, and setting the editing state of that annotation frame to editable.
After determining the scrolling direction, the electronic device can determine the annotation frame the user wants to edit based on that direction. Specifically, when the scrolling direction is the first preset direction, the electronic device can determine, according to the correspondence between identifiers and annotation frames, the annotation frame corresponding to the identifier preceding the target identifier, and set its editing state to editable. Here, the target identifier is the identifier corresponding to the annotation frame currently in the editable state, and the first preset direction may be forward scrolling.
Similarly, when the scrolling direction is the second preset direction, the electronic device can determine, according to the correspondence between identifiers and annotation frames, the annotation frame corresponding to the identifier following the target identifier, and set its editing state to editable; the second preset direction may be backward scrolling.
Of course, after changing the editing state of the annotation frame, in order to keep the state of the identifier consistent with the editing state of the corresponding annotation frame, the electronic device can also change the state of the identifier corresponding to that annotation frame.
For example, as shown in fig. 6, the identifier in the checked state in the status bar is "target 2", that is, the annotation frame of target 2 in the current sample image is in the editable state. Suppose the first preset direction is forward scrolling and the second preset direction is backward scrolling. When the user wants to edit the annotation frame of target 1, the cursor can be scrolled forward in the status bar; when the electronic device detects that the scrolling direction is the first preset direction, it can change the editing state of the annotation frame corresponding to "target 1", the identifier preceding "target 2" in the status bar, to editable, and at the same time change the check box in front of "target 1" to the checked state.
When the user wants to edit the annotation frame of target 3, the cursor can be scrolled backward in the status bar; when the electronic device detects that the scrolling direction is the second preset direction, it can change the editing state of the annotation frame corresponding to "target 3", the identifier following "target 2" in the status bar, to editable, and at the same time change the check box in front of "target 3" to the checked state.
In this embodiment, the electronic device can detect a scrolling operation in the status bar, determine the identifier of the annotation frame to be edited from the scrolling direction and the identifier of the annotation frame currently in the editable state, determine the annotation frame corresponding to that identifier according to the correspondence, and change its editing state to editable. Thus, when selecting an annotation frame to edit, the user can scroll within the preset status bar, avoiding the misoperation that easily occurs when annotation frames are selected directly while several of them overlap.
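The scroll navigation of steps S701–S703 can be sketched as below. This is an illustrative assumption: the identifier list is taken to be ordered as in the preset status bar, and the selection clamps at both ends (the patent does not specify wrap-around behaviour); the function name is hypothetical.

```python
# Sketch of scroll-based selection in the status bar (S701-S703).
# "forward" (first preset direction) selects the previous identifier;
# "backward" (second preset direction) selects the next one. Assumed names.

def select_by_scroll(identifiers, current, direction):
    """Return the identifier whose annotation frame becomes editable."""
    i = identifiers.index(current)
    if direction == "forward":
        i = max(i - 1, 0)                      # previous identifier, clamped
    elif direction == "backward":
        i = min(i + 1, len(identifiers) - 1)   # next identifier, clamped
    return identifiers[i]

bar = ["target 1", "target 2", "target 3"]
select_by_scroll(bar, "target 2", "forward")   # selects "target 1"
```

In the fig. 6 example, scrolling forward from "target 2" selects "target 1", and scrolling backward selects "target 3", as described in the text.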
As shown in fig. 8, the step of changing the editing state of the annotation frame based on the correspondence and the detected editing-state-changing operation may include:
S801, when it is detected that a state-modifying cursor is located in an overlapping area of a plurality of annotation frames, highlighting the annotation frame at the highest layer;
When the user wants to select a certain annotation frame for modification, the state-modifying cursor can be moved to the area of the sample image where that annotation frame is located. If only one annotation frame is located at the cursor position, the electronic device can determine that this is the annotation frame the user wants to edit, and set its editing state to editable. At the same time, the user can be informed that the annotation frame has become editable by highlighting it, displaying it in a specific color, drawing it with a specific line style, or the like, and the user can then edit it.
If the electronic device detects that the cursor is located in an overlapping area of multiple annotation frames, it can be assumed that the user wants to edit the annotation frame at the highest layer, so that frame can be highlighted. The highlighting may specifically be displaying the frame brighter, displaying it in a color different from the other annotation frames, displaying only that frame, and the like, which is not specifically limited herein.
The annotation frame at the highest layer may be either the first-annotated or the last-annotated frame among the multiple annotation frames; both are reasonable and are not limited herein.
S802, setting the editing state of the highlighted annotation frame to editable, and determining the identifier corresponding to the highlighted annotation frame based on the correspondence;
The electronic device highlights the annotation frame at the highest layer and sets its editing state to editable. Thus, when multiple annotation frames overlap, the electronic device highlights one of them and makes it editable, which is convenient for the user to view and edit that frame.
In order to synchronously update the state of the identifier corresponding to the highlighted annotation frame so that the two remain consistent, the electronic device can also determine that identifier according to the pre-recorded correspondence between annotation frames and identifiers.
For example, as shown in fig. 4, assuming the annotation frame 212 is at the highest layer, it can be highlighted when the electronic device detects that the cursor is located within an overlapping area of multiple annotation frames. Then, according to the pre-recorded correspondence between annotation frames and identifiers, the identifier corresponding to the annotation frame 212 is determined to be the identifier "target 3" in the preset status bar.
S803, displaying the identifier corresponding to the highlighted annotation frame in a preset state in the preset status bar.
After changing the editing state of the annotation frame, in order to keep the state of the identifier consistent with the editing state of the corresponding annotation frame, the electronic device can set the state of the identifier corresponding to the highlighted annotation frame to a preset state, where the preset state indicates that the editing state of the corresponding annotation frame is editable.
For example, the identifier may be highlighted; or, when the identifier has a check box, the check box may be set to the checked state; or, the identifier may be displayed in the same color as the highlighted annotation frame, which is not limited herein.
For example, following the example of step S802, assuming the annotation frame 212 is displayed in green in the interface shown in fig. 6, the identifier "target 3" in the preset status bar may also be displayed in green, while the other annotation frames and identifiers may be black or other colors different from green.
In this embodiment, when the electronic device detects that the cursor is located in an overlapping area of multiple annotation frames, it can highlight the annotation frame at the highest layer, set the editing state of the highlighted annotation frame to editable, determine the corresponding identifier based on the correspondence, and display that identifier in the preset state in the preset status bar. Thus, the user can select the annotation frame to be edited by moving the cursor to the area where the annotation frames are located, and the intended frame can be determined accurately even when several annotation frames overlap, which makes editing convenient.
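A minimal hit test for step S801 might look like the following, under the assumption that each annotation frame is an axis-aligned rectangle tagged with a layer number, where a larger number means a higher layer. The tuple representation and function name are illustrative assumptions, not from the patent.

```python
# Hypothetical hit test: find the highest-layer annotation frame under the
# cursor. A frame is a (layer, x0, y0, x1, y1) tuple; larger layer = on top.

def topmost_box_at(boxes, x, y):
    """Return the highest-layer frame containing (x, y), or None."""
    hits = [b for b in boxes if b[1] <= x <= b[3] and b[2] <= y <= b[4]]
    return max(hits, key=lambda b: b[0]) if hits else None

# Two frames overlapping in the region [5, 10] x [5, 10]:
boxes = [(0, 0, 0, 10, 10), (1, 5, 5, 15, 15)]
topmost_box_at(boxes, 7, 7)   # cursor in the overlap: layer-1 frame wins
```

When exactly one frame contains the cursor, the same function returns that frame, matching the single-frame case described earlier.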
As shown in fig. 9, before the step of setting the editing state of the highlighted annotation frame to editable, the method may further include:
S901, when a scrolling operation is detected, determining a target annotation frame indicated by the scrolling operation according to a pre-recorded layer order;
When the electronic device detects that the cursor is located in an overlapping area of multiple annotation frames, the annotation frame at the highest layer can be highlighted. If this is not the frame the user wants to edit, the user can scroll the mouse to select the desired frame.
At this time, the electronic device can detect the scrolling operation in the overlapping area, determine its scrolling direction, and determine the target annotation frame from the scrolling direction and the layer of the currently highlighted frame. The layer order may be determined according to the order in which the annotation frames were annotated.
As one implementation, the layer of the first-annotated annotation frame may be the highest, with the layers of subsequently annotated frames decreasing in annotation order. As another implementation, the layer of the first-annotated frame may be the lowest, with the layers of subsequently annotated frames increasing in annotation order; both are reasonable and are not specifically limited herein.
Then, when the electronic device detects a scrolling operation generated by the user scrolling the mouse, it can determine that the annotation frame in the layer just below the highest layer is the target annotation frame indicated by the scrolling operation. If the user continues scrolling, the next target annotation frame can be determined according to the scrolling direction: if the user scrolls the mouse forward, the electronic device can determine that the annotation frame in the layer above the current one, i.e., the frame at the highest layer, is the target annotation frame; if the user scrolls the mouse backward, the electronic device can determine that the annotation frame in the layer below the current one is the target annotation frame.
For example, when the electronic device detects a scrolling operation in an overlapping area of multiple annotation frames: if the indicated scrolling direction is forward and the currently highlighted frame is at the second layer, the target annotation frame is the frame at the first, i.e. highest, layer; if the indicated direction is backward and the currently highlighted frame is at the second layer, the target annotation frame is the frame at the next layer down, i.e. the third layer.
S902, highlighting the target annotation frame.
After determining the target annotation frame, the electronic device can highlight it and then set its editing state to editable. Of course, in order to prevent the previously highlighted annotation frame from affecting the display and editing of the target annotation frame, the electronic device can cancel the previous highlight when highlighting the target frame; that is, at most one annotation frame in the sample image is highlighted at any time.
After the target annotation frame is highlighted, the corresponding identifier in the preset status bar can also be highlighted, so that the states of the frame and its identifier remain consistent and the user can view and edit them conveniently. The specific manner is similar to displaying the identifier corresponding to the highlighted annotation frame in the preset state, and is not repeated here.
In this embodiment, when the electronic device detects a scrolling operation in the overlapping area of multiple annotation frames, it can determine the target annotation frame indicated by the scrolling operation according to the pre-recorded layer order, and highlight it. Thus, when selecting among several overlapping annotation frames, the user can switch freely to the frame to be edited, and the electronic device indicates the currently selected frame by highlighting it, further facilitating modification and editing.
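The layer cycling of steps S901–S902 can be sketched as follows, assuming the pre-recorded layer order is stored as a list with the highest layer first and that the selection clamps at the top and bottom layers (the patent does not specify clamping versus wrap-around); all names are illustrative.

```python
# Sketch of S901-S902: a scroll in the overlap area moves the highlight
# through the pre-recorded layer order. The previous highlight is implicitly
# cancelled: only the returned frame is highlighted. Assumed names.

def scroll_highlight(layer_order, highlighted, direction):
    """Return the newly highlighted frame after one scroll step."""
    i = layer_order.index(highlighted)
    if direction == "forward":                  # toward the highest layer
        i = max(i - 1, 0)
    else:                                       # "backward": one layer down
        i = min(i + 1, len(layer_order) - 1)
    return layer_order[i]

order = ["frame A (top)", "frame B", "frame C"]   # e.g. annotated-first = top
scroll_highlight(order, "frame A (top)", "backward")   # moves to "frame B"
```

This matches the worked example in the text: from the second layer, a forward scroll selects the highest layer and a backward scroll selects the third layer.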
As an implementation manner of the embodiment of the present invention, the method may further include:
hiding other annotation frames except the highlighted annotation frame; or, displaying other annotation frames except the highlighted annotation frame with brightness different from the brightness of the highlighted annotation frame; or, displaying other annotation frames except the highlighted annotation frame by adopting colors different from the colors of the highlighted annotation frame.
When highlighting the annotation frame the user wants to edit, in order to make it easier for the user to confirm that the highlighted frame is indeed the intended one, the electronic device can hide the other annotation frames, so that the user sees the highlighted frame more clearly and can accurately judge whether it is the frame to be edited.
Alternatively, the electronic device can display the other annotation frames with a brightness different from that of the highlighted frame; for example, the highlighted annotation frame can be displayed brighter than the others.
Alternatively, the other annotation frames can be displayed in a color different from that of the highlighted frame; for example, the highlighted frame can be set to red and the others to a different color such as green. Of course, other colors may be used, as long as the color of the highlighted frame differs from that of the other frames.
It can be seen that, in this embodiment, the electronic device can hide the annotation frames other than the highlighted one, display them with a brightness different from that of the highlighted frame, or display them in a color different from that of the highlighted frame, which makes it easy for the user to confirm whether the highlighted annotation frame is the one to be edited, further facilitating user operation.
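The three de-emphasis options can be expressed as a simple style lookup. The mode names (`"hide"`, `"dim"`, with recoloring as the remaining case) and the concrete colors and brightness values are assumptions for illustration only.

```python
# Illustrative style selection for non-highlighted annotation frames:
# hide them, dim them, or recolor them. Mode names and values are assumed.

def style_for(is_highlighted, mode):
    """Pick a display style for an annotation frame given the chosen mode."""
    if is_highlighted:
        return {"visible": True, "color": "red", "brightness": 1.0}
    if mode == "hide":
        return {"visible": False}                                # hidden
    if mode == "dim":
        return {"visible": True, "color": "red", "brightness": 0.4}
    return {"visible": True, "color": "green", "brightness": 1.0}  # recolor

style_for(False, "hide")   # other frames are hidden entirely
```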
As an implementation manner of the embodiment of the present invention, the method may further include:
and inputting the marked sample image into a deep learning model to be trained, and training the deep learning model to be trained.
After the electronic device has annotated the sample images, they can be sent to a model training device such as a server. Upon receiving the annotated sample images, the model training device can input them into a deep learning model to be trained, and train the model.
The trained deep learning model can then be applied in the required application scenario; for example, it can be used for image classification, object detection, vehicle recognition, license plate recognition, character recognition, target behavior analysis, and the like.
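As a hedged sketch of the hand-off to training, annotated frames might be packaged into a detection-training target as below. The `boxes`/`labels` field names follow a common object-detector convention and are an assumption, not a format mandated by the patent.

```python
# Hypothetical packaging of an annotated sample image into a training target
# for an object detector. Field names follow a common convention (assumed).

def to_training_target(annotation_frames):
    """annotation_frames: list of (x0, y0, x1, y1, class_id) tuples."""
    return {
        "boxes":  [f[:4] for f in annotation_frames],  # frame coordinates
        "labels": [f[4] for f in annotation_frames],   # class per frame
    }

# Two annotated frames for one sample image:
target = to_training_target([(10, 10, 50, 60, 1), (30, 20, 80, 90, 2)])
```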
Corresponding to the image labeling method, the embodiment of the invention also provides an image labeling device. The following describes an image labeling device provided by the embodiment of the invention.
As shown in fig. 10, an image labeling apparatus, the apparatus comprising:
the state setting module 1010 is configured to set the editing state of a history annotation frame to non-editable when it is detected that an image annotation cursor is located in the area of a sample image where the history annotation frame is located;
The history annotation frame is an annotation frame that has already been annotated.
The image annotation module 1020 is configured to obtain an annotation operation based on the image annotation cursor, and annotate an annotation frame of a target to be annotated based on the annotation operation.
In the scheme provided by the embodiment of the invention, when the electronic device detects that the image annotation cursor is located in the area of the sample image where a history annotation frame is located, it sets the editing state of the history annotation frame to non-editable, where the history annotation frame is an annotation frame that has already been annotated; it then obtains an annotation operation based on the image annotation cursor and annotates the annotation frame of the target to be annotated based on that operation. Because the electronic device sets the annotation frame at the cursor position to the non-editable state when the user's annotation cursor is within the area of a history annotation frame, the user cannot select an already-annotated frame while annotating, which solves the problem that misoperation on annotation frames easily occurs during image annotation.
As an implementation manner of the embodiment of the present invention, the foregoing apparatus may further include:
the correspondence recording module is configured to display the identifier of the target to be annotated in a preset status bar after the annotation frame of the target is annotated based on the annotation operation, and to record the correspondence between the annotation frame and the identifier;
the state changing module is configured to change the editing state of the annotation frame based on the correspondence and the detected editing-state-changing operation.
As an implementation manner of the embodiment of the present invention, the state change module may include:
the first annotation frame determining unit is used for determining an annotation frame corresponding to the identifier according to the corresponding relation when detecting the selection operation of the identifier in the preset status bar;
and the first state changing unit is used for changing the editing state of the annotation frame according to the selection operation.
As an implementation manner of the embodiment of the present invention, the above state change module may further include:
a direction determining unit configured to determine the scrolling direction of a scrolling operation when the scrolling operation within the preset status bar is detected;
the second state changing unit is configured to determine, when the scrolling direction is determined to be a first preset direction, the annotation frame corresponding to the identifier preceding the target identifier according to the correspondence, and to set the editing state of that annotation frame to editable;
The target identifier is the identifier corresponding to the annotation frame currently in the editable state.
the third state changing unit is configured to determine, when the scrolling direction is determined to be a second preset direction, the annotation frame corresponding to the identifier following the target identifier according to the correspondence, and to set the editing state of that annotation frame to editable.
As an implementation manner of the embodiment of the present invention, the above state change module may further include:
the highlighting unit is configured to highlight the annotation frame at the highest layer when it is detected that the state-modifying cursor is located in an overlapping area of a plurality of annotation frames;
a fourth state changing unit, configured to set the editing state of the highlighted annotation frame to editable, and to determine the identifier corresponding to the highlighted annotation frame based on the correspondence;
the identifier display unit is configured to display the identifier corresponding to the highlighted annotation frame in a preset state in the preset status bar, where the preset state indicates that the editing state of the corresponding annotation frame is editable.
As an implementation manner of the embodiment of the present invention, the above state change module may further include:
a second annotation frame determining unit, configured to determine, when a scrolling operation is detected before the editing state of the highlighted annotation frame is set to editable, the target annotation frame indicated by the scrolling operation according to a pre-recorded layer order;
The layer order is determined according to the order in which the annotation frames were annotated.
a fifth state changing unit for highlighting the target annotation frame.
As an implementation manner of the embodiment of the present invention, the above state change module may further include:
an annotation frame hiding unit for hiding the annotation frames other than the highlighted annotation frame; or,
a brightness display unit for displaying the annotation frames other than the highlighted annotation frame with a brightness different from that of the highlighted annotation frame; or,
a color display unit for displaying the annotation frames other than the highlighted annotation frame in a color different from that of the highlighted annotation frame.
Corresponding to the above image annotation method, an embodiment of the present invention further provides an image annotation system, described below. As shown in fig. 11, the system includes a plurality of terminals 1101 and a server 1102, where each terminal 1101 is provided with a mouse (not shown in fig. 11):
Each terminal 1101 is configured to set the editing state of a history annotation frame to non-editable when it is detected that the image annotation cursor is located in the area of a sample image where the history annotation frame is located; and to obtain an annotation operation based on the image annotation cursor, annotate the annotation frame of a target to be annotated based on the annotation operation, and send the annotated sample image to the server 1102;
The history annotation frame is an annotation frame that has already been annotated, and the display position of the image annotation cursor is determined based on the operations obtained through the mouse.
The server 1102 is configured to receive the annotated sample images sent by each terminal 1101 and store them.
Therefore, in the scheme provided by the embodiment of the invention, when a terminal detects that the image annotation cursor is located in the area of the sample image where a history annotation frame is located, it sets the editing state of the history annotation frame to non-editable; it then obtains an annotation operation based on the image annotation cursor, annotates the annotation frame of the target to be annotated based on that operation, and finally sends the annotated sample image to the server, where the history annotation frame is an annotation frame that has already been annotated and the display position of the image annotation cursor is determined based on the operations obtained through the mouse. The server receives and stores the annotated sample images sent by each terminal. Because the terminal sets the annotation frame at the cursor position to the non-editable state when the user's annotation cursor is within the area of a history annotation frame, the user cannot select an already-annotated frame while annotating, which solves the problem that misoperation on annotation frames easily occurs during image annotation.
The image annotation system can include a plurality of terminals and a server, each terminal being provided with a mouse. A user can annotate sample images by operating the mouse, and different users can annotate large numbers of sample images through different terminals at the same time. Each terminal can determine the display position of the image annotation cursor based on the operations obtained through its own mouse, annotate the target to be annotated based on the user's operation of the cursor, and obtain the corresponding annotation frame. The annotated sample images can then be sent to the server, which receives and stores the annotated sample images sent by each terminal for training the deep learning model.
The sample images can be stored on the server; in that case, when annotating, a terminal can obtain the sample images stored on the server and display them so that the user can perform annotation operations. It is also reasonable for the sample images to be stored locally at the terminal, which is not specifically limited herein.
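The patent does not specify how a terminal serialises an annotated sample for transmission to the server; one minimal, assumed wire format is a JSON payload such as the following. The function name and field names are hypothetical.

```python
# Hypothetical serialisation of an annotated sample image for upload from
# a terminal to the server. The JSON layout is an assumption, not from the
# patent, which leaves the wire format unspecified.
import json

def make_upload(image_id, annotation_frames):
    """Serialise an annotated sample as a JSON payload for the server."""
    return json.dumps({"image_id": image_id, "frames": annotation_frames})

payload = make_upload("sample_001", [[10, 10, 50, 60, "target 1"]])
```

On the server side, the payload can be parsed back with `json.loads` and stored alongside the image for later model training.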
As an implementation manner of the embodiment of the present invention, each terminal 1101 may be further configured to display an identifier of a target to be annotated in a preset status bar after an annotation frame of the target to be annotated is annotated based on the annotation operation, and record a correspondence between the annotation frame and the identifier; and changing the editing state of the annotation frame based on the corresponding relation and the detected editing state changing operation.
As an implementation manner of the embodiment of the present invention, each terminal 1101 may be further configured to determine, when a selection operation on an identifier in the preset status bar is detected, the annotation frame corresponding to the identifier according to the correspondence, and to change the editing state of the annotation frame according to the selection operation.
As one implementation of the embodiments of the present invention, each of the terminals 1101 described above may also be configured to determine the scrolling direction of a scrolling operation when the scrolling operation within the preset status bar is detected; when the scrolling direction is a first preset direction, to determine the annotation frame corresponding to the identifier preceding the target identifier according to the correspondence and set its editing state to editable; and when the scrolling direction is a second preset direction, to determine the annotation frame corresponding to the identifier following the target identifier according to the correspondence and set its editing state to editable.
The target identifier is the identifier corresponding to the annotation frame currently in the editable state.
As one implementation of the embodiment of the present invention, each of the above-described terminals 1101 may also be configured to highlight the annotation frame at the highest layer when it is detected that the state-modifying cursor is located in an overlapping area of multiple annotation frames; to set the editing state of the highlighted annotation frame to editable, and determine the identifier corresponding to the highlighted annotation frame based on the correspondence; and to display that identifier in the preset state in the preset status bar.
The preset state indicates that the editing state of the corresponding annotation frame is editable.
As one implementation of embodiments of the present invention, each of the above-described terminals 1101 may also be configured to determine, when a scrolling operation is detected before the editing state of the highlighted annotation frame is set to editable, the target annotation frame indicated by the scrolling operation according to a pre-recorded layer order, and to highlight the target annotation frame.
The layer order is determined according to the order in which the annotation frames were annotated.
As one implementation of embodiments of the present invention, each of the above-described terminals 1101 may also be used to hide the annotation frames other than the highlighted annotation frame; or to display the annotation frames other than the highlighted annotation frame with a brightness different from that of the highlighted annotation frame; or to display the annotation frames other than the highlighted annotation frame in a color different from that of the highlighted annotation frame.
As one implementation of the embodiments of the present invention, the server 1102 may be further configured to input the annotated sample images into a deep learning model to be trained, and to train that deep learning model.
An embodiment of the present invention further provides an electronic device, as shown in FIG. 12, including a processor 1201, a communication interface 1202, a memory 1203, and a communication bus 1204, where the processor 1201, the communication interface 1202, and the memory 1203 communicate with one another via the communication bus 1204;
the memory 1203 is configured to store a computer program;
the processor 1201 is configured to implement the image annotation method steps of any of the embodiments above when executing the program stored in the memory 1203.
In the solution provided by the embodiments of the present invention, when the electronic device detects that the image annotation cursor is located within the area of a history annotation frame in the sample image, it sets the editing state of that history annotation frame to a non-editable state, where a history annotation frame is an annotation frame whose annotation has already been completed; the device then acquires an annotation operation based on the image annotation cursor and, based on that operation, obtains the annotation frame of the target to be annotated. Because any completed annotation frame under the user's cursor is set to a non-editable state, the user cannot accidentally select an already-completed frame while annotating, which solves the problem that misoperations on annotation frames easily occur during image annotation.
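The core rule above can be sketched in a few lines. This is an illustrative sketch under assumed data structures (a rectangle plus an editable flag per frame), not the patent's implementation.

```python
def update_edit_states(cursor, history_frames):
    """Lock any completed (history) frame that the annotation cursor is inside.

    history_frames: list of dicts with 'rect' = (x1, y1, x2, y2) and 'editable'.
    Frames under the cursor become non-editable, so drawing a new annotation
    over them cannot accidentally select or modify them.
    """
    cx, cy = cursor
    for frame in history_frames:
        x1, y1, x2, y2 = frame["rect"]
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            frame["editable"] = False

frames = [{"rect": (0, 0, 10, 10), "editable": True},
          {"rect": (20, 20, 30, 30), "editable": True}]
update_edit_states((5, 5), frames)
print(frames[0]["editable"])  # False — cursor is inside this completed frame
```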
The communication bus of the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is provided, in which a computer program is stored; when executed by a processor, the computer program implements the steps of the image annotation method of any of the foregoing embodiments.
In a further embodiment of the present invention, a computer program product containing instructions is also provided; when run on a computer, the instructions cause the computer to perform the image annotation method steps of any of the embodiments described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It should be noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes that element.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the descriptions of the apparatus, system, electronic device, computer-readable storage medium, and computer program product embodiments are relatively brief because they are substantially similar to the method embodiments; for relevant details, refer to the partial description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.
Claims (10)
1. A method of image annotation, the method comprising:
when it is detected that an image annotation cursor is located within the area of a history annotation frame in a sample image, setting an editing state of the history annotation frame to a non-editable state, wherein the history annotation frame is an annotation frame whose annotation has already been completed;
acquiring an annotation operation based on the image annotation cursor, and obtaining an annotation frame of a target to be annotated based on the annotation operation.
2. The method of claim 1, wherein after the step of obtaining an annotation frame of the target to be annotated based on the annotation operation, the method further comprises:
displaying an identifier of the target to be annotated in a preset status bar, and recording a correspondence between the annotation frame and the identifier;
changing the editing state of the annotation frame based on the correspondence and a detected editing-state-change operation.
3. The method of claim 2, wherein the step of changing the editing state of the annotation frame based on the correspondence and the detected editing-state-change operation comprises:
when a selection operation on an identifier in the preset status bar is detected, determining the annotation frame corresponding to that identifier according to the correspondence;
changing the editing state of that annotation frame according to the selection operation.
4. The method of claim 2, wherein the step of changing the editing state of the annotation frame based on the correspondence and the detected editing-state-change operation comprises:
when a scrolling operation within the preset status bar is detected, determining a scrolling direction of the scrolling operation;
when the scrolling direction is a first preset direction, determining the annotation frame corresponding to the identifier preceding a target identifier according to the correspondence, and setting the editing state of that annotation frame to an editable state, wherein the target identifier is the identifier corresponding to the annotation frame currently in the editable state;
when the scrolling direction is a second preset direction, determining the annotation frame corresponding to the identifier following the target identifier according to the correspondence, and setting the editing state of that annotation frame to an editable state.
5. The method of claim 2, wherein the step of changing the editing state of the annotation frame based on the correspondence and the detected editing-state-change operation comprises:
when it is detected that a state-modification cursor is located in an overlapping area of multiple annotation frames, highlighting the annotation frame on the highest layer;
setting the editing state of the highlighted annotation frame to an editable state, and determining the identifier corresponding to the highlighted annotation frame based on the correspondence;
displaying the identifier corresponding to the highlighted annotation frame in a preset state in the preset status bar, wherein the preset state indicates that the editing state of the corresponding annotation frame is an editable state.
6. The method of claim 5, wherein before the step of setting the editing state of the highlighted annotation frame to an editable state, the method further comprises:
when a scrolling operation is detected, determining a target annotation frame indicated by the scrolling operation according to a pre-recorded layer order, wherein the layer order is determined according to the order in which the annotation frames were annotated;
highlighting the target annotation frame.
7. The method of claim 5 or 6, wherein the method further comprises:
hiding the annotation frames other than the highlighted annotation frame; or,
displaying the annotation frames other than the highlighted annotation frame at a brightness different from that of the highlighted annotation frame; or,
displaying the annotation frames other than the highlighted annotation frame in a color different from that of the highlighted annotation frame.
8. The method of any one of claims 1-6, wherein the method further comprises:
inputting the annotated sample image into a deep learning model to be trained, and training the deep learning model to be trained.
9. An image annotation apparatus, the apparatus comprising:
a state setting module, configured to set an editing state of a history annotation frame to a non-editable state when it is detected that an image annotation cursor is located within the area of the history annotation frame in a sample image, wherein the history annotation frame is an annotation frame whose annotation has already been completed;
an image annotation module, configured to acquire an annotation operation based on the image annotation cursor, and obtain an annotation frame of a target to be annotated based on the annotation operation.
10. An image annotation system comprising a plurality of terminals and a server, each terminal being provided with a mouse, wherein:
each terminal is configured to set the editing state of a history annotation frame to a non-editable state when it detects that an image annotation cursor is located within the area of the history annotation frame in a sample image; to acquire an annotation operation based on the image annotation cursor, obtain an annotation frame of a target to be annotated based on the annotation operation, and send the annotated sample image to the server; wherein the history annotation frame is an annotation frame whose annotation has already been completed, and the display position of the image annotation cursor is determined based on operations obtained from the mouse;
the server is configured to receive the annotated sample images sent by the terminals and store the annotated sample images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111326407.XA CN116149521A (en) | 2021-11-10 | 2021-11-10 | Image labeling method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116149521A | 2023-05-23
Family
ID=86360446
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||