CN116052170A - Image labeling method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116052170A
CN116052170A (application CN202310086604.1A)
Authority
CN
China
Prior art keywords: image, marking, point, marked, labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310086604.1A
Other languages
Chinese (zh)
Inventor
朱良明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Chongqing BOE Smart Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Chongqing BOE Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Chongqing BOE Smart Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202310086604.1A priority Critical patent/CN116052170A/en
Publication of CN116052170A publication Critical patent/CN116052170A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/203 Drawing of straight lines or curves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area

Abstract

The embodiment of the application relates to the technical field of image processing, and provides an image labeling method and device, an electronic device and a storage medium. The image labeling method includes: displaying an image to be annotated on a canvas; drawing a plurality of annotation points on the image to be annotated; displaying an annotation frame on the image to be annotated based on the plurality of annotation points, wherein the content enclosed by the annotation frame is the content to be annotated of the image to be annotated; and adjusting the position and/or the size of the annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the annotation frame is the same before and after the adjustment.

Description

Image labeling method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image labeling method, an image labeling device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence technology, neural networks can perform image processing tasks such as image classification, object detection and image segmentation. Their application automates image processing, which reduces labor cost and improves efficiency and accuracy.
In the field of artificial intelligence, training an image processing neural network requires a large number of labeled images, and the labeling is performed with an image labeling tool. However, when an existing image labeling tool zooms in, zooms out or moves the image during labeling, the positions of the annotation points on the annotation frame drift, which affects the accuracy of the labeling result and degrades the user experience. Therefore, how to improve the accuracy of the labeling result and the user experience is a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide an image labeling method and device, an electronic device and a storage medium, aiming to improve the accuracy of labeling results.
A first aspect of an embodiment of the present application provides an image labeling method, including:
displaying the image to be annotated on the canvas;
drawing a plurality of marking points on the image to be marked;
displaying the annotation frame on the image to be annotated based on the plurality of annotation points, wherein the content enclosed by the annotation frame is the content to be annotated of the image to be annotated;
and adjusting the position and/or the size of the annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the annotation frame is the same before and after the adjustment.
In an alternative embodiment, adjusting the position and/or size of the annotation frame in response to the position change and/or size change of the image to be annotated comprises:
responding to the target position of the image to be marked after the position change, and moving the marking frame so that the marking frame follows the position change of the image to be marked;
and/or, in response to the size change of the image to be marked, determining a first size before the change and a second size after the change, and adjusting the size of the marking frame based on the first size and the second size.
In an alternative embodiment, after displaying the image to be annotated on the canvas, before drawing the plurality of annotation points on the image to be annotated, the method further comprises:
scaling the image to be marked to ensure that the width of the image to be marked is the same as the width of the canvas, or
And scaling the image to be annotated so that the height of the image to be annotated is the same as the height of the canvas.
In an alternative embodiment, the labeling frame is a rectangular labeling frame, the labeling points include a first labeling point and a second labeling point, and if the first labeling point or the second labeling point is located in the peripheral area of the canvas, the first labeling point or the second labeling point is moved to a point at the edge of the image to be labeled.
In an alternative embodiment, the peripheral region of the canvas includes a first sub-region, a second sub-region, a third sub-region, and a fourth sub-region, and moving the first annotation point or the second annotation point to a point at an edge of the image to be annotated includes:
if the first annotation point or the second annotation point is located in only one sub-region, the first annotation point or the second annotation point is moved to the point at the edge of the image to be annotated that is closest to it in the horizontal direction or the vertical direction;
and if the first marking point or the second marking point is positioned in the overlapped area of the two sub-areas, moving the first marking point or the second marking point to the vertex at the edge of the image to be marked, which is closest to the first marking point or the second marking point.
In an optional implementation manner, the labeling frame is a polygonal labeling frame, the labeling points are n polygonal labeling points, n is greater than or equal to 3, and after the plurality of labeling points are drawn on the image to be labeled, the method further includes:
responding to a drawing end instruction, and displaying the polygonal annotation frame on the image to be annotated based on the polygonal annotation points, wherein the content enclosed by the polygonal annotation frame is the content to be annotated of the image to be annotated;
and adjusting the position and/or the size of the polygonal annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the polygonal annotation frame is the same before and after the adjustment.
In an optional implementation manner, the labeling frame is a rectangular labeling frame, the labeling points include a first labeling point and a second labeling point, and the drawing of the plurality of labeling points on the image to be labeled includes:
determining the coordinates of the first marking point, wherein the coordinates of the first marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is pressed down;
determining the coordinates of the second labeling point, wherein the coordinates of the second labeling point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be labeled when the mouse is lifted;
the position of the mouse when the mouse is pressed down and the position of the mouse when the mouse is lifted up are positioned in the image to be marked;
and drawing the first annotation point and the second annotation point on the image to be annotated based on the coordinates of the first annotation point and the coordinates of the second annotation point.
In an alternative embodiment, after determining the coordinates of the second annotation point, the method further comprises:
Acquiring the width and the height of the rectangular annotation frame based on the coordinates of the first annotation point and the coordinates of the second annotation point;
acquiring vertex coordinates of the rectangular annotation frame based on the coordinates of the first annotation point, the coordinates of the second annotation point and the width and the height of the rectangular annotation frame;
and if the vertex coordinates of the rectangular annotation frame are positioned in the range of the vertex coordinates of the image to be annotated, drawing the rectangular annotation frame based on the vertex coordinates of the rectangular annotation frame.
In an alternative embodiment, the image to be annotated includes a first vertex, a second vertex, a third vertex and a fourth vertex, and the vertex coordinates of the image to be annotated are obtained as follows:
acquiring the width and the height of a canvas, and the width and the height of an image to be marked;
determining the coordinates, in the width direction, of the four vertices of the image to be annotated based on the canvas width and the width of the image to be annotated;
and determining the coordinates, in the height direction, of the four vertices of the image to be annotated based on the canvas height and the height of the image to be annotated.
In an alternative embodiment, scaling the image to be annotated includes:
Acquiring the width and the height of a canvas, and the width and the height of an image to be marked;
acquiring a first scaling ratio based on the canvas width and the image width to be annotated; acquiring a second scaling ratio based on the canvas height and the image height to be annotated;
acquiring a minimum value between the first scaling ratio and the second scaling ratio as an image scaling ratio;
and scaling the image to be annotated on the canvas based on the image scaling ratio.
In an alternative embodiment, the polygon labeling points are obtained as follows:
determining coordinates of the n polygonal marking points, wherein the coordinates of the n polygonal marking points are relative coordinates of a mouse position relative to vertex coordinates of the image to be marked when the mouse clicks a left key;
and drawing the polygon marking points based on the coordinates of the n polygon marking points.
In an alternative embodiment, the drawing end instruction is a double-click of the left mouse button performed within a target time period, or the input of a preset key on the keyboard.
A second aspect of the embodiments of the present application provides an image labeling apparatus, including:
the display module is used for displaying the image to be annotated on the canvas;
The marking point drawing module is used for drawing a plurality of marking points on the image to be marked;
the annotation frame generation module is used for displaying the annotation frame on the image to be annotated based on the annotation points, wherein the content enclosed by the annotation frame is the content to be annotated of the image to be annotated;
and the adjusting module is used for adjusting the position and/or the size of the annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the annotation frame is the same before and after the adjustment.
Wherein, the adjustment module includes:
the adjustment sub-module is used for responding to the target position after the position change of the image to be marked and moving the marking frame so that the marking frame follows the position change of the image to be marked;
and/or, in response to the size change of the image to be marked, determining a first size before the change and a second size after the change, and adjusting the size of the marking frame based on the first size and the second size.
Wherein, the device further includes:
a scaling module for scaling the image to be marked so that the width of the image to be marked is the same as the width of the canvas, or
And scaling the image to be annotated so that the height of the image to be annotated is the same as the height of the canvas.
Wherein, the device further includes:
and the marking point limiting module is used for moving the first marking point or the second marking point to a point at the edge of the image to be marked if the first marking point or the second marking point is positioned in the peripheral area of the canvas.
Wherein, the mark point limiting module further comprises:
the first moving sub-module is used for moving the first annotation point or the second annotation point to the point at the edge of the image to be annotated that is closest to it in the horizontal direction or the vertical direction, if the first annotation point or the second annotation point is located in only one sub-region;
and the second moving sub-module is used for moving the first marking point or the second marking point to the vertex at the edge of the image to be marked, which is closest to the first marking point or the second marking point, if the first marking point or the second marking point is positioned in the overlapping area of the two sub-areas.
Wherein, the device further includes:
the polygonal annotation frame drawing module is used for displaying, in response to a drawing end instruction, the polygonal annotation frame on the image to be annotated based on the polygonal annotation points, wherein the content enclosed by the polygonal annotation frame is the content to be annotated of the image to be annotated;
And the polygonal marking frame adjusting module is used for adjusting the position and/or the size of the polygonal marking frame in response to the position change and/or the size change of the image to be marked, so that the selected contents of the polygonal marking frame before and after adjustment are the same.
The marking point drawing module comprises:
the first marking point determining sub-module is used for determining the coordinates of the first marking point, wherein the coordinates of the first marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is pressed down; the position of the mouse when the mouse is pressed down is positioned in the image to be marked;
the second marking point determining submodule is used for determining the coordinates of the second marking point, wherein the coordinates of the second marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is lifted; the position of the mouse when the mouse is lifted is positioned in the image to be marked;
and the marking point drawing sub-module is used for drawing the first marking point and the second marking point on the image to be marked based on the coordinates of the first marking point and the coordinates of the second marking point.
Wherein, the mark point drawing module further comprises:
the rectangular size obtaining sub-module is used for obtaining the width and the height of the rectangular labeling frame based on the coordinates of the first labeling point and the coordinates of the second labeling point;
the rectangular vertex coordinate acquisition sub-module is used for acquiring the vertex coordinate of the rectangular annotation frame based on the coordinates of the first annotation point, the coordinates of the second annotation point and the width and the height of the rectangular annotation frame;
and the rectangle frame drawing submodule is used for drawing the rectangle marking frame based on the vertex coordinates of the rectangle marking frame if the vertex coordinates of the rectangle marking frame are positioned in the range of the vertex coordinates of the image to be marked.
Wherein, the mark point drawing module further comprises:
the size obtaining sub-module is used for obtaining the width and the height of the canvas, and the width and the height of the image to be marked;
the image width coordinate acquisition sub-module is used for determining the coordinates, in the width direction, of the four vertices of the image to be annotated based on the canvas width and the width of the image to be annotated;
and the image height coordinate acquisition sub-module is used for determining the coordinates, in the height direction, of the four vertices of the image to be annotated based on the canvas height and the height of the image to be annotated.
Wherein, the scaling module further comprises:
the size obtaining sub-module is used for obtaining the width and the height of the canvas and the width and the height of the image to be marked;
the candidate scaling ratio obtaining sub-module is used for obtaining a first scaling ratio based on the canvas width and the image width to be annotated; acquiring a second scaling ratio based on the canvas height and the image height to be annotated;
a scaling ratio determination submodule, configured to obtain a minimum value between the first scaling ratio and the second scaling ratio as an image scaling ratio;
and the scaling sub-module is used for scaling the image to be annotated on the canvas based on the image scaling ratio.
Wherein, the polygon labeling frame drawing module includes:
the polygon marking point coordinate acquisition sub-module is used for determining the coordinates of the n polygon marking points, wherein the coordinates of the n polygon marking points are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse clicks the left button;
and the polygon marking point drawing sub-module is used for drawing the polygon marking points based on the coordinates of the n polygon marking points.
Wherein, the polygonal annotation frame drawing module further includes:
And the drawing end instruction generation sub-module is used for generating a drawing end instruction, wherein the drawing end instruction is a double-click of the left mouse button executed within a target time period, or the input of a preset key on the keyboard.
A third aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory, where the processor executes the computer program to implement the steps in an image labeling method according to any one of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program/instruction which, when executed by a processor, implements the steps of a method for labeling images according to any of the first aspects.
The beneficial effects are that:
The embodiment of the application provides an image labeling method and device, an electronic device and a storage medium. The method includes: displaying an image to be annotated on a canvas; drawing a plurality of annotation points on the image to be annotated; displaying an annotation frame on the image to be annotated based on the plurality of annotation points, wherein the content enclosed by the annotation frame is the content to be annotated of the image to be annotated; and adjusting the position and/or the size of the annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the annotation frame is the same before and after the adjustment. Because the generated annotation frame is adjusted, according to the coordinates of the annotation points, along with the change of the image to be annotated, the content selected by the annotation frame does not drift, which effectively improves the accuracy of the labeling result and the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image labeling method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image annotation interface according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a change in dimension of a marking frame according to an embodiment of the present application;
FIG. 4 is a schematic diagram of four vertices of an image to be annotated according to an embodiment of the present disclosure;
FIG. 5a is a schematic diagram of a canvas first sub-region and a third sub-region according to an embodiment of the present application;
FIG. 5b is a schematic diagram of a canvas second sub-region and a fourth sub-region according to an embodiment of the present application;
FIG. 6 is a schematic drawing of a drawing direction of a rectangular label frame according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating movement of a polygon point according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image labeling apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the related art, in the field of artificial intelligence, training an image processing neural network requires a large number of labeled images, and the labeling is performed with an image labeling tool. However, when an existing image labeling tool zooms in, zooms out or moves the image during labeling, the positions of the annotation points on the annotation frame drift, which affects the accuracy of the labeling result and degrades the user experience.
In view of this, an embodiment of the present application proposes an image labeling method, and fig. 1 shows a flowchart of the image labeling method, as shown in fig. 1, including the following steps:
S101, displaying the image to be annotated on the canvas.
S102, drawing a plurality of marking points on the image to be marked.
S103, displaying the annotation frame on the image to be annotated based on the plurality of annotation points, wherein the content enclosed by the annotation frame is the content to be annotated of the image to be annotated.
S104, adjusting the position and/or the size of the annotation frame in response to a position change and/or a size change of the image to be annotated, so that the content selected by the annotation frame is the same before and after the adjustment.
In this embodiment of the present application, the image labeling method interacts with a user in the image labeling interface, and fig. 2 shows a schematic diagram of the image labeling interface, as shown in fig. 2, where the image labeling interface includes a canvas, and the canvas is an image processing area where the user performs related image processing operations on an image to be labeled. Besides the canvas (image processing area) for performing image processing operation on the image to be annotated by the user, the annotation interface may further include a marker selection area, a function operation area and an information display area, and other functional areas included in the annotation interface may be set according to actual situations, which is not particularly limited herein.
In this embodiment of the present application, the image to be annotated is an original image with no annotation points or annotation frames, and the image is used in a training process of machine learning of image processing in a subsequent downstream task, where the image processing includes multiple image processing types such as image classification, image detection, and image segmentation, and in the training process of the neural network corresponding to these types of image processing, the training image needs to be annotated, so that the training image needs to be used as the original image to perform corresponding annotation in advance.
In this embodiment of the present application, the marking point is used as a positioning point forming a marking frame, and the marking frame is a planar graphic frame for selecting the content to be marked. The marking points are part or all of vertexes of the plane graph corresponding to the marking frame, for example, if the marking frame is a rectangular marking frame, the marking points are a group of diagonal vertexes of the rectangular marking frame; if the labeling frame is a polygonal labeling frame, the labeling points are all vertexes of the polygonal labeling frame. After the marking points are obtained, if the marking points are part of vertexes of the marking frame, other vertexes on the marking frame are determined according to the marking points, and all vertexes determined by the marking points are connected into a closed plane graph to form the marking frame.
In the image processing process, the image to be marked can be enlarged or reduced to improve the drawing precision of the marking points and the marking frame, or the image to be marked is moved in the canvas, however, when the image to be marked is enlarged, reduced or moved, the positions of the pixel points on the image to be marked are correspondingly changed, and the relative positions of the marking points which are already drawn in the prior art are generated based on the canvas, so that the positions of the marking points which are already drawn are not changed, the content to be marked in the marking frame is changed after the image to be marked is enlarged, reduced or moved, and the correct content to be marked selected by the original frame is partially or completely deviated to the outside of the marking frame, so that the marking precision is greatly reduced.
Therefore, in the embodiment of the present application, when the position and/or the size of the image to be marked changes, the position and/or the size of the marking frame is correspondingly adjusted based on the position adjustment of the marking point in response to the size change and/or the position change information of the image to be marked, so that the position and/or the size of the marking frame and the position and/or the size of the image to be marked change in the same proportion, and the content selected by the marking frame before and after adjustment is ensured to be the same content.
For example, FIG. 3 shows a schematic diagram of a size change of the annotation frame. As shown in FIG. 3, the annotation frame is initially at a first position relative to the canvas and at a second position relative to the image to be annotated; when the image to be annotated is reduced, the annotation frame is at a third position relative to the canvas but still at the second position relative to the image to be annotated. That is, when the size of the image to be annotated changes, the position of the annotation frame relative to the canvas is adjusted accordingly so that the position of the annotation frame relative to the image to be annotated remains unchanged, and the content to be annotated selected by the annotation frame is therefore the same before and after the adjustment.
According to the method and the device for marking the image, the positions of the marking frames relative to the canvas are changed, and the positions of the marking frames relative to the image to be marked are kept unchanged, so that when the positions and/or the sizes of the image to be marked are changed, the content to be marked selected by the marking frames before and after the adjustment of the image to be marked is the same content, and the marking precision is improved.
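To make this mechanism concrete, the following is a minimal TypeScript sketch of the idea described above; the names ImageState, RelativePoint, toImageRelative and toCanvas are illustrative assumptions rather than the patent's actual implementation. An annotation point is stored relative to the first vertex of the image to be annotated (in the image's unscaled pixel space), so its canvas position can be recomputed whenever the image is moved or zoomed.

```typescript
// Illustrative sketch: store annotation points relative to the image to be annotated,
// so they follow the image when it is moved or scaled on the canvas.
interface ImageState {
  x: number;     // canvas x of the image's first (top-left) vertex
  y: number;     // canvas y of the image's first (top-left) vertex
  scale: number; // current display scale of the image
}

interface RelativePoint {
  rx: number; // offset from the first vertex, in unscaled image pixels
  ry: number;
}

// Convert a mouse position on the canvas into image-relative coordinates.
function toImageRelative(canvasX: number, canvasY: number, img: ImageState): RelativePoint {
  return { rx: (canvasX - img.x) / img.scale, ry: (canvasY - img.y) / img.scale };
}

// Recompute the canvas position of a stored annotation point for the current image state.
function toCanvas(p: RelativePoint, img: ImageState): { x: number; y: number } {
  return { x: img.x + p.rx * img.scale, y: img.y + p.ry * img.scale };
}
```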
In order to enable those skilled in the art to better understand the solution of the present application, a detailed description of an image labeling method provided in the embodiments of the present application is provided below:
When step S101 is implemented, an original image is drawn onto the canvas and rendered to obtain the image to be annotated. The original image import may be performed according to the prior art, for example by using the open-source canvas-based graphics library Fabric; the specific import process may be determined according to the actual situation and is not limited herein.
After the image to be marked is obtained, the size of the image to be marked is possibly larger than the size of the canvas, so that the image to be marked cannot be displayed on the canvas, and the image part exceeding the canvas cannot be subjected to effective marking operation in the subsequent marking process, therefore, in order to ensure that the image to be marked is completely positioned in the canvas, the image to be marked needs to be subjected to scaling treatment. In addition, if the image to be marked is scaled to be too small, adverse effects will be caused on the marking precision, so in the embodiment of the application, one of the width and the height of the image to be marked is selected to be the same as the canvas, that is, the image to be marked is scaled, so that the width of the image to be marked is the same as the width of the canvas, or the image to be marked is scaled, so that the height of the image to be marked is the same as the height of the canvas.
The image to be annotated may exceed the canvas height in the height direction, exceed the canvas width in the width direction, or exceed both at the same time, so in order to ensure that the scaled image is no larger than the canvas in both the width direction and the height direction, an image scaling ratio needs to be determined. Specifically, first, the width and the height of the canvas and the width and the height of the image to be annotated are obtained. Then, the size ratio between the canvas width and the width of the image to be annotated is calculated and taken as the first scaling ratio, and the size ratio between the canvas height and the height of the image to be annotated is calculated and taken as the second scaling ratio. If the image to be annotated is scaled according to the first scaling ratio only, its width can be made consistent with the canvas width, but its height may not fit the first scaling ratio, so that the scaled image still exceeds the canvas in the height direction; similarly, if the image is scaled according to the second scaling ratio only, its height can be made consistent with the canvas height, but the scaled image may still exceed the canvas in the width direction; and if the width is adjusted according to the first scaling ratio while the height is adjusted according to the second scaling ratio, the aspect ratio of the image changes whenever the two ratios differ, so that the image to be annotated is distorted.
Therefore, in order to solve the above-mentioned problem, after the first scaling ratio and the second scaling ratio are obtained, the first scaling ratio and the second scaling ratio need to be compared, a minimum value between the first scaling ratio and the second scaling ratio is obtained, the minimum value is used as an image scaling ratio for finally scaling an image to be marked, and the image to be marked is scaled on a canvas based on the image scaling ratio. The specific first scaling ratio, the second scaling ratio and the image scaling ratio are obtained according to the following formula:
R1 = w1 / w2
R2 = h1 / h2
R = Min(R1, R2)
where R1 is the first scaling ratio, R2 is the second scaling ratio, and R is the image scaling ratio; w1 is the width of the canvas, w2 is the width of the image to be annotated, h1 is the height of the canvas, and h2 is the height of the image to be annotated.
For example, if the first scaling ratio is 0.5 and the second scaling ratio is 0.4, scaling the width and the height of the image to be marked according to the second scaling ratio of 0.4, wherein the height of the adjusted image to be marked is consistent with the height of the canvas, and the width of the adjusted image to be marked is smaller than the width of the canvas; if the first scaling ratio is 0.5 and the second scaling ratio is 0.6, scaling the width and the height of the image to be marked according to the first scaling ratio of 0.5, wherein the width of the adjusted image to be marked is consistent with the width of the canvas, and the height of the adjusted image to be marked is smaller than the height of the canvas.
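As a concrete illustration of this ratio selection, here is a small TypeScript sketch (the function name imageScalingRatio is an assumption for illustration) that computes the first and second scaling ratios and takes their minimum as the image scaling ratio:

```typescript
// Sketch of the scaling-ratio selection: the smaller ratio is used so the scaled
// image fits inside the canvas in both the width and the height direction.
function imageScalingRatio(
  canvasWidth: number, canvasHeight: number,
  imageWidth: number, imageHeight: number
): number {
  const r1 = canvasWidth / imageWidth;   // first scaling ratio  R1 = w1 / w2
  const r2 = canvasHeight / imageHeight; // second scaling ratio R2 = h1 / h2
  return Math.min(r1, r2);               // image scaling ratio  R = Min(R1, R2)
}

// Example matching the text: R1 = 0.5 and R2 = 0.4 give an image scaling ratio of 0.4.
console.log(imageScalingRatio(500, 400, 1000, 1000)); // 0.4
```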
In an alternative embodiment, the zoomed image to be annotated is centrally displayed in the central region of the canvas, and the canvas region around the image to be annotated is used as the peripheral region of the canvas.
In an alternative embodiment, when step S102 is specifically implemented, the labeling frame is a rectangular labeling frame, and the position and the size of the rectangle may be determined by a set of diagonal vertices, so for the rectangular labeling frame, the labeling points include a first labeling point and a second labeling point, where the first labeling point and the second labeling point are a set of diagonal vertices of the rectangular labeling frame, for example, the first labeling point and the second labeling point are a lower left vertex and an upper right vertex of the rectangular labeling frame, and may also be an upper left vertex and a lower right vertex.
Specifically, in response to a mouse click operation, one mouse press and lift is used as one click operation, and the coordinates of the first annotation point and the coordinates of the second annotation point are determined based on one mouse press and lift. Firstly, determining the coordinates of the first marking point, wherein the coordinates of the first marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is pressed down; and then determining the coordinates of the second labeling point, wherein the coordinates of the second labeling point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be labeled when the mouse is lifted. Because the coordinates of the first labeling point and the coordinates of the second labeling point are relative coordinates relative to the coordinates of the vertexes of the image to be labeled, when the size and/or the position of the image to be labeled are changed, all the pixel coordinates of the image to be labeled reflected on the pixel layer surface are changed in the same size and/or the same position, and therefore, the change of the image to be labeled can be represented by the coordinates of the four vertexes of the image to be labeled. When the vertex coordinates of the image to be marked are changed, the coordinates of the first marking point and the second marking point are based on the relative coordinates of the vertex coordinates of the image to be marked, so that the coordinates of the first marking point and the second marking point can correspondingly change, and the positions of the first marking point and the second marking point relative to the image to be marked are kept unchanged.
In an alternative embodiment, fig. 4 shows a schematic diagram of the four vertices of the image to be annotated. As shown in fig. 4, the image to be annotated includes a first vertex at the upper left corner, a second vertex at the lower left corner, a third vertex at the upper right corner and a fourth vertex at the lower right corner, and the vertex coordinates of the image to be annotated are obtained as follows: acquiring the width and the height of the canvas and the width and the height of the image to be annotated; determining the coordinates, in the width direction, of the four vertices based on the canvas width and the width of the image to be annotated; and determining the coordinates, in the height direction, of the four vertices based on the canvas height and the height of the image to be annotated.
Specifically, firstly, the width and the height of a canvas are obtained, and the width and the height of an image to be annotated are obtained; and calculating half of the difference value between the canvas width and the image width to be annotated based on the canvas width and the image width to be annotated, wherein the half of the difference value is used as a first abscissa of the image to be annotated, and the first abscissa is the abscissa of the first vertex and the second vertex because the first vertex and the second vertex are the same in abscissa, and a specific calculation formula of the first abscissa is as follows:
x1 = (w1 - w2) / 2
where x1 is the first abscissa, w1 is the width of the canvas, and w2 is the width of the image to be annotated.
Then, based on the canvas height and the height of the image to be annotated, half of the difference between the canvas height and the image height is calculated as the first ordinate of the image to be annotated. Since the first vertex and the third vertex have the same ordinate, the first ordinate is the ordinate of the first vertex and the third vertex. The specific calculation formula of the first ordinate is as follows:
y1 = (h1 - h2) / 2
where y1 is the first ordinate, h1 is the height of the canvas, and h2 is the height of the image to be annotated.
After the first abscissa and the first ordinate are determined, the sum of the first abscissa and the width of the image to be annotated is calculated as the second abscissa of the image to be annotated. Since the third vertex and the fourth vertex have the same abscissa, the second abscissa is the abscissa of the third vertex and the fourth vertex. The specific calculation formula of the second abscissa is as follows:
x2 = x1 + w2
where x2 is the second abscissa, x1 is the first abscissa, and w2 is the width of the image to be annotated.
Finally, based on the first ordinate and the height of the image to be annotated, the sum of the first ordinate and the image height is calculated as the second ordinate of the image to be annotated. Since the second vertex and the fourth vertex have the same ordinate, the second ordinate is the ordinate of the second vertex and the fourth vertex. The specific calculation formula of the second ordinate is as follows:
y2 = y1 + h2
where y2 is the second ordinate, y1 is the first ordinate, and h2 is the height of the image to be annotated.
So far, the first vertex coordinates (x1, y1), the second vertex coordinates (x1, y2), the third vertex coordinates (x2, y1) and the fourth vertex coordinates (x2, y2) are obtained. The relative coordinates of the first annotation point and the second annotation point are then acquired based on the four vertex coordinates of the image to be annotated.
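The vertex formulas above can be summarized in a short TypeScript sketch (the function name and return shape are assumptions for illustration):

```typescript
// Sketch: vertex coordinates of the image to be annotated, centered on the canvas.
// w1/h1 are the canvas width/height, w2/h2 the width/height of the (scaled) image.
function imageVertices(w1: number, h1: number, w2: number, h2: number) {
  const x1 = (w1 - w2) / 2; // first abscissa
  const y1 = (h1 - h2) / 2; // first ordinate
  const x2 = x1 + w2;       // second abscissa
  const y2 = y1 + h2;       // second ordinate
  return {
    firstVertex:  { x: x1, y: y1 }, // upper left
    secondVertex: { x: x1, y: y2 }, // lower left
    thirdVertex:  { x: x2, y: y1 }, // upper right
    fourthVertex: { x: x2, y: y2 }, // lower right
  };
}
```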
It should be noted that, the mouse position when the mouse is pressed down and the mouse position when the mouse is lifted up are located in the image to be marked, so that the marking point is ensured not to be drawn in the peripheral area of the canvas where the image to be marked does not exist.
In an alternative embodiment, after determining the coordinates of the first annotation point and the second annotation point, if the first annotation point or the second annotation point is located in the peripheral area of the canvas, the first annotation point or the second annotation point is moved to a point at the edge of the image to be annotated, so as to prevent the annotation point from being drawn in the peripheral area of the canvas. Specifically, fig. 5a shows a schematic diagram of a first subregion and a third subregion of a canvas, fig. 5b shows a schematic diagram of a second subregion and a fourth subregion of the canvas, as shown in fig. 5a-5b, the peripheral region of the canvas includes the first subregion, the second subregion, the third subregion and the fourth subregion, if the first marking point or the second marking point is only located in one subregion, the first marking point or the second marking point is moved to a point at the edge of the image to be marked, which is closest to the first marking point in the horizontal direction or the vertical direction, for example, if the first marking point is only located in the first subregion, the rectangular marking frame directly formed according to the first marking point must exceed the size of the image to be marked, and at this time, the edge of the image to be marked, which is closest to the first marking point, is the left edge of the image to be marked, and the first marking point is moved to the left edge of the image to be marked in the horizontal direction, as a new first marking point.
Because of the overlapping portions of the four sub-regions in the perimeter region of the canvas, for example, the first sub-region and the second sub-region have overlapping regions in the upper left corner of the canvas. And if the first marking point or the second marking point is positioned in the overlapping area of the two sub-areas, at the moment, the first marking point or the second marking point is moved to the vertex at the edge of the image to be marked, which is closest to the first marking point or the second marking point. For example, if the first labeling point is located in the overlapping area of the first sub-area and the second sub-area, the rectangular labeling frame formed directly according to the first labeling point must exceed the size of the image to be labeled, and at this time, the vertex at the edge of the image to be labeled closest to the first labeling point is the first vertex of the image to be labeled, and then the first labeling point is moved to the first vertex of the image to be labeled, so as to be used as a new first labeling point.
Specifically, the coordinates of the moved annotation point are obtained as follows: if the coordinate of the marking point is positioned in the first subarea or the second subarea, taking the maximum value between the abscissa of the marking point and the first abscissa of the image to be marked as the abscissa of the marking point after moving; taking the maximum value between the ordinate of the marking point and the first ordinate of the image to be marked as the ordinate of the moved marking point, and specifically obtaining according to the following formula:
x' = Max(x1, x)
y' = Max(y1, y)
where x' is the abscissa of the moved annotation point, x1 is the first abscissa of the image to be annotated, and x is the abscissa of the annotation point; y' is the ordinate of the moved annotation point, y1 is the first ordinate of the image to be annotated, and y is the ordinate of the annotation point.
If the coordinate of the marking point is positioned in the third subarea or the fourth subarea, taking the minimum value between the abscissa of the marking point and the second abscissa of the image to be marked as the abscissa of the marking point after moving; taking the minimum value between the ordinate of the marking point and the second ordinate of the image to be marked as the ordinate of the marking point after moving, and specifically obtaining according to the following formula:
x' = Min(x2, x)
y' = Min(y2, y)
where x' is the abscissa of the moved annotation point, x2 is the second abscissa of the image to be annotated, and x is the abscissa of the annotation point; y' is the ordinate of the moved annotation point, y2 is the second ordinate of the image to be annotated, and y is the ordinate of the annotation point.
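Read together, the Max and Min formulas clamp an annotation point that falls in the peripheral area back onto the image edge. The following TypeScript sketch applies both cases in one step, a simplification that assumes the image lies inside the canvas, so the combined clamp is equivalent to choosing the formula by sub-region:

```typescript
// Sketch: move an annotation point that lies in the peripheral area of the canvas
// to the nearest point on the edge of the image to be annotated,
// per x' = Max(x1, x) / x' = Min(x2, x) and the corresponding y formulas.
function clampToImage(
  x: number, y: number,
  x1: number, y1: number, // first abscissa / first ordinate of the image
  x2: number, y2: number  // second abscissa / second ordinate of the image
): { x: number; y: number } {
  return {
    x: Math.min(x2, Math.max(x1, x)),
    y: Math.min(y2, Math.max(y1, y)),
  };
}
```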
And finally, after determining the coordinates of the first annotation point and the coordinates of the second annotation point which are positioned in the range of the image to be annotated, drawing the first annotation point and the second annotation point on the image to be annotated based on the coordinates of the first annotation point and the coordinates of the second annotation point.
When step S103 is specifically implemented, the size of the rectangular labeling frame needs to be determined based on the first labeling point and the second labeling point that are drawn first: and acquiring the width and the height of the rectangular annotation frame based on the coordinates of the first annotation point and the coordinates of the second annotation point. Specifically, calculating an absolute value of a difference value between the abscissa of the first labeling point and the abscissa of the second labeling point as the width of the rectangular labeling frame; calculating the absolute value of the difference between the ordinate of the first annotation point and the ordinate of the second annotation point as the height of the rectangular annotation frame, wherein the absolute value is calculated according to the following formula:
w3 = Abs(xa - xb)
h3 = Abs(ya - yb)
where w3 is the width of the rectangular annotation frame, h3 is the height of the rectangular annotation frame, xa and ya are the abscissa and the ordinate of the first annotation point, and xb and yb are the abscissa and the ordinate of the second annotation point.
Then, based on the coordinates of the first labeling point and the coordinates of the second labeling point and the width and the height of the rectangular labeling frame, vertex coordinates of the rectangular labeling frame are obtained, and because the first labeling point and the second labeling point are two vertexes in the rectangular labeling frame, other two vertex coordinates are obtained here, for example, if the first labeling point is an upper left corner vertex and the second labeling point is a lower right corner vertex, the sum of the abscissa of the first labeling point and the width of the rectangular labeling frame can be calculated and used as the abscissa of the upper right corner vertex, and the ordinate of the first labeling point is used as the ordinate of the upper right corner vertex; and calculating the sum of the ordinate of the first marking point and the height of the rectangular marking frame to be used as the ordinate of the left lower corner vertex, and taking the abscissa of the first marking point as the abscissa of the left lower corner vertex. It should be noted that, the method for calculating the vertices of other rectangular labeling frames based on the labeling points and the widths and heights of the rectangular frames may be determined according to practical situations, and the above examples are only illustrative of one of them, and the method for calculating the vertices of other rectangular labeling frames is not limited in this application.
After the vertex coordinates of the rectangular labeling frame are obtained, judging whether the vertex of the rectangular labeling frame is positioned in the range of the vertex coordinates of the image to be labeled; if the vertex coordinates of the rectangular labeling frame are located in the range of the vertex coordinates of the image to be labeled, drawing the rectangular labeling frame based on the vertex coordinates of the rectangular labeling frame; and if the vertex coordinates of the rectangular labeling frame have coordinates which are not in the range of the vertex coordinates of the image to be labeled, acquiring new labeling points according to the method.
In an alternative embodiment, to determine whether the vertices of the rectangular annotation frame are located within the range of the vertex coordinates of the image to be annotated, the reference coordinate of the rectangular annotation frame needs to be acquired, where the reference coordinate is the coordinate of the top-left vertex of the rectangular frame. FIG. 6 shows a schematic diagram of the drawing directions of the rectangular annotation frame; as shown in FIG. 6, the dot in the figure is the starting point of the mouse. When the rectangular annotation frame is drawn based on a mouse click operation, the mouse moves along a diagonal whose direction is arbitrary in the plane, so the diagonal direction of the drawn rectangle is uncertain, that is, the positions of the first annotation point and the second annotation point within the rectangular annotation frame are uncertain. Therefore, the position of the rectangular annotation frame is located through the reference coordinate.
Specifically, a minimum value between the first labeling point abscissa and the second labeling point abscissa is obtained and used as an abscissa value of the reference coordinate; acquiring a minimum value between the ordinate of the first annotation point and the ordinate of the second annotation point as the ordinate value of the reference coordinate; and forming the abscissa value of the reference coordinate and the ordinate value of the reference coordinate into the reference coordinate. The reference coordinates are obtained according to the following formula:
x0 = Min(xa, xb)
y0 = Min(ya, yb)
where x0 is the abscissa value of the reference coordinate, y0 is the ordinate value of the reference coordinate, xa and ya are the abscissa and the ordinate of the first annotation point, and xb and yb are the abscissa and the ordinate of the second annotation point.
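Combining the width/height formulas with the reference coordinate, the rectangular annotation frame can be derived from the two annotation points regardless of the drag direction. A TypeScript sketch (the RectFrame shape and function name are assumptions for illustration):

```typescript
// Sketch: derive the rectangular annotation frame from two diagonal annotation points.
interface RectFrame {
  x0: number; // reference coordinate: abscissa of the top-left corner
  y0: number; // reference coordinate: ordinate of the top-left corner
  w3: number; // width of the rectangular annotation frame
  h3: number; // height of the rectangular annotation frame
}

function rectFromPoints(xa: number, ya: number, xb: number, yb: number): RectFrame {
  return {
    x0: Math.min(xa, xb),  // x0 = Min(xa, xb)
    y0: Math.min(ya, yb),  // y0 = Min(ya, yb)
    w3: Math.abs(xa - xb), // w3 = Abs(xa - xb)
    h3: Math.abs(ya - yb), // h3 = Abs(ya - yb)
  };
}
```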
And finally, displaying the rectangular annotation frame on the image to be annotated based on the vertex coordinates of the rectangular annotation frame in the range of the image to be annotated and the width and the height of the rectangular annotation frame, wherein the rectangular annotation frame is the content to be annotated of the image to be annotated.
In an optional implementation manner, the annotation frame may be a polygonal annotation frame, and the annotation points are n polygonal annotation points, where n is greater than or equal to 3. The polygonal annotation points are drawn first: the coordinates of the n polygonal annotation points are determined, wherein the coordinates of the n polygonal annotation points are the relative coordinates of the mouse position, when the left mouse button is clicked, with respect to the vertex coordinates of the image to be annotated; and the polygonal annotation points are drawn based on the coordinates of the n polygonal annotation points.
Specifically, determining first polygon labeling point coordinates, wherein the first polygon labeling point coordinates are relative coordinates of a mouse position relative to vertex coordinates of the image to be labeled when the mouse clicks a left key at a first moment; then determining the coordinates of the second polygon marking points, wherein the coordinates of the second polygon marking points are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse clicks the left button at a second moment, and the second moment is a moment larger than a target time period after the first moment; and determining the coordinates of the nth polygon marking point, wherein the coordinates of the nth polygon marking point are relative coordinates of a mouse position relative to the vertex coordinates of the image to be marked when the mouse clicks the left button at the nth moment, and the nth moment is a moment larger than the target time period after the nth-1 moment.
Since the polygonal annotation points are drawn in response to mouse click operations, a target time period is set so that clicks following each other too quickly, which are to be recognized as a mouse double-click, are not drawn as additional annotation points: the annotation point corresponding to a click operation is generated only when the interval between that click operation and the previous click operation is greater than the target time period. The target time period may be set according to the specific situation, for example to 200 ms, which is not limited herein.
After the polygonal annotation point coordinates are obtained, the polygonal annotation points are generated on the image to be annotated based on those coordinates. Subsequently, a drawing end instruction needs to be input to end the drawing of the polygonal annotation points. In an optional implementation manner, the drawing end instruction is a double-click of the left mouse button executed within the target time period; for example, if the target time period is set to 200 ms and two consecutive mouse click operations are completed within 200 ms, the drawing end instruction is determined, and the polygon is drawn from all the polygonal annotation points drawn so far. The drawing end instruction may also be the input of a preset key on the keyboard; for example, if the key Q is set as the preset key and a key-Q input is received during the drawing of the polygonal annotation points, the drawing ends and the polygon is drawn from all the polygonal annotation points drawn so far. It should be noted that the specific form of the drawing end instruction may be determined according to the actual situation, and the application is not limited herein.
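A minimal TypeScript sketch of this click handling, assuming hypothetical addPolygonPoint and finishPolygon callbacks; the 200 ms value is the example target time period from the text:

```typescript
// Sketch: a click adds a polygonal annotation point, while two clicks within the
// target time period are treated as the drawing end instruction (a double-click).
const TARGET_PERIOD_MS = 200; // example target time period from the text
let lastClickTime = Number.NEGATIVE_INFINITY;

function onLeftClick(
  x: number, y: number, now: number,
  addPolygonPoint: (x: number, y: number) => void,
  finishPolygon: () => void
): void {
  if (now - lastClickTime <= TARGET_PERIOD_MS) {
    finishPolygon();       // drawing end instruction
  } else {
    addPolygonPoint(x, y); // draw a new polygonal annotation point
  }
  lastClickTime = now;
}
```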
Because the coordinates of the polygon labeling points are relative to the vertex coordinates of the image to be labeled, when the size and/or position of the image to be labeled changes, all pixel coordinates of the image change, at the pixel level, by the same scale and/or offset, so the change of the image to be labeled can be represented by the coordinates of its four vertices. When the vertex coordinates of the image to be labeled change, the polygon labeling point coordinates, being relative to those vertex coordinates, change correspondingly, and the positions of the polygon labeling points relative to the image to be labeled remain unchanged.
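A minimal sketch of this relative-coordinate behavior, assuming a uniform scale factor and illustrative names (nothing here is defined by the application): a point stored relative to the image's top-left vertex is re-projected onto the canvas from the image's current vertex position and scale, so its position on the image itself never changes.

```typescript
// Labeling point stored relative to the image, re-projected onto the canvas
// from the image's current top-left vertex and scale (assumed uniform).
interface Point { x: number; y: number; }
interface ImageState { topLeft: Point; scale: number; }

function toCanvas(relative: Point, img: ImageState): Point {
  return {
    x: img.topLeft.x + relative.x * img.scale,
    y: img.topLeft.y + relative.y * img.scale,
  };
}

// The same relative point keeps its place on the image after a move and a zoom.
const p: Point = { x: 40, y: 25 };
console.log(toCanvas(p, { topLeft: { x: 10, y: 10 }, scale: 1 })); // (50, 35) on the canvas
console.log(toCanvas(p, { topLeft: { x: 60, y: 30 }, scale: 2 })); // (140, 80) on the canvas
```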
In an alternative embodiment, fig. 7 shows a schematic diagram of the movement of a polygon labeling point. As shown in fig. 7, when a polygon labeling point is drawn in a peripheral area of the canvas, it is processed in the same way as a labeling point of a rectangular labeling frame drawn in the peripheral area of the canvas; the specific steps are described above under S102 and are not repeated here.
Then, in response to the drawing end instruction, the polygonal labeling frame is displayed on the image to be labeled based on the polygon labeling points, where the polygonal labeling frame encloses the content to be labeled of the image to be labeled.
When step S104 is specifically implemented, after the labeling frame (the rectangular labeling frame or the polygonal labeling frame described above) has been obtained, the image to be labeled may be enlarged or reduced to improve the drawing precision of the labeling points and the labeling frame, or it may be moved within the canvas. When the image to be labeled is enlarged, reduced or moved, the positions of the pixels on the image change correspondingly. Because the labeling point coordinates are relative to the vertex coordinates of the image to be labeled, the position and/or size of the labeling frame is adjusted automatically, by adjusting the labeling point coordinates, in response to the position change and/or size change of the image to be labeled, so that the content selected by the labeling frame before and after the adjustment is the same.
In an alternative embodiment, in response to the target position reached after the position of the image to be labeled changes, the labeling frame is moved so that it follows the position change of the image to be labeled. For example, if the image to be labeled moves 5 unit lengths to the left in the horizontal direction, the labeling frame also moves 5 unit lengths to the left in the horizontal direction; the position of the labeling frame relative to the image to be labeled is then unchanged while its position relative to the canvas changes, so the content to be labeled selected by the labeling frame before and after the move is the same content.
In another alternative embodiment, in response to a change in the size of the image to be labeled, a first size before the change and a second size after the change are determined, and the size of the labeling frame is adjusted based on the first size and the second size. For example, the labeling frame is at a first position and of a first size relative to the canvas, and at a second position and of the first size relative to the image to be labeled. When the image to be labeled is reduced, the labeling frame is at a third position and of a second size relative to the canvas, but is still at the second position and of the first size relative to the image to be labeled. That is, when the size of the image to be labeled changes, the position and size of the labeling frame relative to the canvas are adjusted correspondingly so that its position and size relative to the image to be labeled remain unchanged; the content to be labeled selected by the labeling frame before and after the adjustment is therefore the same content.
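The two adjustments described above (following a move, and rescaling after a resize) can be sketched as follows. The Rect shape, the assumption that the image is resized about a fixed top-left vertex, and all names are illustrative only, not taken from the application.

```typescript
// Adjusting the labeling frame after the image is moved or resized (canvas coordinates).
interface Rect { left: number; top: number; width: number; height: number; }
interface Point { x: number; y: number; }

// Image moved by (dx, dy): move the labeling frame by the same offset.
function followMove(frame: Rect, dx: number, dy: number): Rect {
  return { ...frame, left: frame.left + dx, top: frame.top + dy };
}

// Image width changed from firstSize to secondSize: rescale the frame so that
// its position and size relative to the image are kept.
function followResize(frame: Rect, imageTopLeft: Point,
                      firstSize: number, secondSize: number): Rect {
  const k = secondSize / firstSize;
  return {
    left: imageTopLeft.x + (frame.left - imageTopLeft.x) * k,
    top: imageTopLeft.y + (frame.top - imageTopLeft.y) * k,
    width: frame.width * k,
    height: frame.height * k,
  };
}

// Usage: move 5 unit lengths to the left, then reduce the image to half size.
let frame: Rect = { left: 120, top: 80, width: 60, height: 40 };
frame = followMove(frame, -5, 0);
frame = followResize(frame, { x: 20, y: 20 }, 800, 400);
console.log(frame);
```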
In an alternative embodiment, after the image to be labeled has been zoomed, its labeling state is checked. If the labeling state is "labeled", the user manually jumps to the next new image to be labeled and steps S101-S104 are repeated to graphically label the new image; if the labeling state is "not labeled", graphical labeling of the current image to be labeled continues, and after the labeling is completed the view automatically jumps to the next new image to be labeled.
In an alternative embodiment, after the image to be labeled is displayed on the canvas, the labeling parameters of the labeling points and labeling frame to be drawn are set in the labeling selection area, where the labeling parameters include content attributes and labeling attributes. For example, the content attributes may be language, font type, text direction and recognition; the labeling attributes may be labeling type information, labeling point information and labeling frame line information. It should be noted that the specific types of labeling parameters may be set according to the actual situation, which is not limited herein.
In an alternative embodiment, after the drawing of the labeling frame is completed, the labeling parameters and the labeling information are transmitted to the image processing algorithm of a downstream task, where the labeling information includes the size and vertex coordinates of the labeling frame and the coordinates of the labeling points.
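As a non-authoritative sketch of this hand-off, the labeling parameters and labeling information could be serialized to JSON and posted to the downstream task. The payload shape and the endpoint URL below are assumptions for illustration, not part of the application.

```typescript
// Serializing the labeling result for the downstream image processing task.
interface Point { x: number; y: number; }

interface LabelingResult {
  parameters: Record<string, unknown>;  // the labeling parameters set in the selection area
  info: {
    frameWidth: number;
    frameHeight: number;
    frameVertices: Point[];             // vertex coordinates of the labeling frame
    labelPoints: Point[];               // coordinates of the drawn labeling points
  };
}

async function sendToDownstream(result: LabelingResult): Promise<void> {
  await fetch('/api/annotations', {     // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(result),
  });
}
```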
The embodiment of the application provides an image labeling method, which comprises the following steps: displaying the image to be annotated on the canvas; drawing a plurality of marking points on the image to be marked; displaying the annotation frame on the image to be annotated based on the plurality of annotation points, wherein the annotation frame is the content to be annotated of the image to be annotated; and adjusting the position and/or the size of the marking frame in response to the position change and/or the size change of the image to be marked, so that the selected contents of the marking frame before and after adjustment are the same. The generated marking frame is correspondingly adjusted along with the change of the image to be marked according to the coordinates of the marking points, so that the frame selected content of the marking frame cannot deviate, the accuracy of marking results is effectively improved, and the use experience of a user is improved.
In order to make the present application more clearly understood by those skilled in the art, the image labeling method described in the present application will now be described in detail through the following example.
Example 1
The original image is drawn onto the Canvas canvas through fabric.js: an img tag is dynamically created on the canvas and the original image is rendered to obtain the image to be labeled.
The ratio between the canvas width and the width of the image to be labeled is calculated and taken as a first scaling ratio; the ratio between the canvas height and the height of the image to be labeled is calculated and taken as a second scaling ratio; the minimum of the first scaling ratio and the second scaling ratio is taken as the image scaling ratio used to finally scale the image to be labeled, and the image to be labeled is scaled on the canvas through the setBackgroundImage attribute of fabric.js. The scaled image to be labeled is displayed in the center area of the canvas through centerObject. The labeling parameters of the labeling points and labeling frame to be drawn are set in the labeling selection area, where the labeling parameters include content attributes and labeling attributes.
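For illustration, and assuming Fabric.js v5 (whose setBackgroundImage and centerObject calls appear in garbled form in the original text), this scaling and centering step might look roughly like the sketch below. The canvas element id and the image file name are placeholders, and centering the background image via centerObject before setBackgroundImage is an assumption rather than the application's prescribed call sequence.

```typescript
import { fabric } from 'fabric';

// Load the original image, scale it by the smaller of the two ratios and
// center it on the canvas; 'c' and 'original.png' are placeholders.
const canvas = new fabric.Canvas('c');

fabric.Image.fromURL('original.png', (img) => {
  const firstRatio = canvas.getWidth() / (img.width ?? 1);    // canvas width / image width
  const secondRatio = canvas.getHeight() / (img.height ?? 1); // canvas height / image height
  const imageScale = Math.min(firstRatio, secondRatio);       // final image scaling ratio

  img.scale(imageScale);
  canvas.centerObject(img);                                   // display in the center area of the canvas
  canvas.setBackgroundImage(img, canvas.renderAll.bind(canvas));
});
```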
First, the coordinates of the four vertices of the image to be labeled are acquired: the width and height of the canvas and the width and height of the image to be labeled are obtained; the width-direction coordinates of the four vertices of the image to be labeled are determined based on the canvas width and the width of the image to be labeled; and the height-direction coordinates of the four vertices of the image to be labeled are determined based on the canvas height and the height of the image to be labeled.
The coordinates of the first labeling point (the top-left vertex) are determined as the relative coordinates of the mouse position when the mouse is pressed with respect to the vertex coordinates of the image to be labeled; the coordinates of the second labeling point (the bottom-right vertex) are determined as the relative coordinates of the mouse position when the mouse is lifted with respect to the vertex coordinates of the image to be labeled.
The absolute value of the difference between the abscissa of the first labeling point and the abscissa of the second labeling point is calculated and taken as the width of the rectangular labeling frame; the absolute value of the difference between the ordinate of the first labeling point and the ordinate of the second labeling point is calculated and taken as the height of the rectangular labeling frame. The sum of the abscissa of the first labeling point and the width of the rectangular labeling frame is taken as the abscissa of the top-right vertex, and the ordinate of the first labeling point is taken as the ordinate of the top-right vertex; the sum of the ordinate of the first labeling point and the height of the rectangular labeling frame is taken as the ordinate of the bottom-left vertex, and the abscissa of the first labeling point is taken as the abscissa of the bottom-left vertex.
The minimum of the abscissa of the first labeling point and the abscissa of the second labeling point is taken as the abscissa of the reference coordinate; the minimum of the ordinate of the first labeling point and the ordinate of the second labeling point is taken as the ordinate of the reference coordinate; the two values together form the reference coordinate. Based on the reference coordinate, it is judged that the vertex coordinates of the rectangular labeling frame all lie within the range of the image to be labeled.
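Combining the width/height computation with the reference-coordinate rule above, a sketch of deriving and validating the rectangular labeling frame from the press point and the lift point (both image-relative) could look as follows; the type and function names are illustrative assumptions.

```typescript
// Deriving the rectangular labeling frame from the press point and the lift
// point, both relative to the image's top-left vertex.
interface Point { x: number; y: number; }
interface Rect { left: number; top: number; width: number; height: number; }

function rectFromPoints(first: Point, second: Point): Rect {
  const width = Math.abs(first.x - second.x);   // |x1 - x2|
  const height = Math.abs(first.y - second.y);  // |y1 - y2|
  // Reference coordinate: the smaller abscissa and the smaller ordinate,
  // i.e. the top-left vertex of the rectangle.
  const left = Math.min(first.x, second.x);
  const top = Math.min(first.y, second.y);
  return { left, top, width, height };
}

// Check that all four vertices fall inside the image to be labeled.
function insideImage(rect: Rect, imageWidth: number, imageHeight: number): boolean {
  return rect.left >= 0 && rect.top >= 0 &&
         rect.left + rect.width <= imageWidth &&
         rect.top + rect.height <= imageHeight;
}

const frame = rectFromPoints({ x: 30, y: 20 }, { x: 110, y: 90 });
console.log(frame, insideImage(frame, 640, 480));
```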
The rectangular labeling frame is displayed on the image to be labeled based on its vertex coordinates, which lie within the range of the image to be labeled, and on its width and height; the content to be labeled of the image to be labeled lies within the rectangular labeling frame.
When the image to be labeled moves 6 unit lengths to the left in the horizontal direction, the labeling frame also moves 6 unit lengths to the left in the horizontal direction; the position of the labeling frame relative to the image to be labeled is then unchanged while its position relative to the canvas changes, so the content to be labeled selected by the labeling frame before and after the move is the same content.
After the rectangular labeling frame is drawn, the labeling parameters and labeling information of the rectangular labeling frame are converted into JSON format and transmitted to the image processing algorithm of the downstream task, and the view automatically switches to the next image to be labeled.
Based on the same inventive concept, an embodiment of the present application discloses an image labeling device. Fig. 8 shows a schematic diagram of the image labeling device, which, as shown in fig. 8, includes:
the display module is used for displaying the image to be annotated on the canvas;
the marking point drawing module is used for drawing a plurality of marking points on the image to be marked;
The marking frame generation module is used for displaying the marking frame on the image to be marked based on the marking points, wherein the marking frame is the content to be marked of the image to be marked;
and the adjusting module is used for adjusting the position and/or the size of the marking frame in response to the position change and/or the size change of the image to be marked, so that the content selected by the marking frame before and after the adjustment is the same content.
Wherein, the adjustment module includes:
the adjustment sub-module is used for responding to the target position after the position change of the image to be marked and moving the marking frame so that the marking frame follows the position change of the image to be marked;
and/or, in response to the size change of the image to be marked, determining a first size before the change and a second size after the change, and adjusting the size of the marking frame based on the first size and the second size.
Wherein, the device still includes:
a scaling module for scaling the image to be marked so that the width of the image to be marked is the same as the width of the canvas, or
And scaling the image to be annotated so that the height of the image to be annotated is the same as the height of the canvas.
Wherein, the device still includes:
and the marking point limiting module is used for moving the first marking point or the second marking point to a point at the edge of the image to be marked if the first marking point or the second marking point is positioned in the peripheral area of the canvas.
Wherein, the mark point limiting module further comprises:
the first moving submodule is used for moving the first marking point or the second marking point, if it is located in only one sub-area, to the closest point at the edge of the image to be marked in the horizontal direction or the vertical direction;
and the second moving sub-module is used for moving the first marking point or the second marking point to the vertex at the edge of the image to be marked, which is closest to the first marking point or the second marking point, if the first marking point or the second marking point is positioned in the overlapping area of the two sub-areas.
Wherein, the device still includes:
the polygon labeling frame drawing module is used for responding to a drawing end instruction, displaying the polygon labeling frame on the image to be labeled based on the polygon labeling points, wherein the polygon labeling frame is internally provided with the content to be labeled of the image to be labeled;
And the polygonal marking frame adjusting module is used for adjusting the position and/or the size of the polygonal marking frame in response to the position change and/or the size change of the image to be marked, so that the selected contents of the polygonal marking frame before and after adjustment are the same.
The marking point drawing module comprises:
the first marking point determining sub-module is used for determining the coordinates of the first marking point, wherein the coordinates of the first marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is pressed down; the position of the mouse when the mouse is pressed down is positioned in the image to be marked;
the second marking point determining submodule is used for determining the coordinates of the second marking point, wherein the coordinates of the second marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is lifted; the position of the mouse when the mouse is lifted is positioned in the image to be marked;
and the marking point drawing sub-module is used for drawing the first marking point and the second marking point on the image to be marked based on the coordinates of the first marking point and the coordinates of the second marking point.
Wherein, the mark point drawing module further comprises:
the rectangular size obtaining sub-module is used for obtaining the width and the height of the rectangular labeling frame based on the coordinates of the first labeling point and the coordinates of the second labeling point;
the rectangular vertex coordinate acquisition sub-module is used for acquiring the vertex coordinate of the rectangular annotation frame based on the coordinates of the first annotation point, the coordinates of the second annotation point and the width and the height of the rectangular annotation frame;
and the rectangle frame drawing submodule is used for drawing the rectangle marking frame based on the vertex coordinates of the rectangle marking frame if the vertex coordinates of the rectangle marking frame are positioned in the range of the vertex coordinates of the image to be marked.
Wherein, the mark point drawing module further comprises:
the size obtaining sub-module is used for obtaining the width and the height of the canvas, and the width and the height of the image to be marked;
the image-to-be-annotated width coordinate acquisition submodule is used for determining the width-direction coordinates of the four vertexes of the image to be annotated based on the canvas width and the width of the image to be annotated;
and the image height coordinate obtaining sub-module is used for determining the height-direction coordinates of the four vertexes of the image to be marked based on the canvas height and the height of the image to be marked.
Wherein, the scaling module further comprises:
the size obtaining sub-module is used for obtaining the width and the height of the canvas and the width and the height of the image to be marked;
the candidate scaling ratio obtaining sub-module is used for obtaining a first scaling ratio based on the canvas width and the image width to be annotated; acquiring a second scaling ratio based on the canvas height and the image height to be annotated;
a scaling ratio determination submodule, configured to obtain a minimum value between the first scaling ratio and the second scaling ratio as an image scaling ratio;
and the scaling sub-module is used for scaling the image to be annotated on the canvas based on the image scaling ratio.
Wherein, the polygon labeling frame drawing module includes:
the polygon marking point coordinate acquisition sub-module is used for determining the coordinates of the n polygon marking points, wherein the coordinates of the n polygon marking points are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse clicks the left button;
and the polygon marking point drawing sub-module is used for drawing the polygon marking points based on the coordinates of the n polygon marking points.
Wherein, the polygon labeling frame drawing module further includes:
and the drawing end instruction generation sub-module is used for generating a drawing end instruction, wherein the drawing end instruction is a left mouse double-click operation executed within the target time period or the input of a preset key on a keyboard.
Based on the same inventive concept, an embodiment of the present application discloses an electronic device. Fig. 9 shows a schematic diagram of the electronic device disclosed in the embodiment of the present application. As shown in fig. 9, the electronic device 100 includes a memory 110 and a processor 120, where the memory 110 is communicatively connected to the processor 120 through a bus, and the memory 110 stores a computer program that can be run on the processor 120 to implement the steps of the image labeling method disclosed in the embodiments of the present application.
Based on the same inventive concept, embodiments of the present application disclose a computer readable storage medium having stored thereon a computer program/instructions which, when executed by a processor, implement steps in an image labeling method disclosed in embodiments of the present application.
In this specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between the embodiments reference may be made to one another.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, electronic devices, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The image labeling method, the device, the electronic equipment and the storage medium provided by the invention are described in detail, and specific examples are applied to illustrate the principle and the implementation of the invention, and the description of the above examples is only used for helping to understand the method and the core idea of the invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (15)

1. An image labeling method, comprising:
displaying the image to be annotated on the canvas;
drawing a plurality of marking points on the image to be marked;
displaying the annotation frame on the image to be annotated based on the plurality of annotation points, wherein the annotation frame is the content to be annotated of the image to be annotated;
and adjusting the position and/or the size of the marking frame in response to the position change and/or the size change of the image to be marked, so that the selected contents of the marking frame before and after adjustment are the same.
2. The image labeling method according to claim 1, wherein adjusting the position and/or size of the marking frame in response to the change in position and/or size of the image to be marked comprises:
Responding to the target position of the image to be marked after the position change, and moving the marking frame so that the marking frame follows the position change of the image to be marked;
and/or, in response to the size change of the image to be marked, determining a first size before the change and a second size after the change, and adjusting the size of the marking frame based on the first size and the second size.
3. The image labeling method according to claim 1, wherein after displaying the image to be annotated on the canvas, the method further comprises, prior to drawing a plurality of annotation points on the image to be annotated:
scaling the image to be marked to ensure that the width of the image to be marked is the same as the width of the canvas, or
And scaling the image to be annotated so that the height of the image to be annotated is the same as the height of the canvas.
4. The method according to claim 1, wherein the labeling frame is a rectangular labeling frame, the labeling points include a first labeling point and a second labeling point, and if the first labeling point or the second labeling point is located in a peripheral area of the canvas, the first labeling point or the second labeling point is moved to a point at an edge of the image to be labeled.
5. The method of claim 4, wherein the peripheral area of the canvas comprises a first sub-area, a second sub-area, a third sub-area and a fourth sub-area, and wherein moving the first annotation point or the second annotation point to a point at the edge of the image to be annotated comprises:
if the first marking point or the second marking point is only located in one sub-area, the first marking point or the second marking point is moved to a point at the edge of the image to be marked, which is closest to the edge in the horizontal direction or the vertical direction;
and if the first marking point or the second marking point is positioned in the overlapped area of the two sub-areas, moving the first marking point or the second marking point to the vertex at the edge of the image to be marked, which is closest to the first marking point or the second marking point.
6. The image labeling method according to claim 1, wherein the labeling frame is a polygonal labeling frame, the labeling points are n polygonal labeling points, n is greater than or equal to 3, and after the plurality of labeling points are drawn on the image to be labeled, the method further comprises:
responding to a drawing ending instruction, and displaying the polygonal marking frame on the image to be marked based on the polygonal marking point, wherein the polygonal marking frame is the content to be marked of the image to be marked;
And adjusting the position and/or the size of the polygonal marking frame in response to the position change and/or the size change of the image to be marked, so that the contents selected by the polygonal marking frame before and after adjustment are the same.
7. The method according to claim 1, wherein the labeling frame is a rectangular labeling frame, the labeling points include a first labeling point and a second labeling point, and drawing a plurality of labeling points on the image to be labeled includes:
determining the coordinates of the first marking point, wherein the coordinates of the first marking point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be marked when the mouse is pressed down;
determining the coordinates of the second labeling point, wherein the coordinates of the second labeling point are relative coordinates of the mouse position relative to the vertex coordinates of the image to be labeled when the mouse is lifted;
the position of the mouse when the mouse is pressed down and the position of the mouse when the mouse is lifted up are positioned in the image to be marked;
and drawing the first annotation point and the second annotation point on the image to be annotated based on the coordinates of the first annotation point and the coordinates of the second annotation point.
8. The method of claim 7, wherein after determining the coordinates of the second annotation point, the method further comprises:
acquiring the width and the height of the rectangular annotation frame based on the coordinates of the first annotation point and the coordinates of the second annotation point;
acquiring vertex coordinates of the rectangular annotation frame based on the coordinates of the first annotation point, the coordinates of the second annotation point and the width and the height of the rectangular annotation frame;
and if the vertex coordinates of the rectangular annotation frame are positioned in the range of the vertex coordinates of the image to be annotated, drawing the rectangular annotation frame based on the vertex coordinates of the rectangular annotation frame.
9. The method for labeling an image according to claim 7, wherein the image to be labeled includes a first vertex, a second vertex, a third vertex and a fourth vertex, and the vertex coordinates of the image to be labeled are obtained as follows:
acquiring the width and the height of a canvas, and the width and the height of an image to be marked;
determining coordinates of four vertexes of the image to be annotated in the width direction based on the canvas width and the image to be annotated in the width;
and determining the coordinates of the four vertexes of the image to be annotated in the height direction based on the canvas height and the height of the image to be annotated.
10. The image labeling method according to claim 3, wherein scaling the image to be annotated comprises:
acquiring the width and the height of a canvas, and the width and the height of an image to be marked;
acquiring a first scaling ratio based on the canvas width and the image width to be annotated; acquiring a second scaling ratio based on the canvas height and the image height to be annotated;
acquiring a minimum value between the first scaling ratio and the second scaling ratio as an image scaling ratio;
and scaling the image to be annotated on the canvas based on the image scaling ratio.
11. The image labeling method according to claim 6, wherein the polygon labeling points are obtained as follows:
determining coordinates of the n polygonal marking points, wherein the coordinates of the n polygonal marking points are relative coordinates of a mouse position relative to vertex coordinates of the image to be marked when the mouse clicks a left key;
and drawing the polygon marking points based on the coordinates of the n polygon marking points.
12. The image labeling method according to claim 6, wherein the drawing end instruction is a left mouse click operation performed within the target time period, or the input of a preset key on a keyboard.
13. An image labeling device, comprising:
the display module is used for displaying the image to be annotated on the canvas;
the marking point drawing module is used for drawing a plurality of marking points on the image to be marked;
the marking frame generation module is used for displaying the marking frame on the image to be marked based on the marking points, wherein the marking frame is the content to be marked of the image to be marked;
and the adjusting module is used for adjusting the position and/or the size of the marking frame in response to the position change and/or the size change of the image to be marked, so that the content selected by the marking frame before and after the adjustment is the same content.
14. An electronic device, comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to perform the steps of the image labeling method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon a computer program/instructions which, when executed by a processor, perform the steps of the image labeling method according to any one of claims 1 to 12.
CN202310086604.1A 2023-01-18 2023-01-18 Image labeling method and device, electronic equipment and storage medium Pending CN116052170A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310086604.1A CN116052170A (en) 2023-01-18 2023-01-18 Image labeling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310086604.1A CN116052170A (en) 2023-01-18 2023-01-18 Image labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116052170A true CN116052170A (en) 2023-05-02

Family

ID=86133041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310086604.1A Pending CN116052170A (en) 2023-01-18 2023-01-18 Image labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116052170A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115570A (en) * 2023-10-25 2023-11-24 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system
CN117115570B (en) * 2023-10-25 2023-12-29 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination