CN114298915A - Image object processing method and device, storage medium and electronic device - Google Patents
- Publication number: CN114298915A
- Application number: CN202111669870.4A
- Authority
- CN
- China
- Prior art keywords
- vertex
- target
- edge
- display interface
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The application discloses an image target processing method and apparatus, a storage medium and an electronic apparatus, wherein the method includes: displaying a first image to be annotated on a target display interface, wherein the first image to be annotated includes a target object to be annotated; drawing a first edge of a rotating labeling frame of the target object on the target display interface in response to a detected first drawing operation, wherein the angle of the first edge is a target angle, and the first drawing operation is used for indicating positions of a first vertex and a second vertex corresponding to the first edge; determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge; and drawing the rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for processing an image object, a storage medium, and an electronic apparatus.
Background
In model training and other scenarios that require image annotation, images with annotated objects need to be provided. Currently, in the process of labeling an object, a rotating labeling box (i.e., a rotated rectangle) can be added to the object as follows: a rectangle is first drawn from its upper-left vertex and lower-right vertex, and the rectangle is then rotated.
However, with the above labeling manner, the rectangle drawn first may not match the object, in which case the labeling frame needs to be redrawn or adjusted; in addition, while rotating the rectangle, the angle needs to be adjusted many times to ensure that the rotated rectangle matches the object. Therefore, the image object labeling manner in the related art suffers from a cumbersome labeling process caused by the need to adjust the rectangle multiple times.
Disclosure of Invention
The embodiments of the application provide an image target processing method and apparatus, a storage medium and an electronic apparatus, so as to at least solve the problem in the related art that the object labeling process is cumbersome because the rectangle needs to be adjusted multiple times.
According to an aspect of the embodiments of the present application, there is provided a method for processing an image object, including: displaying a first image to be annotated on a target display interface, wherein the first image to be annotated comprises a target object to be annotated; drawing a first edge of a rotating labeling frame of the target object on the target display interface in response to a detected first drawing operation, wherein the angle of the first edge is a target angle, and the first drawing operation is used for indicating positions of a first vertex and a second vertex corresponding to the first edge; determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge; and drawing a rotary labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotary labeling frame.
In an exemplary embodiment, drawing the first edge of the rotating labeling frame of the target object on the target display interface in response to the detected first drawing operation includes: drawing the first vertex on the target display interface in response to a detected first sub-drawing operation, wherein the first sub-drawing operation is used for indicating the position of the first vertex; and drawing the first edge at the target angle on the target display interface in response to a detected second sub-drawing operation, wherein the second sub-drawing operation is used for indicating the position of the second vertex.
In an exemplary embodiment, before drawing the first edge at the target angle on the target display interface, the method further includes: displaying a first moving edge starting from the first vertex on the target display interface, wherein the end point position of the first moving edge moves along with the movement of the input position on the target display interface.
In an exemplary embodiment, in the process of displaying the first moving edge starting from the first vertex on the target display interface, the method further includes: displaying, on the target display interface, a first auxiliary edge which takes the end point of the first moving edge as its starting point and whose direction is perpendicular to the first moving edge.
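The auxiliary edge described in this embodiment can be computed directly from the moving edge's endpoints. A minimal sketch with a hypothetical helper name, not taken from the patent's implementation:

```python
import math

def auxiliary_edge(start, end, length):
    """Return a guide segment that begins at the moving edge's end point and
    is perpendicular to the moving edge (a drawing aid, not part of the box).
    Assumes the moving edge has nonzero length."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    norm = math.hypot(dx, dy)
    # unit vector perpendicular to the moving edge
    ux, uy = -dy / norm, dx / norm
    return end, (end[0] + length * ux, end[1] + length * uy)
```

For a horizontal moving edge from (0, 0) to (3, 0), the guide segment points straight up from (3, 0).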
In an exemplary embodiment, after the first edge of the rotating labeling frame of the target object is drawn on the target display interface, the method further includes: displaying, on the target display interface, a second moving edge which takes the second vertex as its starting point and whose direction is perpendicular to the first edge, wherein the end point position of the second moving edge moves along with the input position on the target display interface.
In an exemplary embodiment, in the process of displaying the second moving edge which takes the second vertex as a starting point and whose direction is perpendicular to the first edge on the target display interface, the method further includes: displaying, on the target display interface, a second auxiliary edge which takes the end point of the second moving edge as its starting point and whose direction is perpendicular to the second moving edge.
In an exemplary embodiment, drawing the rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain the target rotating labeling frame includes: drawing a second edge of the rotating labeling frame of the target object between the second vertex and the third vertex on the target display interface; determining a fourth vertex of the rotating labeling frame of the target object according to the first vertex, the second vertex and the third vertex; and drawing a third edge of the rotating labeling frame of the target object between the third vertex and the fourth vertex and a fourth edge of the rotating labeling frame of the target object between the fourth vertex and the first vertex to obtain the target rotating labeling frame.
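The edge-by-edge construction described in this embodiment can be sketched as follows; the helper name is hypothetical, and `v3` is assumed to already lie on the perpendicular through `v2`:

```python
def build_rotated_box(v1, v2, v3):
    """Assemble the four edges of the rotating labeling frame from the first
    edge (v1, v2) and the third vertex v3."""
    # fourth vertex follows from the rectangle property v4 = v1 + v3 - v2
    v4 = (v1[0] + v3[0] - v2[0], v1[1] + v3[1] - v2[1])
    # first edge, second edge, third edge and fourth edge, in drawing order
    return [(v1, v2), (v2, v3), (v3, v4), (v4, v1)]
```

With v1 = (0, 0), v2 = (4, 0) and v3 = (4, 3), the fourth vertex is (0, 3) and the four edges close into an axis-aligned 4 x 3 rectangle; an angled first edge yields a rotated rectangle in the same way.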
In an exemplary embodiment, after the drawing the rotating labeling box of the target object on the target display interface according to the first edge and the third vertex to obtain the target rotating labeling box, the method further includes: displaying a second image to be annotated and the target rotary annotation frame on the target display interface, wherein the second image to be annotated comprises the target object to be annotated, and the second image to be annotated is an image adjacent to the first image to be annotated in the video to be annotated; and in response to the detected association operation, determining the target rotary labeling frame as the rotary labeling frame of the target object in the second image to be labeled.
According to another aspect of the embodiments of the present application, there is also provided an apparatus for processing an image object, including: the display unit is used for displaying a first image to be annotated on a target display interface, wherein the first image to be annotated comprises a target object to be annotated; a first drawing unit, configured to draw a first edge of a rotating labeling frame of the target object on the target display interface in response to a detected first drawing operation, where an angle of the first edge is a target angle, and the first drawing operation is used to indicate positions of a first vertex and a second vertex corresponding to the first edge; a first determination unit, configured to determine a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, where the second drawing operation is used to indicate a position of the third vertex in a target direction that is perpendicular to the first edge and that uses the second vertex as a starting point; and the second drawing unit is used for drawing the rotary labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotary labeling frame.
In one exemplary embodiment, the first drawing unit includes: a first drawing module, configured to draw the first vertex on the target display interface in response to a detected first sub-drawing operation, where the first sub-drawing operation is used to indicate the position of the first vertex; and a second drawing module, configured to draw the first edge at the target angle on the target display interface in response to a detected second sub-drawing operation, where the second sub-drawing operation is used to indicate the position of the second vertex.
In one exemplary embodiment, the apparatus further comprises: and a second display unit, configured to display a first moving edge with the first vertex as a starting point on the target display interface before drawing the first edge according to the target angle on the target display interface, where an end point position of the first moving edge moves along with movement of an input position on the target display interface.
In one exemplary embodiment, the apparatus further comprises: and the third display unit is used for displaying a first auxiliary edge which takes the end point of the first moving edge as a starting point and is vertical to the first moving edge in direction on the target display interface in the process of displaying the first moving edge which takes the first vertex as the starting point on the target display interface.
In one exemplary embodiment, the apparatus further comprises: and a fourth display unit, configured to display a second moving edge having a direction perpendicular to the first edge and taking the second vertex as a starting point on the target display interface after drawing the first edge of the rotation labeling frame of the target object on the target display interface, where an end point position of the second moving edge moves along with an input position on the target display interface.
In one exemplary embodiment, the apparatus further comprises: and a fifth display unit, configured to, in a process of displaying a second moving edge with the second vertex as a starting point and a direction perpendicular to the first edge on the target display interface, display a second auxiliary edge with an end point of the second moving edge as a starting point and a direction perpendicular to the second moving edge on the target display interface.
In an exemplary embodiment, the second drawing unit includes: the third drawing module is used for drawing a second edge of the rotary labeling frame of the target object between the second vertex and the third vertex on the target display interface; a first determining module, configured to determine a fourth vertex of the rotating labeling box of the target object according to the first vertex, the second vertex, and the third vertex; and the fourth drawing module is used for drawing a third edge of the rotating marking frame of the target object between the third vertex and the fourth vertex and drawing a fourth edge of the rotating marking frame of the target object between the fourth vertex and the first vertex to obtain the target rotating marking frame.
In one exemplary embodiment, the apparatus further comprises: a sixth display unit, configured to draw a rotating annotation frame of the target object on the target display interface according to the first edge and the third vertex, and after obtaining the target rotating annotation frame, display a second image to be annotated and the target rotating annotation frame on the target display interface, where the second image to be annotated includes the target object to be annotated, and the second image to be annotated is an image that is adjacent to the first image to be annotated in a video to be annotated; and the second determining unit is used for determining the target rotary labeling frame as the rotary labeling frame of the target object in the second image to be labeled in response to the detected association operation.
According to still another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the processing method of the image object when running.
According to another aspect of the embodiments of the present application, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the processing method of the image object through the computer program.
In the embodiments of the application, a rotated rectangle is generated by first drawing one edge of the rotating labeling frame and then drawing a third vertex on the perpendicular to that edge. A first image to be annotated is displayed on a target display interface, wherein the first image to be annotated includes a target object to be annotated; in response to a detected first drawing operation, a first edge of the rotating labeling frame of the target object is drawn on the target display interface, wherein the angle of the first edge is a target angle and the first drawing operation is used for indicating the positions of a first vertex and a second vertex corresponding to the first edge; in response to a detected second drawing operation, a third vertex of the rotating labeling frame of the target object is determined, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge; and the rotating labeling frame of the target object is drawn on the target display interface according to the first edge and the third vertex to obtain the target rotating labeling frame. Because one edge at the target angle is drawn first, the rotation angle of the rotating labeling frame can be set directly according to the shape of the target object in that step, and the third vertex drawn on the perpendicular to that edge then generates the rotating labeling frame without further rotation adjustments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; for those of ordinary skill in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment for an alternative method of processing an image object according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating an alternative method of processing an image object according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an alternative software architecture according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative method of processing an image object according to an embodiment of the application;
FIG. 5 is a schematic flow chart diagram illustrating an alternative method for processing an image target according to an embodiment of the present application;
FIG. 6 is a block diagram of an alternative image object processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, there is provided a method for processing an image object. Optionally, in this embodiment, the image target processing method may be applied to a hardware environment formed by the image annotation device 102 and the server 104 shown in fig. 1. As shown in fig. 1, the server 104 is connected to the image annotation device 102 via a network and may be configured to provide services (e.g., application services) for the terminal or for a client installed on the terminal. A database may be configured on the server or independently of the server to provide data storage services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity), Bluetooth. The image annotation device 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, etc.
The image target processing method according to the embodiment of the present application may be executed by the server 104, or executed by the image annotation device 102, or executed by both the server 104 and the image annotation device 102. The image annotation device 102 may also be configured to execute the processing method of the image target according to the embodiment of the present application by a client installed thereon.
Taking the image annotation device 102 as an example to execute the processing method of the image object in the present embodiment, fig. 2 is a schematic flowchart of an optional processing method of the image object according to the present embodiment, and as shown in fig. 2, the flowchart of the method may include the following steps:
step S202, displaying a first image to be annotated on a target display interface, wherein the first image to be annotated comprises a target object to be annotated.
The image target processing method in this embodiment may be applied to a scenario in which a labeling target in an image to be annotated is annotated on an image annotation device, the target object being annotated by executing corresponding operations on that device. The image annotation device may run a target application for image annotation, and a first image to be annotated may be displayed on a target display interface of the target application, where the first image to be annotated may include at least one target object to be annotated (i.e., a labeling target). The target object may be a moving object, such as a vehicle or a pedestrian. In addition, the target display interface may display a drawing tool for assisting in drawing the rotating frame.
Step S204, in response to the detected first drawing operation, drawing a first edge of the rotating labeling frame of the target object on the target display interface, wherein the angle of the first edge is a target angle, and the first drawing operation is used for indicating positions of a first vertex and a second vertex corresponding to the first edge.
Labeling of the labeling target may be performed by drawing a rotated rectangle, i.e., a rectangular frame that is allowed to have a certain rotation angle. In the related art, when a rotated rectangle is drawn for a labeling target, a complete rectangle is usually drawn from the upper-left vertex and the lower-right vertex, and the drawn rectangle is then rotated according to the shape (occupied area) of the labeling target; after being rotated by a certain angle, the rectangle matches the labeling target and serves as its rotating labeling frame. With this drawing manner, the operation process is cumbersome, errors easily occur during adaptation (the rotation angle easily ends up too large or too small, so that the rectangle does not match the labeling target), and the efficiency is low.
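For contrast, the related-art "draw then rotate" step amounts to rotating the rectangle's corners about its center; a rough illustrative sketch, not from the patent's implementation:

```python
import math

def rotate_rect(corners, angle_deg):
    """Rotate a rectangle's four corner points about the rectangle's center,
    mimicking the related-art step of rotating an axis-aligned rectangle."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    # standard 2D rotation of each corner about the center (cx, cy)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in corners]
```

Each call rotates by one trial angle, so matching the target typically means calling this repeatedly with adjusted angles, which is exactly the repeated-adjustment problem the application addresses.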
In order to solve the above problem, in this embodiment, an arbitrary angle side may be drawn first, and then a third vertex may be drawn on a vertical line of the side to generate a rotation labeling box (which is a rotation rectangle). The user can view the first image to be annotated and determine the annotation target, i.e. the target object, contained therein. Optionally, an object range identifier of at least one candidate object identified from the first image to be annotated may be displayed on the first image to be annotated. That is, the image annotation device or other devices may first pre-process the first image to be annotated, and identify a plurality of candidate objects and an object range of each candidate object from the first image to be annotated, which may include the target object or other objects that are erroneously identified. According to the object range of each candidate object, the image annotation equipment can display the object range identification of each candidate object on the target display interface.
The user can execute a first drawing operation on the target display interface, wherein the first drawing operation is used for triggering and drawing a first edge of a rotating marking frame of the target object. The first drawing operation may be an operation performed by a user directly on an image to be annotated displayed on the target display page, or an operation performed on a drawing tool that assists in drawing the rotating frame on the display screen. By performing the first drawing operation, the user may indicate the position of the first edge of the rotating label box of the target object on the target display page, for example, may indicate the positions of the first vertex and the second vertex, and the first edge may be drawn according to the positions of the first vertex and the second vertex.
For example, the user may perform the first drawing operation by clicking, double-clicking, sliding, or similar gestures on the image to be annotated displayed on the target display page, thereby indicating the positions of the first vertex and the second vertex. As another example, the user may indicate these positions through input/output components such as a mouse and a keyboard connected to the image annotation device, which is not limited in this embodiment.
The image annotation device may detect the first drawing operation and, in response, draw an edge of the labeling frame on the target display interface, namely the line segment connecting the first vertex and the second vertex. The thickness, color, etc. of the first edge may take default values and may be adjusted according to the user's configuration; the first edge may be adjustable or fixed. The angle at which the first edge is drawn is the target angle, which may be the angle the first edge makes with the horizontal.
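The target angle of the first edge can be derived from the two indicated vertex positions, for example with a quadrant-aware arctangent; a sketch assuming simple 2D coordinates:

```python
import math

def edge_angle(v1, v2):
    """Angle in degrees between the first edge (v1 -> v2) and the horizontal."""
    return math.degrees(math.atan2(v2[1] - v1[1], v2[0] - v1[0]))
```

A diagonal edge from (0, 0) to (1, 1) yields 45 degrees; a horizontal edge yields 0.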
In step S206, a third vertex of the rotating label box of the target object is determined in response to a detected second drawing operation, where the second drawing operation is used to indicate a position of the third vertex in a target direction that takes the second vertex as a starting point and is perpendicular to the first edge.
After drawing the first edge, the user may perform a second drawing operation on the image annotation device. The second drawing operation may be an operation indicating the position of the third vertex. While the second drawing operation is being performed, a reference edge on which the third vertex is required to lie, starting from the second vertex and perpendicular to the first edge, may be displayed on the target display interface. The user may select a suitable position on the reference edge, and the selected position is taken as the third vertex.
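Constraining the third vertex to the reference edge amounts to projecting the current input position onto the line through the second vertex perpendicular to the first edge; a minimal sketch with a hypothetical helper name:

```python
def third_vertex_on_perpendicular(v1, v2, cursor):
    """Project the input position onto the line through v2 that is
    perpendicular to the first edge v1 -> v2; the result is the third vertex.
    Assumes v1 != v2."""
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]  # direction of the first edge
    px, py = -ey, ex                        # perpendicular direction
    t = ((cursor[0] - v2[0]) * px + (cursor[1] - v2[1]) * py) / (px * px + py * py)
    return (v2[0] + t * px, v2[1] + t * py)
```

For a horizontal first edge from (0, 0) to (4, 0), a cursor at (5, 3) snaps onto the vertical reference line at (4, 3).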
The image annotation device may detect the second drawing operation, acquire the indicated position, and determine the point at that position as the third vertex of the rotating labeling frame to be drawn, in a manner the same as or similar to that in the foregoing embodiments. Here, "rotating" means that the labeling frame is allowed to form a certain angle with the horizontal line or another reference line.
Step S208, drawing a rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame.
According to the properties of a rectangle, once the third vertex is drawn, that is, once one edge and one additional vertex (equivalently, three vertices) of the rectangle are determined, the rectangle is uniquely determined. The rectangle may be determined by finding the fourth vertex and then constructing the rectangle from the four vertices, or by determining the four edges, in which case the region they enclose is the required rectangle. Here, the fourth vertex may be determined as follows: taking the midpoint of the line connecting the first vertex and the third vertex as the center of symmetry, the point centrally symmetric to the second vertex is the fourth vertex.
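The center-symmetry rule reduces to simple vector arithmetic: since the diagonals of a rectangle share a midpoint, the fourth vertex equals v1 + v3 - v2. A minimal sketch with a hypothetical helper name:

```python
def fourth_vertex(v1, v2, v3):
    """Fourth corner of the rectangle: the point centrally symmetric to v2
    about the midpoint of the diagonal from v1 to v3."""
    return (v1[0] + v3[0] - v2[0], v1[1] + v3[1] - v2[1])
```

With v1 = (0, 0), v2 = (4, 0) and v3 = (4, 3), the fourth vertex is (0, 3), completing the rectangle.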
Once the rectangle is determined, it is the rotating labeling frame of the target object, that is, the target rotating labeling frame, and the image annotation device can draw it on the target display interface. After the target rotating labeling frame is drawn, the user may check whether it properly labels the target object, for example, whether the target object fits inside the frame. If so, the labeling of the target object in the first image to be annotated is complete; otherwise, the rotating labeling frame may be redrawn in the same or a similar manner as in the foregoing embodiments, or the target rotating labeling frame may be adjusted through operations such as translation until it properly labels the target object.
Optionally, after the target rotating labeling frame is obtained, the user may annotate other labeling targets in the first image to be annotated, or jump from the first image to be annotated to the next image to be annotated and annotate the labeling targets in that image.
Through the above steps S202 to S208, a first image to be annotated is displayed on the target display interface, where the first image to be annotated includes a target object to be annotated; in response to a detected first drawing operation, a first edge of the rotating labeling frame of the target object is drawn on the target display interface, where the angle of the first edge is a target angle and the first drawing operation is used to indicate the positions of a first vertex and a second vertex corresponding to the first edge; in response to a detected second drawing operation, a third vertex of the rotating labeling frame of the target object is determined, where the second drawing operation is used to indicate the position of the third vertex in a target direction that takes the second vertex as a starting point and is perpendicular to the first edge; and the rotating labeling frame of the target object is drawn on the target display interface according to the first edge and the third vertex to obtain the target rotating labeling frame. This solves the problem in the related art that the object labeling process is cumbersome because the rectangle needs to be adjusted multiple times, simplifies the object labeling process, and improves labeling efficiency.
In one exemplary embodiment, in response to the detected first drawing operation, drawing a first edge of a rotating annotation box of the target object on the target display interface, includes:
s11, drawing a first vertex on the target display interface in response to the detected first sub-drawing operation, wherein the first sub-drawing operation is used for indicating the position of the first vertex;
and S12, in response to the detected second sub-drawing operation, drawing the first edge according to the target angle on the target display interface, wherein the second sub-drawing operation is used for indicating the position of the second vertex.
In this embodiment, the first drawing operation may include a first sub-drawing operation and a second sub-drawing operation, for drawing the first vertex and the second vertex, respectively. The image annotation device may draw a first edge on the target display interface in response to detecting the first sub-drawing operation and the second sub-drawing operation, where the drawn first edge may have a target angle.
Optionally, when the user performs the first sub-drawing operation and the second sub-drawing operation, the image annotation device may issue a prompt (e.g., a voice prompt, a pop-up prompt, etc.) asking the user to confirm whether the currently drawn points should be taken as the first vertex and the second vertex.
According to the embodiment, the first edge is drawn through different drawing operations, so that the flexibility of drawing the first edge can be improved, and the efficiency of labeling the object can be improved.
In an exemplary embodiment, before drawing the first edge according to the target angle on the target display interface, the method further includes:
and S21, displaying a first moving edge taking the first vertex as a starting point on the target display interface, wherein the end point position of the first moving edge moves along with the movement of the input position on the target display interface.
In this embodiment, due to uncertainty of the target object (for example, differences in its shape), during the process of drawing the first edge the distance between the first vertex and the second vertex may be set unreasonably, or the angle of the drawn first edge may be unsuitable for completely enclosing the target object, so that the rotation rectangle cannot be drawn. Therefore, when drawing the first edge, the first vertex can first be drawn on the target display interface in response to the detected first sub-drawing operation, and the drawing position of the second vertex is then determined by a moving edge with the first vertex as its starting point.
After the first vertex is drawn, a first moving edge with the first vertex as its starting point can be displayed on the target display interface. The end point of the first moving edge moves along with the input position on the target display interface, and a suitable angle (i.e., the target angle) can be selected by adjusting the end point position: for example, an angle at which the first edge does not pass through the target object and keeps a reasonable distance from it (e.g., the first edge does not touch the target object). After finding a suitable position, the user may perform the drawing operation of the second vertex; for the specific operation process, refer to the foregoing embodiment, which is not repeated here.
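The target angle tracked by the first moving edge follows directly from the first vertex and the current input position. A minimal Python sketch of this relationship (the function name, coordinate tuples, and degree convention are illustrative assumptions, not taken from the embodiment):

```python
import math

def moving_edge(first_vertex, pointer):
    # The first moving edge tracks the current input position; its angle
    # is the candidate target angle of the first edge, and its length is
    # the candidate distance between the first and second vertices.
    dx = pointer[0] - first_vertex[0]
    dy = pointer[1] - first_vertex[1]
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)
```

When the user confirms the second vertex, the angle and length of the moving edge at that moment become the target angle and length of the first edge.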
According to the embodiment, after a vertex is determined, a moving edge of which the end point position changes along with the input position is displayed by taking the vertex as a starting point, so that a proper drawing angle can be conveniently found, and the efficiency of object labeling is improved.
In an exemplary embodiment, in the process of displaying the first moving edge starting from the first vertex on the target display interface, the method further includes:
and S31, displaying, on the target display interface, a first auxiliary edge that takes the end point of the first moving edge as its starting point and whose direction is perpendicular to the first moving edge.
In this embodiment, in order to improve the efficiency of drawing the rotation rectangle, while the first moving edge is displayed, a first auxiliary edge whose direction is perpendicular to the first moving edge may also be displayed, taking the end point of the first moving edge (for example, the input position on the target display page) as its starting point. The first auxiliary edge may be rendered as a ray or as a straight line, which is not limited in this embodiment.
Since the labeling target is displayed in two dimensions, if the rectangle (which may be a semi-enclosed structure) that the drawn points can form is not indicated visually, a rotation rectangle drawn only from the viewed information may match the target object poorly. By adding the auxiliary line, the position of the rotation rectangle to be drawn can be shown intuitively, which improves the efficiency of drawing the rotation rectangle.
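Geometrically, the auxiliary edge is just the unit perpendicular of the moving edge, anchored at the moving edge's end point. A hedged Python sketch (the function name and the choice of returning an anchor plus a unit direction are assumptions; rendering it as a ray or a full line is a UI decision, as noted above):

```python
import math

def auxiliary_edge(move_start, move_end):
    # The auxiliary edge starts at the moving edge's end point and runs
    # perpendicular to the moving edge. Returns (anchor, unit direction);
    # drawing code can extend the direction as a ray or a straight line.
    dx, dy = move_end[0] - move_start[0], move_end[1] - move_start[1]
    length = math.hypot(dx, dy)
    direction = (-dy / length, dx / length)  # unit perpendicular of the moving edge
    return move_end, direction
```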
According to the embodiment, after the moving edge is displayed, the auxiliary edge corresponding to the moving edge is displayed to assist in object labeling, so that the efficiency of object labeling is improved.
In an exemplary embodiment, after drawing the first edge of the rotating label box of the target object on the target display interface, the method further includes:
and S41, displaying, on the target display interface, a second moving edge that takes the second vertex as its starting point and whose direction is perpendicular to the first edge, wherein the end point of the second moving edge moves along with the input position on the target display interface.
In this embodiment, after the second vertex is determined and the first edge is drawn, the user could in principle directly specify the position of the third vertex to be drawn. To ensure the accuracy of the object labeling, however, a second moving edge may be displayed after the first edge is drawn: an edge that takes the second vertex as its starting point and is perpendicular to the first edge. The end point of the second moving edge moves along with the input position on the target display interface, so that by adjusting the input position the length of the second moving edge can be adjusted to assist in determining the labeling of the target object.
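Because the second moving edge must stay perpendicular to the first edge, the raw input position can be projected onto the perpendicular through the second vertex to obtain the candidate third vertex. A Python sketch under that reading (vertex names A, B and the function name are illustrative):

```python
import math

def project_third_vertex(a, b, pointer):
    # Constrain the input position to the line through the second vertex B
    # that is perpendicular to edge AB; the projected point is the
    # candidate third vertex C of the rotation rectangle.
    abx, aby = b[0] - a[0], b[1] - a[1]
    length = math.hypot(abx, aby)
    nx, ny = -aby / length, abx / length          # unit perpendicular of AB
    t = (pointer[0] - b[0]) * nx + (pointer[1] - b[1]) * ny  # signed distance
    return (b[0] + t * nx, b[1] + t * ny)
```

With this constraint, moving the pointer anywhere on the interface only changes the length (and side) of the second moving edge, never its direction.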
According to the embodiment, after the first edge is drawn, one moving edge is displayed in the direction which is perpendicular to the first edge by taking one vertex of the first edge as a starting point, so that the drawing of the rotating rectangular frame can be assisted, and the efficiency of object labeling is improved.
In an exemplary embodiment, in the process of displaying, on the target display interface, a second moving edge that takes the second vertex as a starting point and whose direction is perpendicular to the first edge, the method further includes:
and S51, displaying, on the target display interface, a second auxiliary edge that takes the end point of the second moving edge as its starting point and whose direction is perpendicular to the second moving edge.
In this embodiment, in order to improve the efficiency of drawing the rotation rectangle, while the second moving edge is displayed, a second auxiliary edge whose direction is perpendicular to the second moving edge may also be displayed, taking the end point of the second moving edge (for example, the input position on the target display page) as its starting point. The second auxiliary edge may be rendered as a ray or as a straight line, which is not limited in this embodiment.
Since the labeling target is displayed in two dimensions, if the rectangle (which may be a semi-enclosed structure) that the drawn points can form is not indicated visually, a rotation rectangle drawn only from the viewed information may match the target object poorly. By adding the auxiliary line, the position of the rotation rectangle to be drawn can be shown intuitively, which improves the efficiency of drawing the rotation rectangle.
According to the embodiment, after the moving edge is displayed, the auxiliary edge corresponding to the moving edge is displayed to assist in object labeling, so that the efficiency of object labeling is improved.
In an exemplary embodiment, drawing a rotating label box of a target object on a target display interface according to a first edge and a third vertex to obtain a target rotating label box includes:
s61, drawing a second edge of the rotary labeling frame of the target object between the second vertex and the third vertex on the target display interface;
s62, determining a fourth vertex of the rotary labeling frame of the target object according to the first vertex, the second vertex and the third vertex;
and S63, drawing the third side of the rotating labeling frame of the target object between the third vertex and the fourth vertex and drawing the fourth side of the rotating labeling frame of the target object between the fourth vertex and the first vertex to obtain the target rotating labeling frame.
In this embodiment, after the first edge and the third vertex are determined, the unique rotated rectangle can be determined directly. For example, a second edge of the rotated rectangle may be drawn between the second vertex and the third vertex. Then, the fourth vertex of the rotation labeling frame of the target object is determined according to the first vertex, the second vertex, and the third vertex: it can be obtained using a diagonal of the rectangle as in the foregoing embodiment, or by drawing perpendiculars from the first vertex and the third vertex (the vertices adjacent to the fourth vertex) in directions matching the first edge and the second edge; since all four corners of a rectangle are right angles, the intersection of these two perpendiculars is the fourth vertex.
After obtaining the position information of the fourth vertex, the third side of the rotating annotation frame of the target object may be drawn between the third vertex and the fourth vertex, and the fourth side of the rotating annotation frame of the target object may be drawn between the fourth vertex and the first vertex, so as to obtain the target rotating annotation frame.
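Either construction above (diagonal or intersecting perpendiculars) reduces to the same vector identity: for consecutive rectangle vertices A, B, C, the fourth vertex is D = A + C − B. A one-line Python sketch of that identity (names are illustrative, not the patent's implementation):

```python
def fourth_vertex(a, b, c):
    # For three consecutive rectangle vertices A, B, C (right angle at B),
    # the fourth vertex follows from the parallelogram identity D = A + C - B,
    # equivalent to intersecting the perpendiculars described above.
    return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])
```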
Through this embodiment, through the geometric relation of rectangle, determine the fourth summit according to the summit of known position, and then can obtain the rotatory mark frame of mark target, improved the efficiency and the convenience of object mark.
In an exemplary embodiment, after the rotating label box of the target object is drawn on the target display interface according to the first edge and the third vertex, and the target rotating label box is obtained, the method further includes:
s71, displaying a second image to be annotated and a target rotation annotation frame on the target display interface, wherein the second image to be annotated contains a target object to be annotated, and the second image to be annotated is an image adjacent to the first image to be annotated in the video to be annotated;
s72, in response to the detected association operation, determining the target rotary annotation box as the rotary annotation box of the target object in the second image to be annotated.
In this embodiment, after obtaining the target rotation annotation frame, the user may continue to annotate the next annotation target in the first image to be annotated. Or, the first image to be annotated may be a certain frame of image in the video to be annotated, and the image annotation device may display a next frame of image, that is, the second image to be annotated, of the first image to be annotated in the video to be annotated on the target display interface.
When labeling the labeling target in the second image to be labeled, the labeling may be performed in the same or a similar way as in the foregoing embodiment. Optionally, in this embodiment, since objects in adjacent images of the video to be annotated are correlated to some extent, the target rotation annotation frame obtained from the first image to be annotated can be displayed on the target display interface together with the second image to be annotated.
The user can determine whether the target rotation labeling frame marks the position of the target object in the second image to be labeled. If so, the user can perform an association operation to determine the target rotation labeling frame as the rotation labeling frame of the target object in the second image to be labeled, so that no new labeling frame needs to be drawn. Otherwise, the displayed target rotation labeling frame can be removed and the target object can be labeled again.
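The association workflow can be sketched as a tiny helper that reuses the previous frame's box when the user confirms it still fits; everything here (the `annotations` mapping, parameter names, and the boolean confirmation) is a hypothetical illustration of the flow, not an API from the patent:

```python
def associate_to_next_frame(annotations, frame_idx, box, user_confirms):
    # Record the box for the current frame; if the user's association
    # operation confirms the box still fits the target in the adjacent
    # frame, reuse it there instead of redrawing from scratch.
    annotations[frame_idx] = box
    if user_confirms:
        annotations[frame_idx + 1] = box
    return annotations
```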
By the embodiment, when the continuous images are labeled, the labeling result of the image of the previous frame is combined to assist in labeling the object of the next frame, so that the time consumption of labeling can be reduced, and the efficiency of labeling the object can be improved.
The following explains the image object processing method in the embodiment of the present application with reference to an alternative example. For ease of understanding, in this alternative example a rotation rectangle with a certain angle is drawn directly on the canvas; the association with the annotation target is not shown, and the method may be applied to various application scenarios. In this alternative example, the first fixed point is fixed point A, the second fixed point is fixed point B, the first moving point is B', the third fixed point is fixed point C, the second moving point is C', and the fourth fixed point is fixed point D.
In this alternative example, a drawing scheme of a rotation rectangle is provided, and the rotation rectangle is drawn in a non-assisted line manner. As shown in fig. 3 and 4, the flow of the processing method of the image object in the present alternative example may include the following steps:
step 1, drawing a fixed point A;
step 2, drawing a moving point B' and a moving edge AB';
step 3, after finding a suitable position, drawing a fixed point B, and continuing to draw a moving point C' and a moving edge BC' (the moving edge BC' is always perpendicular to the fixed edge AB);
and step 4, after finding a suitable position, drawing a fixed point C; at this moment the rotation rectangle has three fixed points, namely A, B and C, so the position of the fourth fixed point D is also determined, and the rotation rectangle path is closed, completing the process.
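The four steps above can be condensed into one end-to-end sketch: fix A, fix B (defining the first edge and the target angle), project the third click onto the perpendicular through B to get C, and derive D to close the path. A Python approximation of this flow (coordinates and names are illustrative; the actual embodiment operates interactively on a canvas):

```python
import math

def rotated_rect_from_clicks(a, b, pointer_c):
    # Steps 1-2: A and B are the confirmed fixed points; AB is the first edge.
    abx, aby = b[0] - a[0], b[1] - a[1]
    length = math.hypot(abx, aby)
    nx, ny = -aby / length, abx / length          # unit perpendicular of AB
    # Step 3: project the third click onto the perpendicular through B -> C.
    t = (pointer_c[0] - b[0]) * nx + (pointer_c[1] - b[1]) * ny
    c = (b[0] + t * nx, b[1] + t * ny)
    # Step 4: D closes the rectangle path (parallelogram identity).
    d = (a[0] + c[0] - b[0], a[1] + c[1] - b[1])
    return [a, b, c, d]
```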
The optional example further provides a drawing scheme of the rotation rectangle, and the rotation rectangle is drawn in a mode with an auxiliary line. As shown in fig. 5, the flow of the processing method of the image object in the present alternative example may include the steps of:
step 1, drawing a fixed point A on the canvas;
step 2, drawing a moving point B' and a moving edge AB', with an auxiliary line B'C' (the auxiliary line B'C' is always perpendicular to the moving edge AB'); the position of the point B to be drawn can be adjusted according to the positions of the auxiliary line B'C' and its end point C';
step 3, after finding a suitable position, drawing a fixed point B, and continuing to draw a moving point C' and a moving edge BC' (the moving edge BC' is always perpendicular to the fixed edge AB), with an auxiliary line C'D' (the auxiliary line C'D' is always perpendicular to the moving edge BC'); the position of the point D to be drawn can be adjusted according to the positions of the auxiliary line C'D' and its end point D';
and step 4, after finding a suitable position, drawing a fixed point C; at this moment the rotation rectangle has three fixed points, namely A, B and C, so the position of the fourth fixed point D is also determined, and the rotation rectangle path is closed, completing the process.
With this alternative example, by providing both an assisted (auxiliary-line) mode and a non-assisted mode for drawing the rotation rectangle, the applicability of rotation rectangle drawing can be improved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, an optical disk) and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided an image object processing apparatus for implementing the above image object processing method. Fig. 6 is a block diagram of an alternative image object processing apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus may include:
the first display unit 602 is configured to display a first image to be annotated on a target display interface, where the first image to be annotated includes a target object to be annotated;
a first drawing unit 604, connected to the first display unit 602, configured to draw a first edge of the rotating annotation frame of the target object on the target display interface in response to a detected first drawing operation, where an angle of the first edge is a target angle, and the first drawing operation is used to indicate positions of a first vertex and a second vertex corresponding to the first edge;
and a first determining unit 606, connected to the first drawing unit 604, for determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating a position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge.
And the second drawing unit 608 is connected to the first determining unit 606, and is configured to draw the rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex, so as to obtain the target rotating labeling frame.
It should be noted that the first display unit 602 in this embodiment may be configured to execute the step S202, the first drawing unit 604 in this embodiment may be configured to execute the step S204, the first determining unit 606 in this embodiment may be configured to execute the step S206, and the second drawing unit 608 in this embodiment may be configured to execute the step S208.
Through the above modules, a first image to be annotated is displayed on the target display interface, where the first image to be annotated includes a target object to be annotated; in response to the detected first drawing operation, a first edge of the rotation labeling frame of the target object is drawn on the target display interface, where the angle of the first edge is a target angle and the first drawing operation is used for indicating the positions of a first vertex and a second vertex corresponding to the first edge; in response to the detected second drawing operation, a third vertex of the rotation labeling frame of the target object is determined, where the second drawing operation is used for indicating the position of the third vertex in a target direction that takes the second vertex as a starting point and is perpendicular to the first edge; and according to the first edge and the third vertex, the rotation labeling frame of the target object is drawn on the target display interface to obtain the target rotation labeling frame. This solves the problem in the related art that labeling an image object is cumbersome because the rectangle needs to be adjusted multiple times, simplifies the process of labeling the object, and improves the efficiency of labeling the object.
In one exemplary embodiment, the first drawing unit includes:
the first drawing module is used for drawing a first vertex on the target display interface in response to the detected first sub-drawing operation, wherein the first sub-drawing operation is used for indicating the position of the first vertex;
and a second drawing module, configured to draw the first edge according to the target angle on the target display interface in response to the detected second sub-drawing operation, wherein the second sub-drawing operation is used for indicating the position of the second vertex.
In an exemplary embodiment, the apparatus further includes:
and the second display unit is used for displaying a first moving edge taking the first vertex as a starting point on the target display interface before drawing the first edge according to the target angle on the target display interface, wherein the end point position of the first moving edge moves along with the movement of the input position on the target display interface.
In an exemplary embodiment, the apparatus further includes:
and the third display unit is used for displaying, on the target display interface, a first auxiliary edge that takes the end point of the first moving edge as its starting point and whose direction is perpendicular to the first moving edge, in the process of displaying the first moving edge that takes the first vertex as its starting point on the target display interface.
In an exemplary embodiment, the apparatus further includes:
and the fourth display unit is used for displaying, on the target display interface, a second moving edge that takes the second vertex as its starting point and whose direction is perpendicular to the first edge, after the first edge of the rotation labeling frame of the target object is drawn on the target display interface, wherein the end point of the second moving edge moves along with the input position on the target display interface.
In an exemplary embodiment, the apparatus further includes:
and the fifth display unit is used for displaying, on the target display interface, a second auxiliary edge that takes the end point of the second moving edge as its starting point and whose direction is perpendicular to the second moving edge, in the process of displaying the second moving edge that takes the second vertex as its starting point and is perpendicular to the first edge.
In an exemplary embodiment, the second drawing unit includes:
the third drawing module is used for drawing a second edge of the rotary labeling frame of the target object between the second vertex and the third vertex on the target display interface;
the first determining module is used for determining a fourth vertex of the rotary labeling frame of the target object according to the first vertex, the second vertex and the third vertex;
and the fourth drawing module is used for drawing the third side of the rotary marking frame of the target object between the third vertex and the fourth vertex and drawing the fourth side of the rotary marking frame of the target object between the fourth vertex and the first vertex to obtain the target rotary marking frame.
In an exemplary embodiment, the apparatus further includes:
the sixth display unit is used for displaying, after the rotation labeling frame of the target object is drawn on the target display interface according to the first edge and the third vertex and the target rotation labeling frame is obtained, a second image to be labeled and the target rotation labeling frame on the target display interface, wherein the second image to be labeled includes the target object to be labeled, and the second image to be labeled is an image adjacent to the first image to be labeled in the video to be labeled;
and the second determining unit is used for responding to the detected association operation and determining the target rotary annotation frame as the rotary annotation frame of the target object in the second image to be annotated.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to still another aspect of an embodiment of the present application, there is also provided a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing any one of the image object processing methods described above in this embodiment.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, displaying a first image to be annotated on the target display interface, wherein the first image to be annotated comprises a target object to be annotated;
s2, in response to the detected first drawing operation, drawing a first side of the rotating annotation frame of the target object on the target display interface, wherein the angle of the first side is a target angle, and the first drawing operation is used for indicating the positions of a first vertex and a second vertex corresponding to the first side;
s3, determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge;
and S4, drawing the rotary labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain the target rotary labeling frame.
Optionally, the specific example in this embodiment may refer to the example described in the above embodiment, which is not described again in this embodiment.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the processing method of the image object, which may be a server, a terminal, or a combination thereof.
Fig. 7 is a block diagram of an alternative electronic device according to an embodiment of the present application, as shown in fig. 7, including a processor 702, a communication interface 704, a memory 706 and a communication bus 708, where the processor 702, the communication interface 704 and the memory 706 communicate with each other via the communication bus 708, where,
a memory 706 for storing computer programs;
the processor 702, when executing the computer program stored in the memory 706, performs the following steps:
s1, displaying a first image to be annotated on the target display interface, wherein the first image to be annotated comprises a target object to be annotated;
s2, in response to the detected first drawing operation, drawing a first side of the rotating annotation frame of the target object on the target display interface, wherein the angle of the first side is a target angle, and the first drawing operation is used for indicating the positions of a first vertex and a second vertex corresponding to the first side;
s3, determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge;
and S4, drawing the rotary labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain the target rotary labeling frame.
Alternatively, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus. The communication interface is used for communication between the electronic device and other equipment.
The memory may include RAM, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
As an example, the memory 706 may include, but is not limited to, the first display unit 602, the first drawing unit 604, the first determining unit 606, and the second drawing unit 608 in the image object processing apparatus. In addition, it may further include, but is not limited to, other module units in the image object processing apparatus, which are not described again in this example.
The processor may be a general-purpose processor, and may include but is not limited to: a CPU (Central Processing Unit), an NP (Network Processor), and the like; but also a DSP (Digital Signal Processing), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration, and the device implementing the image object processing method may be a terminal device, such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 7 does not limit the structure of the electronic device; for example, the electronic device may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 7, or have a different configuration from that shown in fig. 7.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, ROM, RAM, a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in the above computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also fall within the protection scope of the present application.
Claims (11)
1. A method of processing an image object, comprising:
displaying a first image to be annotated on a target display interface, wherein the first image to be annotated comprises a target object to be annotated;
drawing a first edge of a rotating labeling frame of the target object on the target display interface in response to a detected first drawing operation, wherein the angle of the first edge is a target angle, and the first drawing operation is used for indicating positions of a first vertex and a second vertex corresponding to the first edge;
determining a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, wherein the second drawing operation is used for indicating the position of the third vertex in a target direction which takes the second vertex as a starting point and is perpendicular to the first edge;
and drawing a rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame.
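The construction of claim 1 can be sketched in code. The following is a minimal, hypothetical illustration (not the patented implementation): the first two input points fix the first edge and its target angle, the third input point is projected onto the direction perpendicular to that edge to obtain the third vertex, and the fourth vertex follows from the rectangle geometry. All names (e.g., `rotated_box_from_clicks`) are illustrative assumptions.

```python
import math

def rotated_box_from_clicks(p1, p2, p3):
    """Sketch of the claimed construction: p1 and p2 fix the first edge,
    p3 indicates the extent along the perpendicular direction."""
    v1, v2 = p1, p2
    # Target angle of the first edge, in degrees from the x-axis.
    angle = math.degrees(math.atan2(v2[1] - v1[1], v2[0] - v1[0]))
    # Unit vector perpendicular to the first edge.
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    length = math.hypot(ex, ey)
    nx, ny = -ey / length, ex / length
    # Project p3 onto the perpendicular ray starting at v2 (third vertex).
    t = (p3[0] - v2[0]) * nx + (p3[1] - v2[1]) * ny
    v3 = (v2[0] + t * nx, v2[1] + t * ny)
    # Fourth vertex completes the rectangle: v4 = v1 + (v3 - v2).
    v4 = (v1[0] + v3[0] - v2[0], v1[1] + v3[1] - v2[1])
    return [v1, v2, v3, v4], angle

# Axis-aligned example: the third click (5, 3) projects onto (4, 3).
box, angle = rotated_box_from_clicks((0, 0), (4, 0), (5, 3))
print(box, angle)
```

The same arithmetic works for any rotation angle, since the projection is expressed in terms of the edge vector rather than screen axes.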
2. The method according to claim 1, wherein the drawing a first edge of a rotating labeling frame of the target object on the target display interface in response to the detected first drawing operation comprises:
drawing the first vertex on the target display interface in response to the detected first sub-drawing operation, wherein the first sub-drawing operation is used for indicating the position of the first vertex;
and drawing the first edge according to the target angle on the target display interface in response to the detected second sub-drawing operation, wherein the second sub-drawing operation is used for indicating the position of the second vertex.
3. The method of claim 2, wherein prior to drawing the first edge at the target angle on the target display interface, the method further comprises:
displaying a first moving edge with the first vertex as a starting point on the target display interface, wherein the end point position of the first moving edge moves along with the input position on the target display interface.
4. The method according to claim 3, wherein in the process of displaying the first moving edge with the first vertex as the starting point on the target display interface, the method further comprises:
displaying, on the target display interface, a first auxiliary edge that takes the end point of the first moving edge as a starting point and whose direction is perpendicular to the first moving edge.
5. The method of claim 1, wherein after drawing the first edge of the rotating labeling frame of the target object on the target display interface, the method further comprises:
displaying, on the target display interface, a second moving edge that takes the second vertex as a starting point and is perpendicular to the first edge, wherein the end point position of the second moving edge moves along with the input position on the target display interface.
6. The method according to claim 5, wherein, in the process of displaying the second moving edge that takes the second vertex as a starting point and is perpendicular to the first edge on the target display interface, the method further comprises:
displaying, on the target display interface, a second auxiliary edge that takes the end point of the second moving edge as a starting point and whose direction is perpendicular to the second moving edge.
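One way the second moving edge of claims 5 and 6 could track the input position while remaining perpendicular to the first edge is to project the cursor position onto the perpendicular ray through the second vertex. This is a hedged sketch; the helper name and coordinate convention are assumptions, not taken from the patent.

```python
def project_to_perpendicular(v1, v2, cursor):
    """Return the end point of the second moving edge: the cursor
    position projected onto the ray through v2 perpendicular to v1->v2."""
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    length_sq = ex * ex + ey * ey
    # Perpendicular direction (not normalized; the scale cancels below).
    nx, ny = -ey, ex
    t = ((cursor[0] - v2[0]) * nx + (cursor[1] - v2[1]) * ny) / length_sq
    return (v2[0] + t * nx, v2[1] + t * ny)

# First edge along the x-axis: any cursor position snaps onto the
# vertical line through v2 = (4, 0).
print(project_to_perpendicular((0, 0), (4, 0), (6, 2)))  # (4.0, 2.0)
```

Redrawing this projected point on every pointer-move event would produce the constrained "rubber band" behavior the claims describe.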
7. The method according to claim 1, wherein the drawing a rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame comprises:
drawing a second edge of the rotating labeling frame of the target object between the second vertex and the third vertex on the target display interface;
determining a fourth vertex of the rotating labeling frame of the target object according to the first vertex, the second vertex and the third vertex;
and drawing a third edge of the rotating labeling frame of the target object between the third vertex and the fourth vertex and a fourth edge of the rotating labeling frame of the target object between the fourth vertex and the first vertex to obtain the target rotating labeling frame.
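The fourth vertex of claim 7 is fully determined by the other three. Under the natural reading that the four vertices form a rectangle (the first and second edges being perpendicular), it is the parallelogram completion of the first three vertices. A small hypothetical sketch, not the patented formula:

```python
def fourth_vertex(v1, v2, v3):
    """Complete the rotated box: since v1->v2 and v2->v3 are the first
    and second edges, the fourth vertex is v1 translated by the second
    edge vector, i.e. v4 = v1 + (v3 - v2)."""
    return (v1[0] + v3[0] - v2[0], v1[1] + v3[1] - v2[1])

# A box rotated 45 degrees: first edge (1, 1), second edge (-1, 1).
v1, v2, v3 = (0, 0), (1, 1), (0, 2)
print(fourth_vertex(v1, v2, v3))  # (-1, 1)
```

Because the identity only uses vector addition, it holds for any target angle of the first edge.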
8. The method according to any one of claims 1 to 7, wherein after the drawing of the rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame, the method further comprises:
displaying a second image to be annotated and the target rotating labeling frame on the target display interface, wherein the second image to be annotated comprises the target object to be annotated, and the second image to be annotated is an image adjacent to the first image to be annotated in a video to be annotated;
and in response to the detected association operation, determining the target rotating labeling frame as the rotating labeling frame of the target object in the second image to be annotated.
9. An apparatus for processing an image object, comprising:
the first display unit is used for displaying a first image to be annotated on a target display interface, wherein the first image to be annotated comprises a target object to be annotated;
a first drawing unit, configured to draw a first edge of a rotating labeling frame of the target object on the target display interface in response to a detected first drawing operation, where an angle of the first edge is a target angle, and the first drawing operation is used to indicate positions of a first vertex and a second vertex corresponding to the first edge;
a first determination unit, configured to determine a third vertex of the rotating labeling frame of the target object in response to a detected second drawing operation, where the second drawing operation is used to indicate a position of the third vertex in a target direction that is perpendicular to the first edge and that uses the second vertex as a starting point;
and a second drawing unit, configured to draw a rotating labeling frame of the target object on the target display interface according to the first edge and the third vertex to obtain a target rotating labeling frame.
10. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 8 by means of the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111669870.4A CN114298915A (en) | 2021-12-30 | 2021-12-30 | Image object processing method and device, storage medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114298915A true CN114298915A (en) | 2022-04-08 |
Family
ID=80973050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111669870.4A Pending CN114298915A (en) | 2021-12-30 | 2021-12-30 | Image object processing method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114298915A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118229507A (en) * | 2024-05-22 | 2024-06-21 | 芯瞳半导体技术(山东)有限公司 | Image processing method, device, system, equipment and computer storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110025687A1 (en) * | 2008-03-28 | 2011-02-03 | Konami Digetal Entertainment Co., Ltd. | Image processing device, image processing device control method, program, and information storage medium |
CN108573279A (en) * | 2018-03-19 | 2018-09-25 | 精锐视觉智能科技(深圳)有限公司 | Image labeling method and terminal device |
CN112508020A (en) * | 2020-12-22 | 2021-03-16 | 深圳市商汤科技有限公司 | Labeling method and device, electronic equipment and storage medium |
CN112685998A (en) * | 2021-01-04 | 2021-04-20 | 广联达科技股份有限公司 | Automatic labeling method, device, equipment and readable storage medium |
US20220245912A1 (en) * | 2019-10-22 | 2022-08-04 | Huawei Technologies Co., Ltd. | Image display method and device |
Non-Patent Citations (3)
Title |
---|
Sarah Fachada; Daniele Bonatto; Mehrdad Teratani; Gauthier Lafruit: "Polynomial Image-Based Rendering for non-Lambertian Objects", 2021 International Conference on Visual Communications and Image Processing (VCIP), 31 December 2021 (2021-12-31) * |
Zhang Xiuying, Chen Huiwen: "Dimension Annotation and Subroutine Design in Computer Drawing", Coal Mine Machinery, no. 06, 28 December 1987 (1987-12-28) * |
Fan Yachun; Tan Xiaohui; Zhou Mingquan; Lu Zhaolao: "3D Modeling of Scene Images Based on Shape Retrieval", High Technology Letters, no. 08, 15 August 2013 (2013-08-15) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107463331B (en) | Gesture track simulation method and device and electronic equipment | |
CN109726647B (en) | Point cloud labeling method and device, computer equipment and storage medium | |
CN111832447B (en) | Building drawing component identification method, electronic equipment and related product | |
US20200167568A1 (en) | Image processing method, device, and storage medium | |
CN107845113A (en) | Object element localization method, device and ui testing method, apparatus | |
CN112258507B (en) | Target object detection method and device of internet data center and electronic equipment | |
CN109740487B (en) | Point cloud labeling method and device, computer equipment and storage medium | |
US20130101226A1 (en) | Feature descriptors | |
CN109726481B (en) | Auxiliary method and device for robot construction and terminal equipment | |
CN113741763A (en) | Electronic book display method and device and electronic equipment | |
CN113469000A (en) | Regional map processing method and device, storage medium and electronic device | |
CN114298915A (en) | Image object processing method and device, storage medium and electronic device | |
CN110619597A (en) | Semitransparent watermark removing method and device, electronic equipment and storage medium | |
CN107122093B (en) | Information frame display method and device | |
CN110248235B (en) | Software teaching method, device, terminal equipment and medium | |
CN110163914B (en) | Vision-based positioning | |
CN114782769A (en) | Training sample generation method, device and system and target object detection method | |
CN111223155A (en) | Image data processing method, image data processing device, computer equipment and storage medium | |
CN111540060B (en) | Display calibration method and device of augmented reality equipment and electronic equipment | |
CN104881423A (en) | Information Providing Method And System Using Signage Device | |
CN111143912A (en) | Display labeling method and related product | |
CN116309999A (en) | Driving method and device for 3D virtual image, electronic equipment and storage medium | |
CN110737417A (en) | demonstration equipment and display control method and device of marking line thereof | |
CN109740005A (en) | A kind of image object mask method and device | |
CN111246140A (en) | Digital mark display method and digital mark display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||