CN112036443A - Method and device for labeling object contour in image data - Google Patents

Method and device for labeling object contour in image data

Info

Publication number
CN112036443A
Authority
CN
China
Prior art keywords
contour, data, points, shape, coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010759545.6A
Other languages
Chinese (zh)
Inventor
郑贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tusimple Inc
Original Assignee
Tusimple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tusimple Inc filed Critical Tusimple Inc
Priority to CN202010759545.6A
Publication of CN112036443A
Legal status: Pending

Classifications

    • G06F18/40 Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/235 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, a device, and a storage device for labeling an object contour in image data, which are used to solve the problem in the prior art that manually labeling object contours in image data is slow and inefficient. The labeling device displays the current frame of image data and receives input contour labeling data for an object in the image data, where the contour labeling data comprises contour shape related data and contour attribute data; determines a corresponding pre-stored contour shape from the contour shape related data, where the contour shape comprises contour points at a plurality of preset positions; generates, in the two-dimensional space in which the image data is displayed, an object contour that has the determined contour shape and conforms to the contour attribute data; and stores the position coordinate data of the contour points at the plurality of preset positions on the generated object contour.

Description

Method and device for labeling object contour in image data
Technical Field
The present application relates to the field of data annotation, and in particular, to a method and an apparatus for annotating an object contour in image data, and a storage apparatus.
Background
In the related art, in the field of machine learning, a neural network is usually trained using labeled image data so that the neural network can learn a desired function, such as object recognition.
Annotating image data typically includes annotating the outline of an object. When an annotator marks an object contour, a number of points are usually marked along the edge of the object in the image data, and a closed polyline through those points forms the contour of the object. During labeling, the annotator must click the required contour points one by one along the edge of the object; the labeling system stores the selected contour points and identifies the contour edge of the object from them.
In practice, some objects have regular shapes and can be annotated with only a few points, while others have complex or irregular shapes and require many more points. When the image data contains many objects whose contours need labeling, or many objects with complex or irregular shapes, the annotator must spend considerable time selecting the required contour points, making the labeling work slow and inefficient.
Disclosure of Invention
The application provides a method, a device, and a storage device for labeling an object contour in image data, which are used to solve the problem in the prior art that manually labeling object contours in image data is slow and inefficient.
In one aspect, an embodiment of the present application provides a method for labeling an object contour in image data, including: a labeling device displays the current frame of image data and receives input contour labeling data for an object in the image data, where the contour labeling data comprises contour shape related data and contour attribute data; determines a corresponding pre-stored contour shape from the contour shape related data, where the contour shape comprises contour points at a plurality of preset positions; generates, in the two-dimensional space in which the image data is displayed, an object contour that has the determined contour shape and conforms to the contour attribute data; and stores the position coordinate data of the contour points at the plurality of preset positions on the generated object contour.
In another aspect, an embodiment of the present application provides an apparatus for annotating an object contour in image data, including a processor and at least one memory, where the at least one memory stores at least one machine executable instruction, and the processor executes the at least one machine executable instruction to perform the method for annotating an object contour in image data as described above.
In another aspect, an embodiment of the present application provides a non-volatile storage medium, which stores at least one machine executable instruction, where the at least one machine executable instruction is executed by a processor to implement the method for labeling the contour of an object in image data as described above.
According to the technical solution provided by the embodiments of the application, after the labeling device displays a frame of image data, it determines the contour shape of an object from the received contour labeling data, generates in the two-dimensional space in which the image data is displayed an object contour that has the determined contour shape and matches the edge of the object, and determines and stores the position coordinates of the contour points at a plurality of preset positions on the object contour in the image data. The object contour can thus be labeled automatically from the input contour labeling data, improving the speed and efficiency of contour labeling.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiment(s) of the application and together with the description serve to explain the application and not limit the application.
Fig. 1 is a schematic structural diagram of an apparatus for labeling an object contour in image data according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an architecture of an annotation process for an object contour in image data according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for labeling an object contour in image data according to an embodiment of the present disclosure;
FIG. 4 is an exemplary diagram of a rectangular contour template provided by an embodiment of the present application;
FIG. 5 is a flowchart of a process of one embodiment of step 305 of FIG. 3;
FIG. 6 is a flowchart illustrating another embodiment of step 305 of FIG. 3;
FIG. 7 is a flowchart illustrating another embodiment of the process of step 305 of FIG. 3;
FIG. 8 is a flowchart illustrating another embodiment of the process of step 305 of FIG. 3;
FIG. 9 is a flowchart illustrating another embodiment of the process of step 305 of FIG. 3;
FIG. 10 is a flowchart illustrating an exemplary process for adjusting the contour of an object according to the present disclosure;
FIG. 11 is a process flow diagram for one embodiment of step 3057 in FIG. 10;
FIG. 12 is a process flow diagram for another embodiment of step 3057 in FIG. 10;
FIG. 13 is a flowchart illustrating a process for creating a new outline template according to an embodiment of the present application;
FIG. 14 is a flowchart of a process for adjusting a contour template according to an embodiment of the present application;
fig. 15 is a flowchart of a process for adjusting a contour template according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To address the slow speed and low efficiency of manually labeling object contours in image data in the related art, the embodiments of the present application provide a labeling scheme for object contours in image data. In this scheme, after displaying a frame of image data, the labeling device determines the contour shape of an object from the received contour labeling data, generates in the two-dimensional space in which the image data is displayed an object contour that has the determined contour shape and matches the edge of the object, and determines and stores the position coordinates of the contour points at a plurality of preset positions on the object contour in the image data. The object contour can thus be labeled automatically from the input contour labeling data, improving the speed and efficiency of contour labeling.
Some embodiments of the present application provide a labeling scheme for object contours in image data. Fig. 1 shows a structure of a labeling apparatus provided in an embodiment of the present application, where the apparatus 1 includes a processor 11 and at least one memory 12.
In some embodiments, the at least one memory 12 may be a storage device of various modalities, such as a transitory or non-transitory storage medium, a volatile or non-volatile storage medium. At least one machine executable instruction may be stored in the memory 12, and the at least one machine executable instruction, when executed by the processor 11, implements the annotation processing of the object contour in the image data provided in the embodiment of the present application.
In some embodiments, the annotation device 1 may be located on the server side. In other embodiments, the annotation device 1 may also be located in a cloud server. In other embodiments, the annotation device 1 may also be located in the client.
As shown in fig. 2, the labeling process for object contours in image data provided by the embodiment of the present application may include front-end processing 12 and back-end processing 14. The front-end processing 12 displays the relevant image data or other related data and receives data or information input by the annotator; for example, it may be implemented as a web page or as a separate application interface. The back-end processing 14 labels the object contours in the image data based on the data and information received by the front-end processing 12. After the labeling is completed, the labeling device 1 may further provide the labeling result to other processes or applications on the client, the server, or the cloud server.
The following describes the process of labeling the contour of an object in image data by the labeling device 1 executing at least one machine-executable instruction.
Fig. 3 is a processing flow chart of a method for labeling an object contour in image data according to an embodiment of the present application, including:
Step 301: the labeling device displays the current frame of image data and receives input contour labeling data for an object in the image data, where the contour labeling data comprises contour shape related data and contour attribute data;
Step 303: determining a corresponding pre-stored contour shape from the contour shape related data, where the contour shape comprises contour points at a plurality of preset positions;
Step 305: generating, in the two-dimensional space in which the image data is displayed, an object contour that has the determined contour shape and conforms to the contour attribute data;
Step 307: storing the position coordinate data of the contour points at the plurality of preset positions on the generated object contour.
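The steps of the method can be sketched end to end. This is a minimal illustration under stated assumptions, not the patent's implementation; the template table, dictionary keys, and function name are all invented for the example.

```python
# Minimal sketch of the Fig. 3 flow (steps 301-307). All names and data
# structures here are illustrative assumptions, not taken from the patent.

# Pre-stored contour templates: shape identifier -> ordered contour points
# expressed in relative coordinates (unit rectangle here).
TEMPLATES = {
    "rectangle": [(0, 0), (1, 0), (1, -1), (0, -1)],
}

def label_object_contour(label_data, store):
    """Resolve the template (step 303), generate the contour in the display
    space (step 305), and store the contour point coordinates (step 307)."""
    shape = TEMPLATES[label_data["shape"]]
    x, y = label_data.get("position", (0, 0))   # contour attribute data
    w, h = label_data.get("size", (1, 1))
    contour = [(x + px * w, y + py * h) for px, py in shape]
    store.append(contour)
    return contour

store = []
contour = label_object_contour(
    {"shape": "rectangle", "position": (10, 20), "size": (4, 3)}, store)
# contour is [(10, 20), (14, 20), (14, 17), (10, 17)]
```

The contour attribute data (position and size) scales and places the stored template, and only the resulting contour point coordinates are stored, mirroring step 307.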
According to the method shown in fig. 3, the labeling device determines the contour shape of the object from the received contour labeling data, generates in the two-dimensional space in which the image data is displayed an object contour that has the determined contour shape and conforms to the contour attribute data, and stores the position coordinates of the contour points at a plurality of preset positions on the object contour in the image data. The object contour can thus be labeled automatically from the input contour labeling data, improving the speed and efficiency of contour labeling.
The process illustrated in FIG. 3 may be implemented in a number of embodiments and alternatives, and the embodiments provided in the examples of this application are described below.
In step 301, the labeling device can display a frame of image data through the human-computer interface; this frame can be one of multiple frames of image data. The frame currently displayed by the labeling device is the current frame of image data. When the annotator sees that the current frame contains an object to be labeled, the annotator inputs contour labeling data for that object into the labeling device.
The labeling device can receive the contour labeling data in a variety of ways. For example, an input box can be provided on the human-computer interface, in which the annotator enters preset instructions and data. Buttons or keys carrying preset instructions or data can be provided on the interface, and the annotator inputs an instruction or data by clicking a button or key. A pull-down menu can also be provided, possibly with one or more levels of submenus each containing one or more options; the options correspond to preset instructions or data, and the annotator inputs an instruction or data by selecting an option. The preset instruction may be a contour labeling instruction, and the data may be contour labeling data.
The labeling device may receive the contour labeling data in a single receiving operation or across several receiving operations.
The contour labeling data comprises contour shape related data and contour attribute data. The contour shape related data may comprise object category data or contour shape data. The contour attribute data may include contour position data and/or contour size data.
In some embodiments, when object category data is included in the contour labeling data, the categories covered by the object category data may be set in advance according to the scene contained in the image data. For example, when annotating image data of a road scene, the object categories may include categories of objects appearing in the road scene, including but not limited to vehicles, traffic lights, and road signs. Each category may also include a plurality of sub-categories; for example, the vehicle category may include a passenger vehicle sub-category and a truck sub-category.
In the embodiment of the present application, the correspondence between object categories and object contour shapes may be preset and stored in the labeling device. When a category has no sub-categories, the category itself may correspond to an object contour shape; when a category includes several sub-categories, each sub-category may correspond to its own object contour shape. For example, each sub-category of the vehicle category may correspond to a distinct object contour shape.
In some embodiments, when contour shape data is included in the contour labeling data, the shapes covered by the contour shape data may be a number of preset general object contour shapes, or the contour shapes of specific objects. General contour shapes may include common shapes such as rectangles and circles; specific contour shapes may be set in advance according to the scene contained in the image data. For example, when labeling image data of a road scene, the specific contour shapes may include the contour of a passenger car, the contour of a truck, the contour of a walking pedestrian, and the contour of a standing pedestrian.
In the embodiment of the present application, the identifier of each object contour shape and a template of that shape may also be saved in advance. A contour shape template may be represented by contour points at a plurality of preset locations on the shape; for example, one contour point may be set on each of the four vertices of a rectangular contour, and one contour point on each corner point of an irregularly shaped contour. Furthermore, a positional order of the contour points may be set in advance, so that the contour shape is expressed by a plurality of contour points with a sequential relationship. In the example shown in fig. 4, four contour points 1, 2, 3, 4 are placed on the four vertices of a rectangular contour R, ordered clockwise with contour point 1 as the starting point and contour point 4 as the end point. Under this order, the neighbors of contour point 1 are contour points 4 and 2, the neighbors of contour point 2 are contour points 1 and 3, the neighbors of contour point 3 are contour points 2 and 4, and the neighbors of contour point 4 are contour points 3 and 1. The rectangular contour can therefore be expressed by the coordinate data of the four contour points 1, 2, 3, 4.
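The ordered-point representation of the fig. 4 example can be sketched as follows; the dictionary layout and function name are illustrative only, not from the patent.

```python
# Sketch of the Fig. 4 rectangle: four contour points in clockwise order,
# point 1 as the starting point, point 4 as the end point. Illustrative only.
points = {1: (0, 0), 2: (1, 0), 3: (1, -1), 4: (0, -1)}
order = [1, 2, 3, 4]

def neighbors(p):
    """Return the two adjacent contour points of p under the preset order
    (the order is cyclic, so the end point neighbors the starting point)."""
    i = order.index(p)
    return order[i - 1], order[(i + 1) % len(order)]

# As in the text: the neighbors of contour point 1 are points 4 and 2.
```

Storing only the ordered point list is enough to reconstruct the closed contour, since adjacency follows from the preset positional order.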
When a contour template is expressed by storing the coordinates of a plurality of contour points, either predetermined coordinates or relative coordinates of those points may be stored.
When predetermined coordinates of the contour points are stored, those coordinates express both a predetermined position and a predetermined size of the contour; generating a contour of that shape from the predetermined coordinates produces a contour of the predetermined size at the predetermined position. The predetermined position may be represented by the coordinates of a specific contour point, and the predetermined size by the relationships between the predetermined coordinates of the contour points. The position of the specific contour point may also be saved separately as the predetermined position of the contour template, and the size relationships between the contour points saved separately as the predetermined size.
When relative coordinates of the contour points are stored, they express the relative positional and size relationships between the points. For example, the four contour points of a rectangular contour may be recorded as: contour point 1 at (x, y), contour point 2 at (x + a, y), contour point 3 at (x + a, y - b), and contour point 4 at (x, y - b). The position of the object contour can be located by a specific contour point, for example by the coordinates (x, y) of contour point 1, while its size is expressed by a and b. When x, y, a, and b have preset values, the coordinates are predetermined coordinates; otherwise they are relative coordinates. With relative coordinates, the object contour is positioned through (x, y) and sized through a and b when it is generated. Thus, in the contour generation of step 305, the actual coordinates of the contour points can be determined from the contour position data (e.g., the values of x and y) and the contour size data (the values of a and b) included in additional contour labeling data input by the annotator.
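The relative-coordinate layout just described maps directly onto a small helper; the function name is an assumption for illustration.

```python
def rectangle_points(x, y, a, b):
    """Instantiate the rectangle template from an anchor point (x, y) and
    size parameters a, b, exactly as laid out in the text:
    point 1 (x, y), point 2 (x + a, y), point 3 (x + a, y - b),
    point 4 (x, y - b)."""
    return [(x, y), (x + a, y), (x + a, y - b), (x, y - b)]

# Positioning through (x, y) and sizing through a and b:
pts = rectangle_points(2, 5, 3, 1)
# pts is [(2, 5), (5, 5), (5, 4), (2, 4)]
```

With preset values of x, y, a, b this yields the predetermined coordinates; with values supplied by the annotator it realizes the relative-coordinate case.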
The labeling device may pre-store the predetermined coordinates or the relative coordinates of the contour points of each contour template, or both at once, to suit different application scenarios. When both the predetermined and the relative coordinates of each contour template are pre-stored, the contour generation process can be implemented more flexibly in several ways.
In step 303, the process of determining the contour shape of the object differs depending on the data included in the contour labeling data, which may contain object category data or contour shape data.
In some embodiments, when object category data is included in the contour labeling data received in step 301, the processing of step 303 includes: determining, according to the preset correspondence between object categories and contour shapes, the contour shape corresponding to the received object category data as the contour shape of the object.
As described above, when the labeling device receives object category data, it can determine the corresponding object contour shape from the pre-stored correspondence between object categories and contour shapes.
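The pre-stored correspondence between object categories and contour shapes amounts to a lookup table; the categories and shape identifiers below are assumed examples, not values from the patent.

```python
# Hypothetical (category, sub-category) -> contour shape identifier mapping.
CATEGORY_TO_SHAPE = {
    ("vehicle", "passenger"): "passenger_car_contour",
    ("vehicle", "truck"): "truck_contour",
    ("traffic_light", None): "rectangle",
}

def shape_for_category(category, subcategory=None):
    """Step 303 when object category data is received: resolve the
    pre-stored contour shape for the category (or sub-category)."""
    return CATEGORY_TO_SHAPE[(category, subcategory)]
```

Sub-categories get their own entries, matching the text's note that each vehicle sub-category may correspond to a distinct contour shape.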
In some embodiments, when contour shape data is included in the contour labeling data received in step 301, the processing of step 303 includes: determining the contour shape indicated by the received contour shape data as the contour shape of the object.
That is, when the labeling device receives contour shape data directly, it can directly use the contour shape the data points to.
As described above, the contour shape determined in step 303 has contour points at a plurality of preset positions, for example on the four vertices of a rectangular contour or on the corner points of an irregular shape, and the contour points may have a predetermined sequential relationship. For example, among the four contour points of the rectangular contour shown in fig. 4, with the specific contour point 1 as the starting point, the points are ordered clockwise or counterclockwise, and under this order the last contour point 4 is the end point.
In the contour generation of step 305, an object contour with the determined contour shape and a specified size must be generated at a specified position, i.e., a contour conforming to the contour attribute data. The contour generation process may include the process shown in fig. 5:
Step 3051: determining the position coordinates of the contour points at the plurality of preset positions on the contour shape according to the contour attribute data;
Step 3053: generating the corresponding contour points at the determined position coordinates in the two-dimensional space;
Step 3055: generating the object contour from the generated contour points and the preset positional order among them.
The process shown in fig. 5 has various embodiments depending on the specified position and the specified size.
In a first generation manner, in some embodiments, if the contour labeling data does not further include a specified position or specified size, for example when the contour labeling data received first contains only contour shape related data, the specified position may default to a preset position in the two-dimensional space and the specified size to a predetermined size; that is, the labeling device generates an object contour with the determined contour shape and the predetermined size at the preset position in the two-dimensional space. When contour attribute data is received later, it may be used as adjustment data to adjust the generated object contour.
The preset position may be any position in the two-dimensional space in which the image data is displayed, for example the right region or the upper left corner of that space, or another position chosen for the application scenario, and the predetermined size may be the display size of the contour template.
In this generation manner, the process shown in fig. 5 may be implemented as the process shown in fig. 6:
Step 3051a: determining the coordinates of the contour points at the plurality of preset positions on the contour shape according to the preset position in the two-dimensional space and the predetermined size of the object contour.
When the labeling device stores predetermined coordinates of the contour points of the contour shape, determining the coordinates of the contour points may simply take the read predetermined coordinates as the coordinates of the generated contour points.
Step 3053a: generating the corresponding contour points in the two-dimensional space according to the determined coordinates.
Step 3055a: generating the connecting lines between adjacent contour points from the generated contour points and the preset positional order among them, thereby obtaining the object contour.
In one example, when the contour shape data included in the contour labeling data indicates a rectangle and no contour position or size data is included, and the labeling device stores predetermined coordinates for the four contour points of a rectangular contour, the predetermined coordinates may be read directly. For instance, with x, y, a, b all taking predetermined values, contour point 1 is at (x, y), contour point 2 at (x + a, y), contour point 3 at (x + a, y - b), and contour point 4 at (x, y - b). The four contour points are generated in the two-dimensional space from the read coordinates, and the connecting lines between adjacent contour points are generated according to their sequential relationship, yielding the object contour.
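The point-and-connect steps in this example can be sketched as reading the stored coordinates and closing the polyline; the helper and the coordinate values below are illustrative assumptions.

```python
def contour_edges(points):
    """Connect adjacent contour points in their preset order, including the
    closing edge from the end point back to the starting point."""
    return [(points[i], points[(i + 1) % len(points)])
            for i in range(len(points))]

# Predetermined coordinates read from the template (assumed values).
rect = [(0, 0), (4, 0), (4, -2), (0, -2)]
edges = contour_edges(rect)
# Four edges: 1-2, 2-3, 3-4, and 4-1, which closes the rectangle.
```

The modular index realizes the end-point-to-starting-point connection implied by the cyclic order of the contour points.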
After generating the contour of the predetermined size at the predetermined position in this first manner, the labeling device may subsequently adjust the position and/or size of the generated contour according to adjustment data input by the annotator.
In a second generation manner, in some embodiments, if the contour labeling data includes contour attribute data containing only contour size data, the labeling device may generate an object contour with the determined contour shape and the size indicated by the contour size data at a preset position in the two-dimensional space.
The preset position may be any position in the two-dimensional space in which the image data is displayed, set according to the needs of the application scenario. The contour size data is the size the annotator requires for the object contour.
In this manner, the process shown in FIG. 5 may be implemented as the process shown in FIG. 7:
step 3051b, determining coordinates of contour points at a plurality of preset positions on the contour shape according to the preset position in the two-dimensional space and the size indicated by the contour data of the object.
When the object contour is generated at the preset position, the position of the object contour can be positioned through the preset specific contour point on the contour template, namely, the coordinate of the preset position is determined as the coordinate of the specific contour point, and the coordinates of other contour points are determined according to the size data of the object contour and the relative coordinate of the contour point.
step 3053b, correspondingly generating a plurality of contour points in the two-dimensional space according to the determined coordinates;
step 3055b, generating a connecting line between the adjacent contour points according to the generated plurality of contour points and the preset position sequence relation among the plurality of contour points, and obtaining the object contour.
In one example, when the contour shape data included in the contour labeling data indicates a rectangle and contour size data is included, the coordinates of the four contour points on the rectangular contour may be determined as follows: the coordinates of the predetermined position are taken as the coordinates (x, y) of a specific contour point 1 among the four contour points, and the coordinates (x + a0, y) of contour point 2, the coordinates (x + a0, y - b0) of contour point 3, and the coordinates (x, y - b0) of contour point 4 are determined from the values a0 and b0 of a and b included in the contour size data. Connecting lines between adjacent contour points are then generated according to the sequential relationship between the contour points, thereby obtaining the object contour.
In a third generation manner, in some embodiments, if the contour labeling data includes contour attribute data and only contour position data is included in the contour attribute data, the labeling device may generate, at the position indicated by the contour position data in the two-dimensional space, an object contour having the determined contour shape and a predetermined size.
The position indicated by the contour position data may be any position in the image data selected by the annotator, and the predetermined size may be represented by predetermined coordinates of a plurality of contour points on the contour template.
In this generation manner, the process shown in fig. 5 may be implemented as the process shown in fig. 8:
step 3051c, determining coordinates of contour points at a plurality of preset positions on the contour shape according to the position indicated by the contour position data in the two-dimensional space and the predetermined size of the object contour;
The position indicated by the contour position data may be taken as the position of a specific contour point on the contour template, and the predetermined size of the object contour may be a separately stored size relationship between the contour points.
step 3053c, correspondingly generating a plurality of contour points in the two-dimensional space according to the determined coordinates;
step 3055c, generating a connecting line between the adjacent contour points according to the generated plurality of contour points and the preset position sequence relation among the plurality of contour points, and obtaining the object contour.
In one example, when the contour shape data included in the contour labeling data indicates a rectangle and contour position data is included, the coordinates of the four contour points on the rectangular contour may be determined as follows: the position coordinates (x0, y0) indicated by the contour position data are taken as the coordinates of a specific contour point 1 among the four contour points, and the coordinates (x0 + a, y0) of contour point 2, the coordinates (x0 + a, y0 - b) of contour point 3, and the coordinates (x0, y0 - b) of contour point 4 are determined from the values of a and b included in the predetermined contour size relationship. Connecting lines between adjacent contour points are then generated according to the sequential relationship between the contour points, thereby obtaining the object contour.
In a fourth generation manner, in some embodiments, if the contour labeling data includes contour attribute data, and the contour attribute data includes both contour size data and contour position data, the labeling device may generate, at the position indicated by the contour position data in the two-dimensional space, an object contour having the determined contour shape and the size indicated by the contour size data.
The position indicated by the contour position data may be any position in the image data selected by the annotator, and the contour size data specifies the size with which the annotator needs to label the object contour.
In this generation manner, the process shown in fig. 5 may be implemented as the process shown in fig. 9:
step 3051d, determining coordinates of contour points at a plurality of preset positions on the contour shape according to the position indicated by the contour position data and the size indicated by the contour size data in the two-dimensional space;
The position indicated by the contour position data may be taken as the position of a specific contour point on the contour template, and the size relationship included in the contour size data may be taken as the size relationship between the contour points.
step 3053d, correspondingly generating a plurality of contour points in the two-dimensional space according to the determined coordinates;
and step 3055d, generating a connecting line between the adjacent contour points according to the position sequence relation between the contour points to obtain the object contour.
In one example, when the contour shape data included in the contour labeling data indicates a rectangle and both contour position data and contour size data are included, the coordinates of the four contour points on the rectangular contour may be determined as follows: the position coordinates (x0, y0) indicated by the contour position data are taken as the coordinates of a specific contour point 1 among the four contour points, and the coordinates (x0 + a0, y0) of contour point 2, the coordinates (x0 + a0, y0 - b0) of contour point 3, and the coordinates (x0, y0 - b0) of contour point 4 are determined from the values a0 and b0 of a and b included in the contour size data. Connecting lines between adjacent contour points are then generated according to the sequential relationship between the contour points, thereby obtaining the object contour.
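All four rectangle examples above reduce to the same computation: fix one specific contour point (taken from a preset position or from the contour position data) and derive the remaining points from width/height values (taken from predetermined sizes or from the contour size data). A hedged sketch of that shared step; the names are illustrative:

```python
def generate_rectangle(anchor, size):
    """Place contour point 1 at `anchor` and derive contour points 2-4
    from the (width, height) in `size`, per the examples above."""
    x0, y0 = anchor  # preset position or contour position data
    a0, b0 = size    # predetermined size or contour size data
    return [(x0, y0), (x0 + a0, y0), (x0 + a0, y0 - b0), (x0, y0 - b0)]
```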
Through the above processing, the labeling apparatus can generate an object contour having the determined shape in a two-dimensional space.
For the above-mentioned preliminarily generated object contour, the annotator may need to adjust the object contour so that the generated object contour matches the edge of the annotated object.
In some embodiments, step 305 may further include adjusting the preliminarily generated object contour.
As shown in fig. 10, step 305 may further include the following steps:
step 3056, the labeling device receives the contour adjustment data;
step 3057, adjusting the generated object contour in the two-dimensional space according to the contour adjustment data to obtain an object contour matched with the edge of the object.
When the object contour is adjusted, the position of at least one contour point on the object contour may be adjusted, thereby realizing adjustment of the contour position, contour size, and contour shape.
In step 3056, the contour adjustment data received by the labeling device includes a displacement angle and a displacement distance of at least one contour point to be adjusted.

In different implementation scenarios, the adjustment performed by the annotator on the preliminarily generated object contour may be an adjustment of the contour position, the contour size, or the contour shape, and the annotator determines the required adjustment mode. The adjustment mode may include adjustment of at least one contour point, adjustment of at least one contour edge, and adjustment of the whole contour. For example, when adjusting the position and shape of the contour, the position of one contour point may be adjusted, one contour edge may be displaced or rotated, or the entire contour may be displaced or rotated. All of these adjustments reduce to adjustments of the corresponding contour points, i.e. a displacement of at least one contour point to be adjusted: the displacement of one contour edge is the displacement of the contour points included in that edge, and the displacement of the entire contour is the displacement of every contour point included in the contour.

The adjustment of the object contour may also be an adjustment of the contour size, such as an overall enlargement or reduction of the object contour; such an adjustment is likewise an adjustment of each contour point, that is, each contour point has its own displacement direction and displacement distance. In all cases, the adjustment of the object contour is the displacement of at least one contour point to be adjusted, including a displacement angle and a displacement distance.
The displacement angle may be a displacement angle of the contour point to be adjusted in a two-dimensional space where the image data is located, or may be a displacement angle relative to a position where the contour point to be adjusted is located.
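To illustrate the point that a uniform resize is itself a set of per-point displacements, an enlargement about the contour's centroid moves each contour point outward along its own ray, so each point gets its own displacement direction and distance. This is a sketch of one plausible implementation, not the source's:

```python
def scale_contour(points, factor):
    """Scale a contour about its centroid; equivalent to displacing each
    contour point by (factor - 1) * r along the ray from the centroid."""
    cx = sum(px for px, _ in points) / len(points)
    cy = sum(py for _, py in points) / len(points)
    return [(cx + (px - cx) * factor, cy + (py - cy) * factor)
            for px, py in points]
```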
As shown in fig. 11, the process of adjusting the generated object contour according to the contour adjustment data at step 3057 includes:
step 30571a, determining the adjusted position of each contour point to be adjusted according to the contour adjustment data and the current position of each contour point to be adjusted;
step 30573a, regenerating at least one contour point with the adjusted position in a two-dimensional space according to the determined position;
step 30575a, generating an adjusted object contour according to the positions of the contour points, so that the adjusted object contour matches the edge of the object.
In step 30571a, when determining the adjusted position of a contour point to be adjusted, the adjusted position coordinates may be computed using basic geometry from the current position (x1, y1) of the contour point, the adjustment angle α, and the adjustment distance c.
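The geometry here is a polar-to-Cartesian step: the new position is the current position plus the displacement distance resolved along the displacement angle. A minimal sketch, assuming the angle is given in radians in the image's two-dimensional coordinate frame:

```python
import math

def adjust_point(x1, y1, alpha, c):
    """Displace a contour point at (x1, y1) by distance c at angle alpha."""
    return (x1 + c * math.cos(alpha), y1 + c * math.sin(alpha))
```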
Through the processes of fig. 10 and 11, the preliminarily generated object contour can be adjusted. When the annotator needs to make multiple adjustments, the annotating device can perform the processing shown in fig. 10 and 11 multiple times accordingly.
In some embodiments, when the annotator needs to annotate an irregular-shaped object, new contour points may be added to the preliminarily generated object contour, and a new contour shape may be obtained through the original contour points and the new contour points.
In this case, the contour adjustment data may include: and position data of at least one newly added contour point.
As shown in fig. 12, the process of adjusting the generated object contour according to the contour adjustment data at step 3057 includes:
step 30571b, correspondingly generating at least one newly added contour point in the two-dimensional space according to the position data of the at least one newly added contour point in the contour adjustment data;
step 30573b, determining a new position sequence between the at least one newly added contour point and the original contour points on the object contour according to the positions of the newly added contour points and the positions and position sequence relationship of the original contour points;
step 30575b, generating connecting lines between adjacent contour points according to the new position sequence of the contour points.
In step 30573b, the position sequence of all the contour points may be re-determined using basic geometry from the position coordinates of the contour points and the preset sequence direction between the contour points.
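One plausible way to re-determine the position sequence (an assumed heuristic, not taken from the source) is to splice the new contour point into the edge where it adds the least perimeter, which keeps the preset sequence direction of the original points intact:

```python
import math

def insert_contour_point(points, new_point):
    """Insert new_point into the ordered contour at the edge where it
    increases the perimeter least, preserving the original sequence."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    best_i, best_cost = 0, float("inf")
    for i in range(len(points)):
        p, q = points[i], points[(i + 1) % len(points)]
        # Extra perimeter incurred by routing the edge p -> q via new_point.
        cost = dist(p, new_point) + dist(new_point, q) - dist(p, q)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return points[:best_i + 1] + [new_point] + points[best_i + 1:]
```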
Through the above processing, embodiments of the present application can label the contour of an object in the image data.
In the above step 307, the position coordinates of the plurality of contour points in the image data may be determined according to the display scale of the current frame image data and the position coordinates of the contour points in the two-dimensional display space.
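Assuming the displayed frame is the original image scaled by a uniform factor (the convention is an assumption; the source does not specify it), the conversion in step 307 could look like:

```python
def to_image_coords(screen_points, display_scale):
    """Map contour points from the displayed (scaled) two-dimensional
    space back to pixel coordinates in the original image data."""
    return [(x / display_scale, y / display_scale) for x, y in screen_points]
```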
In step 309, the position coordinate data of the plurality of contour points may be stored in association with one another in accordance with the predetermined sequential relationship between the contour points.
In some embodiments, in the case that the contour template the annotator needs is not pre-stored in the labeling device, the annotator can also create a new contour template.
As shown in fig. 13, the process of creating a contour template may include:
step 1301, the labeling device receives newly-built contour data, wherein the newly-built contour data comprises position sequence relations of a plurality of contour points and position coordinate data of each contour point;
step 1303, generating an object contour composed of a plurality of contour points included in the newly-created contour data in a two-dimensional space displaying the image data;
the labeling device can generate a plurality of contour points included in the newly-built contour data and generate connecting lines between adjacent contour points to obtain the object contour;
step 1305, storing the position coordinate data of the plurality of contour points according to the position sequence relation among the plurality of contour points.
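Step 1305's storage, where the list order itself encodes the positional sequence relationship between the contour points, could be as simple as serializing an ordered point list. The JSON layout and function names below are assumptions for illustration:

```python
import json

def save_template(name, ordered_points, path):
    """Persist a contour template; the order of `ordered_points`
    encodes the positional sequence relationship between contour points."""
    with open(path, "w") as f:
        json.dump({"name": name, "points": ordered_points}, f)

def load_template(path):
    """Load a template back; JSON arrays deserialize as lists, so the
    points are re-tupled here."""
    with open(path) as f:
        data = json.load(f)
    return data["name"], [tuple(p) for p in data["points"]]
```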
Through the process shown in FIG. 13, the outline template required by the annotator can be created and saved. In a specific implementation scenario, the process shown in fig. 13 and the process shown in fig. 3 may be implemented as two separate functional modules.
In some embodiments, based on the process shown in fig. 13, if the annotator needs to adjust the generated new contour, a contour adjustment process can be added between step 1303 and step 1305, or after step 1305.
FIG. 14 illustrates a process for adjusting a newly created outline template, comprising:
step 1301, the labeling device receives newly-built contour data, wherein the newly-built contour data comprises position sequence relations of a plurality of contour points and position coordinate data of each contour point;
step 1303, generating an object contour composed of a plurality of contour points included in the newly-created contour data in a two-dimensional space displaying the image data;
step 3056, the labeling device receives contour adjustment data;
step 3057, adjusting the generated object contour in the two-dimensional space according to the contour adjustment data to obtain an object contour matched with the edge of the object;
step 1305, storing the position coordinate data of the plurality of contour points according to the position sequence relation among the plurality of contour points.
FIG. 15 illustrates another process for adjusting the new outline template, including:
step 1301, the labeling device receives newly-built contour data, wherein the newly-built contour data comprises position sequence relations of a plurality of contour points and position coordinate data of each contour point;
step 1303, generating an object contour composed of a plurality of contour points included in the newly-created contour data in a two-dimensional space displaying the image data;
step 1305, storing position coordinate data of the plurality of contour points according to the position sequence relation among the plurality of contour points;
step 3056, the labeling device receives contour adjustment data;
step 3057, adjusting the generated object contour in the two-dimensional space according to the contour adjustment data to obtain an object contour matched with the edge of the object;
step 1307, storing the position coordinate data of the plurality of adjusted contour points according to the position sequence relation among the plurality of adjusted contour points.
By the processing shown in fig. 14 or fig. 15, the newly created outline template can be adjusted to obtain a more ideal outline template.
Embodiments of the subject matter and the functional operations described in this application can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible, non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory, or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing instructions and data include all forms of non-volatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this document contains many specifics, these should not be construed as limitations on the scope of the disclosure, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this application in the context of separate embodiments can also be combined and implemented in a single embodiment, or in any suitable subcombination. Conversely, while features may be described above as acting in certain combinations, one or more features can be excised from a claimed combination, and the claimed combination may be further combined or modified.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in such order to achieve desirable results. Also, the separation of various system components in the embodiments should not be understood as requiring such separation in all embodiments. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A method for labeling the contour of an object in image data is characterized by comprising the following steps:
the marking device displays the image data of the current frame and receives input contour marking data aiming at an object in the image data; the contour marking data comprises contour shape related data and contour attribute data;
determining a corresponding pre-stored outline shape according to the outline shape related data; wherein the contour shape comprises contour points at a plurality of preset positions;
generating an object contour having a determined contour shape and conforming to the contour attribute data in a two-dimensional space in which the image data is displayed;
and storing the generated position coordinate data of the contour points at a plurality of preset positions on the contour of the object.
2. The method of claim 1, wherein the contour shape related data comprises object class data;
determining the contour shape of the object according to the contour labeling data comprises: determining the contour shape corresponding to the received object class data as the contour shape of the object according to a preset correspondence between object classes and contour shapes; or,
the contour shape related data comprises contour shape data;
determining the contour shape of the object according to the contour labeling data comprises: determining the contour shape indicated by the received contour shape data as the contour shape of the object.
3. The method of claim 1, wherein the contour points at the plurality of preset positions on the contour shape have a predetermined positional sequence relationship therebetween; the contour attribute data comprises contour position data and/or contour size data;
generating, in the two-dimensional space, an object contour having the determined contour shape and conforming to the contour attribute data comprises:
determining position coordinates of contour points at a plurality of preset positions on the contour shape according to the contour attribute data;
correspondingly generating contour points on the position coordinates determined in the two-dimensional space;
and generating the object contour according to the generated plurality of contour points and the preset position sequence relation among the plurality of contour points.
4. The method of claim 3, wherein determining position coordinates of contour points at a plurality of preset positions on the contour shape from the contour attribute data comprises:
and determining coordinates of the plurality of contour points corresponding to the image data according to the contour attribute data and preset positions of the plurality of contour points on the object contour.
5. The method of claim 1, further comprising: receiving profile attribute adjustment data;
and adjusting the generated object contour in the two-dimensional space according to the contour attribute adjustment data.
6. The method according to claim 5, wherein the contour attribute adjustment data comprises a displacement angle and a displacement distance of at least one contour point to be adjusted;
adjusting the generated object contour in the two-dimensional space according to the contour attribute adjustment data comprises:
determining the adjusted position of each contour point to be adjusted according to the displacement angle and the displacement distance of each contour point to be adjusted and the current position of each contour point to be adjusted;
regenerating at least one contour point with the adjusted position in a two-dimensional space according to the determined position;
and generating the adjusted object contour according to the adjusted contour points.
7. The method according to claim 5, wherein the contour attribute adjustment data includes position data of at least one newly added contour point; and a plurality of contour points on the object contour have a preset sequential relationship;
adjusting the generated object contour in the two-dimensional space according to the contour attribute adjustment data comprises:
in a two-dimensional space, generating newly added contour points correspondingly according to the position data of at least one newly added contour point;
and determining a new position sequence between at least one newly added contour point and the original contour points on the object contour according to the positions of the newly added contour points, the positions of the original contour points and the position sequence relationship, and generating the object contour according to the position sequence of the new contour points.
8. The method of claim 1, wherein the plurality of contour points on the object contour have a predetermined sequential relationship;
storing position coordinate data of a plurality of contour points, comprising: and correspondingly storing the position coordinate data of the plurality of contour points according to the sequence relation among the preset contour points.
9. The method of claim 1, further comprising:
receiving newly-built contour data, wherein the newly-built contour data comprises the position sequence relation of a plurality of contour points and the position coordinate data of each contour point;
and storing the position coordinate data of the plurality of contour points according to the position sequence relation among the plurality of contour points.
10. The method of claim 1, further comprising: the marking device prestores a plurality of contour shape identifiers and coordinates of contour points at a plurality of preset positions on each contour shape; the coordinates of the contour points include one or more of: preset coordinates and relative coordinates.
11. An apparatus for annotating an object contour in image data, comprising a processor and at least one memory, the at least one memory having at least one machine executable instruction stored therein, the processor executing the at least one machine executable instruction to perform the method according to any one of claims 1 to 10.
12. A non-volatile storage medium having stored thereon at least one machine executable instruction, the at least one machine executable instruction when executed by a processor implementing a method as claimed in any one of claims 1 to 10.
CN202010759545.6A 2020-07-31 2020-07-31 Method and device for labeling object contour in image data Pending CN112036443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010759545.6A CN112036443A (en) 2020-07-31 2020-07-31 Method and device for labeling object contour in image data


Publications (1)

Publication Number Publication Date
CN112036443A true CN112036443A (en) 2020-12-04

Family

ID=73583727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010759545.6A Pending CN112036443A (en) 2020-07-31 2020-07-31 Method and device for labeling object contour in image data

Country Status (1)

Country Link
CN (1) CN112036443A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113421319A (en) * 2021-06-30 2021-09-21 重庆小雨点小额贷款有限公司 Image processing method and device and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205827A (en) * 2015-10-16 2015-12-30 中科院成都信息技术股份有限公司 Auxiliary feature point labeling method for statistical shape model
CN107492135A (en) * 2017-08-21 2017-12-19 维沃移动通信有限公司 A kind of image segmentation mask method, device and computer-readable recording medium
CN109685870A (en) * 2018-11-21 2019-04-26 北京慧流科技有限公司 Information labeling method and device, tagging equipment and storage medium
CN110348415A (en) * 2019-07-17 2019-10-18 济南大学 A kind of efficient mask method and system of high-definition remote sensing target large data sets
CN110852138A (en) * 2018-08-21 2020-02-28 北京图森未来科技有限公司 Method and device for labeling object in image data
CN111339659A (en) * 2020-02-25 2020-06-26 上汽通用汽车有限公司 Method and device for quickly marking stepped hole in three-dimensional model


Similar Documents

Publication Publication Date Title
US11763575B2 (en) Object detection for distorted images
CN109685870B (en) Information labeling method and device, labeling equipment and storage medium
JP2019114059A (en) Determination device, repair cost determination system, determination method, and determination program
CN113343740B (en) Table detection method, device, equipment and storage medium
CN103227877B (en) Image processing system and image forming method
CN102467519B (en) Visualization plotting method and system based on geographic information system
CN113158895A (en) Bill identification method and device, electronic equipment and storage medium
CN112036443A (en) Method and device for labeling object contour in image data
CN109683834B (en) Gerber file conversion precision processing method, system, equipment and storage medium
US20140136966A1 (en) Method and System for Generating Instructions According to Change of Font Outline
CN112884844B (en) Method and device for calibrating panoramic image system and computer readable storage medium
US20230071291A1 (en) System and method for a precise semantic segmentation
WO2021082652A1 (en) Information display method and apparatus, and computer-readable storage medium
CN113743056A (en) Document conversion method based on paragraph shrinkage amount, computing device and storage medium
CN115393379A (en) Data annotation method and related product
JP2021047688A (en) Form recognition method and program
CN114781005B (en) Multi-party-based electronic signature method and device
CN113159008B (en) Passenger ticket travel itinerary construction method and device, computer equipment and storage medium
US20240054701A1 (en) Method and apparatus for generating directed distance field image, device, and storage medium
KR102597991B1 (en) Apparatus and method for adjusting label data for artificial intelligence learning
CN114779271B (en) Target detection method and device, electronic equipment and storage medium
US20240112437A1 (en) Estimation apparatus, model generation apparatus, and estimation method
US20240193980A1 (en) Method for recognizing human body area in image, electronic device, and storage medium
US10558774B1 (en) Electronic library and design generation using image and text processing
US20230119741A1 (en) Picture annotation method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination