CN117830804A - Image labeling method and device, electronic equipment and storage medium - Google Patents

Image labeling method and device, electronic equipment and storage medium

Info

Publication number
CN117830804A
CN117830804A · Application CN202311799659.3A
Authority
CN
China
Prior art keywords
contour
labeling
current
labeling object
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311799659.3A
Other languages
Chinese (zh)
Inventor
李哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Mega Technology Co Ltd
Original Assignee
Suzhou Mega Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Mega Technology Co Ltd filed Critical Suzhou Mega Technology Co Ltd
Priority to CN202311799659.3A
Publication of CN117830804A
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide an image labeling method and device, an electronic device, and a storage medium. The method includes the following steps: after the current erasing operation ends, performing the following erasure judgment operation for each first labeling object on the image: acquiring the contour region set of the current first labeling object after the current erasing operation ends, wherein the last contour region in the contour region set is the contour region formed earliest among the plurality of contour regions, and each contour region in the set includes the entire image region within its contour line; if the current first labeling object is in a hollow state after the current erasing operation ends, performing a hollow processing step according to the contour regions in the contour region set to determine newly added labeling objects; if the current first labeling object is in a non-hollow state after the current erasing operation ends, performing a truncation processing step according to the contour regions in the contour region set to determine newly added labeling objects. This scheme helps to improve image annotation efficiency.

Description

Image labeling method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer vision, and more particularly, to an image labeling method, an image labeling device, an electronic apparatus, and a storage medium.
Background
In recent years, computer vision has been widely applied in various fields. For example, computer vision techniques are used to detect surface defects of a product so that the product can be inspected or repaired in time. When computer vision is used to detect defects in a product under test, a defect detection model is typically trained with pre-labeled sample images, and the trained model is then used to process an image containing the product under test, so as to determine the defect regions of the product within that image.
In the related art, when a labeling error occurs while annotating a sample image, all labels on the sample image are typically cleared and the image is labeled again from scratch. This approach reduces the labeling efficiency of sample images and degrades the user experience.
Disclosure of Invention
The present application has been made in view of the above-described problems. The application provides an image labeling method, an image labeling device, electronic equipment and a storage medium.
According to an aspect of the present application, there is provided an image labeling method applied to an image on which a first labeling object whose labeling type is brush has been drawn. The method includes: after the current erasing operation ends, performing the following erasure judgment operation for each first labeling object on the image: acquiring the contour region set of the current first labeling object after the current erasing operation ends, wherein the last contour region in the contour region set is the contour region formed earliest among the plurality of contour regions, and each contour region in the set includes the entire image region within its contour line; if the current first labeling object is in a hollow state after the current erasing operation ends, performing a hollow processing step according to the contour regions in the contour region set to determine newly added labeling objects; and if the current first labeling object is in a non-hollow state after the current erasing operation ends, performing a truncation processing step according to the contour regions in the contour region set to determine newly added labeling objects.
Illustratively, performing the hollow processing step according to the contour regions in the contour region set to determine the newly added labeling objects specifically includes: determining the erasing region of the current erasing operation based on the last contour region and a first effective region of the current first labeling object, the first effective region being the effective region of the current first labeling object after the erasing operation; subtracting the erasing region from each contour region other than the last contour region to obtain a second effective region set; deduplicating the second effective region set to obtain a third effective region set; and determining a newly added labeling object based on each third effective region in the third effective region set.
Illustratively, the hollow processing step further includes: for each third effective region in the third effective region set, computing its intersection with every other third effective region in the set, obtaining intersection results in one-to-one correspondence with the other third effective regions; if any intersection result is non-empty and differs from the third effective region itself, determining that the third effective region is invalid; otherwise, determining the third effective region to be a newly added labeling object.
Illustratively, the hollow processing step further includes: subtracting all newly added labeling objects from the first effective region, and assigning the result to the current first labeling object.
Illustratively, performing the truncation processing step according to the contour regions in the contour region set to determine the newly added labeling objects specifically includes: for each contour region other than the last contour region in the contour region set, determining a newly added labeling object based on that contour region when it intersects the first effective region of the current first labeling object, the first effective region being the effective region of the current first labeling object after the erasing operation.
Determining a newly added labeling object based on a contour region that intersects the first effective region specifically includes: for each contour region other than the last contour region in the contour region set, adding the contour region to a valid set when it intersects the first effective region of the current first labeling object, and adding it to an invalid set when it does not; then, for each contour region stored in the valid set, subtracting all contour regions stored in the invalid set to obtain a newly added labeling object.
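The valid-set/invalid-set procedure above can be sketched as follows, modeling each contour region as a Python set of pixel coordinates (an assumption for illustration; the patent operates on WPF geometry data, and the function name `truncation_step` is hypothetical):

```python
def truncation_step(contour_regions, first_valid_region):
    """Split a truncated annotation into newly added annotation objects.

    contour_regions is ordered newest-first, so its last element is the
    first-formed (original) contour region, which is skipped here.
    """
    valid_set, invalid_set = [], []
    for region in contour_regions[:-1]:
        if region & first_valid_region:      # intersects the surviving area
            valid_set.append(region)
        else:
            invalid_set.append(region)
    new_objects = []
    for region in valid_set:
        for inv in invalid_set:
            region = region - inv            # subtract every invalid region
        new_objects.append(region)
    return new_objects
```

For the fig. 3 situation, the three later-formed contour pieces all intersect the surviving area and each becomes a new labeling object.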
Illustratively, the truncation processing step further includes: if the current first labeling object is in a non-hollow state after the current erasing operation ends, assigning the last contour region in the contour region set to the current first labeling object.
Illustratively, acquiring the contour region set of the current first labeling object after the current erasing operation ends specifically includes: judging, according to the erasing result of the current first labeling object, whether it has been completely erased; and when it has not been completely erased, acquiring its contour region set after the current erasing operation ends.
Illustratively, the acquiring step further includes: deleting the first labeling object from the image when it has been completely erased.
Illustratively, deleting a completely erased first labeling object from the image specifically includes: adding it to a to-be-deleted annotation set; the method further includes: deleting, on the image, each first labeling object in the to-be-deleted annotation set.
Illustratively, the erasure judgment operation further includes: adding each newly added labeling object to a to-be-added annotation set; the method further includes: displaying, on the image, each labeling object in the to-be-added annotation set.
Illustratively, the method further includes: determining that the current first labeling object is in a hollow state when any contour region other than the last contour region in the contour region set intersects the last contour region, and in a non-hollow state otherwise; alternatively, for each contour region other than the last contour region in the contour region set, determining that the current first labeling object is in a hollow state when that contour region intersects the last contour region, and in a non-hollow state otherwise.
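The intersection-based hollow test above can be sketched as follows, with regions modeled as pixel-coordinate sets (an illustrative assumption; `is_hollow` is a hypothetical name). Because each contour region includes all image area inside its contour line, an inner contour region overlaps the outer (last) one, while the separate pieces of a truncated object do not:

```python
def is_hollow(contour_regions):
    """Hollow if any later-formed contour region overlaps the last
    (first-formed, outermost) contour region."""
    last = contour_regions[-1]
    return any(region & last for region in contour_regions[:-1])
```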
According to another aspect of the present application, there is provided an image annotation device applied to an image on which a first annotation object whose annotation type is brush has been drawn. The device includes: an execution module configured to perform, after the current erasing operation ends, the erasure judgment operation for each first annotation object on the image. The execution module includes: an acquisition sub-module configured to acquire the contour region set of the current first annotation object after the current erasing operation ends, wherein the last contour region in the set is the contour region formed earliest among the plurality of contour regions, and each contour region includes the entire image region within its contour line; a first execution sub-module configured to perform the hollow processing step on the contour regions if the current first annotation object is in a hollow state after the current erasing operation ends, so as to determine newly added annotation objects based on the plurality of contour regions; and a second execution sub-module configured to perform the truncation processing step on the contour regions if the current first annotation object is in a non-hollow state after the current erasing operation ends, so as to determine newly added annotation objects based on the plurality of contour regions.
According to still another aspect of the present application, there is provided an electronic device including a processor and a memory, wherein the memory stores computer program instructions that, when executed by the processor, perform the image labeling method described above.
According to still another aspect of the present application, there is provided a storage medium having stored thereon program instructions which, when executed, perform the image labeling method described above.
In the above technical solution, the erasing operation not only enables decremental correction of the first labeling object, but also allows corresponding processing according to whether the erasing result of the first labeling object is hollow, so as to determine the newly added labeling objects; that is, through the erasing process, one first labeling object is automatically split into a plurality of first labeling objects. In short, this scheme helps to improve both the efficiency and the flexibility of image annotation.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 shows a schematic flow chart of an erase determination operation according to one embodiment of the present application;
FIG. 2 illustrates a schematic diagram of a first annotation object after an erase operation, according to one embodiment of the present application;
FIG. 3 illustrates another schematic diagram of a first annotation object after an erase operation, according to another embodiment of the present application;
FIG. 4 shows a further schematic diagram of a first annotation object after an erase operation, according to yet another embodiment of the application;
FIG. 5 shows a schematic block diagram of an image annotation device according to one embodiment of the application; and
fig. 6 shows a schematic block diagram of an electronic device according to one embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein. Based on the embodiments of the present application described herein, all other embodiments that may be made by one skilled in the art without the exercise of inventive faculty are intended to fall within the scope of protection of the present application.
In order to at least partially solve the above-mentioned problems, an embodiment of the present application provides an image labeling method, in which a first labeling object having a labeling type of a brush is drawn on an image. The method comprises the following steps: and after the current erasing operation is finished, executing erasing judgment operation on each first labeling object on the image.
It can be appreciated that the image labeling method described in the present application may be implemented on any visual software platform. For example, the vision platform may be a firearms vision software platform.
Optionally, the erasing operation may be performed with an eraser tool. In this embodiment, the user activates the eraser tool and moves the eraser over the image by moving the mouse, thereby performing the erasing operation. During this process, the software platform monitors the mouse-movement track, and new point data is obtained after each mouse movement. The path image data of the eraser's movement is obtained from all point data collected during the movement. In a specific embodiment, Microsoft's LineGeometry class may be invoked: instantiating it with the current point data and the eraser size (i.e., the pixel value corresponding to the eraser tool) yields the image data of the region covered by the eraser in the erasing operation.
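A minimal sketch of accumulating the eraser's covered area from the recorded mouse track, assuming a square eraser footprint on a pixel grid (the patent instead widens the track with Microsoft's LineGeometry; `eraser_coverage` and its parameters are illustrative assumptions):

```python
def eraser_coverage(track_points, eraser_size):
    """Union of square eraser footprints along the recorded mouse track."""
    half = eraser_size // 2
    covered = set()
    for (px, py) in track_points:
        for dx in range(-half, half + 1):
            for dy in range(-half, half + 1):
                covered.add((px + dx, py + dy))
    return covered
```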
Optionally, the end of the current erasing operation indicates that one erasing operation is complete. Taking the eraser-tool embodiment described above as an example, the user presses and drags the mouse to move the eraser over the image; when the user releases the mouse, one erasing operation can be considered to have ended. Alternatively, the end of an erasing operation may be determined in response to an end instruction input by the user.
Alternatively, the annotation type of the annotation object may be determined by the attribute of the annotation data corresponding to the annotation object. The attribute may be defined by the software platform. For example, when the attribute of the annotation data corresponding to the annotation object is "json path", it may be determined that the annotation object is the first annotation object.
Alternatively, the annotation types may include polygons, brushes, rectangles, and points; before performing the erase judgment operation, the method may further include the steps of: and judging whether a first labeling object with the labeling type of a painting brush exists on the image. In this embodiment, the annotation type of each annotation object on the image may be predetermined, and when there is a first annotation object with the annotation type being the brush on the image, the erasure judgment operation is performed on each first annotation object on the image.
Optionally, after the current erasing operation is finished, performing an erasing judgment operation on each first labeling object on the image may specifically include the following steps: for each annotation object on the image, determining whether the annotation object is a first annotation object. And when the marked object is the first marked object, executing erasure judgment operation on the marked object. In this embodiment, all annotation objects on the image may be cycled through in sequence. When the labeling object is the first labeling object, the next process (i.e., performing the erasure judgment operation) is entered. When the annotation object is not the first annotation object (for example, the annotation type of the annotation object is a polygon), the next annotation object may be continued as the current annotation object, and the above steps may be repeated. In this embodiment, by performing the above steps sequentially on the annotation objects on the image, omission of the first annotation object is facilitated.
Optionally, after the current erasing operation is finished, performing an erasing judgment operation on each first labeling object on the image may specifically include the following steps: and after the current erasing operation is finished, judging whether each labeling object on the image is a first labeling object or not in sequence. And when the marked object is a first marked object, adding the first marked object into the to-be-processed object set. And executing erasure judgment operation for each first labeling object in the object set to be processed. In this embodiment, all the first annotation objects in the image may be first determined, and then the erasure judgment operation may be sequentially performed on the first annotation objects. This solution contributes to an improvement in image processing efficiency.
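The collect-then-judge variant above can be sketched as follows; the dictionary-based object representation and the names `erase_finished`/`erase_judgment` are assumptions for illustration:

```python
def erase_finished(annotation_objects, erase_judgment):
    """Gather brush-type objects into a pending set, then judge each one."""
    pending = [obj for obj in annotation_objects
               if obj.get("type") == "brush"]   # first labeling objects only
    for obj in pending:
        erase_judgment(obj)
    return len(pending)
```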
Fig. 1 shows a schematic flow chart of an erasure judgment operation according to an embodiment of the present application. As shown in fig. 1, the erasure judgment operation 100 may include step S110, step S120, and step S130.
In step S110, a contour region set of the current first labeling object after the current erasing operation is finished is obtained, wherein a last contour region in the contour region set is a contour region formed first in a plurality of contour regions; each contour region in the set of contour regions includes all of the image regions within the contour line.
Optionally, the contour region set is determined from the remaining effective region of the first labeling object after the erasing operation, where the remaining effective region refers to the region obtained by subtracting the image data of the current erasing path from the first labeling object. In some embodiments, the first labeling object after the current erasing operation may be obtained through Microsoft's PathGeometry type. The resulting PathGeometry data has a Figures property that represents the contour regions of the current first annotation object; specifically, if the number of Figures in the PathGeometry data is n, the current first annotation object has n contour regions.
After determining the contour regions, the contour regions may be sequentially added to the contour region set according to the order in which the contour regions are formed. Wherein the later the formation time of the contour region, the earlier the ordering of the contour region in the contour region set.
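A minimal sketch of this ordering, assuming each contour region is paired with a formation timestamp (the pairing and the function name `build_contour_region_set` are illustrative assumptions):

```python
def build_contour_region_set(regions_with_time):
    """Order contour regions so later-formed regions come first and the
    first-formed region ends up last in the set."""
    ordered = sorted(regions_with_time, key=lambda item: item[0], reverse=True)
    return [region for _, region in ordered]
```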
The last contour region in the set of contour regions is described below by way of several specific embodiments.
FIG. 2 illustrates a schematic diagram of a first annotation object after an erase operation, according to one embodiment of the present application. As shown in fig. 2, after the current erasing operation is performed on the first labeling object shown in fig. 2, PathGeometry data is obtained by Microsoft's PathGeometry.CreateFromGeometry method, and the contour region set of the resulting PathGeometry data is examined as follows: the contour lines in the image may be referred to, from outside to inside, as contour line 1, contour line 2, and contour line 3. The contour region corresponding to contour line 1 is the complete region formed by region A, region B, and region C; the contour region corresponding to contour line 2 is the region formed by region B and region C; and the contour region corresponding to contour line 3 is region C. In this embodiment, region B is the erased portion. Among these contour regions, contour line 1 has existed all along, so its contour region was generated first. Thus, the contour region corresponding to contour line 1 is the last contour region in the contour region set. Fig. 2 shows the first labeling object in a hollow state, so a hollow processing step is required in the subsequent steps.
FIG. 3 is a schematic diagram of a first annotation object after an erase operation, according to another embodiment of the present application. As shown in fig. 3, after the current erasing operation is performed on the first labeling object shown in fig. 3, PathGeometry data is obtained by Microsoft's PathGeometry.CreateFromGeometry method, and the contour region set of the resulting data is examined as follows: the left image in fig. 3 shows the object before the erasing operation and the right image shows it afterwards; the contour regions in the image may be referred to, from left to right, as contour 1, contour 2, contour 3, and contour 4. In this embodiment, the erasing operation may proceed from right to left, in which case contour 4, contour 3, contour 2, and contour 1 are generated in that order. The contour regions in the contour region set are then contour 1, contour 2, contour 3, and contour 4, in that order. Fig. 3 shows the first labeling object in a truncated state, so a truncation processing step is required in the subsequent steps.
In step S120, if the current first labeling object is in a hollow state after the current erasing operation is finished, a hollow processing step is performed according to the contour regions in the contour region set, so as to determine a newly added labeling object.
After the contour region set of the current first labeling object after the current erasing operation is finished is obtained, the state of the current first labeling object after the current erasing operation is finished can be judged first. In other words, it may be first determined whether the external contour of the current first annotation object is destroyed after the current erase operation is ended. If the contour is not destroyed, it can be determined that the current first labeling object is in a hollow state after the current erasing operation is finished. As shown in fig. 2, the erasing operation is only performed on the inside of the current first labeling object, and the outermost contour of the current first labeling object is not changed before and after the erasing operation is finished. Thus, it can be determined that the current first annotation object is in a hollow state after the current erase operation is ended.
When it is determined that the current first labeling object is in a hollow state after the current erasing operation is finished, step S120 may be executed to perform hollow processing on the contour regions in the contour region set, thereby determining a new labeling object.
In step S130, if the current first labeling object is in a non-hollow state after the current erasing operation is finished, a truncation processing step is performed according to the contour regions in the contour region set, so as to determine a newly added labeling object.
As described above, the state of the current first labeling object after the current erasing operation ends can be determined according to whether its external contour has been destroyed. If the contour has been destroyed, it can be determined that the current first labeling object is in a non-hollow state after the current erasing operation ends. As shown in fig. 3, the outermost contour of the current first labeling object differs before and after the erasing operation, so it can be determined that the current first labeling object is in a non-hollow (i.e., truncated) state after the current erasing operation ends. In this case, step S130 may be performed to apply truncation processing to the contour regions in the contour region set, thereby determining the newly added labeling objects. Moreover, for the first labeling object shown in fig. 3, the subsequent truncation processing is unaffected by the order of erasure — whether the user erases from left to right or in any other order, the resulting contour region set leads to the same outcome.
In the above technical solution, the erasing operation not only enables decremental correction of the first labeling object, but also allows corresponding processing according to whether the erasing result of the first labeling object is hollow, so as to determine the newly added labeling objects; that is, through the erasing process, one first labeling object is automatically split into a plurality of first labeling objects. In short, this scheme helps to improve both the efficiency and the flexibility of image annotation.
Illustratively, performing the hollow processing step according to the contour regions in the contour region set to determine the newly added labeling objects specifically includes the following steps: determining the erasing region of the current erasing operation based on the last contour region and the first effective region of the current first labeling object, the first effective region being the effective region of the current first labeling object after the erasing operation; subtracting the erasing region from each contour region other than the last contour region to obtain a second effective region set; deduplicating the second effective region set to obtain a third effective region set; and determining a newly added labeling object based on each third effective region in the third effective region set.
It can be understood that when the current first labeling object is in a hollow state after the current erasing operation is finished, the contour line corresponding to the last contour region is consistent with the contour line of the first labeling object before the erasing operation. Thus, the last contour region can be considered as the first annotation object prior to the erase operation. At this time, the erasing area of the current erasing operation can be determined by subtracting the first effective area of the current first labeling object from the first labeling object before the erasing operation.
The determination of the erasure area will be described by taking fig. 2 as an example. In the embodiment shown in fig. 2, the last contour region includes a region a, a region B and a region C, and the first effective region of the current first labeling object includes a region a and a region C, in which case, the erasing region is obtained by subtracting the first effective region of the current first labeling object from the last contour region, and the erasing region is the region B.
After determining the current erasure area, each of the plurality of contour areas except for the last contour area may be differenced from the erasure area to obtain a second set of valid areas. It will be appreciated that in the set of contour regions, each contour line corresponds to a contour region. At this time, some of these contour regions may include an erasure region (for example, in the embodiment shown in fig. 2, the contour region corresponding to the contour line 2 includes an erasure region). By making each of the plurality of contour regions except for the last contour region differ from the erased region, an effective region (which may be referred to as a second effective region) corresponding to each contour region can be obtained. The corresponding effective areas of the contour areas form a second effective area set together. The process will be described with reference to the embodiment shown in fig. 2. In the embodiment shown in fig. 2, the contour area corresponding to the contour line 2 may be differentiated from the erased area to obtain the effective area corresponding to the contour line 2 (i.e., the area C in the drawing). Meanwhile, the contour area corresponding to the contour line 3 may be differenced from the erasing area to obtain an effective area (i.e., area C in the figure) corresponding to the contour line 3. In this embodiment, the second set of active areas includes an active area corresponding to contour 2 and an active area corresponding to contour 3.
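The two steps above — deriving the erase region from the last contour region and differencing it from the remaining contour regions — can be sketched with pixel-coordinate sets (an illustrative assumption; the patent uses geometry differencing, and `second_valid_regions` is a hypothetical name):

```python
def second_valid_regions(contour_regions, first_valid_region):
    """Hollow case: erase region = last contour region minus the surviving
    area; subtract it from every other contour region."""
    last = contour_regions[-1]                   # first-formed, outer contour
    erase_region = last - first_valid_region     # pixels removed by the eraser
    return [region - erase_region for region in contour_regions[:-1]]
```

For the fig. 2 example, the last contour region is A∪B∪C and the first effective region is A∪C, so the erase region evaluates to B, and the contour regions of contour lines 2 and 3 both reduce to region C.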
After the second effective area set is obtained, the second effective area set may be deduplicated to obtain a third effective area set. In some embodiments, the second effective area set may be deduplicated using any existing or future-developed deduplication algorithm. In a specific embodiment, LINQ can be used directly to deduplicate the second effective area set. In this embodiment, the image data corresponding to each second effective area in the second effective area set may be grouped, each piece of image data is then converted into a corresponding character string by using the ToString method, and a uniqueness judgment is performed, so as to implement deduplication for the second effective area set.
The deduplication operation will be described by taking fig. 2 as an example. In fig. 2, the effective area corresponding to the contour line 2 and the effective area corresponding to the contour line 3 are both the region C. In this embodiment, the second effective area set may be deduplicated using LINQ, so that only one of the effective area corresponding to the contour line 2 and the effective area corresponding to the contour line 3 is retained.
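A minimal Python analog of the LINQ-based deduplication can key each region by a stable string, playing the role of the ToString conversion described above (the set-based region model is an illustrative simplification):

```python
# Sketch of deduplicating the second effective area set: each region is
# converted to a stable string key (analogous to ToString), and only the
# first region per key is kept.

def deduplicate(regions):
    seen = set()
    result = []
    for region in regions:
        key = str(tuple(sorted(region)))  # stable string key per region
        if key not in seen:               # uniqueness judgment
            seen.add(key)
            result.append(region)
    return result

second_effective_set = [frozenset({"C"}), frozenset({"C"})]  # fig. 2 case
third_effective_set = deduplicate(second_effective_set)
print(third_effective_set)  # [frozenset({'C'})] -- only one copy retained
```

Sorting before building the key makes the key independent of element order, so two regions with the same contents always collide.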
After the third set of active areas is obtained, a new annotation object may be determined based on each third active area in the third set of active areas. For example, each third effective area may be directly determined as an added annotation object. For another example, it may be further determined whether an intersection exists between each third effective area in the third effective area set (i.e., there exists a partially repeated area), and then after the third effective area set is further de-duplicated, the remaining effective areas are determined as newly added labeling objects.
In the above technical solution, when the current first labeling object is in a hollow state after the current erasing operation is finished, the newly added labeling object may be accurately determined according to the result of the erasing operation. The scheme is helpful for improving the accuracy of the confirmed newly added labeling objects.
Illustratively, executing the hollow processing step according to the contour regions in the contour region set to determine a newly added labeling object further specifically includes the following steps: for each third effective area in the third effective area set, respectively acquiring the intersection of the third effective area and every other third effective area in the third effective area set, so as to obtain intersection results in one-to-one correspondence with all the other third effective areas in the third effective area set; determining that the third effective area is invalid when any one of the intersection results is not empty and that intersection result is different from the third effective area; otherwise, determining the third effective area as a newly added labeling object.
In a specific embodiment, the Geometry.Combine method may be utilized, with the GeometryCombineMode.Intersect identifier passed in, to intersect each third effective area with all other third effective areas in the third effective area set.
Fig. 4 is a schematic diagram of a first labeling object according to a further embodiment of the application after an erasing operation. As shown in fig. 4, the respective contour lines in the image may be sequentially referred to as a contour line 1, a contour line 2, a contour line 3, a contour line 4, a contour line 5, and a contour line 6 from outside to inside. The contour region corresponding to the contour line 1 includes a region A, a region B, a region C, a region D, a region E, and a region F. The contour region corresponding to the contour line 2 includes the region B, the region C, the region D, the region E, and the region F. The contour region corresponding to the contour line 3 includes the region C. The contour region corresponding to the contour line 4 includes the region D. The contour region corresponding to the contour line 5 includes the region E. The contour region corresponding to the contour line 6 includes the region F. The contour region corresponding to the contour line 1 is the last contour region in the contour region set, and the erasure area is the region B. In this embodiment, after deduplication of the second effective area set, a third effective area set may be obtained. The third effective areas in the third effective area set are, in order: a third effective area corresponding to the contour line 2 (which may be simply referred to as effective area 1), including the region C, the region D, the region E, and the region F; a third effective area corresponding to the contour line 3 (which may be simply referred to as effective area 2), including the region C; a third effective area corresponding to the contour line 4 (which may be simply referred to as effective area 3), including the region D; a third effective area corresponding to the contour line 5 (which may be simply referred to as effective area 4), including the region E; and a third effective area corresponding to the contour line 6 (which may be simply referred to as effective area 5), including the region F.
In this embodiment, for each third effective area in the third effective area set, the third effective area is respectively intersected with all other third effective areas in the third effective area set, so as to obtain intersection results in one-to-one correspondence with all the other third effective areas in the third effective area set; the third effective area is determined to be invalid when any one of the intersection results is not empty and that intersection result is different from the third effective area; otherwise, the third effective area is determined as a newly added labeling object. For convenience of description, the condition that an intersection result is not empty and differs from the third effective area is referred to as the invalidation condition. Specifically, for the effective area 1 in the third effective area set, the effective area 1 may be intersected with the effective area 2, the effective area 3, the effective area 4, and the effective area 5 in the third effective area set, respectively. Taking the intersection result between the effective area 1 and the effective area 2 as an example, this intersection result is the region C (i.e., the intersection result is not empty), and the region C is different from the effective area 1, so the effective area 1 is invalid. For the effective area 2 in the third effective area set, the effective area 2 may be intersected with the effective area 1, the effective area 3, the effective area 4 and the effective area 5, respectively. No intersection exists between the effective area 2 and the effective areas 3, 4 and 5, i.e., the corresponding intersection results are empty. The intersection result between the effective area 2 and the effective area 1 is the region C, which is equal to the effective area 2. Therefore, none of the intersection results satisfies the invalidation condition. In this case, the effective area 2 may be determined as a newly added labeling object.
For the effective area 3 in the third effective area set, the effective area 3 may be intersected with the effective areas 1, 2, 4 and 5, respectively. No intersection exists between the effective area 3 and the effective areas 2, 4 and 5, i.e., the corresponding intersection results are empty. The intersection result between the effective area 3 and the effective area 1 is the region D, which is equal to the effective area 3. Therefore, none of the intersection results satisfies the invalidation condition. In this case, the effective area 3 may be determined as a newly added labeling object. For the effective area 4 in the third effective area set, the effective area 4 may be intersected with the effective areas 1, 2, 3 and 5, respectively. No intersection exists between the effective area 4 and the effective areas 2, 3 and 5, i.e., the corresponding intersection results are empty. The intersection result between the effective area 4 and the effective area 1 is the region E, which is equal to the effective area 4. Therefore, none of the intersection results satisfies the invalidation condition. In this case, the effective area 4 may be determined as a newly added labeling object. For the effective area 5 in the third effective area set, the effective area 5 may be intersected with the effective areas 1, 2, 3 and 4, respectively. No intersection exists between the effective area 5 and the effective areas 2, 3 and 4, i.e., the corresponding intersection results are empty. The intersection result between the effective area 5 and the effective area 1 is the region F, which is equal to the effective area 5. Therefore, none of the intersection results satisfies the invalidation condition. In this case, the effective area 5 may be determined as a newly added labeling object.
Finally, the newly added labeling objects in this embodiment can be determined to be the effective area 2, the effective area 3, the effective area 4, and the effective area 5. That is, the newly added labeling objects are the region C, the region D, the region E, and the region F in the drawing.
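The invalidation condition walked through above can be sketched on the fig. 4 data, with regions modeled as frozensets (an illustrative simplification of the PathGeometry intersection comparisons):

```python
# Sketch of the invalidation condition on the fig. 4 data: a third effective
# area is invalid if its intersection with some other area is non-empty and
# differs from the area itself; otherwise it becomes a new labeling object.

third_effective_set = [
    frozenset({"C", "D", "E", "F"}),  # effective area 1 (contour line 2)
    frozenset({"C"}),                 # effective area 2 (contour line 3)
    frozenset({"D"}),                 # effective area 3 (contour line 4)
    frozenset({"E"}),                 # effective area 4 (contour line 5)
    frozenset({"F"}),                 # effective area 5 (contour line 6)
]

def new_labeling_objects(areas):
    kept = []
    for i, area in enumerate(areas):
        invalid = False
        for j, other in enumerate(areas):
            if i == j:
                continue
            inter = area & other
            if inter and inter != area:   # non-empty and different -> invalid
                invalid = True
                break
        if not invalid:
            kept.append(area)
    return kept

print(new_labeling_objects(third_effective_set))
# effective areas 2-5 survive: [{'C'}, {'D'}, {'E'}, {'F'}]
```

Effective area 1 is dropped because its intersection with effective area 2 (the region C) is non-empty yet smaller than itself, matching the text's conclusion.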
In the above technical solution, each third effective area may be intersected with all other third effective areas in the third effective area set, and whether the third effective area is invalid may be determined according to the intersection results, which helps remove duplicate areas. Compared with directly judging each effective area against a validity condition, this scheme can avoid missing some complex erasure cases, thereby helping to better guarantee the deduplication effect and avoid duplication among the newly added labeling objects.
Illustratively, executing the hollow processing step according to the contour regions in the contour region set to determine a newly added labeling object further specifically includes the following step: differencing the first effective area from all the newly added labeling objects, and assigning the obtained result as the current first labeling object.
This process will be described by taking fig. 4 as an example. In the embodiment shown in fig. 4, the newly added labeling objects are the region C, the region D, the region E, and the region F in this order. The first effective area includes the region A, the region C, the region D, the region E, and the region F. In this embodiment, the first effective area may be differenced from all the newly added labeling objects to obtain a difference result, i.e., the region A. The region A may then be assigned as the current first labeling object. In a specific embodiment, the ToString result of the PathGeometry type data corresponding to the region A may be assigned to the JsonPath attribute corresponding to the first labeling object.
In the above technical solution, the result obtained by directly making the difference between the first effective area and all the newly added labeling objects may be assigned as the current first labeling object. Thus, the current first annotation object can be updated directly. The scheme is helpful for improving the efficiency of image annotation.
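The update of the current first labeling object can be sketched on the fig. 4 data, again modeling regions as frozensets in place of the PathGeometry difference and assignment:

```python
# Sketch of updating the current first labeling object (fig. 4 data): the
# first effective area minus all newly added labeling objects leaves the
# region to re-assign as the current first labeling object.

first_effective_area = frozenset({"A", "C", "D", "E", "F"})
new_objects = [frozenset({"C"}), frozenset({"D"}),
               frozenset({"E"}), frozenset({"F"})]

remaining = first_effective_area
for obj in new_objects:
    remaining = remaining - obj       # subtract each new labeling object in turn

print(remaining)  # frozenset({'A'}) -- assigned as the current first object
```

Only the region A remains, so the current first labeling object is updated in place rather than deleted and re-created.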
Illustratively, executing the truncation processing step according to the contour regions in the contour region set to determine a newly added labeling object specifically includes the following step: for each contour region except the last contour region in the contour region set, determining a newly added labeling object based on the contour region when the contour region intersects a first effective region of the first labeling object, wherein the first effective region is the effective region of the current first labeling object after the erasing operation.
When the current first labeling object is in a non-hollow state after the current erasing operation is finished, it may be judged, for each contour region except the last contour region in the contour region set, whether the contour region intersects the first effective region of the first labeling object. In some embodiments, each contour region may be referred to as a configuration sub-item. The current configuration sub-item may be converted into PathGeometry type data according to the PathGeometry.CreateFromGeometry method. The PathGeometry type data corresponding to the current configuration sub-item is then intersected and compared with the PathGeometry type data corresponding to the first effective region of the first labeling object, and if the contour region intersects the first effective region of the first labeling object, a newly added labeling object is determined based on the contour region. In a specific embodiment, the intersection between the contour region and the first effective region of the first labeling object may be determined by using the Geometry.Combine method. Specifically, the PathGeometry type data corresponding to the contour region and the PathGeometry type data corresponding to the first effective region of the first labeling object can be passed in, together with the GeometryCombineMode.Intersect identifier, so as to calculate the intersection of the two pieces of PathGeometry type data. If the PathGeometry type data corresponding to the obtained intersection is empty, the contour region does not intersect the first effective region of the first labeling object; otherwise, the contour region intersects the first effective region of the first labeling object.
In the above technical solution, when the current first labeling object is in a non-hollow state after the current erasing operation is finished, the newly added labeling object may be accurately determined according to the result of the erasing operation. The scheme is helpful for improving the accuracy of the confirmed newly added labeling objects.
Illustratively, for each contour region except the last contour region in the contour region set, determining a newly added labeling object based on the contour region when the contour region intersects the first effective region of the first labeling object specifically includes the following steps: for each contour region except the last contour region in the contour region set, adding the contour region to a valid set when the contour region intersects the first effective region of the current first labeling object; adding the contour region to an invalid set when the contour region does not intersect the first effective region of the current first labeling object; and for each contour region stored in the valid set, differencing the contour region from all contour regions stored in the invalid set to obtain a newly added labeling object.
In this example, the valid set and the invalid set may be pre-established. The valid set may also be referred to as a non-empty image set, and the invalid set may also be referred to as an empty image set. In this example, it may be sequentially judged whether each contour region of the contour region set other than the last contour region intersects the first effective region of the current first labeling object, thereby determining the contour regions of the contour region set that belong to the valid set and the contour regions that belong to the invalid set.
After it is sequentially judged whether each contour region except the last contour region in the contour region set intersects the first effective region of the current first labeling object, and the corresponding valid set and invalid set are obtained, each contour region stored in the valid set may be differenced from all contour regions stored in the invalid set, so as to obtain a newly added labeling object. For example, suppose the valid set includes the contour 1 and the contour 2, and the invalid set includes the contour 3 and the contour 4. The contour 3 and the contour 4 can be subtracted from the contour 1 in sequence, so as to obtain the newly added labeling object corresponding to the contour 1. Similarly, the contour 3 and the contour 4 can be subtracted from the contour 2 in sequence, so as to obtain the newly added labeling object corresponding to the contour 2. In a specific embodiment, for each contour region stored in the valid set, the contour region may be sequentially differenced from each contour region stored in the invalid set using the Geometry.Combine method.
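The partition-then-subtract logic of the truncation step can be sketched as follows. The contour contents here are hypothetical (the source does not give them), and frozensets stand in for PathGeometry data:

```python
# Sketch of the truncation step: partition contour regions into a valid set
# (those intersecting the first effective region) and an invalid set, then
# subtract every invalid region from each valid one.
# Contour contents below are hypothetical, for illustration only.

first_effective_region = frozenset({"P", "Q"})
contour_regions = {
    "contour 1": frozenset({"P", "X"}),  # intersects -> valid set
    "contour 2": frozenset({"Q", "Y"}),  # intersects -> valid set
    "contour 3": frozenset({"X"}),       # disjoint   -> invalid set
    "contour 4": frozenset({"Y"}),       # disjoint   -> invalid set
}

valid, invalid = [], []
for name, region in contour_regions.items():
    (valid if region & first_effective_region else invalid).append(region)

# Each valid region minus all invalid regions yields a new labeling object.
new_objects = []
for region in valid:
    for hole in invalid:
        region = region - hole
    new_objects.append(region)

print(new_objects)  # [frozenset({'P'}), frozenset({'Q'})]
```

Subtracting the invalid regions strips the erased (empty) portions out of each valid contour, which is what prevents empty regions from surviving inside the new labeling objects.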
In the above technical solution, each contour region stored in the valid set may be respectively differenced from all contour regions stored in the invalid set, so as to obtain a newly added labeling object, thereby preventing empty regions from remaining inside the contour regions in the valid set. The scheme is beneficial to improving the accuracy of image labeling.
Illustratively, the step of performing truncation processing according to the contour region in the contour region set to determine an added labeling object further specifically includes: and if the current first labeling object is in a non-hollow state after the current erasing operation is finished, assigning the last contour region in the contour region set as the current first labeling object.
This example is illustrated by taking fig. 3 as an example. In the embodiment shown in fig. 3, the contour regions in the contour region set are the contour 1, the contour 2, the contour 3, and the contour 4 in this order. In this case, the contour region 4 may be assigned as the current first labeling object. In a specific embodiment, the ToString result of the PathGeometry type data corresponding to the last contour region may be assigned to the JsonPath attribute corresponding to the first labeling object.
In the above technical solution, the last contour region in the contour region set may be directly assigned as the current first labeling object. Therefore, the current first labeling object can be directly updated without newly building a new labeling object corresponding to the last contour region in the contour region set or deleting the current first labeling object. The scheme is helpful for improving the efficiency of image annotation.
The method for acquiring the contour region set of the current first labeling object after the current erasing operation is finished specifically comprises the following steps: judging whether the current first labeling object is completely erased or not according to the erasing result of the current first labeling object; and when the current first labeling object is not completely erased, acquiring a contour region set of the current first labeling object after the current erasing operation is finished.
Optionally, whether the current first labeling object is completely erased may be determined according to the attribute field corresponding to the current first labeling object. As described above, the attribute of the annotation data of the first annotation object defined by the software platform is "JsonPath". In some embodiments, it may be determined whether the attribute field in the json path attribute corresponding to the current first annotation object is the character "F1". If the attribute field in the JsonPath attribute corresponding to the current first labeling object is not the character "F1", it can be determined that the current first labeling object is not completely erased. At this time, the contour region set of the current first labeling object after the current erasing operation is finished may be obtained, and the subsequent steps are executed to determine the newly added labeling object.
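The complete-erasure check can be sketched as a simple field comparison. The character "F1" comes from the text above; the dictionary layout and the field name `attribute` are assumptions for illustration:

```python
# Sketch of the complete-erasure check: the attribute field of the labeling
# object's JsonPath data is compared with the character "F1". The dict
# layout and the key name "attribute" are hypothetical.

def is_completely_erased(json_path: dict) -> bool:
    return json_path.get("attribute") == "F1"

print(is_completely_erased({"attribute": "F1"}))         # True  -> delete it
print(is_completely_erased({"attribute": "M0,0 L1,1"}))  # False -> get contours
```

When the check returns False, the contour region set is acquired and the subsequent hollow/truncation steps run; when it returns True, the object is simply removed.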
In the technical scheme, when the current first labeling object is not completely erased, the contour region set of the current first labeling object after the current erasing operation is finished is obtained, so that the image labeling efficiency is improved.
The method for obtaining the contour region set of the current first labeling object after the current erasing operation is finished specifically further comprises the following steps: the first annotation object is deleted from the image when the first annotation object is completely erased.
As described above, it may be determined whether the attribute field in the json path attribute corresponding to the current first annotation object is the character "F1" to determine whether the first annotation object is completely erased. If the attribute field in the JsonPath attribute corresponding to the current first labeling object is the character "F1", it can be determined that the current first labeling object is completely erased, and at this time, the first labeling object can be directly deleted from the image.
According to the technical scheme, when the first labeling object is completely erased, the first labeling object is directly deleted from the image, so that the image labeling efficiency is improved, and the first labeling object which is completely erased can be prevented from interfering with the subsequent steps.
Illustratively, when the first labeling object is completely erased, deleting the first labeling object from the image specifically includes: when the first annotation object is completely erased, the first annotation object is added to the annotation set to be deleted.
The method further comprises the steps of: and deleting each first annotation object in the annotation set to be deleted on the image.
In this embodiment, it may be determined in sequence whether each first labeling object is completely erased, and when the first labeling object is completely erased, the first labeling object is added to the labeling set to be deleted. And then, after the erasure judgment operation corresponding to each first labeling object is completed, deleting all the first labeling objects in the labeling set to be deleted. The specific steps for determining whether each first labeling object is completely erased are described in detail above and are not repeated.
According to the technical scheme, the first labeling objects to be deleted can be deleted uniformly, so that the whole logic and the corresponding processing codes of the image labeling method are simplified, and the efficiency of image labeling is improved.
Illustratively, the erase judgment operation further includes: adding the new annotation object into the to-be-added annotation set.
The method further comprises the steps of: and for each labeling object in the labeling set to be newly added, displaying the labeling object on the image.
In this embodiment, after each new annotation object is determined, the new annotation object may be added to the set of annotations to be added. And then, after the erasure judgment operation corresponding to each first labeling object is completed, displaying each labeling object in the labeling set to be newly added on the image. Therefore, the scheme can uniformly display all newly added labeling objects on the image, thereby being beneficial to simplifying the overall logic and corresponding processing codes of the image labeling method and improving the efficiency of image labeling.
Illustratively, the method may further comprise the following steps: when any one contour region except the last contour region in the contour region set intersects the last contour region, determining that the current first labeling object is in a hollow state; otherwise, determining that the current first labeling object is in a non-hollow state; or, for each contour region except the last contour region in the contour region set, determining that the current first labeling object is in a hollow state when the contour region intersects the last contour region; otherwise, determining that the current first labeling object is in a non-hollow state.
Alternatively, the following steps may be adopted to determine whether the first labeling object is in a hollow state: when any one contour region except the last contour region in the contour region set intersects the last contour region, determining that the current first labeling object is in a hollow state; otherwise, determining that the current first labeling object is in a non-hollow state. In this embodiment, it may be sequentially judged whether each contour region except the last contour region in the contour region set intersects the last contour region. When any one contour region except the last contour region in the contour region set intersects the last contour region, the current first labeling object is determined to be in a hollow state; otherwise, the current first labeling object is determined to be in a non-hollow state. In some embodiments, each contour region may be referred to as a configuration sub-item. The current configuration sub-item may be converted into PathGeometry type data according to the PathGeometry.CreateFromGeometry method, and the PathGeometry type data corresponding to the current configuration sub-item is then intersected and compared with the PathGeometry type data corresponding to the last configuration sub-item (i.e., the last contour region). If the contour region intersects the last contour region, it may be determined that the current first labeling object is in a hollow state. If the contour region does not intersect the last contour region, it may be determined that the current first labeling object is in a non-hollow state. In a specific embodiment, the Geometry.Combine method may be used to determine whether any contour region except the last contour region in the contour region set intersects the last contour region. Specifically, for any one contour region except the last contour region in the contour region set, the PathGeometry type data corresponding to the contour region and the PathGeometry type data corresponding to the last contour region can be passed into Geometry.Combine together with the GeometryCombineMode.Intersect identifier, so as to calculate the intersection of the two pieces of PathGeometry type data. If the PathGeometry type data corresponding to the obtained intersection is not empty, the contour region intersects the last contour region; otherwise, the contour region does not intersect the last contour region.
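The first hollow-state determination can be sketched as follows, with frozensets standing in for PathGeometry data and the sample regions chosen only for illustration:

```python
# Sketch of the first hollow-state determination: the current first labeling
# object is hollow if any contour region other than the last one intersects
# the last contour region. The last list entry plays the role of the last
# contour region (the one formed first); the sample data is hypothetical.

def is_hollow(contour_regions):
    *others, last = contour_regions       # split off the last contour region
    return any(region & last for region in others)

# Hollow case: an inner contour region overlaps the last contour region.
print(is_hollow([frozenset({"B", "C"}), frozenset({"A", "B", "C"})]))  # True

# Non-hollow case: no remaining contour region overlaps the last region.
print(is_hollow([frozenset({"D"}), frozenset({"A", "B", "C"})]))       # False
```

The result of this check decides whether the hollow processing step or the truncation processing step is executed for the current first labeling object.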
Alternatively, the following steps may be adopted to determine whether the first labeling object is in a hollow state: for each contour region except the last contour region in the contour region set, determining that the current first labeling object is in a hollow state when the contour region does not intersect the last contour region; otherwise, determining that the current first labeling object is in a non-hollow state. In this embodiment, it may be sequentially judged whether each contour region except the last contour region in the contour region set intersects the last contour region. When each contour region except the last contour region in the contour region set does not intersect the last contour region, the current first labeling object is determined to be in a hollow state; otherwise, the current first labeling object is determined to be in a non-hollow state. The specific steps for judging whether each contour region intersects the last contour region have been described in detail above and are not repeated.
Optionally, the erase judgment operation further includes: when the current first labeling object is in a non-hollow state, determining that the identification information of the current first labeling object is a non-hollow identification; when the current first labeling object is in a hollow state, determining that the identification information of the current first labeling object is a hollow identification. In this embodiment, whether each first labeling object is hollow may be labeled by means of identification information. Thus, the subsequent steps can be executed by reading the identification information corresponding to each first labeling object.
According to the technical scheme, whether the current first labeling object is in the hollow state or not can be accurately determined, so that accurate basis can be provided for the execution of the follow-up steps.
According to another aspect of the application, an image annotation device is provided, wherein a first annotation object with an annotation type of a brush is drawn on an image. Fig. 5 shows a schematic block diagram of an image annotation device according to one embodiment of the application. As shown in fig. 5, the image annotation device 500 can include an execution module 510.
And the execution module 510 is configured to execute an erasure judgment operation on each first labeling object on the image after the current erasure operation is ended.
With continued reference to fig. 5, the execution module 510 includes an acquisition sub-module 511, a first execution sub-module 512, and a second execution sub-module 513.
An obtaining sub-module 511, configured to obtain a set of contour regions of the current first labeling object after the current erasing operation is finished, where a last contour region in the set of contour regions is a contour region formed first in the plurality of contour regions; each contour region in the set of contour regions includes all of the image regions within the contour line.
The first execution sub-module 512 is configured to, if the current first labeling object is in a hollow state after the current erasing operation is finished, execute the hollow processing step according to the contour regions in the contour region set, so as to determine a newly added labeling object.
The second execution sub-module 513 is configured to, if the current first labeling object is in a non-hollow state after the current erasing operation is finished, execute the truncation processing step according to the contour regions in the contour region set, so as to determine a newly added labeling object.
According to yet another aspect of the present application, an electronic device is also provided. Fig. 6 shows a schematic block diagram of an electronic device according to one embodiment of the present application. As shown in fig. 6, the electronic device 600 includes a processor 610 and a memory 620. Wherein the memory 620 has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the image annotation method described above.
Alternatively, processor 610 may include any suitable processing device having data processing capabilities and/or instruction execution capabilities. For example, the processor may be implemented using one or a combination of several of a Programmable Logic Controller (PLC), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Micro Control Unit (MCU), and other forms of processing units.
According to another aspect of the present application, there is also provided a storage medium. Program instructions are stored on a storage medium. The program instructions, when executed by a computer or processor, cause the computer or processor to perform the respective steps of the above-described image labeling method of the embodiments of the present application and to implement the respective modules of the above-described image labeling apparatus or the respective modules in the above-described electronic device according to the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Those skilled in the art will understand the specific implementation and beneficial effects of the image labeling apparatus, the storage medium and the electronic device by reading the above detailed description of the image labeling method; for brevity, the details are omitted herein. Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof. However, this method of disclosure should not be construed as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some modules in an image labeling apparatus according to embodiments of the present application may be implemented in practice using a microprocessor or a Digital Signal Processor (DSP). The present application may also be embodied as programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. An image labeling method, characterized in that a first labeling object whose labeling type is a brush has been drawn on the image, the method comprising:
after the current erasing operation is finished, the following erasing judgment operation is executed for each first labeling object on the image:
acquiring a contour region set of the current first labeling object after the current erasing operation is finished, wherein the last contour region in the contour region set is the contour region that was formed first among a plurality of contour regions; each of the contour regions in the contour region set includes the entire image region within its contour line;
if the current first labeling object is in a hollow state after the current erasing operation is finished, executing a hollow processing step according to the outline area in the outline area set so as to determine a newly added labeling object;
and if the current first labeling object is in a non-hollow state after the current erasing operation is finished, executing a truncation processing step according to the contour region in the contour region set so as to determine a newly added labeling object.
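As a purely illustrative sketch (not part of the claims), the erasure-judgment dispatch of claim 1 can be modeled in Python, with each region represented as a frozenset of (x, y) pixel coordinates so that the set operations named in the claims (intersection, difference) map directly onto Python set operators. All function and variable names here are hypothetical; the hollow test follows the criterion stated in claim 12.

```python
# Hypothetical sketch: a "region" is a frozenset of (x, y) pixel coordinates.
# The last element of contour_regions is the first-formed (outermost) contour.

def is_hollow(contour_regions):
    """Hollow test per claim 12: the object is hollow if any contour region
    other than the last one intersects the last (first-formed) contour region."""
    outer = contour_regions[-1]
    return any(region & outer for region in contour_regions[:-1])

def erasure_judgment(contour_regions):
    """Dispatch per claim 1: a hollow state leads to the hollow processing
    step, a non-hollow state to the truncation processing step."""
    return "hollow" if is_hollow(contour_regions) else "non-hollow"
```

On this model, a hole contour lying inside the outer contour yields a non-empty intersection and thus a hollow state, while a contour disjoint from the outer one does not.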
2. The image labeling method according to claim 1, wherein the step of performing a hollow processing according to the contour regions in the contour region set to determine an added labeling object specifically comprises:
determining an erasing area of the current erasing operation based on the last contour region and a first effective area of the current first labeling object, wherein the first effective area is the effective area of the current first labeling object after the erasing operation;
subtracting the erasing area from each of the plurality of contour regions except the last contour region, so as to obtain a second effective area set;
and de-duplicating the second effective area set to obtain a third effective area set, and determining the newly added labeling object based on each third effective area in the third effective area set.
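A minimal sketch of the hollow processing step of claim 2, continuing the hypothetical pixel-set model (regions as frozensets of (x, y) coordinates). One detail is an assumption: "based on the last contour region and a first effective area" is read here as the set difference between the two, and empty results are dropped during de-duplication, which the claim text does not specify.

```python
def hollow_processing(contour_regions, first_effective_area):
    """Hypothetical sketch of claim 2's hollow processing step."""
    last = contour_regions[-1]  # the first-formed (outermost) contour region
    # Assumed reading of "based on": the erasing area is the part of the
    # first-formed contour region that is no longer effective.
    erasing_area = last - first_effective_area
    # Second effective area set: every other contour region minus the erasure.
    second_set = [region - erasing_area for region in contour_regions[:-1]]
    # Third effective area set: de-duplicated (empty results also dropped,
    # an assumption beyond the claim text).
    third_set = []
    for region in second_set:
        if region and region not in third_set:
            third_set.append(region)
    return third_set
```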
3. The image labeling method according to claim 2, wherein the step of performing a hollow processing according to the contour regions in the contour region set to determine an added labeling object further comprises:
for each third effective area in the third effective area set,
acquiring the intersection of the third effective area with every other third effective area in the third effective area set, so as to obtain intersection results in one-to-one correspondence with the other third effective areas in the third effective area set;
determining that the third effective area is invalid when any one of the intersection results is not empty and differs from the third effective area; otherwise, determining the third effective area as the newly added labeling object.
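The validity check of claim 3 can be sketched as follows, again on the hypothetical pixel-set model: a third effective area is invalid if its intersection with some other third effective area is neither empty nor equal to the area itself (a partial overlap or a strictly contained other area); otherwise it becomes a newly added labeling object.

```python
def filter_valid(third_set):
    """Hypothetical sketch of claim 3: keep only third effective areas whose
    intersection with every other area is either empty or the area itself."""
    new_objects = []
    for region in third_set:
        invalid = any(
            (region & other) and (region & other) != region
            for other in third_set
            if other is not region
        )
        if not invalid:
            new_objects.append(region)  # becomes a newly added labeling object
    return new_objects
```

Note the asymmetry this rule produces: if one area strictly contains another, the contained area survives (the intersection equals itself) while the containing area is discarded.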
4. The image labeling method of claim 3, wherein the performing a hollow processing step according to the contour regions in the contour region set to determine an added labeling object further comprises:
and subtracting all the newly added labeling objects from the first effective area, and assigning the obtained result as the current first labeling object.
5. The image labeling method according to claim 1, wherein the step of performing a truncation process according to the contour regions in the contour region set to determine an added labeling object specifically comprises:
and for each contour region except the last contour region in the contour region set, when the contour region intersects a first effective area of the first labeling object, determining the newly added labeling object based on the contour region, wherein the first effective area is the effective area of the first labeling object after the erasing operation.
6. The method according to claim 5, wherein, for each contour region except the last contour region in the contour region set, when the contour region intersects the first effective area of the first labeling object, the determining the newly added labeling object based on the contour region specifically includes:
for each contour region in the contour region set except the last contour region, adding the contour region to an effective set when the contour region intersects the first effective area of the first labeling object, and adding the contour region to an ineffective set when the contour region does not intersect the first effective area of the first labeling object;
and for each contour region stored in the effective set, subtracting from it all contour regions stored in the ineffective set, so as to obtain the newly added labeling object.
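The truncation processing of claim 6 can be sketched on the same hypothetical pixel-set model: partition the contour regions (except the last) by whether they intersect the first effective area, then carve every ineffective region out of each effective one.

```python
def truncation_processing(contour_regions, first_effective_area):
    """Hypothetical sketch of claim 6's truncation processing step."""
    effective_set, ineffective_set = [], []
    for region in contour_regions[:-1]:
        if region & first_effective_area:
            effective_set.append(region)    # intersects the effective area
        else:
            ineffective_set.append(region)  # e.g. a hole's contour region
    new_objects = []
    for region in effective_set:
        for hole in ineffective_set:
            region = region - hole  # subtract every ineffective contour
        new_objects.append(region)
    return new_objects
```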
7. The image labeling method according to claim 5, wherein the step of performing a truncation process according to the contour regions in the contour region set to determine an added labeling object further comprises:
And if the current first labeling object is in a non-hollow state after the current erasing operation is finished, assigning the last contour region in the contour region set as the current first labeling object.
8. The method for labeling images according to any one of claims 1-7, wherein said obtaining a set of contour regions of the current first labeling object after the current erasing operation is finished comprises:
judging whether the first labeling object is completely erased or not according to the erasing result of the first labeling object;
and when the current first labeling object is not completely erased, acquiring a contour region set of the current first labeling object after the current erasing operation is finished.
9. The method for labeling an image according to claim 8, wherein the acquiring the contour region set of the current first labeling object after the current erasing operation is finished further includes:
and deleting the first labeling object from the image when the first labeling object is completely erased.
10. The method for labeling an image according to claim 9, wherein deleting the first labeling object from the image when the first labeling object is completely erased comprises:
When the first annotation object is completely erased, adding the first annotation object into the annotation set to be deleted;
the method further comprises the steps of:
and deleting each first annotation object in the annotation set to be deleted on the image.
11. The image labeling method of any of claims 1-7, wherein the erasure judgment operation further comprises:
adding the newly added annotation object into a to-be-added annotation set;
the method further comprises the steps of:
and for each labeling object in the labeling set to be newly added, displaying the labeling object on the image.
12. The image annotation method as claimed in any one of claims 1-7, further comprising:
when any one of the contour regions in the contour region set, except the last contour region, intersects the last contour region, determining that the current first labeling object is in a hollow state; otherwise, determining that the current first labeling object is in a non-hollow state; or,
for each contour region except the last contour region in the contour region set, determining that the current first labeling object is in a hollow state when the contour region intersects the last contour region; otherwise, determining that the current first labeling object is in a non-hollow state.
13. An image labeling device, wherein a first labeling object whose labeling type is a brush has been drawn on the image, the device comprising:
the execution module is used for executing erasure judgment operation on each first labeling object on the image after the current erasure operation is finished;
the execution module comprises:
the acquisition sub-module is used for acquiring a contour region set of the current first labeling object after the current erasing operation is finished, wherein the last contour region in the contour region set is a contour region formed first in a plurality of contour regions; each of the contour regions in the set of contour regions includes all of the image regions within a contour line;
the first execution sub-module is used for executing a hollow processing step according to the contour regions if the current first labeling object is in a hollow state after the current erasing operation is finished, so as to determine a newly added labeling object based on the plurality of contour regions;
and the second execution sub-module is used for executing a truncation processing step according to the contour regions if the current first labeling object is in a non-hollow state after the current erasing operation is finished, so as to determine a newly added labeling object based on the plurality of contour regions.
14. An electronic device, comprising: a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the image labeling method according to any of claims 1-12.
15. A storage medium having stored thereon program instructions for performing the image labeling method according to any of claims 1-12 when run.
CN202311799659.3A 2023-12-25 2023-12-25 Image labeling method and device, electronic equipment and storage medium Pending CN117830804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311799659.3A CN117830804A (en) 2023-12-25 2023-12-25 Image labeling method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117830804A true CN117830804A (en) 2024-04-05

Family

ID=90516547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311799659.3A Pending CN117830804A (en) 2023-12-25 2023-12-25 Image labeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117830804A (en)

Similar Documents

Publication Publication Date Title
CN110019609B (en) Map updating method, apparatus and computer readable storage medium
CN106650648B (en) Recognition method and system for erasing handwriting
CN112580623A (en) Image generation method, model training method, related device and electronic equipment
CN105677221A (en) Method and device for improving application data detecting accuracy and equipment
CN111652208A (en) User interface component identification method and device, electronic equipment and storage medium
CN111652266A (en) User interface component identification method and device, electronic equipment and storage medium
CN113256583A (en) Image quality detection method and apparatus, computer device, and medium
CN113298078A (en) Equipment screen fragmentation detection model training method and equipment screen fragmentation detection method
CN117372424B (en) Defect detection method, device, equipment and storage medium
CN110796130A (en) Method, device and computer storage medium for character recognition
CN112651315A (en) Information extraction method and device of line graph, computer equipment and storage medium
CN104809053A (en) Control style testing method and device
CN117830804A (en) Image labeling method and device, electronic equipment and storage medium
CN110874170A (en) Image area correction method, image segmentation method and device
CN106708383B (en) Graphic processing method and system
CN117830802A (en) Image labeling method and device, electronic equipment and storage medium
CN117830803A (en) Image labeling method, device, electronic equipment and storage medium
CN113569861B (en) Mobile application illegal content scanning method, system, equipment and medium
CN117935026A (en) Labeling method and device for labeling object on training sample image
CN112446231A (en) Pedestrian crossing detection method and device, computer equipment and storage medium
CN112884656B (en) Printing and imposition method and system for packing box plane expansion diagram image
CN111736748B (en) Data processing method and device based on map information and electronic equipment
CN114139701A (en) Neural network model training method for boundary line extraction and related equipment
CN114519693A (en) Method for detecting surface defects, model construction method and device and electronic equipment
CN113111708A (en) Vehicle matching sample generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination