CN111091570A - Image segmentation labeling method, device, equipment and storage medium - Google Patents

Image segmentation labeling method, device, equipment and storage medium

Info

Publication number
CN111091570A
CN111091570A (application CN201911148079.1A); granted as CN111091570B
Authority
CN
China
Prior art keywords
image
surrounding
segmentation
image segmentation
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911148079.1A
Other languages
Chinese (zh)
Other versions
CN111091570B (en)
Inventor
李金龙
陈曦
李雄
董家林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Bank Co Ltd
Original Assignee
China Merchants Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Bank Co Ltd
Priority to CN201911148079.1A
Publication of CN111091570A
Application granted
Publication of CN111091570B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30204 - Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image segmentation and labeling method, device, equipment and storage medium. The method comprises: loading an image to be processed onto a current canvas, and drawing one or more regions to be segmented based on click events received by the current canvas; acquiring a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule; extracting one or more images to be labeled from the image to be processed according to the segmentation position index set; and performing local processing on the one or more images to be labeled and outputting a labeling result. Because the image to be processed is segmented quickly from the segmentation position index set computed with the non-zero winding rule, and the segmented images are then labeled by local processing, the image segmentation and labeling workflow is simplified and its efficiency is improved.

Description

Image segmentation labeling method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image segmentation and annotation method, device, equipment and storage medium.
Background
In recent years, with the rise of artificial intelligence, data labeling has become increasingly important, and a variety of labeling tools have emerged. Image segmentation and labeling play an extremely important role in artificial intelligence for image segmentation. The main steps of image segmentation and labeling include image segmentation, graying, binarization, color value inversion, morphological dilation and erosion, and the like, so the conventional process is complex and inefficient.
Disclosure of Invention
The invention provides an image segmentation labeling method, device, equipment and storage medium, aiming at simplifying the flow of image segmentation labeling and improving the efficiency of image segmentation labeling.
In order to achieve the above object, the present invention provides an image segmentation labeling method, including:
loading an image to be processed to a current canvas, and drawing one or more regions to be segmented based on a click event received by the current canvas;
acquiring a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule;
extracting one or more images to be annotated from the images to be processed according to the segmentation position index set;
and carrying out local processing on one or more images to be labeled and outputting a labeling result.
Preferably, the step of obtaining the segmentation position index set of the region to be segmented based on the non-zero winding rule includes:
converting the edges of the region to be segmented into vectors, and initializing the winding number of each pixel point in the image to be processed to zero;
taking a pixel point to be judged as a starting point, and casting a ray from it in the positive x direction;
calculating, along the ray, the winding number of the pixel point to be judged based on a winding number formula;
if the winding number of the pixel point to be judged is not equal to 0, determining that the pixel point to be judged is located in a region to be segmented;
and judging in turn whether each pixel point in the image to be processed is located in a region to be segmented, and storing the pixel points located in a region to be segmented into the segmentation position index set.
Preferably, the region to be segmented is a polygonal approximation, and the step of calculating the winding number of the pixel point to be judged based on a winding number formula includes:
denoting the pixel point to be judged as (x, y) and its winding number as f_m(x, y); the winding number formula is then

f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1)%(n+1)}, y_{(i+1)%(n+1)})}(x, y)

where n + 1 is the number of vertex pixel points of the region to be segmented, m denotes the set of pixel points of the region to be segmented, i is an integer from 0 to n, and % denotes the modulo operation.
Preferably, the step of calculating the winding count of the pixel point to be judged based on the winding number formula further includes:
letting the direction of an edge of the polygon be from the origin pixel point (x0, y0) to the first pixel point (x1, y1), the edges of the polygon themselves not being counted as part of the segmented region;
when y0 < y1, the edge runs counterclockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by a first winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) =
    1,   if y0 < y < y1 and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) > 0
    1/2, if (y = y0 or y = y1) and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) > 0
    0,   otherwise

when y0 > y1, the edge runs clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by a second winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) =
    -1,   if y1 < y < y0 and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) < 0
    -1/2, if (y = y0 or y = y1) and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) < 0
    0,    otherwise

when y0 = y1, the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by a third winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) = 0.
Preferably, the step of performing local processing on the one or more images to be labeled and outputting a labeling result includes:
denoting the image to be labeled as I_c and the image pixel at pixel point (x, y) as I_c(x, y); obtaining the labeling result I_p of the image to be labeled according to a local processing formula, and outputting the labeling result I_p, wherein

I_p(x, y) = A(I_c(x, y)), (x, y) ∈ E

where E is the position index set of the image to be labeled and A(·) is an image processing algorithm.
Preferably, the step of extracting one or more images to be annotated from the images to be processed according to the segmentation position index set comprises:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of the segmentation position index sets is the same as that of the regions to be segmented, and the number of the image segmentation interfaces is the same as that of the segmentation position index sets;
and calling an image segmentation operation, wherein the image segmentation operation extracts one or more images to be annotated according to the image segmentation interface.
In order to achieve the above object, an embodiment of the present invention further provides an image segmentation and labeling device, including:
a drawing module, configured to load an image to be processed onto a current canvas and draw one or more regions to be segmented based on click events received by the current canvas;
an acquisition module, configured to acquire a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule;
an extraction module, configured to extract one or more images to be labeled from the image to be processed according to the segmentation position index set;
and a local processing module, configured to perform local processing on the one or more images to be labeled and output a labeling result.
Preferably, the acquisition module includes:
a conversion unit, configured to convert the edges of the region to be segmented into vectors and initialize the winding number of each pixel point in the image to be processed to zero;
a ray unit, configured to take a pixel point to be judged as a starting point and cast a ray from it in the positive x direction;
a calculation unit, configured to calculate, along the ray, the winding number of the pixel point to be judged based on a winding number formula;
a judging unit, configured to determine that the pixel point to be judged is located in the region to be segmented if its winding number is not equal to 0;
and a storage unit, configured to judge in turn whether each pixel point in the image to be processed is located in the region to be segmented and to store the pixel points located in the region into the segmentation position index set.
In order to achieve the above object, an embodiment of the present invention further provides an image segmentation and annotation device, where the image segmentation and annotation device includes a processor, a memory, and an image segmentation and annotation program stored in the memory, and when the image segmentation and annotation program is executed by the processor, the image segmentation and annotation method as described above is implemented.
In order to achieve the above object, an embodiment of the present invention further provides a computer storage medium, where an image segmentation and annotation program is stored on the computer storage medium, and when the image segmentation and annotation program is executed by a processor, the image segmentation and annotation program implements the steps of the image segmentation and annotation method as described above.
Compared with the prior art, the invention provides an image segmentation and labeling method, device, equipment and storage medium, in which an image to be processed is loaded onto a current canvas, and one or more regions to be segmented are drawn based on click events received by the current canvas; a segmentation position index set corresponding to the one or more regions to be segmented is acquired based on the non-zero winding rule; one or more images to be labeled are extracted from the image to be processed according to the segmentation position index set; and the one or more images to be labeled are locally processed and a labeling result is output. Because the image to be processed is segmented quickly from the segmentation position index set computed with the non-zero winding rule, and the segmented images are then labeled by local processing, the image segmentation and labeling workflow is simplified and its efficiency is improved.
Drawings
FIG. 1 is a schematic hardware configuration diagram of an image segmentation and annotation device according to embodiments of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of an image segmentation and annotation method according to the present invention;
FIG. 3 is a functional block diagram of a first embodiment of an image segmentation and annotation device according to the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The image segmentation and labeling equipment involved in the embodiments of the present invention is network-connectable equipment, such as a server or a cloud platform. In addition, the mobile terminal involved in the embodiments of the present invention may be mobile network equipment such as a mobile phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic diagram of the hardware structure of an image segmentation and labeling apparatus according to embodiments of the present invention. In this embodiment of the present invention, the image segmentation and labeling device may include a processor 1001 (e.g., a central processing unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. The communication bus 1002 realizes connection and communication among these components; the input port 1003 is used for data input; the output port 1004 is used for data output; the memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a magnetic disk, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 is not limiting of the present invention, and a device may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
With continued reference to fig. 1, the memory 1005 of fig. 1, which is a readable storage medium, may include an operating system, a network communication module, an application program module, and an image segmentation annotation program. In fig. 1, the network communication module is mainly used for connecting to a server and performing data communication with the server; the processor 1001 may call the image segmentation and annotation program stored in the memory 1005, and execute the image segmentation and annotation method provided by the embodiment of the present invention.
The embodiment of the invention provides an image segmentation and annotation method.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image segmentation and annotation method according to a first embodiment of the present invention.
In this embodiment, the image segmentation labeling method is applied to an image segmentation labeling device, and the method includes:
step S101, loading an image to be processed to a current canvas, and drawing one or more regions to be segmented based on a click event received by the current canvas;
in this embodiment, the image to be processed may be an image selected or set by a user. Generally, a user imports the image to be processed through a preset interface.
The image to be processed imported by the user is loaded onto the current canvas, and the image is displayed on the screen. Click events received by the current canvas are then processed; click events include selection events, stretching events, drawing events and the like. The user triggers click events through mouse or touch operations, and through them selects and/or draws the regions that need to be segmented and labeled. It will be appreciated that the user may select and/or draw one or more such regions at a time.
After the click events are received, the one or more regions to be segmented corresponding to the drawing conveyed by the click events are drawn. The contour generated by the click events may be represented by a black, white, or colored line; that is, the one or more regions to be segmented are identified by their line contours. Whatever the shape of the drawn contour, the figure it encloses can be approximated arbitrarily closely by a polygon; therefore, in this embodiment, the region to be segmented is treated as a polygon.
Step S102, acquiring a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule;
in the graphics, whether a point is inside a polygon can be judged according to a non-zero rounding Rule (non zero rounding Rule). The edges of the polygon are first made vectors. The number of the circles is initialized to zero, and then a ray in any direction is taken from the point p to be judged. When moving from the p point along the ray direction, counting the edges passing through the ray in each direction, adding 1 to the number of circles each time the edge of the polygon passes through the ray counterclockwise, subtracting 1 from the number of circles when passing through the ray clockwise, and calculating all the relevant edges of the polygon in turn. After all relevant edges of the polygon are processed, if the number of surrounding is non-zero, p is an interior point, otherwise, p is an exterior point.
Specifically, in this embodiment, the step of obtaining the segmentation position index set of the region to be segmented based on the non-zero surrounding rule includes:
step S102a, converting the edge of the region to be divided into vectors, and initializing the surrounding number of each pixel point in the image to be processed to zero;
setting the region to be divided into polygons, and converting the edges of the region to be divided into vectors, wherein the direction of the vectors can be clockwise or clockwise, and the direction can be selected as required.
In this embodiment, it is necessary to determine whether all the pixel points in the image to be processed are located in the region to be partitioned, so that the number of surrounding of each pixel point in the image to be processed is initialized to zero in advance.
Step S102b, taking the pixel point to be judged as the starting point, and making a ray which is positive along the x axis and parallel to the y axis;
in general, the ray may be in any direction. For ease of calculation, this embodiment fixes the ray direction to be positive along the x-axis and parallel to the y-axis.
Step S102c, calculating the surrounding number of the pixel point to be judged based on a surrounding number formula according to the ray;
specifically, the pixel point to be determined is represented as (x, y), and the number of turns is represented as fm(x, y), then the calculation formula of the number of surrounding is
Figure BDA0002282780450000071
Wherein n +1 represents the number of pixel points in the region to be segmented, m represents the collection of pixel points in the region to be segmented, i is an integer from 0 to n,% represents the film calculation, and f is 0(x0,y0)(x1,y1)(x, y) indicates that the direction of the pixel point (x, y) to be judged in one edge of the polygon is from the original pixel point (x)0,y0) Point to the first pixel (x)1,y1) The number of revolutions of (1) counting formula.
Further, the step of calculating, along the ray, the winding count of the pixel point to be judged based on a winding count formula further includes:
letting the direction of an edge of the polygon be from the origin pixel point (x0, y0) to the first pixel point (x1, y1), the edges of the polygon themselves not being counted as part of the segmented region.
When y0 < y1, the edge runs counterclockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by the first winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) =
    1,   if y0 < y < y1 and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) > 0
    1/2, if (y = y0 or y = y1) and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) > 0
    0,   otherwise

That is, according to the first winding count formula, when y0 < y < y1 and the ray cast from (x, y) in the positive x direction crosses the interior of the edge, the winding count is 1; when y = y0 or y = y1 and the ray passes through an endpoint of the edge, the winding count is 0.5; in all other cases the winding count is 0.
When y0 > y1, the edge runs clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by the second winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) =
    -1,   if y1 < y < y0 and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) < 0
    -1/2, if (y = y0 or y = y1) and (x1 - x0)(y - y0) - (y1 - y0)(x - x0) < 0
    0,    otherwise

That is, according to the second winding count formula, when y1 < y < y0 and the ray crosses the interior of the edge, the winding count is -1; when y = y0 or y = y1 and the ray passes through an endpoint of the edge, the winding count is -0.5; in all other cases the winding count is 0.
When y0 = y1, the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated by the third winding count formula:

f_{(x0,y0)(x1,y1)}(x, y) = 0.

When y0 = y1, the edge of the polygon is a line segment parallel to the x axis, which either never intersects the ray or partially coincides with it; in this case the winding count is 0.
When y = y0 or y = y1, the ray passes through an endpoint shared by two adjacent edges, so the intersection would otherwise be counted twice; each contribution is therefore halved, which is why the first and second winding count formulas give a winding count of 0.5 or -0.5 in this case.
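For illustration, the three per-edge cases above can be combined into a single contribution function and summed over the edges as the winding number formula prescribes (a JavaScript sketch of our own, written directly from the three winding count formulas; the function names are ours):

```javascript
// Per-edge winding contribution following the three cases above:
// upward edge (y0 < y1): +1 strictly between the endpoint heights, +0.5 at
// an endpoint height; downward edge (y0 > y1): -1 / -0.5 symmetrically;
// horizontal edge (y0 = y1): 0. The cross-product test keeps only edges
// lying to the right of the point, since the ray is cast in the +x direction.
function edgeContribution(x, y, x0, y0, x1, y1) {
  const cross = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0);
  if (y0 < y1) { // counterclockwise (upward) edge
    if (y0 < y && y < y1 && cross > 0) return 1;
    if ((y === y0 || y === y1) && cross > 0) return 0.5;
  } else if (y0 > y1) { // clockwise (downward) edge
    if (y1 < y && y < y0 && cross < 0) return -1;
    if ((y === y0 || y === y1) && cross < 0) return -0.5;
  }
  return 0; // horizontal edge, or the ray misses the edge
}

// Winding number: sum the contributions of all n + 1 edges, wrapping the
// vertex index modulo (n + 1) as in the winding number formula.
function windingNumberHalfCount(x, y, poly) {
  let f = 0;
  for (let i = 0; i < poly.length; i++) {
    const [x0, y0] = poly[i];
    const [x1, y1] = poly[(i + 1) % poly.length];
    f += edgeContribution(x, y, x0, y0, x1, y1);
  }
  return f;
}
```

A point at the height of a vertex picks up two half-contributions (or one, on the boundary ray), which is exactly the endpoint halving described above.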
Step S102d, if the winding number of the pixel point to be judged is not equal to 0, determining that the pixel point to be judged is located in a region to be segmented;
Using the winding number formula together with the first, second and third winding count formulas, whether the pixel point to be judged is located in the region to be segmented, i.e. inside the polygon, can be determined quickly.
When f_m(x, y) ≠ 0, the pixel point to be judged is determined to lie in the region to be segmented; otherwise, f_m(x, y) = 0 and the pixel point is determined not to lie in the region to be segmented.
When there are multiple regions to be segmented, a pixel point lying in one of them satisfies f_m(x, y) ≠ 0 for that region m. In this embodiment, the set of the regions to be segmented is denoted M, so that m ∈ M.
Step S102e, judging in turn whether each pixel point in the image to be processed is located in a region to be segmented, and storing the pixel points located in a region to be segmented into the segmentation position index set.
To determine whether every pixel point of the image to be processed is located in a region to be segmented, steps S102b to S102d are repeated for each pixel point in turn, and the pixel points located in a region to be segmented are stored into the segmentation position index set, denoted E. It will be appreciated that if there are multiple regions to be segmented, there is a corresponding number of segmentation position index sets.
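Steps S102b to S102e amount to scanning every pixel of the image and keeping those whose winding number is non-zero. The loop can be sketched as follows (a self-contained JavaScript sketch of our own, not the patent's code; it stores flat indices y * width + x and uses a compact half-open form of the winding test):

```javascript
// Non-zero winding test for a single pixel against the drawn polygon.
function windingNonZero(x, y, poly) {
  let w = 0;
  for (let i = 0; i < poly.length; i++) {
    const [x0, y0] = poly[i];
    const [x1, y1] = poly[(i + 1) % poly.length];
    const cross = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0);
    if (y0 <= y && y < y1 && cross > 0) w++;      // upward crossing
    else if (y1 <= y && y < y0 && cross < 0) w--; // downward crossing
  }
  return w !== 0;
}

// Build the segmentation position index set E: scan every pixel of a
// width x height image and store those lying inside the polygon.
function buildIndexSet(width, height, poly) {
  const E = new Set(); // flat pixel indices: y * width + x
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (windingNonZero(x, y, poly)) E.add(y * width + x);
    }
  }
  return E;
}
```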
Step S103, extracting one or more images to be annotated from the images to be processed according to the segmentation position index set;
the step of extracting one or more images to be annotated from the images to be processed according to the segmentation position index set comprises the following steps:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of segmentation position index sets equals the number of regions to be segmented, and the number of image segmentation interfaces equals the number of segmentation position index sets; the boundary of each segmentation position index set is taken as the corresponding image segmentation interface.
and invoking an image segmentation operation, which extracts the one or more images to be labeled according to the image segmentation interfaces. By invoking a preset segmentation operation of an application program, the one or more images to be labeled are extracted by the application program according to the image segmentation interfaces.
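As an illustration of the extraction step, once a segmentation position index set E is available, an image to be labeled can be pulled out of the image to be processed by masking every pixel whose index is outside E (a minimal JavaScript sketch of our own; the flat grayscale pixel array and the 0 background fill are our assumptions, not details from the patent):

```javascript
// Extract the image to be labeled from the image to be processed:
// keep only pixels whose flat index (y * width + x) is in the segmentation
// position index set E; all other pixels are set to 0 as a background fill.
// `image` is a flat grayscale array of length width * height.
function extractRegion(image, width, E) {
  return image.map((value, idx) => (E.has(idx) ? value : 0));
}
```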
And step S104, performing local processing on one or more images to be annotated, and outputting an annotation result.
Specifically, the step of performing local processing on one or more images to be labeled and outputting a labeling result includes:
the image to be labeled is denoted I_c and the image pixel at pixel point (x, y) is denoted I_c(x, y); the labeling result I_p of the image to be labeled is obtained according to a local processing formula, and the labeling result I_p is output, wherein

I_p(x, y) = A(I_c(x, y)), (x, y) ∈ E

where E is the position index set of the image to be labeled, and A(·) is an image processing algorithm; the processing involved includes binarization, color value inversion, dilation, erosion and the like.
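The local processing formula can be illustrated as follows: an operator A is applied only at pixel indices contained in E (a JavaScript sketch of our own; the binarization operator, its threshold, and the assumption that pixels outside E are left unchanged are ours, not the patent's):

```javascript
// Local processing: apply an image processing operator A only to pixels
// whose flat index lies in the segmentation position index set E; pixels
// outside E are (by assumption) left unchanged.
function localProcess(image, E, A) {
  return image.map((value, idx) => (E.has(idx) ? A(value) : value));
}

// One of the operations listed above: binarization against a threshold.
const binarize = (threshold) => (value) => (value >= threshold ? 255 : 0);
```

For example, binarizing only the pixels indexed by E leaves the rest of the image intact, which is what makes the processing "local".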
Furthermore, the image segmentation and annotation method can be well applied to the front end of the web of the webpage with poor processing performance.
For processing at the web front end, the native html Canvas tag can be used to load and read the image to be processed; click events of the mouse on the canvas are monitored through open-source tools such as fabric.js, so that drawing a polygon, and thereby obtaining the region to be segmented, is simple and fast; the pixel points inside the polygon are then obtained according to the winding number formula, the image segmentation operation is clicked on the page, and an ROI instruction is invoked to complete multi-region image segmentation; the required image interface is realized from the segmentation position index set obtained by image segmentation, and the desired processing method is selected on the page to complete the local processing; finally, the labeling result is output and saved.
According to the above scheme, an image to be processed is loaded onto a current canvas, and one or more regions to be segmented are drawn based on click events received by the current canvas; a segmentation position index set corresponding to the one or more regions to be segmented is acquired based on the non-zero winding rule; one or more images to be labeled are extracted from the image to be processed according to the segmentation position index set; and the one or more images to be labeled are locally processed and a labeling result is output. Because the image to be processed is segmented quickly from the segmentation position index set computed with the non-zero winding rule, and the segmented images are then labeled by local processing, the image segmentation and labeling workflow is simplified and its efficiency is improved.
In addition, the embodiment also provides an image segmentation and annotation device. Referring to fig. 3, fig. 3 is a functional block diagram of an image segmentation and annotation device according to a first embodiment of the present invention.
In this embodiment, the image segmentation and labeling device is a virtual device stored in the memory 1005 of the image segmentation and labeling apparatus shown in fig. 1, and realizes all functions of the image segmentation and labeling program: loading an image to be processed onto a current canvas and drawing one or more regions to be segmented based on click events received by the current canvas; acquiring a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule; extracting one or more images to be labeled from the image to be processed according to the segmentation position index set; and performing local processing on the one or more images to be labeled and outputting a labeling result.
Specifically, the image segmentation labeling device includes:
a drawing module, configured to load an image to be processed onto a current canvas and draw one or more regions to be segmented based on click events received by the current canvas;
an acquisition module, configured to acquire a segmentation position index set corresponding to the one or more regions to be segmented based on the non-zero winding rule;
an extraction module, configured to extract one or more images to be labeled from the image to be processed according to the segmentation position index set;
and a local processing module, configured to perform local processing on the one or more images to be labeled and output a labeling result.
Preferably, the acquisition module includes:
a conversion unit, configured to convert the edges of the region to be segmented into vectors and initialize the winding number of each pixel point in the image to be processed to zero;
a ray unit, configured to take a pixel point to be judged as a starting point and cast a ray from it in the positive x direction;
a calculation unit, configured to calculate, along the ray, the winding number of the pixel point to be judged based on a winding number formula;
a judging unit, configured to determine that the pixel point to be judged is located in the region to be segmented if its winding number is not equal to 0;
and a storage unit, configured to judge in turn whether each pixel point in the image to be processed is located in the region to be segmented and to store the pixel points located in the region into the segmentation position index set.
Preferably, the calculation unit calculates the winding number by the winding number formula

f_m(x, y) = Σ_{i=0}^{n} f_{(x_i, y_i)(x_{(i+1)%(n+1)}, y_{(i+1)%(n+1)})}(x, y)

where n + 1 is the number of vertex pixel points of the region to be segmented, m denotes the set of pixel points of the region to be segmented, i is an integer from 0 to n, and % denotes the modulo operation.
Preferably, the calculation unit further comprises:
a setting subunit, configured to let the direction of an edge of the polygon run from the starting pixel (x0, y0) to the end pixel (x1, y1), the edges of the polygon themselves not being counted as inside the segmented region;
a first calculation subunit: when y0 < y1, the edge passes the point counter-clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated according to the first winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = \begin{cases} 1, & y_0 \le y < y_1 \ \text{and} \ (x_1 - x_0)(y - y_0) - (y_1 - y_0)(x - x_0) > 0 \\ 0, & \text{otherwise} \end{cases}$$

a second calculation subunit: when y0 > y1, the edge passes the point clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated according to the second winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = \begin{cases} -1, & y_1 \le y < y_0 \ \text{and} \ (x_1 - x_0)(y - y_0) - (y_1 - y_0)(x - x_0) < 0 \\ 0, & \text{otherwise} \end{cases}$$

a third calculation subunit: when y0 = y1 (a horizontal edge), the winding count is calculated according to the third winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = 0$$
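Assuming the standard non-zero winding construction, the three subunits can be sketched in Python as follows (the function names are illustrative, not from the patent):

```python
def edge_winding_count(x0, y0, x1, y1, x, y):
    """Winding count contributed by the directed edge (x0, y0) -> (x1, y1)
    to the point (x, y), for a ray cast in the positive x direction."""
    # Signed area: positive when (x, y) lies to the left of the directed edge.
    d = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
    if y0 < y1 and y0 <= y < y1 and d > 0:
        return 1        # first formula: counter-clockwise (upward) crossing
    if y0 > y1 and y1 <= y < y0 and d < 0:
        return -1       # second formula: clockwise (downward) crossing
    return 0            # third formula: horizontal edge, or no crossing

def winding_number(polygon, x, y):
    """Sum the per-edge counts over all edges; % len(polygon) closes the loop."""
    return sum(
        edge_winding_count(*polygon[i], *polygon[(i + 1) % len(polygon)], x, y)
        for i in range(len(polygon))
    )
```

A point inside a counter-clockwise polygon gets winding number +1, a point inside a clockwise one gets -1, and an outside point gets 0; the non-zero winding rule classifies the first two as interior.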
Preferably, the local processing module comprises:
an annotation unit, configured to denote the image to be annotated as I_c and the pixel value at pixel (x, y) as I_c(x, y), obtain the annotation result I_p of the image to be annotated according to a local processing formula, and output the annotation result I_p, where

$$I_p(x, y) = \varphi\big(I_c(x, y)\big), \quad (x, y) \in E$$

where E is the index set of the image to be annotated and φ(·) is an image processing algorithm.
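A minimal NumPy sketch of this local processing step, assuming φ is any per-pixel operator and E is the index set produced by the segmentation (the function and argument names are illustrative):

```python
import numpy as np

def local_process(image, index_set, phi):
    """Apply the image-processing operator `phi` at every pixel (x, y) in the
    index set E, leaving all other pixels of the copy unchanged."""
    out = image.copy()
    for x, y in index_set:
        out[y, x] = phi(image[y, x])   # NumPy arrays index as [row, col] = [y, x]
    return out
```

For instance, phi could binarize, invert, or highlight only the annotated region while the rest of the image is passed through untouched.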
Preferably, the extraction module comprises:
an obtaining unit, configured to obtain corresponding image segmentation interfaces according to the segmentation position index sets, where the number of segmentation position index sets equals the number of regions to be segmented, and the number of image segmentation interfaces equals the number of segmentation position index sets;
and a calling unit, configured to invoke an image segmentation operation that extracts the one or more images to be annotated through the image segmentation interfaces.
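One plausible form of such an extraction operation, sketched under the assumption that each index set is a set of (x, y) pixel coordinates (the patent does not specify the interface, so the names here are hypothetical):

```python
import numpy as np

def extract_region(image, index_set):
    """Crop the axis-aligned bounding box of the index set, copying only the
    pixels that belong to the set and zero-filling the rest."""
    xs = [x for x, _ in index_set]
    ys = [y for _, y in index_set]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    crop = np.zeros((y_max - y_min + 1, x_max - x_min + 1), dtype=image.dtype)
    for x, y in index_set:
        crop[y - y_min, x - x_min] = image[y, x]  # arrays index as [y, x]
    return crop
```

Calling this once per segmentation position index set yields one image to be annotated per drawn region.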
In addition, an embodiment of the present invention further provides a computer storage medium, where an image segmentation and annotation program is stored on the computer storage medium, and when the image segmentation and annotation program is executed by a processor, the steps of the image segmentation and annotation method are implemented, which are not described herein again.
Compared with the prior art, the image segmentation and annotation method, device, apparatus, and storage medium load an image to be processed onto the current canvas and draw one or more regions to be segmented based on click events received by the canvas; acquire, based on a non-zero winding rule, a segmentation position index set corresponding to the one or more regions to be segmented; extract one or more images to be annotated from the image to be processed according to the segmentation position index set; and perform local processing on the one or more images to be annotated and output an annotation result. The image to be processed is thus rapidly segmented according to the segmentation position index set derived from the non-zero winding rule, and the segments are then annotated by local processing, which simplifies the segmentation and annotation workflow and improves its efficiency.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention and is not intended to limit the scope of the present invention, and all equivalent structures or flow transformations made by the present specification and drawings, or applied directly or indirectly to other related arts, are included in the scope of the present invention.

Claims (10)

1. An image segmentation and annotation method, characterized by comprising the following steps:
loading an image to be processed onto a current canvas, and drawing one or more regions to be segmented based on click events received by the canvas;
acquiring, based on a non-zero winding rule, a segmentation position index set corresponding to the one or more regions to be segmented;
extracting one or more images to be annotated from the image to be processed according to the segmentation position index set;
and performing local processing on the one or more images to be annotated and outputting an annotation result.
2. The method according to claim 1, wherein the step of acquiring the segmentation position index set of the region to be segmented based on the non-zero winding rule comprises:
converting the edges of the region to be segmented into vectors, and initializing the winding number of each pixel in the image to be processed to zero;
casting, from the pixel to be judged as the starting point, a ray in the positive x-direction, parallel to the x-axis;
calculating, from the ray, the winding number of the pixel to be judged based on a winding number formula;
if the winding number of the pixel to be judged is not equal to 0, determining that the pixel lies inside the region to be segmented;
and judging in turn whether each pixel in the image to be processed lies inside the region to be segmented, and storing the pixels inside the region in the segmentation position index set.
3. The method according to claim 2, wherein the region to be segmented is approximated by a polygon, and the step of calculating the winding number of the pixel to be judged based on the winding number formula comprises:
denoting the pixel to be judged as (x, y) and its winding number as f_m(x, y), the winding number formula being

$$f_m(x, y) = \sum_{i=0}^{n} f_{(x_i,\,y_i)\,(x_{(i+1)\,\%\,(n+1)},\;y_{(i+1)\,\%\,(n+1)})}(x, y)$$

where n + 1 is the number of vertex pixels of the region to be segmented, m denotes the set of vertex pixels of the region, i is an integer from 0 to n, and % denotes the modulo operation.
4. The method according to claim 2 or 3, wherein the step of calculating the winding number of the pixel to be judged based on the winding number formula further comprises:
letting the direction of an edge of the polygon run from the starting pixel (x0, y0) to the end pixel (x1, y1), the edges of the polygon themselves not being counted as inside the segmented region;
when y0 < y1, the edge passes the point counter-clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated according to the first winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = \begin{cases} 1, & y_0 \le y < y_1 \ \text{and} \ (x_1 - x_0)(y - y_0) - (y_1 - y_0)(x - x_0) > 0 \\ 0, & \text{otherwise} \end{cases}$$

when y0 > y1, the edge passes the point clockwise, and the winding count f_{(x0,y0)(x1,y1)}(x, y) is calculated according to the second winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = \begin{cases} -1, & y_1 \le y < y_0 \ \text{and} \ (x_1 - x_0)(y - y_0) - (y_1 - y_0)(x - x_0) < 0 \\ 0, & \text{otherwise} \end{cases}$$

when y0 = y1 (a horizontal edge), the winding count is calculated according to the third winding count formula

$$f_{(x_0,y_0)(x_1,y_1)}(x, y) = 0$$
5. The method according to claim 1, wherein the step of performing local processing on the one or more images to be annotated and outputting an annotation result comprises:
denoting the image to be annotated as I_c and the pixel value at pixel (x, y) as I_c(x, y), obtaining the annotation result I_p of the image to be annotated according to a local processing formula, and outputting the annotation result I_p, where

$$I_p(x, y) = \varphi\big(I_c(x, y)\big), \quad (x, y) \in E$$

where E is the index set of the image to be annotated and φ(·) is an image processing algorithm.
6. The method according to claim 1, wherein the step of extracting one or more images to be annotated from the image to be processed according to the segmentation position index set comprises:
obtaining corresponding image segmentation interfaces according to the segmentation position index sets, wherein the number of segmentation position index sets equals the number of regions to be segmented, and the number of image segmentation interfaces equals the number of segmentation position index sets;
and invoking an image segmentation operation that extracts the one or more images to be annotated through the image segmentation interfaces.
7. An image segmentation and annotation device, characterized by comprising:
a drawing module, configured to load an image to be processed onto the current canvas and draw one or more regions to be segmented based on click events received by the canvas;
an acquisition module, configured to acquire, based on a non-zero winding rule, a segmentation position index set corresponding to the one or more regions to be segmented;
an extraction module, configured to extract one or more images to be annotated from the image to be processed according to the segmentation position index set;
and a local processing module, configured to perform local processing on the one or more images to be annotated and output an annotation result.
8. The image segmentation and annotation device of claim 7, wherein the acquisition module comprises:
a conversion unit, configured to convert the edges of the region to be segmented into vectors and initialize the winding number of each pixel in the image to be processed to zero;
a ray unit, configured to cast, from the pixel to be judged as the starting point, a ray in the positive x-direction, parallel to the x-axis;
a calculation unit, configured to calculate, from the ray, the winding number of the pixel to be judged based on a winding number formula;
a judgment unit, configured to determine that the pixel to be judged lies inside the region to be segmented if its winding number is not equal to 0;
and a storage unit, configured to judge in turn whether each pixel in the image to be processed lies inside the region to be segmented, and to store the pixels inside the region in the segmentation position index set.
9. An image segmentation annotation device, characterized in that the image segmentation annotation device comprises a processor, a memory and an image segmentation annotation program stored in the memory, wherein when the image segmentation annotation program is executed by the processor, the image segmentation annotation program realizes the steps of the image segmentation annotation method according to any one of claims 1 to 6.
10. A computer storage medium, wherein an image segmentation annotation program is stored on the computer storage medium, and when executed by a processor, the image segmentation annotation program implements the steps of the image segmentation annotation method according to any one of claims 1 to 6.
CN201911148079.1A 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium Active CN111091570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911148079.1A CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911148079.1A CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111091570A true CN111091570A (en) 2020-05-01
CN111091570B CN111091570B (en) 2023-07-25

Family

ID=70393526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911148079.1A Active CN111091570B (en) 2019-11-21 2019-11-21 Image segmentation labeling method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111091570B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952324A (en) * 2017-04-07 2017-07-14 山东理工大学 The parallel overlap-add procedure device and method of vector polygon rasterizing
WO2019100940A1 (en) * 2017-11-27 2019-05-31 广州视睿电子科技有限公司 Labeling method and apparatus for three-dimensional expanded image surface, and computer device and storage medium
CN109978894A (en) * 2019-03-26 2019-07-05 成都迭迦科技有限公司 A kind of lesion region mask method and system based on three-dimensional mammary gland color ultrasound

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952324A (en) * 2017-04-07 2017-07-14 山东理工大学 The parallel overlap-add procedure device and method of vector polygon rasterizing
WO2019100940A1 (en) * 2017-11-27 2019-05-31 广州视睿电子科技有限公司 Labeling method and apparatus for three-dimensional expanded image surface, and computer device and storage medium
CN109978894A (en) * 2019-03-26 2019-07-05 成都迭迦科技有限公司 A kind of lesion region mask method and system based on three-dimensional mammary gland color ultrasound

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Haixiao et al.: "Chiral p-wave pairing of ultracold fermionic atoms due to a quadratic band touching", Chinese Physics B *
MA Hui et al.: "Polygon filling algorithm based on the correlation between vertices and adjacent edges", Journal of Image and Graphics *

Also Published As

Publication number Publication date
CN111091570B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112785674B (en) Texture map generation method, rendering device, equipment and storage medium
CN109583509B (en) Data generation method and device and electronic equipment
CN112308866B (en) Image processing method, device, electronic equipment and storage medium
CN111967297B (en) Image semantic segmentation method and device, electronic equipment and medium
CN111161195B (en) Feature map processing method and device, storage medium and terminal
CN113657396B (en) Training method, translation display method, device, electronic equipment and storage medium
CN114792355A (en) Virtual image generation method and device, electronic equipment and storage medium
CN114708374A (en) Virtual image generation method and device, electronic equipment and storage medium
CN108986210B (en) Method and device for reconstructing three-dimensional scene
CN110110829A (en) A kind of two dimensional code processing method and processing device
CN113902856A (en) Semantic annotation method and device, electronic equipment and storage medium
CN114037630A (en) Model training and image defogging method, device, equipment and storage medium
CN113837194A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112598687A (en) Image segmentation method and device, storage medium and electronic equipment
CN111402366A (en) Character rendering method and device, electronic equipment and storage medium
CN111091570A (en) Image segmentation labeling method, device, equipment and storage medium
CN115187834A (en) Bill identification method and device
CN113361511B (en) Correction model establishing method, device, equipment and computer readable storage medium
CN111191580B (en) Synthetic rendering method, apparatus, electronic device and medium
CN111754632B (en) Business service processing method, device, equipment and storage medium
CN114373078A (en) Target detection method and device, terminal equipment and storage medium
CN111489418A (en) Image processing method, device, equipment and computer readable storage medium
CN118397298B (en) Self-attention space pyramid pooling method based on mixed pooling and related components
CN114445521B (en) Image processing method, processing device, electronic equipment and readable storage medium
CN111583147B (en) Image processing method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant