
Image segmentation method, device, electronic equipment and storage medium

Info

Publication number
CN111612791B
CN111612791B (application CN202010403656.3A)
Authority
CN
China
Prior art keywords
image
resolution
target
segmentation
filtered
Prior art date
Legal status
Active
Application number
CN202010403656.3A
Other languages
Chinese (zh)
Other versions
CN111612791A (en)
Inventor
李马丁
徐青
章佳杰
郑云飞
于冰
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010403656.3A
Publication of CN111612791A
Application granted
Publication of CN111612791B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20024 — Filtering details

Abstract

The disclosure relates to an image segmentation method, an image segmentation device, an electronic device and a storage medium, which relate to the technical field of the Internet and are intended to solve the problem of low image segmentation efficiency in the related art. The method comprises the following steps: acquiring an image set to be processed, wherein the image set comprises an original image and at least one target image, and each target image is obtained by downsampling an image in the set whose resolution is larger than that of the target image; sequentially obtaining segmented images corresponding to each resolution in order of resolution from smallest to largest, and performing edge filtering processing on the segmented image corresponding to each resolution according to the image of the same resolution in the image set, to obtain a filtered image corresponding to each resolution; and acquiring a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution. Since image segmentation is performed only on the target image with the minimum resolution, the amount of calculation is small, and processes such as downsampling are easy to accelerate, so the image segmentation efficiency can be effectively improved.

Description

Image segmentation method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to an image segmentation method, an image segmentation device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence, the application fields of image processing are becoming ever wider; for example, the core of emerging technologies such as face recognition, fingerprint recognition, license plate recognition, Chinese character recognition and medical image recognition is image processing knowledge, and image segmentation is a key technology in the image processing pipeline.
Image segmentation is the technique and process of dividing an image into several specific regions with unique properties and extracting objects of interest. It is a key step from image processing to image analysis. Related image segmentation methods mainly fall into the following categories: threshold-based methods, region-based methods, edge-based methods, wavelet-transform-based methods, neural-network-based methods, methods based on specific theories, and so on. From a mathematical perspective, image segmentation is the process of dividing a digital image into mutually disjoint regions. The process of image segmentation is also a labeling process, i.e. pixels belonging to the same region are given the same label. However, the image segmentation methods in the related art are not efficient.
Disclosure of Invention
The disclosure provides an image segmentation method, an image segmentation device, an electronic device and a storage medium, so as to at least solve the problem of low image segmentation efficiency in the related art. The technical scheme of the present disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided an image segmentation method including:
acquiring an image set to be processed, wherein the image set comprises an original image and at least one target image, the target image is obtained by downsampling according to an image with larger resolution than the target image in the image set, and the resolution of the original image is the largest;
sequentially obtaining segmented images corresponding to each resolution according to the sequence of the resolution from small to large, and carrying out edge filtering processing on the segmented images corresponding to each resolution according to the images corresponding to each resolution in the image set to obtain filtered images corresponding to each resolution, wherein the segmented image with the minimum resolution is obtained by carrying out image segmentation on a target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by carrying out up-sampling on the filtered images obtained last time;
and acquiring a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution.
In an optional implementation manner, the performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution specifically includes:
Taking the images in the image set corresponding to any resolution as guiding images;
and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
In an optional implementation manner, after performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, before acquiring the target segmented image corresponding to the original image according to the filtered image corresponding to the maximum resolution in the image set, the method specifically includes:
comparing the images in the image set corresponding to the resolution with the filtered images corresponding to the resolution;
and carrying out fuzzy processing on the filtered image corresponding to the resolution according to the comparison result.
In an optional implementation manner, the performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution includes:
for any resolution, sequentially performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution according to the images in the image set corresponding to the resolution to obtain a filtered image corresponding to the resolution;
When edge filtering processing is carried out on the first target segmentation object, the edge filtering processing is carried out on the segmentation image corresponding to the resolution; and then, when the edge filtering processing is carried out on other target segmentation objects, the edge filtering processing is carried out on the filtered image after the last time.
In an optional implementation manner, the performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution includes:
for any resolution, according to the images in the image set corresponding to the resolution, performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution to obtain a filtering image corresponding to each target segmentation object; when each target segmentation object is subjected to edge filtering processing, the edge filtering processing is performed on the segmentation image corresponding to the resolution;
and taking the filtered image corresponding to each target segmentation object as the filtered image corresponding to the resolution.
In an alternative embodiment, there is one filtered image at the maximum resolution;
the obtaining the target segmentation image corresponding to the original image according to the filtered image with the maximum resolution comprises the following steps:
And carrying out binarization processing on the filtered image with the maximum resolution to obtain a target segmentation image corresponding to the original image.
In an alternative embodiment, there are multiple filtered images at the maximum resolution;
the obtaining the target segmentation image corresponding to the original image according to the filtered image with the maximum resolution comprises the following steps:
determining pixel values of all pixel points in the target filter image according to the positions of all target segmentation objects in the corresponding filter image;
and carrying out binarization processing on the target filtered image to obtain a target segmentation image corresponding to the original image.
In an optional implementation manner, the determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image includes:
and regarding any pixel point, taking the maximum value in the pixel values of the pixel point positions in the filter image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filter image.
In an optional implementation manner, the determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image includes:
For a target pixel point, up-sampling the segmented image with the minimum resolution to the maximum resolution and using the pixel value at the position of the target pixel point in the resulting image as the pixel value of the pixel point at the corresponding position in the target filtered image, wherein a target pixel point is a pixel point whose pixel value is a preset pixel value in the filtered image corresponding to every target segmentation object; or
And regarding non-target pixel points, taking the maximum value in the pixel values of the non-target pixel point positions in the filtered image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filtered image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image segmentation apparatus including:
a first acquisition unit configured to perform acquisition of an image set to be processed, wherein the image set includes an original image and at least one target image, the target image is obtained by downsampling according to an image with a resolution larger than that of the target image in the image set, and the resolution of the original image is the largest;
the processing unit is configured to sequentially acquire segmented images corresponding to each resolution according to the order of the resolution from the small to the large, and perform edge filtering processing on the segmented images corresponding to each resolution according to the images corresponding to each resolution in the image set to obtain filtered images corresponding to each resolution, wherein the segmented image with the minimum resolution is obtained by image segmentation of a target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by up-sampling the filtered images obtained last time;
and a second acquisition unit configured to acquire a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution.
In an alternative embodiment, the processing unit is specifically configured to perform:
taking the images in the image set corresponding to any resolution as guiding images;
and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
In an optional implementation manner, after the processing unit performs edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, before the second obtaining unit obtains the target segmented image corresponding to the original image according to the filtered image corresponding to the maximum resolution in the image set, the processing unit is further configured to perform:
comparing the images in the image set corresponding to the resolution with the filtered images corresponding to the resolution;
and carrying out fuzzy processing on the filtered image corresponding to the resolution according to the comparison result.
In an alternative embodiment, the processing unit is specifically configured to perform:
for any resolution, sequentially performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution according to the images in the image set corresponding to the resolution to obtain a filtered image corresponding to the resolution;
when edge filtering processing is carried out on the first target segmentation object, the edge filtering processing is carried out on the segmentation image corresponding to the resolution; and then, when the edge filtering processing is carried out on other target segmentation objects, the edge filtering processing is carried out on the filtered image after the last time.
In an alternative embodiment, the processing unit is specifically configured to perform:
for any resolution, according to the images in the image set corresponding to the resolution, performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution to obtain a filtering image corresponding to each target segmentation object; when each target segmentation object is subjected to edge filtering processing, the edge filtering processing is performed on the segmentation image corresponding to the resolution;
And taking the filtered image corresponding to each target segmentation object as the filtered image corresponding to the resolution.
In an alternative embodiment, there is one filtered image at the maximum resolution;
the second acquisition unit is specifically configured to perform:
and carrying out binarization processing on the filtered image with the maximum resolution to obtain a target segmentation image corresponding to the original image.
In an alternative embodiment, there are multiple filtered images at the maximum resolution;
the second acquisition unit is specifically configured to perform:
determining pixel values of all pixel points in the target filter image according to the positions of all target segmentation objects in the corresponding filter image;
and carrying out binarization processing on the target filtered image to obtain a target segmentation image corresponding to the original image.
In an alternative embodiment, the second acquisition unit is specifically configured to perform:
and regarding any pixel point, taking the maximum value in the pixel values of the pixel point positions in the filter image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filter image.
In an alternative embodiment, the second acquisition unit is specifically configured to perform:
For a target pixel point, up-sampling the segmented image with the minimum resolution to the maximum resolution and using the pixel value at the position of the target pixel point in the resulting image as the pixel value of the pixel point at the corresponding position in the target filtered image, wherein a target pixel point is a pixel point whose pixel value is a preset pixel value in the filtered image corresponding to every target segmentation object; or
And regarding non-target pixel points, taking the maximum value in the pixel values of the non-target pixel point positions in the filtered image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filtered image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image segmentation method according to any one of the first aspects of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the image segmentation method of any one of the first aspects of embodiments of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product which, when run on an electronic device, causes the electronic device to perform the method of the above-described first aspect and any one of its possible implementations.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
According to the method and the device of the embodiments of the disclosure, the target image with the minimum resolution is obtained by downsampling the original image or other target images, and the segmented images corresponding to the other images are obtained by processing the result of segmenting the target image with the minimum resolution. Since the resolution of the smallest target image is much lower than that of the original image with the maximum resolution, a large amount of computation is saved in the most time-consuming step, the image segmentation itself. After the segmented image with the minimum resolution is obtained, edge filtering, upsampling and other processing are applied to it, so that a target segmentation image with the same resolution as the original image can be obtained. Downsampling, upsampling, edge filtering and the like can be effectively accelerated by hardware and involve less computation than a general image segmentation algorithm, so the performance can be greatly improved and the image segmentation efficiency effectively increased.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an application scenario shown in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of image segmentation according to an exemplary embodiment;
FIG. 3 is a schematic diagram of an image segmentation pyramid, shown according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a complete method of first image segmentation, according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a complete method of a second type of image segmentation, according to an exemplary embodiment;
FIG. 6 is a block diagram of an image segmentation apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment;
FIG. 8 is a block diagram of a computing device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Some words appearing hereinafter are explained:
1. The term "and/or" in the embodiments of the present disclosure describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
2. The term "electronic device" in embodiments of the present disclosure may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
3. The term "image pyramid" in the embodiments of the present disclosure refers to one kind of multi-scale representation of an image: an effective but conceptually simple structure that interprets an image at multiple resolutions. A pyramid of an image is a series of images, derived from the same original image, arranged in a pyramid shape with progressively lower resolution. It is obtained by repeated downsampling, which stops only when a certain termination condition is reached. The layered images are likened to a pyramid: the higher the level, the smaller the image and the lower its resolution. The bottom of the pyramid is a high-resolution representation of the image to be processed, while the top is a low-resolution approximation; moving up the pyramid, both size and resolution decrease (a construction sketch is given after this glossary).
4. The term "edge-preserving filtering" in the embodiments of the present disclosure refers to filtering methods that preserve the edges (detail information) of an image. Guided filtering, bilateral filtering (BF) and weighted least squares filtering (WLS) are three major edge-preserving filters. Of course, the function of guided filtering is not limited to edge preservation; it becomes an edge-preserving filter only when the guide image is the original image. It also has corresponding applications in image defogging and image matting.
5. The term "guided filtering" in the embodiments of the present disclosure refers to filtering a target image under the guidance of a guide image: a local linear model is established on the guide image, so that various linear transformations can be realized, and the filtered output image is produced. The guide image may be different from or identical to the target image, as needed. Let I be the guide image, p the target image and q the guided-filtering output image; guided filtering describes the relation between the guide image I and the output image q as a local linear model. When I = p, i.e. the target image and the guide image are the same image, the algorithm becomes an edge-preserving filter.
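To make the pyramid construction above concrete, here is a minimal sketch assuming OpenCV and NumPy are available; the function name, the number of levels and the fixed 0.5x factor are illustrative choices, not values fixed by the disclosure.

```python
import cv2

def build_image_pyramid(original, num_levels=3):
    """Build the image set described above: the original image plus target
    images obtained by repeatedly downsampling by a factor of 0.5.

    Returns the images ordered from the smallest resolution (pyramid top)
    to the largest resolution (pyramid bottom, i.e. the original image)."""
    levels = [original]
    for _ in range(num_levels - 1):
        # Each target image is obtained by downsampling the image with the
        # next larger resolution in the set.
        levels.append(cv2.pyrDown(levels[-1]))
    return levels[::-1]
```

And for the local linear model of guided filtering (q = a·I + b within each local window), a single-channel sketch using the standard box-filter formulation; the window radius and the regularization term eps are assumptions, not parameters specified by the disclosure.

```python
import cv2

def guided_filter(I, p, radius=8, eps=1e-3):
    """Guided filtering of target image p using guide image I (both float32
    arrays in [0, 1], single channel). Implements the local linear model
    q = a*I + b with window-averaged coefficients."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)   # normalized box filter

    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p         # covariance of guide and target
    var_I = mean(I * I) - mean_I * mean_I          # variance of the guide

    a = cov_Ip / (var_I + eps)                     # per-window linear coefficients
    b = mean_p - a * mean_I

    return mean(a) * I + mean(b)                   # filtered output q
```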
The application scenario described in the embodiments of the present disclosure is intended to describe the technical solution of the embodiments more clearly and does not constitute a limitation on it; as a person of ordinary skill in the art can appreciate, with the appearance of new application scenarios, the technical solution provided by the embodiments of the present disclosure is equally applicable to similar technical problems. In the description of the present disclosure, unless otherwise indicated, "a plurality" means two or more.
The following briefly describes an application scenario of an embodiment of the present disclosure:
Fig. 1 is a schematic view of an application scenario of an embodiment of the disclosure. The application scenario includes two terminal devices 110 and a server 130, and the relevant interface 120 can be accessed through the terminal devices 110. Communication between the terminal devices 110 and the server 130 may be performed through a communication network.
In an alternative embodiment, the communication network is a wired network or a wireless network.
In the embodiment of the present disclosure, the terminal device 110 is an electronic device used by a user; the electronic device may be a computer device with a certain computing capability, such as a personal computer, a mobile phone, a tablet computer, a notebook or an e-book reader, running instant messaging or social software and websites. Each terminal device 110 is connected to the server 130 through a wireless network, where the server 130 is a server cluster or cloud computing center formed by one or several servers, or a virtualization platform.
Optionally, the server 130 may also have an image database that may store a large number of images.
In the embodiment of the present disclosure, the terminal device 110 may directly segment the image according to the image segmentation method in the embodiment of the present disclosure, and display the image to the user through the interface 120; or, when receiving the request triggered by the user, the terminal device 110 sends the local image to the server 130, the server 130 segments the image after receiving the image, and sends the segmentation result to the terminal device 110, and the terminal device 110 displays the segmentation result to the user through the interface 120. Alternatively, when the terminal device 110 detects a request triggered by a user, the request may also be sent to the server 130, the server 130 searches for an image according to the request sent by the terminal device, and after image segmentation is performed on the searched image, the segmentation result is sent to the terminal device 110, and then the terminal device 110 displays the segmentation result to the user through the interface 120, and so on. Among other things, image segmentation techniques may be applied in many scenarios, such as medical image segmentation, face detection, etc.
The following describes in detail an image segmentation method according to an embodiment of the present disclosure:
fig. 2 is a flowchart illustrating an image segmentation method according to an exemplary embodiment, including the following steps, as shown in fig. 2.
In step S21, an image set to be processed is obtained, where the image set includes an original image and at least one target image, each target image is obtained by downsampling an image in the image set whose resolution is larger than that of the target image, and the resolution of the original image is the largest;
in step S22, sequentially obtaining segmented images corresponding to each resolution according to the order from the small resolution to the large resolution, and performing edge filtering processing on the segmented images corresponding to each resolution according to the images corresponding to each resolution in the image set to obtain filtered images corresponding to each resolution, wherein the segmented image with the minimum resolution is obtained by performing image segmentation on a target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by performing up-sampling on the filtered images obtained last time;
the object to be segmented refers to an object to be segmented when segmenting the image content, for example, in the case of classification, the image may be divided into a foreground and a background other than the foreground, where the foreground is the object to be segmented. Taking face detection as an example, a face can be regarded as a foreground, and other parts can be regarded as a background, and the face is a target segmentation object. In the case of multiple classification, there are multiple target division objects, for example, when an image is classified into three parts of sky, grass and road, the sky, grass and road all belong to the target division objects.
In the embodiment of the present disclosure, when the images are ranked according to the size of the image resolution, the images may be ranked according to the order of the image resolution from low to high or from high to low, which is not limited herein. An image pyramid, an inverted pyramid, or the like can be constructed based on the above-described process.
It should be noted that, in the embodiment of the present disclosure, the image sorting according to the size of the image resolution is only one possible sorting manner, and in addition, the image sorting according to the size, the scale, the area, and the like of the image may achieve the same effect, that is, in the embodiment of the present disclosure, the image resolution may be equally replaced by the image scale, the image size, the image area, and the like, which may represent the description of the size of the image, and is not limited herein.
In step S23, a target segmentation image corresponding to the original image is acquired from the filtered image of the maximum resolution.
According to the above scheme, a plurality of target images can be obtained by downsampling, but image segmentation is performed only on the target image with the minimum resolution; the segmented images corresponding to the other images are obtained by processing the segmentation result of the smallest target image. Since the resolution of the smallest target image is much lower than that of the original image with the maximum resolution, a large amount of computation is saved in the time-consuming image segmentation step. After the segmented image with the minimum resolution is obtained, edge filtering, upsampling and other processing are applied to it, so that a target segmentation image with the same resolution as the original image can be obtained. Downsampling, upsampling, edge filtering and the like can be effectively accelerated by hardware and involve less computation than a common image segmentation algorithm, so the performance can be greatly improved and the image segmentation efficiency effectively increased. A minimal sketch of this coarse-to-fine flow is given below.
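As a rough illustration of steps S21–S23 for the two-class (foreground/background) case, the following sketch reuses the build_image_pyramid and guided_filter helpers from the glossary section above; the segment argument stands for an arbitrary segmentation model, and all parameter values are assumptions rather than values given by the disclosure.

```python
import cv2
import numpy as np

def coarse_to_fine_segmentation(pyramid, segment, radius=8, eps=1e-3):
    """pyramid: images ordered from smallest to largest resolution (the last
    entry is the original image); segment: any function mapping the smallest
    image to a foreground probability map in [0, 1]."""
    # Step S22 begins: segmentation is run only once, at the minimum resolution.
    mask = segment(pyramid[0]).astype(np.float32)

    for level, image in enumerate(pyramid):
        guide = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        if level > 0:
            # Segmented image at this resolution = upsampled previous filtered image.
            h, w = guide.shape
            mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_LINEAR)
        # Edge filtering with the image of the same resolution as the guide.
        mask = guided_filter(guide, mask, radius, eps)

    # Step S23: binarize the maximum-resolution filtered image.
    return (mask > 0.5).astype(np.uint8)
```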
In the embodiment of the disclosure, the target image is obtained by downsampling, and specifically, the target image may be obtained by downsampling the original image or may be obtained by downsampling another target image with a resolution greater than that of the target image.
Taking the image pyramid shown in fig. 3 as an example, the pyramid has n layers in total, and the resolution of the images gradually decreases from bottom to top. The bottom (n-th) layer image is the original image with the maximum resolution, and the remaining n-1 layers are all target images, the top (1st) layer target image having the minimum resolution. Assume that, in every two adjacent layers, the resolution of the lower layer is twice that of the upper layer. Let the resolutions from top to bottom be K1 < K2 < K3 < … < Kn, where the bottom layer resolution is Kn and the top layer resolution is K1.
Taking the target image in layer n-2 as an example, its resolution is K(n-2). The images with a resolution larger than that of this target image are the original image of layer n (resolution Kn) and the target image of layer n-1 (resolution K(n-1)); therefore, this target image can be obtained either by 0.25x downsampling of the layer-n original image, or by 0.5x downsampling of the layer n-1 image.
In an embodiment of the disclosure, the image set includes at least two images, an original image and at least one target image, wherein the resolutions of different images in the image set are different, and the resolution of the original image is the largest. In the embodiment of the disclosure, the downsampling multiple may be set as appropriate, and is generally 0.5.
It should be noted that, when the image set includes only one target image, that is, when the image set includes two images, there is no iterative process in step S22; and when the number of target images contained in the image set is greater than one, that is, when at least three images are contained in the image set, there is an iterative process in step S22. That is, in the simplest case, when only two images are included in the image set, as the number of images in the image set increases, the number of iterations increases.
The implementation process of step S22 is described in detail below according to the difference of the number of images in the image set:
example one: the image set only comprises two images, and at the moment, the image set only comprises one target image, and the target image is obtained by downsampling an original image. In this case, the images are sorted according to the image resolution, and a two-layer image pyramid is obtained. When images are ordered by resolution, the image of the smallest resolution (layer 1) is adjacent to the image of the largest resolution (layer 2).
The following describes the specific implementation procedure of step S22, taking two-class segmentation as an example (for example, foreground-background segmentation, where the image content is divided into two classes):
firstly, performing operations such as image segmentation on a target image with minimum resolution (namely an image of a 1 st layer) in an image pyramid to obtain an image segmentation result with minimum resolution, namely a segmented image with minimum resolution;
then, according to a target image (image of layer 1) with minimum resolution in the image set, carrying out edge filtering processing on the segmented image with the minimum resolution to obtain a filtered image with the minimum resolution;
then, up-sampling the filtered image with the minimum resolution to obtain a segmented image with the same resolution as the layer 2 image (namely, the resolution of the original image);
then, carrying out edge filtering processing on the segmented image corresponding to the layer 2 according to the image (namely the original image) of the layer 2 to obtain a filtered image with the maximum resolution;
and finally, obtaining a target segmentation image corresponding to the original image based on the filtered image with the maximum resolution.
In the second example, when the image set includes at least three images, that is, the image set includes an original image and at least two target images, in this case, the images are sorted according to the image resolution, so as to obtain an image pyramid with at least three layers, where the image with the minimum resolution is located at the top layer, and the image with the maximum resolution is located at the bottom layer. In this case, step S22 is an iterative process, sequentially obtaining the segmented images corresponding to each resolution in order from the resolution to the resolution, and performing edge filtering processing on the segmented images corresponding to each resolution according to the images corresponding to each resolution in the image set, so as to obtain a specific implementation manner of the process of obtaining the filtered image corresponding to each resolution as follows:
Firstly, performing operations such as image segmentation on a target image with minimum resolution (namely an image of a 1 st layer) in an image pyramid to obtain an image segmentation result with minimum resolution, namely a segmented image with minimum resolution;
then, according to a target image with the minimum resolution in the image set, carrying out edge filtering processing on the segmented image with the minimum resolution to obtain a filtered image with the minimum resolution; up-sampling the filter image with the minimum resolution for one time, and up-sampling to obtain a segmented image corresponding to the layer 2, wherein the size of the filtered image is consistent with that of the layer 2 image; performing edge filtering processing on the obtained segmented image according to the layer 2 image to obtain a filtered image corresponding to the layer 2 image;
then, up-sampling is carried out on the obtained filtered image until the resolution of the obtained filtered image is consistent with that of the layer 3 image, and a segmented image corresponding to the layer 3 image is obtained; performing edge filtering processing on the obtained segmented image according to the layer 3 image to obtain a filtered image corresponding to the layer 3 image;
repeating the steps until reaching the bottommost layer, namely the nth layer, and obtaining a filtered image corresponding to the nth layer image, namely the filtered image with the maximum resolution;
And finally, obtaining a target segmentation image corresponding to the original image based on the filtered image with the maximum resolution.
The following details the implementation procedure of S22 and S23:
in an alternative embodiment, when performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, the specific process is as follows:
taking an image in an image set corresponding to any resolution as a guide image; and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
With the guided filtering method, the image in the image set corresponding to the resolution is used as the guide image, and the segmented image corresponding to that resolution is guided-filtered, so that the resulting filtered image remains basically similar to the segmented image while its texture follows the guide image; edge-guided filtering thus achieves edge smoothing and makes the segmentation boundaries clearer.
It should be noted that, other edge preserving filtering methods besides the guide filtering are also applicable to the embodiments of the present disclosure, and detail information such as edges in the image is preserved while the image is filtered.
In an optional implementation manner, after performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, before acquiring the target segmented image corresponding to the original image according to the filtered image corresponding to the maximum resolution in the image set, the method specifically includes: comparing the images in the image set corresponding to the resolution with the filtered images corresponding to the resolution; and carrying out fuzzy processing on the filtered image corresponding to the resolution according to the comparison result.
In the embodiment of the disclosure, after the filtered image corresponding to any resolution is obtained through edge-directed filtering or another filtering mode, the filtered image may be further compared with the image (the original image or a target image) of that resolution in the image set, and, according to the comparison result, the edge portions that conflict with the image of that resolution are blurred, so that the edges of the final result stay as consistent as possible with the edges of that image and the accuracy of the image segmentation is ensured.
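The disclosure does not spell out how the comparison result drives the blurring; purely as an assumption, one plausible reading is sketched below: where the filtered image shows an edge that is absent in the same-resolution guide image, that region is locally blurred. The Laplacian-based edge test and all thresholds are hypothetical.

```python
import cv2
import numpy as np

def blur_conflicting_edges(guide_gray, filtered, ksize=5, edge_thresh=0.05):
    """Hypothetical post-filtering step: blur the filtered image wherever its
    edges conflict with (do not appear in) the same-resolution image."""
    guide_edges = np.abs(cv2.Laplacian(guide_gray, cv2.CV_32F))
    result_edges = np.abs(cv2.Laplacian(filtered, cv2.CV_32F))
    # "Conflict": the filtered result has an edge where the guide is flat.
    conflict = (result_edges > edge_thresh) & (guide_edges < edge_thresh)

    blurred = cv2.GaussianBlur(filtered, (ksize, ksize), 0)
    out = filtered.copy()
    out[conflict] = blurred[conflict]
    return out
```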
After the process, a filtered image with the maximum resolution, namely a filtered image with the resolution consistent with that of the original image, can be obtained, and then the target segmentation image corresponding to the original image can be determined based on the filtered image.
It should be noted that, in the above process, in the two-class case there is exactly one segmented image and one filtered image per resolution; in the multi-class case the original image contains at least two target segmentation objects, and each resolution may correspond to one or to several segmented and filtered images, depending on the filtering mode. When the number of filtered images differs, the manner of determining the target segmentation image of the original image also differs. A detailed description follows:
in an alternative embodiment, when performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, the following two filtering modes may be adopted:
according to the first filtering mode, for any resolution, according to images in an image set corresponding to the resolution, edge filtering processing is sequentially carried out on each target segmentation object in the segmentation image corresponding to the resolution, so as to obtain a filtering image corresponding to the resolution; wherein, when the edge filtering processing is performed on the first target segmentation object, the edge filtering processing is performed on the segmentation image corresponding to the resolution; and then, when the edge filtering processing is carried out on other target segmentation objects, the edge filtering processing is carried out on the filtered image after the last time.
For any one resolution, for example Kn (the bottommost layer of the image pyramid):
Taking two-class segmentation as an example, there is only one target segmentation object (for example, the foreground); in this case only the edges of the foreground in the segmented image need to be filtered to obtain the filtered image corresponding to this resolution.
Taking multi-classification as an example, the target segmented object includes at least two objects, such as sky, grassland and highway, at this time, the edge of the first target segmented object may be filtered on the segmented image, then the edge of the second target segmented object may be filtered on the filtering result, and finally the edge of the third target segmented object may be filtered on the filtering result of the edge of the second target segmented object, to obtain a filtered image corresponding to the resolution.
In this way, each resolution corresponds to only one filtered image, regardless of whether it is classified into two or more. That is, the image segmentation method in the embodiment of the present disclosure is not only applicable to the case of two classifications, but also applicable to the case of multiple classifications, and can effectively improve the efficiency of image segmentation.
In the above embodiment, the order of the target segmentation objects may be predetermined or random, and is not particularly limited; a sketch of this sequential filtering is given below.
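A sketch of filtering mode one, under the assumption that the segmented image at a given resolution is a single label map and that "filtering the edges of one object" means guided-filtering a band around that object's boundary; the band width and the morphological edge test are illustrative, not taken from the disclosure. The guided_filter helper from the glossary section is reused.

```python
import cv2
import numpy as np

def sequential_edge_filtering(guide_gray, seg_labels, num_classes,
                              radius=8, eps=1e-3, band=7):
    """Filtering mode one (sketch): a single shared result image; the edge band
    of each target segmentation object is filtered in turn, each pass working
    on the output left by the previous object."""
    result = seg_labels.astype(np.float32)       # one filtered image per resolution
    kernel = np.ones((band, band), np.uint8)
    for c in range(num_classes):
        obj = (seg_labels == c).astype(np.uint8)
        # Edge band of this object: dilation minus erosion of its mask.
        edge_band = cv2.dilate(obj, kernel) - cv2.erode(obj, kernel)
        smoothed = guided_filter(guide_gray, result, radius, eps)
        result = np.where(edge_band > 0, smoothed, result)
    return result
```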
According to the second filtering mode, for any resolution, edge filtering processing is performed on each target segmentation object in the segmented image corresponding to that resolution according to the image in the image set corresponding to that resolution, so as to obtain a filtered image corresponding to each target segmentation object; whenever a target segmentation object is subjected to edge filtering processing, the processing is performed on the segmented image corresponding to that resolution; and the filtered image corresponding to each target segmentation object is taken as a filtered image corresponding to that resolution.
For any one resolution, for example Kn (the bottommost layer of the image pyramid):
taking the two classification listed in the first filtering mode as an example, the target segmentation object only comprises one object, and at the moment, only the edge of the foreground in the segmentation image needs to be filtered, so that a filtered image corresponding to the resolution can be obtained.
Still taking multi-classification listed in the first filtering mode as an example, the target segmentation objects comprise sky, grassland and highway, and for each target segmentation object, corresponding filtering processing is needed on the segmentation image corresponding to each target segmentation object, so as to obtain a filtered image corresponding to each target segmentation object.
Specifically, in this embodiment, at layer 1 the plurality of target segmentation objects still correspond to only one segmented image, while at every layer other than layer 1 each target segmentation object has its own segmented image. For example, at resolution K2 (layer 2), the edges of the 3 target segmentation objects are filtered on their segmented images to obtain 3 filtered images, one per target segmentation object; these 3 filtered images are upsampled to obtain 3 segmented images corresponding to the layer-3 image, one per target segmentation object; then, on the segmented image corresponding to each target segmentation object, the edges of that object are filtered to obtain its filtered image. This process is iterated until resolution Kn is reached, where there are 3 segmented images of resolution Kn corresponding to the 3 target segmentation objects: the edges of the sky are filtered on the first segmented image to obtain the filtered image corresponding to the sky, the edges of the grassland are filtered on the second segmented image to obtain the filtered image corresponding to the grassland, and the edges of the road are filtered on the third segmented image to obtain the filtered image corresponding to the road.
In this way, each target segmentation object corresponds to one filtered image, so in the multi-class case, where there are multiple target segmentation objects, one resolution corresponds to multiple filtered images. In the embodiment of the disclosure, multi-class segmentation only needs to be regarded as several two-class processes, each class being processed similarly in turn, so that the multi-class image segmentation algorithm is also accelerated. A sketch of this per-object filtering follows.
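Filtering mode two, sketched under the assumption that the segmentation at a given resolution is available as one soft probability channel per target object; each channel is filtered independently, always starting from the segmented image, so one filtered image per object is produced (again reusing guided_filter; all names are illustrative).

```python
import numpy as np

def per_object_edge_filtering(guide_gray, seg_probs, radius=8, eps=1e-3):
    """Filtering mode two (sketch): seg_probs is an H x W x C array with one
    probability channel per target segmentation object; returns a list with
    one filtered image per object."""
    filtered_per_object = []
    for c in range(seg_probs.shape[2]):
        prob_c = seg_probs[:, :, c].astype(np.float32)
        # Every object is filtered starting from the segmented image itself.
        filtered_per_object.append(guided_filter(guide_gray, prob_c, radius, eps))
    return filtered_per_object
```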
As can be seen from the above embodiments, the number of the filtered images corresponding to one resolution may be one or more; therefore, the manner in which the target divided image corresponding to the original image is acquired is different depending on the number of filtered images corresponding to the maximum resolution. Based on this, the following describes a case of acquiring a target divided image corresponding to an original image.
The filtered image corresponding to one resolution is one case one.
At this time, the filtered image with the maximum resolution is also one, and when the target segmentation image corresponding to the original image is obtained according to the filtered image with the maximum resolution, the filtered image with the maximum resolution can be directly subjected to binarization processing, so that the target segmentation image corresponding to the original image is obtained.
That is, the filtered image obtained by edge-directed filtering at the bottommost layer of the image pyramid is binarized by thresholding to obtain the final target segmentation image.
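For case one, the binarization step reduces to a simple threshold; the value 0.5 below is an assumed threshold for a [0, 1] probability map, not one given by the disclosure.

```python
import numpy as np

def binarize(filtered_max_res, threshold=0.5):
    """Threshold the maximum-resolution filtered image (a foreground
    probability map) into the final target segmentation image."""
    return (filtered_max_res > threshold).astype(np.uint8)
```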
In the second case, the number of filtered images corresponding to one resolution is plural.
When the target segmentation images corresponding to the original images are acquired according to the filter images with the maximum resolution, the pixel values of all pixel points in the target filter images are determined according to the positions of all target segmentation objects in the corresponding filter images; and further, the target filtered image is directly subjected to binarization processing, and a target segmentation image corresponding to the original image is obtained.
In the embodiments of the present disclosure, there are various ways of acquiring a target filtered image based on a plurality of filtered images corresponding to a maximum resolution, and two types of methods are listed below:
in an alternative embodiment, when determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image, the following manner may be adopted:
and regarding any pixel point, taking the maximum value in the pixel values of the pixel point positions in the filter image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filter image.
For resolution Kn, taking the multi-class case listed above as an example, the sky, the grassland and the road each correspond to one filtered image, denoted A1, A2 and A3 respectively. These 3 filtered images can be understood as probability maps: the pixel value of each pixel point in A1 represents the probability that the pixel is sky, the pixel value in A2 the probability that it is grassland, and the pixel value in A3 the probability that it is road. For the pixel point at any position, let its pixel value be a1 in A1, a2 in A2 and a3 in A3; assuming a1 > a2 > a3, the pixel value of that pixel point in the target filtered image is determined to be a1. The pixel values of the pixel points at all other positions in the target filtered image are determined in the same way, thereby obtaining the target filtered image.
In the above embodiment, the target filtered image may be determined based on the pixel values of the pixel points in the filtered image corresponding to each target segmentation object, which provides a simple and efficient target filtered image determining method.
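The max-value fusion just described can be sketched as follows; the per-pixel argmax is returned as well, since in the multi-class case it identifies which target object "won" at each pixel (an illustrative convenience, not a step named by the disclosure).

```python
import numpy as np

def fuse_by_max(filtered_per_object):
    """Fuse the per-object filtered images of the maximum resolution into one
    target filtered image by taking the per-pixel maximum."""
    stacked = np.stack(filtered_per_object, axis=-1)    # H x W x C
    target_filtered = stacked.max(axis=-1)              # largest probability per pixel
    winning_object = stacked.argmax(axis=-1)            # index of that object
    return target_filtered, winning_object
```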
In another alternative embodiment, when determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image, the following manner may be adopted:
For a target pixel point, up-sampling the segmented image with the minimum resolution to obtain a pixel value of a target pixel point position in the image with the maximum resolution, wherein the pixel value is used as a pixel value of a pixel point at a corresponding position in a target filtering image, and the target pixel point is a pixel point at a position where the pixel value of the filtering image corresponding to each target segmented object is a preset pixel value; and regarding the non-target pixel points, taking the maximum value in the pixel values of the non-target pixel point positions in the filtered image corresponding to each target segmentation object as the pixel value of the pixel point at the corresponding position in the target filtered image.
In this way, the pixels in one image are divided into two types, namely, a target pixel and a non-target pixel, and the multi-classification case listed in the above embodiment is still taken as an example in the following, and for 3 filtered images A1, A2 and A3, if the pixel values of the pixels at a certain position in the 3 images are the same and are all the preset pixel values (for example, 0), the pixel is indicated as the target pixel, otherwise, the pixel is the non-target pixel.
A preset pixel value of 0 indicates that the probability that the pixel point belongs to the target segmentation object is 0; therefore, when the pixel values at the same position in the 3 images are all 0, the pixel point is neither sky nor grassland nor road. In this case, the segmented image with the minimum resolution is upsampled to resolution Kn, and the pixel value at this position in the resulting image is used to determine the pixel value at this position in the target filtered image. For non-target pixel points, the method described in the previous embodiment may be used.
For example, for a target pixel point X1, the pixel value is 0 at this position in A1, A2 and A3, but the pixel value at position X1 in the image obtained by upsampling the minimum-resolution segmented image to Kn is x; in that case the pixel value at position X1 in the target filtered image is determined to be x. For a non-target pixel point X2, the pixel values in A1, A2 and A3 are b1, b2 and b3 respectively, with b3 > b2 > b1, so the pixel value at position X2 in the target filtered image is b3. In this way the pixel value of the pixel point at every position in the target filtered image can be determined, thereby obtaining the target filtered image.
In the above embodiment, the target filtered image may be determined based on the pixel values of the pixel points in the filtered image corresponding to each target segmentation object, which also provides a simple and efficient target filtered image determining method. In addition, the pixel value of the target pixel point is determined by utilizing the segmented image with the minimum resolution, so that the accuracy is improved.
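The variant with target pixel points can be sketched as below, assuming the preset pixel value is 0 and bilinear upsampling of the minimum-resolution segmented image; both are assumptions consistent with the example above.

```python
import cv2
import numpy as np

def fuse_with_fallback(filtered_per_object, min_res_segmentation, out_size):
    """Pixels whose values are 0 in every per-object filtered image (target
    pixel points) take their value from the minimum-resolution segmented image
    upsampled to the maximum resolution; all other pixels take the per-object
    maximum. out_size is (width, height) of the maximum resolution."""
    stacked = np.stack(filtered_per_object, axis=-1)          # H x W x C
    fused = stacked.max(axis=-1)

    upsampled = cv2.resize(min_res_segmentation.astype(np.float32), out_size,
                           interpolation=cv2.INTER_LINEAR)

    target_pixels = np.all(stacked == 0, axis=-1)             # preset value 0 everywhere
    fused[target_pixels] = upsampled[target_pixels]
    return fused
```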
It should be noted that the above image segmentation method is applicable to any task of assigning labels to pixels, for example foreground segmentation, i.e. assigning a foreground or background label to each pixel; in other words, any image segmentation task may be accelerated by the method in the embodiments of the present disclosure. For multi-classification image segmentation, such as image semantic segmentation, each class only needs to be processed in turn in the same way, i.e. in the segmentation manner listed in the above embodiment. The method is suitable for any segmentation algorithm; operations such as downsampling and edge-directed filtering are easy to accelerate in hardware and have a smaller computational cost than running a general image segmentation algorithm at full resolution, so the performance can be greatly improved. In addition, the edge-directed filtering may be replaced by other edge-preserving filtering methods, such as bilateral filtering or least squares filtering, which are not particularly limited herein.
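For reference, edge-directed (guided) filtering is itself inexpensive: it reduces to a handful of box filters. The sketch below is a minimal single-channel guided filter in the style of He et al., written with OpenCV box filters; the radius and eps defaults are illustrative assumptions rather than values prescribed by the embodiments.

```python
import cv2

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Minimal single-channel guided filter; guide and src are float32 images
    in [0, 1] of the same size. radius/eps are illustrative defaults only."""
    ksize = (2 * radius + 1, 2 * radius + 1)

    def mean(x):
        # Normalized box filter = local mean over the window.
        return cv2.boxFilter(x, -1, ksize)

    mean_I, mean_p = mean(guide), mean(src)
    corr_Ip = mean(guide * src)
    corr_II = mean(guide * guide)

    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I

    return mean(a) * guide + mean(b)    # edge-aware smoothing of src
```

An edge-preserving alternative such as bilateral filtering (e.g. cv2.bilateralFilter) could be substituted here, as noted above.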
The following describes a complete image segmentation method, taking two-class segmentation as an example. Fig. 4 is a flowchart of a complete image segmentation method according to an exemplary embodiment, which specifically includes the following steps (an illustrative code sketch follows the step list):
S41: downsample the original image to obtain an image pyramid with three layers, where the top layer has the minimum resolution and the bottom layer has the maximum resolution;
S42: perform an image segmentation operation on the top-layer image to obtain a segmented image corresponding to the top layer;
S43: perform edge-directed filtering on the segmented image corresponding to the top layer according to the top-layer image, to obtain a filtered image corresponding to the top layer;
S44: up-sample the filtered image corresponding to the top layer to obtain a segmented image corresponding to the middle layer;
S45: perform edge-directed filtering on the segmented image corresponding to the middle layer according to the middle-layer image, to obtain a filtered image corresponding to the middle layer;
S46: up-sample the filtered image corresponding to the middle layer to obtain a segmented image corresponding to the bottom layer;
S47: perform edge-directed filtering on the segmented image corresponding to the bottom layer according to the bottom-layer image, to obtain a filtered image corresponding to the bottom layer;
S48: binarize the filtered image of the bottom layer to obtain the target segmentation image of the original image.
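As an aid to reading the step list above, the following is a minimal sketch of S41-S47, with S48 shown as a usage comment. It reuses the hypothetical guided_filter sketch given earlier; segment_fn is a placeholder for whatever binary segmentation algorithm is being accelerated, and all names and defaults are illustrative assumptions.

```python
import cv2
import numpy as np

def refine_with_pyramid(original_gray, segment_fn, levels=3, radius=8, eps=1e-3):
    """Coarse-to-fine sketch of S41-S47: segment at the top of the pyramid,
    then alternately filter and up-sample down to full resolution.
    original_gray: float32 grayscale image in [0, 1] used as the guide.
    segment_fn: hypothetical binary segmenter returning a float mask in [0, 1]."""
    # S41: three-level pyramid; index 0 = top (smallest), last = bottom (original).
    pyramid = [original_gray]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    pyramid = pyramid[::-1]

    # S42: run the (expensive) segmentation only on the smallest image.
    mask = segment_fn(pyramid[0]).astype(np.float32)

    for i, guide in enumerate(pyramid):
        # S43 / S45 / S47: edge-directed filtering at this level, with the
        # same-resolution image as the guide.
        mask = guided_filter(guide, mask, radius, eps)
        if i + 1 < len(pyramid):
            # S44 / S46: up-sample the filtered mask to the next, larger level.
            h, w = pyramid[i + 1].shape
            mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_LINEAR)
    return mask  # filtered image at the maximum resolution

# S48: binarize the maximum-resolution filtered image to get the target segmentation:
# binary_mask = (refine_with_pyramid(gray, my_segmenter) >= 0.5).astype(np.uint8)
```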
Fig. 4 illustrates the specific flow when the image segmentation method in the embodiment of the present disclosure is applied to two-class segmentation; in the multi-class case, the task needs to be treated as multiple two-class problems.
The following describes a complete image segmentation method taking multi-class segmentation as an example, where the target segmentation objects are an object A, an object B and an object C, i.e. the image content is classified into three classes. Fig. 5 is a flowchart of a complete image segmentation method according to an exemplary embodiment, which specifically includes the following steps (an illustrative code sketch follows the step list):
S51: downsample the original image to obtain an image pyramid with three layers, where the top layer has the minimum resolution and the bottom layer has the maximum resolution;
S52: perform an image segmentation operation on the top-layer image to obtain a segmented image corresponding to the top layer;
S53: perform edge-directed filtering on object A in the segmented image corresponding to the top layer according to the top-layer image, to obtain a filtered image corresponding to the top layer;
S54: up-sample the filtered image corresponding to the top layer to obtain a segmented image corresponding to the middle layer;
S55: perform edge-directed filtering on object A in the segmented image corresponding to the middle layer according to the middle-layer image, to obtain a filtered image corresponding to the middle layer;
S56: up-sample the filtered image corresponding to the middle layer to obtain a segmented image corresponding to the bottom layer;
S57: perform edge-directed filtering on object A in the segmented image corresponding to the bottom layer according to the bottom-layer image, to obtain a filtered image corresponding to the bottom layer, which is the filtered image with the maximum resolution corresponding to object A;
S53': perform edge-directed filtering on object B in the segmented image corresponding to the top layer according to the top-layer image, to obtain a filtered image corresponding to the top layer;
S54': up-sample the filtered image corresponding to the top layer to obtain a segmented image corresponding to the middle layer;
S55': perform edge-directed filtering on object B in the segmented image corresponding to the middle layer according to the middle-layer image, to obtain a filtered image corresponding to the middle layer;
S56': up-sample the filtered image corresponding to the middle layer to obtain a segmented image corresponding to the bottom layer;
S57': perform edge-directed filtering on object B in the segmented image corresponding to the bottom layer according to the bottom-layer image, to obtain a filtered image corresponding to the bottom layer, which is the filtered image with the maximum resolution corresponding to object B;
S53'': perform edge-directed filtering on object C in the segmented image corresponding to the top layer according to the top-layer image, to obtain a filtered image corresponding to the top layer;
S54'': up-sample the filtered image corresponding to the top layer to obtain a segmented image corresponding to the middle layer;
S55'': perform edge-directed filtering on object C in the segmented image corresponding to the middle layer according to the middle-layer image, to obtain a filtered image corresponding to the middle layer;
S56'': up-sample the filtered image corresponding to the middle layer to obtain a segmented image corresponding to the bottom layer;
S57'': perform edge-directed filtering on object C in the segmented image corresponding to the bottom layer according to the bottom-layer image, to obtain a filtered image corresponding to the bottom layer, which is the filtered image with the maximum resolution corresponding to object C;
S58: obtain the pixel value of each pixel point in the target filtered image according to the filtered images with the maximum resolution corresponding to object A, object B and object C;
S59: binarize the target filtered image to obtain the target segmentation image of the original image.
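A corresponding sketch of this multi-class flow is given below. It reuses the hypothetical refine_with_pyramid function from the two-class sketch and fuses the three per-object filtered images with a per-pixel maximum; the target/non-target fallback from the earlier fuse_filtered_maps sketch could equally be applied at S58. The 0.5 threshold and the label convention are illustrative assumptions.

```python
import numpy as np

def segment_three_classes(original_gray, seg_fn_a, seg_fn_b, seg_fn_c):
    """Sketch of S51-S59 for objects A, B and C: run the coarse-to-fine
    refinement once per object, then fuse per pixel."""
    refined = [refine_with_pyramid(original_gray, fn)   # S51-S57 per object
               for fn in (seg_fn_a, seg_fn_b, seg_fn_c)]
    stacked = np.stack(refined, axis=0)

    # S58: target filtered image, here the per-pixel maximum over the three maps.
    target_filtered = stacked.max(axis=0)

    # S59: binarization; also record which object wins each foreground pixel.
    labels = stacked.argmax(axis=0) + 1                 # 1 = A, 2 = B, 3 = C
    foreground = target_filtered >= 0.5
    return np.where(foreground, labels, 0)              # 0 = background
```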
Based on the same inventive concept, there is also provided an image segmentation apparatus in an embodiment of the present disclosure, and fig. 6 is a schematic diagram of an image segmentation apparatus 600 according to an exemplary embodiment, and referring to fig. 6, the apparatus includes a first acquisition unit 601, a processing unit 602, and a second acquisition unit 603.
A first acquisition unit 601, configured to acquire an image set to be processed, where the image set includes an original image and at least one target image, each target image is obtained by downsampling an image in the image set with a larger resolution than the target image, and the resolution of the original image is the largest;
a processing unit 602, configured to sequentially obtain segmented images corresponding to each resolution in order of resolution from small to large, and perform edge filtering processing on the segmented images corresponding to each resolution according to the images corresponding to each resolution in the image set to obtain filtered images corresponding to each resolution, where the segmented image with the minimum resolution is obtained by image segmentation of a target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by up-sampling the filtered images obtained last time;
The second acquisition unit 603 is configured to acquire a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution.
In an alternative embodiment, the processing unit 602 is specifically configured to perform:
taking the images in the image set corresponding to any resolution as guiding images;
and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
In an alternative embodiment, after the processing unit 602 performs edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to that resolution in the image set to obtain the filtered image corresponding to each resolution, and before the second acquisition unit 603 acquires the target segmentation image corresponding to the original image according to the filtered image with the maximum resolution, the processing unit 602 is further configured to perform:
comparing the images in the image set corresponding to the resolution with the filtered images corresponding to the resolution;
and blurring the filtered image corresponding to that resolution according to the comparison result.
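The embodiments do not spell out the comparison here, so the following is only one plausible, heavily hedged reading: blur the filtered image where it shows strong transitions that the same-resolution guide image does not support. The function name, the Laplacian-based comparison and the thresholds are all assumptions made for illustration.

```python
import cv2
import numpy as np

def blur_unsupported_edges(image, filtered_mask, ksize=9, grad_thresh=0.05):
    """Illustrative compare-then-blur step: smooth the filtered mask where it
    has strong edges that the same-resolution image does not support.
    Thresholds and kernel size are arbitrary placeholders."""
    img_grad = cv2.Laplacian(image, cv2.CV_32F)
    mask_grad = cv2.Laplacian(filtered_mask, cv2.CV_32F)

    # Comparison result: mask edge present where no image edge is present.
    unsupported = (np.abs(mask_grad) > grad_thresh) & (np.abs(img_grad) < grad_thresh)

    blurred = cv2.GaussianBlur(filtered_mask, (ksize, ksize), 0)
    return np.where(unsupported, blurred, filtered_mask)
```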
In an alternative embodiment, the processing unit 602 is specifically configured to perform:
for any resolution, sequentially performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution according to the images in the image set corresponding to the resolution to obtain a filtered image corresponding to the resolution;
when edge filtering processing is performed on the first target segmentation object, it is performed on the segmented image corresponding to that resolution; when edge filtering processing is subsequently performed on the other target segmentation objects, it is performed on the most recently obtained filtered image.
In an alternative embodiment, the processing unit 602 is specifically configured to perform:
for any resolution, according to the images in the image set corresponding to the resolution, performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution to obtain a filtering image corresponding to each target segmentation object; when each target segmentation object is subjected to edge filtering processing, the edge filtering processing is performed on the segmentation image corresponding to the resolution;
And taking the filtered image corresponding to each target segmentation object as the filtered image corresponding to the resolution.
In an alternative embodiment, there is one filtered image with the maximum resolution;
the second acquisition unit 603 is specifically configured to perform:
and carrying out binarization processing on the filtered image with the maximum resolution to obtain a target segmentation image corresponding to the original image.
In an alternative embodiment, there are a plurality of filtered images with the maximum resolution;
the second acquisition unit 603 is specifically configured to perform:
determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image;
and carrying out binarization processing on the target filtered image to obtain a target segmentation image corresponding to the original image.
In an alternative embodiment, the second obtaining unit 603 is specifically configured to perform:
and for any pixel point, taking the maximum value among the pixel values at that pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
In an alternative embodiment, the second obtaining unit 603 is specifically configured to perform:
For a target pixel point, up-sampling the segmented image with the minimum resolution to the maximum resolution and taking the pixel value at the target pixel point position in the up-sampled image as the pixel value of the pixel point at the corresponding position in the target filtered image, where a target pixel point is a pixel point at a position where the pixel value in the filtered image corresponding to every target segmentation object is a preset pixel value; or
for non-target pixel points, taking the maximum value among the pixel values at the non-target pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
For convenience of description, the above parts are described as being divided into modules (or units) by function. Of course, when implementing the present disclosure, the functions of the modules (or units) may be implemented in one or more pieces of software or hardware.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment; the device includes:
a processor 710;
a memory 720 for storing instructions executable by the processor 710;
wherein the processor 710 is configured to execute the instructions to implement the image segmentation method in the embodiments of the present disclosure.
In an exemplary embodiment, a storage medium is also provided, such as the memory 720 including instructions executable by the processor 710 of the electronic device 700 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In some possible implementations, a computing device according to the present disclosure may include at least one processing unit, and at least one storage unit. Wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps in the image segmentation method according to various exemplary embodiments of the present disclosure described above in the present specification. For example, the processing unit may perform the steps as shown in fig. 2.
A computing device 80 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The computing device 80 of fig. 8 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present disclosure.
As shown in Fig. 8, the computing device 80 is in the form of a general-purpose computing device. Components of the computing device 80 may include, but are not limited to: the at least one processing unit 81, the at least one storage unit 82, and a bus 83 connecting the different system components (including the storage unit 82 and the processing unit 81).
Bus 83 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 82 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 82 may also include a program/utility 825 having a set (at least one) of program modules 824, such program modules 824 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The computing device 80 may also communicate with one or more external devices 84 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with the computing device 80, and/or any devices (e.g., routers, modems, etc.) that enable the computing device 80 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 85. Moreover, computing device 80 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 86. As shown, network adapter 86 communicates with other modules for computing device 80 over bus 83. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computing device 80, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of the image segmentation method provided by the present disclosure may also be implemented in the form of a program product comprising program code for causing a computer device to perform the steps in the image segmentation method according to the various exemplary embodiments of the present disclosure described above, when the program product is run on a computer device, e.g. the computer device may perform the steps as shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present disclosure may employ a portable compact disc read only memory (CD-ROM) and include program code and may run on a computing device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (18)

1. An image segmentation method, comprising:
Acquiring an image set to be processed, wherein the image set comprises an original image and at least one target image, the original image comprises at least one target segmentation object, the target image is obtained by downsampling according to an image with larger resolution than the target image in the image set, and the resolution of the original image is the largest;
sequentially obtaining segmented images corresponding to each resolution in order of resolution from small to large, and performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to that resolution in the image set to obtain a filtered image corresponding to each resolution, wherein the segmented image with the minimum resolution is obtained by performing image segmentation on the target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by up-sampling the most recently obtained filtered image; if there are a plurality of target segmentation objects, each resolution corresponds to at least one filtered image;
comparing the images in the image set corresponding to the resolution with the filtered images corresponding to the resolution;
blurring the filtered image corresponding to that resolution according to the comparison result;
And acquiring a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution.
2. The method according to claim 1, wherein the performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution specifically includes:
taking the images in the image set corresponding to any resolution as guiding images;
and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
3. The method of claim 1, wherein performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, includes:
for any resolution, sequentially performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution according to the images in the image set corresponding to the resolution to obtain a filtered image corresponding to the resolution;
when edge filtering processing is performed on the first target segmentation object, it is performed on the segmented image corresponding to that resolution; when edge filtering processing is subsequently performed on the other target segmentation objects, it is performed on the most recently obtained filtered image.
4. The method of claim 1, wherein performing edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to each resolution in the image set to obtain a filtered image corresponding to each resolution, includes:
for any resolution, according to the images in the image set corresponding to the resolution, performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution to obtain a filtering image corresponding to each target segmentation object; when each target segmentation object is subjected to edge filtering processing, the edge filtering processing is performed on the segmentation image corresponding to the resolution;
and taking the filtered image corresponding to each target segmentation object as the filtered image corresponding to the resolution.
5. The method of any of claims 1-4, wherein there is one filtered image with the maximum resolution;
The obtaining the target segmentation image corresponding to the original image according to the filtered image with the maximum resolution comprises the following steps:
and carrying out binarization processing on the filtered image with the maximum resolution to obtain a target segmentation image corresponding to the original image.
6. The method of any of claims 1-4, wherein there are a plurality of filtered images with the maximum resolution;
the obtaining the target segmentation image corresponding to the original image according to the filtered image with the maximum resolution comprises the following steps:
determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image;
and carrying out binarization processing on the target filtered image to obtain a target segmentation image corresponding to the original image.
7. The method of claim 6, wherein determining the pixel value of each pixel in the target filtered image based on the position of each target segmented object in the corresponding filtered image comprises:
and for any pixel point, taking the maximum value among the pixel values at that pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
8. The method of claim 6, wherein determining the pixel value of each pixel in the target filtered image based on the position of each target segmented object in the corresponding filtered image comprises:
for a target pixel point, up-sampling the segmented image with the minimum resolution to the maximum resolution and taking the pixel value at the target pixel point position in the up-sampled image as the pixel value of the pixel point at the corresponding position in the target filtered image, where a target pixel point is a pixel point at a position where the pixel value in the filtered image corresponding to every target segmentation object is a preset pixel value; or
for non-target pixel points, taking the maximum value among the pixel values at the non-target pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
9. An image dividing apparatus, comprising:
a first acquisition unit configured to acquire an image set to be processed, wherein the image set includes an original image and at least one target image, the original image includes at least one target segmentation object, each target image is obtained by downsampling an image in the image set with a larger resolution than the target image, and the resolution of the original image is the largest;
a processing unit configured to sequentially acquire segmented images corresponding to each resolution in order of resolution from small to large, and perform edge filtering processing on the segmented image corresponding to each resolution according to the image corresponding to that resolution in the image set to obtain a filtered image corresponding to each resolution, wherein the segmented image with the minimum resolution is obtained by performing image segmentation on the target image with the minimum resolution in the image set, and the segmented images with other resolutions are obtained by up-sampling the most recently obtained filtered image; if there are a plurality of target segmentation objects, each resolution corresponds to at least one filtered image; compare the image in the image set corresponding to a resolution with the filtered image corresponding to that resolution; and blur the filtered image corresponding to that resolution according to the comparison result;
and a second acquisition unit configured to acquire a target segmentation image corresponding to the original image according to the filtered image with the maximum resolution.
10. The image segmentation apparatus according to claim 9, characterized in that the processing unit is specifically configured to perform:
Taking the images in the image set corresponding to any resolution as guiding images;
and performing guided filtering on the segmented image corresponding to the resolution according to the guided image to obtain a filtered image corresponding to the resolution.
11. The apparatus of claim 9, wherein the processing unit is specifically configured to perform:
for any resolution, sequentially performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution according to the images in the image set corresponding to the resolution to obtain a filtered image corresponding to the resolution;
when edge filtering processing is performed on the first target segmentation object, it is performed on the segmented image corresponding to that resolution; when edge filtering processing is subsequently performed on the other target segmentation objects, it is performed on the most recently obtained filtered image.
12. The apparatus of claim 9, wherein the processing unit is specifically configured to perform:
for any resolution, according to the images in the image set corresponding to the resolution, performing edge filtering processing on each target segmentation object in the segmentation image corresponding to the resolution to obtain a filtering image corresponding to each target segmentation object; when each target segmentation object is subjected to edge filtering processing, the edge filtering processing is performed on the segmentation image corresponding to the resolution;
And taking the filtered image corresponding to each target segmentation object as the filtered image corresponding to the resolution.
13. The apparatus of any of claims 9-12, wherein there is one filtered image with the maximum resolution;
the second acquisition unit is specifically configured to perform:
and carrying out binarization processing on the filtered image with the maximum resolution to obtain a target segmentation image corresponding to the original image.
14. The apparatus of any of claims 9-12, wherein there are a plurality of filtered images with the maximum resolution;
the second acquisition unit is specifically configured to perform:
determining the pixel value of each pixel point in the target filtered image according to the position of each target segmentation object in the corresponding filtered image;
and carrying out binarization processing on the target filtered image to obtain a target segmentation image corresponding to the original image.
15. The apparatus of claim 14, wherein the second acquisition unit is specifically configured to perform:
and for any pixel point, taking the maximum value among the pixel values at that pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
16. The apparatus of claim 14, wherein the second acquisition unit is specifically configured to perform:
for a target pixel point, up-sampling the segmented image with the minimum resolution to the maximum resolution and taking the pixel value at the target pixel point position in the up-sampled image as the pixel value of the pixel point at the corresponding position in the target filtered image, where a target pixel point is a pixel point at a position where the pixel value in the filtered image corresponding to every target segmentation object is a preset pixel value; or
for non-target pixel points, taking the maximum value among the pixel values at the non-target pixel point position in the filtered images corresponding to the target segmentation objects as the pixel value of the pixel point at the corresponding position in the target filtered image.
17. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image segmentation method of any one of claims 1 to 8.
18. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image segmentation method of any one of claims 1-8.
CN202010403656.3A 2020-05-13 2020-05-13 Image segmentation method, device, electronic equipment and storage medium Active CN111612791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010403656.3A CN111612791B (en) 2020-05-13 2020-05-13 Image segmentation method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111612791A CN111612791A (en) 2020-09-01
CN111612791B true CN111612791B (en) 2023-11-28

Family

ID=72200199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010403656.3A Active CN111612791B (en) 2020-05-13 2020-05-13 Image segmentation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111612791B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304220B2 (en) * 2016-08-31 2019-05-28 International Business Machines Corporation Anatomy segmentation through low-resolution multi-atlas label fusion and corrective learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800094A (en) * 2012-07-13 2012-11-28 南京邮电大学 Fast color image segmentation method
CN110084818A (en) * 2019-04-29 2019-08-02 清华大学深圳研究生院 Dynamic down-sampled images dividing method
CN110197491A (en) * 2019-05-17 2019-09-03 上海联影智能医疗科技有限公司 Image partition method, device, equipment and storage medium
CN110866878A (en) * 2019-11-13 2020-03-06 首都师范大学 Multi-scale denoising method for low-dose X-ray CT image
CN110930416A (en) * 2019-11-25 2020-03-27 宁波大学 MRI image prostate segmentation method based on U-shaped network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fast two-stage segmentation based on local correntropy-based K-means clustering; Yangyang Song et al.; 2017 IEEE 9th International Conference on Communication Software and Networks (ICCSN); full text *
Multi-resolution image segmentation based on wavelet transform; Liu Haihua et al.; Computer Engineering and Applications; full text *
Color image segmentation method under multi-resolution; Xu Lixian; Pan Jianshou; Tang Hongzhen; Modern Electronics Technique (No. 12); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant