WO2021212913A1 - Image segmentation method, apparatus, device, and medium - Google Patents

Image segmentation method, apparatus, device, and medium

Info

Publication number
WO2021212913A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
segmentation
target
grayscale
Prior art date
Application number
PCT/CN2020/141617
Other languages
English (en)
French (fr)
Inventor
华静
Original Assignee
杭州涂鸦信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州涂鸦信息技术有限公司
Priority to US17/913,560 (published as US20230377158A1)
Publication of WO2021212913A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 5/73 Deblurring; Sharpening
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/155 Segmentation; Edge detection involving morphological operators
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/20036 Morphological image processing
    • G06T 2207/20041 Distance transform

Definitions

  • This application relates to the field of image processing technology, and in particular to an image segmentation method, device, equipment, and medium.
  • Image segmentation refers to the technology and process of dividing an image into regions with distinct characteristics and extracting the objects of interest; it is a very important step in image processing, detection, and analysis. In image segmentation, missing targets have always been a difficult problem.
  • In the prior art, the OTSU maximum between-class variance method is usually adopted to address missing segmentation targets. However, for complex background images with noise interference, uneven illumination, and large variations in background gray levels, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of every region of the image, so targets are missed and effective segmentation is difficult.
  • The embodiments of the present application provide an image segmentation method, apparatus, device, and medium, which can avoid missing segmentation targets and thereby improve the effectiveness of image segmentation.
  • In one embodiment of the present application, an image segmentation method is provided, including:
  • performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image;
  • determining the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including an average gray value, a maximum gray value, and a minimum gray value;
  • judging, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition; if the first sub-image satisfies the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, not segmenting the first sub-image;
  • binarizing, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a first binarized image corresponding to the target grayscale image; and performing watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
  • Optionally, before performing the first segmentation on the target grayscale image to obtain the corresponding first sub-image set, the method further includes: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain a corresponding filtered image.
  • Optionally, after filtering the converted grayscale image to obtain the corresponding filtered image, the method further includes: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
  • Optionally, filtering the converted grayscale image to obtain the corresponding filtered image includes:
  • Bilateral filtering is performed on the converted gray image to obtain the filtered image.
  • Optionally, performing watershed segmentation on the target color image by using the first binarized image to obtain the corresponding segmented image includes:
  • performing a distance transform on the first binarized image to obtain a distance-transformed image; normalizing the distance-transformed image to obtain a normalized image; binarizing the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image; and
  • determining the second binarized image as a first marked image, and performing watershed segmentation on the target color image by using the first marked image to obtain the corresponding segmented image.
  • Optionally, before performing the distance transform on the first binarized image to obtain the distance-transformed image, the method further includes:
  • performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
  • Optionally, performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening includes:
  • eroding the first binarized image, and determining the eroded first binarized image as a second marked image; determining the first binarized image before erosion as a mask image; and continuously dilating the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
  • an image segmentation device including:
  • the first image segmentation module is configured to perform the first segmentation of the target grayscale image to obtain the corresponding first sub-image set;
  • the target grayscale image is a grayscale image corresponding to the target color image;
  • the gray-scale data determining module is configured to determine the target gray-scale data corresponding to the first sub-image by using the gray-scale histogram corresponding to each first sub-image in the first sub-image set; the target gray-scale data includes Average gray value, maximum gray value and minimum gray value;
  • a segmentation condition determination module configured to determine whether the corresponding first sub-image satisfies a preset segmentation condition by using the target grayscale data
  • the second image segmentation module is configured to perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set if the segmentation condition determination module determines that the first sub-image satisfies the preset segmentation condition, and not to segment the first sub-image if the segmentation condition determination module determines that the first sub-image does not satisfy the preset segmentation condition;
  • the image binarization module is configured to binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image;
  • the watershed segmentation module is configured to use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
  • an image segmentation device including a processor and a memory; wherein,
  • the memory is configured to store a computer program
  • the processor is configured to execute the computer program to implement the aforementioned image segmentation method.
  • a computer-readable storage medium is also provided, which is configured to store a computer program, wherein the computer program is executed by a processor to implement the aforementioned image segmentation method.
  • It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
  • FIG. 1 is a flowchart of an image segmentation method disclosed in this application.
  • FIG. 2 is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in this application.
  • FIG. 3 is a flowchart of a specific image segmentation method disclosed in this application.
  • FIG. 4 is a flowchart of a specific image segmentation method disclosed in this application.
  • FIG. 5 is a schematic diagram of the structure of an image segmentation device disclosed in this application.
  • FIG. 6 is a structural diagram of an image segmentation device disclosed in this application.
  • FIG. 7 is a structural diagram of an electronic terminal disclosed in this application.
  • In the prior art, the OTSU maximum between-class variance method is usually adopted to address missing segmentation targets. However, for complex background images with noise interference, uneven illumination, and large variations in background gray levels, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of every region of the image, so targets are missed and effective segmentation is difficult.
  • To this end, the present application provides an image segmentation solution that can avoid missing segmentation targets and thereby improve the effectiveness of image segmentation.
  • Referring to FIG. 1, an embodiment of the present application discloses an image segmentation method, including:
  • Step S11 Perform a first segmentation on the target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is a grayscale image corresponding to the target color image.
  • In a specific implementation, the embodiment of the present application may perform the first segmentation on the target grayscale image to obtain the corresponding first sub-image set, wherein the first sub-images in the first sub-image set are all of equal size.
  • That is, the embodiment of the present application may perform the first segmentation on the target grayscale image to obtain a first sub-image set including a first preset number of first sub-images of equal size.
  • Step S12 Determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value.
  • That is, the grayscale histogram of each first sub-image can be analyzed and computed to obtain the average gray value, maximum gray value, and minimum gray value corresponding to the grayscale histogram of each sub-image.
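  • The application does not tie these statistics to any particular library; the sketch below, assuming OpenCV and NumPy, shows one way to read the mean, maximum, and minimum gray level of a sub-image off its 256-bin histogram.

```python
import cv2
import numpy as np

def block_gray_stats(block):
    """Mean, maximum and minimum gray level of one 8-bit sub-image,
    read from its 256-bin grayscale histogram."""
    hist = cv2.calcHist([block], [0], None, [256], [0, 256]).ravel()
    levels = np.arange(256)
    mean = float((hist * levels).sum() / hist.sum())
    nonzero = levels[hist > 0]
    return mean, int(nonzero.max()), int(nonzero.min())
```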
  • Step S13 Use the target gray scale data to determine whether the corresponding first sub-image satisfies a preset segmentation condition.
  • In a specific implementation, this embodiment may determine whether the first difference or the second difference corresponding to the first sub-image is less than a preset threshold. If the first difference or the second difference is less than the preset threshold, it is determined that the first sub-image satisfies the preset segmentation condition; if both the first difference and the second difference are greater than or equal to the preset threshold, it is determined that the first sub-image does not satisfy the preset segmentation condition. Here, the first difference is the difference between the average gray value and the maximum gray value, and the second difference is the difference between the average gray value and the minimum gray value.
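  • A minimal sketch of this check, with an arbitrary placeholder threshold (the application does not fix a value):

```python
def needs_second_split(mean, g_max, g_min, threshold=50):
    """Preset segmentation condition of step S13: split again when the mean
    gray level is close to the maximum or the minimum gray level.
    threshold=50 is a placeholder; the application does not fix a value."""
    return abs(mean - g_max) < threshold or abs(mean - g_min) < threshold
```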
  • Step S14 If the first sub-image satisfies the preset segmentation condition, perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, do not segment the first sub-image.
  • Step S15 Binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image.
  • In a specific implementation, after the OTSU maximum between-class variance method has been used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, the resulting binarized images of the second sub-images and the first sub-images are stitched together to obtain the first binarized image corresponding to the target grayscale image. That is, the first binarized image is the binarized image obtained by stitching together the binarized images corresponding to the second sub-images and the first sub-images.
  • That is, the embodiment of the present application proposes an adaptive local dynamic segmentation algorithm. Specifically, the target grayscale image is first segmented into a set of first sub-images a1, a2, a3, ..., an of equal size, where n is the number of first sub-images in the first sub-image set. The grayscale histogram of each first sub-image is then analyzed to calculate its average gray value μ1, maximum gray value g1, and minimum gray value g2. When the proportion of target to background in a sub-image is extremely unbalanced, its average gray level is close to its highest or lowest gray level, i.e. |μ1 - g1| or |μ1 - g2| is less than a certain threshold, and the segmentation condition is satisfied; otherwise the condition is not satisfied. If a first sub-image does not satisfy the segmentation condition, it is not segmented further; if it does, a second segmentation divides it into four second sub-images of equal size. The OTSU maximum between-class variance method is then applied to all of the resulting sub-images to obtain the binarized image. This algorithm overcomes the difficulty of effectively segmenting complex background images with a single global threshold and improves the effectiveness of segmenting such images.
  • FIG. 2 is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in an embodiment of this application.
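  • The following Python sketch illustrates the adaptive local dynamic threshold idea described above, assuming an equal-size grid split, a quadrant split for blocks that meet the condition, per-block Otsu thresholding, and stitching; the grid size and threshold are illustrative choices, not values from the application.

```python
import cv2
import numpy as np

def adaptive_local_otsu(gray, grid=(4, 4), threshold=50):
    """Adaptive local dynamic threshold segmentation (sketch): split the
    grayscale image into equal blocks, split a block into four quadrants when
    its mean gray level is close to its max or min, run Otsu on every block,
    and stitch the binarized blocks back together."""
    out = np.zeros_like(gray)
    rows, cols = grid
    h, w = gray.shape
    bh, bw = h // rows, w // cols

    def otsu(block):
        _, binary = cv2.threshold(block, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binary

    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * bh, c * bw
            y1 = h if r == rows - 1 else y0 + bh
            x1 = w if c == cols - 1 else x0 + bw
            block = gray[y0:y1, x0:x1]
            mean, g1, g2 = block.mean(), int(block.max()), int(block.min())
            if abs(mean - g1) < threshold or abs(mean - g2) < threshold:
                # second segmentation: four equal quadrants, Otsu on each
                my, mx = (y1 - y0) // 2, (x1 - x0) // 2
                for sy, ey in ((0, my), (my, y1 - y0)):
                    for sx, ex in ((0, mx), (mx, x1 - x0)):
                        out[y0 + sy:y0 + ey, x0 + sx:x0 + ex] = \
                            otsu(block[sy:ey, sx:ex])
            else:
                out[y0:y1, x0:x1] = otsu(block)
    return out
```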
  • Step S16 Use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
  • In a specific implementation, the first binarized image may be determined as a marked image, and watershed segmentation may then be performed on the target color image to obtain the corresponding segmented image.
  • It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
  • Referring to FIG. 3, an embodiment of the present application discloses a specific image segmentation method, including:
  • Step S201 Convert the collected target color image into a grayscale image.
  • Step S202 Perform filtering processing on the converted grayscale image to obtain a corresponding filtered image.
  • bilateral filtering may be performed on the converted grayscale image to obtain the filtered image.
  • It should be noted that bilateral filtering not only removes noise well, but can also smooth fine structures.
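  • A minimal OpenCV sketch of steps S201 and S202, with typical illustrative filter parameters (the application does not specify them):

```python
import cv2

color = cv2.imread("target.png")                 # hypothetical input path
gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)   # step S201: color to grayscale
filtered = cv2.bilateralFilter(gray, 9, 75, 75)  # step S202: illustrative parameters
```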
  • Step S203 Perform sharpening and enhancement processing on the filtered image to obtain the corresponding target grayscale image.
  • Laplace sharpening processing may be performed on the filtered image to obtain a sharpened and enhanced image.
  • image sharpening is a processing method for image enhancement, which can make the image clearer and the details more obvious.
  • In this embodiment, the Laplace operator is used for sharpening; the applied template is given as a figure in the original filing.
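  • The specific template appears only as a figure in the original filing, so the sketch below assumes a commonly used 3x3 Laplacian sharpening kernel rather than the application's exact values.

```python
import cv2
import numpy as np

# Commonly used sharpening template (identity plus Laplacian); the application's
# own template is only shown as a figure, so these values are an assumption.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)
target_gray = cv2.filter2D(filtered, -1, kernel)  # 'filtered' from the previous sketch
```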
  • Step S204 Perform a first segmentation on the target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is a grayscale image corresponding to the target color image.
  • Step S205 Determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value.
  • Step S206 Use the target grayscale data to determine whether the corresponding first sub-image satisfies a preset segmentation condition.
  • Step S207 If the first sub-image satisfies the preset segmentation condition, perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, do not segment the first sub-image.
  • Step S208 Binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image.
  • Step S209 Perform morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
  • In a specific implementation, the first binarized image is eroded, and the eroded first binarized image is determined as a second marked image; the first binarized image before erosion is determined as a mask image; the second marked image is then continuously dilated until it approaches the mask image, so as to obtain the first binarized image after morphological opening.
  • Morphological opening helps remove tiny details and a large amount of noise in the image.
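  • A sketch of the described erode-then-reconstruct opening, assuming OpenCV; the structuring element and erosion depth are illustrative choices.

```python
import cv2
import numpy as np

def opening_by_reconstruction(binary, kernel_size=3, erode_iterations=2):
    """Erode the binarized image to get a marker, keep the original as the mask,
    then dilate the marker repeatedly (clipped by the mask) until it stops
    changing, i.e. it has approached the mask."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    mask = binary
    marker = cv2.erode(binary, kernel, iterations=erode_iterations)
    while True:
        grown = cv2.bitwise_and(cv2.dilate(marker, kernel), mask)
        if np.array_equal(grown, marker):
            return grown
        marker = grown
```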
  • Step S210 Perform distance transformation on the first binarized image to obtain an image after distance transformation.
  • That is, this embodiment may perform a distance transform on the first binarized image after the opening operation. The result of the distance transform is a grayscale image similar to the target grayscale image, but gray values appear only in the foreground area, and the farther a pixel is from the background edge, the larger its gray value.
  • In a specific implementation, the distance transform formula of this embodiment is specifically:
  • G(x,y) = 255 × (S(x,y) - Min) / (Max - Min);
  • where S(x,y) is the set formed by the shortest distances from each internal point of the connected domains in the first binarized image to the set of non-internal points, Min is the minimum value in the set S(x,y), Max is the maximum value in the set S(x,y), G(x,y) is the gray value of each internal pixel of the connected domains after the distance transform, and (x,y) are the pixel coordinates.
  • It should be noted that, in each connected domain, the central pixel is farthest from all the zero-valued pixels on the boundary and has the largest gray value, so a bright line is formed at the center of the connected domain; the binary image is finally converted into a grayscale image in which the gray value of each pixel is the corresponding distance value.
  • Moreover, the value of (Max - Min) may be relatively large, for example 255, which would make the transformed value of G(x,y) small; the coefficient 255 in 255 × (S(x,y) - Min) is used to prevent the resulting G(x,y) from being too small and to ensure that G(x,y) is greater than 1.
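  • A sketch of step S210 under the stated formula, assuming OpenCV's Euclidean distance transform and taking Min and Max over the foreground distances of the opened binary image; 'first_binarized' and the opening function come from the earlier sketches.

```python
import cv2
import numpy as np

opened = opening_by_reconstruction(first_binarized)     # 'first_binarized' from earlier steps
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)    # S(x, y), Euclidean distances
fg = dist > 0                                           # internal (foreground) points
s_min, s_max = dist[fg].min(), dist[fg].max()
g = np.zeros_like(dist)
if s_max > s_min:                                       # guard against a flat distance map
    g[fg] = 255 * (dist[fg] - s_min) / (s_max - s_min)  # G(x, y) from the formula above
```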
  • Step S211 Perform normalization processing on the distance transformed image to obtain a normalized image.
  • a normalization operation can be performed after the distance transformation to convert the original data range after the distance transformation to the range of [0,1].
  • The normalization formula is as follows:
  • G(x,y)_norm = (G(x,y) - G(x,y)_min) / (G(x,y)_max - G(x,y)_min);
  • where G(x,y)_norm is the normalized data, G(x,y) is the original data before normalization, and G(x,y)_max and G(x,y)_min are the maximum and minimum values of the original data set.
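  • A one-line rendering of this min-max normalization, assuming the distance-scaled image 'g' from the previous sketch:

```python
# Min-max normalization of the distance-scaled image g into [0, 1].
g_norm = (g - g.min()) / (g.max() - g.min())
```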
  • Step S212 Binarize the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image.
  • the second binarized image is a binarized image obtained by performing binarization processing on the normalized image by using the OTSU maximum between-class variance method.
  • Step S213 Determine the second binarized image as a first marked image, and use the first marked image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
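  • A sketch of steps S212 and S213, assuming OpenCV and reusing 'g_norm' and the original color image from the earlier sketches; the conversion back to 8-bit before Otsu and the use of connected-component labels as watershed markers are implementation assumptions.

```python
import cv2
import numpy as np

norm_u8 = (g_norm * 255).astype(np.uint8)               # back to 8-bit for Otsu
_, second_binary = cv2.threshold(norm_u8, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, markers = cv2.connectedComponents(second_binary)     # first marked image (labels)
markers = cv2.watershed(color, markers.astype(np.int32))
segmented = color.copy()
segmented[markers == -1] = (0, 0, 255)                  # watershed ridge lines in red
```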
  • For example, referring to FIG. 4, an embodiment of the present application discloses a flowchart of a specific image segmentation method.
  • Referring to FIG. 5, an embodiment of the present application discloses an image segmentation device, including:
  • the first image segmentation module 11 is configured to perform the first segmentation of the target grayscale image to obtain the corresponding first sub-image set;
  • the target grayscale image is a grayscale image corresponding to the target color image;
  • the grayscale data determining module 12 is configured to determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value;
  • the segmentation condition determination module 13 is configured to determine whether the corresponding first sub-image satisfies a preset segmentation condition by using the target grayscale data;
  • the second image segmentation module 14 is configured to perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set if the segmentation condition determination module determines that the first sub-image satisfies the preset segmentation condition, and not to segment the first sub-image if the segmentation condition determination module determines that the first sub-image does not satisfy the preset segmentation condition;
  • the image binarization module 15 is configured to binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image;
  • the watershed segmentation module 16 is configured to use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
  • It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
  • the image segmentation device also includes an image grayscale processing module, which is configured to convert the collected target color image into a grayscale image.
  • the image segmentation device also includes an image filtering processing module, which is configured to perform filtering processing on the converted grayscale image to obtain a corresponding filtered image.
  • the image filtering processing module is specifically configured to perform bilateral filtering on the converted gray image to obtain the filtered image.
  • the image segmentation device also includes an image enhancement processing module configured to perform sharpening enhancement processing on the filtered image to obtain the corresponding target grayscale image.
  • the watershed segmentation module 16 may specifically include:
  • the distance transformation sub-module is configured to perform distance transformation on the first binarized image to obtain a distance-transformed image
  • a normalization processing sub-module configured to perform normalization processing on the distance transformed image to obtain a normalized image
  • the binarization processing sub-module is set to perform binarization processing on the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image;
  • the image segmentation sub-module is configured to determine the second binarized image as a first marked image, and use the first marked image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
  • the image segmentation device further includes an open operation processing module configured to perform morphological open operation processing on the first binarized image to obtain the first binarized image after morphological open operation processing.
  • In a specific implementation, the opening operation processing module is specifically configured to erode the first binarized image and determine the eroded first binarized image as a second marked image; determine the first binarized image before erosion as a mask image; and continuously dilate the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
  • Referring to FIG. 6, FIG. 6 shows an image segmentation device disclosed in an embodiment of the application, including a processor 21 and a memory 22, wherein the memory 22 is configured to store a computer program and the processor 21 is configured to execute the computer program to implement the following steps:
  • performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including the average gray value, the maximum gray value, and the minimum gray value; judging, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition; if the first sub-image satisfies the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, not segmenting the first sub-image; binarizing, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and performing watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
  • It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain the corresponding filtered image.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: performing bilateral filtering on the converted grayscale image to obtain the filtered image.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: performing a distance transform on the first binarized image to obtain a distance-transformed image; normalizing the distance-transformed image to obtain a normalized image; binarizing the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image; and determining the second binarized image as a first marked image and performing watershed segmentation on the target color image by using the first marked image to obtain a corresponding segmented image.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
  • In this embodiment, when the processor 21 executes the computer subprogram stored in the memory 22, the following steps may be specifically implemented: eroding the first binarized image, and determining the eroded first binarized image as the second marked image; determining the first binarized image before erosion as the mask image; and continuously dilating the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
  • In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, and the storage may be transient or persistent.
  • Referring to FIG. 7, an embodiment of the present application discloses an electronic terminal 20, which includes the processor 21 and the memory 22 disclosed in the foregoing embodiments.
  • For the specific steps that the processor 21 can perform, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
  • Further, the electronic terminal 20 in this embodiment may also specifically include a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The power supply 23 is used to provide a working voltage for each hardware device on the terminal 20; the communication interface 24 can create a data transmission channel between the terminal 20 and external devices, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited here.
  • The input/output interface 25 is configured to obtain external input data or output data to the outside, and its specific interface type may be selected according to the needs of the specific application, which is not specifically limited here.
  • the embodiment of the present application also discloses a computer-readable storage medium configured to store a computer program, wherein the computer program is executed by a processor to implement the following steps:
  • performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including the average gray value, the maximum gray value, and the minimum gray value; judging, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition; if the first sub-image satisfies the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, not segmenting the first sub-image; binarizing, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and performing watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
  • It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain the corresponding filtered image.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: performing bilateral filtering on the converted grayscale image to obtain the filtered image.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: performing a distance transform on the first binarized image to obtain a distance-transformed image;
  • the distance-transformed image is normalized to obtain a normalized image;
  • the normalized image is binarized using the OTSU maximum between-class variance method to obtain a second binarized image;
  • the second binarized image is determined to be the first marked image, and the target color image is watershed segmented by using the first marked image to obtain a corresponding segmented image.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
  • In this embodiment, when the computer subprogram stored in the computer-readable storage medium is executed by the processor, the following steps may be specifically implemented: eroding the first binarized image, and determining the eroded first binarized image as the second marked image; determining the first binarized image before erosion as the mask image; and continuously dilating the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
  • the steps of the method or algorithm described in the embodiments disclosed in this document can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
  • The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image segmentation method, apparatus, device, and medium, including: performing a first segmentation on a target grayscale image to obtain a first sub-image set, the target grayscale image being the grayscale image of a target color image (S11); using the grayscale histogram of each first sub-image in the first sub-image set to determine the corresponding target grayscale data, including the average gray value, the maximum gray value, and the minimum gray value (S12); using the target grayscale data to judge whether the first sub-image satisfies a preset segmentation condition (S13); if so, performing a second segmentation on the first sub-image to obtain a second sub-image set (S14); binarizing, by the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a corresponding first binarized image (S15); and performing watershed segmentation by using the first binarized image to obtain a segmented image (S16). The method can avoid missing segmentation targets and improve the effectiveness of image segmentation.

Description

Image segmentation method, apparatus, device, and medium
Technical Field
The present application relates to the field of image processing technology, and in particular to an image segmentation method, apparatus, device, and medium.
Background
Image segmentation refers to the technology and process of dividing an image into regions with distinct characteristics and extracting the objects of interest; it is a very important step in image processing, detection, and analysis. In image segmentation, missing targets have always been a difficult problem.
In the prior art, the OTSU maximum between-class variance method is usually adopted to address missing segmentation targets. However, for complex background images with noise interference, uneven illumination, and large variations in background gray levels, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of every region of the image, so targets are missed and effective segmentation is difficult.
Summary
The embodiments of the present application provide an image segmentation method, apparatus, device, and medium, which can avoid missing segmentation targets and thereby improve the effectiveness of image segmentation.
In one embodiment of the present application, an image segmentation method is provided, including:
performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image;
determining the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including an average gray value, a maximum gray value, and a minimum gray value;
judging, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition;
if the first sub-image satisfies the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, not segmenting the first sub-image;
binarizing, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a first binarized image corresponding to the target grayscale image; and
performing watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
Optionally, before performing the first segmentation on the target grayscale image to obtain the corresponding first sub-image set, the method further includes:
converting the collected target color image into a grayscale image; and
filtering the converted grayscale image to obtain a corresponding filtered image.
Optionally, after filtering the converted grayscale image to obtain the corresponding filtered image, the method further includes:
performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
Optionally, filtering the converted grayscale image to obtain the corresponding filtered image includes:
performing bilateral filtering on the converted grayscale image to obtain the filtered image.
Optionally, performing watershed segmentation on the target color image by using the first binarized image to obtain the corresponding segmented image includes:
performing a distance transform on the first binarized image to obtain a distance-transformed image;
normalizing the distance-transformed image to obtain a normalized image;
binarizing the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image; and
determining the second binarized image as a first marked image, and performing watershed segmentation on the target color image by using the first marked image to obtain the corresponding segmented image.
Optionally, before performing the distance transform on the first binarized image to obtain the distance-transformed image, the method further includes:
performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
Optionally, performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening includes:
eroding the first binarized image, and determining the eroded first binarized image as a second marked image;
determining the first binarized image before erosion as a mask image; and
continuously dilating the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
In one embodiment of the present application, an image segmentation apparatus is also provided, including:
a first image segmentation module, configured to perform a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image;
a grayscale data determination module, configured to determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including an average gray value, a maximum gray value, and a minimum gray value;
a segmentation condition judgment module, configured to judge, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition;
a second image segmentation module, configured to perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set if the segmentation condition judgment module determines that the first sub-image satisfies the preset segmentation condition, and not to segment the first sub-image if the segmentation condition judgment module determines that the first sub-image does not satisfy the preset segmentation condition;
an image binarization module, configured to binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a first binarized image corresponding to the target grayscale image; and
a watershed segmentation module, configured to perform watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
In one embodiment of the present application, an image segmentation device is also provided, including a processor and a memory, wherein:
the memory is configured to store a computer program; and
the processor is configured to execute the computer program to implement the foregoing image segmentation method.
In one embodiment of the present application, a computer-readable storage medium is also provided, configured to store a computer program, wherein the computer program, when executed by a processor, implements the foregoing image segmentation method.
It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of an image segmentation method disclosed in the present application.
FIG. 2 is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in the present application.
FIG. 3 is a flowchart of a specific image segmentation method disclosed in the present application.
FIG. 4 is a flowchart of a specific image segmentation method disclosed in the present application.
FIG. 5 is a schematic structural diagram of an image segmentation apparatus disclosed in the present application.
FIG. 6 is a structural diagram of an image segmentation device disclosed in the present application.
FIG. 7 is a structural diagram of an electronic terminal disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the prior art, the OTSU maximum between-class variance method is usually adopted to address missing segmentation targets. However, for complex background images with noise interference, uneven illumination, and large variations in background gray levels, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of every region of the image, so targets are missed and effective segmentation is difficult. To this end, the present application provides an image segmentation solution that can avoid missing segmentation targets and thereby improve the effectiveness of image segmentation.
Referring to FIG. 1, an embodiment of the present application discloses an image segmentation method, including:
Step S11: Perform a first segmentation on a target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is the grayscale image corresponding to a target color image.
In a specific implementation, the embodiment of the present application may perform the first segmentation on the target grayscale image to obtain the corresponding first sub-image set, wherein the first sub-images in the first sub-image set are all of equal size.
That is, the embodiment of the present application may perform the first segmentation on the target grayscale image to obtain a first sub-image set including a first preset number of first sub-images of equal size.
Step S12: Determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value.
That is, this embodiment may analyze and compute the grayscale histogram of each first sub-image to obtain the average gray value, maximum gray value, and minimum gray value corresponding to the grayscale histogram of each sub-image.
Step S13: Judge, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition.
In a specific implementation, this embodiment may judge whether the first difference or the second difference corresponding to the first sub-image is less than a preset threshold. If the first difference or the second difference is less than the preset threshold, it is determined that the first sub-image satisfies the preset segmentation condition; if both the first difference and the second difference are greater than or equal to the preset threshold, it is determined that the first sub-image does not satisfy the preset segmentation condition. The first difference is the difference between the average gray value and the maximum gray value, and the second difference is the difference between the average gray value and the minimum gray value.
Step S14: If the first sub-image satisfies the preset segmentation condition, perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, do not segment the first sub-image.
Step S15: Binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image. In a specific implementation, after binarizing the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation with the OTSU maximum between-class variance method, this embodiment stitches the resulting binarized images of the second sub-images and the first sub-images together to obtain the first binarized image corresponding to the target grayscale image. That is, the first binarized image is the binarized image obtained by stitching together the binarized images corresponding to the second sub-images and the first sub-images.
That is, the embodiment of the present application proposes an adaptive local dynamic segmentation algorithm, which may specifically include: first performing the first segmentation of the target grayscale image to obtain a set of first sub-images a1, a2, a3, ..., an of equal size, where n is the number of first sub-images in the first sub-image set; then analyzing the grayscale histogram of each first sub-image and calculating its average gray value μ1, maximum gray value g1, and minimum gray value g2. When the proportion of target to background in the image is extremely unbalanced, the average gray level of the image is close to the highest or the lowest gray level, i.e. when |μ1 - g1| or |μ1 - g2| is less than a certain threshold, the segmentation condition is satisfied; other cases are regarded as not satisfying the segmentation condition. If a first sub-image does not satisfy the segmentation condition, segmentation stops; if it satisfies the condition, a second segmentation is performed, dividing the first sub-image into four second sub-images of equal size. The OTSU maximum between-class variance method is then applied to all of the sub-images obtained by segmentation, finally yielding the binarized image. This algorithm overcomes the difficulty of effectively segmenting complex background images with a single global threshold and improves the effectiveness of segmenting complex background images. For example, see FIG. 2, which is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in an embodiment of the present application.
Step S16: Perform watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
In a specific implementation, the first binarized image may be determined as a marked image, and watershed segmentation may then be performed on the target color image to obtain the corresponding segmented image.
It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
Referring to FIG. 3, an embodiment of the present application discloses a specific image segmentation method, including:
Step S201: Convert the collected target color image into a grayscale image.
Step S202: Filter the converted grayscale image to obtain a corresponding filtered image.
In a specific implementation, bilateral filtering may be performed on the converted grayscale image to obtain the filtered image.
It should be noted that bilateral filtering not only removes noise well, but can also smooth fine structures.
Step S203: Perform sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
In a specific implementation, this embodiment may perform Laplace sharpening on the filtered image to obtain a sharpened and enhanced image.
It should be noted that image sharpening is an image enhancement method that makes the image clearer and its details more obvious. This embodiment uses the Laplace operator for sharpening; the template applied is given as a figure (PCTCN2020141617-appb-000001) in the original filing.
Step S204: Perform a first segmentation on the target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is the grayscale image corresponding to the target color image.
Step S205: Determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value.
Step S206: Judge, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition.
Step S207: If the first sub-image satisfies the preset segmentation condition, perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, do not segment the first sub-image.
Step S208: Binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image.
For the specific processes of steps S204 to S208 above, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
Step S209: Perform morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
In a specific implementation, the first binarized image is eroded, and the eroded first binarized image is determined as a second marked image; the first binarized image before erosion is determined as a mask image; the second marked image is then continuously dilated until it approaches the mask image, so as to obtain the first binarized image after morphological opening. Morphological opening helps remove tiny details and a large amount of noise in the image.
Step S210: Perform a distance transform on the first binarized image to obtain a distance-transformed image.
That is, this embodiment may perform a distance transform on the first binarized image after the opening operation. The result of the distance transform is a grayscale image similar to the target grayscale image, but gray values appear only in the foreground area, and the farther a pixel is from the background edge, the larger its gray value.
In a specific implementation, the distance transform formula of this embodiment is specifically:
G(x,y) = 255 × (S(x,y) - Min) / (Max - Min);
where S(x,y) is the set formed by the shortest distances from each internal point of the connected domains in the first binarized image to the set of non-internal points, Min is the minimum value in the set S(x,y), Max is the maximum value in the set S(x,y), G(x,y) is the gray value of each internal pixel of the connected domains after the distance transform, and (x,y) are the pixel coordinates. Moreover, this embodiment uses the Euclidean distance to compute the distance from each internal point of a connected domain to the set of non-internal points. Assuming the edge pixels of a connected domain form the set A = {(i,j)} and the internal pixels form the set B = {(t,s)}, the Euclidean distance is computed (the formula is given as a figure, PCTCN2020141617-appb-000002, in the original filing) as sqrt((i - t)^2 + (j - s)^2).
It should be noted that, in each connected domain, the central pixel is farthest from all the zero-valued pixels on the boundary and has the largest gray value, so a bright line is formed at the center of the connected domain; the binary image is finally converted into a grayscale image in which the gray value of each pixel is the corresponding distance value.
Moreover, since the gray value of each point in a grayscale image is at most 255 and at least 0, the value of (Max - Min) in the above distance transform formula may be relatively large, for example 255, which would make the transformed value of G(x,y) small; the coefficient 255 in 255 × (S(x,y) - Min) is used to prevent the resulting G(x,y) from being too small and to ensure that G(x,y) is greater than 1.
Step S211: Normalize the distance-transformed image to obtain a normalized image.
In a specific implementation, this embodiment may perform a normalization operation after the distance transform to convert the original data range after the distance transform to the range [0,1]. The normalization formula (given as a figure, PCTCN2020141617-appb-000003, in the original filing) is:
G(x,y)_norm = (G(x,y) - G(x,y)_min) / (G(x,y)_max - G(x,y)_min);
where G(x,y)_norm is the normalized data, G(x,y) is the original data before normalization, and G(x,y)_max and G(x,y)_min are the maximum and minimum values of the original data set.
Step S212: Binarize the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image.
That is, the second binarized image is the binarized image obtained by binarizing the normalized image with the OTSU maximum between-class variance method.
Step S213: Determine the second binarized image as a first marked image, and perform watershed segmentation on the target color image by using the first marked image to obtain a corresponding segmented image.
For example, see FIG. 4, which shows a flowchart of a specific image segmentation method disclosed in an embodiment of the present application.
Referring to FIG. 5, an embodiment of the present application discloses an image segmentation apparatus, including:
a first image segmentation module 11, configured to perform a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image;
a grayscale data determination module 12, configured to determine the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including an average gray value, a maximum gray value, and a minimum gray value;
a segmentation condition judgment module 13, configured to judge, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition;
a second image segmentation module 14, configured to perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set if the segmentation condition judgment module determines that the first sub-image satisfies the preset segmentation condition, and not to segment the first sub-image if the segmentation condition judgment module determines that the first sub-image does not satisfy the preset segmentation condition;
an image binarization module 15, configured to binarize, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and
a watershed segmentation module 16, configured to perform watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
It can be seen that, in the embodiments of the present application, the target grayscale image is first segmented to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram of each first sub-image in the first sub-image set is then used to determine that sub-image's target grayscale data, namely its average gray value, maximum gray value, and minimum gray value; the target grayscale data is used to judge whether the corresponding first sub-image satisfies a preset segmentation condition, a first sub-image that satisfies the condition is segmented a second time to obtain a corresponding second sub-image set, and one that does not satisfy the condition is not segmented; the OTSU maximum between-class variance method is used to binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and finally the first binarized image is used to perform watershed segmentation on the target color image to obtain a corresponding segmented image. Determining the average, maximum, and minimum gray values of each first sub-image's histogram, splitting the sub-images that satisfy the preset segmentation condition a second time, and then binarizing each sub-image separately with the OTSU algorithm overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of every region of the image, avoids missing segmentation targets, and thereby improves the effectiveness of image segmentation.
The image segmentation apparatus further includes an image grayscale processing module, configured to convert the collected target color image into a grayscale image.
The image segmentation apparatus further includes an image filtering module, configured to filter the converted grayscale image to obtain a corresponding filtered image. In a specific implementation, the image filtering module is specifically configured to perform bilateral filtering on the converted grayscale image to obtain the filtered image.
The image segmentation apparatus further includes an image enhancement module, configured to perform sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
The watershed segmentation module 16 may specifically include:
a distance transform sub-module, configured to perform a distance transform on the first binarized image to obtain a distance-transformed image;
a normalization sub-module, configured to normalize the distance-transformed image to obtain a normalized image;
a binarization sub-module, configured to binarize the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image; and
an image segmentation sub-module, configured to determine the second binarized image as a first marked image and perform watershed segmentation on the target color image by using the first marked image to obtain a corresponding segmented image.
The image segmentation apparatus further includes an opening operation processing module, configured to perform morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
In a specific implementation, the opening operation processing module is specifically configured to erode the first binarized image and determine the eroded first binarized image as a second marked image; determine the first binarized image before erosion as a mask image; and continuously dilate the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
Referring to FIG. 6, FIG. 6 shows an image segmentation device disclosed in an embodiment of the present application, including a processor 21 and a memory 22, wherein the memory 22 is configured to store a computer program and the processor 21 is configured to execute the computer program to implement the following steps:
performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining the target grayscale data corresponding to each first sub-image by using the grayscale histogram corresponding to that first sub-image in the first sub-image set, the target grayscale data including the average gray value, the maximum gray value, and the minimum gray value; judging, by using the target grayscale data, whether the corresponding first sub-image satisfies a preset segmentation condition; if the first sub-image satisfies the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not satisfy the preset segmentation condition, not segmenting the first sub-image; binarizing, by using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain the first binarized image corresponding to the target grayscale image; and performing watershed segmentation on the target color image by using the first binarized image to obtain a corresponding segmented image.
可见,本申请实施例先对目标灰度图像进行第一次分割,得到对应的第一子图像集;所述目标灰度图像为目标彩色图像对应的灰度图,然后利用所述第一子图像集中每个第一子图像对应的灰度直方图确定出所述第一子图像对应的目标灰度数据;所述目标灰度数据包括灰度均值、最大灰度值以及最 小灰度值,之后利用所述目标灰度数据判断对应的所述第一子图像是否满足预设分割条件,若所述第一子图像满足所述预设分割条件,则对所述第一子图像进行第二次分割,得到对应的第二子图像集;若所述第一子图像不满足所述预设分割条件,则不对所述第一子图像进行分割,以及利用OTSU最大类间方差法分别对所述第二子图像集中的第二子图像以及未经过第二次分割的所述第一子图像进行二值化处理,得到所述目标灰度图像对应的第一二值化图像,最后利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。这样,分别确定出第一次分割得到的每个第一子图像对应的灰度直方图的灰度均值、最大灰度值以及最小灰度值,利用每个第一子图像对应的灰度均值、最大灰度值以及最小灰度值来判断第一子图像是否满足预设分割条件,对满足预设分割条件的第一子图像进行第二次分割,然后利用OTSU算法分别对每个子图像进行二值化,得到对应的二值化图像,以进行图像分割,克服了OTSU算法获得的全局单一阈值不能兼顾图像各个区域的实际情况的问题,能够避免图像分割的目标缺失,从而提高了图像分割的有效性。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:将采集到的所述目标彩色图像转换为灰度图像;对转换后的灰度图像进行滤波处理,得到对应的滤波后图像。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:对所述滤波后图像进行锐化增强处理,得到对应的所述目标灰度图像。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:对转换后的灰度图像进行双边滤波,得到所述滤波后图像。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:对所述第一二值化图像进行距离变换,得到距离变换后图像;对所述距离变换后图像进行归一化处理,得到归一化后图像;利用OTSU最大类间方差法对所述归一化后图像进行二值化处理,得到第二二值化图像;将所述第二二值化图像确定为第一标记图像,利用所述第一标记图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:对所述第一二值化图像进行形态学开运算处理,得到形态学开运算处理后的所述第一二值化图像。
本实施例中,所述处理器21执行所述存储器22中保存的计算机子程序时,可以具体实现以下步骤:对所述第一二值化图像进行腐蚀处理,将腐蚀处理后的所述第一二值化图像确定为第二标记图像;将腐蚀处理前的所述第一二值化图像确定为掩模图像;不断膨胀处理所述第二标记图像,直到所述第二标记图像逼近所述掩模图像,以得到形态学开运算处理后的所述第一二值化图像。
并且,所述存储器22作为资源存储的载体,可以是只读存储器、随机存储器、磁盘或者光盘等,存储方式可以是短暂存储或者永久存储。
参见图7所示,本申请实施例公开了一种电子终端20,包括前述实施例中公开的处理器21和存储器22。关于上述处理器21具体可以执行的步骤可以参考前述实施例中公开的相应内容,在此不再进行赘述。
进一步的,本实施例中的电子终端20,还可以具体包括电源23、通信接口24、输入输出接口25和通信总线26;其中,所述电源23用于为所述终端20上的各硬件设备提供工作电压;所述通信接口24能够为所述终端20创建与外界设备之间的数据传输通道,其所遵循的通信协议是能够适用于本申请技术方案的任意通信协议,在此不对其进行具体限定;所述输入输出接口25,设置为获取外界输入数据或向外界输出数据,其具体的接口类型可以根据具体应用需要进行选取,在此不进行具体限定。
进一步的,本申请实施例还公开了一种计算机可读存储介质,设置为保存计算机程序,其中,所述计算机程序被处理器执行时实现以下步骤:
对目标灰度图像进行第一次分割,得到对应的第一子图像集;所述目标灰度图像为目标彩色图像对应的灰度图;利用所述第一子图像集中每个第一子图像对应的灰度直方图确定出所述第一子图像对应的目标灰度数据;所述目标灰度数据包括灰度均值、最大灰度值以及最小灰度值;利用所述目标灰度数据判断对应的所述第一子图像是否满足预设分割条件;若所述第一子图 像满足所述预设分割条件,则对所述第一子图像进行第二次分割,得到对应的第二子图像集;若所述第一子图像不满足所述预设分割条件,则不对所述第一子图像进行分割;利用OTSU最大类间方差法分别对所述第二子图像集中的第二子图像以及未经过第二次分割的所述第一子图像进行二值化处理,得到所述目标灰度图像对应的第一二值化图像;利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
可见,本申请实施例先对目标灰度图像进行第一次分割,得到对应的第一子图像集;所述目标灰度图像为目标彩色图像对应的灰度图,然后利用所述第一子图像集中每个第一子图像对应的灰度直方图确定出所述第一子图像对应的目标灰度数据;所述目标灰度数据包括灰度均值、最大灰度值以及最小灰度值,之后利用所述目标灰度数据判断对应的所述第一子图像是否满足预设分割条件,若所述第一子图像满足所述预设分割条件,则对所述第一子图像进行第二次分割,得到对应的第二子图像集;若所述第一子图像不满足所述预设分割条件,则不对所述第一子图像进行分割,以及利用OTSU最大类间方差法分别对所述第二子图像集中的第二子图像以及未经过第二次分割的所述第一子图像进行二值化处理,得到所述目标灰度图像对应的第一二值化图像,最后利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。这样,分别确定出第一次分割得到的每个第一子图像对应的灰度直方图的灰度均值、最大灰度值以及最小灰度值,利用每个第一子图像对应的灰度均值、最大灰度值以及最小灰度值来判断第一子图像是否满足预设分割条件,对满足预设分割条件的第一子图像进行第二次分割,然后利用OTSU算法分别对每个子图像进行二值化,得到对应的二值化图像,以进行图像分割,克服了OTSU算法获得的全局单一阈值不能兼顾图像各个区域的实际情况的问题,能够避免图像分割的目标缺失,从而提高了图像分割的有效性。
本实施例中,所述计算机可读存储介质中保存的计算机子程序被处理器执行时,可以具体实现以下步骤:将采集到的所述目标彩色图像转换为灰度图像;对转换后的灰度图像进行滤波处理,得到对应的滤波后图像。
本实施例中，所述计算机可读存储介质中保存的计算机子程序被处理器执行时，可以具体实现以下步骤：对所述滤波后图像进行锐化增强处理，得到对应的所述目标灰度图像。
本实施例中,所述计算机可读存储介质中保存的计算机子程序被处理器执行时,可以具体实现以下步骤:对转换后的灰度图像进行双边滤波,得到所述滤波后图像。
本实施例中,所述计算机可读存储介质中保存的计算机子程序被处理器执行时,可以具体实现以下步骤:对所述第一二值化图像进行距离变换,得到距离变换后图像;对所述距离变换后图像进行归一化处理,得到归一化后图像;利用OTSU最大类间方差法对所述归一化后图像进行二值化处理,得到第二二值化图像;将所述第二二值化图像确定为第一标记图像,利用所述第一标记图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
本实施例中,所述计算机可读存储介质中保存的计算机子程序被处理器执行时,可以具体实现以下步骤:对所述第一二值化图像进行形态学开运算处理,得到形态学开运算处理后的所述第一二值化图像。
本实施例中,所述计算机可读存储介质中保存的计算机子程序被处理器执行时,可以具体实现以下步骤:对所述第一二值化图像进行腐蚀处理,将腐蚀处理后的所述第一二值化图像确定为第二标记图像;将腐蚀处理前的所述第一二值化图像确定为掩模图像;不断膨胀处理所述第二标记图像,直到所述第二标记图像逼近所述掩模图像,以得到形态学开运算处理后的所述第一二值化图像。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
以上对本申请所提供的一种图像分割方法、装置、设备及介质进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本申请的方法及其核心思想；同时，对于本领域的一般技术人员，依据本申请的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本申请的限制。

Claims (10)

  1. 一种图像分割方法,包括:
    对目标灰度图像进行第一次分割,得到对应的第一子图像集;所述目标灰度图像为目标彩色图像对应的灰度图;
    利用所述第一子图像集中每个第一子图像对应的灰度直方图确定出所述第一子图像对应的目标灰度数据;所述目标灰度数据包括灰度均值、最大灰度值以及最小灰度值;
    利用所述目标灰度数据判断对应的所述第一子图像是否满足预设分割条件;
    若所述第一子图像满足所述预设分割条件,则对所述第一子图像进行第二次分割,得到对应的第二子图像集;若所述第一子图像不满足所述预设分割条件,则不对所述第一子图像进行分割;
    利用OTSU最大类间方差法分别对所述第二子图像集中的第二子图像以及未经过第二次分割的所述第一子图像进行二值化处理,得到所述目标灰度图像对应的第一二值化图像;
    利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
  2. 根据权利要求1所述的图像分割方法，其中，所述对目标灰度图像进行第一次分割，得到对应的第一子图像集之前，还包括：
    将采集到的所述目标彩色图像转换为灰度图像;
    对转换后的灰度图像进行滤波处理,得到对应的滤波后图像。
  3. 根据权利要求2所述的图像分割方法,其中,所述对转换后的灰度图像进行滤波处理,得到对应的滤波后图像之后,还包括:
    对所述滤波后图像进行锐化增强处理,得到对应的所述目标灰度图像。
  4. 根据权利要求2所述的图像分割方法,其中,所述对转换后的灰度图像进行滤波处理,得到对应的滤波后图像,包括:
    对转换后的灰度图像进行双边滤波,得到所述滤波后图像。
  5. 根据权利要求1至4任一项所述的图像分割方法,其中,所述利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像,包括:
    对所述第一二值化图像进行距离变换,得到距离变换后图像;
    对所述距离变换后图像进行归一化处理,得到归一化后图像;
    利用OTSU最大类间方差法对所述归一化后图像进行二值化处理,得到第二二值化图像;
    将所述第二二值化图像确定为第一标记图像,利用所述第一标记图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
  6. 根据权利要求5所述的图像分割方法,其中,所述对所述第一二值化图像进行距离变换,得到距离变换后图像之前,还包括:
    对所述第一二值化图像进行形态学开运算处理,得到形态学开运算处理后的所述第一二值化图像。
  7. 根据权利要求6所述的图像分割方法,其中,所述对所述第一二值化图像进行形态学开运算处理,得到形态学开运算处理后的所述第一二值化图像,包括:
    对所述第一二值化图像进行腐蚀处理,将腐蚀处理后的所述第一二值化图像确定为第二标记图像;
    将腐蚀处理前的所述第一二值化图像确定为掩模图像;
    不断膨胀处理所述第二标记图像,直到所述第二标记图像逼近所述掩模图像,以得到形态学开运算处理后的所述第一二值化图像。
  8. 一种图像分割装置,包括:
    第一图像分割模块,设置为对目标灰度图像进行第一次分割,得到对应的第一子图像集;所述目标灰度图像为目标彩色图像对应的灰度图;
    灰度数据确定模块,设置为利用所述第一子图像集中每个第一子图像对应的灰度直方图确定出所述第一子图像对应的目标灰度数据;所述目标灰度数据包括灰度均值、最大灰度值以及最小灰度值;
    分割条件判断模块,设置为利用所述目标灰度数据判断对应的所述第一子图像是否满足预设分割条件;
    第二图像分割模块,设置为若所述分割条件判断模块判定所述第一子图像满足所述预设分割条件,则对所述第一子图像进行第二次分割,得到对应的第二子图像集;若所述分割条件判断模块判定所述第一子图像不满足所述预设分割条件,则不对所述第一子图像进行分割;
    图像二值化模块,设置为利用OTSU最大类间方差法分别对所述第二子图像集中的第二子图像以及未经过第二次分割的所述第一子图像进行二值化处理,得到所述目标灰度图像对应的第一二值化图像;
    分水岭分割模块,设置为利用所述第一二值化图像对所述目标彩色图像进行分水岭分割,得到对应的分割图像。
  9. 一种图像分割设备,包括处理器和存储器;其中,
    所述存储器,设置为保存计算机程序;
    所述处理器,设置为执行所述计算机程序以实现如权利要求1至7任一项所述的图像分割方法。
  10. 一种计算机可读存储介质，设置为保存计算机程序，其中，所述计算机程序被处理器执行时实现如权利要求1至7任一项所述的图像分割方法。
PCT/CN2020/141617 2020-04-22 2020-12-30 一种图像分割方法、装置、设备及介质 WO2021212913A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/913,560 US20230377158A1 (en) 2020-04-22 2020-12-30 Image segmentation method, apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010321730.7 2020-04-22
CN202010321730.7A CN111223115B (zh) 2020-04-22 2020-04-22 一种图像分割方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2021212913A1 true WO2021212913A1 (zh) 2021-10-28

Family

ID=70808073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141617 WO2021212913A1 (zh) 2020-04-22 2020-12-30 一种图像分割方法、装置、设备及介质

Country Status (3)

Country Link
US (1) US20230377158A1 (zh)
CN (1) CN111223115B (zh)
WO (1) WO2021212913A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762220A (zh) * 2021-11-03 2021-12-07 通号通信信息集团有限公司 目标识别方法、电子设备、计算机可读存储介质
CN114742749A (zh) * 2022-02-27 2022-07-12 扬州盛强薄膜材料有限公司 基于图像处理的pvc薄膜质量检测方法
CN115880318A (zh) * 2023-02-08 2023-03-31 合肥中科类脑智能技术有限公司 激光测距方法、介质、设备及装置
CN116310448A (zh) * 2023-05-24 2023-06-23 山东曙岳车辆有限公司 基于计算机视觉的集装箱拼装匹配性检测方法
CN116600210A (zh) * 2023-07-18 2023-08-15 长春工业大学 基于机器人视觉的图像采集优化系统

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223115B (zh) * 2020-04-22 2020-07-14 杭州涂鸦信息技术有限公司 一种图像分割方法、装置、设备及介质
CN111767920B (zh) * 2020-06-30 2023-07-28 北京百度网讯科技有限公司 感兴趣区域的提取方法、装置、电子设备及存储介质
CN112365450A (zh) * 2020-10-23 2021-02-12 安徽启新明智科技有限公司 基于图像识别的物品分类计数的方法、装置和存储介质
CN113503970A (zh) * 2021-07-08 2021-10-15 江苏高速公路信息工程有限公司 一种火灾监控方法和监控系统
CN113629574A (zh) * 2021-08-18 2021-11-09 国网湖北省电力有限公司襄阳供电公司 基于视频监测技术的强风沙地区输电导线舞动预警系统
CN117689673B (zh) * 2024-02-04 2024-04-23 湘潭大学 基于分水岭的wc颗粒电镜图像分割及粒度分布计算方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140294318A1 (en) * 2013-03-29 2014-10-02 Fujitsu Limited Gray image processing method and apparatus
CN106157323A (zh) * 2016-08-30 2016-11-23 西安工程大学 一种动态分块阈值和块搜索结合的绝缘子分割提取方法
CN109559318A (zh) * 2018-10-12 2019-04-02 昆山博泽智能科技有限公司 基于积分算法的局部自适应图像阈值处理方法
CN109816681A (zh) * 2019-01-10 2019-05-28 中国药科大学 基于自适应局部阈值二值化的水体微生物图像分割方法
CN111223115A (zh) * 2020-04-22 2020-06-02 杭州涂鸦信息技术有限公司 一种图像分割方法、装置、设备及介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012061669A2 (en) * 2010-11-05 2012-05-10 Cytognomix,Inc. Centromere detector and method for determining radiation exposure from chromosome abnormalities
CN102509296B (zh) * 2011-11-10 2014-04-16 西安电子科技大学 基于最大相似性区域合并的胃部ct图像交互式分割方法
CN103426156A (zh) * 2012-05-15 2013-12-04 中国科学院声学研究所 一种基于svm分类器的sas图像分割方法及系统
CN107945200B (zh) * 2017-12-14 2021-08-03 中南大学 图像二值化分割方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140294318A1 (en) * 2013-03-29 2014-10-02 Fujitsu Limited Gray image processing method and apparatus
CN106157323A (zh) * 2016-08-30 2016-11-23 西安工程大学 一种动态分块阈值和块搜索结合的绝缘子分割提取方法
CN109559318A (zh) * 2018-10-12 2019-04-02 昆山博泽智能科技有限公司 基于积分算法的局部自适应图像阈值处理方法
CN109816681A (zh) * 2019-01-10 2019-05-28 中国药科大学 基于自适应局部阈值二值化的水体微生物图像分割方法
CN111223115A (zh) * 2020-04-22 2020-06-02 杭州涂鸦信息技术有限公司 一种图像分割方法、装置、设备及介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
T. ROMEN SINGH, SUDIPTA ROY, O. IMOCHA SINGH, TEJMANI SINAM, KH. MANGLEM SINGH: "A New Local Adaptive Thresholding Technique in Binarization", IJCSI INTERNATIONAL JOURNAL OF COMPUTER SCIENCE, vol. 8, no. 6, 30 November 2011 (2011-11-30), pages 271 - 277, XP055860873, ISSN: 1694-0814 *
WU WENYI;CUI CHANGCAI;YE RUIFANG;ZHANG YONGZHEN;YU QING: "Image Segmentation Method Using Second Time Gray Level Histogram of Connected Component Labeling of Grinding Wheel Abrasives Grains", JOURNAL OF HUAQIAO UNIVERSITY (NATURAL SCIENCE), vol. 37, no. 4, 20 July 2016 (2016-07-20), pages 422 - 426, XP055860876, ISSN: 1000-5013, DOI: 10.11830/issn.1000-5013.201604006 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762220A (zh) * 2021-11-03 2021-12-07 通号通信信息集团有限公司 目标识别方法、电子设备、计算机可读存储介质
CN114742749A (zh) * 2022-02-27 2022-07-12 扬州盛强薄膜材料有限公司 基于图像处理的pvc薄膜质量检测方法
CN114742749B (zh) * 2022-02-27 2023-04-18 扬州盛强薄膜材料有限公司 基于图像处理的pvc薄膜质量检测方法
CN115880318A (zh) * 2023-02-08 2023-03-31 合肥中科类脑智能技术有限公司 激光测距方法、介质、设备及装置
CN116310448A (zh) * 2023-05-24 2023-06-23 山东曙岳车辆有限公司 基于计算机视觉的集装箱拼装匹配性检测方法
CN116600210A (zh) * 2023-07-18 2023-08-15 长春工业大学 基于机器人视觉的图像采集优化系统
CN116600210B (zh) * 2023-07-18 2023-10-10 长春工业大学 基于机器人视觉的图像采集优化系统

Also Published As

Publication number Publication date
US20230377158A1 (en) 2023-11-23
CN111223115A (zh) 2020-06-02
CN111223115B (zh) 2020-07-14

Similar Documents

Publication Publication Date Title
WO2021212913A1 (zh) 一种图像分割方法、装置、设备及介质
US9292928B2 (en) Depth constrained superpixel-based depth map refinement
Lalimi et al. A vehicle license plate detection method using region and edge based methods
WO2022205525A1 (zh) 基于双目视觉自主式水下机器人回收导引伪光源去除方法
TW201740316A (zh) 圖像文字的識別方法和裝置
Kaur et al. A review on various methods of image thresholding
CN116597392B (zh) 基于机器视觉的液压油杂质识别方法
CN109447117B (zh) 双层车牌识别方法、装置、计算机设备及存储介质
CN112528868B (zh) 一种基于改进Canny边缘检测算法的违章压线判别方法
CN111369570B (zh) 一种视频图像的多目标检测跟踪方法
CN115272362A (zh) 一种数字病理全场图像有效区域分割方法、装置
CN110751156A (zh) 用于表格线大块干扰去除方法、系统、设备及介质
CN113537037A (zh) 路面病害识别方法、系统、电子设备及存储介质
CN110826360A (zh) Ocr图像预处理与文字识别
CN116739943A (zh) 图像平滑处理方法及目标轮廓提取方法
CN114550173A (zh) 图像预处理方法、装置、电子设备以及可读存储介质
CN108647713B (zh) 胚胎边界识别与激光轨迹拟合方法
CN110796076A (zh) 一种高光谱图像河流检测方法
CN114529570A (zh) 图像分割方法、图像识别方法、用户凭证补办方法及系统
CN115862044A (zh) 用于从图像中提取目标文档部分的方法、设备和介质
CN115841632A (zh) 输电线路提取方法、装置以及双目测距方法
Jena et al. An algorithmic approach based on CMS edge detection technique for the processing of digital images
CN111476800A (zh) 一种基于形态学操作的文字区域检测方法及装置
CN112102350A (zh) 一种基于Otsu和Tsallis熵的二次图像分割方法
Cherala et al. Palm leaf manuscript/color document image enhancement by using improved adaptive binarization method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931749

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20931749

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023)
