WO2021212913A1 - Image segmentation method, apparatus, device, and medium - Google Patents
Image segmentation method, apparatus, device, and medium
- Publication number: WO2021212913A1
- Application number: PCT/CN2020/141617
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: image, sub, segmentation, target, grayscale
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
- G06T2207/20041—Distance transform
Definitions
- This application relates to the field of image processing technology, and in particular to an image segmentation method, apparatus, device, and medium.
- Image segmentation refers to the technology and process of dividing an image into regions with distinct characteristics and extracting objects of interest. It is a very important step in image processing, detection, and analysis. In image segmentation, missed targets have always been a difficult problem.
- The OTSU maximum between-class variance method is usually used; however, for complex background images with noise interference, uneven illumination, and large variations in background gray level, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of each region of the image, resulting in missed targets and making effective image segmentation difficult.
- The embodiments of the present application provide an image segmentation method, apparatus, device, and medium, which can avoid missing targets during image segmentation, thereby improving its effectiveness.
- an image segmentation method including:
- the target grayscale image is the grayscale image corresponding to the target color image
- the grayscale histogram corresponding to each first sub-image in the first sub-image set is used to determine the target grayscale data corresponding to that first sub-image;
- the target grayscale data includes a grayscale average value, a maximum grayscale value, and a minimum grayscale value;
- if the first sub-image meets the preset segmentation condition, the first sub-image is segmented a second time to obtain the corresponding second sub-image set; if the first sub-image does not meet the preset segmentation condition, the first sub-image is not segmented;
- before the first segmentation of the target gray-scale image to obtain the corresponding first sub-image set, the method further includes:
- the method further includes:
- performing filtering processing on the converted grayscale image to obtain the corresponding filtered image includes:
- Bilateral filtering is performed on the converted gray image to obtain the filtered image.
- said using the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image includes:
- the second binarized image is determined as a first marked image, and the target color image is watershed segmented by using the first marked image to obtain a corresponding segmented image.
- before performing distance transformation on the first binarized image to obtain a distance-transformed image, the method further includes:
- Morphological opening processing is performed on the first binary image to obtain the first binary image after the morphological opening processing.
- performing morphological opening processing on the first binary image to obtain the first binary image after the morphological opening processing includes:
- the second marked image is repeatedly dilated until it approaches the mask image, so as to obtain the first binarized image after the morphological opening processing.
- an image segmentation device including:
- the first image segmentation module is configured to perform the first segmentation of the target grayscale image to obtain the corresponding first sub-image set;
- the target grayscale image is a grayscale image corresponding to the target color image;
- the gray-scale data determining module is configured to determine the target gray-scale data corresponding to the first sub-image by using the gray-scale histogram corresponding to each first sub-image in the first sub-image set; the target gray-scale data includes the average gray value, the maximum gray value, and the minimum gray value;
- a segmentation condition determination module configured to determine whether the corresponding first sub-image satisfies a preset segmentation condition by using the target grayscale data
- the second image segmentation module is configured to perform a second segmentation on the first sub-image to obtain the corresponding second sub-image set if the segmentation condition determination module determines that the first sub-image satisfies the preset segmentation condition; if the segmentation condition determination module determines that the first sub-image does not meet the preset segmentation condition, the first sub-image is not segmented;
- the image binarization module is configured to use the OTSU maximum between-class variance method to perform binarization processing, respectively, on the second sub-images in the second sub-image set and on the first sub-images that have not undergone the second segmentation, to obtain the first binarized image corresponding to the target grayscale image;
- the watershed segmentation module is configured to use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
- an image segmentation device including a processor and a memory; wherein,
- the memory is configured to store a computer program
- the processor is configured to execute the computer program to implement the aforementioned image segmentation method.
- a computer-readable storage medium is also provided, which is configured to store a computer program, wherein the computer program is executed by a processor to implement the aforementioned image segmentation method.
- The target grayscale image is first segmented to obtain the corresponding first sub-image set; the target grayscale image is the grayscale image corresponding to the target color image. Then the grayscale histogram corresponding to each first sub-image in the first sub-image set is used to determine the target grayscale data corresponding to that first sub-image; the target grayscale data includes the average gray value, the maximum gray value, and the minimum gray value. The target grayscale data is then used to determine whether the corresponding first sub-image meets the preset segmentation condition: if the first sub-image meets the preset segmentation condition, a second segmentation is performed on it to obtain the corresponding second sub-image set; if it does not, it is not segmented. The OTSU maximum between-class variance method is then used to binarize, respectively, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, to obtain the first binarized image corresponding to the target grayscale image. Finally, the first binarized image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- In this way, the average gray value, the maximum gray value, and the minimum gray value of the gray histogram corresponding to each first sub-image obtained by the first segmentation are determined and used to decide whether each first sub-image meets the preset segmentation condition; the first sub-images that meet the condition are segmented a second time, and each sub-image is then binarized separately using the OTSU algorithm to obtain the binarized image used for segmentation. This overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of each region of the image, and can avoid missing targets during image segmentation, thereby improving its effectiveness.
- FIG. 1 is a flowchart of an image segmentation method disclosed in this application.
- FIG. 2 is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in this application.
- FIG. 3 is a flowchart of a specific image segmentation method disclosed in this application.
- FIG. 4 is a flowchart of a specific image segmentation method disclosed in this application.
- FIG. 5 is a schematic diagram of the structure of an image segmentation device disclosed in this application.
- FIG. 6 is a structural diagram of an image segmentation device disclosed in this application.
- FIG. 7 is a structural diagram of an electronic terminal disclosed in this application.
- The OTSU maximum between-class variance method is usually used; however, for complex background images with noise interference, uneven illumination, and large variations in background gray level, the global single threshold obtained by the OTSU algorithm often cannot account for the actual conditions of each region of the image, resulting in missed targets and making effective image segmentation difficult.
- The present application provides an image segmentation solution, which can avoid missing targets during image segmentation, thereby improving its effectiveness.
- an embodiment of the present application discloses an image segmentation method, including:
- Step S11 Perform a first segmentation on the target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is a grayscale image corresponding to the target color image.
- In the embodiment of the present application, the target grayscale image may be segmented a first time to obtain the corresponding first sub-image set, wherein the first sub-images in the first sub-image set are of equal size.
- the embodiment of the present application may perform the first segmentation of the target grayscale image to obtain a first sub-image set including a first preset number of first sub-images of equal size.
- Step S12: Determine the target gray-level data corresponding to each first sub-image by using the gray-level histogram corresponding to each first sub-image in the first sub-image set; the target gray-level data includes the average gray value, the maximum gray value, and the minimum gray value.
- The grayscale histogram of each first sub-image can be analyzed and calculated to obtain the grayscale average, maximum grayscale value, and minimum grayscale value corresponding to each sub-image's grayscale histogram.
- Step S13 Use the target gray scale data to determine whether the corresponding first sub-image satisfies a preset segmentation condition.
- This embodiment can determine whether the first difference or the second difference corresponding to the first sub-image is less than a preset threshold. If either the first difference or the second difference is less than the preset threshold, it is determined that the first sub-image satisfies the preset segmentation condition; if both the first difference and the second difference are greater than or equal to the preset threshold, it is determined that the first sub-image does not satisfy the preset segmentation condition. Here, the first difference is the difference between the average gray value and the maximum gray value, and the second difference is the difference between the average gray value and the minimum gray value.
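The Step S13 check can be sketched as follows. This is a minimal illustration assuming the two differences are taken as absolute values (the text does not fix a sign convention), with `threshold` a hypothetical preset value:

```python
import numpy as np

def target_gray_data(sub_img):
    """Gray mean, maximum and minimum gray value of a sub-image."""
    return float(sub_img.mean()), int(sub_img.max()), int(sub_img.min())

def meets_segmentation_condition(sub_img, threshold):
    # First difference: |mean - max|; second difference: |mean - min|.
    # The sub-image is segmented a second time when either difference
    # falls below the preset threshold.
    mean, g_max, g_min = target_gray_data(sub_img)
    return abs(mean - g_max) < threshold or abs(mean - g_min) < threshold
```

A uniform (low-contrast) region yields small differences and is split again, while a high-contrast region is left for direct OTSU binarization.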
- Step S14: If the first sub-image meets the preset segmentation condition, perform a second segmentation on the first sub-image to obtain the corresponding second sub-image set; if the first sub-image does not meet the preset segmentation condition, the first sub-image is not segmented.
- Step S15: Binarize the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, by using the OTSU maximum between-class variance method, and splice the resulting binarized sub-images together to obtain the first binarized image corresponding to the target grayscale image. That is, the first binarized image is the binarized image obtained by splicing the binarized images corresponding to the second sub-images and the first sub-images.
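The per-region OTSU binarization and splicing of Step S15 can be sketched as below. This is a simplified illustration that binarizes a fixed grid of equal tiles rather than reproducing the patent's conditional two-level split; `otsu_threshold` is a plain NumPy reimplementation of the OTSU criterion:

```python
import numpy as np

def otsu_threshold(img):
    # Classic OTSU: choose the threshold maximising between-class variance.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

def binarize_and_splice(gray, n_rows, n_cols):
    # Split the gray image into equal tiles, OTSU-binarize each tile
    # locally, then splice the tiles back into one binary image.
    out = np.empty_like(gray)
    h, w = gray.shape
    th, tw = h // n_rows, w // n_cols
    for r in range(n_rows):
        for c in range(n_cols):
            tile = gray[r*th:(r+1)*th, c*tw:(c+1)*tw]
            t = otsu_threshold(tile)
            out[r*th:(r+1)*th, c*tw:(c+1)*tw] = np.where(tile > t, 255, 0)
    return out
```

Because each tile gets its own threshold, a dim target in one region is not drowned out by a bright region elsewhere, which is the motivation for the local scheme.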
- The embodiment of the present application proposes an adaptive local dynamic segmentation algorithm, which may specifically include first segmenting the target gray image to obtain a first sub-image set a_1, a_2, a_3, ..., a_n of equal size, where n is the number of first sub-images in the first sub-image set. The gray-level histogram of each first sub-image is then analyzed to calculate its gray-level mean μ_1, maximum gray level g_1, and minimum gray level g_2.
- FIG. 2 is a flowchart of an adaptive local dynamic threshold segmentation algorithm disclosed in an embodiment of this application.
- Step S16 Use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
- The first binarized image may be determined as a marked image, and watershed segmentation may then be performed on the target color image to obtain the corresponding segmented image.
- In summary, the target gray-scale image is first segmented to obtain the corresponding first sub-image set; the target gray-scale image is the gray-scale image corresponding to the target color image. The grayscale histogram corresponding to each first sub-image in the set is then used to determine the target grayscale data corresponding to that sub-image; the target grayscale data includes the grayscale average, the maximum grayscale value, and the minimum grayscale value. Afterwards, the target grayscale data is used to determine whether the corresponding first sub-image meets the preset segmentation condition: if it does, a second segmentation is performed on it to obtain the corresponding second sub-image set; if it does not, it is not segmented. The OTSU maximum between-class variance method is then used to binarize, respectively, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, to obtain the first binarized image corresponding to the target gray-scale image. Finally, the first binarized image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- In this way, the average gray value, the maximum gray value, and the minimum gray value of the gray histogram corresponding to each first sub-image obtained by the first segmentation are determined and used to decide whether each first sub-image meets the preset segmentation condition; the first sub-images that meet the condition are segmented a second time, and each sub-image is then binarized separately using the OTSU algorithm to obtain the binarized image used for segmentation. This overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of each region of the image, and can avoid missing targets during image segmentation, thereby improving its effectiveness.
- an embodiment of the present application discloses a specific image segmentation method, including:
- Step S201 Convert the collected target color image into a grayscale image.
- Step S202 Perform filtering processing on the converted grayscale image to obtain a corresponding filtered image.
- bilateral filtering may be performed on the converted grayscale image to obtain the filtered image.
- Bilateral filtering can not only eliminate noise well, but also preserve edges while smoothing fine structures.
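As an illustration of the filtering step, a naive pure-NumPy bilateral filter is sketched below (a real implementation would normally call an optimized routine such as OpenCV's `cv2.bilateralFilter`; the parameter values here are illustrative, not taken from the patent):

```python
import numpy as np

def bilateral_filter(img, d=5, sigma_color=25.0, sigma_space=3.0):
    # Each output pixel is a weighted average of its d x d neighbourhood,
    # weighted by both spatial distance and gray-level difference, so
    # noise is smoothed while strong edges are preserved.
    r = d // 2
    h, w = img.shape
    padded = np.pad(img.astype(float), r, mode="edge")
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    space_w = np.exp(-(xx**2 + yy**2) / (2 * sigma_space**2))
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + d, x:x + d]
            color_w = np.exp(-((patch - img[y, x])**2) / (2 * sigma_color**2))
            wgt = space_w * color_w
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out.astype(np.uint8)
```

With `sigma_color=25`, pixels differing by 150 gray levels contribute almost zero weight, so a 50/200 step edge survives the smoothing essentially intact.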
- Step S203 Perform sharpening and enhancement processing on the filtered image to obtain the corresponding target grayscale image.
- Laplace sharpening processing may be performed on the filtered image to obtain a sharpened and enhanced image.
- image sharpening is a processing method for image enhancement, which can make the image clearer and the details more obvious.
- the Laplace operator is used for sharpening, and the applied template is as follows:
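The patent's specific template is not reproduced in this extract; the sketch below assumes the common 4-neighbour sharpening template [[0,-1,0],[-1,5,-1],[0,-1,0]], which implements g = f - ∇²f:

```python
import numpy as np

def laplacian_sharpen(img):
    # Sharpen by subtracting the 4-neighbour Laplacian from the image,
    # implemented as convolution with the template below (a standard
    # choice; the patent's actual template may differ).
    k = np.array([[0, -1, 0],
                  [-1, 5, -1],
                  [0, -1, 0]], dtype=float)
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255).astype(np.uint8)
```

A flat region passes through unchanged (5v - 4v = v), while values on either side of an edge are pushed apart, making details more apparent.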
- Step S204 Perform a first segmentation on the target grayscale image to obtain a corresponding first sub-image set; the target grayscale image is a grayscale image corresponding to the target color image.
- Step S205 Determine the target gray level data corresponding to the first sub-image by using the gray-level histogram corresponding to each first sub-image in the first sub-image set; Gray value and minimum gray value.
- Step S206 Use the target grayscale data to determine whether the corresponding first sub-image satisfies a preset segmentation condition.
- Step S207 If the first sub-image meets the preset segmentation condition, perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not meet According to the preset segmentation condition, the first sub-image is not segmented.
- Step S208 Binarize the second sub-image in the second sub-image set and the first sub-image that has not undergone the second segmentation by using the OTSU maximum between-class variance method to obtain the target gray level The first binarized image corresponding to the image.
- Step S209 Perform morphological opening processing on the first binary image to obtain the first binary image after the morphological opening processing.
- Specifically, erosion processing is performed on the first binarized image, and the eroded first binarized image is determined to be the second marked image; the first binarized image before erosion is determined to be the mask image; the second marked image is then repeatedly dilated until it approaches the mask image, to obtain the first binarized image after morphological opening processing.
- Using a morphological opening operation helps remove tiny details and much of the noise in the image.
- Step S210 Perform distance transformation on the first binarized image to obtain an image after distance transformation.
- This embodiment can perform distance transformation on the first binarized image processed by the opening operation. The result of the distance transformation is a grayscale image similar to the target grayscale image, except that nonzero gray values appear only in the foreground region, and the farther a pixel is from the background edge, the larger its gray value.
- the distance transformation formula of this embodiment is specifically
- G(x,y) = 255 × (S(x,y) − Min) / (Max − Min);
- S(x,y) is the set formed by the shortest distance from each internal point to the non-internal point set in the connected domain in the first binarized image
- Min is the minimum value in the set S(x,y)
- Max is the maximum value in the set S(x,y)
- G(x,y) is the gray value corresponding to each internal pixel in the connected domain after distance transformation
- (x,y) is the pixel coordinate
- The central pixel is farthest from all zero-valued pixels on the boundary, so its gray value is the largest; a bright line forms at the center of the connected domain, and the binary image is finally converted into a gray image in which the gray value of each pixel is the corresponding distance value.
- The value of (Max − Min) may be relatively large, such as 255, which would make the transformed G(x,y) small. The coefficient 255 in 255 × (S(x,y) − Min) is therefore used to prevent the value of G(x,y) from being too small and to ensure that it is greater than 1.
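A sketch of the distance transformation is given below. It computes S(x,y) by multi-source BFS from the background (4-connected distances, an assumption) and applies G = 255 × (S − Min) / (Max − Min); for simplicity, Min and Max are taken over all foreground pixels rather than per connected domain:

```python
from collections import deque

import numpy as np

def distance_transform(bin_img):
    # BFS from every background pixel at once: S(x, y) is the shortest
    # 4-connected distance from each foreground pixel to the background.
    h, w = bin_img.shape
    INF = h * w
    dist = np.where(bin_img == 0, 0, INF)
    q = deque(zip(*np.nonzero(bin_img == 0)))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] > dist[y, x] + 1:
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    # Scale so pixels deep inside the foreground are brightest:
    # G = 255 * (S - Min) / (Max - Min).
    fg = bin_img > 0
    lo, hi = dist[fg].min(), dist[fg].max()
    g = np.zeros_like(dist, dtype=float)
    g[fg] = 255.0 if hi == lo else 0.0
    if hi > lo:
        g[fg] = 255.0 * (dist[fg] - lo) / (hi - lo)
    return g.astype(np.uint8)
```

The resulting image has a bright core in each connected domain, which is what the subsequent binarization turns into seed markers for the watershed.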
- Step S211 Perform normalization processing on the distance transformed image to obtain a normalized image.
- a normalization operation can be performed after the distance transformation to convert the original data range after the distance transformation to the range of [0,1].
- the normalization formula is as follows: G(x,y)_norm = (G(x,y) − G(x,y)_min) / (G(x,y)_max − G(x,y)_min), where:
- G(x,y) norm is the normalized data
- G(x,y) is the original data before normalization
- G(x,y)_max and G(x,y)_min are the maximum and minimum values of the original data set.
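The min-max normalization of Step S211 can be sketched as:

```python
import numpy as np

def min_max_normalize(g):
    # Map the distance-transformed values into [0, 1] via
    # (G - G_min) / (G_max - G_min).
    g = g.astype(float)
    lo, hi = g.min(), g.max()
    if hi == lo:
        return np.zeros_like(g)  # degenerate case: constant image
    return (g - lo) / (hi - lo)
```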
- Step S212 Binarize the normalized image by using the OTSU maximum between-class variance method to obtain a second binarized image.
- the second binarized image is a binarized image obtained by performing binarization processing on the normalized image by using the OTSU maximum between-class variance method.
- Step S213 Determine the second binarized image as a first marked image, and use the first marked image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
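A minimal illustration of marker-controlled flooding is sketched below. It is a simplified Meyer-style flood on a grayscale image (real implementations such as `cv2.watershed` operate on the color image and also mark ridge lines; the function and variable names here are illustrative):

```python
import heapq

import numpy as np

def marker_watershed(gray, markers):
    # Pixels are flooded outward from the labelled markers in order of
    # increasing gray value; each unlabelled pixel takes the label of
    # the marker region whose flood reaches it first.
    h, w = gray.shape
    labels = markers.copy().astype(int)
    heap = []
    for y, x in zip(*np.nonzero(labels > 0)):
        heapq.heappush(heap, (int(gray[y, x]), int(y), int(x)))
    while heap:
        _, y, x = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (int(gray[ny, nx]), ny, nx))
    return labels
```

Because low-gray pixels are flooded first, a bright ridge between two markers is only crossed after both basins are filled, so each basin keeps its own label.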
- an embodiment of the present application discloses a flow chart of a specific image segmentation method.
- an image segmentation device including:
- the first image segmentation module 11 is configured to perform the first segmentation of the target grayscale image to obtain the corresponding first sub-image set;
- the target grayscale image is a grayscale image corresponding to the target color image;
- the gray-scale data determining module 12 is configured to determine the target gray-scale data corresponding to the first sub-image by using the gray-scale histogram corresponding to each first sub-image in the first sub-image set; the target gray-scale data includes the mean gray value, the maximum gray value, and the minimum gray value;
- the segmentation condition determination module 13 is configured to determine whether the corresponding first sub-image satisfies a preset segmentation condition by using the target grayscale data;
- the second image segmentation module 14 is configured to perform a second segmentation on the first sub-image to obtain the corresponding second sub-image set if the segmentation condition determination module determines that the first sub-image satisfies the preset segmentation condition; if the segmentation condition determination module determines that the first sub-image does not meet the preset segmentation condition, the first sub-image is not segmented;
- the image binarization module 15 is configured to perform binarization processing, respectively, on the second sub-images in the second sub-image set and on the first sub-images that have not undergone the second segmentation by using the OTSU maximum between-class variance method, to obtain the first binarized image corresponding to the target grayscale image;
- the watershed segmentation module 16 is configured to use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
- With this device, the target gray-scale image is first segmented to obtain the corresponding first sub-image set; the target gray-scale image is the gray-scale image corresponding to the target color image. The grayscale histogram corresponding to each first sub-image in the set is then used to determine the target grayscale data corresponding to that sub-image; the target grayscale data includes the grayscale average, the maximum grayscale value, and the minimum grayscale value. Afterwards, the target grayscale data is used to determine whether the corresponding first sub-image meets the preset segmentation condition: if it does, a second segmentation is performed on it to obtain the corresponding second sub-image set; if it does not, it is not segmented. The OTSU maximum between-class variance method is then used to binarize, respectively, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, to obtain the first binarized image corresponding to the target gray-scale image. Finally, the first binarized image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- In this way, the average gray value, the maximum gray value, and the minimum gray value of the gray histogram corresponding to each first sub-image obtained by the first segmentation are determined and used to decide whether each first sub-image meets the preset segmentation condition; the first sub-images that meet the condition are segmented a second time, and each sub-image is then binarized separately using the OTSU algorithm to obtain the binarized image used for segmentation. This overcomes the problem that the global single threshold obtained by the OTSU algorithm cannot account for the actual conditions of each region of the image, and can avoid missing targets during image segmentation, thereby improving its effectiveness.
- the image segmentation device also includes an image grayscale processing module, which is configured to convert the collected target color image into a grayscale image.
- the image segmentation device also includes an image filtering processing module, which is configured to perform filtering processing on the converted grayscale image to obtain a corresponding filtered image.
- the image filtering processing module is specifically configured to perform bilateral filtering on the converted gray image to obtain the filtered image.
- the image segmentation device also includes an image enhancement processing module configured to perform sharpening enhancement processing on the filtered image to obtain the corresponding target grayscale image.
- the watershed segmentation module 16 may specifically include:
- the distance transformation sub-module is configured to perform distance transformation on the first binarized image to obtain a distance-transformed image;
- the normalization processing sub-module is configured to perform normalization processing on the distance-transformed image to obtain a normalized image;
- the binarization processing sub-module is configured to perform binarization processing on the normalized image using the OTSU maximum between-class variance method to obtain a second binarized image;
- the image segmentation sub-module is configured to determine the second binarized image as a first marked image, and use the first marked image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
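The marker-generation chain of these sub-modules (distance transform, normalization, second OTSU binarization) can be sketched in NumPy. The two-pass city-block distance transform and the compact Otsu routine below are stand-ins for library calls such as OpenCV's `cv2.distanceTransform`; the watershed step itself is left to such a library (e.g. `cv2.watershed` applied to the target color image with these markers), so only marker construction is shown.

```python
import numpy as np

def otsu_threshold(img):
    """Compact Otsu: threshold maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    w0 = np.cumsum(hist)
    m0 = np.cumsum(hist * np.arange(256))
    w1, m1 = w0[-1] - w0, m0[-1] - m0
    valid = (w0 > 0) & (w1 > 0)
    var = np.zeros(256)
    var[valid] = w0[valid] * w1[valid] * (m0[valid] / w0[valid] - m1[valid] / w1[valid]) ** 2
    return int(np.argmax(var))

def distance_transform(binary):
    """Two-pass city-block distance from each foreground pixel to the
    nearest background pixel (dynamic-programming sweeps)."""
    h, w = binary.shape
    d = np.where(binary > 0, h + w, 0).astype(np.int32)
    for r in range(h):                        # forward sweep: up/left
        for c in range(w):
            if d[r, c]:
                if r: d[r, c] = min(d[r, c], d[r - 1, c] + 1)
                if c: d[r, c] = min(d[r, c], d[r, c - 1] + 1)
    for r in range(h - 1, -1, -1):            # backward sweep: down/right
        for c in range(w - 1, -1, -1):
            if r < h - 1: d[r, c] = min(d[r, c], d[r + 1, c] + 1)
            if c < w - 1: d[r, c] = min(d[r, c], d[r, c + 1] + 1)
    return d

def watershed_markers(first_binary):
    """Distance-transform, normalize to 0..255, re-threshold with Otsu:
    the surviving blob cores become the marker image for watershed."""
    d = distance_transform(first_binary).astype(float)
    if d.max() > 0:
        d = d / d.max() * 255
    norm = d.astype(np.uint8)
    return np.where(norm > otsu_threshold(norm), 255, 0).astype(np.uint8)
```

Thresholding the normalized distance map keeps only pixels deep inside each object, which is exactly what a watershed marker needs: one confident seed per object, well away from touching boundaries.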
- the image segmentation device further includes an opening operation processing module configured to perform morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
- the opening operation processing module is specifically configured to perform erosion processing on the first binarized image and determine the eroded first binarized image as the second marked image; the first binarized image before erosion is determined as the mask image; and the second marked image is repeatedly dilated until it approaches the mask image, so as to obtain the first binarized image after morphological opening.
- Figure 6 shows an image segmentation device disclosed in an embodiment of this application, including a processor 21 and a memory 22; the memory 22 is configured to store a computer program, and the processor 21 is configured to execute the computer program to implement the following steps:
- performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining, from the grayscale histogram corresponding to each first sub-image in the first sub-image set, the target grayscale data corresponding to that first sub-image, the target grayscale data including the grayscale mean, the maximum grayscale value, and the minimum grayscale value; using the target grayscale data to determine whether the corresponding first sub-image meets a preset segmentation condition; if the first sub-image meets the preset segmentation condition, segmenting the first sub-image a second time to obtain a corresponding second sub-image set; if the first sub-image does not meet the preset segmentation condition, not segmenting the first sub-image; binarizing, with the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that did not undergo the second segmentation, to obtain the first binarized image corresponding to the target grayscale image; and using the first binarized image to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- in this embodiment, the target grayscale image is first segmented to obtain the corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram corresponding to each first sub-image in the set is then used to determine the target grayscale data (grayscale mean, maximum grayscale value, and minimum grayscale value) for that sub-image; the target grayscale data is used to judge whether the corresponding first sub-image meets the preset segmentation condition, and if it does, the first sub-image is segmented a second time to obtain the corresponding second sub-image set, while first sub-images that do not meet the condition are not segmented further; the OTSU maximum between-class variance method is then applied separately to the second sub-images in the second sub-image set and to the first sub-images that did not undergo the second segmentation, to obtain the first binarized image corresponding to the target grayscale image; finally, the first binarized image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- the grayscale mean, maximum grayscale value, and minimum grayscale value of the grayscale histogram corresponding to each first sub-image obtained by the first segmentation are determined, and these values are used to judge whether each first sub-image meets the preset segmentation condition; first sub-images that meet the condition are segmented a second time, and the OTSU algorithm is then applied to each resulting sub-image separately for binarization, yielding the binarized image used for image segmentation. This overcomes the problem that the single global threshold obtained by the OTSU algorithm cannot account for the actual conditions of each image region, and avoids missing targets during segmentation, thereby improving segmentation effectiveness.
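For reference, the OTSU maximum between-class variance criterion invoked throughout selects the threshold $t$ that maximizes

```latex
\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl[\mu_0(t) - \mu_1(t)\bigr]^2
```

where $\omega_0(t)$ and $\omega_1(t)$ are the probabilities of the two gray-level classes separated by $t$, and $\mu_0(t)$ and $\mu_1(t)$ are their mean gray values; this is the standard Otsu definition, stated here only to make the repeated references concrete.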
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain the corresponding filtered image.
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: performing bilateral filtering on the converted grayscale image to obtain the filtered image.
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: performing distance transformation on the first binarized image to obtain a distance-transformed image; normalizing the distance-transformed image to obtain a normalized image; binarizing the normalized image using the OTSU maximum between-class variance method to obtain a second binarized image; and determining the second binarized image as the first marked image, and using the first marked image to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
- when the processor 21 executes the computer subprogram stored in the memory 22, the following steps can be specifically implemented: performing erosion processing on the first binarized image and determining the eroded first binarized image as the second marked image; determining the first binarized image before erosion as the mask image; and repeatedly dilating the second marked image until it approaches the mask image, so as to obtain the first binarized image after morphological opening.
- the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, and the storage may be short-term or permanent.
- an embodiment of the present application discloses an electronic terminal 20, which includes the processor 21 and the memory 22 disclosed in the foregoing embodiment.
- for the specific steps that the processor 21 can perform, reference may be made to the corresponding content disclosed in the foregoing embodiments, which will not be repeated here.
- the electronic terminal 20 in this embodiment may also include a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26; the power supply 23 provides a working voltage for each hardware device on the terminal 20; the communication interface 24 can create a data transmission channel between the terminal 20 and external devices, and the communication protocol it follows may be any communication protocol applicable to the technical solution of this application, which is not specifically limited here.
- the input/output interface 25 is configured to obtain external input data or to output data to the outside; its specific interface type may be selected according to application needs and is not specifically limited here.
- an embodiment of the present application also discloses a computer-readable storage medium configured to store a computer program, wherein the computer program, when executed by a processor, implements the following steps:
- performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining, from the grayscale histogram corresponding to each first sub-image in the first sub-image set, the target grayscale data corresponding to that first sub-image, the target grayscale data including the grayscale mean, the maximum grayscale value, and the minimum grayscale value; using the target grayscale data to determine whether the corresponding first sub-image meets a preset segmentation condition; if the first sub-image meets the preset segmentation condition, segmenting the first sub-image a second time to obtain a corresponding second sub-image set; if the first sub-image does not meet the preset segmentation condition, not segmenting the first sub-image; binarizing, with the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that did not undergo the second segmentation, to obtain the first binarized image corresponding to the target grayscale image; and using the first binarized image to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- in this embodiment, the target grayscale image is first segmented to obtain the corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to the target color image; the grayscale histogram corresponding to each first sub-image in the set is then used to determine the target grayscale data (grayscale mean, maximum grayscale value, and minimum grayscale value) for that sub-image; the target grayscale data is used to judge whether the corresponding first sub-image meets the preset segmentation condition, and if it does, the first sub-image is segmented a second time to obtain the corresponding second sub-image set, while first sub-images that do not meet the condition are not segmented further; the OTSU maximum between-class variance method is then applied separately to the second sub-images in the second sub-image set and to the first sub-images that did not undergo the second segmentation, to obtain the first binarized image corresponding to the target grayscale image; finally, the first binarized image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- the grayscale mean, maximum grayscale value, and minimum grayscale value of the grayscale histogram corresponding to each first sub-image obtained by the first segmentation are determined, and these values are used to judge whether each first sub-image meets the preset segmentation condition; first sub-images that meet the condition are segmented a second time, and the OTSU algorithm is then applied to each resulting sub-image separately for binarization, yielding the binarized image used for image segmentation. This overcomes the problem that the single global threshold obtained by the OTSU algorithm cannot account for the actual conditions of each image region, and avoids missing targets during segmentation, thereby improving segmentation effectiveness.
- the following steps can be specifically implemented: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain the corresponding filtered image.
- the following steps can be specifically implemented: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
- the following steps can be specifically implemented: performing bilateral filtering on the converted grayscale image to obtain the filtered image.
- the following steps can be specifically implemented: performing distance transformation on the first binarized image to obtain a distance-transformed image;
- the distance-transformed image is normalized to obtain a normalized image;
- the normalized image is binarized using the OTSU maximum between-class variance method to obtain a second binarized image;
- the second binarized image is determined to be the first marked image, and the first marked image is used to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- the following steps can be specifically implemented: performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
- the following steps can be specifically implemented: performing erosion processing on the first binarized image and determining the eroded first binarized image as the second marked image; determining the first binarized image before erosion as the mask image; and repeatedly dilating the second marked image until it approaches the mask image, so as to obtain the first binarized image after morphological opening.
- the steps of the method or algorithm described in the embodiments disclosed in this document can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
- the software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Claims (10)
- An image segmentation method, comprising: performing a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; determining, from the grayscale histogram corresponding to each first sub-image in the first sub-image set, target grayscale data corresponding to the first sub-image, the target grayscale data comprising a grayscale mean, a maximum grayscale value, and a minimum grayscale value; using the target grayscale data to determine whether the corresponding first sub-image meets a preset segmentation condition; if the first sub-image meets the preset segmentation condition, performing a second segmentation on the first sub-image to obtain a corresponding second sub-image set; if the first sub-image does not meet the preset segmentation condition, not segmenting the first sub-image; binarizing, using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a first binarized image corresponding to the target grayscale image; and using the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
- The image segmentation method according to claim 1, wherein before performing the first segmentation on the target grayscale image to obtain the corresponding first sub-image set, the method further comprises: converting the collected target color image into a grayscale image; and filtering the converted grayscale image to obtain a corresponding filtered image.
- The image segmentation method according to claim 2, wherein after filtering the converted grayscale image to obtain the corresponding filtered image, the method further comprises: performing sharpening enhancement on the filtered image to obtain the corresponding target grayscale image.
- The image segmentation method according to claim 2, wherein filtering the converted grayscale image to obtain the corresponding filtered image comprises: performing bilateral filtering on the converted grayscale image to obtain the filtered image.
- The image segmentation method according to any one of claims 1 to 4, wherein using the first binarized image to perform watershed segmentation on the target color image to obtain the corresponding segmented image comprises: performing distance transformation on the first binarized image to obtain a distance-transformed image; normalizing the distance-transformed image to obtain a normalized image; binarizing the normalized image using the OTSU maximum between-class variance method to obtain a second binarized image; and determining the second binarized image as a first marked image, and using the first marked image to perform watershed segmentation on the target color image to obtain the corresponding segmented image.
- The image segmentation method according to claim 5, wherein before performing the distance transformation on the first binarized image to obtain the distance-transformed image, the method further comprises: performing morphological opening on the first binarized image to obtain the first binarized image after morphological opening.
- The image segmentation method according to claim 6, wherein performing the morphological opening on the first binarized image comprises: performing erosion on the first binarized image, and determining the eroded first binarized image as a second marked image; determining the first binarized image before erosion as a mask image; and repeatedly dilating the second marked image until the second marked image approaches the mask image, so as to obtain the first binarized image after morphological opening.
- An image segmentation apparatus, comprising: a first image segmentation module configured to perform a first segmentation on a target grayscale image to obtain a corresponding first sub-image set, the target grayscale image being the grayscale image corresponding to a target color image; a grayscale data determination module configured to determine, from the grayscale histogram corresponding to each first sub-image in the first sub-image set, target grayscale data corresponding to the first sub-image, the target grayscale data comprising a grayscale mean, a maximum grayscale value, and a minimum grayscale value; a segmentation condition judgment module configured to use the target grayscale data to determine whether the corresponding first sub-image meets a preset segmentation condition; a second image segmentation module configured to perform a second segmentation on the first sub-image to obtain a corresponding second sub-image set if the segmentation condition judgment module determines that the first sub-image meets the preset segmentation condition, and not to segment the first sub-image otherwise; an image binarization module configured to binarize, using the OTSU maximum between-class variance method, the second sub-images in the second sub-image set and the first sub-images that have not undergone the second segmentation, respectively, to obtain a first binarized image corresponding to the target grayscale image; and a watershed segmentation module configured to use the first binarized image to perform watershed segmentation on the target color image to obtain a corresponding segmented image.
- An image segmentation device, comprising a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program to implement the image segmentation method according to any one of claims 1 to 7.
- A computer-readable storage medium configured to store a computer program, wherein the computer program, when executed by a processor, implements the image segmentation method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/913,560 US20230377158A1 (en) | 2020-04-22 | 2020-12-30 | Image segmentation method, apparatus, device, and medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010321730.7 | 2020-04-22 | ||
CN202010321730.7A CN111223115B (zh) | 2020-04-22 | 2020-04-22 | 一种图像分割方法、装置、设备及介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021212913A1 true WO2021212913A1 (zh) | 2021-10-28 |
Family
ID=70808073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/141617 WO2021212913A1 (zh) | 2020-04-22 | 2020-12-30 | 一种图像分割方法、装置、设备及介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230377158A1 (zh) |
CN (1) | CN111223115B (zh) |
WO (1) | WO2021212913A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111223115B (zh) * | 2020-04-22 | 2020-07-14 | 杭州涂鸦信息技术有限公司 | Image segmentation method, apparatus, device, and medium |
- CN111767920B (zh) * | 2020-06-30 | 2023-07-28 | 北京百度网讯科技有限公司 | Region-of-interest extraction method and apparatus, electronic device, and storage medium |
- CN112365450A (zh) * | 2020-10-23 | 2021-02-12 | 安徽启新明智科技有限公司 | Method, apparatus, and storage medium for classifying and counting articles based on image recognition |
- CN113503970A (zh) * | 2021-07-08 | 2021-10-15 | 江苏高速公路信息工程有限公司 | Fire monitoring method and monitoring system |
- CN113629574A (zh) * | 2021-08-18 | 2021-11-09 | 国网湖北省电力有限公司襄阳供电公司 | Early-warning system for transmission-conductor galloping in strong wind-sand areas based on video monitoring technology |
- CN117689673B (zh) * | 2024-02-04 | 2024-04-23 | 湘潭大学 | Watershed-based WC particle electron-microscope image segmentation and particle-size distribution calculation method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140294318A1 (en) * | 2013-03-29 | 2014-10-02 | Fujitsu Limited | Gray image processing method and apparatus |
- CN106157323A (zh) * | 2016-08-30 | 2016-11-23 | 西安工程大学 | Insulator segmentation and extraction method combining dynamic block thresholding and block search |
- CN109559318A (zh) * | 2018-10-12 | 2019-04-02 | 昆山博泽智能科技有限公司 | Local adaptive image thresholding method based on an integral algorithm |
- CN109816681A (zh) * | 2019-01-10 | 2019-05-28 | 中国药科大学 | Water-body microorganism image segmentation method based on adaptive local-threshold binarization |
- CN111223115A (zh) * | 2020-04-22 | 2020-06-02 | 杭州涂鸦信息技术有限公司 | Image segmentation method, apparatus, device, and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012061669A2 (en) * | 2010-11-05 | 2012-05-10 | Cytognomix,Inc. | Centromere detector and method for determining radiation exposure from chromosome abnormalities |
- CN102509296B (zh) * | 2011-11-10 | 2014-04-16 | 西安电子科技大学 | Interactive segmentation method for gastric CT images based on maximum-similarity region merging |
- CN103426156A (zh) * | 2012-05-15 | 2013-12-04 | 中国科学院声学研究所 | SAS image segmentation method and system based on an SVM classifier |
- CN107945200B (zh) * | 2017-12-14 | 2021-08-03 | 中南大学 | Image binarization segmentation method |
-
2020
- 2020-04-22 CN CN202010321730.7A patent/CN111223115B/zh active Active
- 2020-12-30 US US17/913,560 patent/US20230377158A1/en active Pending
- 2020-12-30 WO PCT/CN2020/141617 patent/WO2021212913A1/zh active Application Filing
Non-Patent Citations (2)
Title |
---|
T. ROMEN SINGH, SUDIPTA ROY, O. IMOCHA SINGH, TEJMANI SINAM, KH. MANGLEM SINGH: "A New Local Adaptive Thresholding Technique in Binarization", IJCSI INTERNATIONAL JOURNAL OF COMPUTER SCIENCE, vol. 8, no. 6, 30 November 2011 (2011-11-30), pages 271 - 277, XP055860873, ISSN: 1694-0814 * |
WU WENYI;CUI CHANGCAI;YE RUIFANG;ZHANG YONGZHEN;YU QING: "Image Segmentation Method Using Second Time Gray Level Histogram of Connected Component Labeling of Grinding Wheel Abrasives Grains", JOURNAL OF HUAQIAO UNIVERSITY (NATURAL SCIENCE), vol. 37, no. 4, 20 July 2016 (2016-07-20), pages 422 - 426, XP055860876, ISSN: 1000-5013, DOI: 10.11830/issn.1000-5013.201604006 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113762220A (zh) * | 2021-11-03 | 2021-12-07 | 通号通信信息集团有限公司 | Target recognition method, electronic device, and computer-readable storage medium |
- CN114742749A (zh) * | 2022-02-27 | 2022-07-12 | 扬州盛强薄膜材料有限公司 | PVC film quality detection method based on image processing |
- CN114742749B (zh) * | 2022-02-27 | 2023-04-18 | 扬州盛强薄膜材料有限公司 | PVC film quality detection method based on image processing |
- CN115880318A (zh) * | 2023-02-08 | 2023-03-31 | 合肥中科类脑智能技术有限公司 | Laser ranging method, medium, device, and apparatus |
- CN116310448A (zh) * | 2023-05-24 | 2023-06-23 | 山东曙岳车辆有限公司 | Computer-vision-based method for detecting container assembly matching |
- CN116600210A (zh) * | 2023-07-18 | 2023-08-15 | 长春工业大学 | Image acquisition optimization system based on robot vision |
- CN116600210B (zh) * | 2023-07-18 | 2023-10-10 | 长春工业大学 | Image acquisition optimization system based on robot vision |
Also Published As
Publication number | Publication date |
---|---|
US20230377158A1 (en) | 2023-11-23 |
CN111223115A (zh) | 2020-06-02 |
CN111223115B (zh) | 2020-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021212913A1 (zh) | 一种图像分割方法、装置、设备及介质 | |
US9292928B2 (en) | Depth constrained superpixel-based depth map refinement | |
Lalimi et al. | A vehicle license plate detection method using region and edge based methods | |
WO2022205525A1 (zh) | 基于双目视觉自主式水下机器人回收导引伪光源去除方法 | |
TW201740316A (zh) | 圖像文字的識別方法和裝置 | |
Kaur et al. | A review on various methods of image thresholding | |
CN116597392B (zh) | 基于机器视觉的液压油杂质识别方法 | |
CN109447117B (zh) | 双层车牌识别方法、装置、计算机设备及存储介质 | |
CN112528868B (zh) | 一种基于改进Canny边缘检测算法的违章压线判别方法 | |
CN111369570B (zh) | 一种视频图像的多目标检测跟踪方法 | |
CN115272362A (zh) | 一种数字病理全场图像有效区域分割方法、装置 | |
CN110751156A (zh) | 用于表格线大块干扰去除方法、系统、设备及介质 | |
CN113537037A (zh) | 路面病害识别方法、系统、电子设备及存储介质 | |
CN110826360A (zh) | Ocr图像预处理与文字识别 | |
CN116739943A (zh) | 图像平滑处理方法及目标轮廓提取方法 | |
CN114550173A (zh) | 图像预处理方法、装置、电子设备以及可读存储介质 | |
CN108647713B (zh) | 胚胎边界识别与激光轨迹拟合方法 | |
CN110796076A (zh) | 一种高光谱图像河流检测方法 | |
CN114529570A (zh) | 图像分割方法、图像识别方法、用户凭证补办方法及系统 | |
CN115862044A (zh) | 用于从图像中提取目标文档部分的方法、设备和介质 | |
CN115841632A (zh) | 输电线路提取方法、装置以及双目测距方法 | |
Jena et al. | An algorithmic approach based on CMS edge detection technique for the processing of digital images | |
CN111476800A (zh) | 一种基于形态学操作的文字区域检测方法及装置 | |
CN112102350A (zh) | 一种基于Otsu和Tsallis熵的二次图像分割方法 | |
Cherala et al. | Palm leaf manuscript/color document image enhancement by using improved adaptive binarization method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20931749 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20931749 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023) |
|