CN113256645B - Color image segmentation method based on improved density clustering - Google Patents


Info

Publication number
CN113256645B
CN113256645B (application CN202110388649.5A)
Authority
CN
China
Prior art keywords
dimensional
matrix
image
convolution
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110388649.5A
Other languages
Chinese (zh)
Other versions
CN113256645A (en)
Inventor
张淼
陈爱军
卢男凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN202110388649.5A priority Critical patent/CN113256645B/en
Publication of CN113256645A publication Critical patent/CN113256645A/en
Application granted granted Critical
Publication of CN113256645B publication Critical patent/CN113256645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color image segmentation method based on improved density clustering, which comprises three steps: mapping the image color space into a three-dimensional matrix, density clustering driven by three-dimensional convolution, and mapping the clustering result into segmented images. First, the color information of the original image is converted into a three-dimensional matrix. Then, a three-dimensional convolution operation is applied to this matrix to obtain a core-object matrix, clusters are generated from the core-object matrix, and finally the resulting clusters are mapped into segmented images. The core idea of the method is density clustering optimized by three-dimensional convolution, which achieves linear time complexity O(m) for the density clustering, where m is the number of core objects obtained. The invention provides a general color image segmentation method, particularly suitable for large-size image segmentation, with significant advantages in time and space complexity.

Description

Color image segmentation method based on improved density clustering
Technical Field
The invention relates to a general color image segmentation method in the field of image processing, and in particular to an improved density clustering method, optimized by a three-dimensional convolution operation, in the field of machine learning.
Background
In the field of image processing, image segmentation is one of the key preprocessing steps before image analysis. Through image segmentation, the region of interest in an image can be extracted and the insignificant background eliminated, facilitating further processing and analysis of that region; the quality of the segmentation therefore has a direct influence on the result of image analysis.
Image segmentation methods can be roughly divided into two major categories according to the type of input image: grayscale image segmentation and color image segmentation. Compared with a grayscale image, a color image carries much richer information, so color image segmentation methods extract and divide the regions of an image more effectively, though at the cost of greater algorithmic complexity and implementation difficulty. Most current color image segmentation methods extend grayscale segmentation techniques into various color spaces, such as histogram thresholding and region-based methods; others are realized by combining current front-line technology, typical examples being neural network methods from deep learning, genetic algorithms, and clustering methods from machine learning.
The color image segmentation method here is realized based on the classic density clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise). DBSCAN can find noise points in the data and discover clusters of arbitrary shape without a preset number of categories, advantages that make it very suitable for the field of image processing. However, the naive DBSCAN algorithm has time complexity O(n²) and space complexity O(n), where n is the number of sample points; an implementation using a distance index reduces the time complexity but raises the space complexity to O(n²). Since the number of pixels in an image is usually above the million level, the naive DBSCAN algorithm cannot meet the time requirement. To address this problem, researchers optimized the time complexity of DBSCAN to O(n·log n) based on R-trees, which still fails to meet the requirement in image segmentation. Among domestic research, Wang Peng et al. applied a PB-DBSCAN (Pixel-Based DBSCAN) method to GPS data denoising, achieving linear time complexity O(n) for DBSCAN, but that method is not applicable to image segmentation and cannot achieve linear complexity there.
Therefore, aiming at the characteristics of the image color space, the invention proposes a color image segmentation method based on improved density clustering, which realizes linear complexity O(m) for the density clustering method, where m is the number of core objects obtained; this is superior to O(n), n being the number of sample points.
Disclosure of Invention
The invention provides a density clustering method based on three-dimensional convolution optimization aiming at the characteristics of an image color space, and the density clustering method is applied to the field of image processing, so that the rapid segmentation of color images is realized.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the method is implemented according to the following steps:
step 1: reading an original color image, and mapping color information of the image into a three-dimensional matrix H;
step 2: constructing a three-dimensional convolution kernel K according to the color characteristics of an input image;
step 3: performing three-dimensional convolution operation on the obtained three-dimensional matrix H and the constructed three-dimensional convolution kernel K to obtain a convolution result C;
step 4: the convolution result C is subjected to linear rectification unit operation to obtain a core object matrix O, and the core object matrix O is traversed to construct a core object dictionary D;
step 5: sequentially selecting a core object from the core object dictionary D as a seed, iteratively generating cluster clusters by taking the seed as a starting point, and repeating the process until all the core objects in the dictionary D are accessed, thereby completing the generation of all the cluster clusters;
step 6: and mapping the generated cluster into a segmented image according to the position information of the pixel points in the original image.
The method for mapping the image color space into the three-dimensional matrix in step 1 is as follows: the three channels of the color image correspond one-to-one to the three dimensions of the three-dimensional matrix, so that the color value of a pixel in the image corresponds to a position in the three-dimensional matrix, the value at that position representing the number of times the color occurs in the image. Each pixel of the image is traversed according to this rule, adding 1 to the matrix position corresponding to the pixel's color value, finally yielding the three-dimensional color matrix H of size m₁×m₂×m₃ mapped from the two-dimensional color image, where m₁, m₂, m₃ represent the size of the image color space.
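As a concrete illustration, the mapping of step 1 can be sketched in Python, assuming 8-bit RGB channels (a 256×256×256 color space); the function name and the NumPy usage are illustrative, not part of the patent:

```python
import numpy as np

def color_histogram_3d(image: np.ndarray) -> np.ndarray:
    """Return H where H[r, g, b] counts how often color (r, g, b) occurs.

    `image` has shape (rows, cols, 3) with 8-bit channels.
    """
    h = np.zeros((256, 256, 256), dtype=np.int64)
    pixels = image.reshape(-1, 3)
    # np.add.at accumulates repeated indices correctly, unlike
    # plain fancy-index assignment, which would count each color once.
    np.add.at(h, (pixels[:, 0], pixels[:, 1], pixels[:, 2]), 1)
    return h
```

The sum of all entries of H equals the number of pixels in the image, which is a quick sanity check on the mapping.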
The construction method of the three-dimensional convolution kernel in step 2 is as follows: the three-dimensional convolution kernel K is defined as an r₁×r₂×r₃ three-dimensional matrix in which every element has the value 1; the kernel defines the neighborhood range within which sample points belong to the same class. In use, an optimal clustering effect can be achieved by designing an appropriate kernel shape according to the characteristics of the image color space.
The fast convolution method for the three-dimensional matrix in step 3 is as follows: the three-dimensional convolution is decomposed into one one-dimensional convolution and a number of two-dimensional convolutions, each two-dimensional convolution is further decomposed into one-dimensional convolutions along two directions, and all one-dimensional convolutions are implemented based on a dynamic programming idea. Compared with the naive convolution method, the time complexity of the fast convolution method is reduced to O(m₁×m₂×m₃), finally yielding the convolution result C.
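Because the kernel is all ones, the decomposition described above amounts to separable box filtering: each 1-D pass costs O(1) per position by adding the element entering the window and subtracting the one leaving it. A minimal Python sketch, with zero padding at the borders and odd kernel sizes (2r+1) assumed for illustration:

```python
import numpy as np

def sliding_sum_1d(a: np.ndarray, radius: int) -> np.ndarray:
    """Sum of a[i-radius .. i+radius] for every i (zeros outside the array)."""
    out = np.empty_like(a)
    total = a[: radius + 1].sum()          # window centered at position 0
    n = len(a)
    for i in range(n):
        out[i] = total
        entering = a[i + radius + 1] if i + radius + 1 < n else 0
        leaving = a[i - radius] if i - radius >= 0 else 0
        total += entering - leaving        # slide the window one step right
    return out

def box_convolve_3d(h: np.ndarray, rx: int, ry: int, rz: int) -> np.ndarray:
    """Convolve h with an all-ones kernel of shape (2rx+1, 2ry+1, 2rz+1)."""
    c = np.apply_along_axis(sliding_sum_1d, 0, h, rx)
    c = np.apply_along_axis(sliding_sum_1d, 1, c, ry)
    c = np.apply_along_axis(sliding_sum_1d, 2, c, rz)
    return c
```

Each position of the result then holds the number of samples in the neighborhood, regardless of the kernel radii, at a total cost proportional to the number of matrix cells.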
The method for acquiring the core objects in step 4 is as follows: the convolution result C is passed through a linear rectification unit to obtain the core-object matrix O. This three-dimensional matrix is traversed in order; wherever a matrix element's value is not -1, its position coordinates identify a core object. The core-object dictionary D is generated when the traversal finishes, and the core objects in D are ordered.
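A sketch of this screening step in Python. The -1 marker for non-core positions follows the claim's description; keeping the original count for core positions, the MinPts value, and the use of a Python dict (which preserves insertion order) are illustrative assumptions:

```python
import numpy as np

def core_objects(c: np.ndarray, min_pts: int):
    """Rectify convolution result C: keep values >= min_pts, mark others -1."""
    o = np.where(c >= min_pts, c, -1)      # core-object matrix O
    # np.argwhere enumerates surviving coordinates in row-major order, so the
    # resulting dict is the ordered dictionary D; False means "not yet visited".
    d = {tuple(int(v) for v in idx): False for idx in np.argwhere(o != -1)}
    return o, d
```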
The cluster generation method in step 5 is as follows: create a core-object queue Q and add the first unvisited element of the core-object dictionary D to it. Dequeue the head element of the queue, search the core-object matrix O for other core objects within its neighborhood and add them to Q, mark that core object as visited, and add all sample points within the neighborhood to the cluster; iterate this process until Q is empty, which realizes the generation of one cluster. Repeat the process until all core objects in D have been visited, completing the clustering.
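The queue-driven expansion above can be sketched as a breadth-first search. This is a simplified version under stated assumptions: it labels every matrix position in each visited neighborhood (rather than only occupied sample points), and the radii, names, and the -1 convention for noise are illustrative:

```python
from collections import deque
import numpy as np

def generate_clusters(o: np.ndarray, d: dict, rx: int, ry: int, rz: int) -> np.ndarray:
    """Grow clusters from seed core objects; o is the core-object matrix
    (-1 = not core), d the ordered dict of core objects (False = unvisited)."""
    omega = np.full(o.shape, -1)           # cluster-label matrix; -1 = noise
    label = 0
    for seed in d:
        if d[seed]:                        # seed already absorbed by a cluster
            continue
        q = deque([seed])
        d[seed] = True
        while q:
            ci, cj, ck = q.popleft()
            # label the neighborhood and enqueue any unvisited core objects in it
            for i in range(max(ci - rx, 0), min(ci + rx + 1, o.shape[0])):
                for j in range(max(cj - ry, 0), min(cj + ry + 1, o.shape[1])):
                    for k in range(max(ck - rz, 0), min(ck + rz + 1, o.shape[2])):
                        omega[i, j, k] = label
                        if o[i, j, k] != -1 and not d.get((i, j, k), True):
                            d[(i, j, k)] = True
                            q.append((i, j, k))
        label += 1
    return omega
```

Each core object enters the queue exactly once, so the work is proportional to the number of core objects times the kernel volume, matching the complexity analysis in the text.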
The method for mapping the clustering result of step 6 into segmented images is as follows: each cluster corresponds to one image segmentation result. To convert a cluster into a segmented image, create a blank grayscale image, take any cluster, and traverse all pixels of the original image: if a pixel's color value lies in the cluster, set the gray value at the corresponding position of the blank image to 0, otherwise to 255. The binary segmented image corresponding to that cluster is obtained when the traversal finishes.
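A Python sketch of this mapping, vectorized over pixels; the names are illustrative, `omega` is assumed to be the cluster-label matrix over the color space (with -1 for noise), and the 0/255 convention follows the text:

```python
import numpy as np

def clusters_to_masks(image: np.ndarray, omega: np.ndarray) -> list:
    """Return one binary mask per cluster label: 0 inside the cluster, 255 outside."""
    # Look up each pixel's cluster label from its color value.
    labels = omega[image[..., 0], image[..., 1], image[..., 2]]
    masks = []
    for lab in range(omega.max() + 1):
        masks.append(np.where(labels == lab, 0, 255).astype(np.uint8))
    return masks
```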
The beneficial effects of the invention are as follows: the disclosed color image segmentation method based on improved density clustering maps the color space of the original image into a three-dimensional matrix and computes the core objects in this matrix by three-dimensional convolution. The convolution process is optimized with a dynamic programming idea, greatly reducing its time complexity: for a color space of size (m₁, m₂, m₃), the time required to compute the core objects is O(m₁·m₂·m₃). The core objects are then clustered; this part's time complexity depends on the convolution kernel size (r₁, r₂, r₃) and the number m of core objects and is O(m·r₁·r₂·r₃). The final improved density clustering algorithm therefore has time complexity O(m₁·m₂·m₃ + m·r₁·r₂·r₃). Once the color space and convolution kernel size of an image are determined, m₁·m₂·m₃ and r₁·r₂·r₃ are constants, so the time complexity of the method is O(m). Finally, the original image is segmented according to the clustering result. In summary, the method is suitable for segmenting general color images and offers a great time advantage especially in the segmentation of large-size color images.
Drawings
Fig. 1 is a general flow chart of color image segmentation of the present invention.
Fig. 2 is a color raw image to be segmented of the present invention.
FIG. 3 is a core object acquisition flow chart of the present invention.
Fig. 4 is a schematic representation of a three-dimensional convolution kernel of the present invention.
FIG. 5 is a schematic representation of a three-dimensional matrix hierarchical convolution of the present invention.
Fig. 6 is a schematic diagram of a one-dimensional convolution operation of the present invention.
Fig. 7 is an exploded view of the two-dimensional convolution operation steps of the present invention.
Fig. 8 is a graph of the linear rectification function of the present invention.
FIG. 9 is a flow chart of cluster generation of the present invention.
Fig. 10 is an original image segmentation result of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is further described in detail below with reference to the accompanying drawings and an embodiment that segments an RGB image. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Please refer to figs. 1 to 10: fig. 1 schematically shows the overall flow of the disclosed color image segmentation method based on improved density clustering; fig. 2 shows the color original image to be segmented; fig. 3 shows the core object acquisition flow; fig. 4 shows the three-dimensional convolution kernel; fig. 5 illustrates the hierarchical convolution of the three-dimensional matrix; fig. 6 illustrates the one-dimensional convolution operation; fig. 7 shows the decomposition of the two-dimensional convolution into steps; fig. 8 shows the curve of the linear rectification function; fig. 9 shows the cluster generation flow; fig. 10 shows the segmentation result of the original image.
According to the flow shown in fig. 1, an input color image is segmented according to the following steps:
(1) To illustrate the process of the color image segmentation method according to the present invention, and without loss of generality, the capsule color image shown in fig. 2 is selected as an example; its size is 1741×1057, denoted I_o. First, a three-dimensional matrix H of size 256×256×256 is built corresponding to the RGB color space. Each pixel of the image is traversed; if the color value of the i-th pixel is (r_i, g_i, b_i), the matrix operation of formula (1) is applied:

H(r_i, g_i, b_i) = H(r_i, g_i, b_i) + 1    (1)

where H(r_i, g_i, b_i) denotes the value of matrix H at position (r_i, g_i, b_i), with initial value 0.
(2) Density clustering on the three-dimensional matrix H can be divided into two main steps: obtaining the core objects and generating the clusters. The core objects are obtained through a three-dimensional convolution operation, whose basic idea is shown in fig. 3.
For the embodiment, a three-dimensional convolution kernel K of size 7×7×6, as shown in fig. 4, is first constructed. The definition of the kernel corresponds to the ε-neighborhood concept in the DBSCAN algorithm, its size corresponding to the ε parameter; the difference is that the clustering effect can be optimized by designing an appropriate kernel shape: reducing the kernel size along one dimension produces a finer subdivision along that dimension during clustering.
(3) The hierarchical convolution of the three-dimensional matrix is shown in fig. 5. The matrix H is split along the Z direction into a number of two-dimensional matrices, the z-th layer being denoted H^z. A two-dimensional convolution is applied to each layer, and all layer results are merged into a three-dimensional matrix T of size 256×256×256; a one-dimensional convolution is then applied to T along the Z direction, giving the three-dimensional convolution result C of size 256×256×256, as expressed in formulas (3) and (4):

T^z = H^z * K_{7×7}    (3)
C(x, y, :) = T(x, y, :) * K_6    (4)

where T^z denotes the two-dimensional convolution result of the z-th layer H^z, T(x, y, :) denotes the row of values along the Z direction at position (x, y) of T, K_{7×7} denotes the two-dimensional XY-direction slice of the kernel K, and K_6 denotes the one-dimensional Z-direction kernel of K.
Similarly, each two-dimensional convolution is decomposed into one-dimensional convolutions, implemented with a dynamic programming idea: during a one-dimensional convolution the window of position i overlaps that of position i-1, as shown in fig. 6, so the convolution result at position i can be computed from the result at position i-1. The two-dimensional decomposition is shown in fig. 7. First, a one-dimensional convolution is applied to the two-dimensional matrix H^z along the y direction, with the state transition equation of formula (5):

A(x, y) = A(x, y-1) + H^z(x, y+R_y) - H^z(x, y-R_y-1)    (5)

where matrix A holds the one-dimensional y-direction convolution values of H^z, R_y denotes the radius of the kernel K in the y direction, and out-of-range terms are taken as 0.
Similarly, a one-dimensional convolution is applied to the y-direction result A along the x direction; the state transition equation is analogous to formula (5), as shown in formula (6):

T(x, y) = T(x-1, y) + A(x+R_x, y) - A(x-R_x-1, y)    (6)

where matrix T holds the two-dimensional convolution result of H^z and R_x denotes the radius of the kernel K in the x direction.
Performing the three-dimensional convolution of matrix H with kernel K yields the result matrix C of size 256×256×256, in which the value of each element is the number of pixels within the neighborhood of the corresponding position.
(4) The convolution result C is passed through the linear rectification unit ReLU to compute the core-object matrix O of size 256×256×256. The linear rectification function is defined by formula (7), its curve shown in fig. 8:

f(x) = x, if x ≥ MinPts; -1, otherwise    (7)

The threshold parameter MinPts must be preset and is used to screen the core objects: elements of C with value greater than or equal to MinPts are core objects. In the embodiment, MinPts = 2500.
The core-object matrix O is traversed in order, and every core object (matrix element value not -1) is added to a dictionary whose key is the matrix coordinate (i, j, k); the corresponding value is assigned 0, indicating that the core object has not been visited. The dictionary D is obtained when the traversal finishes. Since the matrix is traversed in order, D is ordered: the value D(i, j, k) associated with key (i, j, k) reflects, by its magnitude, the order of occurrence in D, as shown in formula (8). This property helps optimize the time complexity of cluster generation.
D(i,j,k)>D(i+r,j,k)>D(i,j+r,k)>D(i,j,k+r),r∈N + (8)
(5) The cluster generation flow is shown in fig. 9. The cluster label is initialized to tag = 0. One unvisited core object d of dictionary D is taken and added to the queue Q, which is used to traverse all core objects belonging to the same class. The head element q of Q is dequeued; its actual value is a three-dimensional coordinate (q_i, q_j, q_k), and the ε-neighborhood of q in the core-object matrix O is denoted N_ε(q). All element positions of O within N_ε(q) are assigned the label tag, the unvisited core objects among them are added to Q, and the process is repeated until Q is empty, finishing the generation of one cluster; tag is then incremented by 1 and the next unvisited core object in D is sought. This continues until all core objects in D have been visited and all clusters are generated. The matrix obtained from the core-object matrix O by this computation is the clustering result, denoted ω, of size 256×256×256; elements never visited keep the default value -1 and are noise points.
Thanks to the order of the core objects in dictionary D, if the current core object is (q_i, q_j, q_k), all core objects within the range N_v = {(x, y, z) | x ≤ q_i, y ≤ q_j, z ≤ q_k} must already have been visited, so they can be excluded from the neighborhood search, shrinking the search range by about 1/8 on average. The set S actually searched within the neighborhood is therefore given by formula (9):

S(p) = N_ε(p) - N_v(p)    (9)
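A small sketch of the reduced search range of formula (9), written as a generator over neighborhood positions. The exclusion here is the closed lower octant of the window, which approaches 1/8 of the volume as the radii grow; the function name and radii are illustrative:

```python
def reduced_search_range(p, rx, ry, rz):
    """Yield positions of N_eps(p) minus the already-visited octant N_v(p)."""
    pi, pj, pk = p
    for i in range(pi - rx, pi + rx + 1):
        for j in range(pj - ry, pj + ry + 1):
            for k in range(pk - rz, pk + rz + 1):
                if i <= pi and j <= pj and k <= pk:
                    continue               # inside N_v(p): already visited
                yield (i, j, k)
```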
(6) To map the clustering result into segmented images, an array of blank images Mat is created according to the number of clusters, Mat_i being the segmented image corresponding to the i-th cluster. Each pixel p of the original image I_o, located at (r, c) in the image with color value (p₁, p₂, p₃), is processed by formula (10):

Mat_label(r, c) = 0, if ω(p₁, p₂, p₃) = label; 255, otherwise    (10)

where ω denotes the clustering result matrix and label denotes a cluster label.
Finally, the image segmentation results for the example are shown in fig. 10.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (4)

1. A color image segmentation method based on improved density clustering is characterized by comprising the following steps of: the method is implemented according to the following steps:
step 1: reading an original color image, and mapping color information of the image into a three-dimensional matrix H;
step 2: constructing a three-dimensional convolution kernel K according to the color characteristics of an input image;
step 3: performing three-dimensional convolution operation on the obtained three-dimensional matrix H and the constructed three-dimensional convolution kernel K to obtain a convolution result C;
step 4: the convolution result C is subjected to linear rectification unit operation to obtain a core object matrix O, and the core object matrix O is traversed to construct a core object dictionary D;
step 5: sequentially selecting a core object from the core object dictionary D as a seed, iteratively generating cluster clusters by taking the seed as a starting point, and repeating the process until all the core objects in the dictionary D are accessed, thereby completing the generation of all the cluster clusters;
step 6: mapping the generated cluster into a segmented image according to the position information of the pixel points in the original image;
in step 1, the three channels of the color image correspond one-to-one to the three dimensions of the three-dimensional matrix, so that the color value of a pixel in the image corresponds to a position in the three-dimensional matrix, the value at that position representing the number of times the color appears in the image; each pixel of the image is traversed according to this rule, adding 1 to the matrix position corresponding to the pixel's color value, finally yielding the three-dimensional color matrix H of size m₁×m₂×m₃ mapped from the two-dimensional color image, where m₁, m₂, m₃ represent the size of the image color space;
in step 2, the three-dimensional convolution kernel K is defined as an r₁×r₂×r₃ three-dimensional matrix in which every element has the value 1, the kernel defining the neighborhood range of sample points of the same class;
in step 4, the convolution result C is passed through the linear rectification unit to obtain the core-object matrix O; the three-dimensional matrix is traversed in order, and if a matrix element's value is not -1, its position coordinates identify a core object; the core-object dictionary D is generated when the traversal finishes, and the core objects in D are ordered.
2. The color image segmentation method based on improved density clustering as set forth in claim 1, wherein: in step 3, the three-dimensional convolution is decomposed into a one-dimensional convolution and a number of two-dimensional convolutions, each two-dimensional convolution being decomposed into one-dimensional convolutions along two directions, and all one-dimensional convolutions are implemented based on a dynamic programming idea; compared with the naive convolution method, the time complexity of the fast convolution method is reduced to O(m₁×m₂×m₃), yielding the convolution result C.
3. The color image segmentation method based on improved density clustering as set forth in claim 1, wherein: in step 5, a core-object queue Q is created, the first unvisited element of the core-object dictionary D is added to Q, and the head element of the queue is dequeued; other core objects within its neighborhood in the core-object matrix O are found and added to Q, the core object is marked visited, and all sample points within the neighborhood are added to the cluster; this is iterated until Q is empty, realizing the generation of one cluster; the process repeats until all core objects in D have been visited, completing the clustering.
4. The color image segmentation method based on improved density clustering as set forth in claim 1, wherein: in step 6, each cluster corresponds to one image segmentation result; to convert a cluster into a segmented image, a blank grayscale image is created, any cluster is taken, and all pixels of the original image are traversed: if a pixel's color value lies in the cluster, the gray value at the corresponding position of the blank image is set to 0, otherwise to 255; the binary segmented image corresponding to the cluster is obtained when the traversal finishes.
CN202110388649.5A 2021-04-12 2021-04-12 Color image segmentation method based on improved density clustering Active CN113256645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110388649.5A CN113256645B (en) 2021-04-12 2021-04-12 Color image segmentation method based on improved density clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110388649.5A CN113256645B (en) 2021-04-12 2021-04-12 Color image segmentation method based on improved density clustering

Publications (2)

Publication Number Publication Date
CN113256645A CN113256645A (en) 2021-08-13
CN113256645B true CN113256645B (en) 2023-07-28

Family

ID=77220736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110388649.5A Active CN113256645B (en) 2021-04-12 2021-04-12 Color image segmentation method based on improved density clustering

Country Status (1)

Country Link
CN (1) CN113256645B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622753A (en) * 2012-02-28 2012-08-01 西安电子科技大学 Semi-supervised spectral clustering synthetic aperture radar (SAR) image segmentation method based on density reachable measure
CN105184772A (en) * 2015-08-12 2015-12-23 陕西师范大学 Adaptive color image segmentation method based on super pixels
CN106447676A (en) * 2016-10-12 2017-02-22 浙江工业大学 Image segmentation method based on rapid density clustering algorithm
CN106503743A (en) * 2016-10-31 2017-03-15 天津大学 A kind of quantity is more and the point self-adapted clustering method of the high image local feature of dimension
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110796667A (en) * 2019-10-22 2020-02-14 辽宁工程技术大学 Color image segmentation method based on improved wavelet clustering
CN110910390A (en) * 2019-11-11 2020-03-24 大连理工大学 Panoramic three-dimensional color point cloud semantic segmentation method based on depth distortion convolution

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Detailed Explanation of Three-Dimensional Convolution; Python图像识别; <https://blog.csdn.net/qq_28949847/article/details/107046266> *

Also Published As

Publication number Publication date
CN113256645A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN108428220B (en) Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence
CN110866896A (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN112052783A (en) High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention
CN109493417A (en) Three-dimension object method for reconstructing, device, equipment and storage medium
CN114694038A (en) High-resolution remote sensing image classification method and system based on deep learning
Cai et al. Improving sampling-based image matting with cooperative coevolution differential evolution algorithm
CN114723583A (en) Unstructured electric power big data analysis method based on deep learning
Jiang et al. Local and global structure for urban ALS point cloud semantic segmentation with ground-aware attention
CN112508066A (en) Hyperspectral image classification method based on residual error full convolution segmentation network
CN116152544A (en) Hyperspectral image classification method based on residual enhancement spatial spectrum fusion hypergraph neural network
Kavitha et al. Convolutional Neural Networks Based Video Reconstruction and Computation in Digital Twins.
Kumar et al. A hybrid cluster technique for improving the efficiency of colour image segmentation
CN111860668B (en) Point cloud identification method for depth convolution network of original 3D point cloud processing
CN117115563A (en) Remote sensing land coverage classification method and system based on regional semantic perception
CN113256645B (en) Color image segmentation method based on improved density clustering
CN116188428A (en) Bridging multi-source domain self-adaptive cross-domain histopathological image recognition method
CN116109656A (en) Interactive image segmentation method based on unsupervised learning
CN116958624A (en) Method, device, equipment, medium and program product for identifying appointed material
CN115937540A (en) Image Matching Method Based on Transformer Encoder
CN113344947B (en) Super-pixel aggregation segmentation method
Zhang et al. Transcending the limit of local window: Advanced super-resolution transformer with adaptive token dictionary
Rathore Big data cluster analysis and its applications.
Liu et al. Automatic algorithm for fractal plant art image similarity feature generation
CN111798473A (en) Image collaborative segmentation method based on weak supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant