CN110211126B - Image segmentation method based on intuitive fuzzy C-means clustering - Google Patents

Image segmentation method based on intuitive fuzzy C-means clustering

Info

Publication number
CN110211126B
Authority
CN
China
Prior art keywords
image
value
pixel
membership
gray
Prior art date
Legal status
Active
Application number
CN201910507275.7A
Other languages
Chinese (zh)
Other versions
CN110211126A (en)
Inventor
赵凤
郝浩
刘汉强
范九伦
Current Assignee
Xi'an University of Posts and Telecommunications
Original Assignee
Xi'an University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xi'an University of Posts and Telecommunications
Priority to CN201910507275.7A
Publication of CN110211126A
Application granted
Publication of CN110211126B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method based on intuitive fuzzy C-means clustering. The method mainly addresses the problems that image segmentation is sensitive to noise, easily falls into local optima, and requires the number of clusters to be set in advance. The scheme is as follows: construct an intuitive fuzzy set robust to noise using a switch-mean based strategy; perform curve fitting on the gray value distribution of the image, screen all peak points of the fitted curve as the cluster center initial value range set, and count the maximum number of cluster centers; on the basis of the intuitive fuzzy set, use the position information and gray information of the pixels to construct a linear weighting coefficient in the intuitive fuzzy objective function and obtain the membership matrix U; evaluate U with the graded distance index to obtain the optimal membership matrix, and screen misclassified pixels with an error-detection strategy so that they are classified correctly. The invention enhances robustness to noise, adaptively determines the number of image clusters, and can be used for image recognition and computer vision preprocessing.

Description

Image segmentation method based on intuitive fuzzy C-means clustering
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to an image segmentation method which can be used for image recognition and computer vision preprocessing.
Background
Since the 1970s, image segmentation has drawn continuous attention from researchers and has become a basic technique in many fields: whenever the content of an image must be extracted, image segmentation is indispensable, and its quality directly or indirectly affects the subsequent image-processing pipeline. Existing image segmentation methods are mainly threshold-based and clustering-based. Threshold-based segmentation relies on statistical information of the image and usually extracts suitable thresholds to separate the target of interest; however, it is only suitable for images with obvious gray-level differences, and noise and other interference also make accurate segmentation difficult. To date, no unified theoretical framework covers all application fields, and image segmentation remains an active research area.
Clustering-based image segmentation methods divide an image into several categories according to the similarity between pixels; they offer good segmentation quality, short running time, and stable behavior, and have become one of the most commonly used approaches in recent years. Common clustering segmentation methods include K-means, fuzzy C-means clustering, hierarchical clustering, and spectral clustering, among which fuzzy C-means clustering is one of the most widely used. The fuzzy C-means method was first proposed by Dunn, see: J.C. Dunn, "A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters," Journal of Cybernetics, 1973, 3(3):32-57. Because it ignores all spatial information in the image, it is sensitive to noise and cannot achieve an ideal segmentation of noisy images. Many scholars later published improved methods that incorporate spatial neighborhood information. Ahmed et al. added a spatial neighborhood term to the fuzzy C-means objective function and proposed the FCM_S algorithm, which improves robustness against noise but has high computational complexity. To reduce that complexity, Chen and Zhang introduced neighborhood information into the objective function through mean filtering and median filtering and proposed the FCM_S1 and FCM_S2 algorithms, see: S. Chen and D. Zhang, IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 34, pp. 1907-1916, 2004. FCM_S1 handles Gaussian noise well but is weak against salt-and-pepper noise, while FCM_S2 handles salt-and-pepper noise well but is weak against Gaussian noise, so neither is robust to all noise types. Cai Weiling et al. constructed a linearly weighted sum image by combining the spatial and gray information of the image and proposed the fast generalized FCM (FGFCM) algorithm, see: W. Cai, S. Chen, and D. Zhang, "Fast and Robust Fuzzy C-Means Clustering Algorithms Incorporating Local Information for Image Segmentation," Pattern Recognition, vol. 40, no. 3, pp. 825-838, Mar. 2007, which is robust to both Gaussian and salt-and-pepper noise. However, these algorithms do not account for the additional fuzziness of the data. Chaira found that intuitionistic fuzzy set theory can model this additional fuzziness and classify data more accurately, and proposed a fuzzy clustering method based on intuitionistic fuzzy data, see: T. Chaira, "A Novel Intuitionistic Fuzzy C Means Clustering Algorithm and Its Application to Medical Images," Applied Soft Computing, 11(2):1711-1717, 2011. Because that method is still sensitive to noise, Verma et al. further introduced local spatial information into the intuitionistic fuzzy C-means algorithm, see: H. Verma, R.K. Agrawal, and A. Sharan, "An Improved Intuitionistic Fuzzy C-Means Clustering Algorithm Incorporating Local Information for Brain Image Segmentation," Applied Soft Computing, pp. 543-557, 2016.
Although these improved methods optimize the noise resistance of fuzzy clustering to some extent, they still suffer from insufficient robustness to noise, strong sensitivity to the initial cluster centers, and the inability to determine the number of image clusters adaptively.
Disclosure of Invention
The invention aims to provide an image segmentation method based on intuitive fuzzy C-means clustering that overcomes the above defects of the prior art, so as to enhance robustness to noise, improve segmentation accuracy, reduce sensitivity to the initial cluster centers, and determine the number of image clusters adaptively.
In order to achieve the above object, the technique of the present invention comprises the steps of:
(1) inputting an image to be segmented, and converting the image to be segmented into a gray image if the image to be segmented is a color image;
(2) initializing parameters: setting the maximum iteration number T = 100, the stop threshold ε = 10^-5, the fuzzy weighting index m = 2, the neighborhood window radius ω = 3, the initial iteration number t = 1, and the default initial cluster number Y = 2;
(3) constructing an intuitive fuzzy set robust to noise using a switch-mean based strategy, where 1 ≤ i ≤ n and n represents the total number of pixels;
(4) performing curve fitting with the least squares method on the gray value distribution of the input image, selecting the gray levels corresponding to the peak points of the fitted curve as initial cluster center values, setting the range of candidate cluster numbers as {2, …, Y_max}, and forming the cluster center initial value range set C = [c_1, c_2, …, c_{Y_max}]^T, where c_Y denotes the initial value of the Y-th cluster center, 1 ≤ Y ≤ Y_max, Y_max denotes the maximum cluster number, 2 ≤ Y_max ≤ n, and T denotes transposition;
(5) iteratively computing the linear weighting coefficient H_ir, the intuitive fuzzy distance, and the intuitive fuzzy objective function J:
(5a) constructing the linear weighting coefficient H_ir from the position information and gray information of the pixels, where (a_i, b_i) and (a_r, b_r) denote the spatial coordinates of the central pixel i and the neighborhood pixel r, x_i and x_r denote the gray values of the central pixel i and the neighborhood pixel r, u_ki denotes the membership of x_i to the k-th cluster center c_k, 1 ≤ k ≤ Y, and 1 ≤ r ≤ ω²;
(5b) calculating the intuitive fuzzy distance from pixel x_i to the cluster center c_k under their respective intuitive fuzzy sets;
(5c) combining the linear weighting coefficient H_ir obtained in (5a) with the intuitive fuzzy distance obtained in (5b), calculating the intuitive fuzzy objective function J, and solving it for the membership function u_ki and the intuitive fuzzy set of the cluster center c_k;
(6) judging whether the iteration stop condition ||C^(t+1) − C^(t)|| < ε or t > T holds:
if not, let t = t + 1 and return to step (5);
if yes, output the membership matrix U and judge whether Y < Y_max holds: if yes, let Y = Y + 1, take the first Y values of the cluster center initial value range set C as the cluster center initial value set C^(1), set the initial iteration number t = 1, and return to (5); if Y < Y_max does not hold, perform (7); here C^(t) denotes the set of cluster center values at the t-th iteration, its Q-th entry is the intuitive fuzzy set of the Q-th cluster center at that iteration, the P-th element of C^(1) denotes the initial value of the P-th cluster center, 1 ≤ Q ≤ Y, 1 ≤ P ≤ Y, and U = {u_ki};
(7) taking the (Y_max − 1) membership matrices U output by (6) and evaluating them with the graded distance index GD; setting the membership matrix with the maximum GD value as U_max, and taking the cluster number Y_fin corresponding to U_max as the final cluster number of the input image, where 1 ≤ Y_fin ≤ Y_max;
(8) performing pixel classification on the membership matrix U_max output by (7), performing error-detection classification, and outputting the segmented image:
(8a) classifying the pixels of the membership matrix U_max output by (7) according to the maximum membership principle and the final cluster number Y_fin, and outputting a label matrix U_L, where U_L contains only the class values {1, 2, …, Y_W, …, Y_fin}, 1 ≤ Y_W ≤ Y_fin;
(8b) letting the label class value of the central pixel i be L_i, and judging, within the 5 by 5 neighborhood of the central pixel i, whether the number n_L of pixels with label class value L_i satisfies the condition n_L ≤ 25/(Y_fin + 1):
if yes, the central pixel i is judged to be a misclassified pixel, its label class value L_i is corrected to the label class value that occurs most often in the neighborhood, and (8c) is executed;
if not, the central pixel i is judged to be a correctly assigned pixel, and (8c) is executed;
(8c) judging whether all pixels in the label matrix U_L have been traversed: if yes, outputting the final label matrix U_F and executing (8d); if not, returning to (8b) to continue the traversal;
(8d) assigning, in the final label matrix U_F, a distinct corresponding gray value x_W to the pixels whose label class value is Y_W, and outputting the resulting gray-valued matrix as the segmented image.
Compared with the prior art, the invention has the following beneficial technical effects:
Firstly, an intuitive fuzzy set robust to noise is constructed using a switch-mean based strategy, and a linear weighting coefficient built from the position and gray information of the pixels is introduced into the intuitive fuzzy objective function, which alleviates the sensitivity of clustering algorithms to noise.
Secondly, the invention performs least squares curve fitting on the gray value distribution of the image, selects the gray levels corresponding to all peak points of the fitted curve as the initial cluster center range, evaluates the membership matrices with the graded distance index, and selects the cluster number suited to the input image; this adaptively determines the number of image clusters and overcomes the tendency of traditional clustering algorithms to fall into local optima.
Thirdly, the invention screens misclassified pixels with an error-detection strategy and reassigns them correctly, which improves segmentation accuracy and yields a more satisfactory segmentation result.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a comparison graph of the results of a simulation segmentation of a #3096 image in a Berkeley image database using the present invention and a prior art method;
FIG. 3 is a comparison of results of simulated segmentation of #238011 images in a Berkeley image database using the present invention and a prior art method.
Detailed Description
The embodiments and effects of the present invention are further described below with reference to the accompanying drawings:
referring to fig. 1, the implementation steps of the present invention are as follows:
step 1: an image to be segmented is input.
Input an image to be segmented; if it is a color image, convert it into a gray image.
Step 2: setting initial parameter values.
Setting the maximum iteration number T = 100, the stop threshold ε = 10^-5, the fuzzy weighting index m = 2, the neighborhood window radius ω = 3, the initial iteration number t = 1, and the default initial cluster number Y = 2.
Step 3: constructing an intuitive fuzzy set robust to noise.
Prior art methods for constructing the intuitive fuzzy set include the IFCM and IIFCM algorithms, both of which use the Yager operator.
This example uses a switch-mean strategy to correct the pixel gray values of the input image and then constructs a noise-robust intuitive fuzzy set with the Yager operator. The switch-mean strategy corrects a gray value according to the distribution of gray values in the neighborhood of pixel i (a code sketch of this step is given at the end of step 3), and is implemented as follows:
3.1) first, judge whether the pixel gray values in the 3 by 3 neighborhood of the central pixel i are not all 0 or 255:
if yes, i.e. the neighborhood contains gray values other than 0 and 255, discard the pixels whose gray values are 0 or 255, take the mean gray value of the remaining pixels in the neighborhood, set it as the corrected value of the central pixel, and then execute 3.2);
if not, leave the gray value of the central pixel unchanged and execute 3.2) directly;
3.2) judge whether all pixels in the input image have been traversed: if yes, output the image with corrected gray values and execute 3.3); if not, return to 3.1) and continue the traversal;
3.3) converting the pixel gray values of the corrected image output in 3.2) into an intuitive fuzzy set:
3.3.1) let μ(x_i) denote the membership degree of the pixel gray value x_i, and let v(x_i) and π(x_i) denote the non-membership degree and hesitation degree of x_i constructed with the Yager operator, with the hesitation degree given by
π(x_i) = 1 − μ(x_i) − v(x_i),
where 0 ≤ μ(x_i) ≤ 1, 0 ≤ v(x_i) ≤ 1, 0 ≤ μ(x_i) + v(x_i) ≤ 1, and α = 0.8 is the Yager operator coefficient;
3.3.2) from the μ(x_i), v(x_i) and π(x_i) obtained in 3.3.1), construct the intuitive fuzzy set {(x_i, μ(x_i), v(x_i), π(x_i)) | 1 ≤ i ≤ n}.
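A minimal Python sketch of the switch-mean correction and the intuitive fuzzy set construction is given below. The helper names are illustrative, and because the patent's exact μ/v expressions appear only as formula images, the Yager-type construction used here (normalized gray value as membership, Yager negation as non-membership) is an assumption rather than the patent's exact form.

```python
import numpy as np

def switch_mean_correction(img):
    """Switch-mean gray value correction (steps 3.1-3.2): if the 3x3
    neighborhood of a pixel contains values other than 0 and 255, replace
    the pixel by the mean of those non-extreme neighborhood values."""
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode='edge')
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            valid = win[(win != 0) & (win != 255)]
            if valid.size > 0:              # neighborhood not all 0/255
                out[i, j] = valid.mean()
    return out

def yager_ifs(img, alpha=0.8):
    """Construct (mu, v, pi) for each corrected gray value. The normalization
    and the Yager negation below are assumptions standing in for the formula
    images in the original; pi = 1 - mu - v follows the text."""
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)  # assumed membership
    v = (1.0 - mu ** alpha) ** (1.0 / alpha)                  # assumed Yager negation
    pi = 1.0 - mu - v                                         # hesitation degree
    return mu, v, pi
```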
Step 4: performing curve fitting with the least squares method on the gray value distribution of the input image, selecting the gray levels corresponding to the peak points of the fitted curve as initial cluster center values, and estimating the maximum cluster number and the cluster center initial value range.
The existing technology for selecting the initial value of the clustering center directly selects the peak point of the image gray level histogram as the initial clustering center.
In this example, the least squares method is used to fit a curve to the image gray value distribution, and peak points are then selected on the smooth fitted curve (a code sketch is given at the end of step 4); this is implemented as follows:
4.1) setting the gray level image to 256 gray levels, wherein the gray level range is [0,255], and drawing a gray level histogram of the gray level image;
4.2) counting the number of pixels at each of the 256 gray levels of the histogram, denoting it as the gray distribution frequency f(l), and fitting f(l) with a seventh-order least squares polynomial so that the gray data points form a smooth curve, where 0 ≤ l ≤ 255;
4.3) computing the first derivative f′(l) of f(l), and comparing f′(l) with the first derivatives f′(l−1) and f′(l+1) at the neighboring gray levels;
4.4) judging whether the gray level l is a boundary level (l = 0 or l = 255):
if l = 0, judge whether f′(l) < 0 holds; if it holds, output the gray level l = 0 as a peak point and then execute 4.5);
if l = 255, judge whether f′(l) > 0 holds; if it holds, output the gray level l = 255 as a peak point and then execute 4.5);
if 0 < l < 255, judge whether f′(l) = 0, f′(l−1) > 0 and f′(l+1) < 0 all hold; if they do, output the gray level l as a peak point and then execute 4.5);
4.5) judge whether all 256 gray levels have been traversed: if yes, output the peak points and execute 4.6); if not, return to 4.4) and continue the traversal;
4.6) sort the peak points output by 4.5) in descending order of their gray distribution frequency values and count the number of peak points as Y_max, so that the candidate cluster number ranges from 2 to Y_max; collect the gray levels l corresponding to the sorted peak points as the gray level set [l_1, l_2, …, l_{Y_max}], take these gray levels as the initial cluster center values, i.e. c_Y = l_Y, and set the cluster center initial value range set C = [c_1, c_2, …, c_{Y_max}]^T, where:
l_Y denotes the gray level of the peak point with the Y-th largest gray distribution frequency value, c_Y denotes the initial value of the Y-th cluster center, 1 ≤ Y ≤ Y_max, Y_max denotes the maximum cluster number, 2 ≤ Y_max ≤ n, and T denotes transposition.
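A minimal Python sketch of step 4 follows (illustrative names). Note that the interior-peak test uses a sign change of f′ around l rather than the exact f′(l) = 0 condition in the text, since a discretely evaluated derivative rarely equals zero exactly.

```python
import numpy as np

def histogram_peaks(gray_img):
    """Fit the gray-level frequency curve with a 7th-order least squares
    polynomial and return the peak gray levels (candidate cluster centers)
    sorted by fitted frequency, together with their count Y_max."""
    levels = np.arange(256)
    freq = np.bincount(gray_img.astype(np.uint8).ravel(), minlength=256)

    coeffs = np.polyfit(levels, freq, deg=7)   # seventh-order least squares fit
    fitted = np.poly1d(coeffs)
    d1 = np.polyder(fitted)                    # first derivative f'(l)

    peaks = []
    for l in levels:
        if l == 0 and d1(0) < 0:
            peaks.append(0)                    # boundary peak at l = 0
        elif l == 255 and d1(255) > 0:
            peaks.append(255)                  # boundary peak at l = 255
        elif 0 < l < 255 and d1(l - 1) > 0 and d1(l + 1) < 0:
            peaks.append(int(l))               # interior peak: f' changes sign
    peaks.sort(key=lambda l: fitted(l), reverse=True)   # largest frequency first
    y_max = len(peaks)                         # maximum cluster number
    centers = np.array(peaks, dtype=np.float64)  # cluster center initial value range set C
    return centers, y_max
```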
Step 5: iteratively computing the linear weighting coefficient H_ir, the intuitive fuzzy distance, and the intuitive fuzzy objective function J.
The existing FCM_S1 and FCM_S2 algorithms introduce the local spatial information of the image into their objective functions, where FCM_S1 exploits the mean-filtered image and FCM_S2 exploits the median-filtered image.
In this embodiment, the linear weighting coefficient H_ir is constructed from the position information and gray information of the pixels, and H_ir, which carries the local spatial information of the image, is then introduced into the intuitive fuzzy objective function J (a hedged code sketch is given at the end of step 5). This is realized as follows:
5.1) construct the linear weighting coefficient H_ir from the position information and gray information of the pixels, where (a_i, b_i) and (a_r, b_r) denote the spatial coordinates of the central pixel i and the neighborhood pixel r, x_i and x_r denote the gray values of the central pixel i and the neighborhood pixel r, u_ki denotes the membership of x_i to the k-th cluster center c_k, 1 ≤ k ≤ Y, and 1 ≤ r ≤ ω²;
5.2) calculate, under the intuitive fuzzy sets, the intuitive fuzzy distance from the pixel gray value x_i to the cluster center c_k:
5.2.1) let μ(c_k) denote the membership degree of the cluster center c_k, and let v(c_k) and π(c_k) denote the non-membership degree and hesitation degree of c_k constructed with the Yager operator, with
π(c_k) = 1 − μ(c_k) − v(c_k),
where 0 ≤ μ(c_k) ≤ 1, 0 ≤ v(c_k) ≤ 1, and 0 ≤ μ(c_k) + v(c_k) ≤ 1;
5.2.2) compute the intuitive fuzzy distance from the result obtained in 5.2.1), where μ(x_i), v(x_i) and π(x_i) denote the membership, non-membership and hesitation degrees of the pixel gray value x_i;
5.3) combine the linear weighting coefficient H_ir obtained in 5.1) with the intuitive fuzzy distance obtained in 5.2) and calculate the intuitive fuzzy objective function J, where u_ki is the membership degree of pixel x_i to the cluster center c_k and m represents the fuzzy weighting index;
5.4) solve the intuitive fuzzy objective function J with the Lagrange multiplier method to obtain the membership function u_ki and the intuitive fuzzy set of the cluster center c_k:
5.4.1) introduce the Lagrange multiplier λ_i into the intuitive fuzzy objective function J and construct the objective function L with the Lagrange multiplier;
5.4.2) compute the partial derivatives of L with respect to the membership function u_ki, the intuitive fuzzy set of the cluster center c_k, and the Lagrange multiplier λ_i;
5.4.3) set these partial derivatives to zero and solve to obtain the update formulas for the membership function u_ki and the intuitive fuzzy set of the cluster center c_k, where k = 1, 2, …, Y, i = 1, 2, …, n, and r = 1, 2, …, ω².
Step 6: judging whether the iteration stop condition is met.
If the iteration stop condition ||C^(t+1) − C^(t)|| < ε holds or the number of iterations t exceeds T, output the membership matrix U and execute step 7, where C^(t) is the set of cluster center values at the t-th iteration and its Q-th entry is the intuitive fuzzy set of the Q-th cluster center at that iteration;
if the iteration stop condition is not satisfied, let t = t + 1 and return to 5.1).
Step 7: judging whether the current cluster number Y satisfies the condition (the outer loop over Y is sketched in code after this step).
If the current cluster number Y is less than the maximum cluster number Y_max, i.e. Y < Y_max, let Y = Y + 1, take the first Y values of the cluster center initial value range set C as the cluster center initial value set C^(1), set the initial iteration number t = 1, and return to 5.1), where the P-th element of C^(1) denotes the initial value of the P-th cluster center and 1 ≤ P ≤ Y;
if Y < Y_max does not hold, execute step 8.
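A minimal sketch of the outer loop of steps 6-7, assuming a function run_clustering that performs the step-5 iteration until convergence (both names are hypothetical):

```python
def adaptive_cluster_search(ifs_image, centers_range, y_max, run_clustering):
    """Run the clustering once for every candidate cluster number
    Y = 2..Y_max, reusing the first Y entries of the initial-center range
    set C, and collect the (Y_max - 1) resulting membership matrices."""
    memberships = []
    for y in range(2, y_max + 1):
        init_centers = centers_range[:y]              # first Y values of C
        U = run_clustering(ifs_image, init_centers)   # iterate until ||C(t+1) - C(t)|| < eps or t > T
        memberships.append(U)
    return memberships
```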
Step 8: evaluating the membership matrices with the graded distance index GD.
Existing evaluation indexes for the membership matrix include the partition coefficient V_pc and the partition entropy V_pe, both of which reflect the fuzziness of the membership matrix: the larger V_pc is, the smaller the fuzziness of the membership matrix; the smaller V_pe is, the more accurate the pixel classification.
In this example the membership matrices are evaluated with the graded distance index GD: the (Y_max − 1) membership matrices U output by step 7 are each scored with GD, the membership matrix with the maximum GD value is output and denoted U_max, and the cluster number Y_fin corresponding to U_max is taken as the final cluster number of the input image, where u_i1 denotes the largest membership value of the i-th pixel, u_i2 denotes the second-largest membership value of the i-th pixel, and 1 ≤ Y_fin ≤ Y_max (a hedged sketch follows).
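The GD formula itself is given only as a formula image in the original; the sketch below shows how u_i1 and u_i2 can be extracted from a membership matrix, with the mean gap between them used purely as a hedged placeholder for the actual index.

```python
import numpy as np

def graded_distance_index(U):
    """U has shape (Y, n): membership of each of n pixels to Y clusters.
    The aggregation below (mean of u_i1 - u_i2) is an assumption used for
    illustration only, since the patent's GD expression is an image."""
    sorted_u = np.sort(U, axis=0)      # ascending along the cluster axis
    u1 = sorted_u[-1, :]               # largest membership per pixel
    u2 = sorted_u[-2, :]               # second-largest membership per pixel
    return float(np.mean(u1 - u2))
```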
Step 9: performing pixel classification on the membership matrix U_max.
According to the maximum membership principle and the final cluster number Y_fin, the pixels of the membership matrix U_max output by step 8 are classified and a label matrix U_L is output, where the maximum membership principle V_k = arg max_k {u_ki} assigns pixel i to the category V_k with the largest membership degree, and U_L contains only the class values {1, 2, …, Y_W, …, Y_fin}, 1 ≤ Y_W ≤ Y_fin.
Step 10: performing error-detection classification on the label matrix U_L.
10.1) let the label class value of the central pixel i in the label matrix U_L output by step 9 be L_i; within the 5 by 5 neighborhood of the central pixel i, judge whether the number n_L of pixels with label class value L_i satisfies the condition n_L ≤ 25/(Y_fin + 1):
if yes, the central pixel i is judged to be a misclassified pixel, its label class value L_i is corrected to the label class value that occurs most often in the neighborhood, and 10.2) is executed;
if not, the central pixel i is judged to be a correctly assigned pixel, and 10.2) is executed;
10.2) judge whether all pixels in the label matrix U_L have been traversed: if yes, output the final label matrix U_F and execute step 11; if not, return to 10.1) and continue the traversal.
Step 11: assigning gray values to the final label matrix U_F and outputting the image (a combined sketch of steps 9-11 follows).
In the final label matrix U_F output by 10.2), the pixels whose label class value is Y_W are assigned the corresponding gray value x_W, and the resulting gray-valued matrix is output as the segmented image; the final label matrix U_F contains only the class values {1, 2, …, Y_W, …, Y_fin}, and the assignment formula is
x_W = [255 / Y_W],
where [·] denotes rounding to an integer.
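A minimal Python sketch of steps 9-11, assuming U_max is stored as a (Y_fin, n) array whose columns follow row-major pixel order (names are illustrative):

```python
import numpy as np

def classify_and_correct(U_max, y_fin, img_shape):
    """Maximum-membership labelling, 5x5 error-detection correction, and
    gray-value assignment x_W = [255 / Y_W]."""
    # Step 9: maximum membership principle, class values 1..Y_fin
    labels = np.argmax(U_max, axis=0).reshape(img_shape) + 1

    # Step 10: error detection in a 5x5 neighborhood
    padded = np.pad(labels, 2, mode='edge')
    corrected = labels.copy()
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 5, j:j + 5]
            n_l = np.sum(win == labels[i, j])
            if n_l <= 25 / (y_fin + 1):                      # misclassified pixel
                counts = np.bincount(win.ravel(), minlength=y_fin + 1)
                corrected[i, j] = np.argmax(counts[1:]) + 1  # majority label in window

    # Step 11: assign gray value [255 / Y_W] to each class and output
    return (255 // corrected).astype(np.uint8)
```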
The technical effects of the invention are further illustrated below with simulation experiments.
1. Simulation conditions:
the simulation experiment is carried out in the software environment of computer Inter (R) core (TM) i5-3210M 2.50GHZ CPU, 8G memory and MATLAB R2014 a.
2. Simulation content:
Simulation 1: the image numbered 3096 in the Berkeley image database is selected and segmented with the method of the present invention and with the existing FCM, FCM_S1, FCM_S2, FGFCM, IFCM and IIFCM methods; the results are shown in FIG. 2, where:
2(a) is the original 3096 image;
2(b) is the standard segmentation map of the 3096 image;
2(c) is the Gaussian noisy version of the 3096 image, with a noise intensity of 0.006;
2(d) is the result of segmenting the Gaussian noisy 3096 image with the existing FCM method;
2(e) is the result of segmenting the Gaussian noisy 3096 image with the existing FCM_S1 method;
2(f) is the result of segmenting the Gaussian noisy 3096 image with the existing FCM_S2 method;
2(g) is the result of segmenting the Gaussian noisy 3096 image with the existing FGFCM method;
2(h) is the result of segmenting the Gaussian noisy 3096 image with the existing IFCM method;
2(i) is the result of segmenting the Gaussian noisy 3096 image with the existing IIFCM method;
2(j) is the result of segmenting the Gaussian noisy 3096 image with the present invention;
2(k) is the salt-and-pepper noisy version of the 3096 image, with a noise intensity of 0.01;
2(l) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing FCM method;
2(m) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing FCM_S1 method;
2(n) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing FCM_S2 method;
2(o) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing FGFCM method;
2(p) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing IFCM method;
2(q) is the result of segmenting the salt-and-pepper noisy 3096 image with the existing IIFCM method;
2(r) is the result of segmenting the salt-and-pepper noisy 3096 image with the present invention;
As can be seen from FIG. 2, the invention separates the background from the target while suppressing noise, determines the number of image clusters adaptively, and is insensitive to the initial cluster centers, so its segmentation of noisy images is superior to that of the existing FCM, FCM_S1, FCM_S2, FGFCM, IFCM and IIFCM methods.
Simulation 2: the image numbered 238011 in the Berkeley image database is selected and segmented with the method of the present invention and with the existing FCM, FCM_S1, FCM_S2, FGFCM, IFCM and IIFCM methods; the results are shown in FIG. 3, where:
3(a) is the original 238011 image;
3(b) is the standard segmentation map of the 238011 image;
3(c) is the Gaussian noisy version of the 238011 image, with a noise intensity of 0.006;
3(d) is the result of segmenting the Gaussian noisy 238011 image with the existing FCM method;
3(e) is the result of segmenting the Gaussian noisy 238011 image with the existing FCM_S1 method;
3(f) is the result of segmenting the Gaussian noisy 238011 image with the existing FCM_S2 method;
3(g) is the result of segmenting the Gaussian noisy 238011 image with the existing FGFCM method;
3(h) is the result of segmenting the Gaussian noisy 238011 image with the existing IFCM method;
3(i) is the result of segmenting the Gaussian noisy 238011 image with the existing IIFCM method;
3(j) is the result of segmenting the Gaussian noisy 238011 image with the present invention;
3(k) is the salt-and-pepper noisy version of the 238011 image, with a noise intensity of 0.01;
3(l) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing FCM method;
3(m) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing FCM_S1 method;
3(n) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing FCM_S2 method;
3(o) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing FGFCM method;
3(p) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing IFCM method;
3(q) is the result of segmenting the salt-and-pepper noisy 238011 image with the existing IIFCM method;
3(r) is the result of segmenting the salt-and-pepper noisy 238011 image with the present invention;
As can be seen from FIG. 3, the invention separates the background from the target while suppressing noise, determines the number of image clusters adaptively, and is insensitive to the initial cluster centers, so its segmentation of noisy images is superior to that of the existing FCM, FCM_S1, FCM_S2, FGFCM, IFCM and IIFCM methods.

Claims (7)

1. The image segmentation method based on the intuitive fuzzy C-means clustering is characterized by comprising the following steps:
(1) inputting an image to be segmented, and converting the image to be segmented into a gray image if the image to be segmented is a color image;
(2) initializing parameters: setting the maximum iteration number T = 100, the stop threshold ε = 10^-5, the fuzzy weighting index m = 2, the neighborhood window radius ω = 3, the initial iteration number t = 1, and the default initial cluster number Y = 2;
(3) constructing an intuitive fuzzy set robust to noise using a switch-mean based strategy, where 1 ≤ i ≤ n and n represents the total number of pixels; this is implemented as follows:
(3a) first, judging whether the pixel gray values in the 3 by 3 neighborhood of the i-th pixel are not all 0 or 255:
if yes, discarding the pixels whose gray values are 0 or 255, taking the mean gray value of the remaining pixels, setting it as the corrected value of the central pixel, and executing (3b);
if not, leaving the gray value of the central pixel unchanged and executing (3b);
(3b) judging whether all pixels in the input image have been traversed: if yes, outputting the corrected image and executing (3c); if not, returning to (3a) to continue the traversal;
(3c) converting the pixel gray values of the corrected image into an intuitive fuzzy set:
(3c1) let μ(x_i) denote the membership degree of the pixel gray value x_i, and let v(x_i) and π(x_i) denote the non-membership degree and hesitation degree of x_i constructed with the Yager operator, with the hesitation degree given by
π(x_i) = 1 − μ(x_i) − v(x_i),
where 0 ≤ μ(x_i) ≤ 1, 0 ≤ v(x_i) ≤ 1, 0 ≤ μ(x_i) + v(x_i) ≤ 1, and α = 0.8 is the Yager operator coefficient;
(3c2) constructing, from the result obtained in (3c1), the intuitive fuzzy set {(x_i, μ(x_i), v(x_i), π(x_i)) | 1 ≤ i ≤ n};
(4) performing curve fitting with the least squares method on the gray value distribution of the input image, selecting the gray levels corresponding to the peak points of the fitted curve as initial cluster center values, setting the range of candidate cluster numbers as {2, …, Y_max}, and forming the cluster center initial value range set C = [c_1, c_2, …, c_{Y_max}]^T, where c_Y denotes the initial value of the Y-th cluster center, 1 ≤ Y ≤ Y_max, Y_max denotes the maximum cluster number, 2 ≤ Y_max ≤ n, and T denotes transposition;
(5) iteratively computing the linear weighting coefficient H_ir, the intuitive fuzzy distance, and the intuitive fuzzy objective function J:
(5a) constructing the linear weighting coefficient H_ir from the position information and gray information of the pixels, where (a_i, b_i) and (a_r, b_r) denote the spatial coordinates of the central pixel i and the neighborhood pixel r, x_i and x_r denote the gray values of the central pixel i and the neighborhood pixel r, u_ki denotes the membership of x_i to the k-th cluster center c_k, 1 ≤ k ≤ Y, and 1 ≤ r ≤ ω²;
(5b) calculating the intuitive fuzzy distance from pixel x_i to the cluster center c_k under their respective intuitive fuzzy sets;
(5c) combining the linear weighting coefficient H_ir obtained in (5a) with the intuitive fuzzy distance obtained in (5b), calculating the intuitive fuzzy objective function J, and solving it for the membership function u_ki and the intuitive fuzzy set of the cluster center c_k;
(6) judging whether the iteration stop condition ||C^(t+1) − C^(t)|| < ε or t > T holds:
if not, let t = t + 1 and return to step (5);
if yes, output the membership matrix U and judge whether Y < Y_max holds: if yes, let Y = Y + 1, take the first Y values of the cluster center initial value range set C as the cluster center initial value set C^(1), set the initial iteration number t = 1, and return to (5); if Y < Y_max does not hold, perform (7); here C^(t) denotes the set of cluster center values at the t-th iteration, its Q-th entry is the intuitive fuzzy set of the Q-th cluster center at that iteration, the P-th element of C^(1) denotes the initial value of the P-th cluster center, 1 ≤ Q ≤ Y, 1 ≤ P ≤ Y, and U = {u_ki};
(7) taking the (Y_max − 1) membership matrices U output by (6) and evaluating them with the graded distance index GD; setting the membership matrix with the maximum GD value as U_max, and taking the cluster number Y_fin corresponding to U_max as the final cluster number of the input image, where 1 ≤ Y_fin ≤ Y_max, u_i1 represents the largest membership value of the i-th pixel, and u_i2 represents the second-largest membership value of the i-th pixel;
(8) performing pixel classification on the membership matrix U_max output by (7), performing error-detection classification, and outputting the segmented image:
(8a) classifying the pixels of the membership matrix U_max output by (7) according to the maximum membership principle and the final cluster number Y_fin, and outputting a label matrix U_L, where U_L contains only the class values {1, 2, …, Y_W, …, Y_fin}, 1 ≤ Y_W ≤ Y_fin;
(8b) letting the label class value of the central pixel i be L_i, and judging, within the 5 by 5 neighborhood of the central pixel i, whether the number n_L of pixels with label class value L_i satisfies the condition n_L ≤ 25/(Y_fin + 1):
if yes, the central pixel i is judged to be a misclassified pixel, its label class value L_i is corrected to the label class value that occurs most often in the neighborhood, and (8c) is executed;
if not, the central pixel i is judged to be a correctly assigned pixel, and (8c) is executed;
(8c) judging whether all pixels in the label matrix U_L have been traversed: if yes, outputting the final label matrix U_F and executing (8d); if not, returning to (8b) to continue the traversal;
(8d) assigning, in the final label matrix U_F, a distinct corresponding gray value x_W to the pixels whose label class value is Y_W, and outputting the resulting gray-valued matrix as the segmented image.
2. The method according to claim 1, wherein in (4) curve fitting is performed with the least squares method on the gray value distribution of the input image and the corresponding gray levels are selected as initial cluster center values, implemented as follows:
(4a) setting the gray level image to 256 gray levels, wherein the range of the gray levels is [0,255], and drawing a gray level histogram of the gray level image;
(4b) counting the gray distribution frequency f(l) of each gray level l on the gray level histogram and performing a seventh-order least squares curve fit, where 0 ≤ l ≤ 255;
(4c) computing the first derivative f′(l) of f(l) and comparing f′(l) with the first derivatives f′(l−1) and f′(l+1) at the neighboring gray levels, only the sign of f′(l) being checked when l = 0 or l = 255:
when l = 0, judging whether f′(l) < 0 holds; if yes, outputting the gray level l = 0 as a peak point;
when 0 < l < 255, judging whether f′(l) = 0, f′(l−1) > 0 and f′(l+1) < 0 hold; if yes, outputting the gray level l as a peak point;
when l = 255, judging whether f′(l) > 0 holds; if yes, outputting the gray level l = 255 as a peak point;
(4d) judging whether 256 gray levels are traversed completely:
if yes, outputting the peak point, executing (4e), and if not, returning to (4c) to continue traversing;
(4e) sorting the peak points output in (4d) in descending order of their gray distribution frequency values, counting the number of peak points as Y_max, collecting the gray levels l corresponding to the sorted peak points as the gray level set [l_1, l_2, …, l_{Y_max}], taking these gray levels as the initial cluster center values, i.e. c_Y = l_Y, and setting the cluster center initial value range set C = [c_1, c_2, …, c_{Y_max}]^T, where l_Y denotes the gray level of the peak point with the Y-th largest gray distribution frequency value.
3. The method of claim 1, wherein the intuitive fuzzy distance from the pixel gray value x_i to the cluster center c_k in (5b) is calculated as follows:
(5b1) let μ(c_k) denote the membership degree of the cluster center c_k, and let v(c_k) and π(c_k) denote the non-membership degree and hesitation degree of c_k constructed with the Yager operator, with
π(c_k) = 1 − μ(c_k) − v(c_k),
where 0 ≤ μ(c_k) ≤ 1, 0 ≤ v(c_k) ≤ 1, and 0 ≤ μ(c_k) + v(c_k) ≤ 1;
(5b2) calculating the intuitive fuzzy distance from the result obtained in (5b1), where μ(x_i), v(x_i) and π(x_i) denote the membership, non-membership and hesitation degrees of the pixel gray value x_i.
4. The method according to claim 1, wherein in (5c) the local spatial information of the pixels is utilized by combining the linear weighting coefficient H_ir obtained in (5a) with the intuitive fuzzy distance obtained in (5b) to calculate the intuitive fuzzy objective function J, where u_ki is the membership function of pixel x_i to the cluster center c_k and m represents the fuzzy weighting index.
5. The method of claim 4, wherein the membership function u_ki and the intuitive fuzzy set of the cluster center c_k are derived from the intuitive fuzzy objective function J using the Lagrange multiplier method as follows:
(5c1) introducing the Lagrange multiplier λ_i into the intuitive fuzzy objective function J and constructing the objective function L with the Lagrange multiplier;
(5c2) computing the partial derivatives of L with respect to the membership function u_ki, the intuitive fuzzy set of the cluster center c_k, and the Lagrange multiplier λ_i, and setting these partial derivatives to zero to obtain u_ki and the intuitive fuzzy set of the cluster center c_k, where k = 1, 2, …, Y, i = 1, 2, …, n, and r = 1, 2, …, ω².
6. The method of claim 1, wherein the maximum membership principle in (8a) assigns pixel i to the category V_k = arg max_k {u_ki} with the largest membership degree.
7. The method of claim 1, wherein in (8d) the pixels of the final label matrix U_F whose label class value is Y_W are assigned the corresponding gray value x_W according to the formula
x_W = [255 / Y_W],
where [·] denotes rounding to an integer, and the final label matrix U_F contains only the class values {1, 2, …, Y_W, …, Y_fin}.
CN201910507275.7A 2019-06-12 2019-06-12 Image segmentation method based on intuitive fuzzy C-means clustering Active CN110211126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910507275.7A CN110211126B (en) 2019-06-12 2019-06-12 Image segmentation method based on intuitive fuzzy C-means clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910507275.7A CN110211126B (en) 2019-06-12 2019-06-12 Image segmentation method based on intuitive fuzzy C-means clustering

Publications (2)

Publication Number Publication Date
CN110211126A CN110211126A (en) 2019-09-06
CN110211126B true CN110211126B (en) 2022-06-03

Family

ID=67792371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910507275.7A Active CN110211126B (en) 2019-06-12 2019-06-12 Image segmentation method based on intuitive fuzzy C-means clustering

Country Status (1)

Country Link
CN (1) CN110211126B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110634141B (en) * 2019-09-19 2022-02-11 南京邮电大学 Image segmentation method based on improved intuitionistic fuzzy c-means clustering and storage medium
CN110956637B (en) * 2019-12-02 2022-08-02 兰州交通大学 Adaptive fuzzy C-means clustering noise image segmentation method and system
CN111402272B (en) * 2020-03-19 2023-09-29 西安石油大学 Neighborhood weighted fuzzy clustering image segmentation method
CN111476303A (en) * 2020-04-09 2020-07-31 国网河北省电力有限公司电力科学研究院 Line loss analysis method of fuzzy C-means clustering based on quantum optimization particle swarm
CN111539966A (en) * 2020-04-16 2020-08-14 华北电力大学 Colorimetric sensor array image segmentation method based on fuzzy c-means clustering
CN111932578B (en) * 2020-08-12 2022-06-07 河北地质大学 Pixel block clustering nondestructive testing image segmentation method integrating class index suppression factors
CN112541528A (en) * 2020-12-02 2021-03-23 国家电网有限公司 Power transmission and transformation project cost prediction index optimization method based on fuzzy clustering
CN112733825A (en) * 2020-12-18 2021-04-30 北京城市排水集团有限责任公司 Method for extracting image defect space of swing arm radar pipeline
CN113269796B (en) * 2021-06-10 2023-08-25 河北地质大学 Image segmentation method and device and terminal equipment
CN113409335B (en) * 2021-06-22 2023-04-07 西安邮电大学 Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering
CN115131588B (en) * 2022-06-13 2024-02-23 西北工业大学 Image robust clustering method based on fuzzy clustering

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018119808A1 (en) * 2016-12-29 2018-07-05 浙江工商大学 Stereo video generation method based on 3d convolutional neural network
WO2019000455A1 (en) * 2017-06-30 2019-01-03 上海联影医疗科技有限公司 Method and system for segmenting image
CN108062757A (en) * 2018-01-05 2018-05-22 北京航空航天大学 It is a kind of to utilize the method for improving Intuitionistic Fuzzy Clustering algorithm extraction infrared target
CN109145921A (en) * 2018-08-29 2019-01-04 江南大学 A kind of image partition method based on improved intuitionistic fuzzy C mean cluster
CN109741330A (en) * 2018-12-21 2019-05-10 东华大学 A kind of medical image cutting method of mixed filtering strategy and fuzzy C-mean algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A Modified Intuitionistic Fuzzy Clustering Algorithm for Medical Image Segmentation";S.V. Aruna Kumar;《DE GRUYTER》;20190429;第1-15页 *
"显著信息引导的直觉空间模糊聚类图像分割";赵凤;《西安邮电大学学报》;20180731;第1-7页 *

Also Published As

Publication number Publication date
CN110211126A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110211126B (en) Image segmentation method based on intuitive fuzzy C-means clustering
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
CN111340824B (en) Image feature segmentation method based on data mining
CN109509196B (en) Tongue diagnosis image segmentation method based on fuzzy clustering of improved ant colony algorithm
CN110766058B (en) Battlefield target detection method based on optimized RPN (resilient packet network)
Wu et al. Adaptive entropy weighted picture fuzzy clustering algorithm with spatial information for image segmentation
CN111242878B (en) Mine image enhancement method based on cuckoo search
CN110569782A (en) Target detection method based on deep learning
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN109801305B (en) SAR image change detection method based on deep capsule network
CN103761726B (en) Block adaptive image partition method based on FCM
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN116091455A (en) Steel mesh surface defect judging method based on machine vision
CN116912568A (en) Noise-containing label image recognition method based on self-adaptive class equalization
Song et al. A novel self-learning weighted fuzzy local information clustering algorithm integrating local and non-local spatial information for noise image segmentation
CN113989256A (en) Detection model optimization method, detection method and detection device for remote sensing image building
CN108846845B (en) SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering
CN114758226A (en) Hyperspectral image anomaly detection method for two-stage decision-directed double-dictionary construction
CN114399630A (en) Countercheck sample generation method based on belief attack and significant area disturbance limitation
Ayech et al. Image segmentation based on adaptive Fuzzy-C-Means clustering
CN107423771B (en) Two-time-phase remote sensing image change detection method
CN113657394A (en) Remote sensing image segmentation method based on Markov random field and evidence theory
CN110910497B (en) Method and system for realizing augmented reality map
CN102930558A (en) Real-time tracking method for infrared image target with multi-feature fusion
CN111666999A (en) Remote sensing image classification method

Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant