CN112184740B - Image segmentation method based on statistical active contour and texture dictionary - Google Patents
- Publication number
- CN112184740B CN112184740B CN202011064617.1A CN202011064617A CN112184740B CN 112184740 B CN112184740 B CN 112184740B CN 202011064617 A CN202011064617 A CN 202011064617A CN 112184740 B CN112184740 B CN 112184740B
- Authority
- CN
- China
- Prior art keywords
- representing
- image
- level set
- dictionary
- probability
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20116—Active contour; Active surface; Snakes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20161—Level set
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an image segmentation method based on a statistical active contour and a texture dictionary, which solves the technical problems that sparse-texture-based active contour models cannot clearly represent the structure and texture of an image and are computationally expensive. The invention establishes a level set energy function and a level set updating equation based on a Gaussian mixture distribution under a statistical framework. The method comprises the following steps: firstly, obtaining a binary sparse matrix by using a dictionary learning algorithm; secondly, initializing a level set and obtaining probability labels through a linear transformation with the binary sparse matrix; then, obtaining the statistical parameters of the current segmentation from the probability labels; finally, combining the current level set function, the probability labels, and the statistical parameters to predict a new segmentation curve. The level set function evolves under the drive of the probability labels, which are updated from the level-set-based binary labels by a linear transformation. Compared with traditional methods, the method greatly reduces the computational cost while effectively segmenting complex textures.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image segmentation method based on a statistical active contour and a texture dictionary, which can efficiently complete complex texture image segmentation and can be widely applied to the field of image analysis.
Background
Active contour models are a class of methods for object segmentation in the field of computer vision and occupy an irreplaceable position in applications such as medical image analysis. Such a model is typically implemented by minimizing an energy functional, evolving a level set function according to a time-dependent partial differential equation, and locating object boundaries by coupling the image data with a smoothness constraint on the zero level set. Most existing research focuses on designing effective energy functionals that incorporate various constraints and constructions about the image, and several effective methods have been proposed. In general, these methods can be divided into two categories: edge-based methods and region-based active contour methods; the latter have received much attention because of their many advantages (e.g., closed and smooth contours, pixel-level accuracy, lower sensitivity to initialization).
The basic idea of the region-based active contour approach is to identify each region of interest by driving the evolution of the level set function with some region descriptor. The most common piecewise-smooth model is the Chan-Vese model, a simplified variant of the Mumford-Shah model that relies on global information to guide contour evolution. The Chan-Vese model assumes that the image is statistically homogeneous, but this idealized condition is difficult to satisfy in practice. Therefore, Li et al. proposed a method driven by a local binary fitting energy, defined by a kernel function that localizes the variational formulation. However, such models use only local intensity means to characterize the contour model and do not provide sufficient information. Most existing region-based methods use global or local intensity information as region descriptors to guide contour evolution, but cannot process natural images with rich texture features.
To improve the effectiveness of region-based active contours for complex natural images, researchers have proposed many texture descriptor models that introduce other types of information into the energy function in order to formalize and measure the differences between different objects. Some methods represent the texture image in the Beltrami framework as a two-dimensional Riemannian manifold and introduce a hybrid active contour model for segmentation of the texture image. Others use texture detection operators to generate an enhanced image containing texture information, initially segment the image with an image segmentation method, and then refine the result with an adaptive active contour method. Researchers have also introduced a local self-similarity-based texture description operator into an active contour model fitted with local Gaussian distributions, so that the evolving contour effectively captures texture boundaries. In addition to these texture detection operators, sparse representations have been introduced into the active contour model: in one line of work the active contour evolves to optimize the fidelity of a sparse representation of the texture information provided by an object dictionary, and in another a sparse texture energy defined by a weighted combination of texture and structure variation maps enhances the robustness of object boundary detection. Current sparse-texture-based active contour models, however, construct complex energy functions and use edge-based active contours, which results in a high computational load and often fails to effectively characterize the structures and textures in images.
Disclosure of Invention
Aiming at the defects in the background art, the invention provides an image segmentation method based on a statistical active contour and a texture dictionary, solving the technical problems that sparse-texture-based active contour models cannot clearly represent the structure and texture of an image and are computationally expensive.
The technical scheme of the invention is realized as follows:
an image segmentation method based on a statistical active contour and texture dictionary comprises the following steps:
the method comprises the following steps: modeling an observed value of each pixel in an original image by using a Gaussian mixture model;
step two: constructing dictionary elements according to the original image, and constructing a binary sparse matrix according to the dictionary elements;
step three: initializing a segmentation curve according to an original image, and initializing a level set function by using the segmentation curve;
step four: obtaining, from the level set function through a linear transformation with the binary sparse matrix, a probability label that each pixel in the original image belongs to the target and background regions;
step five: updating the statistical parameters of the target and the background area according to the probability label in the step four and the observed value of each pixel in the original image;
step six: according to the probability label of the step four, the level set function of the step three and the statistical parameter of the step five, a new level set function is obtained through a probabilistic level set evolution equation;
step seven: and returning to the step four to iteratively solve the level set function and the probability label until the change value of the new level set function is smaller than the threshold value T, and finishing image segmentation.
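The seven steps above can be sketched end-to-end for scalar observations as follows. This is a self-contained simplification, not the patented implementation: the dictionary construction of step two is replaced by a given binary matrix S, the curvature and area terms of the evolution equation are omitted, and all parameter values and names are illustrative.

```python
import numpy as np

def segment(x, S, phi, T=1e-3, max_iter=200, dt=0.2, eps=1.0):
    """Alternating iteration of steps four to seven for scalar
    observations x (length UV) and a binary matrix S (MN x UV)."""
    S = S.astype(float)
    rs, cs = S.sum(1), S.sum(0)
    H = (phi >= 0).astype(float)
    for _ in range(max_iter):
        # Step four: probability labels via the two linear transforms.
        P1Y = np.divide(S @ H, rs, out=np.zeros_like(rs), where=rs > 0)
        P1X = np.divide(S.T @ P1Y, cs, out=np.zeros_like(cs), where=cs > 0)
        P2X = 1.0 - P1X
        # Step five: soft Gaussian statistics of target / background.
        stats = []
        for P in (P1X, P2X):
            w = P / max(P.sum(), 1e-12)
            mu = (w * x).sum()
            var = max((w * (x - mu) ** 2).sum(), 1e-12)
            stats.append((mu, var))
        (m1, v1), (m2, v2) = stats
        # Step six: probabilistic level set update (log-likelihood ratio,
        # gated by P1X * P2X / eps; curvature/area terms omitted here).
        llr = (-0.5 * np.log(v1) - (x - m1) ** 2 / (2 * v1)
               + 0.5 * np.log(v2) + (x - m2) ** 2 / (2 * v2))
        phi = phi + dt * (P1X * P2X / eps) * llr
        # Step seven: stop when H(Phi) barely changes.
        H_new = (phi >= 0).astype(float)
        if np.abs(H_new - H).mean() < T:
            H = H_new
            break
        H = H_new
    return H
```

With a well-separated two-texture signal and an overlapping S, the loop converges in very few iterations, which is the behavior the soft classification of step five is meant to produce.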
The method for modeling the observed value of each pixel in the original image by using the Gaussian mixture model comprises:

$$p(x_j \mid \Pi, \Theta) = \sum_{i=1}^{2} \pi_i\, p(x_j \mid \theta_i)$$

wherein $p(x_j \mid \Pi, \Theta)$ represents the mixture distribution of the image observed values, $x_j$ represents an image observation value, $\Pi = \{\pi_1, \pi_2\}$, $\Theta = \{\theta_1, \theta_2\}$, $\pi_i$ represents the weight, $p(x_j \mid \theta_i)$ represents a Gaussian probability density function, and $\theta_i$ represents the statistical parameters of region i, where $j = \{1, 2, \ldots, U \times V\}$, $U \times V$ being the image size, $i \in \{1, 2\}$, i = 1 indicating the target region and i = 2 indicating the background region;

wherein $\theta_i = \{\mu_i, \Sigma_i\}$ denotes the statistical parameters of the probability space, $\Sigma_i$ is the variance matrix of region i in probability space, and $\mu_i$ is the mean of region i in probability space.
The method for constructing the dictionary elements according to the original image and constructing the binary sparse matrix according to the dictionary elements comprises the following steps:

S21, extracting N1 image blocks of size $\sqrt{M} \times \sqrt{M}$ from the original image, clustering the N1 image blocks by using the K-means algorithm to obtain N clustering centers, and using the N clustering centers as dictionary elements $D = [D_1, D_2, \ldots, D_l, \ldots, D_N]$, $D_l \in R^{M}$, $l = \{1, \ldots, N\}$;

S22, associating the image block centered on pixel point $x_j$ with its nearest dictionary element:

$$l_j = \arg\min_{l \in \{1, \ldots, N\}} \left\| b_j - D_l \right\|_2^2$$

wherein $b_j \in R^{M}$ is the image block of size $\sqrt{M} \times \sqrt{M}$ centered on pixel point $x_j$, $l_j$ is the index of the dictionary element to which that block is assigned, $j = \{1, 2, \ldots, U \times V\}$, $U \times V$ being the image size;

S23, defining a binary sparse matrix $S \in \{0, 1\}^{MN \times UV}$, whose entry $S_{(l-1)M+m,\,j} = 1$ represents that pixel point $x_j$ coincides with the m-th pixel point $D_l^{m}$ of dictionary element $D_l$ under the block assignment, and is 0 otherwise, where $D_l^{m}$ represents the m-th pixel point in dictionary element $D_l$;
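Steps S21–S23 can be sketched in code. The following is a minimal numpy illustration: the tiny k-means loop, the patch size m = 3, and all function names are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def build_dictionary_assignment(img, m=3, N=4, iters=10, seed=0):
    """Extract one m*m patch per pixel, cluster the patches with a tiny
    k-means, and return (D, S): dictionary elements and the binary matrix."""
    rng = np.random.default_rng(seed)
    U, V = img.shape
    M = m * m
    pad = m // 2
    padded = np.pad(img, pad, mode="edge")
    # One patch per pixel, flattened to length M (step S21's image blocks).
    patches = np.stack([padded[u:u + m, v:v + m].ravel()
                        for u in range(U) for v in range(V)])   # (U*V, M)
    # Tiny k-means: the N cluster centres become the dictionary elements D_l.
    D = patches[rng.choice(len(patches), N, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmin(((patches[:, None, :] - D[None]) ** 2).sum(-1), axis=1)
        for l in range(N):
            if np.any(assign == l):
                D[l] = patches[assign == l].mean(axis=0)
    # Binary matrix S (MN x UV): S[l*M + k, j] = 1 iff image pixel j occupies
    # position k of a patch assigned to dictionary element D_l (step S23).
    S = np.zeros((M * N, U * V), dtype=np.uint8)
    for j in range(U * V):            # patch centred at pixel j
        u, v = divmod(j, V)
        l = assign[j]
        for k in range(M):
            du, dv = divmod(k, m)
            uu, vv = u + du - pad, v + dv - pad
            if 0 <= uu < U and 0 <= vv < V:
                S[l * M + k, uu * V + vv] = 1
    return D, S
```

Because the patches overlap, each column of S (one image pixel) carries up to M = m² ones, which is exactly the property the averaging transforms of step four exploit.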
the level set function is:

$$H(\Phi) \in R^{UV \times 1}, \qquad \Phi = (\phi_1, \phi_2, \ldots, \phi_{U \times V})$$

wherein H(·) is the Heaviside step function and $\phi_j$ represents the level set function value corresponding to pixel point $x_j$.
The method for calculating the probability label of each pixel belonging to the target comprises:

$$P_{1Y} = \mathrm{diag}(S I')^{-1} S H(\Phi)$$

$$P_{1X} = \mathrm{diag}(S^{T} I'')^{-1} S^{T} P_{1Y}$$

wherein I' represents the all-ones column vector of length UV, I'' represents the all-ones column vector of length MN, $P_{1Y}$ represents the probability labels that the pixels in the dictionary elements belong to the target, and $P_{1X}$ represents the probability labels that the pixels of the image to be segmented belong to the target;

wherein $P_{1Y}^{(l,m)}$ represents the probability that pixel $D_l^{m}$ in the dictionary elements belongs to the target, $P_{1X}^{j}$ represents the probability that pixel point $x_j$ in the original image belongs to the target, and $\Omega_1$ represents the target region in the original image.
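The two transforms above are normalized averages over the binary sparse matrix; a minimal numpy sketch follows (the zero-row/zero-column guards and the function name are added assumptions):

```python
import numpy as np

def probability_labels(S, H_phi):
    """Propagate a binary segmentation H(Phi) to probabilistic labels:
    P1Y on dictionary pixels, then P1X back on image pixels."""
    S = S.astype(float)
    row_sums = S.sum(axis=1)                  # diag(S I')
    col_sums = S.sum(axis=0)                  # diag(S^T I'')
    # Rows/columns with no assignment would divide by zero; guard them.
    P1Y = np.divide(S @ H_phi, row_sums,
                    out=np.zeros_like(row_sums), where=row_sums > 0)
    P1X = np.divide(S.T @ P1Y, col_sums,
                    out=np.zeros_like(col_sums), where=col_sums > 0)
    return P1Y, P1X
```

Note that even though H(Φ) is binary, the averaging makes P1Y and P1X fractional wherever assignments overlap, which is what turns the hard segmentation into soft probability labels.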
The statistical parameters include the mean and the variance matrix:

The mean is calculated as:

$$\mu_i = \frac{\sum_{j=1}^{U \times V} P_{iX}^{j}\, x_j}{\sum_{j=1}^{U \times V} P_{iX}^{j}}$$

The variance matrix is calculated as:

$$\Sigma_i = \frac{\sum_{j=1}^{U \times V} P_{iX}^{j}\, (x_j - \mu_i)(x_j - \mu_i)^{T}}{\sum_{j=1}^{U \times V} P_{iX}^{j}}$$

wherein $P_{iX}^{j}$ represents the probability label, learned from the dictionary, that pixel $x_j$ belongs to region i, and $x_j$ is the observed value of the image pixel.
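The mean and variance-matrix updates are probability-weighted averages over the pixels of one region; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def update_statistics(X, P):
    """Soft (probability-weighted) mean and variance matrix of one region.
    X: (n, d) observations; P: (n,) probability labels for this region."""
    w = P / P.sum()                       # normalised probability labels
    mu = (w[:, None] * X).sum(axis=0)     # weighted mean
    diff = X - mu
    # Weighted sum of outer products (x_j - mu)(x_j - mu)^T.
    Sigma = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0)
    return mu, Sigma
```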
The method for obtaining the new level set function through the probabilistic level set evolution equation comprises:

$$\phi_j^{(k+1)} = \phi_j^{(k)} + \Delta t\, \frac{1}{\varepsilon}\, P_{1X}^{j,(k)} P_{2X}^{j,(k)} \left[ \log \frac{p\big(x_j \mid \theta_1^{(k)}\big)}{p\big(x_j \mid \theta_2^{(k)}\big)} - \beta + \gamma\, \mathrm{div}\!\left( \frac{\nabla \phi^{(k)}}{\big|\nabla \phi^{(k)}\big|} \right) \right]$$

wherein $\phi_j^{(k)}$ represents the level set function value corresponding to pixel point $x_j$ at iteration k, $\phi_j^{(k+1)}$ represents that at iteration k+1, $\Delta t$ denotes the step size, $\varepsilon$ is the scale factor, $P_{1X}^{j,(k)}$ represents the probability label, learned from the dictionary at iteration k, that pixel $x_j$ belongs to the target, $P_{2X}^{j,(k)}$ represents the probability label that it belongs to the background, $p(x_j \mid \theta_i^{(k)})$ represents the Gaussian probability density of pixel $x_j$ with respect to the statistical parameters $\theta_i^{(k)}$, $\beta$ represents the weight of the area term, div represents the curvature operator, $\nabla$ is the gradient, and $\gamma$ represents the weight of the perimeter term.
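A per-pixel sketch of one update of the probabilistic level set evolution equation (numpy assumed; the curvature term is passed in precomputed as γ·div(∇φ/|∇φ|), and all names are illustrative):

```python
import numpy as np

def gauss_logpdf(x, mu, Sigma):
    """Log-density of a d-dimensional Gaussian N(mu, Sigma)."""
    d = len(mu)
    diff = x - mu
    return -0.5 * (d * np.log(2 * np.pi)
                   + np.log(np.linalg.det(Sigma))
                   + diff @ np.linalg.inv(Sigma) @ diff)

def levelset_step(phi, P1X, P2X, x, theta1, theta2,
                  dt=0.1, eps=1.0, beta=0.0, curv=None):
    """One step of the evolution: the log-likelihood ratio of the two
    Gaussian regions, gated by P1X * P2X / eps."""
    llr = np.array([gauss_logpdf(xj, *theta1) - gauss_logpdf(xj, *theta2)
                    for xj in x])
    if curv is None:
        curv = np.zeros_like(phi)   # gamma * curvature, precomputed
    return phi + dt * (P1X * P2X / eps) * (llr - beta + curv)
```

The gating factor P1X·P2X concentrates the update near the current uncertain region (where both probabilities are away from 0 and 1), which is why the probability labels drive the evolution.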
The beneficial effects of this technical scheme are as follows: firstly, a sparse dictionary is acquired by using a dictionary learning algorithm; then the level set is initialized and probability labels are obtained through a linear transformation based on the learned dictionary; the statistical parameters of the current segmentation are obtained from the probability labels; and a new segmentation curve is then predicted by combining the current level set function, the probability labels, and the statistical parameters. The level set function evolves under the drive of the probability labels, which are updated from the level-set-based binary labels by a linear transformation. Compared with traditional methods, the method greatly reduces the computational cost while effectively segmenting complex textures.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the present invention.
FIG. 2 is an initial profile image of the present invention.
Fig. 3 shows the GPAC model segmentation result.
FIG. 4 shows the results of the TACSMM model segmentation.
Fig. 5 shows the DSNAKE model segmentation results.
FIG. 6 shows the segmentation result of the proposed method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without inventive step, are within the scope of the present invention.
The embodiment of the invention provides an image segmentation method based on a statistical active contour and a texture dictionary, which addresses the accuracy and efficiency problems of texture segmentation. The segmentation is driven by a sparse dictionary: probability labels representing the texture target drive the evolution of the segmentation curve. The method can be divided into two alternating steps: 1) obtaining probability labels of belonging to the texture from the binary level set function; 2) obtaining the probability distributions of the target and the background from the probability labels and computing a new level set function. These two steps alternate until a stopping condition is met. The specific process, shown in FIG. 1, comprises the following steps:
the method comprises the following steps: describing the observed value of each pixel in the original image by using a Gaussian mixture model. The original image domain is expressed as $\Omega \subset R^2$, where $\Omega = \{\Omega_i\}$, $i \in \{1, 2\}$, $\Omega_1$ representing the target region in the original image and $\Omega_2$ representing the background region in the original image. The d-dimensional random variable $x_j$ represents the j-th pixel point in the original image, $j = \{1, 2, \ldots, U \times V\}$, where $U \times V$ represents the number of pixels in the original image. The present invention utilizes a Gaussian Mixture Model (GMM) to model the image data, with $p(x_j \mid \theta_i)$ a Gaussian distribution function that is also an element of the GMM.
The method for describing the observed value of each pixel in the original image by using the Gaussian mixture model comprises:

$$p(x_j \mid \Pi, \Theta) = \sum_{i=1}^{2} \pi_i\, p(x_j \mid \theta_i) \qquad (1)$$

$$p(x_j \mid \theta_i) = \frac{1}{(2\pi)^{d/2}\, |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x_j - \mu_i)^{T} \Sigma_i^{-1} (x_j - \mu_i) \right) \qquad (2)$$

wherein $p(x_j \mid \Pi, \Theta)$ represents the mixture distribution of the image observed values, $x_j$ represents an image observation value, $\Pi = \{\pi_1, \pi_2\}$, $\Theta = \{\theta_1, \theta_2\}$, $\pi_i$ represents the weight, $p(x_j \mid \theta_i)$ represents the probability density, and $\theta_i$ represents the statistical parameters of region i, where $j = \{1, 2, \ldots, U \times V\}$, $U \times V$ is the image size, $i \in \{1, 2\}$, i = 1 indicating the target region and i = 2 the background region;

wherein $\theta_i = \{\mu_i, \Sigma_i\}$ denotes the statistical parameters of the probability space, $\Sigma_i$ is the variance matrix of region i in probability space, and $\mu_i$ is the mean of region i in probability space. d is the dimension of the observed value $x_j$, and $\Sigma_i$ is a positive definite matrix of size d × d.
Step two: constructing dictionary elements according to the original image, and constructing a binary sparse matrix according to the dictionary elements; the specific method comprises steps S21 to S23:
S21, extracting N1 image blocks of size $\sqrt{M} \times \sqrt{M}$ from the original image, clustering the N1 image blocks by using the K-means algorithm to obtain N clustering centers, and using the N clustering centers as dictionary elements $D = [D_1, D_2, \ldots, D_l, \ldots, D_N]$, $D_l \in R^{M}$, $l = \{1, \ldots, N\}$; in the embodiment, M is 9;

S22, associating the image block centered on pixel point $x_j$ with its nearest dictionary element:

$$l_j = \arg\min_{l \in \{1, \ldots, N\}} \left\| b_j - D_l \right\|_2^2$$

wherein $b_j \in R^{M}$ is the image block of size $\sqrt{M} \times \sqrt{M}$ centered on pixel point $x_j$, $l_j$ is the index of the dictionary element to which that block is assigned, $j = \{1, 2, \ldots, U \times V\}$;

S23, defining a binary sparse matrix $S \in \{0, 1\}^{MN \times UV}$, whose entry $S_{(l-1)M+m,\,j} = 1$ represents that pixel point $x_j$ coincides with the m-th pixel point $D_l^{m}$ of dictionary element $D_l$ under the block assignment, and is 0 otherwise, where $D_l^{m}$ represents the m-th pixel point in dictionary element $D_l$;
the step two process is the most time-consuming step of the whole segmentation, but it can be performed in advance, and once the sparse matrix has been calculated for a picture it can be reused multiple times.
The above process has the following property: each image block $b_j$ points to a single dictionary element $D_l$, but because the image blocks overlap, i.e., each pixel point $x_j$ may appear at different positions of different image blocks, each pixel is related to up to $M = m^2$ dictionary pixels (image pixels within a border of width m − 1 at the image edge are related to fewer than M dictionary pixels). In other words, each assignment associates M pixels of the image with M pixels of the dictionary. This binary relationship between image pixels and dictionary pixels is represented here using a sparse binary matrix S, which extracts texture features through the dictionary assignment of each image block together with the spatial relationships between the image blocks.
Step three: initializing a segmentation curve according to an original image, and initializing a level set function by using the segmentation curve;
level set function $\phi_j$ is a signed distance function, defined herein as

$$\phi_j = \varepsilon \log \frac{P_{1X}^{j}}{P_{2X}^{j}} \qquad (5)$$

wherein $\varepsilon$ is the scale coefficient, $P_{1X}^{j}$ represents the probability that pixel point $x_j$ in the original image belongs to the target region, and $P_{2X}^{j}$ represents the probability that pixel point $x_j$ in the original image belongs to the background region.
The invention allows the level set function $\phi_j$ to be initialized to an arbitrary shape. This is typically done by setting a matrix equal in size to the image and setting the values on the initial curve to zero; $\phi_j$ represents the level set function value corresponding to pixel point $x_j$:

$$\phi_j \begin{cases} > 0, & x_j \text{ inside the initial curve} \\ = 0, & x_j \text{ on the initial curve} \\ < 0, & x_j \text{ outside the initial curve} \end{cases} \qquad (6)$$

H(·) is defined as the Heaviside step function:

$$H(\phi_j) = \begin{cases} 1, & \phi_j \geq 0 \\ 0, & \phi_j < 0 \end{cases} \qquad (7)$$
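The binary step function and a smooth surrogate can be sketched as follows (the sigmoid regularization and the ≥ 0 convention at the zero crossing are illustrative assumptions, not the patented definition):

```python
import numpy as np

def heaviside(phi):
    """Binary Heaviside of the level set: 1 for phi >= 0, else 0."""
    return (phi >= 0).astype(float)

def heaviside_eps(phi, eps=1.0):
    """Sigmoid-regularised Heaviside, used where a differentiable
    approximation of the step function is needed."""
    return 1.0 / (1.0 + np.exp(-phi / eps))
```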
through the operation of H(·), the target and the background of the current image can be obtained from the level set function, i.e., H(·): Ω → {1, 2}. Each image block $b_j$ selected in the image corresponds, at the same spatial position, to a label patch of the level set function. In turn, each dictionary element $D_l$ is pointed to by multiple image blocks, so the label of a dictionary cell can be calculated from $H(\Phi)$, $\Phi = (\phi_1, \phi_2, \ldots, \phi_{U \times V})$.
Step four: obtaining a probability label that each pixel in an original image belongs to a target region and a background region through a level set function through binary sparse matrix linear transformation;
the method for calculating the probability label of each pixel belonging to the target region comprises:

$$P_{1Y} = \mathrm{diag}(S I')^{-1} S H(\Phi) \qquad (8)$$

wherein I' represents the all-ones column vector of length UV, $P_{1Y}$ represents the probability labels of the pixels in the dictionary elements, and $H(\Phi) \in R^{UV \times 1}$.
The label of a dictionary unit is obtained as the pixel-wise average of the label patches of the image blocks assigned to it. Dictionary labels can be computed by arranging the pixels of the label image H(Φ) into a binary vector and multiplying by the matrix S normalized so that each row sums to one; note that the computed label is an average over multiple labels and is thus no longer binary. The probability map $P_{1X}$ is likewise obtained by averaging: each dictionary label is placed back in image space at the positions of the image blocks assigned to the dictionary element in question. As the patches overlap, up to M values are averaged to compute a pixel probability.
The probability labels of the dictionary elements are defined by:

$$P_{1Y}^{(l,m)} = p\big( D_l^{m} \in \Omega_1 \big) \qquad (9)$$

wherein $P_{1Y}^{(l,m)}$ represents the probability that pixel $D_l^{m}$ in the dictionary elements belongs to the target.

The probability map is defined as follows:

$$P_{1X}^{j} = p\big( x_j \in \Omega_1 \big) \qquad (10)$$

wherein $P_{1X}^{j}$ represents the probability that pixel point $x_j$ in the original image belongs to the target, and $\Omega_1$ represents the target region in the original image; $P_{2X}$ is the probability label of belonging to the background.

The probability map is computed as follows:

$$P_{1X} = \mathrm{diag}(S^{T} I'')^{-1} S^{T} P_{1Y} \qquad (11)$$

wherein I'' represents the all-ones column vector of length MN and $P_{1X}$ represents the probability labels of the pixels of the image to be segmented.

$P_{2Y}$ represents the probability that the pixels in the dictionary elements belong to the background, $P_{2X}$ represents the probability labels that the pixels of the image to be segmented belong to the background, and $P_{1Y} + P_{2Y} = 1$, $P_{1X} + P_{2X} = 1$.
Step five: updating the statistical parameters of the target area and the background area according to the probability label in the step four;
the invention uses the cost function of a probabilistic Chan-Vese model:

$$F(\Phi, \Theta) = -\sum_{j=1}^{U \times V} H(\phi_j) \log p\big( x_j \mid \theta_1 \big) - \sum_{j=1}^{U \times V} \big( 1 - H(\phi_j) \big) \log p\big( x_j \mid \theta_2 \big) + \gamma \sum_{j=1}^{U \times V} \big| \nabla H(\phi_j) \big| + \beta \sum_{j=1}^{U \times V} H(\phi_j) \qquad (12)$$

wherein the first two terms on the right side of equation (12) are data terms, the third term limits the length of the segmentation contour, and the fourth term limits the area of the segmentation contour; $p(x_j \mid \theta_i^{(k)})$ is the Gaussian probability density of pixel $x_j$ at iteration k under the parameters $\theta_i^{(k)}$; $\beta$ represents the weight of the area term and $\gamma$ is the weight parameter of the perimeter term; β and γ are used to adjust the relationship between the terms. H(·), defined by equation (7), is discrete and non-differentiable, and is approximated in practical applications using a regularized expression:

$$H_{\varepsilon}(\phi_j) = \frac{1}{1 + e^{-\phi_j / \varepsilon}} \qquad (13)$$
the parameter update is realized by minimizing the energy function. First, substituting formula (5) into formula (13) yields a new approximate expression:

$$\tilde{H}(\phi_j) = P_{1X}^{j}, \qquad 1 - \tilde{H}(\phi_j) = P_{2X}^{j} \qquad (14)$$

wherein $\tilde{H}(\phi_j)$ denotes the approximation of $H(\phi_j)$ and $1 - \tilde{H}(\phi_j)$ denotes its complement;

rewriting the cost function (12) in discrete form and substituting equations (2) and (14) yields:

$$F = -\sum_{j=1}^{U \times V} \Big[ P_{1X}^{j} \log\big(\pi_1\, p(x_j \mid \theta_1)\big) + P_{2X}^{j} \log\big(\pi_2\, p(x_j \mid \theta_2)\big) \Big] + \gamma \sum_{j=1}^{U \times V} \big|\nabla \tilde{H}(\phi_j)\big| + \beta \sum_{j=1}^{U \times V} \tilde{H}(\phi_j) \qquad (15)$$

Minimizing the above with respect to the mean and the variance respectively, first take the derivative of equation (15) with respect to the mean $\mu_i$:

$$\frac{\partial F}{\partial \mu_i} = -\sum_{j=1}^{U \times V} P_{iX}^{j}\, \Sigma_i^{-1} (x_j - \mu_i) \qquad (16)$$

Setting equation (16) equal to 0 gives:

$$\mu_i = \frac{\sum_{j=1}^{U \times V} P_{iX}^{j}\, x_j}{\sum_{j=1}^{U \times V} P_{iX}^{j}} \qquad (17)$$

The update equation of the variance is obtained in the same way:

$$\Sigma_i = \frac{\sum_{j=1}^{U \times V} P_{iX}^{j}\, (x_j - \mu_i)(x_j - \mu_i)^{T}}{\sum_{j=1}^{U \times V} P_{iX}^{j}} \qquad (18)$$

Unlike traditional region-based active contours, the invention uses soft classification in the parameter estimation, which provides more information and accelerates the convergence of $\mu_i$ and $\Sigma_i$, so that the result can be obtained in fewer iterations.
Step six: obtaining a new level set function through a probabilistic level set evolution equation according to the probability label in the step four, the level set function in the step three and the statistical parameter in the step five;
for a fixed θ, the update equation of φ can be derived by gradient descent:

$$\phi_j^{(k+1)} = \phi_j^{(k)} - \Delta t\, \frac{\partial F}{\partial \phi_j} \qquad (19)$$

From equations (5) and (12):

$$\frac{\partial F}{\partial \phi_j} = -\frac{1}{\varepsilon}\, P_{1X}^{j} P_{2X}^{j} \left[ \log \frac{p(x_j \mid \theta_1)}{p(x_j \mid \theta_2)} - \beta + \gamma\, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) \right]$$

Substituting the above equation into equation (19) yields the probabilistic level set evolution equation:

$$\phi_j^{(k+1)} = \phi_j^{(k)} + \Delta t\, \frac{1}{\varepsilon}\, P_{1X}^{j,(k)} P_{2X}^{j,(k)} \left[ \log \frac{p\big(x_j \mid \theta_1^{(k)}\big)}{p\big(x_j \mid \theta_2^{(k)}\big)} - \beta + \gamma\, \mathrm{div}\!\left( \frac{\nabla \phi^{(k)}}{\big|\nabla \phi^{(k)}\big|} \right) \right] \qquad (20)$$

wherein $\phi_j^{(k)}$ represents the level set function value of pixel point $x_j$ at iteration k, $\phi_j^{(k+1)}$ represents that at iteration k+1, $\Delta t$ represents the step size, $\varepsilon$ is the scale factor, $P_{1X}^{j,(k)}$ represents the probability label, learned from the dictionary at iteration k, that pixel $x_j$ belongs to the target, $P_{2X}^{j,(k)}$ represents the probability label that it belongs to the background, $p(x_j \mid \theta_i^{(k)})$ represents the Gaussian probability density of pixel $x_j$ with respect to the statistical parameters $\theta_i^{(k)}$, $\beta$ represents the weight of the area term, div represents the curvature operator, $\nabla$ is the gradient, and $\gamma$ represents the weight of the perimeter term. Note that the level set update formula given here is an expression under the probability framework; it can be seen from the formula that the update of the level set function is driven by the probability labels and the statistical parameters, which are in turn derived from the texture dictionary.
Step seven: returning to the step of four iterative solution level set functions phi and probability labelsAnd completing image segmentation until the change value of the new level set function is less than the threshold value T. The stop condition is a change in H (Φ) as a threshold, which can be set according to the image size. When H (Φ) does not change much (a specific value can be set according to the image size), the segmentation can be stopped.
Specific examples
To verify the performance of the proposed method, an image of 473 × 473 resolution generated from two textures with similar colors is segmented and compared with three algorithms, as shown in FIGS. 2-6. FIG. 2 shows the initial contour; the same initial level set is used for all four algorithms. FIG. 6 shows the segmentation result of the method of the invention. The three comparison algorithms are the graph-cut-based active contour (GPAC), the texture-aware active contour based on Student's t mixtures (TACSMM), and the learned-dictionary-based snake model (DSNAKE); their segmentation results are shown in FIGS. 3-5, respectively.
TABLE 1 Comparison of several texture active contour methods

|     | GPAC   | TACSMM | DSNAKE | Ours   |
|-----|--------|--------|--------|--------|
| RI  | 0.5846 | 0.9428 | 0.9533 | 0.9637 |
| GCE | 0.3620 | 0.0392 | 0.0255 | 0.0178 |
| VI  | 1.8726 | 0.5197 | 0.4570 | 0.3567 |
Table 1 gives the quantitative comparison of the four algorithms on this image. RI denotes the probabilistic Rand index, GCE the global consistency error, and VI the variation of information. A higher RI value indicates a better segmentation, while lower GCE and VI values are better. The experimental results show that the segmentation performance of the invention is the best of the four.
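For reference, the plain Rand index can be computed as below. This is a sketch: the paper may use the probabilistic variant of RI, and GCE/VI are omitted for brevity.

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index: fraction of pixel pairs on which two label maps agree
    about being in the same segment or not."""
    a, b = np.ravel(a), np.ravel(b)
    pairs = list(combinations(range(len(a)), 2))
    agree = sum(((a[i] == a[j]) == (b[i] == b[j])) for i, j in pairs)
    return agree / len(pairs)
```

The pairwise definition makes RI invariant to label permutations, which is why swapping "target" and "background" labels leaves the score unchanged.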
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (6)
1. An image segmentation method based on a statistical active contour and texture dictionary is characterized by comprising the following steps:
the method comprises the following steps: modeling an observed value of each pixel in an original image by using a Gaussian mixture model;
step two: constructing dictionary elements according to the original image, and constructing a binary sparse matrix according to the dictionary elements;
step three: initializing a segmentation curve according to an original image, and initializing a level set function by using the segmentation curve;
step four: obtaining, from the level set function through a linear transformation with the binary sparse matrix, a probability label that each pixel in the original image belongs to the target and background regions;
step five: updating the statistical parameters of the target and the background area according to the probability label in the step four and the observed value of each pixel in the original image;
step six: according to the probability label of the step four, the level set function of the step three and the statistical parameter of the step five, a new level set function is obtained through a probabilistic level set evolution equation;
the method for obtaining the new level set function through the probabilistic level set evolution equation comprises:

$$\phi_j^{(k+1)} = \phi_j^{(k)} + \Delta t\, \frac{1}{\varepsilon}\, P_{1X}^{j,(k)} P_{2X}^{j,(k)} \left[ \log \frac{p\big(x_j \mid \theta_1^{(k)}\big)}{p\big(x_j \mid \theta_2^{(k)}\big)} - \beta + \gamma\, \mathrm{div}\!\left( \frac{\nabla \phi^{(k)}}{\big|\nabla \phi^{(k)}\big|} \right) \right]$$

wherein $\phi_j^{(k)}$ represents the level set function value of pixel $x_j$ at iteration k, $\phi_j^{(k+1)}$ represents that at iteration k+1, $\Delta t$ represents the step size, $\varepsilon$ is the scale factor, $P_{1X}^{j,(k)}$ represents the probability label, learned from the dictionary at iteration k, that pixel $x_j$ belongs to the target, $P_{2X}^{j,(k)}$ represents the probability label that it belongs to the background, $p(x_j \mid \theta_i^{(k)})$ represents the Gaussian distribution probability density of pixel point $x_j$ with respect to the statistical parameters $\theta_i^{(k)}$, $\theta_i$ represents the statistical parameters of region i, $i \in \{1, 2\}$, i = 1 representing the target region and i = 2 the background region, $\beta$ represents the weight of the area term, div represents the curvature operator, $\nabla$ is the gradient, and $\gamma$ represents the weight of the perimeter term;
step seven: returning to step four to iteratively solve the level set function and the probability label until the change in the new level set function is smaller than a threshold T, thereby completing the image segmentation.
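A minimal Python sketch of one update in the spirit of step six. The patent's equation image is not reproduced in this text, so the exact combination of terms is an assumption; all function and parameter names here are illustrative, not taken from the patent.

```python
import numpy as np

def evolve_level_set(phi, log_p1, log_p2, dt=0.1, beta=0.0, gamma=1.0, eps=1.0):
    """One sketch iteration of a probabilistic level-set update.

    log_p1 / log_p2 play the role of ln(P_1X * p(x|theta_1)) and
    ln(P_2X * p(x|theta_2)) per pixel; beta weights the area term and
    gamma the curvature (perimeter) term, as the claim's glossary suggests.
    """
    phi = np.asarray(phi, dtype=float)
    # regularized Dirac delta with scale factor eps
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    # div(grad(phi) / |grad(phi)|): the curvature of the level sets
    curvature = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    force = (log_p1 - log_p2) + beta + gamma * curvature
    return phi + dt * delta * force
```

In use, the loop of step seven would call this repeatedly, recomputing the probability labels and statistical parameters between calls, until the change in `phi` falls below the threshold T.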
2. The image segmentation method based on the statistical active contour and texture dictionary as claimed in claim 1, wherein the observed value of each pixel in the original image is modeled with the Gaussian mixture model:
f(x_j | Π, Θ) = Σ_{i=1}^{2} π_i·p(x_j|θ_i)
wherein f(x_j | Π, Θ) represents the Gaussian mixture distribution function of the image observed values, x_j represents an image observed value, Π = {π_1, π_2}, Θ = {θ_1, θ_2}, π_i represents a mixing weight, p(x_j|θ_i) represents a probability density function, θ_i represents the statistical parameters of region i, where j ∈ {1, 2, ..., U×V}, U×V being the image size, i ∈ {1, 2}, i = 1 indicating the target region, and i = 2 indicating the background region;
wherein θ_i = {μ_i, Σ_i} represents the statistical parameters in probability space, Σ_i is the variance matrix of region i in probability space, and μ_i is the mean of region i in probability space.
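The two-component mixture of claim 2 can be evaluated with a short, self-contained sketch (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def gmm_density(x, weights, means, covs):
    """Evaluate f(x | Pi, Theta) = sum_i pi_i * p(x | theta_i)
    for a mixture of multivariate Gaussians, theta_i = {mu_i, Sigma_i}."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    d = x.size
    total = 0.0
    for pi_i, mu_i, sigma_i in zip(weights, means, covs):
        mu_i = np.atleast_1d(np.asarray(mu_i, dtype=float))
        sigma_i = np.atleast_2d(np.asarray(sigma_i, dtype=float))
        diff = x - mu_i
        # multivariate normal density p(x | mu_i, Sigma_i)
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma_i))
        expo = -0.5 * diff @ np.linalg.solve(sigma_i, diff)
        total += pi_i * norm * np.exp(expo)
    return total
```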
3. The image segmentation method based on the statistical active contour and texture dictionary as claimed in claim 1, wherein the method for constructing dictionary elements from the original image and constructing the binary sparse matrix from the dictionary elements comprises:
S21, extracting N1 image blocks of size √M × √M from the original image, clustering the N1 image blocks by using the K-means algorithm to obtain N cluster centers, and taking the N cluster centers as dictionary elements D = [D_1, D_2, ..., D_l, ..., D_N], D_l ∈ R^M, l ∈ {1, ..., N};
S22, associating the image block centered at pixel point x_j with its nearest dictionary element:
l_j = argmin_{l∈{1,...,N}} ‖B_j − D_l‖²
wherein B_j is the image block of size √M × √M centered at pixel point x_j, l_j is the index of the dictionary element closest to image block B_j, and j ∈ {1, 2, ..., U×V}, U×V being the image size;
S23, defining a binary sparse matrix S = [S_1^T, S_2^T, ..., S_j^T, ..., S_{U×V}^T] ∈ R^{MN×UV}, wherein the entry of S_j corresponding to the m-th pixel point D_l^m of dictionary element D_l equals 1 when pixel point x_j is associated with D_l^m, and equals 0 otherwise, D_l^m representing the m-th pixel point in dictionary element D_l;
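Steps S21 and S22 can be sketched as follows; the K-means details (initialization, iteration count) are assumptions, since the claim does not fix them, and the names are illustrative:

```python
import numpy as np

def build_dictionary(image, patch, n_elems, iters=10, seed=0):
    """S21: cluster patch x patch image blocks with a naive K-means and
    return the cluster centers as dictionary elements (one per row),
    plus the nearest-element assignment of each block (S22)."""
    rng = np.random.default_rng(seed)
    U, V = image.shape
    # gather all fully contained blocks, flattened to length M = patch*patch
    patches = np.array([
        image[r:r + patch, c:c + patch].ravel()
        for r in range(U - patch + 1)
        for c in range(V - patch + 1)
    ], dtype=float)
    centers = patches[rng.choice(len(patches), n_elems, replace=False)]
    for _ in range(iters):
        # assign each block to its nearest center, then recompute centers
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_elems):
            if np.any(labels == k):
                centers[k] = patches[labels == k].mean(axis=0)
    return centers, labels
```

The binary sparse matrix of S23 is then populated from `labels`: each block's pixels map to the corresponding pixels of its assigned dictionary element.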
4. The image segmentation method based on the statistical active contour and texture dictionary as claimed in claim 3, wherein the level set function is:
H(Φ)∈RUV×1
wherein Φ = (φ_1, φ_2, ..., φ_{U×V}), and H(·) is the Heaviside step function;
wherein phijRepresenting a pixel point xjThe corresponding level set function value.
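In practice H is usually replaced by a smooth approximation so that H(Φ) varies continuously near the zero level set. The sketch below uses the common arctangent form, which is an assumption; the patent does not state which regularization it uses.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Regularized Heaviside applied elementwise to the level set values;
    eps controls how sharply the step transitions around phi = 0."""
    phi = np.asarray(phi, dtype=float)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
```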
5. The image segmentation method based on the statistical active contour and texture dictionary as claimed in claim 4, wherein the probability label of each pixel belonging to the target is calculated by:
P1Y=diag(SI')-1SH(Φ)
P1X=diag(STI”)-1STP1Y
wherein I′ represents the all-ones column vector of length UV, I″ represents the all-ones column vector of length MN, P_1Y represents the probability labels that the pixels in the dictionary elements belong to the target, and P_1X represents the probability labels that the pixels of the image to be segmented belong to the target;
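Taking I′ and I″ as all-ones vectors, the two formulas of claim 5 reduce to normalized averages over the rows and columns of S, as this sketch shows (names are illustrative):

```python
import numpy as np

def probability_labels(S, H_phi):
    """Compute P1Y = diag(S I')^(-1) S H(Phi) and
    P1X = diag(S^T I'')^(-1) S^T P1Y, where I', I'' are ones vectors,
    so each diag(.)^(-1) divides by a row or column sum of S."""
    S = np.asarray(S, dtype=float)
    H_phi = np.asarray(H_phi, dtype=float)
    row = S.sum(axis=1)                        # S @ ones(UV)
    P1Y = (S @ H_phi) / np.where(row == 0, 1, row)
    col = S.sum(axis=0)                        # S^T @ ones(MN)
    P1X = (S.T @ P1Y) / np.where(col == 0, 1, col)
    return P1Y, P1X
```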
6. The image segmentation method based on the statistical active contour and texture dictionary as claimed in claim 5, wherein the statistical parameters include the mean and the variance matrix:
the mean is calculated as:
μ_i = Σ_j P_{iX,j}·x_j / Σ_j P_{iX,j}
the variance matrix is calculated as:
Σ_i = Σ_j P_{iX,j}·(x_j − μ_i)(x_j − μ_i)^T / Σ_j P_{iX,j}
wherein P_{iX,j} denotes the probability label that pixel point x_j belongs to region i.
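Claim 6's mean and variance matrix follow the usual probability-weighted form; the sketch below assumes that form, since the original formula images are not reproduced in this text:

```python
import numpy as np

def update_statistics(X, P):
    """Probability-weighted mean and variance matrix for one region:
    mu = sum_j P_j x_j / sum_j P_j and
    Sigma = sum_j P_j (x_j - mu)(x_j - mu)^T / sum_j P_j."""
    X = np.atleast_2d(np.asarray(X, dtype=float))  # one observation per row
    P = np.asarray(P, dtype=float)
    w = P / P.sum()                                # normalized weights
    mu = w @ X
    diff = X - mu
    sigma = (diff * w[:, None]).T @ diff
    return mu, sigma
```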
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011064617.1A CN112184740B (en) | 2020-09-30 | 2020-09-30 | Image segmentation method based on statistical active contour and texture dictionary |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112184740A CN112184740A (en) | 2021-01-05 |
CN112184740B true CN112184740B (en) | 2022-06-21 |
Family
ID=73948195
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011064617.1A Active CN112184740B (en) | 2020-09-30 | 2020-09-30 | Image segmentation method based on statistical active contour and texture dictionary |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184740B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014152919A1 (en) * | 2013-03-14 | 2014-09-25 | Arizona Board Of Regents, A Body Corporate Of The State Of Arizona For And On Behalf Of Arizona State University | Kernel sparse models for automated tumor segmentation |
CN104637056A (en) * | 2015-02-02 | 2015-05-20 | 复旦大学 | Method for segmenting adrenal tumor of medical CT (computed tomography) image based on sparse representation |
CN104933711A (en) * | 2015-06-10 | 2015-09-23 | 南通大学 | Automatic fast segmenting method of tumor pathological image |
CN109559328A (en) * | 2018-11-13 | 2019-04-02 | 河海大学 | A kind of Fast image segmentation method and device based on Bayesian Estimation and level set |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101586276B1 (en) * | 2013-08-02 | 2016-01-18 | 서울대학교산학협력단 | Automated Mammographic Density Estimation and Display Method Using Prior Probability Information, System for the Method and Media Storing the Computer Program for the Method |
Non-Patent Citations (2)
Title |
---|
"Automatic Segmentation of Right Ventricle on Ultrasound Images Using Sparse Matrix Transform and Level Set"; Xulei Qin; Proceedings of SPIE 2013; 2013-12-12; full text *
"Multi-region level set image segmentation method based on texture features" (in Chinese); Wang Huibin et al.; Acta Electronica Sinica; November 2018; Vol. 46, No. 11; pp. 2588-2596 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109118564B (en) | Three-dimensional point cloud marking method and device based on fusion voxels | |
CN109472792B (en) | Local energy functional and non-convex regular term image segmentation method combining local entropy | |
CN109741341B (en) | Image segmentation method based on super-pixel and long-and-short-term memory network | |
Lozes et al. | Partial difference operators on weighted graphs for image processing on surfaces and point clouds | |
WO2023083059A1 (en) | Road surface defect detection method and apparatus, and electronic device and readable storage medium | |
Mandal et al. | Splinedist: Automated cell segmentation with spline curves | |
CN103593855B (en) | The image partition method of cluster is estimated based on particle group optimizing and space length | |
CN107491734B (en) | Semi-supervised polarimetric SAR image classification method based on multi-core fusion and space Wishart LapSVM | |
CN109559328B (en) | Bayesian estimation and level set-based rapid image segmentation method and device | |
CN112634149B (en) | Point cloud denoising method based on graph convolution network | |
CN101571951B (en) | Method for dividing level set image based on characteristics of neighborhood probability density function | |
CN110414616B (en) | Remote sensing image dictionary learning and classifying method utilizing spatial relationship | |
CN112116599A (en) | Sputum smear tubercle bacillus semantic segmentation method and system based on weak supervised learning | |
Abdelsamea et al. | A SOM-based Chan–Vese model for unsupervised image segmentation | |
CN113177592B (en) | Image segmentation method and device, computer equipment and storage medium | |
CN108090913B (en) | Image semantic segmentation method based on object-level Gauss-Markov random field | |
CN107766895B (en) | Induced non-negative projection semi-supervised data classification method and system | |
CN111126169B (en) | Face recognition method and system based on orthogonalization graph regular nonnegative matrix factorization | |
CN111815640A (en) | Memristor-based RBF neural network medical image segmentation algorithm | |
CN111639686B (en) | Semi-supervised classification method based on dimension weighting and visual angle feature consistency | |
CN110930369B (en) | Pathological section identification method based on group et-variable neural network and conditional probability field | |
CN112184740B (en) | Image segmentation method based on statistical active contour and texture dictionary | |
CN108009570A (en) | A kind of data classification method propagated based on the positive and negative label of core and system | |
Nguyen et al. | A new image segmentation approach based on the Louvain algorithm | |
CN115100406B (en) | Weight information entropy fuzzy C-means clustering method based on superpixel processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||