CN110751664A - Brain tissue segmentation method based on hyper-voxel matching - Google Patents

Brain tissue segmentation method based on hyper-voxel matching

Info

Publication number
CN110751664A
CN110751664A
Authority
CN
China
Prior art keywords
voxel
magnetic resonance
hyper
image
resonance image
Prior art date
Legal status
Granted
Application number
CN201910931927.XA
Other languages
Chinese (zh)
Other versions
CN110751664B (en)
Inventor
孔佑勇
周彬
章品正
杨冠羽
舒华忠
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910931927.XA priority Critical patent/CN110751664B/en
Publication of CN110751664A publication Critical patent/CN110751664A/en
Application granted granted Critical
Publication of CN110751664B publication Critical patent/CN110751664B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain

Abstract

The invention discloses a brain tissue segmentation method based on hyper-voxel matching, comprising the following steps. S1: generate hyper-voxel data for all magnetic resonance images with the SLIC algorithm. S2: preprocess all magnetic resonance images. S3: obtain the hyper-voxel features of each magnetic resonance image. S4: calculate the feature gradient between each hyper-voxel and its adjacent hyper-voxels, and obtain the sum of these gradients. S5: determine the label corresponding to each hyper-voxel in the template image. S6: concatenate the hyper-voxel features and feature-gradient sum of each magnetic resonance image into a vector, and from these vectors calculate the similarity between each hyper-voxel in the magnetic resonance image to be matched and those in the template image. S7: match the magnetic resonance image to be matched against the template image to determine the segmentation result of each magnetic resonance image to be matched. Because the matching process takes into account both the features of each hyper-voxel and the relations between adjacent hyper-voxels, an effective matching result is obtained.

Description

Brain tissue segmentation method based on hyper-voxel matching
Technical Field
The invention relates to the technical field of image processing, in particular to a brain tissue segmentation method based on hyper-voxel matching.
Background
The goal of brain tissue segmentation in magnetic resonance imaging is to separate the brain into white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). Accurate segmentation of brain tissue is an important step in the diagnosis and treatment of disease: by measuring changes in the tissue structure of regions of interest in the brain, it can be used to assess the severity of certain diseases and to track brain evolution.
Feature matching is a common means of image segmentation. A labeled image serves as the template and is divided into a number of regions; the image to be segmented is likewise divided into regions. For each region in the image to be segmented, the template region whose features are closest is found, and the region is assigned that template region's label. The regions may be regular image blocks, but in order to group similar pixels together, a superpixel algorithm (producing supervoxels in a three-dimensional image) is generally adopted to generate the regions, so that matching is carried out between supervoxels.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problem that traditional image matching considers only the features of an image region and ignores the relations between hyper-voxels, the invention provides a brain tissue segmentation method based on hyper-voxel matching.
The technical scheme is as follows: in order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows:
a brain tissue segmentation method based on hyper-voxel matching specifically comprises the following steps:
s1: generating hyper-voxel data by all magnetic resonance images according to a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and template images;
s2: preprocessing all the magnetic resonance images;
s3: superposing the segmentation boundaries obtained by all the magnetic resonance images based on the three-dimensional SLIC algorithm in all the preprocessed magnetic resonance images to obtain the hyper-voxel characteristics of each magnetic resonance image;
s4: according to the super voxel characteristics of each magnetic resonance image, calculating the characteristic gradient between each super voxel and the adjacent super voxel, and acquiring the sum of the characteristic gradients;
s5: determining a label corresponding to each hyper-voxel in the template image;
s6: serially connecting the super voxel characteristics and the characteristic gradient sum of each magnetic resonance image into a vector, and calculating the similarity of each super voxel in the magnetic resonance image to be matched and the template image according to the vector;
s7: and matching the magnetic resonance image to be matched with the template image according to the similarity of each hyper-voxel in the magnetic resonance image to be matched and the template image, and determining the segmentation result of each magnetic resonance image to be matched.
Further, in step S1, all the magnetic resonance images generate hyper-voxel data as follows:
S1.1: set clustering centers, distribute them uniformly within each magnetic resonance image, and label all the clustering centers in each magnetic resonance image in order;
S1.2: calculate the distance between each clustering center and every voxel in its neighborhood, wherein the neighborhood range and the distance are specifically:

R_i = (2S_i)^3

D_i = sqrt( d_ic^2 + (d_is / S)^2 * m^2 )
d_ic = | v_j - v_k |
d_is = sqrt( (x_j - x_k)^2 + (y_j - y_k)^2 + (z_j - z_k)^2 )

wherein: R_i is the neighborhood range around a clustering center in the ith magnetic resonance image; D_i is the distance between the clustering center and the ith voxel in its neighborhood; S_i is the distance between adjacent seed points in the ith magnetic resonance image; d_ic is the gray-space distance between the clustering center and the ith voxel in its neighborhood; d_is is the spatial distance between the clustering center and the ith voxel in its neighborhood; v_j is the gray value of the current voxel; v_k is the gray value of the clustering center; x_j, y_j and z_j are the coordinates of the current voxel in three-dimensional space; x_k, y_k and z_k are the coordinates of the clustering center in three-dimensional space; S is the distance between adjacent seed points in the magnetic resonance image containing the clustering center; and m is a parameter that weights the gray-space distance against the spatial distance;
S1.3: for each voxel, compare all the distances computed for it and select the minimum; the clustering center corresponding to the minimum distance is the clustering center to which the voxel belongs, and the label of that clustering center is the label of the voxel;
S1.4: according to the label of each voxel, update the spatial position of each clustering center to the geometric center of all voxels in its neighborhood;
S1.5: repeat steps S1.2 to S1.4 with the updated spatial positions of the clustering centers until the spatial positions of all clustering centers no longer change.
Further, in step S1.1, the size of a hyper-voxel in each magnetic resonance image and the distance between adjacent clustering centers are specifically:

L_i = N_i / K_i
S_i = (N_i / K_i)^(1/3)

wherein: L_i is the size of a hyper-voxel in the ith magnetic resonance image; S_i is the distance between adjacent seed points in the ith magnetic resonance image; N_i is the number of voxels in the ith magnetic resonance image; and K_i is the number of hyper-voxels in the ith magnetic resonance image.
Further, in step S2, all the magnetic resonance images are preprocessed, specifically:
template image: normalizing the gray value of the template image to be between 0 and 1, and processing the template image through a histogram equalization algorithm;
magnetic resonance image to be matched: normalize its gray values to between 0 and 1, then process it with the histogram equalization algorithm with reference to the preprocessed template image, so that its gray distribution approaches that of the template.
Further, in step S4, the feature gradient between each hyper-voxel and its adjacent hyper-voxels and the sum of those gradients are specifically:

ΔH_i = | H_a - H_i |
Gradsum_a = Σ_{i=1..n} ΔH_i

wherein: ΔH_i is the feature gradient between the current hyper-voxel and the ith adjacent hyper-voxel; H_a is the gray-histogram feature of the current hyper-voxel; H_i is the gray-histogram feature of the ith adjacent hyper-voxel; Gradsum_a is the sum of the feature gradients between the current hyper-voxel and all of its adjacent hyper-voxels; and n is the number of adjacent hyper-voxels.
Further, in step S5, the label corresponding to each hyper-voxel in the template image is determined, specifically:
for each hyper-voxel, according to the segmentation labels marked in the template image, count the number of voxels belonging to each category and select the largest count; the category with the largest count is the label of the hyper-voxel.
Further, in step S6, the hyper-voxel features and the feature-gradient sum of each magnetic resonance image are concatenated into a vector, the concatenation rule being specifically:
according to a preset weight W, multiply the feature-gradient sum by W, and then concatenate it with the gray-histogram feature vector of each hyper-voxel.
Further, in step S6, the similarity between each hyper-voxel in the magnetic resonance image to be matched and each hyper-voxel in the template image is computed from the vectors, specifically:
compute the Euclidean distance between the concatenated vector from the magnetic resonance image to be matched and the concatenated vector from the template image; the smaller the Euclidean distance, the more similar the two hyper-voxels. The Euclidean distance is specifically:

d(A_i, B_j) = sqrt( Σ_k (F_a^k - F_b^k)^2 )

wherein: d(A_i, B_j) is the Euclidean distance between the ith hyper-voxel in the magnetic resonance image to be matched and the jth hyper-voxel in the template image; F_a^k is the kth dimension of the final feature vector of the hyper-voxel in the magnetic resonance image to be matched; and F_b^k is the kth dimension of the final feature vector of the hyper-voxel in the template image.
Further, in step S7, the segmentation result of each magnetic resonance image to be matched is determined, specifically:
S7.1: in the template image, find the N hyper-voxels closest to the current hyper-voxel of the magnetic resonance image to be matched; determine the most frequent label among these N hyper-voxels, and assign it to the current hyper-voxel;
S7.2: repeat step S7.1 for every hyper-voxel in the magnetic resonance image to be matched to obtain the label corresponding to each hyper-voxel, and determine the segmentation result of each magnetic resonance image to be matched from those labels.
Advantageous effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
the brain tissue segmentation method first extracts the features of each hyper-voxel (these features describe only the hyper-voxel itself), then computes the gradient features between each hyper-voxel and its neighboring hyper-voxels, then performs hyper-voxel matching, and finally maps the hyper-voxel classification result back to the voxels to obtain the tissue segmentation result. The matching process therefore takes into account both the features of each hyper-voxel and the relations between adjacent hyper-voxels, yielding an effective matching result.
Drawings
FIG. 1 is a schematic flow diagram of a brain tissue segmentation method of the present invention;
FIG. 2 is a brain magnetic resonance image;
FIG. 3 is a label image corresponding to a brain magnetic resonance image;
FIG. 4 is a hyper-voxel image generated on a brain magnetic resonance image;
FIG. 5 is a graph of the results obtained without the brain tissue segmentation method of the present invention;
fig. 6 is a graph showing the results of matching using the brain tissue segmentation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. The described embodiments are a subset of the embodiments of the invention and are not all embodiments of the invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Example 1
Referring to fig. 1, the present embodiment provides a brain tissue segmentation method based on hyper-voxel matching, which specifically includes the following steps:
step S1: generating hyper-voxel data for all magnetic resonance images by applying a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and template images, and the method specifically comprises the following steps:
step S1.1: and taking the initialized seed points as clustering centers, and simultaneously uniformly distributing the seed points in each magnetic resonance image according to the preset number of the superpixels, namely determining the number of the clustering centers by the preset number of the superpixels, wherein the clustering centers in each magnetic resonance image are uniformly distributed in each magnetic resonance image. And all cluster centers in each magnetic resonance image are labeled in order.
It should be noted that the setting of the number of voxels is not specifically required, and may be selected according to the user's requirement.
In the present embodiment, the size of a hyper-voxel in each magnetic resonance image is specifically:

L_i = N_i / K_i

wherein: L_i is the size of a hyper-voxel in the ith magnetic resonance image, N_i is the number of voxels in the ith magnetic resonance image, and K_i is the number of hyper-voxels in the ith magnetic resonance image.
The distance between adjacent seed points in each magnetic resonance image, that is, the distance between adjacent clustering centers, is specifically:

S_i = (N_i / K_i)^(1/3)

wherein: S_i is the distance between adjacent seed points in the ith magnetic resonance image, N_i is the number of voxels in the ith magnetic resonance image, and K_i is the number of hyper-voxels in the ith magnetic resonance image.
Step S1.2: within a neighborhood around each seed point (clustering center), calculate the distance between the clustering center and each voxel in the neighborhood. The neighborhood range around a clustering center in each magnetic resonance image is specifically:

R_i = (2S_i)^3

wherein: R_i is the neighborhood range around a clustering center in the ith magnetic resonance image, and S_i is the distance between adjacent seed points in the ith magnetic resonance image.
The distance between the clustering center and each voxel in its neighborhood is specifically:

D_i = sqrt( d_ic^2 + (d_is / S)^2 * m^2 )
d_ic = | v_j - v_k |
d_is = sqrt( (x_j - x_k)^2 + (y_j - y_k)^2 + (z_j - z_k)^2 )

wherein: D_i is the distance between the clustering center and the ith voxel in its neighborhood; d_ic is the gray-space distance between the clustering center and the ith voxel in its neighborhood; d_is is the spatial distance between the clustering center and the ith voxel in its neighborhood; v_j is the gray value of the current voxel; v_k is the gray value of the clustering center; x_j, y_j and z_j are the coordinates of the current voxel in three-dimensional space; x_k, y_k and z_k are the coordinates of the clustering center in three-dimensional space; S is the distance between adjacent seed points in the magnetic resonance image containing the clustering center; and m is a parameter that weights the gray-space distance against the spatial distance.
Step S1.3: determine the clustering center to which each voxel belongs according to the distances between the clustering centers and the voxels in their neighborhoods, thereby assigning each voxel a label, namely the label of the clustering center of the neighborhood in which the voxel lies. Specifically, all distances calculated for each voxel are compared and the minimum is selected; the clustering center corresponding to the minimum distance is the one to which the voxel belongs, and the voxel's label is that clustering center's label.
Step S1.4: update the spatial position of each clustering center according to the labels assigned to the voxels; that is, according to the distribution of all voxels bearing the same label, move the corresponding clustering center to the geometric center of those voxels.
Step S1.5: repeat steps S1.2 to S1.4 with the updated spatial positions of the clustering centers until the spatial positions of all clustering centers no longer change.
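The loop of steps S1.1 to S1.5 can be sketched in NumPy as follows. This is an illustrative simplification of three-dimensional SLIC, not the patent's implementation: the function name `slic_3d` and the fixed iteration count (instead of iterating to convergence) are our own choices.

```python
import numpy as np

def slic_3d(img, K, m=10.0, n_iter=5):
    """Simplified 3D SLIC: seed K cluster centers on a regular grid with
    spacing S = (N/K)^(1/3), then alternate (a) assigning each voxel in a
    2S neighborhood to the nearest center under D = sqrt(d_c^2 + (d_s/S)^2 m^2)
    and (b) moving each center to the geometric center of its voxels."""
    img = img.astype(float)
    shape = np.array(img.shape)
    S = max(int(round((img.size / K) ** (1.0 / 3.0))), 1)   # seed spacing
    axes = [np.arange(S // 2, d, S) for d in img.shape]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3).astype(float)
    labels = np.zeros(img.shape, dtype=int)
    for _ in range(n_iter):
        dist = np.full(img.shape, np.inf)
        for k, c in enumerate(centers):
            ci = np.clip(np.round(c).astype(int), 0, shape - 1)
            lo = np.maximum(ci - S, 0)
            hi = np.minimum(ci + S + 1, shape)
            sl = tuple(slice(a, b) for a, b in zip(lo, hi))
            xyz = np.stack(np.meshgrid(*[np.arange(a, b) for a, b in zip(lo, hi)],
                                       indexing="ij"), -1)
            d_c = np.abs(img[sl] - img[tuple(ci)])          # gray distance d_ic
            d_s = np.linalg.norm(xyz - c, axis=-1)          # spatial distance d_is
            D = np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)
            better = D < dist[sl]
            dist[sl][better] = D[better]
            labels[sl][better] = k
        for k in range(len(centers)):                        # step S1.4: recenter
            pts = np.argwhere(labels == k)
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return labels
```

On a 12 x 12 x 12 volume with K = 8, this yields roughly cube-shaped supervoxels of side 6.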
Step S2: preprocess the magnetic resonance images to be matched and the template image so that the gray-feature distributions of the samples are similar, in particular so that the distribution of each image to be matched is similar to that of the template image, thereby improving matching accuracy. Specifically:
for the template image, the gray value of the template image needs to be normalized to be between 0 and 1, and then the template image is processed by using a histogram equalization algorithm, so that the contrast of the template image is improved.
For the magnetic resonance image to be matched, the gray value of the magnetic resonance image to be matched is normalized to be between 0 and 1, and then the magnetic resonance image to be matched is processed by using a histogram equalization algorithm according to the preprocessed template image, so that the gray value distribution of the magnetic resonance image to be matched is closer to that of the template image.
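A minimal sketch of this preprocessing, assuming a simple empirical-CDF equalization and quantile-based histogram matching (the patent does not specify the exact algorithms; the function names are ours):

```python
import numpy as np

def normalize01(img):
    """Normalize gray values to the [0, 1] range."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def hist_equalize(img, bins=256):
    """Histogram equalization via the empirical CDF (for the template image)."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def hist_match(img, ref):
    """Quantile mapping: push img's gray distribution toward ref's
    (for the images to be matched, after the template is equalized)."""
    src = np.sort(img.ravel())
    tgt = np.sort(ref.ravel())
    ranks = np.searchsorted(src, img.ravel()) / src.size
    return np.interp(ranks, np.linspace(0.0, 1.0, tgt.size), tgt).reshape(img.shape)
```

After `hist_match`, the gray distribution of the image to be matched closely follows that of the equalized template.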
Step S3: for the preprocessed magnetic resonance images to be matched and the preprocessed template image, obtain the hyper-voxel features of each image; that is, superimpose the segmentation boundaries obtained by the three-dimensional SLIC algorithm onto the images preprocessed in step S2, dividing each image into different regions.
In this embodiment, a gray-histogram feature is adopted: the gray range of the whole image is divided uniformly into 16 intervals, and the number of voxels of each hyper-voxel whose gray value falls into each interval is counted, giving a 16-dimensional vector. This vector is the gray-histogram feature of the hyper-voxel and represents its gray distribution.
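A sketch of this feature extraction; the 16 uniform bins over the whole image's gray range and the raw voxel counts follow the description above, while the function name `supervoxel_histograms` is ours:

```python
import numpy as np

def supervoxel_histograms(img, labels, n_bins=16):
    """For each supervoxel, count its voxels in n_bins uniform gray intervals
    spanning the whole image's gray range -> one 16-dim vector per supervoxel."""
    ids = np.unique(labels)
    edges = np.linspace(img.min(), img.max(), n_bins + 1)
    feats = np.zeros((ids.size, n_bins))
    for row, sv in enumerate(ids):
        h, _ = np.histogram(img[labels == sv], bins=edges)
        feats[row] = h
    return ids, feats
```

Since every voxel falls into exactly one bin, the counts in each row sum to the size of that supervoxel.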
Step S4: from the gray-histogram feature of each hyper-voxel, calculate the feature gradient between each hyper-voxel and each of its adjacent hyper-voxels, and obtain the sum of these gradients, specifically:

ΔH_i = | H_a - H_i |
Gradsum_a = Σ_{i=1..n} ΔH_i

wherein: ΔH_i is the feature gradient between the current hyper-voxel and the ith adjacent hyper-voxel; H_a is the gray-histogram feature of the current hyper-voxel; H_i is the gray-histogram feature of the ith adjacent hyper-voxel; Gradsum_a is the sum of the feature gradients between the current hyper-voxel and all of its adjacent hyper-voxels; and n is the number of adjacent hyper-voxels.
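The patent's gradient equations are rendered as images; consistent with the later statement that the gradient sum is itself a 16-dimensional vector, we read ΔH_i as the elementwise absolute difference. A sketch under that assumption, with an adjacency helper (`adjacent_pairs`, our own) that treats two supervoxels as adjacent when their labels touch along any axis:

```python
import numpy as np

def adjacent_pairs(labels):
    """Supervoxel adjacency: pairs of labels that touch along any axis."""
    pairs = set()
    for ax in range(labels.ndim):
        a = np.take(labels, range(labels.shape[ax] - 1), axis=ax).ravel()
        b = np.take(labels, range(1, labels.shape[ax]), axis=ax).ravel()
        touch = a != b
        lo = np.minimum(a[touch], b[touch])
        hi = np.maximum(a[touch], b[touch])
        pairs |= set(zip(lo.tolist(), hi.tolist()))
    return pairs

def gradient_sum(H_a, neighbor_hists):
    """Gradsum_a = sum over adjacent supervoxels i of |H_a - H_i| (elementwise)."""
    H_a = np.asarray(H_a, float)
    return sum(np.abs(H_a - np.asarray(H, float)) for H in neighbor_hists)
```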
Step S5: determine the label corresponding to each hyper-voxel in the template image. The template image carries segmentation labels manually annotated by an expert, from which a label for each hyper-voxel can be generated, specifically:
for each hyper-voxel, count the number of voxels belonging to each category according to the segmentation labels, and select the largest count; the category with the largest count is the label of the hyper-voxel.
Step S6: concatenate the gray-histogram feature vector of each hyper-voxel and the sum of the feature gradients between that hyper-voxel and all its adjacent hyper-voxels into one vector, which serves as the hyper-voxel's final feature vector for matching. The concatenation rule is specifically:
first set a weight W for the feature-gradient-sum vector of each hyper-voxel, multiply the sum by W, and then concatenate it with the gray-histogram feature vector of the hyper-voxel. The value of W is determined by the specific data set.
Meanwhile, the similarity between each hyper-voxel in the magnetic resonance image to be matched and each hyper-voxel in the template image can be computed from these vectors, with the comparison carried out via the Euclidean distance, specifically:
in both the magnetic resonance image to be matched and the template image, the gray-histogram feature vector of each hyper-voxel and its feature-gradient sum are 16-dimensional vectors, so their concatenation is a 32-dimensional vector. In the magnetic resonance image to be matched, this 32-dimensional vector is the hyper-voxel's final feature vector, denoted F_a; in the template image, it is denoted F_b.
From the final feature vector F_a of each hyper-voxel in the magnetic resonance image to be matched and the final feature vector F_b of each hyper-voxel in the template image, the Euclidean distance between the two hyper-voxels is calculated; the smaller the Euclidean distance, the more similar the two hyper-voxels. The Euclidean distance is specifically:

d(A_i, B_j) = sqrt( Σ_k (F_a^k - F_b^k)^2 )

wherein: d(A_i, B_j) is the Euclidean distance between the ith hyper-voxel in the magnetic resonance image to be matched and the jth hyper-voxel in the template image; F_a^k is the kth dimension of the final feature vector of the hyper-voxel in the magnetic resonance image to be matched; and F_b^k is the kth dimension of the final feature vector of the hyper-voxel in the template image.
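The concatenation and distance computation of step S6 can be sketched as (W is data-set dependent; function names ours):

```python
import numpy as np

def final_feature(hist, gradsum, W=1.0):
    """F = [H ; W * Gradsum]: two 16-dim vectors -> one 32-dim final vector."""
    return np.concatenate([np.asarray(hist, float), W * np.asarray(gradsum, float)])

def euclidean(F_a, F_b):
    """d(A_i, B_j) = sqrt(sum_k (F_a^k - F_b^k)^2)."""
    return float(np.linalg.norm(np.asarray(F_a, float) - np.asarray(F_b, float)))
```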
Step S7: match all the magnetic resonance images to be matched against the template image according to the Euclidean distances from step S6, and determine the segmentation result of each image; the matching is performed by voting. Specifically:
Step S7.1: for a hyper-voxel in the magnetic resonance image to be matched, find the N hyper-voxels in the template image at the smallest distance from it. Since step S5 assigned a label to every hyper-voxel in the template image, the most frequent label among these N hyper-voxels is determined and assigned to the current hyper-voxel.
In this embodiment, the 10 nearest hyper-voxels are searched in the template image, i.e. N = 10, where N is the number of nearest hyper-voxels considered in the template image.
Step S7.2: perform step S7.1 for every hyper-voxel in the magnetic resonance image to be matched, so that each hyper-voxel receives a label, giving the segmentation result of the magnetic resonance image to be matched.
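The N-nearest-neighbour voting of step S7 can be sketched as (function name ours; ties are resolved by the order `np.unique` returns labels in):

```python
import numpy as np

def match_labels(F_query, F_template, template_labels, N=10):
    """Step S7: for each query supervoxel, take the N template supervoxels
    with the smallest Euclidean feature distance and vote on their labels."""
    F_query = np.asarray(F_query, float)
    F_template = np.asarray(F_template, float)
    template_labels = np.asarray(template_labels)
    out = []
    for f in F_query:
        d = np.linalg.norm(F_template - f, axis=1)   # distance to every template sv
        nearest = np.argsort(d)[:N]                  # N closest template supervoxels
        votes, counts = np.unique(template_labels[nearest], return_counts=True)
        out.append(votes[np.argmax(counts)])         # majority label wins
    return np.array(out)
```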
To validate the brain tissue segmentation method based on hyper-voxel matching in practical application, this embodiment takes the IBSR18 data set as an example of brain tissue extraction from brain magnetic resonance imaging.
Experimental conditions: the experiment was run on a computer with a 64-bit operating system; the programming languages were Matlab (version R2014a) and Python (version 3.5).
The experimental data are brain magnetic resonance images from the IBSR18 data set. The Internet Brain Segmentation Repository provides manually guided expert segmentation results together with magnetic resonance brain image data. The IBSR18 data set contains T1-weighted magnetic resonance images of 18 healthy subjects; each image is a 256 × 256 × 128 three-dimensional volume. The data set includes expert-annotated labels for gray matter, white matter and cerebrospinal fluid: in a label volume, 0 means the voxel belongs to non-brain tissue (the background region), 1 to cerebrospinal fluid, 2 to gray matter, and 3 to white matter. Referring to fig. 2 and fig. 3, an MRI image from the IBSR18 data set and its corresponding segmentation labels are shown. Brain tissue segmentation is achieved by performing hyper-voxel matching according to the method designed above: fig. 4 shows the hyper-voxels generated for a brain magnetic resonance image, and fig. 6 shows the segmentation result obtained after the whole procedure. For comparison, fig. 5 shows the result obtained using only gray-histogram feature matching, without the method designed above.
In the experiment, the first sample is used as a template image, and the rest 17 samples are used as images to be matched, and the experiment is carried out according to the steps of the invention.
In order to test the segmentation precision on the brain magnetic resonance images, the Dice coefficient is used as the evaluation index; it is obtained specifically as:

Dice = 2TP / (2TP + FP + FN)

wherein: TP is the overlap between the segmented brain tissue region and the expert manual segmentation template; FP is the region segmented as brain tissue that does not belong to the expert manual segmentation template; and FN is the region that is brain tissue in the expert manual segmentation template but was not segmented.
The Dice coefficient is a set similarity metric function, and is generally used for calculating the similarity of two samples. This is used to measure the degree of similarity between the brain tissue region extracted by the method and the real result.
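A per-class Dice computation matching the definitions above (function name ours):

```python
import numpy as np

def dice(pred, gt, cls):
    """Dice = 2*TP / (2*TP + FP + FN) for one tissue class."""
    p = np.asarray(pred) == cls
    g = np.asarray(gt) == cls
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    # By convention, return 1.0 when the class is absent from both volumes.
    return 2.0 * tp / (2.0 * tp + fp + fn) if (tp + fp + fn) else 1.0
```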
On the IBSR18 data set, the Dice indices for the method of the invention and for the baseline without it are given in Table 1:
TABLE 1
(Table 1 is provided as an image in the original publication; its contents are not recoverable here.)
As Table 1 shows, compared with matching that uses only the hyper-voxel's own features, the design method of the invention obtains a better matching result by adding the sum of the feature gradients; taking the relations between adjacent hyper-voxels into account during matching improves the segmentation accuracy.
The present invention and its embodiments have been described above in an illustrative manner, and this description is not limiting; the accompanying drawings show only exemplary embodiments, and the actual constructions and methods are not limited to them. Therefore, structures and embodiments similar to these technical solutions, designed by those skilled in the art in light of this teaching without departing from the spirit of the invention and without creative work, all fall within the protection scope of the invention.

Claims (9)

1. A brain tissue segmentation method based on hyper-voxel matching is characterized by comprising the following steps:
s1: generating hyper-voxel data from all magnetic resonance images according to a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and the template image;
s2: preprocessing all the magnetic resonance images;
s3: superposing, in each preprocessed magnetic resonance image, the segmentation boundaries obtained for that image by the three-dimensional SLIC algorithm, so as to obtain the hyper-voxel features of each magnetic resonance image;
s4: calculating, according to the hyper-voxel features of each magnetic resonance image, the feature gradient between each hyper-voxel and its adjacent hyper-voxels, and obtaining the sum of the feature gradients;
s5: determining a label corresponding to each hyper-voxel in the template image;
s6: concatenating the hyper-voxel features and the sum of feature gradients of each magnetic resonance image into a vector, and calculating, according to the vectors, the similarity between each hyper-voxel in the magnetic resonance image to be matched and each hyper-voxel in the template image;
s7: matching the magnetic resonance image to be matched with the template image according to the similarity between each hyper-voxel in the magnetic resonance image to be matched and each hyper-voxel in the template image, and determining the segmentation result of each magnetic resonance image to be matched.
2. The brain tissue segmentation method based on hyper-voxel matching according to claim 1, wherein in step S1, the hyper-voxel data are generated from all the magnetic resonance images specifically as follows:
s1.1: setting cluster centers, distributing them uniformly in each magnetic resonance image, and sequentially labeling all the cluster centers in each magnetic resonance image;
s1.2: calculating the distance between each cluster center and each voxel in its neighborhood, wherein the neighborhood size and the distance are specifically:

R_i = 2S_i × 2S_i × 2S_i

d_ic = |v_j − v_k|

d_is = √((x_j − x_k)² + (y_j − y_k)² + (z_j − z_k)²)

D_i = √(d_ic² + (d_is / S)² · m²)

wherein: R_i is the neighborhood range around the cluster center in the ith magnetic resonance image, D_i is the distance between the cluster center and the ith voxel in its neighborhood, S_i is the distance between adjacent seed points in the ith magnetic resonance image, d_ic is the distance in gray space between the cluster center and the ith voxel in its neighborhood, d_is is the spatial distance between the cluster center and the ith voxel in its neighborhood, v_j is the gray value of the current voxel, v_k is the gray value of the cluster center, x_j, y_j and z_j are the x-, y- and z-axis coordinates of the current voxel in three-dimensional space, x_k, y_k and z_k are the x-, y- and z-axis coordinates of the cluster center in three-dimensional space, S is the distance between adjacent seed points in the magnetic resonance image where the cluster center is located, and m is a parameter adjusting the weight between the gray-space distance and the spatial distance;
s1.3: comparing, according to the distances between the cluster centers and the voxels in their neighborhoods, all the distances corresponding to each voxel, and selecting the minimum distance, wherein the cluster center corresponding to the minimum distance is the cluster center to which the voxel belongs, and the label of that cluster center is the label of the voxel;
s1.4: updating, according to the label of each voxel, the spatial position of each cluster center to the geometric center of all voxels belonging to it;
s1.5: repeating steps S1.2 to S1.4 with the updated spatial positions of the cluster centers until the spatial positions of all cluster centers no longer change.
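Steps S1.1 to S1.5 can be sketched as a minimal 3D SLIC in Python. The 2S-wide search neighborhood and the distance D = √(d_c² + (d_s/S)²·m²) follow the claim; the function name, default parameters, and grid-seeding details are my own assumptions:

```python
import numpy as np

def slic_3d(vol, n_clusters=8, m=10.0, n_iter=5):
    """Minimal 3D SLIC sketch (S1.1-S1.5): grid-seeded local k-means
    in (gray, x, y, z) space with D = sqrt(d_c**2 + (d_s/S)**2 * m**2)."""
    Z, Y, X = vol.shape
    S = max(1, int(round((vol.size / n_clusters) ** (1.0 / 3.0))))  # seed spacing

    # S1.1: cluster centers on a uniform grid, labeled in order.
    centers = [[vol[z, y, x], z, y, x]
               for z in range(S // 2, Z, S)
               for y in range(S // 2, Y, S)
               for x in range(S // 2, X, S)]
    centers = np.array(centers, dtype=float)

    labels = np.full(vol.shape, -1, dtype=int)
    dists = np.full(vol.shape, np.inf)

    for _ in range(n_iter):
        dists.fill(np.inf)
        labels.fill(-1)
        for k, (g, cz, cy, cx) in enumerate(centers):
            # S1.2: distances inside a 2S-wide neighborhood of the center.
            z0, z1 = max(0, int(cz) - S), min(Z, int(cz) + S + 1)
            y0, y1 = max(0, int(cy) - S), min(Y, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(X, int(cx) + S + 1)
            zz, yy, xx = np.mgrid[z0:z1, y0:y1, x0:x1]
            d_c = np.abs(vol[z0:z1, y0:y1, x0:x1] - g)               # gray distance
            d_s = np.sqrt((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2)
            D = np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)
            # S1.3: each voxel keeps the nearest center as its label.
            view_d = dists[z0:z1, y0:y1, x0:x1]
            view_l = labels[z0:z1, y0:y1, x0:x1]
            better = D < view_d
            view_d[better] = D[better]
            view_l[better] = k
        # S1.4: move each center to the geometric center of its voxels.
        for k in range(len(centers)):
            zs, ys, xs = np.nonzero(labels == k)
            if zs.size:
                cz, cy, cx = zs.mean(), ys.mean(), xs.mean()
                centers[k] = [vol[int(cz), int(cy), int(cx)], cz, cy, cx]
    return labels
```

In practice the iteration would stop once the centers no longer move (S1.5); a fixed iteration count is used here for simplicity.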
3. The brain tissue segmentation method based on hyper-voxel matching according to claim 2, wherein in step S1.1, the size of the hyper-voxels in each magnetic resonance image and the distance between adjacent cluster centers are specifically:

L_i = N_i / K_i

S_i = ∛(N_i / K_i)

wherein: L_i is the size of the hyper-voxels in the ith magnetic resonance image, S_i is the distance between adjacent seed points in the ith magnetic resonance image, N_i is the number of voxels in the ith magnetic resonance image, and K_i is the number of hyper-voxels in the ith magnetic resonance image.
4. The brain tissue segmentation method based on hyper-voxel matching according to claim 1 or 2, wherein in step S2, all the magnetic resonance images are preprocessed, specifically:
template image: normalizing the gray values of the template image to between 0 and 1, and processing the template image with a histogram equalization algorithm;
magnetic resonance image to be matched: normalizing the gray values of the magnetic resonance image to be matched to between 0 and 1, and processing it with the histogram equalization algorithm with reference to the preprocessed template image.
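The claim does not specify the equalization implementation, so the following sketch uses a common CDF-based equalization for the template and CDF matching ("histogram specification") toward the template for the image to be matched; function names and the bin count are assumptions:

```python
import numpy as np

def normalize(img):
    """Scale gray values into [0, 1] (S2 preprocessing)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min())

def equalize(img, bins=256):
    """Histogram equalization via the cumulative distribution function."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img, edges[:-1], cdf)

def match_to_template(img, template, bins=256):
    """Map the image's gray CDF onto the template's (specification)."""
    t_hist, edges = np.histogram(template, bins=bins, range=(0.0, 1.0))
    t_cdf = t_hist.cumsum() / t_hist.sum()
    i_hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    i_cdf = i_hist.cumsum() / i_hist.sum()
    # For each input gray level, find the template level with the same CDF value.
    mapping = np.interp(i_cdf, t_cdf, edges[:-1])
    return np.interp(img, edges[:-1], mapping)
```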
5. The brain tissue segmentation method based on hyper-voxel matching according to claim 4, wherein in step S4, the feature gradient between each hyper-voxel and its adjacent hyper-voxels and the sum of the feature gradients are specifically:

ΔH_i = H_a − H_i

Gradsum_a = Σ_{i=1..n} ΔH_i

wherein: ΔH_i is the feature gradient between the current hyper-voxel and the ith adjacent hyper-voxel, H_a is the gray-histogram feature of the current hyper-voxel, H_i is the gray-histogram feature of the ith adjacent hyper-voxel, Gradsum_a is the sum of the feature gradients between the current hyper-voxel and all its adjacent hyper-voxels, and n is the number of adjacent hyper-voxels.
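The feature-gradient sum of claim 5 can be sketched as follows; the exact form of the gradient appears only as a formula image in the original, so the element-wise difference H_a − H_i used here is an assumption:

```python
import numpy as np

def feature_gradient_sum(h_a, neighbors):
    """Sum of feature gradients between a hyper-voxel and its neighbors (S4).

    h_a: gray-histogram feature of the current hyper-voxel.
    neighbors: histogram features H_i of the n adjacent hyper-voxels.
    """
    grads = [h_a - np.asarray(h_i) for h_i in neighbors]  # delta H_i = H_a - H_i
    return np.sum(grads, axis=0)                          # Gradsum_a
```

For a hyper-voxel whose neighbors' histograms straddle its own symmetrically, the gradients cancel and the sum is near zero, which is what makes the sum a descriptor of local contrast.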
6. The method according to claim 5, wherein in step S5, the label corresponding to each hyper-voxel in the template image is determined, specifically:
according to the segmentation labels annotated in the template image, counting, for each hyper-voxel, the number of voxels belonging to each category in the region corresponding to the segmentation labels, and selecting the maximum count, wherein the category corresponding to the maximum count is the label of the hyper-voxel.
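A sketch of this majority-vote labeling, assuming the SLIC hyper-voxel labels and the expert segmentation are given as integer arrays of the same shape (names are my own):

```python
import numpy as np

def supervoxel_labels(sv, seg):
    """Assign each hyper-voxel the majority class of its voxels (S5).

    sv:  integer hyper-voxel label per voxel (from SLIC).
    seg: expert segmentation class per voxel (template annotation).
    """
    out = {}
    for k in np.unique(sv):
        classes, counts = np.unique(seg[sv == k], return_counts=True)
        out[k] = classes[np.argmax(counts)]  # class with the most voxels wins
    return out
```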
7. The method according to claim 6, wherein in step S6, the hyper-voxel features and the sum of feature gradients of each magnetic resonance image are concatenated into a vector, the concatenation principle being specifically:
multiplying the sum of feature gradients by a preset weight W, and then concatenating it with the gray-histogram feature vector of each hyper-voxel.
8. The method according to claim 6, wherein in step S6, the similarity between each hyper-voxel in the magnetic resonance image to be matched and each hyper-voxel in the template image is calculated according to the vectors, specifically:
calculating the Euclidean distance between the vectors obtained by concatenation in the magnetic resonance image to be matched and the vectors obtained by concatenation in the template image, wherein the smaller the Euclidean distance, the more similar the two hyper-voxels; the Euclidean distance is calculated specifically as:

D(A_i, B_j) = √( Σ_k (a_ik − b_jk)² )

wherein: D(A_i, B_j) is the Euclidean distance between the ith hyper-voxel in the magnetic resonance image to be matched and the jth hyper-voxel in the template image, a_ik is the kth-dimensional component of the final feature vector of the hyper-voxel in the magnetic resonance image to be matched, and b_jk is the kth-dimensional component of the final feature vector of the hyper-voxel in the template image.
9. The brain tissue segmentation method based on hyper-voxel matching according to claim 8, wherein in step S7, the segmentation result of each magnetic resonance image to be matched is determined, specifically:
s7.1: searching the template image for the N hyper-voxels at the smallest distance from a hyper-voxel in the magnetic resonance image to be matched, counting the label with the largest proportion among these N hyper-voxels, and assigning that label to the current hyper-voxel in the magnetic resonance image to be matched;
s7.2: repeating step S7.1 for each hyper-voxel in the magnetic resonance image to be matched to obtain the label corresponding to each hyper-voxel, and determining the segmentation result of each magnetic resonance image to be matched according to those labels.
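Claims 7 to 9 together describe the matching: concatenate each hyper-voxel's histogram feature with W times its gradient sum, compute Euclidean distances to all template hyper-voxels, and vote among the N nearest. A compact sketch (the values of W and N, and all names, are assumptions):

```python
import numpy as np

def final_feature(hist, gradsum, w=0.5):
    """Claim 7: concatenate the histogram feature with w * gradient-sum."""
    return np.concatenate([np.asarray(hist), w * np.asarray(gradsum)])

def match_labels(feats_src, feats_tpl, labels_tpl, n=3):
    """Claim 9: for each hyper-voxel to be matched, take the N template
    hyper-voxels at smallest Euclidean distance and vote on their labels."""
    out = []
    for f in feats_src:
        d = np.linalg.norm(feats_tpl - f, axis=1)   # claim 8: Euclidean distance
        nearest = np.argsort(d)[:n]                 # N closest template hyper-voxels
        votes, counts = np.unique(labels_tpl[nearest], return_counts=True)
        out.append(votes[np.argmax(counts)])        # label with the largest share
    return np.array(out)
```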
CN201910931927.XA 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching Active CN110751664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910931927.XA CN110751664B (en) 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching


Publications (2)

Publication Number Publication Date
CN110751664A true CN110751664A (en) 2020-02-04
CN110751664B CN110751664B (en) 2022-11-18

Family

ID=69277423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910931927.XA Active CN110751664B (en) 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching

Country Status (1)

Country Link
CN (1) CN110751664B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508844A (en) * 2020-09-30 2021-03-16 东南大学 Weak supervision-based brain magnetic resonance image segmentation method
CN115359074A (en) * 2022-10-20 2022-11-18 之江实验室 Image segmentation and training method and device based on hyper-voxel clustering and prototype optimization

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100027865A1 (en) * 2008-08-01 2010-02-04 Siemens Corporate Research, Inc. Method and System for Brain Tumor Segmentation in 3D Magnetic Resonance Images
CN107146228A (en) * 2017-03-22 2017-09-08 东南大学 A kind of super voxel generation method of brain magnetic resonance image based on priori
CN108305279A (en) * 2017-12-27 2018-07-20 东南大学 A kind of brain magnetic resonance image super voxel generation method of iteration space fuzzy clustering



Also Published As

Publication number Publication date
CN110751664B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
Arunkumar et al. K-means clustering and neural network for object detecting and identifying abnormality of brain tumor
Zhang et al. Detecting anatomical landmarks for fast Alzheimer’s disease diagnosis
US6950544B2 (en) Automated measurement of anatomical structures in medical imaging
CN108171697B (en) WMH automatic extraction system based on cluster
CN111931811A (en) Calculation method based on super-pixel image similarity
WO2009156719A1 (en) Morphological analysis
CN115393269A (en) Extensible multi-level graph neural network model based on multi-modal image data
Xue et al. Knowledge-based segmentation and labeling of brain structures from MRI images
CN110751664B (en) Brain tissue segmentation method based on hyper-voxel matching
CN111080658A (en) Cervical MRI image segmentation method based on deformable registration and DCNN
CN116862889A (en) Nuclear magnetic resonance image-based cerebral arteriosclerosis detection method
Ziyan et al. Consistency clustering: a robust algorithm for group-wise registration, segmentation and automatic atlas construction in diffusion MRI
CN110728685B (en) Brain tissue segmentation method based on diagonal voxel local binary pattern texture operator
Liu et al. Supervoxel clustering with a novel 3d descriptor for brain tissue segmentation
Mure et al. Classification of multiple sclerosis lesion evolution patterns a study based on unsupervised clustering of asynchronous time-series
Biniaz et al. Fast FCM algorithm for brain MR image segmentation
Selvaganesh et al. A hybrid segmentation and classification techniques for detecting the neurodegenerative disorder from brain Magnetic Resonance Images
Atho et al. The Similarity Cloud Model: A novel and efficient hippocampus segmentation technique
Qu et al. Positive unanimous voting algorithm for focal cortical dysplasia detection on magnetic resonance image
Rahim et al. 3D texture features mining for MRI brain tumor identification
Ledig et al. Alzheimer’s disease state classification using structural volumetry, cortical thickness and intensity features
Fletcher et al. Applications of deep learning to brain segmentation and labeling of mri brain structures
Song et al. Automatic Hippocampus segmentation of magnetic resonance imaging images using multiple atlases
CN109615605B (en) Functional magnetic resonance imaging brain partitioning method and system based on quantum potential energy model
Park et al. Deep learning-based brain metastatic detection and treatment response assessment system on 3D MRI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant