CN110751664B - Brain tissue segmentation method based on hyper-voxel matching - Google Patents


Info

Publication number
CN110751664B
Authority
CN
China
Prior art keywords: voxel, magnetic resonance, image, hyper, resonance image
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910931927.XA
Other languages
Chinese (zh)
Other versions
CN110751664A (en)
Inventor
孔佑勇
周彬
章品正
杨冠羽
舒华忠
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN201910931927.XA
Publication of CN110751664A
Application granted
Publication of CN110751664B
Legal status: Active

Classifications

    • G06T7/13 — Image analysis; Segmentation; Edge detection
    • G06F18/22 — Pattern recognition; Matching criteria, e.g. proximity measures
    • G06F18/23 — Pattern recognition; Clustering techniques
    • G06T7/11 — Image analysis; Segmentation; Region-based segmentation
    • G06T2207/10088 — Image acquisition modality; Tomographic images; Magnetic resonance imaging [MRI]
    • G06T2207/30016 — Subject of image; Biomedical image processing; Brain


Abstract

The invention discloses a brain tissue segmentation method based on supervoxel matching, which comprises the following steps. S1: generating supervoxel data for all magnetic resonance images by the SLIC algorithm; S2: preprocessing all magnetic resonance images; S3: acquiring the supervoxel features of each magnetic resonance image; S4: calculating the feature gradient between each supervoxel and its adjacent supervoxels and obtaining the sum of the feature gradients; S5: determining the label corresponding to each supervoxel in the template image; S6: concatenating the supervoxel features and the feature-gradient sum of each magnetic resonance image into a vector, and calculating from this vector the similarity between each supervoxel in the magnetic resonance image to be matched and those in the template image; S7: matching the magnetic resonance image to be matched against the template image to determine the segmentation result of each magnetic resonance image to be matched. Because the matching process considers both the features of each supervoxel itself and the relations between adjacent supervoxels, an effective matching result can be obtained.

Description

Brain tissue segmentation method based on hyper-voxel matching
Technical Field
The invention relates to the technical field of image processing, in particular to a brain tissue segmentation method based on hyper-voxel matching.
Background
The goal of brain tissue segmentation in magnetic resonance imaging is to separate the brain into white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). Accurate segmentation of brain tissue is an important part of the diagnosis and treatment of diseases: by measuring changes in the tissue structure of regions of interest in the brain, it can be used to assess the severity of certain diseases and the evolution of the brain.
Feature matching is a common means of image segmentation. A labeled image serves as the template and is divided into a number of regions; the image to be segmented is likewise divided into regions. For each region of the image to be segmented, the region of the template image whose features are closest to it is sought, and the label of that closest template region is assigned to the region. The regions may be regular image blocks, but in order to group more similar pixels together, a superpixel (supervoxel in a three-dimensional image) algorithm is usually adopted to generate superpixels of the image, so that matching is performed between superpixels.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problem that only the characteristics of an image area are considered in the traditional image matching and the relation between the hyper-voxels is ignored, the invention provides a brain tissue segmentation method based on the hyper-voxel matching.
The technical scheme is as follows: in order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows:
a brain tissue segmentation method based on hyper-voxel matching specifically comprises the following steps:
s1: generating hyper-voxel data by all magnetic resonance images according to a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and template images;
s2: preprocessing all the magnetic resonance images;
s3: superposing the segmentation boundaries obtained by all the magnetic resonance images based on a three-dimensional SLIC algorithm in all the preprocessed magnetic resonance images to obtain the super voxel characteristics of each magnetic resonance image;
s4: according to the super voxel characteristics of each magnetic resonance image, calculating the characteristic gradient between each super voxel and adjacent super voxels, and obtaining the sum of the characteristic gradients;
s5: determining a label corresponding to each hyper-voxel in the template image;
s6: serially connecting the super voxel characteristics and the characteristic gradient sum of each magnetic resonance image into a vector, and calculating the similarity of each super voxel in the magnetic resonance image to be matched and the template image according to the vector;
s7: and matching the magnetic resonance image to be matched with the template image according to the similarity of each hyper-voxel in the magnetic resonance image to be matched and the template image, and determining the segmentation result of each magnetic resonance image to be matched.
Further, in step S1, all the magnetic resonance images generate hyper-voxel data, which is as follows:
s1.1: setting clustering centers, uniformly distributing the clustering centers in all the magnetic resonance images respectively, and simultaneously carrying out sequencing labeling on all the clustering centers in each magnetic resonance image;
s1.2: calculating the distance between the clustering center and each voxel in its neighborhood, wherein the neighborhood size and the distance are specifically:

R_i = (2S_i)^3

D_i = sqrt( d_ic^2 + (d_is / S)^2 · m^2 )

wherein:

d_ic = |v_j − v_k|

d_is = sqrt( (x_j − x_k)^2 + (y_j − y_k)^2 + (z_j − z_k)^2 )

R_i is the neighborhood range around the cluster center in the i-th magnetic resonance image, D_i is the distance between the cluster center and the i-th voxel in its neighborhood, S_i is the distance between adjacent seed points in the i-th magnetic resonance image, d_ic is the gray-space distance between the cluster center and the i-th voxel in its neighborhood, d_is is the spatial distance between the cluster center and the i-th voxel in its neighborhood, v_j is the gray value of the current voxel, v_k is the gray value of the cluster center, (x_j, y_j, z_j) are the coordinates of the current voxel in three-dimensional space, (x_k, y_k, z_k) are the coordinates of the cluster center in three-dimensional space, S is the distance between adjacent seed points in the magnetic resonance image containing the cluster center, and m is a parameter that weights the gray-space distance against the spatial distance;
s1.3: comparing all distances corresponding to each voxel according to the distances between the clustering centers and the voxels in their neighborhoods, and selecting the minimum distance, wherein the clustering center corresponding to the minimum distance is the clustering center to which the voxel belongs, and the label of that clustering center is the label of the voxel;
s1.4: according to the label of each voxel, updating the spatial position of each clustering center to the geometric center of all voxels assigned to it;
s1.5: and repeating the step S1.2 to the step S1.4 according to the updated spatial position of the clustering center until the spatial positions of all the clustering centers are not changed any more.
Further, in step S1.1, the size of the supervoxels in each magnetic resonance image and the distance between adjacent cluster centers are specifically:

L_i = N_i / K_i

S_i = (N_i / K_i)^(1/3)

wherein: L_i is the size of the supervoxel in the i-th magnetic resonance image, S_i is the distance between adjacent seed points in the i-th magnetic resonance image, N_i is the number of voxels in the i-th magnetic resonance image, and K_i is the number of supervoxels in the i-th magnetic resonance image.
Further, in the step S2, all the magnetic resonance images are preprocessed, specifically:
template image: normalizing the gray value of the template image to be between 0 and 1, and processing the template image through a histogram equalization algorithm;
magnetic resonance image to be matched: normalizing the gray values of the magnetic resonance image to be matched to between 0 and 1, and then applying the histogram equalization algorithm to it according to the preprocessed template image, i.e., matching its gray-value distribution to that of the template.
Further, in step S4, the feature gradient between each supervoxel and its adjacent supervoxels, and the sum of the feature gradients, are specifically:

ΔH_i = |H_a − H_i|

Gradsum_a = Σ_{i=1}^{n} ΔH_i

wherein: ΔH_i is the feature gradient between the current supervoxel and its i-th adjacent supervoxel, H_a is the gray-histogram feature of the current supervoxel, H_i is the gray-histogram feature of the i-th adjacent supervoxel, Gradsum_a is the sum of the feature gradients between the current supervoxel and all of its adjacent supervoxels, and n is the number of adjacent supervoxels.
Further, in step S5, a label corresponding to each hyper-voxel in the template image is determined, specifically:
According to the segmentation labels marked in the template image, for each supervoxel the number of its voxels belonging to each category is counted, and the category with the largest voxel count is selected as the label of that supervoxel.
Further, in step S6, the sum of the hyper-voxel characteristics and the characteristic gradients of each magnetic resonance image is concatenated into a vector, where the concatenation principle specifically is as follows:
According to a preset weight W, the sum of the feature gradients is multiplied by W and then concatenated with the gray-histogram feature vector of each supervoxel.
Further, in step S6, the similarity of each hyper-voxel in the magnetic resonance image to be matched and the template image is calculated according to the vector, specifically:
calculating the Euclidean distance between the vector obtained by concatenation in the magnetic resonance image to be matched and the vector obtained by concatenation in the template image, wherein the smaller the Euclidean distance, the more similar the two hyper-voxels are; the calculation formula of the Euclidean distance is specifically:

D(A_i, B_j) = sqrt( Σ_k (F_a^k − F_b^k)^2 )

wherein: D(A_i, B_j) is the Euclidean distance between the i-th hyper-voxel in the magnetic resonance image to be matched and the j-th hyper-voxel in the template image, F_a^k is the k-th component of the final feature vector of the hyper-voxel in the magnetic resonance image to be matched, and F_b^k is the k-th component of the final feature vector of the hyper-voxel in the template image.
Further, in step S7, a segmentation result of each to-be-matched magnetic resonance image is determined, specifically:
s7.1: searching the template image for the N hyper-voxels closest to the current hyper-voxel of the magnetic resonance image to be matched, determining the most frequent label among these N hyper-voxels, and assigning that label to the current hyper-voxel in the magnetic resonance image to be matched;
s7.2: and repeating the step S7.1 for each hyper-voxel in the magnetic resonance image to be matched, obtaining a label corresponding to each hyper-voxel, and determining a segmentation result of each magnetic resonance image to be matched according to the label corresponding to each hyper-voxel.
Has the beneficial effects that: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
the brain tissue segmentation method firstly extracts the characteristics of the superpixel, the characteristics can only describe the superpixel, secondly calculates the gradient characteristics of each superpixel and the neighborhood superpixel, then performs superpixel matching, and finally maps the superpixel classification result back to the voxel to obtain the tissue segmentation result, so that the relationship between adjacent superpixels can be considered while the characteristics of the superpixel are considered in the matching process, and an effective matching result is obtained.
Drawings
FIG. 1 is a schematic flow chart of a brain tissue segmentation method of the present invention;
FIG. 2 is a brain magnetic resonance image;
FIG. 3 is a label image corresponding to a brain magnetic resonance image;
FIG. 4 is a hyper-voxel image generated on a magnetic resonance image of the brain;
FIG. 5 is a graph of the results obtained without the brain tissue segmentation method of the present invention;
fig. 6 is a graph showing the results of matching using the brain tissue segmentation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. The described embodiments are a subset of the embodiments of the invention and are not all embodiments of the invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Example 1
Referring to fig. 1, the present embodiment provides a brain tissue segmentation method based on hyper-voxel matching, which specifically includes the following steps:
step S1: generating hyper-voxel data for all magnetic resonance images by applying a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and template images, and the method specifically comprises the following steps:
step S1.1: and taking the initialized seed points as clustering centers, and simultaneously uniformly distributing the seed points in each magnetic resonance image according to the preset number of the superpixels, namely determining the number of the clustering centers by the preset number of the superpixels, wherein the clustering centers in each magnetic resonance image are uniformly distributed in each magnetic resonance image. And all cluster centers in each magnetic resonance image are labeled in order.
It should be noted that there is no specific requirement on the number of supervoxels; it may be chosen according to the user's needs.
In this embodiment, the size of the supervoxels in each magnetic resonance image is specifically:

L_i = N_i / K_i

wherein: L_i is the size of the supervoxel in the i-th magnetic resonance image, N_i is the number of voxels in the i-th magnetic resonance image, and K_i is the number of supervoxels in the i-th magnetic resonance image.

The distance between adjacent seed points in each magnetic resonance image, that is, the distance between adjacent clustering centers, is specifically:

S_i = (N_i / K_i)^(1/3)

wherein: S_i is the distance between adjacent seed points in the i-th magnetic resonance image, N_i is the number of voxels in the i-th magnetic resonance image, and K_i is the number of supervoxels in the i-th magnetic resonance image.
Step S1.2: in a range of fields around each seed point, i.e. cluster center, the distance between the cluster center and each voxel in the field is calculated, in particular, a range of fields around the cluster center in each magnetic resonance image, in particular:
R i =(2S i ) 3
wherein: r i For a field of a certain extent around the cluster center in the ith magnetic resonance image, S i Is the distance between adjacent seed points in the ith magnetic resonance image.
The distance between the clustering center and each voxel in its neighborhood is specifically:

D_i = sqrt( d_ic^2 + (d_is / S)^2 · m^2 )

wherein:

d_ic = |v_j − v_k|

d_is = sqrt( (x_j − x_k)^2 + (y_j − y_k)^2 + (z_j − z_k)^2 )

D_i is the distance between the cluster center and the i-th voxel in its neighborhood, d_ic is the gray-space distance between the cluster center and the i-th voxel in its neighborhood, d_is is the spatial distance between the cluster center and the i-th voxel in its neighborhood, v_j is the gray value of the current voxel, v_k is the gray value of the cluster center, (x_j, y_j, z_j) are the coordinates of the current voxel in three-dimensional space, (x_k, y_k, z_k) are the coordinates of the cluster center in three-dimensional space, S is the distance between adjacent seed points in the magnetic resonance image containing the cluster center, and m is a parameter that weights the gray-space distance against the spatial distance.
Step S1.3: and determining the clustering center to which each voxel belongs according to the distance between the clustering center and each voxel in the field, thereby allocating a label to each voxel, wherein the label is the label of the clustering center corresponding to the field in which the voxel is located. Specifically, all distances calculated by each voxel are compared, and the minimum distance is selected from the distances, the cluster center corresponding to the minimum distance is the cluster center to which the voxel belongs, and the label of the voxel is the label of the cluster center corresponding to the minimum distance.
Step S1.4: and updating the spatial position of the clustering center according to the label set by each voxel, namely updating the spatial position of the clustering center corresponding to the field to the geometric center of all voxels in the same field according to the distribution of all voxels in the same field.
Step S1.5: and (5) repeating the step (S1.2) to the step (S1.4) according to the updated spatial position of the clustering center until the spatial positions of all clustering centers are not changed any more.
Step S2: preprocessing is carried out on the magnetic resonance image to be matched and the template image, so that the gray feature distribution among the samples is ensured to be similar, especially the gray feature distribution of the magnetic resonance image to be matched and the gray feature distribution of the template image are ensured to be similar, and the matching accuracy is further improved. The method specifically comprises the following steps:
for the template image, the gray value of the template image needs to be normalized to be between 0 and 1, and then the template image is processed by using a histogram equalization algorithm, so that the contrast of the template image is improved.
For the magnetic resonance image to be matched, the gray value of the magnetic resonance image to be matched is normalized to be between 0 and 1, and then the magnetic resonance image to be matched is processed by using a histogram equalization algorithm according to the preprocessed template image, so that the gray value distribution of the magnetic resonance image to be matched is closer to that of the template image.
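A minimal NumPy sketch of this preprocessing follows. The helper names are hypothetical, and reading "histogram equalization according to the preprocessed template" as histogram specification (matching the target's distribution to the template's) is an interpretation of the text above, not a literal quotation of the patent's algorithm.

```python
import numpy as np

def normalize01(img):
    """Normalize gray values into [0, 1] (applied to both images in step S2)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def hist_equalize(img, bins=256):
    """Histogram equalization of a [0, 1] image via its empirical CDF."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img, edges[:-1], cdf)

def hist_match(src, ref):
    """Histogram specification: remap src so its gray distribution follows ref."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # invert ref's CDF at src's quantiles
    return mapped[s_idx].reshape(src.shape)
```

The template would go through `normalize01` and `hist_equalize`; each image to be matched through `normalize01` and then `hist_match` against the equalized template.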
Step S3: For the preprocessed magnetic resonance images to be matched and the preprocessed template image, the supervoxel features of each magnetic resonance image are obtained; that is, the segmentation boundaries obtained by the three-dimensional SLIC algorithm in step S1 are superimposed on the images preprocessed in step S2, dividing each image into different regions.
In this embodiment a gray-histogram feature is adopted: the gray-value range of the whole image is uniformly divided into 16 intervals, and for each supervoxel the number of its voxels whose gray value falls in each interval is counted. This yields a 16-dimensional vector, which is the gray-histogram feature of the supervoxel and represents its gray-value distribution.
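The 16-bin gray-histogram feature described above can be computed, for example, as follows (hypothetical helper, assuming a supervoxel label volume from step S1):

```python
import numpy as np

def supervoxel_histograms(vol, labels, n_bins=16):
    """Per-supervoxel gray histogram over 16 uniform intervals (step S3).

    vol    : preprocessed gray-value volume
    labels : supervoxel label volume of the same shape
    Returns a dict: supervoxel label -> 16-d feature vector.
    """
    lo, hi = float(vol.min()), float(vol.max())   # full gray range of the image
    feats = {}
    for sv in np.unique(labels):
        vals = vol[labels == sv]
        hist, _ = np.histogram(vals, bins=n_bins, range=(lo, hi))
        feats[sv] = hist.astype(float)            # gray distribution of supervoxel sv
    return feats
```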
Step S4: According to the gray-histogram feature of each supervoxel, the feature gradient between each supervoxel and each of its adjacent supervoxels is calculated, and the sum of the feature gradients is obtained, specifically:

ΔH_i = |H_a − H_i|

Gradsum_a = Σ_{i=1}^{n} ΔH_i

wherein: ΔH_i is the feature gradient between the current supervoxel and its i-th adjacent supervoxel, H_a is the gray-histogram feature of the current supervoxel, H_i is the gray-histogram feature of the i-th adjacent supervoxel, Gradsum_a is the sum of the feature gradients between the current supervoxel and all of its adjacent supervoxels, and n is the number of adjacent supervoxels.
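A sketch of the gradient sum under the elementwise reading of ΔH_i = |H_a − H_i| (which keeps Gradsum a 16-d vector, consistent with the 32-d concatenation in step S6); the adjacency structure is assumed to be precomputed from the supervoxel volume:

```python
import numpy as np

def gradient_sum(feats, adjacency, sv):
    """Sum of feature gradients between supervoxel sv and its neighbors (step S4).

    feats     : dict supervoxel label -> 16-d gray-histogram feature H
    adjacency : dict supervoxel label -> list of adjacent supervoxel labels
    Each gradient is the elementwise difference |H_a - H_i|, so
    Gradsum_a = sum_i |H_a - H_i| is itself a 16-d vector.
    """
    H_a = feats[sv]
    return sum(np.abs(H_a - feats[nb]) for nb in adjacency[sv])
```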
Step S5: and determining a label corresponding to each hyper-voxel in the template image. For a template image, a segmentation label manually labeled by an expert is present in the image, and a label of each hyper-voxel in the image can be generated according to the segmentation label, specifically:
For each supervoxel, the number of its voxels belonging to each category of the corresponding segmentation labels is counted, and the category with the largest voxel count is selected as the label of the supervoxel.
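This majority vote over the expert labels can be sketched as follows (hypothetical helper):

```python
import numpy as np

def supervoxel_labels(seg, sv_labels):
    """Majority-vote tissue label for each supervoxel of the template (step S5).

    seg       : expert label volume (0 background, 1 CSF, 2 GM, 3 WM)
    sv_labels : supervoxel label volume produced by SLIC
    Returns a dict: supervoxel label -> most frequent tissue class inside it.
    """
    out = {}
    for sv in np.unique(sv_labels):
        classes, counts = np.unique(seg[sv_labels == sv], return_counts=True)
        out[sv] = int(classes[np.argmax(counts)])  # most frequent class wins
    return out
```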
Step S6: and serially connecting the gray histogram feature vector of the gray feature of each super voxel and the sum of the feature gradients of each super voxel and all adjacent super voxels into a vector, and taking the vector as the final feature vector of each super voxel for matching. The series principle is specifically as follows:
firstly, setting a weight W for the sum vector of the feature gradients of each superpixel and all adjacent superpixels, multiplying the sum of the feature gradients by the weight W, and then connecting the sum with the feature vector of the gray histogram of the gray feature of each superpixel in series. Wherein the setting of the weight W is determined by a specific data set.
Meanwhile, the similarity of each hyper-voxel in the magnetic resonance image to be matched and the template image can be calculated according to the vector, wherein the similarity can be compared through Euclidean distance, and the similarity specifically comprises the following steps:
in the magnetic resonance image and the template image to be matched, the gray level histogram feature vector of each super voxel and the sum of the feature gradients of each super voxel and all adjacent super voxels are 16-dimensional vectors which are connected in series to form a vector, and then a 32-dimensional vector is obtained. In the magnetic resonance image to be matched, the 32-dimensional vector of each super voxel is the final feature vector of each super voxel, and is marked as: f a In the template image, the 32-dimensional vector of each super voxel is the final feature vector of each super voxel and is labeled as: f b
According to the final characteristic vector F of each hyper-voxel in the magnetic resonance image to be matched a Final feature vector F for each superpixel in the template image b Calculating a Euclidean distance between the two voxels, wherein the smaller the Euclidean distance is, the more similar the two hyper-voxels are, and the calculation formula of the Euclidean distance is specifically as follows:
Figure BDA0002220494180000081
wherein: d (A) i ,B j ) The euclidean distance between the ith voxel in the magnetic resonance image to be matched and the jth voxel in the template image,
Figure BDA0002220494180000082
is the k-dimension vector in the final characteristic vector of the super voxel in the magnetic resonance image to be matched,
Figure BDA0002220494180000083
is the k-th dimension vector in the super voxel final characteristic vector in the template image.
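The concatenation and distance computation can be sketched as follows. The weight W = 0.5 is purely illustrative; the patent only says W is dataset-dependent.

```python
import numpy as np

def final_feature(hist_feat, gradsum, W=0.5):
    """32-d final feature [H, W * Gradsum] of one supervoxel (step S6)."""
    return np.concatenate([np.asarray(hist_feat, dtype=float),
                           W * np.asarray(gradsum, dtype=float)])

def euclidean(F_a, F_b):
    """D(A_i, B_j) = sqrt(sum_k (F_a^k - F_b^k)^2)."""
    return float(np.sqrt(np.sum((np.asarray(F_a) - np.asarray(F_b)) ** 2)))
```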
Step S7: and matching all the magnetic resonance images to be matched with the template images according to the Euclidean distance in the step S6, determining the segmentation result of each magnetic resonance image to be matched, and performing the matching process in a voting mode. The method comprises the following specific steps:
step S7.1: for the superpixel in the magnetic resonance image to be matched, N superpixels with the minimum distance to the superpixel are searched in the template image, and since each superpixel of the template image is endowed with a label in step S5, the label with the largest proportion is counted from the N superpixels, and the label with the largest proportion is endowed to the current superpixel in the magnetic resonance image to be matched.
In the present embodiment, specifically, 10 closest supervoxels are sought in the template image, that is, N =10, where N is the number of closest supervoxels sought in the template image.
Step S7.2: and (4) executing the step (S7.1) for each hyper-voxel in the magnetic resonance image to be matched, so that each hyper-voxel is endowed with a label, and a segmentation result of the magnetic resonance image to be matched is obtained.
To demonstrate the brain tissue segmentation method based on supervoxel matching in practice, this embodiment uses the IBSR18 data set to verify the proposed design on brain tissue extraction from magnetic resonance images of the brain.
Experimental conditions: the experiments were run on a computer with a 64-bit operating system; the programming languages were Matlab (version R2014a) and Python (version 3.5).
The experimental data are the brain magnetic resonance images of the IBSR18 data set. The Internet Brain Segmentation Repository provides manually guided expert segmentation results together with magnetic resonance brain image data. The IBSR18 data set contains T1-weighted magnetic resonance images of 18 healthy subjects; each image is a 256 × 256 × 128 three-dimensional volume. The data set contains expert labels for gray matter, white matter and cerebrospinal fluid: in the label volume, 0 means the voxel belongs to non-brain tissue (background), 1 to cerebrospinal fluid, 2 to gray matter and 3 to white matter. Referring to fig. 2 and 3, an MRI image of the IBSR18 data set and its corresponding segmentation labels are shown. Brain tissue segmentation is performed by supervoxel matching according to the design described above; fig. 4 shows the supervoxel result generated for a brain magnetic resonance image, and fig. 6 shows the segmentation result obtained after the whole procedure is executed. For comparison, fig. 5 shows the result obtained using only gray-histogram feature matching, without the above design.
In the experiment, the first sample is used as a template image, and the rest 17 samples are used as images to be matched, and the experiment is carried out according to the steps of the invention.
In order to test the segmentation precision on the brain magnetic resonance images, the Dice coefficient is used as the evaluation index; its formula is specifically:

Dice = 2TP / (2TP + FP + FN)
wherein: TP is the overlapping area of the divided brain tissue area and the expert manual division template, FP is the area of the divided brain tissue but not belonging to the expert manual division template, and FN is the area of the brain tissue but not divided in the expert manual division template.
The Dice coefficient is a set-similarity metric, generally used to calculate the similarity between two samples. Here it is used to measure how similar the brain tissue regions extracted by the method are to the ground-truth result.
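The Dice computation described above can be sketched in a few lines of numpy; the function name `dice_coefficient` and the toy label volumes are illustrative, not part of the patent:

```python
import numpy as np

def dice_coefficient(pred, truth, label):
    """Dice = 2*TP / (2*TP + FP + FN) for one tissue label."""
    p = (pred == label)               # voxels segmented as this tissue
    t = (truth == label)              # voxels of this tissue in the expert template
    tp = np.logical_and(p, t).sum()   # overlap with the expert template
    fp = np.logical_and(p, ~t).sum()  # segmented, but not in the template
    fn = np.logical_and(~p, t).sum()  # in the template, but missed
    return 2.0 * tp / (2.0 * tp + fp + fn)
```

For identical label volumes the coefficient is 1; it falls toward 0 as the overlap shrinks.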
On the IBSR18 dataset, the Dice indices obtained with and without the method of the invention are shown in table 1 below:
TABLE 1
[Table 1: Dice coefficients on the IBSR18 dataset; the table is reproduced as an image in the original document.]
Based on table 1, it can be seen that, compared with matching using only the features of each hyper-voxel itself, the design method of the present invention obtains better matching results by adding the sum of feature gradients; meanwhile, considering the relationship between adjacent hyper-voxels during matching further improves segmentation accuracy.
The present invention and its embodiments have been described in an illustrative manner and are not to be considered limiting; the embodiments shown in the drawings are merely exemplary, and actual constructions and methods are not limited thereto. Therefore, structural modes and embodiments similar to these technical solutions, designed by persons skilled in the art in light of this teaching without creative effort and without departing from the spirit of the invention, all belong to the protection scope of the invention.

Claims (9)

1. A brain tissue segmentation method based on hyper-voxel matching is characterized by comprising the following steps:
s1: generating hyper-voxel data from all magnetic resonance images according to a three-dimensional SLIC algorithm, wherein all the magnetic resonance images comprise all the magnetic resonance images to be matched and the template image;
s2: preprocessing all the magnetic resonance images;
s3: superimposing the segmentation boundaries obtained from the three-dimensional SLIC algorithm onto the corresponding preprocessed magnetic resonance images to obtain the hyper-voxel features of each magnetic resonance image;
s4: according to the super voxel characteristics of each magnetic resonance image, calculating the characteristic gradient between each super voxel and the adjacent super voxel, and acquiring the sum of the characteristic gradients;
s5: determining a label corresponding to each hyper-voxel in the template image;
s6: concatenating the hyper-voxel features and the sum of feature gradients of each magnetic resonance image into a vector, and calculating the similarity of each hyper-voxel in the magnetic resonance image to be matched with the template image according to the vector;
s7: and matching the magnetic resonance image to be matched with the template image according to the similarity of each super voxel in the magnetic resonance image to be matched and the template image, and determining the segmentation result of each magnetic resonance image to be matched.
2. The method for brain tissue segmentation based on hyper-voxel matching according to claim 1, wherein in the step S1, hyper-voxel data are generated from all the magnetic resonance images, specifically as follows:
s1.1: setting clustering centers, uniformly distributing the clustering centers in all the magnetic resonance images respectively, and simultaneously carrying out sequencing labeling on all the clustering centers in each magnetic resonance image;
s1.2: calculating the distance between each clustering center and each voxel in its neighborhood, wherein the neighborhood size and the distance are specifically:

R_i = 2S_i × 2S_i × 2S_i

D_i = √(d_ic² + (d_is / S)² · m²)

d_ic = |v_j − v_k|

d_is = √((x_j − x_k)² + (y_j − y_k)² + (z_j − z_k)²)

wherein: R_i is the neighborhood range around a clustering center in the ith magnetic resonance image, D_i is the distance between the clustering center and the ith voxel in its neighborhood, S_i is the distance between adjacent seed points in the ith magnetic resonance image, d_ic is the gray-space distance between the clustering center and the ith voxel in its neighborhood, d_is is the spatial distance between the clustering center and the ith voxel in its neighborhood, v_j is the gray value of the current voxel, v_k is the gray value of the clustering center, (x_j, y_j, z_j) are the coordinates of the current voxel in three-dimensional space, (x_k, y_k, z_k) are the coordinates of the clustering center in three-dimensional space, S is the distance between adjacent seed points in the magnetic resonance image where the clustering center is located, and m is a parameter adjusting the weight between the gray-space distance and the spatial distance;
s1.3: comparing, for each voxel, all the distances to the clustering centers in whose neighborhoods it lies, and selecting the minimum distance, wherein the clustering center corresponding to the minimum distance is the clustering center to which the voxel belongs, and the label of that clustering center is the label of the voxel;
s1.4: according to the label of each voxel, updating the spatial position of each clustering center to the geometric center of all the voxels belonging to it;
s1.5: and repeating the step S1.2 to the step S1.4 according to the updated spatial position of the clustering center until the spatial positions of all the clustering centers are not changed any more.
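Steps S1.2–S1.3 above can be sketched as follows, assuming each voxel and clustering center is represented as a `(gray, x, y, z)` tuple; the function names and the example values of `S` and `m` are assumptions for illustration:

```python
import numpy as np

def slic_distance(voxel, center, S, m):
    """Combined SLIC distance D = sqrt(d_c^2 + (d_s/S)^2 * m^2).

    voxel, center: (gray, x, y, z); S: seed spacing; m: compactness weight.
    """
    d_c = abs(voxel[0] - center[0])                                   # gray-space distance
    d_s = np.linalg.norm(np.array(voxel[1:]) - np.array(center[1:]))  # spatial distance
    return np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)

def assign_labels(voxels, centers, S, m):
    """One assignment pass (S1.3): each voxel takes the label of its nearest center."""
    labels = []
    for v in voxels:
        dists = [slic_distance(v, c, S, m) for c in centers]
        labels.append(int(np.argmin(dists)))
    return labels
```

In the full algorithm this pass alternates with the center update of S1.4 until the centers stop moving.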
3. The brain tissue segmentation method based on hyper-voxel matching according to claim 2, wherein in step S1.1, the size of the hyper-voxels in each magnetic resonance image and the distance between adjacent clustering centers are specifically:

L_i = N_i / K_i

S_i = ∛(N_i / K_i)

wherein: L_i is the size of a hyper-voxel in the ith magnetic resonance image, S_i is the distance between adjacent seed points in the ith magnetic resonance image, N_i is the number of voxels in the ith magnetic resonance image, and K_i is the number of hyper-voxels in the ith magnetic resonance image.
4. The method for brain tissue segmentation based on hyper-voxel matching according to claim 1 or 2, wherein in the step S2, all the magnetic resonance images are preprocessed, specifically:
template image: normalizing the gray values of the template image to between 0 and 1, and processing them with a histogram equalization algorithm;

magnetic resonance image to be matched: normalizing the gray values of the magnetic resonance image to be matched to between 0 and 1, and applying the histogram equalization algorithm with reference to the preprocessed template image.
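A minimal numpy sketch of the normalisation and histogram equalisation in this preprocessing step; the 256-bin count and the function names are assumptions, and matching the image to be matched against the template histogram is omitted for brevity:

```python
import numpy as np

def normalize01(img):
    """Scale gray values into [0, 1] (step S2 normalisation)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def equalize(img, bins=256):
    """Histogram equalisation via the normalised cumulative histogram."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                # map each gray level to its CDF value
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

The same two steps apply to the template and to each image to be matched; only the reference histogram differs.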
5. The method according to claim 4, wherein in step S4, the feature gradient between each hyper-voxel and its adjacent hyper-voxels and the sum of the feature gradients are specifically:

ΔH_i = H_a − H_i

Gradsum_a = Σ_{i=1}^{n} ΔH_i

wherein: ΔH_i is the feature gradient between the current hyper-voxel and the ith adjacent hyper-voxel, H_a is the gray-histogram feature of the current hyper-voxel, H_i is the gray-histogram feature of the ith adjacent hyper-voxel, Gradsum_a is the sum of the feature gradients of the current hyper-voxel over all its adjacent hyper-voxels, and n is the number of adjacent hyper-voxels.
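The gray-histogram feature (step S3) and the sum of feature gradients (step S4) might be computed as below; the 16-bin histogram and the function names are illustrative assumptions:

```python
import numpy as np

def hist_feature(gray_values, bins=16):
    """Normalised gray-level histogram of one hyper-voxel (its S3 feature H)."""
    h, _ = np.histogram(gray_values, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def gradient_sum(h_current, neighbor_hists):
    """Gradsum_a = sum over adjacent hyper-voxels of (H_a - H_i), step S4."""
    return np.sum([h_current - h_i for h_i in neighbor_hists], axis=0)
```

`gray_values` would be the (normalised) intensities of the voxels inside one hyper-voxel, and `neighbor_hists` the histogram features of its spatial neighbours.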
6. The method according to claim 5, wherein in step S5, the label corresponding to each hyper-voxel in the template image is determined, specifically:

according to the segmentation labels marked in the template image, counting, for each hyper-voxel, the number of voxels within its region that belong to each category, and selecting the largest count, wherein the category corresponding to the largest count is the label of the hyper-voxel.
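The majority vote of step S5 reduces to a `bincount`/`argmax`; `supervoxel_label` is an illustrative name:

```python
import numpy as np

def supervoxel_label(voxel_labels):
    """Majority vote (step S5): the hyper-voxel's label is the tissue class
    (0 background, 1 CSF, 2 gray matter, 3 white matter) with the most voxels."""
    counts = np.bincount(np.asarray(voxel_labels, dtype=int))
    return int(np.argmax(counts))
```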
7. The method according to claim 6, wherein in step S6, the hyper-voxel features and the sum of feature gradients of each magnetic resonance image are concatenated into a vector, the concatenation principle being specifically:

multiplying the sum of the feature gradients by a preset weight W, and then concatenating the result with the gray-histogram feature vector of each hyper-voxel.
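The concatenation of step S6 is a single weighted `concatenate`; the patent leaves the preset weight W unspecified, so the value used here is purely illustrative:

```python
import numpy as np

def final_feature(hist, grad_sum, w=0.5):
    """Step S6 concatenation: [H_a, W * Gradsum_a], with W a preset weight."""
    return np.concatenate([hist, w * np.asarray(grad_sum)])
```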
8. The method according to claim 6, wherein in step S6, the similarity of each hyper-voxel in the magnetic resonance image to be matched with the template image is calculated according to the vector, specifically:

calculating the Euclidean distance between the vectors obtained by concatenation in the magnetic resonance image to be matched and those obtained by concatenation in the template image, wherein the smaller the Euclidean distance, the more similar the two hyper-voxels; the Euclidean distance is specifically:

D(A_i, B_j) = √( Σ_k (a_k − b_k)² )

wherein: D(A_i, B_j) is the Euclidean distance between the ith hyper-voxel in the magnetic resonance image to be matched and the jth hyper-voxel in the template image, a_k is the kth dimension of the final feature vector of the hyper-voxel in the magnetic resonance image to be matched, and b_k is the kth dimension of the final feature vector of the hyper-voxel in the template image.
9. The method according to claim 8, wherein in the step S7, a segmentation result of each magnetic resonance image to be matched is determined, specifically:
s7.1: searching the template image for the N hyper-voxels at the smallest distance from the current hyper-voxel of the magnetic resonance image to be matched, counting the label with the largest proportion among these N hyper-voxels, and assigning that label to the current hyper-voxel of the magnetic resonance image to be matched;
s7.2: and repeating the step S7.1 for each hyper-voxel in the magnetic resonance image to be matched, obtaining a label corresponding to each hyper-voxel, and determining a segmentation result of each magnetic resonance image to be matched according to the label corresponding to each hyper-voxel.
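Steps S7.1–S7.2 amount to an N-nearest-neighbour vote over the concatenated feature vectors; `match_labels`, the toy features and N = 2 are assumptions for illustration:

```python
import numpy as np

def match_labels(query_feats, template_feats, template_labels, n=3):
    """Step S7: for each hyper-voxel of the image to be matched, find the N
    template hyper-voxels at smallest Euclidean distance and take the
    majority label among them."""
    out = []
    for q in query_feats:
        d = np.linalg.norm(template_feats - q, axis=1)  # D(A_i, B_j) per template hyper-voxel
        nearest = np.argsort(d)[:n]                     # N closest template hyper-voxels
        votes = np.bincount(template_labels[nearest])   # label with the largest proportion
        out.append(int(np.argmax(votes)))
    return out
```

Collecting these labels over all hyper-voxels of the image to be matched yields its segmentation result.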
CN201910931927.XA 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching Active CN110751664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910931927.XA CN110751664B (en) 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching


Publications (2)

Publication Number Publication Date
CN110751664A CN110751664A (en) 2020-02-04
CN110751664B true CN110751664B (en) 2022-11-18

Family

ID=69277423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910931927.XA Active CN110751664B (en) 2019-09-29 2019-09-29 Brain tissue segmentation method based on hyper-voxel matching

Country Status (1)

Country Link
CN (1) CN110751664B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508844B (en) * 2020-09-30 2022-11-18 东南大学 Weak supervision-based brain magnetic resonance image segmentation method
CN115359074B (en) * 2022-10-20 2023-03-28 之江实验室 Image segmentation and training method and device based on hyper-voxel clustering and prototype optimization

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107146228A (en) * 2017-03-22 2017-09-08 东南大学 A kind of super voxel generation method of brain magnetic resonance image based on priori
CN108305279A (en) * 2017-12-27 2018-07-20 东南大学 A kind of brain magnetic resonance image super voxel generation method of iteration space fuzzy clustering

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8280133B2 (en) * 2008-08-01 2012-10-02 Siemens Aktiengesellschaft Method and system for brain tumor segmentation in 3D magnetic resonance images


Also Published As

Publication number Publication date
CN110751664A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
CN108416802B (en) Multimode medical image non-rigid registration method and system based on deep learning
Cover et al. Computational methods for corpus callosum segmentation on MRI: a systematic literature review
US8848997B2 (en) Medical image acquisition apparatus and operating method therefor
US6950544B2 (en) Automated measurement of anatomical structures in medical imaging
CN107680107B (en) Automatic segmentation method of diffusion tensor magnetic resonance image based on multiple maps
CN108629785B (en) Three-dimensional magnetic resonance pancreas image segmentation method based on self-learning
CN111931811A (en) Calculation method based on super-pixel image similarity
CN108171697B (en) WMH automatic extraction system based on cluster
CN115393269A (en) Extensible multi-level graph neural network model based on multi-modal image data
CN110751664B (en) Brain tissue segmentation method based on hyper-voxel matching
CN111080658A (en) Cervical MRI image segmentation method based on deformable registration and DCNN
CN116862889A (en) Nuclear magnetic resonance image-based cerebral arteriosclerosis detection method
Ziyan et al. Consistency clustering: a robust algorithm for group-wise registration, segmentation and automatic atlas construction in diffusion MRI
Tang et al. Tumor segmentation from single contrast MR images of human brain
CN110728685B (en) Brain tissue segmentation method based on diagonal voxel local binary pattern texture operator
Acosta et al. 3D shape context surface registration for cortical mapping
Chen et al. Segmentation of hippocampus based on ROI atlas registration
Liu et al. Supervoxel clustering with a novel 3d descriptor for brain tissue segmentation
Logiraj et al. TractNet: a deep learning approach on 3D curves for segmenting white matter fibre bundles
Mure et al. Classification of multiple sclerosis lesion evolution patterns a study based on unsupervised clustering of asynchronous time-series
CN112215814B (en) Prostate image segmentation method based on 3DHOG auxiliary convolutional neural network
JP6738003B1 (en) Apparatus, method and program for extracting anatomical part based on MRI image
CN109615605B (en) Functional magnetic resonance imaging brain partitioning method and system based on quantum potential energy model
Yang et al. Decomposed contour prior for shape recognition
Qu et al. Positive unanimous voting algorithm for focal cortical dysplasia detection on magnetic resonance image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant