CN112150477B - Full-automatic segmentation method and device for cerebral image artery - Google Patents

Full-automatic segmentation method and device for cerebral image artery

Info

Publication number
CN112150477B
CN112150477B (application CN201911118499.5A)
Authority
CN
China
Prior art keywords
image
segmentation
training
threshold
subclass
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911118499.5A
Other languages
Chinese (zh)
Other versions
CN112150477A (en)
Inventor
耿辰
杨丽琴
李郁欣
耿道颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201911118499.5A priority Critical patent/CN112150477B/en
Publication of CN112150477A publication Critical patent/CN112150477A/en
Application granted granted Critical
Publication of CN112150477B publication Critical patent/CN112150477B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular


Abstract

The invention provides a fully automatic method and device for segmenting cerebral arteries in brain images, capable of automatically segmenting the cerebral vessels of a two-dimensional or three-dimensional brain image under test, characterized by comprising the following steps: step S1, image enhancement of the brain image under test; step S2, adaptive parameter selection using a pre-trained machine learning classifier and a constructed subclass-parameter comparison table; step S3, threshold segmentation according to the parameters; step S4, adaptive extraction of skull seed points; step S5, skull removal according to the parameters; step S6, analysis of the volumes of all connected domains and screening of the connected domains according to the parameters; step S7, adaptive threshold statistics to obtain upper and lower threshold limits; step S8, region growing according to the upper and lower threshold limits; and step S9, uniform dilation to obtain the final image segmentation result.

Description

Full-automatic segmentation method and device for cerebral image artery
Technical Field
The invention belongs to the field of medical image processing, and in particular relates to a fully automatic method and device for segmenting cerebral arteries in brain images.
Background
Magnetic resonance angiography (MRA) is widely used clinically because it is safe and non-invasive. It exploits the difference between the MR signal generated by flowing blood and that of the surrounding tissue to display the signal characteristics of vessels and blood flow, and can provide detailed, intuitive vascular images for the diagnosis and surgical planning of cerebral aneurysms. The most common MRA methods are time-of-flight (TOF), phase contrast (PCA) and contrast-enhanced magnetic resonance angiography (CE-MRA). The time-of-flight and phase-contrast methods build image contrast from the difference between the MR signal of flowing blood and that of the surrounding static tissue, and require no contrast agent. Contrast-enhanced MRA shortens the T1 of blood with a paramagnetic substance and belongs to the contrast-agent-enhanced techniques. Being fast and offering strong vascular contrast, the time-of-flight method is the most widely used in clinical practice.
CT angiography (CTA) produces a three-dimensional display of the intracranial vascular system by computer processing of the images acquired after intravenous injection of an iodine-containing contrast agent. CTA clearly shows the circle of Willis as well as the anterior, middle and posterior cerebral arteries and their major branches, provides an important diagnostic basis for occlusive vascular disease, and can support a diagnosis of ischemic cerebrovascular disease as early as 2 hours after onset.
Accurate cerebral vessel segmentation provides an important basis for cerebral aneurysm image analysis and can be used for vessel matching, three-dimensional reconstruction and the like, allowing the morphological characteristics of cerebral aneurysms to be observed more clearly. Common cerebrovascular segmentation methods include threshold-based, region-growing-based, centerline-based and deformable-model-based approaches; these algorithms are mature and widely applied. Many researchers have improved or effectively combined the above methods so that their advantages complement one another, and new cerebrovascular segmentation methods, such as those based on convolutional neural networks, continue to be investigated.
However, the above cerebrovascular segmentation methods still require assistance from a person with domain knowledge in order to segment the cerebral vessels stably and accurately. When brain images are processed in practice, such manually assisted methods waste time or personnel and reduce the working efficiency of the medical staff involved. Moreover, in practical use these methods adapt poorly to the different kinds of brain images acquired by different devices, which introduces errors into the segmentation results.
Disclosure of Invention
To solve these problems, the invention provides a fully automatic method and device for segmenting cerebral arteries in brain images, capable of performing cerebral vessel segmentation automatically. The invention adopts the following technical scheme:
the invention provides a fully automatic cerebral artery segmentation method for performing cerebral vessel segmentation on a two-dimensional or three-dimensional brain image under test, characterized by comprising the following steps. Step S1: a local enhancement algorithm is applied to the vessel regions of the brain image under test to raise their contrast against the background, forming an enhanced image. Step S2: a set feature vector is extracted from the gray histogram of the histogram-equalized brain image under test and input into a pre-trained machine learning classifier to obtain a subclass label; a pre-established subclass-parameter comparison table is then searched with this label to extract the parameters with the highest distribution ratio within the subclass, the parameters comprising a segmentation threshold, a skull threshold, a connected-domain threshold and a growth threshold. Step S3: the enhanced image is threshold-segmented with these parameters to obtain a threshold-segmented image. Step S4: a key slice is extracted from the brain image under test, and a two-dimensional bounding box is used to extract points on the skull as first seed points. Step S5: a skull segmentation result is obtained by region growing from the parameters and the first seed points, and is subtracted from the threshold-segmented image to obtain a skull-removed image. Step S6: the volumes of all connected domains in the skull-removed image are analyzed and the domains are arranged from largest to smallest into a connected-domain list; the list is screened with the parameters to obtain the extracted connected domains, and the unselected domains are removed from the skull-removed image. Step S7: the gray values of the voxels of the enhanced image within the extracted connected domains are tallied into a gray-level distribution, from which upper and lower region-growing threshold limits are computed through a distribution model. Step S8: the extracted connected domains serve as second seed points, and region growing is performed in the enhanced image according to the second seed points and the upper and lower threshold limits. Step S9: the grown region is binarized and uniformly dilated with a dilation algorithm, and the corresponding region of the brain image under test is extracted from the dilated region to obtain the segmentation result. The machine learning classifier and the subclass-parameter comparison table are obtained in advance by the following training steps. Step T1: histogram equalization is performed on a number of training images in turn to obtain corresponding training gray-level distribution histograms. Step T2: a training-set feature vector containing at least the envelope gradient curve, envelope inflection-point features, maximum-extreme-point coordinates, entropy and maximum entropy is extracted from each training histogram. Step T3: each training image is segmented manually, and training parameters, including a segmentation threshold, a skull threshold, a connected-domain threshold and a growth threshold, are recorded during segmentation. Step T4: the training-set feature vector and training parameters of each training image are taken together as a data set, and the data sets are classified by the distances between their training parameters, the data sets at minimum distance being gathered into one class, yielding a number of different subclasses. Step T5: the feature vectors of each subclass are extracted and those belonging to the same subclass are given the same label; a machine learning classifier is then trained from all labeled feature vectors with a machine learning classification method, and the corresponding subclass-parameter comparison table is established from each subclass and its training parameters.
The fully automatic cerebral artery segmentation method provided by the invention may also have the following technical feature: the distribution model is a single distribution chosen from the Gaussian, Poisson and uniform distributions, or a superposition of several identical or different distributions among them.
The fully automatic cerebral artery segmentation method provided by the invention may also have the following technical feature: when the connected-domain list is screened, the connected-domain threshold among the parameters represents the smallest allowed connected-domain volume, and the threshold also ensures that the extracted connected domains lie within the first 10 entries of the sorted connected-domain list.
The fully automatic cerebral artery segmentation method provided by the invention may also have the following technical feature: the growth threshold is the threshold at which the computed distribution model covers at least a specified percentage of the distribution.
The invention also provides a fully automatic cerebral artery segmentation device for performing cerebral vessel segmentation on a two-dimensional or three-dimensional brain image under test, characterized by comprising: an image enhancement part, which applies a local enhancement algorithm to the vessel regions of the brain image under test to raise their contrast against the background and form an enhanced image; a parameter adaptive selection part, which stores a pre-trained machine learning classifier and a subclass-parameter comparison table, extracts a set feature vector from the gray histogram of the histogram-equalized brain image under test, inputs it into the classifier to obtain a subclass label, and searches the comparison table with this label to extract the parameters with the highest distribution ratio within the subclass; a threshold segmentation part, which threshold-segments the enhanced image with the parameters to obtain a threshold-segmented image; an adaptive skull seed point extraction part, which extracts a key slice from the brain image under test and uses a two-dimensional bounding box to extract points on the skull as first seed points; a skull removal part, which obtains a skull segmentation result by region growing from the parameters and the first seed points and subtracts it from the threshold-segmented image to obtain a skull-removed image; an automatic connected-domain screening part, which analyzes the volumes of all connected domains in the skull-removed image, arranges them from largest to smallest into a connected-domain list, screens the list with the parameters to obtain the extracted connected domains, and removes the unselected domains from the skull-removed image; an adaptive threshold statistics part, which tallies the gray values of the voxels of the enhanced image within the extracted connected domains into a gray-level distribution and computes upper and lower region-growing threshold limits from it through a distribution model; a region growing part, which takes the extracted connected domains as second seed points and performs region growing in the enhanced image according to the second seed points and the upper and lower threshold limits; and a uniform dilation part, which binarizes the grown region, uniformly dilates the binarized result with a dilation algorithm, and extracts the corresponding region of the brain image under test from the dilated region to obtain the segmentation result. The machine learning classifier and the subclass-parameter comparison table are obtained in advance by the training steps T1 to T5 described above for the method.
Action and Effect of the invention
With the fully automatic cerebral artery segmentation method of the invention, a machine learning classifier is trained in advance and a subclass-parameter comparison table is established; the enhanced brain image under test is assigned a subclass and its parameters are extracted from the comparison table, after which threshold segmentation, skull removal, region growing, dilation and other processing are carried out automatically according to those parameters, achieving fully automatic brain image segmentation without manual intervention. Meanwhile, the machine learning classifier and the subclass-parameter comparison table used in the method can keep optimizing the segmentation parameters through learning, further improving the method's segmentation of the cerebral vessels. In addition, the method suits a variety of brain image acquisition devices: for each device, only a small amount of data needs to be learned to obtain the parameters for fully automatic cerebral MRA vessel segmentation, so the segmentation process completes automatically and is suitable for batch operation.
Drawings
FIG. 1 is a flow chart of a method for fully automatically segmenting cerebral image arteries according to an embodiment of the present invention;
FIG. 2 is a flow chart of the pre-training process in an embodiment of the invention;
FIG. 3 is a schematic diagram of a brain image to be measured according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of thresholding an image in an embodiment of the invention;
FIG. 5 is a schematic illustration of a skull removed image in an embodiment of the invention;
FIG. 6 is a schematic illustration of a region growing image in an embodiment of the invention; and
FIG. 7 is a schematic diagram of a segmentation result image in an embodiment of the present invention.
Detailed Description
To make the technical means, creative features, objectives and effects of the invention easy to understand, the fully automatic cerebral artery segmentation method and device of the invention are described below with reference to the embodiments and drawings.
< example >
Fig. 1 is a flowchart of a method for fully automatically segmenting a cerebral image artery according to an embodiment of the present invention.
As shown in fig. 1, the method for fully automatically segmenting the artery of the brain image comprises the following steps:
Step S1: a local enhancement algorithm is used to enhance the vessel regions in the brain image under test and raise their contrast against the background.
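As an illustration of the contrast stretch in step S1, the embodiment later applies a sigmoid filter over the gray range 100 to 500; the exact filter form, window parameters and output range in the sketch below are assumptions, not taken from the patent.

```python
import numpy as np

def sigmoid_enhance(image, low=100.0, high=500.0, out_min=0.0, out_max=1024.0):
    """Sigmoid intensity mapping that stretches contrast inside [low, high].

    Gray values near the centre of the window are spread apart, while values
    far below `low` or above `high` are compressed toward the output extremes.
    The window 100-500 follows the embodiment; the filter form itself is an
    assumption (the patent only names "a sigmoid filter").
    """
    image = np.asarray(image, dtype=np.float64)
    center = (low + high) / 2.0   # centre of the enhancement window
    width = (high - low) / 6.0    # slope of the sigmoid transition
    s = 1.0 / (1.0 + np.exp(-(image - center) / width))
    return out_min + (out_max - out_min) * s
```

Values at the window centre map to the middle of the output range, so vessel voxels inside the window gain contrast against the darker background.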
Step S2: the set feature vector of the gray histogram is extracted from the histogram-equalized brain image under test and input into the pre-trained machine learning classifier M to obtain a subclass label; the pre-established subclass-parameter comparison table is then searched with this label to extract the parameters with the highest distribution ratio within the subclass.
In this embodiment, the machine learning classifier M is a classifier obtained with a supervised learning method, such as an SVM and its variants, logistic regression, or AdaBoost. With supervised learning, the input training set must be labeled manually; the classification method then automatically solves for classification functions or implicit functions according to the labeled categories, yielding functions or function combinations that distinguish the categories, from which the machine learning classifier M is formed.
In this embodiment, the parameters comprise a segmentation threshold (a, b), a skull threshold (c, d), a connected-domain threshold (e) and a growth threshold (f, g). The subclass-parameter comparison table is a data table of subclass labels and their corresponding parameters; one example is as follows:
TABLE 1 example of subclass-parameter lookup tables
(Table 1 is reproduced as an image in the original publication; it lists each subclass label with its corresponding parameter values a to g.)
In this embodiment, the machine learning classifier M and the subclass-parameter comparison table used for the adaptive parameter selection of step S2 are obtained in advance by a pre-training process.
FIG. 2 is a flow chart of a pre-training process in an embodiment of the invention.
As shown in fig. 2, the pre-training process includes the following steps:
Step T1: histogram equalization is carried out on a number of training images in turn to obtain the corresponding training gray-level distribution histograms.
In this embodiment, the training images are brain MRA image data; during histogram equalization, the gray-value distribution of the data is normalized to the range 0 to 1024.
Step T2: the training-set feature vector T of each training gray-level distribution histogram is extracted. The vector T contains features such as the envelope gradient curve, envelope inflection-point features, maximum-extreme-point coordinates, entropy and maximum entropy.
In step T2 of the present embodiment, after each feature of the training image's gray-level distribution histogram is extracted, the features are converted into vectors, which are then concatenated in a fixed order to form the training-set feature vector T. For example, after extracting the envelope gradient curve (points t1, t2, t3, t4, t5), the envelope inflection-point feature (point t3), the maximum-extreme-point coordinate (point t3), the entropy values (s1, s2, s3, s4, s5) and the maximum entropy (s3), the histogram yields the set feature vector T = [t1, t2, t3, t4, t5, t3, t3, s1, s2, s3, s4, s5, s3].
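A fixed-order feature vector of this kind might be assembled as in the sketch below; the concrete feature definitions (finite-difference gradient for the envelope, sign changes for inflection points, Shannon entropy of the normalized counts) are illustrative assumptions, since the patent does not define them precisely.

```python
import numpy as np

def histogram_features(hist):
    """Assemble a fixed-order feature vector from a gray-level histogram.

    The patent lists envelope gradient, inflection points, the maximum
    extreme point, entropy and maximum entropy as features; the definitions
    below are illustrative stand-ins.
    """
    hist = np.asarray(hist, dtype=np.float64)
    p = hist / hist.sum()                        # normalize to a distribution
    grad = np.diff(hist)                         # envelope gradient curve
    inflection = np.diff(np.sign(np.diff(hist))) # sign changes mark inflections
    peak_idx = int(np.argmax(hist))              # maximum-extreme-point coordinate
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())   # Shannon entropy, in bits
    max_entropy = float(np.log2(hist.size))      # entropy of a uniform histogram
    # fixed-order concatenation into one feature vector
    return np.concatenate([grad, inflection, [peak_idx, entropy, max_entropy]])
```

Concatenating in a fixed order keeps the vector layout identical across training images, which the downstream clustering and classifier require.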
Step T3: each training image is segmented manually, and the training parameters, including the segmentation threshold, skull threshold, connected-domain threshold and growth threshold, are recorded during segmentation.
In this embodiment, the manual segmentation process specifically comprises:
a) enhancing the image over the blood-vessel gray range 100-500 with a sigmoid filter;
b) selecting the segmentation threshold (a = 200, b = 300);
c) applying threshold segmentation;
d) selecting the skull threshold (c = 280, d = 300);
e) selecting the skull seed points;
f) removing the skull by region growing from the seed points;
g) analyzing the connected domains, computing the volume of each, and sorting the volumes from largest to smallest;
h) selecting the connected-domain threshold: according to the sorted order, choosing from the first 10 connected domains the three domains l1, l2 and l3 whose volume exceeds e = 5000 as seeds;
i) gathering statistics over the connected domains: tallying the pixel values of all points in l1, l2 and l3 and computing their mean μ and standard deviation σ;
j) selecting the growth threshold: according to a Gaussian distribution model, taking the two values μ - σ and μ + σ as the region-growing thresholds;
k) obtaining the vessel segmentation result by region growing with l1, l2 and l3 as seeds and μ - σ and μ + σ as thresholds; the result covers 70% of the cerebral artery region.
In this embodiment, since the distribution model used is Gaussian, the growth thresholds f and g take the two values μ - σ and μ + σ. In other embodiments, f and g are selected by computing, for the chosen distribution model, the thresholds at which the model covers a distribution range above a specified percentage.
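The μ ± σ rule and its percentage-coverage generalization can be sketched as follows; the bisection inversion of the Gaussian coverage function is an illustrative choice, not a procedure stated in the patent.

```python
import math

def growth_thresholds(values, coverage=0.6827):
    """Derive the region-growing band [mu - k*sigma, mu + k*sigma].

    With the default coverage (about 68.27%), k is 1 and the band reduces to
    the embodiment's mu +/- sigma; for another target coverage, k is found as
    the matching Gaussian quantile. Illustrative sketch only.
    """
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / n)
    # Coverage of [mu - k*sigma, mu + k*sigma] under a Gaussian is
    # erf(k / sqrt(2)); invert by bisection to find k for the target coverage.
    lo_k, hi_k = 0.0, 10.0
    for _ in range(60):
        k = (lo_k + hi_k) / 2.0
        if math.erf(k / math.sqrt(2.0)) < coverage:
            lo_k = k
        else:
            hi_k = k
    k = (lo_k + hi_k) / 2.0
    return mu - k * sigma, mu + k * sigma
```

The same structure would accommodate a Poisson or superposed model by swapping in that model's coverage function.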
Step T4: the training-set feature vector and training parameters of each training image are taken together as a data set, and the data sets are classified by the distances between their training parameters, the data sets at minimum distance being gathered into one class, yielding a number of different subclasses.
Step T4 of this embodiment is a parameter-association statistics step: the many cases of manually segmented parameters (a, b, c, d, e, f, g) and their corresponding set feature vectors T are hierarchically clustered according to the Euclidean distances (formula below) between the parameters of different cases, the cases at minimum distance being gathered into one class (each case being the data set formed, for one training image, by the training-set feature vector and training parameters obtained above), yielding a number of different subclasses:
d(x, y) = sqrt( sum_{i=1}^{n} (x_i - y_i)^2 )
in the formula, d (x, y) is the Euclidean distance between x and y, x and y are feature vectors of different cases, and n is the rank number of the feature vectors.
In other embodiments, other distance calculation methods may be used to complete the clustering of the subclasses.
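For illustration, the Euclidean distance above and a greedy single-linkage agglomeration might look like the sketch below; it is a stand-in for whatever hierarchical clustering implementation the inventors used, under the assumption of single linkage.

```python
import math

def euclidean(x, y):
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2), matching the formula above."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cluster_cases(params, n_subclasses):
    """Greedy agglomerative clustering of manual-segmentation parameter sets.

    Repeatedly merges the two clusters at the smallest single-linkage
    distance until `n_subclasses` remain. Illustrative sketch only.
    """
    clusters = [[i] for i in range(len(params))]
    while len(clusters) > n_subclasses:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(params[a], params[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge the closest pair of clusters
        del clusters[j]
    return clusters
```

Each returned cluster is one subclass; its member indices point back to the cases whose training parameters populate the comparison table.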
Step T5: the feature vectors of each subclass are extracted and those belonging to the same subclass are given the same label; a machine learning classifier is then trained from all labeled feature vectors with a machine learning classification method, and the corresponding subclass-parameter comparison table is established from each subclass and its training parameters.
In this embodiment, an SVM (support vector machine) is used for training, yielding the SVM classifier M (i.e., the machine learning classifier M).
Through the above steps, the machine learning classifier M and the subclass-parameter comparison table are obtained and can be applied to the adaptive parameter selection of step S2.
In the present embodiment, taking the image shown in fig. 3 as an example, after the processing of steps S1 and S2, step S2 yields the subclass label class 3, and the parameters with the highest distribution ratio within that subclass, [a5, b5, c5, d5, e5, f5, g5], are extracted from the comparison table. Processing then continues with step S3.
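The lookup in step S2 amounts to a table query that returns the most frequent parameter tuple ("highest distribution ratio") of the predicted subclass; all concrete table values in the sketch below are hypothetical.

```python
from collections import Counter

# Hypothetical comparison table: each subclass label maps to the parameter
# tuples (a, b, c, d, e, f, g) of its training cases. Values are invented
# for illustration and do not come from the patent's Table 1.
SUBCLASS_TABLE = {
    3: [(200, 300, 280, 300, 5000, 150, 400),
        (200, 300, 280, 300, 5000, 150, 400),
        (210, 310, 285, 305, 4500, 160, 410)],
}

def select_parameters(label, table=SUBCLASS_TABLE):
    """Return the parameter set with the highest share inside the subclass."""
    counts = Counter(table[label])
    params, _ = counts.most_common(1)[0]
    return params
```

In use, the classifier M supplies `label`, and the returned tuple drives steps S3 through S9.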
In step S3, the enhanced image is thresholded using the parameters to obtain a thresholded image.
In step S3 of the present embodiment, the image under test is threshold-segmented with the segmentation threshold; the resulting threshold image is shown in fig. 4.
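A band-threshold reading of the segmentation threshold pair (a, b) can be sketched as follows; that the pair denotes a closed gray-value interval is an assumption, since the patent does not spell out the rule.

```python
import numpy as np

def threshold_segment(enhanced, a, b):
    """Binary mask of voxels whose enhanced gray value lies in [a, b].

    One plausible reading of the segmentation threshold (a, b); illustrative
    sketch, not the patent's stated rule.
    """
    enhanced = np.asarray(enhanced)
    return (enhanced >= a) & (enhanced <= b)
```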
Step S4: the key slice is extracted from the brain image under test, and a two-dimensional bounding box is used to extract points on the skull as the first seed points.
In this embodiment, the brain image to be measured is MRA or CTA data, which is generally three-dimensional data formed by stacking a plurality of slice layers. A key layer slice is a slice that contains valid information: for example, in a head CTA scan there are blank regions above the top of the head with no valid information, while the layers containing the head are the key layers.
In addition, a bounding box is obtained by wrapping an object with a polygon whose area is larger than the object and shrinking the polygon until its sides touch the object. This embodiment adopts a two-dimensional square bounding box to extract points p1, p2, p3 and p4 on the skull, located at the four contact points on the front, back, left and right of the skull in that slice.
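The four contact points can be found without an explicit shrinking loop: for a binary skull slice, the first foreground pixel reached from each side of the bounding box is the contact point. A sketch under that assumption, where the toy mask stands in for a real skull slice:

```python
import numpy as np

def skull_seed_points(slice_mask):
    """Return the four contact points of a shrinking 2-D square
    bounding box (top, bottom, left, right) on a binary skull slice."""
    ys, xs = np.nonzero(slice_mask)
    top    = (ys.min(), xs[ys == ys.min()][0])
    bottom = (ys.max(), xs[ys == ys.max()][0])
    left   = (ys[xs == xs.min()][0], xs.min())
    right  = (ys[xs == xs.max()][0], xs.max())
    return top, bottom, left, right

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1  # toy "skull" blob
p1, p2, p3, p4 = skull_seed_points(mask)
```

Any one of the returned points can then serve as the first seed point for the skull region growing of step S5.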
And step S5, obtaining a skull segmentation result by adopting a region growing method based on the parameters and the first seed point, and then subtracting the skull segmentation result from the threshold segmentation image to obtain a skull removal image.
In step S5 of the present embodiment, the skull threshold value in the parameters is used when the skull segmentation result is generated, and the finally obtained skull removal image is as shown in fig. 5.
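Steps S5 can be sketched as a breadth-first flood fill from the seed, followed by subtracting the grown skull from the threshold image. The 2-D image, seed, and thresholds below are toy values; a real implementation would operate on 3-D data with the learned skull threshold:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, lo, hi):
    """Flood-fill from a seed, accepting 4-connected neighbours whose
    intensity lies in [lo, hi]; returns the grown binary mask."""
    mask = np.zeros(img.shape, dtype=np.uint8)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if mask[y, x] or not (lo <= img[y, x] <= hi):
            continue
        mask[y, x] = 1
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                q.append((ny, nx))
    return mask

img = np.array([[9.0, 9.0, 1.0],
                [9.0, 1.0, 1.0],
                [1.0, 1.0, 9.0]])
skull = region_grow(img, (0, 0), lo=8, hi=10)   # grows over the connected 9s
thresh = (img >= 1).astype(np.uint8)            # threshold segmentation image
skull_removed = thresh & (1 - skull)            # subtract the skull result
```

Note the bottom-right 9 survives: it is bright but not connected to the skull seed.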
And step S6, analyzing the volumes of all connected domains in the skull removal image, arranging the connected domains according to the volumes from large to small to form a connected domain list, screening the connected domain list by using parameters to obtain extracted connected domains, and further removing the unselected connected domains in the skull removal image.
In this embodiment, when the parameters are used to screen connected domains, the connected domain threshold (e) represents the minimum allowed volume of a connected domain, and after the existing connected domain volumes are sorted from large to small, the threshold (e) should ensure that the retained connected domains lie within the top 10 positions of the sorted list.
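The volume analysis and screening of step S6 can be sketched with `scipy.ndimage`; the blobs and the connected-domain threshold e below are illustrative:

```python
import numpy as np
from scipy import ndimage

# Hypothetical skull-removed binary image with three blobs of
# different volumes (areas, in this 2-D toy case).
img = np.zeros((8, 8), dtype=np.uint8)
img[0:4, 0:4] = 1   # volume 16
img[6:8, 0:2] = 1   # volume 4
img[7, 7] = 1       # volume 1

labeled, n = ndimage.label(img)
volumes = ndimage.sum(img, labeled, index=range(1, n + 1))

# Sort connected domains from large to small, then keep those at or
# above the connected-domain threshold e; the rest are removed.
e = 4
keep = [lab for lab, v in sorted(zip(range(1, n + 1), volumes),
                                 key=lambda t: -t[1]) if v >= e]
screened = np.isin(labeled, keep).astype(np.uint8)
```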
And step S7, counting the voxel gray levels in the same range of the corresponding enhanced image in the extracted connected domain to obtain a gray level distribution value, and calculating an upper limit threshold and a lower limit threshold of the region growth according to the gray level distribution value through a distribution model.
In step S7 of the present embodiment, the gray level distribution values are the mean, standard deviation, variance, and the like of the gray level distribution, and the distribution model used is a Gaussian distribution model.
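Under a Gaussian model, the upper and lower growth bounds reduce to mu ± k·sigma, with k chosen so the interval covers a specified percentage of the distribution (k = 2 covers roughly 95%). The voxel values and the coverage factor k here are illustrative:

```python
import numpy as np

# Hypothetical voxel intensities sampled from the enhanced image
# inside the extracted connected domain.
voxels = np.array([100.0, 102.0, 98.0, 101.0, 99.0, 103.0, 97.0])
mu, sigma = voxels.mean(), voxels.std()

# Gaussian-model growth bounds for step S8: mu +/- k * sigma.
k = 2.0
lower, upper = mu - k * sigma, mu + k * sigma
```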
Step S8 is to take the extracted connected component as a second seed point, and perform region growing on the enhanced image according to the second seed point and the upper and lower limit thresholds calculated in step S7. In this embodiment, a region growing image after region growing is shown in fig. 6.
And step S9, binarizing the region growing part obtained by region growing, and uniformly expanding according to the binarized result by using an expansion algorithm, so as to extract the corresponding region in the brain image to be detected from the expanded region, thereby obtaining a segmentation result.
In this embodiment, the region growing portion is the grown region extracted from the region growing image obtained in step S8; after binarization, this portion is used as the foreground and then expanded using a dilation algorithm.
Further, when expanding with the dilation algorithm, the present embodiment performs uniform dilation using a spherical operator with a radius of 30. In other embodiments, the shape of the dilation operator has no effect on the segmentation result of the method.
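A sketch of uniform dilation with a circular structuring element (shown in 2-D for brevity; the embodiment's radius-30 spherical operator would be the 3-D analogue, and the tiny mask and radius here are toy values):

```python
import numpy as np
from scipy import ndimage

def uniform_dilate(mask, radius):
    """Uniformly dilate a binary mask with a circular structuring
    element of the given radius."""
    r = int(radius)
    zy, zx = np.mgrid[-r:r + 1, -r:r + 1]
    ball = (zy ** 2 + zx ** 2) <= r ** 2
    return ndimage.binary_dilation(mask, structure=ball).astype(np.uint8)

mask = np.zeros((7, 7), dtype=np.uint8)
mask[3, 3] = 1                       # binarized region growing result
grown = uniform_dilate(mask, radius=2)
# The segmentation result is then read from the original brain image
# inside this expanded region, preserving vessel-wall detail.
```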
In this embodiment, the expanded segmentation result is shown in fig. 7, and when browsing and checking are performed, all details of the blood vessel region can be retained to the maximum extent, so that the loss of the blood vessel details due to the imperfection of the region growing parameter is avoided.
Effects and benefits of the embodiments
According to the full-automatic segmentation method for the brain image artery provided by this embodiment, a machine learning classifier is trained in advance and a subclass-parameter comparison table is established. The enhanced brain image to be detected then undergoes subclass identification and parameter extraction against the comparison table, after which threshold segmentation, skull removal, region growing, dilation and other processing are applied automatically according to the parameters, achieving fully automatic segmentation of the brain image without manual intervention. Meanwhile, the machine learning classifier and the subclass-parameter comparison table used in the method can continuously optimize the segmentation parameters through learning, further improving the segmentation of cerebral vessels. In addition, the method is applicable to various acquisition devices of brain images: for each device, the parameters for fully automatic vessel segmentation of brain MRA images can be obtained by learning only a small amount of data, so the segmentation process completes automatically and is suitable for batch operation.
In addition, in the embodiment, the histogram equalization step ensures that images with different gray scale distributions and different acquisition parameters can be segmented using the same parameters, which improves the adaptability of the method.
In addition, in the embodiment, the uniform dilation in step S9 ensures that the details of the blood vessel wall are completely retained. Meanwhile, the original data is used throughout the segmentation process and the voxel characteristics of the data are never changed, which guarantees that the segmentation result is consistent with the imaging result.
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.
For example, in the embodiment, the result obtained by segmenting the brain image to be measured contains blood vessels of normal anatomical structure. In other embodiments, the full-automatic brain image artery segmentation method of the present invention can also achieve a good segmentation effect for anatomical or physiological variations such as aneurysms and cysts.
For another example, in the embodiment, the brain image to be measured is three-dimensional image data. The full-automatic segmentation method for the cerebral image artery can also automatically segment two-dimensional image data.
In addition, the pre-training process generally uses a group of brain MRA image data; the more data the group contains, that is, the more cases it includes, the better the segmentation effect of the pre-trained parameter model. In general, the segmentation effect is best when the data used for pre-training and the data used for segmentation are both collected on the same brand of equipment, under the same acquisition environment and with the same acquisition parameters (but from different groups of people). When the pre-training data and the segmentation data come from different sources, segmentation can still be performed, but the accuracy may be reduced.
In addition, the embodiment provides a full-automatic segmentation method for the cerebral image artery, which performs parameter self-adaptive selection on the image through a pre-trained machine learning classifier and a pre-established subclass-parameter comparison table, and then applies a series of processing steps to the brain image to be detected according to the parameters to complete the segmentation. For practical convenience, a corresponding computer program may be designed according to this method to form a full-automatic brain image artery segmentation apparatus. The apparatus includes an image enhancement unit for performing step S1, a parameter adaptive selection unit for performing step S2, a threshold segmentation unit for performing step S3, an adaptive skull seed point extraction unit for performing step S4, a skull removal unit for performing step S5, an automatic connected domain screening unit for performing step S6, an adaptive threshold statistics unit for performing step S7, a region growing unit for performing step S8, and a uniform expansion unit for performing step S9. The parameter adaptive selection unit stores the pre-constructed machine learning classifier and subclass-parameter comparison table so that they can be called during execution. The working principle of these components is consistent with the actions described in the corresponding steps and will not be repeated.

Claims (5)

1. A full-automatic segmentation method of cerebral image artery is used for carrying out cerebral vessel segmentation on a two-dimensional or three-dimensional brain image to be detected, and is characterized by comprising the following steps:
step S1, a local enhancement algorithm is adopted to enhance the blood vessel area in the brain image to be detected, and the contrast between the blood vessel area and the background is improved to form an enhanced image;
step S2, extracting a set feature vector of the gray histogram according to the gray histogram of the brain image to be detected after histogram equalization, inputting the set feature vector into a machine learning classifier trained in advance to obtain a subclass label, and further searching a pre-established subclass-parameter comparison table according to the subclass label to extract a parameter with the highest distribution ratio in the subclass, wherein the parameter comprises a segmentation threshold, a skull threshold, a connected domain threshold and a growth threshold;
step S3, performing threshold segmentation on the enhanced image by using the parameters to obtain a threshold segmentation image;
step S4, extracting a key layer slice in the brain image to be detected, and extracting a point on the skull as a first seed point by adopting a two-dimensional bounding box;
step S5, obtaining a skull segmentation result by adopting a region growing method based on the parameter and the first seed point, and then subtracting the skull segmentation result from the threshold segmentation image to obtain a skull removal image;
step S6, analyzing the area or volume of all connected domains in the skull removal image, arranging the connected domains from large to small according to the area or volume to form a connected domain list, screening the connected domain list by using the parameters to obtain extracted connected domains, and further removing the unselected connected domains in the skull removal image;
step S7, counting the voxel gray levels in the same range of the corresponding enhanced image in the extracted connected domain to obtain a gray level distribution value, and calculating an upper limit threshold and a lower limit threshold of region growth according to the gray level distribution value through a distribution model;
step S8, the extracted connected domain is used as a second seed point, and region growing is carried out in the enhanced image according to the second seed point and the upper and lower limit threshold values;
step S9, binarizing the region growing part obtained by region growing, uniformly expanding according to the binarized result by using an expansion algorithm, extracting the corresponding region in the brain image to be detected by the expanded region to obtain a segmentation result,
the machine learning classifier and the subclass-parameter comparison table are obtained by pre-training the following steps:
step T1, histogram equalization is carried out on a plurality of training images for training in sequence to obtain corresponding gray level distribution histograms for training;
step T2, extracting training set feature vectors at least containing envelope gradient curves, envelope inflection point features, maximum extreme point coordinates, entropies and maximum entropies of each training gray level distribution histogram;
step T3, performing manual segmentation on each training image, and setting training parameters including a segmentation threshold, a skull threshold, a connected domain threshold and a growth threshold in the segmentation process;
step T4, respectively taking the training set feature vector and the training parameters corresponding to each training image as data sets, and classifying the data sets according to the distances between the training parameters of different data sets so as to gather the data sets with the minimum distances into one class and obtain a plurality of different subclasses;
and T5, extracting the feature vector of each subclass, endowing the feature vector belonging to the same subclass with the same label, further training according to all the feature vectors with labels by adopting a machine learning classification method to obtain the machine learning classifier, and simultaneously establishing a corresponding subclass-parameter comparison table according to each subclass and the corresponding training parameters.
2. The method for full-automatic segmentation of cerebral image artery according to claim 1, wherein:
the distribution model is a single distribution among the Gaussian distribution, the Poisson distribution and the uniform distribution, or a superposition of multiple identical or different distributions among the Gaussian distribution, the Poisson distribution and the uniform distribution.
3. The method for full-automatic segmentation of cerebral image artery according to claim 1, wherein:
wherein the connected component threshold of the parameter represents the volume of the smallest allowed connected component when the list of connected components is filtered,
and the connected component threshold also ensures that the extracted connected components lie within the top 10 positions of the sorted connected component list.
4. The method for full-automatic segmentation of cerebral image artery according to claim 1, wherein:
and the growth threshold is the threshold at which the distribution model covers at least a specified percentage of the distribution.
5. A full-automatic segmentation device for cerebral image artery, used for performing cerebral vessel segmentation on a two-dimensional or three-dimensional brain image to be detected, characterized by comprising:
the image enhancement part is used for enhancing the blood vessel area in the brain image to be detected by adopting a local enhancement algorithm and improving the contrast between the blood vessel area and the background so as to form an enhanced image;
the parameter self-adaptive selection part is stored with a pre-trained machine learning classifier and a subclass-parameter comparison table and is used for extracting an aggregate characteristic vector of the gray histogram according to the gray histogram of the brain image to be tested after histogram equalization, inputting the aggregate characteristic vector into the machine learning classifier to obtain a subclass label, and further searching the subclass-parameter comparison table according to the subclass label to extract a parameter with the highest distribution ratio in the subclass;
a threshold value dividing unit configured to perform threshold value division on the enhanced image using the parameter to obtain a threshold value divided image;
the adaptive skull seed point extraction part is used for extracting a key layer slice in the brain image to be detected and extracting a point on the skull as a first seed point by adopting a two-dimensional bounding box;
a skull removing part, configured to obtain a skull segmentation result by using a region growing method based on the parameter and the first seed point, and then subtract the skull segmentation result from the threshold segmentation image to obtain a skull removal image;
the automatic connected domain screening part is used for analyzing the areas or the volumes of all connected domains in the skull removal image, arranging the connected domains from large to small according to the areas or the volumes to form a connected domain list, screening the connected domain list by using the parameters to obtain extracted connected domains, and further removing the unselected connected domains in the skull removal image;
the self-adaptive threshold value statistical part is used for counting the voxel gray levels in the same range of the corresponding enhanced image in the extracted connected domain to obtain a gray level distribution numerical value, and simultaneously calculating an upper limit threshold value and a lower limit threshold value of region growth according to the gray level distribution numerical value through a distribution model;
a region growing unit configured to take the extracted connected component as a second seed point and perform region growing in the enhanced image according to the second seed point and the upper and lower threshold values;
a uniform expansion part which binarizes the region growing part obtained by the region growing and uniformly expands according to the binarization result by using an expansion algorithm so as to extract the corresponding region in the brain image to be detected from the expanded region and obtain a segmentation result,
the machine learning classifier and the subclass-parameter comparison table are obtained by pre-training the following steps:
step T1, histogram equalization is carried out on a plurality of training images for training in sequence to obtain corresponding gray level distribution histograms for training;
step T2, extracting training set feature vectors at least containing envelope gradient curves, envelope inflection point features, maximum extreme point coordinates, entropies and maximum entropies of each training gray level distribution histogram;
step T3, performing manual segmentation on each training image, and setting training parameters including a segmentation threshold, a skull threshold, a connected domain threshold and a growth threshold in the segmentation process;
step T4, respectively taking the training set feature vector and the training parameters corresponding to each training image as data sets, and classifying the data sets according to the distances between the training parameters of different data sets so as to gather the data sets with the minimum distances into one class and obtain a plurality of different subclasses;
and T5, extracting the feature vector of each subclass, endowing the feature vector belonging to the same subclass with the same label, further training according to all the feature vectors with labels by adopting a machine learning classification method to obtain the machine learning classifier, and simultaneously establishing a corresponding subclass-parameter comparison table according to each subclass and the corresponding training parameters.
CN201911118499.5A 2019-11-15 2019-11-15 Full-automatic segmentation method and device for cerebral image artery Active CN112150477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911118499.5A CN112150477B (en) 2019-11-15 2019-11-15 Full-automatic segmentation method and device for cerebral image artery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911118499.5A CN112150477B (en) 2019-11-15 2019-11-15 Full-automatic segmentation method and device for cerebral image artery

Publications (2)

Publication Number Publication Date
CN112150477A CN112150477A (en) 2020-12-29
CN112150477B true CN112150477B (en) 2021-09-28

Family

ID=73892145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911118499.5A Active CN112150477B (en) 2019-11-15 2019-11-15 Full-automatic segmentation method and device for cerebral image artery

Country Status (1)

Country Link
CN (1) CN112150477B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862731A (en) * 2021-01-21 2021-05-28 北京科技大学 Full-automatic blood vessel extraction method of TOF image
CN116342588B (en) * 2023-05-22 2023-08-11 徕兄健康科技(威海)有限责任公司 Cerebrovascular image enhancement method
CN116721354B (en) * 2023-08-08 2023-11-21 中铁七局集团电务工程有限公司武汉分公司 Building crack defect identification method, system and readable storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106296653A (en) * 2016-07-25 2017-01-04 浙江大学 Brain CT image hemorrhagic areas dividing method based on semi-supervised learning and system
CN109190690A (en) * 2018-08-17 2019-01-11 东北大学 The Cerebral microbleeds point detection recognition method of SWI image based on machine learning
CN109949322A (en) * 2019-03-27 2019-06-28 中山大学 A kind of cerebrovascular image partition method based on magnetic resonance T1 enhancing image

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
CN101334895B (en) * 2008-08-07 2011-09-14 清华大学 Image division method aiming at dynamically intensified mammary gland magnetic resonance image sequence
CN102737379A (en) * 2012-06-07 2012-10-17 中山大学 Captive test (CT) image partitioning method based on adaptive learning
US9959486B2 (en) * 2014-10-20 2018-05-01 Siemens Healthcare Gmbh Voxel-level machine learning with or without cloud-based support in medical imaging
CN105096332B (en) * 2015-08-25 2019-06-28 上海联影医疗科技有限公司 Medical image cutting method and device
CN104933711B (en) * 2015-06-10 2017-09-29 南通大学 A kind of automatic fast partition method of cancer pathology image
CN105787958A (en) * 2016-05-20 2016-07-20 东南大学 Partition method for kidney artery CT contrastographic picture vessels based on three-dimensional Zernike matrix
US10492723B2 (en) * 2017-02-27 2019-12-03 Case Western Reserve University Predicting immunotherapy response in non-small cell lung cancer patients with quantitative vessel tortuosity
CN107016677B (en) * 2017-03-24 2020-01-17 北京工业大学 Cloud picture segmentation method based on FCN and CNN
CN107230204B (en) * 2017-05-24 2019-11-22 东北大学 A kind of method and device for extracting the lobe of the lung from chest CT image
CN107292312B (en) * 2017-06-19 2021-06-22 中国科学院苏州生物医学工程技术研究所 Tumor CT image processing method
CN108765430B (en) * 2018-05-24 2022-04-08 西安思源学院 Cardiac left cavity region segmentation method based on cardiac CT image and machine learning

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106296653A (en) * 2016-07-25 2017-01-04 浙江大学 Brain CT image hemorrhagic areas dividing method based on semi-supervised learning and system
CN109190690A (en) * 2018-08-17 2019-01-11 东北大学 The Cerebral microbleeds point detection recognition method of SWI image based on machine learning
CN109949322A (en) * 2019-03-27 2019-06-28 中山大学 A kind of cerebrovascular image partition method based on magnetic resonance T1 enhancing image

Also Published As

Publication number Publication date
CN112150477A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
Telrandhe et al. Detection of brain tumor from MRI images by using segmentation & SVM
CN106296653B (en) Brain CT image hemorrhagic areas dividing method and system based on semi-supervised learning
Joshi et al. Classification of brain cancer using artificial neural network
US10303986B2 (en) Automated measurement of brain injury indices using brain CT images, injury data, and machine learning
Taghanaki et al. Geometry-based pectoral muscle segmentation from MLO mammogram views
CN112150477B (en) Full-automatic segmentation method and device for cerebral image artery
CN109635846A (en) A kind of multiclass medical image judgment method and system
Gordillo et al. A new fuzzy approach to brain tumor segmentation
Samanta et al. Computer aided diagnostic system for automatic detection of brain tumor through MRI using clustering based segmentation technique and SVM classifier
Viji et al. Performance evaluation of standard image segmentation methods and clustering algorithms for segmentation of MRI brain tumor images
Alagarsamy et al. Identification of Brain Tumor using Deep Learning Neural Networks
Nagtode et al. Two dimensional discrete Wavelet transform and Probabilistic neural network used for brain tumor detection and classification
Rampun et al. Breast density classification using multiresolution local quinary patterns in mammograms
Maheswari et al. A survey on computer algorithms for retinal image preprocessing and vessel segmentation
Teranikar et al. Feature detection to segment cardiomyocyte nuclei for investigating cardiac contractility
Tuan et al. 3D brain magnetic resonance imaging segmentation by using bitplane and adaptive fast marching
Neelakanteswara et al. Computer based advanced approach for MRI image classification using neural network with the texture features extracted
Shekhar et al. Image analysis for brain tumor detection from MRI images using wavelet transform
Azli et al. Ultrasound image segmentation using a combination of edge enhancement and kirsch’s template method for detecting follicles in ovaries
Prabin et al. AUTOMATIC SEGMENTATION OF LUNG CT IMAGES BY CC BASED REGION GROWING.
Devanathan et al. An optimal multilevel thresholding based segmentation and classification model for brain tumor diagnosis
Ion et al. Breast Cancer Images Segmentation using Fuzzy Cellular Automaton
Awang et al. An overview of segmentation and classification techniques: A survey of brain tumour-related research
Kalaiselvi et al. Knowledge based self initializing FCM algorithms for fast segmentation of brain tissues in magnetic resonance images
Hasan et al. Watershed-matching algorithm: a new pathway for brain tumor segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant