CN113255721B - Tumor peripheral surface auditory nerve recognition method based on machine learning


Info

Publication number
CN113255721B
Authority
CN
China
Prior art keywords
image
value
sample
voxel
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110393849.XA
Other languages
Chinese (zh)
Other versions
CN113255721A (en)
Inventor
仇翔
王佳凤
黄家浩
袁少楠
陈升炜
冯远静
陆星州
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202110393849.XA
Publication of CN113255721A
Application granted
Publication of CN113255721B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A tumor peripheral surface auditory nerve recognition method based on machine learning, aimed at the problem that, when MRI (magnetic resonance imaging) fiber reconstruction is performed for a patient, tumor compression deforms the auditory nerve so that the patient's anatomical structures cannot be recognized using a white matter atlas.

Description

Tumor peripheral surface auditory nerve recognition method based on machine learning
Technical Field
The invention relates to medical image processing, and in particular to a machine learning-based method for recognizing the auditory nerve on the tumor peripheral surface.
Background
Acoustic neuroma is a common benign intracranial tumor. Patients typically first present with tinnitus, followed by progressive hearing loss and, in some patients, deafness. When the tumor grows larger, the ipsilateral trigeminal nerve is also affected, producing symptoms such as facial paralysis, facial muscle twitching, facial numbness, and trigeminal neuralgia. In recent years, the surgical emphasis has gradually shifted toward removing the tumor while preserving facial and auditory nerve function. Studying the positional relationship between the facial auditory nerve and the tumor from MRI imaging information therefore provides an important reference for surgical decision-making.
White matter fiber reconstruction comprises two major parts: fiber direction estimation and fiber tracking. Early DTI imaging models assumed that each voxel contains only one nerve fiber orientation and therefore could not describe crossing, branching, fan-shaped, or bottleneck-shaped fiber structures within a voxel; fiber direction estimation for high-angular-resolution diffusion imaging was later proposed to address this limitation. Fiber tracking methods currently fall into two broad categories, deterministic and probabilistic. Despite great progress in fiber reconstruction technology, it does not by itself separate the tracked fibers into anatomically meaningful fiber bundle structures. To better apply fiber tracking results in the clinic, Lauren J. O'Donnell et al. proposed a method for automatically identifying neurosurgically critical white matter fiber bundles using a white matter atlas: the Euclidean distances between paired fibers (fibers of the new data and of the atlas) are computed and transformed into a similarity matrix; each fiber is represented as a point in a spectral embedding space, where similar fibers are typically embedded adjacent to each other; and a k-means algorithm clusters the embedded points to obtain fiber bundles with anatomical labels. The method has high identification accuracy and works well under ordinary conditions. In practice, however, fibers reconstructed from a patient's MRI information are deformed by tumor compression, the distance computation against the atlas then deviates substantially, and the anatomical structures to which the patient's fibers belong cannot be identified.
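As a rough, self-contained sketch of this atlas-style spectral clustering idea (the fiber arrays, the Gaussian similarity, and all parameters below are illustrative assumptions, not the published implementation):

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans

    def cluster_fibers(fiber_points, sigma=30.0, n_bundles=5, n_eig=5):
        # fiber_points: (n_fibers, n_samples, 3), each fiber resampled to a
        # fixed number of points so a pairwise Euclidean distance is defined.
        n = len(fiber_points)
        flat = fiber_points.reshape(n, -1)
        d = cdist(flat, flat)                          # pairwise fiber distances
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))       # similarity matrix
        deg = w.sum(axis=1)
        w_norm = w / np.sqrt(np.outer(deg, deg))       # normalized affinity
        _, vecs = np.linalg.eigh(w_norm)
        embedding = vecs[:, -n_eig:]                   # each fiber -> a point
        return KMeans(n_clusters=n_bundles, n_init=10).fit_predict(embedding)

    fibers = np.random.default_rng(0).random((100, 20, 3)) * 100  # stand-ins
    print(cluster_fibers(fibers)[:10])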
Disclosure of Invention
To address the problems in the prior art that the auditory nerve deforms under the influence of a tumor and cannot be identified, and that medical images are few in number, the invention provides a machine learning-based method for recognizing the auditory nerve on the tumor peripheral surface. The basic idea of the invention is to convert the fiber bundle identification problem into a voxel classification problem, which expands the total sample volume by a factor of thousands; features extracted from the voxels are used to train a learning model, and the model is then applied to new data to evaluate its generalization capability. The influence of other fiber bundles can largely be excluded in the data preprocessing stage.
The technical scheme adopted for solving the technical problems is as follows:
a machine learning-based method for identifying auditory nerves of a tumor peripheral surface, the method comprising the steps of:
step one: the data set is divided as follows:
each selected patient's image data comprises a T1 image, a T1-enhanced image, a T2 image, a DTI image, and a marked image; the patient image set is randomly divided into a training image set and a test image set according to a set proportion, as in the sketch below;
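A minimal sketch of such a patient-level random split (the ratio, seed, and identifiers are illustrative assumptions; the patent only states that a set proportion is used):

    import random

    def split_patients(patient_ids, train_ratio=0.8, seed=42):
        # Shuffle the patients, then cut the list at the set proportion.
        ids = list(patient_ids)
        random.Random(seed).shuffle(ids)
        cut = int(len(ids) * train_ratio)
        return ids[:cut], ids[cut:]

    train_ids, test_ids = split_patients([f"patient_{i:02d}" for i in range(10)])
    print(train_ids, test_ids)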
step two: the data were resampled as follows:
taking the marked image as a reference, resampling the T1 image, the T1 enhanced image and the T2 image to enable the voxel size of each image to be the same;
step three: extracting features and constructing a sample set, wherein the process is as follows:
the method comprises the steps of reading the resolution of a target image, carrying out self-adaptive slicing on the 3D target image to convert a three-dimensional problem into a two-dimensional problem, converting voxel coordinates of each image data into world coordinates, extracting characteristic values of corresponding coordinates of an original image by taking a marked image as a reference, wherein the characteristic values comprise an anisotropic fraction FA, a direction distribution function ODF, an average dispersion value MD, an axial dispersion value AD, a radial dispersion value RD, a T1 signal value, a T1 enhanced signal value and a T2 signal value, extracting labels from the marked image, wherein the labels comprise three parts of auditory nerve, auditory tumor and brain stem, and each voxel forms a sample to obtain a sample characteristic training set and a sample characteristic test set;
step four: sample feature training set data processing is carried out, and the process is as follows:
visualizing the training set obtained in step three, cleaning samples with outliers and missing values, and scaling the features so that the feature values are distributed between 0 and 1;
step five: model training and testing, the process is as follows:
and (3) putting the sample training set obtained in the step (IV) into a machine learning model for training by adopting a five-fold cross validation method, putting the sample testing set obtained in the step (III) into the model for classification prediction after training is finished, comparing the model with a label true value to obtain a confusion matrix, and evaluating the generalization capability of the model.
Further, in the first step, the data set dividing process is as follows:
converting the fiber identification problem into a voxel classification problem, with the voxel size unified to that of the marked image; the data comprise a T1 image, a T1-enhanced image, a T2 image, a DTI image, and a marked image, and each patient's data contain all of the above, wherein the T1, T1-enhanced, and T2 images provide voxel signal values, and the DTI image is used to extract the fractional anisotropy, orientation distribution function, mean diffusivity, axial diffusivity, and radial diffusivity; the marked image serves as the reference for extracting voxel features and for comparing the prediction results, with voxel category 1 denoting brainstem, 2 denoting fiber, and 3 denoting tumor; in the marked image the tumor is segmented by manual labeling, the facial auditory nerve is obtained by automatic fiber tracking, and the brainstem region is segmented by FreeSurfer.
In the second step, the data resampling process is as follows:
Taking the voxel spacing p_new of the marked image as the reference, and given the voxel spacing p_old and the image size s_old of the T1 image, the T1-enhanced image, and the T2 image, resampling makes the voxel spacing of all images identical; the new image size of the T1, T1-enhanced, and T2 images is calculated as:

s_new = s_old × p_old / p_new
in the third step, the process of extracting the features and constructing the sample set is as follows:
acquiring the resolution of the image, denoted l×w×h, and slicing the three-dimensional ROI image into l two-dimensional images each of resolution w×h, converting the three-dimensional problem into a two-dimensional one;
taking each voxel as one sample, converting the voxel coordinates of each image's data into world coordinates, and extracting labels from the marked image, the labels comprising three parts, facial auditory nerve, acoustic tumor, and brainstem, with the numbers "1", "2", and "3" as truth values; taking the world coordinates of the marked image as the reference, extracting the feature values at the corresponding coordinates of the original images, the features comprising the fractional anisotropy FA, the orientation distribution function ODF, the mean diffusivity MD, the axial diffusivity AD, the radial diffusivity RD, the T1 signal value, the T1-enhanced signal value, and the T2 signal value, thereby obtaining a sample feature training set and a sample feature test set;
the direction distribution function ODF is calculated by the following formula:
wherein,is the fiber orientation distribution->σ 2 Is the square of the width parameter and,is the sample convolution direction, g is the gradient direction;
obtaining characteristic values lambda in three directions by using DTI tensor matrix 1 ,λ 2 And lambda (lambda) 3 The anisotropy fraction FA, the average dispersion value MD, the axial dispersion value AD, the radial dispersion value RD are obtained by the following calculations, respectively:
AD=λ 1
thus, 8 features are extracted for each voxel, forming an N x 9 two-dimensional matrix, with column 9 being the tag truth value; in order to reduce the probability of overfitting, the sample sequence is disturbed to obtain a characteristic sample training set, wherein N is the total number of samples, and M is formed by the same theory i Two-dimensional matrix of x 9 columnsThe sample sequence is disordered, and a characteristic sample test set i is obtained, wherein M is i The last column is the label true value for the number of samples made for the ith patient.
In the fourth step, the sample feature training set data processing process is as follows:
the training set obtained in step three is visualized, samples with outliers and missing values are cleaned, and the features are scaled so that the feature values are distributed between 0 and 1, using the feature scaling formula:

f' = (f − f_min) / (f_max − f_min)

where f' is the feature value after scaling, f is the feature value before scaling, and f_max and f_min are respectively the maximum and minimum of the feature before scaling. The training data provided to the algorithm model are thus obtained.
In the fifth step, the model training and testing process is as follows:
5.1) Train the model: the first eight columns of the feature sample training set serve as the training data input of the machine learning classification model, and the ninth column as the training label truth-value input; a five-fold cross-validation method is adopted, the mean of the evaluation indexes over the five validation sets is taken as the standard, and the optimal model for the current training set is obtained through grid search and fine-tuning of the model parameters;
5.2) Test the model: the first eight columns of the feature sample test set serve as the data input of the machine learning classification model, and the ninth column as the truth-value input for comparison; comparing the predicted values output by the model with the truth values yields a confusion matrix containing multiple indexes for evaluating the quality of the model;
5.3) Pseudo-color processing is applied to the model prediction results to restore the predicted positional relationship of the facial auditory nerve, brainstem, and acoustic tumor, so that the three can be visualized.
The beneficial effect of the invention is that it effectively identifies the facial auditory nerve even when it is deformed under the influence of a tumor.
Drawings
Fig. 1 is a schematic diagram of a confusion matrix.
Detailed description of the preferred embodiments
To make the technical scheme of the invention clearer, the invention is further described below.
Referring to fig. 1, a machine learning-based method for recognizing the auditory nerve on the tumor peripheral surface can effectively identify the facial auditory nerve under the influence of a tumor and judge the positional relationship among the acoustic tumor, the facial auditory nerve, and the brainstem; the method comprises the following steps:
step one: the data set is divided as follows:
the invention converts the fiber identification problem into the voxel classification problem, the voxel size is unified by adopting the size of a marked image, the data comprises a T1 image, a T1 enhanced image, a T2 image, a DTI image and a marked image, and each patient data comprises the data; wherein the T1 image, the T1 enhanced image and the T2 image are used to provide voxel signal values and the DTI image is used to extract an anisotropy score, a directional distribution function, an average dispersion value, an axial dispersion value and a radial dispersion value. The marked image is mainly used as a reference for extracting voxel characteristics and is used for comparing a prediction result, and the invention uses a 1 for indicating that the voxel type is brainstem, a 2 for indicating that the voxel type is fiber and a 3 for indicating that the voxel type is tumor. Tumor segmentation in the marker image is from manual labeling, facial auditory nerves are from automatic fiber tracking, and brainstem areas are segmented by Freesurfer.
Step two: the data were resampled as follows:
Taking the voxel spacing p_new of the marked image as the reference, and given the voxel spacing p_old and the image size s_old of the T1 image, the T1-enhanced image, and the T2 image, resampling makes the voxel spacing of all images identical; the new image size of the T1, T1-enhanced, and T2 images is calculated as:

s_new = s_old × p_old / p_new

A sketch of this resampling follows.
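A minimal sketch implementing s_new = s_old × p_old / p_new with scipy (the linear interpolation order and the spacings shown are assumptions):

    import numpy as np
    from scipy.ndimage import zoom

    def resample_to_spacing(volume, p_old, p_new):
        # Per-axis zoom factor p_old / p_new yields size s_old * p_old / p_new.
        factors = np.asarray(p_old, float) / np.asarray(p_new, float)
        return zoom(volume, factors, order=1)   # order=1: linear interpolation

    t1 = np.random.rand(256, 256, 150)                    # stand-in T1 volume
    out = resample_to_spacing(t1, (1.0, 1.0, 1.2), (2.0, 2.0, 2.0))
    print(out.shape)   # approximately (128, 128, 90)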
step three: extracting features and constructing a sample set, wherein the process is as follows:
acquiring the resolution of the image, denoted l×w×h, and slicing the three-dimensional ROI image into l two-dimensional images each of resolution w×h, converting the three-dimensional problem into a two-dimensional one;
taking each voxel as one sample, converting the voxel coordinates of each image's data into world coordinates, and extracting labels from the marked image, the labels comprising three parts, facial auditory nerve, acoustic tumor, and brainstem, with the numbers "1", "2", and "3" as truth values; taking the world coordinates of the marked image as the reference, extracting the feature values at the corresponding coordinates of the original images, the features comprising the fractional anisotropy FA, the orientation distribution function ODF, the mean diffusivity MD, the axial diffusivity AD, the radial diffusivity RD, the T1 signal value, the T1-enhanced signal value, and the T2 signal value, thereby obtaining a sample feature training set and a sample feature test set. A sketch of the coordinate conversion follows.
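A minimal sketch of the voxel-to-world conversion and signal sampling using nibabel (file names are hypothetical; nearest-neighbor sampling and the clipping step are simplifying assumptions):

    import nibabel as nib
    import numpy as np

    label_img = nib.load("label.nii.gz")      # marked image (hypothetical path)
    t1_img = nib.load("t1.nii.gz")            # resampled T1 image
    t1_data = np.asarray(t1_img.dataobj)

    labels = np.asarray(label_img.dataobj)
    vox = np.argwhere(labels > 0)             # voxel coordinates of all samples

    # Voxel -> world coordinates via the marked image's affine.
    world = nib.affines.apply_affine(label_img.affine, vox)

    # World -> T1 voxel coordinates via the inverse affine, then sample.
    t1_vox = nib.affines.apply_affine(np.linalg.inv(t1_img.affine), world)
    t1_vox = np.clip(np.round(t1_vox).astype(int), 0,
                     np.array(t1_data.shape) - 1)
    t1_signal = t1_data[t1_vox[:, 0], t1_vox[:, 1], t1_vox[:, 2]]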
The orientation distribution function ODF is calculated by a spherical convolution formula in which ψ is the fiber orientation distribution, σ² is the square of the width parameter, v is the sample convolution direction, and g is the gradient direction. The eigenvalues λ1, λ2, and λ3 in the three principal directions are obtained from the DTI tensor matrix, and the fractional anisotropy FA, the mean diffusivity MD, the axial diffusivity AD, and the radial diffusivity RD are respectively calculated as:

FA = sqrt( ((λ1 − λ2)² + (λ2 − λ3)² + (λ3 − λ1)²) / (2 (λ1² + λ2² + λ3²)) )
MD = (λ1 + λ2 + λ3) / 3
AD = λ1
RD = (λ2 + λ3) / 2
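A minimal numpy sketch of these four tensor-derived metrics (the eigenvalue arrays are stand-ins; the small epsilon guarding division by zero is an added safeguard):

    import numpy as np

    def dti_metrics(l1, l2, l3):
        # FA, MD, AD, RD from the three tensor eigenvalues (per voxel).
        md = (l1 + l2 + l3) / 3.0
        ad = l1
        rd = (l2 + l3) / 2.0
        num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
        den = 2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2)
        fa = np.sqrt(num / np.maximum(den, 1e-12))
        return fa, md, ad, rd

    l1, l2, l3 = np.array([1.7e-3]), np.array([0.4e-3]), np.array([0.3e-3])
    print(dti_metrics(l1, l2, l3))   # typical white-matter-scale eigenvalues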
thus, 8 features are extracted for each voxel, forming an N x 9 two-dimensional matrix, with column 9 being the tag truth value; in order to reduce the probability of overfitting, the sample sequence is disturbed to obtain a characteristic sample training set, wherein N is the total number of samples; similarly, can form M i The feature sample test set i is obtained by a two-dimensional matrix of x 9 columns and disturbing the sample sequence, wherein M is i The last column is the tag value for the number of samples made for the ith patient.
Step four: sample feature training set data processing is carried out, and the process is as follows:
and (3) visualizing the training set obtained in the step (III), cleaning abnormal value and missing value samples, and scaling the characteristics so that the characteristic values are distributed between 0 and 1, wherein a characteristic scaling formula is as follows:
wherein f' is the characteristic value after scaling, f is the characteristic value before scaling, f max And f min The maximum and minimum values of the feature before scaling, respectively. Training data provided to the algorithm model may thus be obtained.
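A minimal column-wise implementation of this min-max scaling (the feature array is a stand-in; the epsilon guard against constant columns is an added safeguard):

    import numpy as np

    def minmax_scale(x):
        # f' = (f - f_min) / (f_max - f_min), applied per feature column.
        f_min, f_max = x.min(axis=0), x.max(axis=0)
        return (x - f_min) / np.maximum(f_max - f_min, 1e-12)

    features = np.random.rand(5000, 8) * 100.0   # stand-in unscaled features
    scaled = minmax_scale(features)              # values now lie in [0, 1]
    print(scaled.min(), scaled.max())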
Step five: model training and evaluation, the process is as follows:
5.1) Train the model: the first eight columns of the feature sample training set serve as the training data input of the machine learning classification model, and the ninth column as the training label truth-value input. A five-fold cross-validation method is adopted, the mean of the evaluation indexes over the five validation sets is taken as the standard, and the optimal model for the current training set is obtained through grid search and fine-tuning of the model parameters.
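A minimal sketch of this grid search with five-fold cross validation (the random-forest classifier, its parameter grid, and the f1_macro score are assumptions; the patent does not name a specific classifier):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X_train = rng.random((2000, 8))          # first eight columns: features
    y_train = rng.integers(1, 4, 2000)       # ninth column: label truth values

    param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          param_grid, cv=5, scoring="f1_macro")
    search.fit(X_train, y_train)
    print(search.best_params_, search.best_score_)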
5.2) Test the model: the first eight columns of the feature sample test set serve as the data input of the machine learning classification model, and the ninth column as the truth-value input for comparison. Comparing the predicted values output by the model with the truth values yields a confusion matrix, as shown in fig. 1, containing multiple indexes for evaluating the quality of the model.
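A minimal sketch of computing the 3×3 confusion matrix and per-class indexes derived from it (the truth and prediction vectors are stand-ins):

    import numpy as np
    from sklearn.metrics import confusion_matrix, classification_report

    y_true = np.array([1, 2, 3, 1, 2, 3, 1, 2, 3, 3])   # stand-in truth values
    y_pred = np.array([1, 2, 3, 1, 2, 1, 1, 3, 3, 3])   # stand-in predictions

    cm = confusion_matrix(y_true, y_pred, labels=[1, 2, 3])
    print(cm)   # rows: true brainstem/fiber/tumor; columns: predicted
    print(classification_report(y_true, y_pred, labels=[1, 2, 3]))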
5.3) Pseudo-color processing is applied to the model prediction results to restore the predicted positional relationship of the facial auditory nerve, brainstem, and acoustic tumor, so that the three can be visualized.
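A minimal sketch of such pseudo-color rendering of a predicted label volume (the color palette and the synthetic volume are illustrative assumptions):

    import numpy as np

    # Category convention: 1 brainstem, 2 facial auditory nerve (fiber), 3 tumor.
    palette = {1: (0, 0, 255), 2: (0, 255, 0), 3: (255, 0, 0)}  # assumed colors

    pred_volume = np.random.default_rng(0).integers(0, 4, (64, 64, 64))
    rgb = np.zeros(pred_volume.shape + (3,), dtype=np.uint8)
    for category, color in palette.items():
        rgb[pred_volume == category] = color
    # Any slice rgb[:, :, k] can now be displayed, e.g. with matplotlib imshow.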
The matters described in this specification are merely illustrative of the manner in which the inventive concepts may be implemented and are not intended to limit the scope of the invention. All technical means which are considered to be equivalent according to the inventive concept by those skilled in the art using the principles of the present invention are included in the scope of the present invention.

Claims (6)

1. A machine learning-based method for identifying auditory nerves of a tumor peripheral surface, which is characterized by comprising the following steps:
step one: the data set is divided as follows:
each selected patient's image data comprises a T1 image, a T1-enhanced image, a T2 image, a DTI image, and a marked image; the patient image set is randomly divided into a training image set and a test image set according to a set proportion; the marked image is used as the reference for extracting voxel features and for comparing the prediction results, wherein voxel category 1 denotes brainstem, 2 denotes fiber, and 3 denotes tumor; tumor segmentation in the marked image comes from manual labeling, the auditory nerve comes from automatic fiber tracking, and the brainstem region is obtained by FreeSurfer segmentation;
step two: the data were resampled as follows:
resampling the T1 image, the T1 enhanced image and the T2 image by taking the marked image as a reference so that the voxel size of each image is the same;
step three: extracting features and constructing a sample set, wherein the process is as follows:
the method comprises the steps of reading the resolution of a target image, carrying out self-adaptive slicing on the 3D target image to convert a three-dimensional problem into a two-dimensional problem, converting voxel coordinates of each image data into world coordinates, extracting characteristic values of corresponding coordinates of an original image by taking a marked image as a reference, wherein the characteristic values comprise an anisotropic fraction FA, a direction distribution function ODF, an average dispersion value MD, an axial dispersion value AD, a radial dispersion value RD, a T1 signal value, a T1 enhanced signal value and a T2 signal value, extracting labels from the marked image, wherein the labels comprise three parts of auditory nerve, auditory tumor and brain stem, and each voxel forms a sample to obtain a sample characteristic training set and a sample characteristic test set;
step four: sample feature training set data processing is carried out, and the process is as follows:
visualizing the training set obtained in step three, cleaning samples with outliers and missing values, and scaling the features so that the feature values are distributed between 0 and 1;
step five: model training and testing, the process is as follows:
and (3) putting the sample training set obtained in the step (IV) into a machine learning model for training by adopting a five-fold cross validation method, putting the sample testing set obtained in the step (III) into the model for classification prediction after training is finished, comparing the model with a label true value to obtain a confusion matrix, and evaluating the generalization capability of the model.
2. The method for identifying auditory nerve on tumor peripheral surface based on machine learning according to claim 1, wherein in the first step, the data set dividing process is as follows:
converting the fiber identification problem into a voxel classification problem, wherein the voxel size is unified to that of the marked image; the data comprise a T1 image, a T1-enhanced image, a T2 image, a DTI image, and a marked image, each patient's data containing all of the above, wherein the T1, T1-enhanced, and T2 images are used to provide voxel signal values, and the DTI image is used to extract the fractional anisotropy, orientation distribution function, mean diffusivity, axial diffusivity, and radial diffusivity.
3. The method for identifying the auditory nerve of the tumor peripheral surface based on machine learning according to claim 1 or 2, wherein in the second step, the data resampling process is as follows:
Taking the voxel spacing p_new of the marked image as the reference, and given the voxel spacing p_old and the image size s_old of the T1 image, the T1-enhanced image, and the T2 image, resampling makes the voxel spacing of all images identical; the new image size of the T1, T1-enhanced, and T2 images is calculated as:

s_new = s_old × p_old / p_new
4. the method for identifying the auditory nerve of the tumor peripheral surface based on machine learning according to claim 1 or 2, wherein in the third step, the process of extracting the characteristics and constructing the sample set is as follows:
acquiring the resolution of the image, denoted l×w×h, and slicing the three-dimensional ROI image into l two-dimensional images each of resolution w×h, converting the three-dimensional problem into a two-dimensional one;
taking each voxel as one sample, converting the voxel coordinates of each image's data into world coordinates, and extracting labels from the marked image, the labels comprising three parts, facial auditory nerve, acoustic tumor, and brainstem, with the numbers "1", "2", and "3" as truth values; taking the world coordinates of the marked image as the reference, extracting the feature values at the corresponding coordinates of the original images, the features comprising the fractional anisotropy FA, the orientation distribution function ODF, the mean diffusivity MD, the axial diffusivity AD, the radial diffusivity RD, the T1 signal value, the T1-enhanced signal value, and the T2 signal value, thereby obtaining a sample feature training set and a sample feature test set;
The orientation distribution function ODF is calculated by a spherical convolution formula in which ψ is the fiber orientation distribution, σ² is the square of the width parameter, v is the sample convolution direction, and g is the gradient direction. The eigenvalues λ1, λ2, and λ3 in the three principal directions are obtained from the DTI tensor matrix, and the fractional anisotropy FA, the mean diffusivity MD, the axial diffusivity AD, and the radial diffusivity RD are respectively calculated as:

FA = sqrt( ((λ1 − λ2)² + (λ2 − λ3)² + (λ3 − λ1)²) / (2 (λ1² + λ2² + λ3²)) )
MD = (λ1 + λ2 + λ3) / 3
AD = λ1
RD = (λ2 + λ3) / 2
Thus, 8 features are extracted for each voxel, forming an N×9 two-dimensional matrix whose ninth column is the label truth value; to reduce the probability of overfitting, the sample order is shuffled to obtain the feature sample training set, where N is the total number of samples. Likewise, an Mi×9 two-dimensional matrix is formed and its sample order shuffled to obtain feature sample test set i, where Mi is the number of samples made from the i-th patient and the last column is the label truth value.
5. The machine learning-based tumor peripheral surface auditory nerve recognition method according to claim 1 or 2, wherein in the fourth step, the sample feature training set data processing process is as follows:
the training set obtained in step three is visualized, samples with outliers and missing values are cleaned, and the features are scaled so that the feature values are distributed between 0 and 1, using the feature scaling formula:

f' = (f − f_min) / (f_max − f_min)

where f' is the feature value after scaling, f is the feature value before scaling, and f_max and f_min are respectively the maximum and minimum of the feature before scaling, from which the training data provided to the algorithm model are obtained.
6. The method for identifying the auditory nerve of the tumor peripheral surface based on machine learning according to claim 1 or 2, wherein in the fifth step, the model training and testing process is as follows:
5.1) Train the model: the first eight columns of the feature sample training set serve as the training data input of the machine learning classification model, and the ninth column as the training label truth-value input; a five-fold cross-validation method is adopted, the mean of the evaluation indexes over the five validation sets is taken as the standard, and the optimal model for the current training set is obtained through grid search and fine-tuning of the model parameters;
5.2) Test the model: the first eight columns of the feature sample test set serve as the data input of the machine learning classification model, and the ninth column as the truth-value input for comparison; comparing the predicted values output by the model with the truth values yields a confusion matrix containing multiple indexes for evaluating the quality of the model;
5.3) Pseudo-color processing is applied to the model prediction results to restore the predicted positional relationship of the facial auditory nerve, brainstem, and acoustic tumor, so that the three can be visualized.
CN202110393849.XA 2021-04-13 2021-04-13 Tumor peripheral surface auditory nerve recognition method based on machine learning Active CN113255721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110393849.XA CN113255721B (en) 2021-04-13 2021-04-13 Tumor peripheral surface auditory nerve recognition method based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110393849.XA CN113255721B (en) 2021-04-13 2021-04-13 Tumor peripheral surface auditory nerve recognition method based on machine learning

Publications (2)

Publication Number Publication Date
CN113255721A (en) 2021-08-13
CN113255721B (en) 2024-03-22

Family

ID=77220622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110393849.XA Active CN113255721B (en) 2021-04-13 2021-04-13 Tumor peripheral surface auditory nerve recognition method based on machine learning

Country Status (1)

Country Link
CN (1) CN113255721B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494132A (en) * 2021-12-24 2022-05-13 山东师范大学 Disease classification system based on deep learning and fiber bundle spatial statistical analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2141506B1 (en) * 2008-07-01 2019-04-03 The Regents of The University of California Identifying fiber tracts using magnetic resonance imaging (MRI)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism

Also Published As

Publication number Publication date
CN113255721A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
TWI307058B (en) Method for identifying objects in an image and computer readable medium
CN106296653B (en) Brain CT image hemorrhagic areas dividing method and system based on semi-supervised learning
Zhang et al. CU-Net: a U-Net architecture for efficient brain-tumor segmentation on BraTS 2019 dataset
CN103093455A (en) Diffusion tensor imaging white matter fiber clustering method
CN111931811A (en) Calculation method based on super-pixel image similarity
CN106529188A (en) Image processing method applied to surgical navigation
Kole et al. Automatic brain tumor detection and isolation of tumor cells from MRI images
CN109934804A (en) The detection method in the Alzheimer lesion region based on convolutional neural networks
CN111080575A (en) Thalamus segmentation method based on residual error dense U-shaped network model
CN113255721B (en) Tumor peripheral surface auditory nerve recognition method based on machine learning
Martins et al. An adaptive probabilistic atlas for anomalous brain segmentation in MR images
CN103745473B (en) A kind of brain tissue extraction method
CN110136840B (en) Medical image classification method and device based on self-weighting hierarchical biological features and computer readable storage medium
Xu et al. RUnT: A network combining residual U-Net and transformer for vertebral edge feature fusion constrained spine CT image segmentation
CN112381818B (en) Medical image identification enhancement method for subclass diseases
Bi et al. Classification of low-grade and high-grade glioma using multiparametric radiomics model
Stasiak et al. Application of convolutional neural networks with anatomical knowledge for brain MRI analysis in MS patients
CN110751664B (en) Brain tissue segmentation method based on hyper-voxel matching
CN103336781B (en) A kind of medical image clustering method
CN117557576A (en) Semi-supervised optic nerve segmentation method based on clinical knowledge driving and contrast learning
CN116152170A (en) Intracranial primary malignant tumor identification method based on machine learning
CN115719357A (en) Multi-structure segmentation method for brain medical image
Miao et al. CoWRadar: Visual Quantification of the Circle of Willis in Stroke Patients.
Barzegar et al. Brain tumor segmentation based on 3D neighborhood features using rule-based learning
CN113762263A (en) Semantic segmentation method and system for small-scale similar structure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant