CN111047589A - Attention-enhanced brain tumor auxiliary intelligent detection and identification method - Google Patents


Info

Publication number
CN111047589A
CN111047589A
Authority
CN
China
Prior art keywords
model
classification
convolution
segmentation
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911393654.4A
Other languages
Chinese (zh)
Other versions
CN111047589B (en
Inventor
李建欣
张帅
于金泽
周号益
邰振赢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201911393654.4A priority Critical patent/CN111047589B/en
Publication of CN111047589A publication Critical patent/CN111047589A/en
Application granted granted Critical
Publication of CN111047589B publication Critical patent/CN111047589B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention realizes an attention-enhanced brain tumor auxiliary intelligent detection and identification method. The technical scheme is an improvement on the U-Net model: an attention-enhancement mechanism is proposed in which the training of the segmentation task serves the classification task, so that attention to the segmentation task, the lesion region and its edge information improves the accuracy of the classification task. Through a multi-task loss measurement and training method, the segmentation task and the classification task are optimized simultaneously, the expected effect is achieved on both tasks, and the design purpose and application target are realized.

Description

Attention-enhanced brain tumor auxiliary intelligent detection and identification method
Technical Field
The invention relates to the field of image processing, in particular to an attention-enhanced brain tumor auxiliary intelligent detection and identification method in the fields of medical imaging and computer-aided diagnosis.
Background
Tumors that grow in the cranium are collectively referred to as brain tumors: tumors of the nervous system occurring in the cranial cavity, including tumors originating in the neuroepithelium, peripheral nerves, meninges and germ cells, tumors of lymphoid and hematopoietic tissues, craniopharyngiomas and granular cell tumors of the sellar region, and metastatic tumors. Tumors arising from the brain parenchyma are called primary intracranial tumors, and metastases to the cranium from malignant tumors of other tissues of the body are called secondary intracranial tumors. Intracranial tumors can occur at any age, most commonly between 20 and 50 years. With the development of neuroimaging and functional examination techniques in recent years, auxiliary examination has become the main means of diagnosing intracranial tumors.
Brain glioma is the most common primary intracranial malignant tumor, accounting for over 75% of cases. Gliomas are divided into localized and diffuse gliomas and graded WHO I-IV according to malignancy, which increases with grade. They can be grouped into low-grade gliomas (LGG, WHO I-II) and high-grade gliomas (HGG, WHO III-IV), and further divided into subtypes according to gene mutations, chromosomal changes and so on; gliomas of different grades and different gene mutations differ in treatment and prognosis. Therefore, accurately segmenting the tumor region and judging the tumor grade before surgical treatment helps guide the choice of treatment scheme and surgical resection region, and is of important value for improving the treatment effect and the prognosis of the patient.
Magnetic Resonance Imaging (MRI) is a medical imaging technique in which the hydrogen nuclei (hydrogen protons) of the human body, placed in a strong external magnetic field, produce magnetic resonance under the action of specific radio-frequency pulses. MRI is the main imaging examination technique for various intracranial diseases; it can serve as the first examination method for some diseases and is also an important supplement to CT examination. MRI offers high tissue resolution together with multi-sequence, multi-parameter, multi-directional and functional (fMRI) examination, so it can detect lesions more sensitively and display their characteristics, which benefits early detection and accurate diagnosis. MRI images commonly used for brain glioma diagnosis cover three planes, axial (Axis), sagittal (Sagittal) and coronal (Coronal), and four modalities, T1, T1-enhanced, T2 and T2 water suppression; clinically, the location, extent and grade of a tumor are usually judged by combining the information of the three planes and four modalities. However, owing to the diversity of the appearance and shape of brain tumors, segmenting brain tumors in multi-modality MRI images is one of the most challenging and difficult problems in medical image processing, and classifying and grading brain gliomas, and even judging their genotypes, from non-invasive examinations such as brain MRI are research directions of great clinical interest.
Localized gliomas occur mostly in children and relatively rarely in adults; most patients can be cured by surgery and the malignancy is low, so they are not taken as the research focus of this patent.
Current applications of deep learning to brain MRI images, particularly in other related patents, concentrate on brain tumor segmentation; fields directly related to diagnosis and treatment, such as classification and grading of brain tumors, are less involved, even though these are of greater clinical concern and are difficult for the human eye to achieve from non-invasive image examination. Methods that obtain good results on the brain tumor segmentation task mostly adopt U-Net as the basic framework and improve upon it. The technical scheme of this application is likewise based on U-Net: a three-dimensional model operates on the three-dimensional image, and the original U-Net is optimized according to recent progress of machine learning and deep learning in computer vision. Compared with other works, this scheme focuses more on the tumor grading and classification diagnosis level, using the segmentation task as an attention-enhancement mechanism so that the model attends to the abnormal region in the brain MRI image when performing the tumor diagnosis task.
Disclosure of Invention
At present, most applications of U-Net to MRI image processing of brain tumors are pure image segmentation tasks: they only segment the tumor region and do not further use the segmented image to obtain medically valuable information. In communication with clinicians, this is precisely the information clinicians pay more attention to, and it is a problem that needs to be studied intensively.
In order to achieve the purpose, the invention adopts the following technical scheme:
An attention-enhanced brain tumor auxiliary intelligent detection and identification method, characterized by comprising the following steps:
the method comprises the following steps: establishing a three-dimensional convolution network model of a multitask neural network based on U-Net and suitable for segmentation and diagnosis of a brain glioma lesion region in a brain MRI image;
step two: a multi-task joint training objective;
step three: measuring the loss of multiple tasks and optimizing the result;
step four: and (4) model training, result evaluation and output.
The step of establishing a three-dimensional convolution network model of a multitask neural network based on U-Net and suitable for segmentation and diagnosis of a brain glioma lesion region in a brain MRI image comprises the following steps:
building, based on the original U-Net network and using a three-dimensional convolution processing method, a three-dimensional model that takes the tumor region as the model's attention region; the framework of the model comprises a down-sampling data path and an up-sampling data path; each layer on the down-sampling path contains two 3 × 3 × 3 convolution layers, each convolution layer using dropout to prevent overfitting and the ReLU activation function; after the two convolution layers, a max pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation; between two layers of the up-sampling data path, the up-sampling operation is carried out by deconvolution, the up-sampled features are spliced with the features of the corresponding down-sampling layer, and the spliced features undergo two convolution operations identical to those of the down-sampling layers; after the up-sampling path finally obtains the features fusing deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are performed, after which the model divides into two branches: one branch performs one more convolution whose output channels equal the number of classes in the semantic segmentation result, and softmax then yields the output containing background, edema, tumor parenchyma, necrosis and enhancing core; the other branch performs one convolution, applies global average pooling (Global Average Pooling), and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes;
and then traversing all brain MRI images imported from cases, computing mean and variance statistics and retaining them for the standardization operation during training and prediction; receiving input brain MRI image sequences of the four modalities T1, T1-enhanced, T2 and T2-Flair, taking each modality as a channel; splicing all slice images in the scanning sequence and the segmentation result label mask respectively to form a three-dimensional image and a label sequence; binding the two sequences with the diagnosis result corresponding to the sequence as one sample; and processing all images in this way.
The multi-task joint training target step comprises the following steps:
on the full convolution model, a classification branch is added after the shallow and deep information are fused, so that a single model yields semantic segmentation and classification results simultaneously; the segmentation task and the classification task of brain tumors are thus executed at the same time and share shallow features;
after the features of the deep and shallow information are fused, two convolutions with kernel size 3 × 3 × 3 are performed, after which the model divides into two branches: one branch performs one more convolution whose output channels equal the number of classes in the semantic segmentation result, and softmax then yields the output containing background, edema, tumor parenchyma, necrosis and enhancing core; the other branch performs one convolution, applies global average pooling (Global Average Pooling), and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma and glioblastoma.
The measuring the loss of the multiple tasks, and the optimizing the result step uses a loss function of the multiple task combination to measure the segmentation result and the classification result and optimize the segmentation and classification result, wherein:
the loss function of the image segmentation model adopts a Dice loss function;
the loss function of the tumor classification module selects a cross-entropy function.
The model training and result evaluation and output step comprises the following steps:
a model training step, in which either the Dice loss of the image segmentation model or the cross-entropy loss function of the tumor classification model alone is back-propagated for iterative training, or the loss value of the image segmentation model and the loss value of the tumor classification model are combined in a certain proportion and then back-propagated for iterative training;
after the training converges to a satisfactory result, evaluating the effect on the test set;
and inputting a new case image to the evaluated model, and outputting a detection and identification result.
Compared with the prior art, the invention has the advantages that:
the design scheme fully utilizes the segmented information, and expands the classification task module aiming at the medical information hidden in the segmented image, thereby playing the roles of providing suggestions for an auxiliary medical system of a doctor in the diagnosis process, improving the diagnosis capability of a medical institution on related diseases, and feeding pathological research on the brain tumor in the medical field in the aspects of classification, classification and the like of the brain tumor.
At present, U-Net applied to brain tumor MRI image processing mostly performs pure image segmentation: such applications only segment the tumor region and do not further use the segmented image to obtain valuable medical information, whereas the present invention obtains the segmentation and the diagnosis result of the lesion region simultaneously through processing and analysis of the image.
Drawings
FIG. 1 is a design framework of a brain tumor auxiliary detection and identification system;
FIG. 2 is a multi-task learning model of brain tumor segmentation and classification diagnosis tasks;
Detailed Description
Referring to Figures 1-2 of the specification, the invention provides a method that combines medical imaging with deep learning and computer vision: a three-dimensional brain magnetic resonance image is analyzed and processed with computer vision methods to segment the glioma lesion region and to perform an image-based classification diagnosis task. Aiming at the problems that medical image data sets are small, class imbalance is severe, and existing methods focus on lesion segmentation while omitting the classification diagnosis task, an improved convolutional neural network based on 3D U-Net is proposed: a classification diagnosis branch is added, and segmentation and classification results are obtained simultaneously through multi-task joint training. FIG. 1 shows the algorithm design flow proposed by the invention: the MRI images, the corresponding manually labeled segmentation results, and the classification diagnosis information obtained from pathological information are first preprocessed into three-dimensional image sequences containing four modalities; the processed images are divided proportionally into a training set and a test set; the model is trained on the training set to optimize its performance on the segmentation and classification tasks; and finally, after training converges to a satisfactory result, the effect is evaluated on the test set.
Before the model is established, the brain MRI image data used come directly from a hospital medical record system. Image preprocessing operations that enhance visibility, such as denoising and brightness/contrast adjustment, are completed; the image data are labeled manually, marking the different parts of the tumor (edema, tumor parenchyma, enhancing core and necrosis); and diagnosis information such as tumor type and stage is obtained from the case's medical record diagnosis and postoperative pathological information.
All MRI images are traversed and statistics such as the mean and variance are computed and retained for the standardization operation during training and prediction. In addition, the four modality images at the same slice position of the same scanning sequence are stacked, with each modality as a channel; all slice images of the scanning sequence and the segmentation result mask are then spliced respectively to form a three-dimensional image and a label sequence; finally, the two sequences are bound with the diagnosis result corresponding to the sequence to form one sample.
After all samples are processed, the samples are divided into a training set and a testing set according to a determined proportion for later use.
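The preprocessing just described — stacking the four modalities as channels and standardizing with the corpus-wide statistics — can be sketched in NumPy as follows. The array shapes and dictionary field names are illustrative assumptions, not the patent's own data format:

```python
import numpy as np

def build_sample(t1, t1ce, t2, flair, mask, diagnosis):
    """Stack the four MRI modalities (each a 3-D slice stack) into one
    4-channel volume and bind it with its label mask and diagnosis,
    mirroring the sample construction described above."""
    volume = np.stack([t1, t1ce, t2, flair], axis=0)  # (4, D, H, W)
    return {"volume": volume, "mask": mask, "diagnosis": diagnosis}

def standardize(volume, mean, std):
    """Standardize with the mean/variance statistics collected by
    traversing all MRI images (the small epsilon is our addition to
    avoid division by zero)."""
    return (volume - mean) / (std + 1e-8)
```

A sample built this way carries everything one training step needs: the 4-channel volume, the voxel-wise label mask, and the case-level diagnosis.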
And then, building a three-dimensional model by using a three-dimensional convolution processing method based on the original U-Net network.
Fig. 2 shows the model architecture used in the invention, comprising a down-sampling data path on the left and an up-sampling data path on the right. Each layer on the left down-sampling path contains two 3 × 3 × 3 convolution layers; each convolution layer uses dropout to prevent overfitting and the ReLU activation function. After the two convolution layers, a max pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation. Between two layers of the right data path, the up-sampling operation is carried out by deconvolution; the up-sampled features are spliced with the features of the corresponding left down-sampling layer, and the spliced features undergo two convolution operations identical to those of the down-sampling layers.
The down-sampling path continuously reduces the resolution of the features, which enlarges the receptive field of each pixel in the deepest features and yields a more abstract high-level representation, i.e., deep information. This representation is stronger for classifying the picture and judging the class of pixels within it, but the loss of resolution greatly reduces the accuracy of pixel-level classification and the resolution of the resulting segmentation. The middle cross-layer connections carry shallow information; fusing them with the features of the right up-sampling path alleviates the reduction of output resolution relative to the input, so the fused result expresses the segmentation better.
After the up-sampling path finally obtains the features fusing deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are performed, after which the model divides into two branches: one branch performs one more convolution whose output channels equal the number of classes in the semantic segmentation result, and softmax then yields the output containing background, edema, tumor parenchyma, necrosis and enhancing core; the other branch performs one convolution, applies global average pooling (Global Average Pooling), and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma and glioblastoma.
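The classification branch of this two-branch head — global average pooling followed by two fully connected layers and softmax over the five diagnostic classes — can be illustrated with a minimal NumPy sketch. The weight matrices `w1`, `b1`, `w2`, `b2` and all layer sizes are hypothetical, and the convolution preceding the pooling is omitted:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def classification_branch(features, w1, b1, w2, b2):
    """Global average pooling over the spatial dimensions of a
    (channels, D, H, W) feature map, then two fully connected layers;
    the second layer has one output per pathological diagnosis class."""
    pooled = features.mean(axis=(1, 2, 3))       # GAP: (C, D, H, W) -> (C,)
    hidden = np.maximum(0.0, w1 @ pooled + b1)   # first FC layer + ReLU
    logits = w2 @ hidden + b2                    # second FC layer
    return softmax(logits)                       # class probabilities
```

Because the pooled vector has no spatial dimensions, the branch accepts any input volume size, which is convenient for MRI scans of differing extents.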
The above process can be regarded as a process of accurately identifying the brain tumor category by combining the background based on the tumor region as the attention region of the model. The process uses a multi-task learning method, so that the segmentation task and the classification task of the brain tumor can be simultaneously executed, and the segmentation task and the classification task can be jointly promoted under the mutual action due to the sharing of the shallow features.
The multi-task learning method is measured by the loss function and the model training method is used for optimizing the performance of the model.
1. Loss function
1) Selection of a loss function for brain tumor segmentation task model:
one challenge in medical image segmentation is the problem of class imbalance in the data, for example in brain tumor MRI images, the proportion of the entire data that is the target object to be segmented is particularly small, resulting in severe class imbalance. In this case, the training is hindered by using the traditional classification cross entropy loss function, and the Dice loss function can effectively deal with the class imbalance problem, so the invention adopts the loss function as the loss function of the segmentation model, which is specifically expressed as follows:
Loss_Dice = 1 − (2 · Σ_i u_i v_i) / (Σ_i u_i + Σ_i v_i)

where u is the segmentation result output by the network, v is the label segmentation, and i indexes the voxels of the training block.
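The Dice loss can be written as a short NumPy function. The `1 − Dice` form and the small smoothing constant `eps` are standard conventions assumed here, since the patent gives the formula only as an image:

```python
import numpy as np

def dice_loss(u, v, eps=1e-6):
    """Dice loss between the network's soft segmentation output u and
    the label mask v, summed over all voxels i of the training block;
    eps is a smoothing constant added for numerical stability."""
    u = np.asarray(u, dtype=float).ravel()
    v = np.asarray(v, dtype=float).ravel()
    intersection = (u * v).sum()
    return 1.0 - (2.0 * intersection + eps) / (u.sum() + v.sum() + eps)
```

A perfect prediction drives the loss toward 0, a completely disjoint one toward 1, regardless of how small the foreground region is — which is exactly why it suits the imbalanced setting described above.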
2) Selection of loss function for brain tumor classification task:
the problem is a classification problem, and a cross entropy function widely used by the classification problem is adopted as a loss function of the module, which is specifically expressed as follows:
Loss_cross = −(1/N) Σ_{i=1..N} Σ_{j=1..K} y_{i,j} · log p_{i,j}

where N is the total number of samples, K is the total number of classes, y_{i,j} is the label value and p_{i,j} is the predicted value.
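A minimal NumPy version of this cross-entropy, with y holding one-hot labels y[i, j] and p the predicted class probabilities p[i, j] (the clipping constant `eps` is our addition to avoid log(0)):

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Mean cross-entropy over N samples and K classes, matching the
    formula above: -(1/N) * sum_i sum_j y[i,j] * log p[i,j]."""
    n = y.shape[0]
    return -(y * np.log(p + eps)).sum() / n
```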
2. The model training method comprises the following steps:
1) Only the loss of either the image segmentation model or the tumor classification model is back-propagated for iterative training; the overall loss function is then expressed as:

Loss1 = Loss_Dice, or Loss1 = Loss_cross
2) combining the loss value of the image segmentation module and the loss value of the tumor classification model according to a certain proportion, and then carrying out back propagation for iterative training, namely expressing the total loss function as:
Loss2 = Loss_Dice + α · Loss_cross
wherein α is an adjustable scaling factor.
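Training scheme 2) combines the two losses before back-propagation; a one-line sketch (the default α = 0.5 is purely illustrative, since the patent only states that α is an adjustable scaling factor):

```python
def combined_loss(loss_dice, loss_cross, alpha=0.5):
    """Total loss Loss2 = Loss_Dice + alpha * Loss_cross used for joint
    back-propagation; alpha scales the classification term against the
    segmentation term."""
    return loss_dice + alpha * loss_cross
```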

Claims (5)

1. An attention-enhanced brain tumor auxiliary intelligent detection and identification method, characterized by comprising the following steps:
the method comprises the following steps: establishing a three-dimensional convolution network model of a multitask neural network based on U-Net and suitable for segmentation and diagnosis of a brain glioma lesion region in a brain MRI image;
step two: a multi-task joint training objective;
step three: measuring the loss of multiple tasks and optimizing the result;
step four: and (4) model training, result evaluation and output.
2. The method for aided intelligent detection and identification of brain tumors according to claim 1, wherein the method comprises the following steps: the step of establishing a three-dimensional convolution network model of a multitask neural network based on U-Net and suitable for segmentation and diagnosis of a brain glioma lesion region in a brain MRI image comprises the following steps:
building, based on the original U-Net network and using a three-dimensional convolution processing method, a three-dimensional model that takes the tumor region as the model's attention region; the framework of the model comprises a down-sampling data path and an up-sampling data path; each layer on the down-sampling path contains two 3 × 3 × 3 convolution layers, each convolution layer using dropout to prevent overfitting and the ReLU activation function; after the two convolution layers, a max pooling layer with stride 2 and size 3 × 3 × 3 performs the down-sampling operation; between two layers of the up-sampling data path, the up-sampling operation is carried out by deconvolution, the up-sampled features are spliced with the features of the corresponding down-sampling layer, and the spliced features undergo two convolution operations identical to those of the down-sampling layers; after the up-sampling path finally obtains the features fusing deep and shallow information, two convolutions with kernel size 3 × 3 × 3 are performed, after which the model divides into two branches: one branch performs one more convolution whose output channels equal the number of classes in the semantic segmentation result, and softmax then yields the output containing background, edema, tumor parenchyma, necrosis and enhancing core; the other branch performs one convolution, applies global average pooling (Global Average Pooling), and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes;
and then traversing all brain MRI images imported from cases, computing mean and variance statistics and retaining them for the standardization operation during training and prediction; receiving input brain MRI image sequences of the four modalities T1, T1-enhanced, T2 and T2-Flair, taking each modality as a channel; splicing all slice images in the scanning sequence and the segmentation result label mask respectively to form a three-dimensional image and a label sequence; binding the two sequences with the diagnosis result corresponding to the sequence as one sample; and processing all images in this way.
3. The method for aided intelligent detection and identification of brain tumors according to claim 2, wherein the method comprises the following steps: the multi-task joint training target step comprises the following steps:
on the full convolution model, a classification branch is added after the shallow and deep information are fused, so that a single model yields semantic segmentation and classification results simultaneously; the segmentation task and the classification task of brain tumors are thus executed at the same time and share shallow features;
after the features of the deep and shallow information are fused, two convolutions with kernel size 3 × 3 × 3 are performed, after which the model divides into two branches: one branch performs one more convolution whose output channels equal the number of classes in the semantic segmentation result, and softmax then yields the output containing background, edema, tumor parenchyma, necrosis and enhancing core; the other branch performs one convolution, applies global average pooling (Global Average Pooling), and is followed by two fully connected layers, the output of the second fully connected layer matching the number of pathological diagnosis classes: oligodendroglioma, anaplastic oligodendroglioma, astrocytoma, anaplastic astrocytoma and glioblastoma.
4. The method for aided intelligent detection and identification of brain tumors with enhanced attention according to claim 3, wherein the method comprises the following steps: the measuring the loss of the multiple tasks, and the optimizing the result step uses a loss function of the multiple task combination to measure the segmentation result and the classification result and optimize the segmentation and classification result, wherein:
the loss function of the image segmentation model adopts a Dice loss function;
the loss function of the tumor classification module selects a cross-entropy function.
5. The method for aided intelligent detection and identification of brain tumors according to claim 4, wherein the method comprises the following steps: the model training and result evaluation and output step comprises the following steps:
a model training step, in which either the Dice loss of the image segmentation model or the cross-entropy loss function of the tumor classification model alone is back-propagated for iterative training, or the loss value of the image segmentation model and the loss value of the tumor classification model are combined in a certain proportion and then back-propagated for iterative training;
after the training converges to a satisfactory result, evaluating the effect on a test set;
and inputting a new case image to the evaluated model, and outputting a detection and identification result.
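The joint objective used in the training step above can be sketched as a single loss that mixes the two task losses before one backward pass. The weight `seg_weight` is a hypothetical value; the claim only states that the two losses are combined "in a certain proportion".

```python
import torch
import torch.nn as nn

def joint_loss(seg_probs, seg_onehot, cls_logits, diag_target, seg_weight=0.7):
    """Combine segmentation Dice loss and classification cross-entropy.

    seg_probs:   (N, C, H, W) softmax segmentation probabilities
    seg_onehot:  (N, C, H, W) one-hot segmentation ground truth
    cls_logits:  (N, K) diagnosis logits
    diag_target: (N,) diagnosis class indices
    seg_weight:  assumed mixing ratio between the two losses
    """
    eps = 1e-6
    inter = (seg_probs * seg_onehot).sum(dim=(0, 2, 3))
    card = seg_probs.sum(dim=(0, 2, 3)) + seg_onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (card + eps)).mean()
    ce = nn.functional.cross_entropy(cls_logits, diag_target)
    return seg_weight * dice + (1.0 - seg_weight) * ce
```

Calling `joint_loss(...).backward()` then propagates gradients through both branches in a single iteration, matching the combined-loss variant of the training step.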
CN201911393654.4A 2019-12-30 2019-12-30 Attention-enhanced brain tumor auxiliary intelligent detection and identification method Active CN111047589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911393654.4A CN111047589B (en) 2019-12-30 2019-12-30 Attention-enhanced brain tumor auxiliary intelligent detection and identification method

Publications (2)

Publication Number Publication Date
CN111047589A true CN111047589A (en) 2020-04-21
CN111047589B CN111047589B (en) 2022-07-26

Family

ID=70241643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911393654.4A Active CN111047589B (en) 2019-12-30 2019-12-30 Attention-enhanced brain tumor auxiliary intelligent detection and identification method

Country Status (1)

Country Link
CN (1) CN111047589B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087318A (en) * 2018-07-26 2018-12-25 东北大学 An MRI brain tumor image segmentation method based on an optimized U-net network model
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 Automatic segmentation of biomedical images based on the U-net network structure
CN109754404A (en) * 2019-01-02 2019-05-14 清华大学深圳研究生院 An end-to-end lesion segmentation method based on multiple attention mechanisms
US20190205606A1 (en) * 2016-07-21 2019-07-04 Siemens Healthcare Gmbh Method and system for artificial intelligence based medical image segmentation
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 A three-dimensional brain tumor image segmentation method based on an improved U-Net neural network
CN110298844A (en) * 2019-06-17 2019-10-01 艾瑞迈迪科技石家庄有限公司 X-ray contrast image vessel segmentation and recognition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAN PAN et al.: "A Multi-Task Convolutional Neural Network for Renal Tumor Segmentation and Classification Using Multi-Phasic CT Images", 2019 IEEE International Conference on Image Processing *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667458B (en) * 2020-04-30 2023-09-01 杭州深睿博联科技有限公司 Early acute cerebral infarction detection method and device in flat scanning CT
CN111667458A (en) * 2020-04-30 2020-09-15 杭州深睿博联科技有限公司 Method and device for detecting early acute cerebral infarction in flat-scan CT
CN111968127A (en) * 2020-07-06 2020-11-20 中国科学院计算技术研究所 Cancer focus area identification method and system based on full-section pathological image
CN111968127B (en) * 2020-07-06 2021-08-27 中国科学院计算技术研究所 Cancer focus area identification method and system based on full-section pathological image
CN112085113A (en) * 2020-09-14 2020-12-15 四川大学华西医院 Severe tumor image recognition system and method
CN112733873A (en) * 2020-09-23 2021-04-30 浙江大学山东工业技术研究院 Chromosome karyotype graph classification method and device based on deep learning
CN112766333A (en) * 2021-01-08 2021-05-07 广东中科天机医疗装备有限公司 Medical image processing model training method, medical image processing method and device
CN112766333B (en) * 2021-01-08 2022-09-23 广东中科天机医疗装备有限公司 Medical image processing model training method, medical image processing method and device
CN112927240B (en) * 2021-03-08 2022-04-05 重庆邮电大学 CT image segmentation method based on improved AU-Net network
CN112927240A (en) * 2021-03-08 2021-06-08 重庆邮电大学 CT image segmentation method based on improved AU-Net network
CN113112465A (en) * 2021-03-31 2021-07-13 上海深至信息科技有限公司 System and method for generating carotid intima-media segmentation model
CN113223014A (en) * 2021-05-08 2021-08-06 中国科学院自动化研究所 Brain image analysis system, method and equipment based on data enhancement
CN113223704B (en) * 2021-05-20 2022-07-26 吉林大学 Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN113223704A (en) * 2021-05-20 2021-08-06 吉林大学 Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN113516671B (en) * 2021-08-06 2022-07-01 重庆邮电大学 Infant brain tissue image segmentation method based on U-net and attention mechanism
CN113516671A (en) * 2021-08-06 2021-10-19 重庆邮电大学 Infant brain tissue segmentation method based on U-net and attention mechanism
CN115222007A (en) * 2022-05-31 2022-10-21 复旦大学 Improved particle swarm parameter optimization method for glioma multitask integrated network
CN116645381A (en) * 2023-06-26 2023-08-25 海南大学 Brain tumor MRI image segmentation method, system, electronic equipment and storage medium
CN117726624A (en) * 2024-02-07 2024-03-19 北京长木谷医疗科技股份有限公司 Method and device for intelligently identifying and evaluating adenoid lesions in real time under video stream

Also Published As

Publication number Publication date
CN111047589B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
CN111047589B (en) Attention-enhanced brain tumor auxiliary intelligent detection and identification method
Iglesias et al. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology
Angelini et al. Glioma dynamics and computational models: a review of segmentation, registration, and in silico growth algorithms and their clinical applications
O'Donnell et al. Fiber clustering versus the parcellation-based connectome
Li et al. A hybrid approach to automatic clustering of white matter fibers
Menze et al. Analyzing magnetic resonance imaging data from glioma patients using deep learning
Wu et al. Investigation into local white matter abnormality in emotional processing and sensorimotor areas using an automatically annotated fiber clustering in major depressive disorder
CA2752370A1 (en) Segmentation of structures for state determination
Rajasekaran et al. Advanced brain tumour segmentation from mri images
Ye et al. Segmentation of the cerebellar peduncles using a random forest classifier and a multi-object geometric deformable model: application to spinocerebellar ataxia type 6
Kronfeld-Duenias et al. White matter pathways in persistent developmental stuttering: Lessons from tractography
Wang et al. Sk-unet: An improved u-net model with selective kernel for the segmentation of lge cardiac mr images
Rana et al. Brain tumor detection through MR images: a review of literature
Nizamani et al. Advance Brain Tumor segmentation using feature fusion methods with deep U-Net model with CNN for MRI data
Xie et al. Cntseg: A multimodal deep-learning-based network for cranial nerves tract segmentation
Nguyen et al. Ocular structures segmentation from multi-sequences MRI using 3D Unet with fully connected CRFs
Li et al. Deep attention super-resolution of brain magnetic resonance images acquired under clinical protocols
Chandra et al. CCsNeT: Automated Corpus Callosum segmentation using fully convolutional network based on U-Net
Aderghal Classification of multimodal MRI images using Deep Learning: Application to the diagnosis of Alzheimer’s disease.
Lecesne et al. Segmentation of cardiac infarction in delayed-enhancement MRI using probability map and transformers-based neural networks
CN115170540A (en) Mild traumatic brain injury classification method based on multi-modal image feature fusion
Pallawi et al. Study of Alzheimer’s disease brain impairment and methods for its early diagnosis: a comprehensive survey
JP2020534914A (en) Non-invasive estimation of prostate tissue composition based on multiparametric MRI data
Wang et al. A calibrated SVM based on weighted smooth GL1/2 for Alzheimer’s disease prediction
Williams et al. Thalamic nuclei segmentation from T $ _1 $-weighted MRI: unifying and benchmarking state-of-the-art methods with young and old cohorts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant