CN114298234B - Brain medical image classification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114298234B
CN114298234B (application CN202111655896.3A)
Authority
CN
China
Prior art keywords
image
deep learning
module
classification model
images
Legal status
Active
Application number
CN202111655896.3A
Other languages
Chinese (zh)
Other versions
CN114298234A (en)
Inventor
王思伦 (Wang Silun)
肖焕辉 (Xiao Huanhui)
刘志华 (Liu Zhihua)
Current Assignee
Shenzhen Yiwei Medical Technology Co Ltd
Original Assignee
Shenzhen Yiwei Medical Technology Co Ltd
Priority date
Application filed by Shenzhen Yiwei Medical Technology Co Ltd
Priority to CN202111655896.3A
Publication of CN114298234A
Application granted
Publication of CN114298234B

Abstract

The application relates to a brain medical image classification method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring a brain medical image corresponding to a target classification task; preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image; inputting the gray matter image and the white matter image together into a deep learning classification model and processing them through each module of the model, the deep learning classification model being a dense convolutional network into which at least one attention mechanism module is introduced, where each attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the target classification task according to the output of the deep learning classification model. By adopting the method, the efficiency and the accuracy of brain medical image classification can be improved.

Description

Brain medical image classification method, apparatus, computer device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for classifying brain medical images, a computer device, and a storage medium.
Background
Medical images are internal tissue images that are acquired non-invasively from the human body or a part of the human body for medical treatment or medical research. Classifying medical images, such as brain medical images, to obtain the medical image classification category can provide effective medical assistance for medical workers.
In the conventional medical image classification scheme, a doctor typically distinguishes a patient's medical images from those of normal subjects by inspecting visible changes in the images. This approach is highly subjective and requires extensive clinical knowledge and experience, which limits both the efficiency and the accuracy of medical image classification.
Disclosure of Invention
In view of the above, there is a need to provide a brain medical image classification method, apparatus, computer device and storage medium that can improve the classification efficiency and accuracy.
A method for classifying medical images of the brain, comprising:
acquiring a brain medical image corresponding to the target classification task;
preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image;
inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolutional network with at least one attention mechanism module introduced, and the attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
In one embodiment, the dense convolutional network comprises one convolution module, three dense modules, two transition modules, one classification module, and two attention mechanism modules, wherein a transition module exists between any two adjacent dense modules; one attention mechanism module is located between the first dense module and the first transition module, the other attention mechanism module is located between the second dense module and the second transition module, and each attention mechanism module includes a spatial attention mechanism module and a channel attention mechanism module.
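The module ordering described in this embodiment can be written out as a simple sequence; a minimal sketch in plain Python (module names are illustrative labels, not from the patent):

```python
# Module order of the dense convolutional network described above:
# one convolution module, then dense/attention/transition groups,
# with the third dense module feeding the classification module.
PIPELINE = [
    "conv",
    "dense_1", "attention_1", "transition_1",
    "dense_2", "attention_2", "transition_2",
    "dense_3",
    "classify",
]

def count(kind):
    """Number of pipeline modules whose name starts with `kind`."""
    return sum(1 for m in PIPELINE if m.startswith(kind))
```

Each attention module sits immediately after its dense module and before the matching transition module, as the claim specifies.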
In one embodiment, the acquiring medical images of the brain corresponding to the target classification task includes:
acquiring brain medical images of at least two modalities corresponding to the target classification task;
the preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image comprises:
preprocessing the brain medical images in each mode to segment gray matter images and white matter images corresponding to the brain medical images in each mode;
the inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model includes:
merging the features of the gray matter images, inputting the merged image into a multilayer perceptron, and optimizing the merged image through back propagation to obtain a fused gray matter image;
merging the features of the white matter images, inputting the merged image into a multilayer perceptron, and optimizing the merged image through back propagation to obtain a fused white matter image;
and inputting the fused gray matter image and the fused white matter image into a deep learning classification model together, and processing through each module of the deep learning classification model.
In one embodiment, the acquiring of the medical brain image corresponding to the target classification task includes:
acquiring historical brain medical images and real-time brain medical images corresponding to the target classification task;
the preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image comprises:
respectively preprocessing the historical brain medical image and the real-time brain medical image to segment a grey matter image and a white matter image which respectively correspond to the historical brain medical image and the real-time brain medical image;
the inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model includes:
and extracting a first difference image between the two gray matter images and a second difference image between the two white matter images, inputting the two gray matter images, the two white matter images, the first difference image and the second difference image into a deep learning classification model together, and processing through each module of the deep learning classification model.
In one embodiment, the preprocessing the brain medical image to segment a gray matter image corresponding to the brain medical image and a white matter image corresponding to the brain medical image comprises:
preprocessing the brain medical image to segment a global gray matter image corresponding to the brain medical image and a global white matter image corresponding to the brain medical image;
the inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model includes:
inputting the global grey matter image and the global white matter image into a deep learning classification model, and processing the global grey matter image and the global white matter image through each module of the deep learning classification model;
acquiring an intermediate feature map output by a spatial attention mechanism module of the deep learning classification model, so as to segment the global gray matter image based on the intermediate feature map to obtain a local gray matter image, and segment the global white matter image based on the intermediate feature map to obtain a local white matter image;
inputting the local gray matter image and the local white matter image into a deep learning classification model, and processing the local gray matter image and the local white matter image through each module of the deep learning classification model;
the obtaining of the classification result of the target classification task according to the output of the deep learning classification model comprises:
and obtaining a classification result of the brain medical image according to a first output of the deep learning classification model based on the global gray matter image and the global white matter image and a second output of the deep learning classification model based on the local gray matter image and the local white matter image.
In one embodiment, the method further comprises the step of model training; the step of model training comprises:
acquiring an original sample set and an initial deep learning classification model, wherein the original sample set comprises original samples of N categories, and each original sample has a corresponding classification label;
randomly dividing the original sample set into K sample subsets;
using one of the K sample subsets as a test set and the remaining (K-1) sample subsets as a training set, training the initial deep learning classification model in a supervised manner to obtain M trained deep learning classification models, where 2 ≤ M ≤ K;
and when the initial deep learning classification model is trained each time, calculating training loss by using a class-weighted cross entropy loss function, and updating model parameters of the initial deep learning classification model by minimizing the training loss to obtain a corresponding trained deep learning classification model.
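The K-fold arrangement above can be sketched in plain Python (the function name, seed, and splitting details are illustrative; the patent only requires a random division into K subsets, each serving once as the test set):

```python
import random

def k_fold_splits(samples, k, seed=0):
    """Randomly divide a sample set into K subsets, then yield
    (train_set, test_set) pairs: one subset is the test set and
    the remaining (K-1) subsets form the training set."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    subsets = [shuffled[i::k] for i in range(k)]
    for i in range(k):
        test = subsets[i]
        train = [s for j, sub in enumerate(subsets) if j != i for s in sub]
        yield train, test
```

Training on M of these folds (2 ≤ M ≤ K) yields the M trained deep learning classification models described above.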
In one embodiment, the method further comprises, each time the initial deep learning classification model is trained, calculating a training loss using a class-weighted cross-entropy loss function by:
determining the sample number of the original samples of each category in the training set;
taking the ratio of the number of samples of each category to the number of samples of the training set as the frequency of each category;
determining the median of the frequencies of all categories;
dividing the median by the frequency of each category to obtain the weight of each category;
and constructing a cross entropy loss function according to the weight of each category to be used as training loss.
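The weighting steps above (frequency per category, median of frequencies, median divided by frequency) can be reproduced directly; a minimal sketch in plain Python (the function name is illustrative):

```python
from collections import Counter
from statistics import median

def class_weights(labels):
    """Median-frequency class weights as described above:
    frequency_c = n_c / n_total, weight_c = median(frequencies) / frequency_c.
    Rare classes get weights above 1, frequent classes below 1."""
    counts = Counter(labels)
    total = len(labels)
    freq = {c: n / total for c, n in counts.items()}
    med = median(freq.values())
    return {c: med / f for c, f in freq.items()}
```

These weights would then scale each category's term in the cross-entropy training loss.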
A brain medical image classification apparatus, comprising:
the acquisition module is used for acquiring the brain medical image corresponding to the target classification task;
the segmentation module is used for preprocessing the brain medical image so as to segment a gray matter image and a white matter image corresponding to the brain medical image;
the classification module is used for inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolutional network with at least one attention mechanism module introduced, and the attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a brain medical image corresponding to the target classification task;
preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image;
inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolutional network with at least one attention mechanism module introduced, and the attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a brain medical image corresponding to the target classification task;
preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image;
inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolutional network with at least one attention mechanism module introduced, and the attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
According to the brain medical image classification method and device, the computer device, and the storage medium, after the brain medical image corresponding to the target classification task is obtained, the brain medical image is preprocessed to segment the gray matter image and the white matter image corresponding to it, the two images are input into the deep learning classification model together, and they are processed through all modules of the model. Performing medical image classification through the deep learning classification model improves classification efficiency and avoids the subjectivity of manual classification that degrades accuracy. Classifying based on both the gray matter image and the white matter image makes the information that the classification depends on richer and more complete, which further improves classification accuracy. In addition, the deep learning classification model is a dense convolutional network with at least one attention mechanism module, each comprising a spatial attention mechanism module and a channel attention mechanism module; introducing attention mechanisms in both the spatial and channel dimensions lets the model focus more accurately on the relevant regions of the gray matter and white matter images during processing, further improving the classification accuracy of the model.
Drawings
Fig. 1 is a diagram illustrating an application environment of the method for classifying brain medical images according to an embodiment;
FIG. 2 is a flow chart illustrating a method for classifying medical images of the brain according to an embodiment;
FIG. 3 is a diagram of a deep learning classification model in one embodiment;
fig. 4 is a block diagram illustrating the structure of the brain medical image classification apparatus according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram of an application environment of the brain medical image classification method according to an embodiment. Referring to fig. 1, the method for classifying brain medical images is applied to a brain medical image classification system. The brain medical image classification system comprises a terminal 102 and a server 104. The terminal 102 and the server 104 are connected via a network. The terminal 102 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 104 may be implemented as a stand-alone server or a server cluster comprised of multiple servers. The terminal 102 and the server 104 may be used separately to execute the brain medical image classification method provided in the embodiment of the present application. The terminal 102 and the server 104 may also be cooperatively used to execute the brain medical image classification method provided in the embodiment of the present application.
In one embodiment, as shown in fig. 2, a brain medical image classification method is provided, which is described by taking an example that the method is applied to a computer device in fig. 1 (the computer device may be specifically a terminal or a server in fig. 1), and includes the following steps:
step 202, a brain medical image corresponding to the target classification task is obtained.
The medical image is a three-dimensional tissue image with spatial position information, obtained in a non-invasive manner from a target portion of a biological object. Specifically, the tissue image may be obtained by CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), or the like.
In a specific embodiment, the medical image of the brain to be classified is an MRI image of the brain.
In one embodiment, step 202 comprises: acquiring brain medical images of at least two modalities corresponding to the target classification task. For example, the magnetic resonance images may include a T1-weighted image (T1W MRI), a T2-weighted image (T2W MRI), a proton-density-weighted image (PDW MRI), a T2 fluid-attenuated inversion recovery image (T2-FLAIR MRI), and so on. The computer device may acquire brain medical images in at least two of these modalities, such as a T1W magnetic resonance image and a T2W magnetic resonance image.
It is understood that brain medical images of different modalities may reflect different information, or the same information from different perspectives, and that when classifying for a target task, diversified data may be selected to improve classification accuracy.
In one embodiment, step 202 includes: acquiring a historical brain medical image and a real-time brain medical image corresponding to the target classification task. The historical brain medical image may be a brain medical image obtained from a previous brain examination of the target object, preferably the most recent previous examination. The real-time brain medical image is the brain medical image obtained from the current brain examination.
It can be understood that brain medical images from different periods can reflect the development of a lesion. When classifying for the target task, time-series data can be selected so that the classification attends to the relevant changes in information, which can improve classification accuracy. It can also effectively avoid the chance error of a detection result based on a single medical image.
Step 204, preprocessing the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image.
Specifically, the computer device can perform head correction, registration, and segmentation on the NIfTI-format magnetic resonance image to obtain three images of gray matter, white matter, and cerebrospinal fluid, and then perform spatial normalization and smoothing on the gray matter image and the white matter image to obtain the processed gray matter image and white matter image. Spatial normalization may specifically mean registering the image to the standard MNI brain template space, unifying the coordinate space of the images.
In one embodiment, in the scenario where the computer device acquires brain medical images of at least two modalities, step 204 comprises: preprocessing the brain medical image of each modality to segment the gray matter image and the white matter image corresponding to each modality's brain medical image.
In one embodiment, in the scenario where the computer device acquires a historical brain medical image and a real-time brain medical image, step 204 comprises: preprocessing the historical brain medical image and the real-time brain medical image respectively, to segment a gray matter image and a white matter image corresponding to each of them.
Step 206, inputting the gray matter image and the white matter image into a deep learning classification model together, and processing the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolutional network with at least one attention mechanism module, and the attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module.
Specifically, the computer device can stack the gray matter image and the white matter image as two independent channels to obtain a two-channel fused image, so that the fused image carries both kinds of feature information; the fused image is input into the deep learning classification model and processed through each module of the model.
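The two-channel stacking can be sketched in plain Python (nested lists stand in for the image tensors a real implementation would use; function names are illustrative):

```python
def shape(vol):
    """Shape of a nested-list volume, assuming rectangular nesting."""
    s = []
    while isinstance(vol, list):
        s.append(len(vol))
        vol = vol[0]
    return tuple(s)

def stack_channels(gray, white):
    """Stack two same-shaped volumes into a two-channel fused image
    (channel 0 = gray matter, channel 1 = white matter), mirroring the
    two-channel input to the deep learning classification model."""
    if shape(gray) != shape(white):
        raise ValueError("gray and white matter volumes must share a shape")
    return [gray, white]
```

The result adds a leading channel axis while leaving each volume's own dimensions unchanged.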
The dense convolutional network directly connects all layers whose feature maps have matching sizes, preserving as much information flow between layers as possible: each layer obtains additional inputs from all preceding layers, and passes its own feature maps on to all subsequent layers, while keeping the feed-forward nature of the network.
The attention mechanism module comprises a Spatial Attention Module and a Channel Attention Module. The Channel Attention Module uses the relationships among the channels of the features to generate a channel attention map, and the Spatial Attention Module uses the relationships among spatial positions of the features to generate a spatial attention map. By introducing attention mechanism modules into the dense convolutional network, the network can learn to attend to important information, namely the focus areas of the gray matter and the white matter.
In one embodiment, as shown in FIG. 3, the dense convolutional network comprises one convolution module, three dense modules, two transition modules, one classification module, and two attention mechanism modules; one attention mechanism module is located between the first dense module and the first transition module, the other is located between the second dense module and the second transition module, and each attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module. For example, the process of constructing the CBAM-DenseNet100 network, i.e., the deep learning classification model provided by the present application, is as follows:
On the basis of the DenseNet-121 network, the number of dense modules and the number of convolution layers in the remaining dense modules are reduced, the output of the third dense module is connected to a global pooling layer, and the output of the global pooling layer is connected to the output layer, yielding a lightweight dense convolutional network; an attention mechanism module is then added between adjacent dense modules of the lightweight network to construct the CBAM-DenseNet100 deep learning classification model. Balancing model size against accuracy, the number of dense modules and the number of convolution layers they contain are optimized: the optimized network has 3 dense modules, each containing 16 1×1 convolution layers and 16 3×3 convolution layers, which reduces the parameter count of the model while retaining good accuracy.
The convolution module includes a convolution layer, a batch normalization (BN) layer, and a pooling layer. Each dense module consists of 16 sets of convolution operations, each set stacking a BN layer, a convolution layer with a 1×1 kernel, and a 3×3 convolution layer. The BN layer prevents gradient diffusion and maintains the nonlinearity of the network; the 1×1 convolution layer, also called a bottleneck layer, reduces the number of output feature maps, reducing dimensionality and computation; the 3×3 convolution layer extracts features of a single slice of the magnetic resonance image and the continuous change information between slices. Dense connections are used inside the dense module, i.e. the input of the current layer comes from the outputs of all previous layers. This connection pattern not only strengthens feature propagation, so that the features of each layer are used more effectively, but also alleviates the vanishing-gradient problem of deep neural networks and aids deep network training. The transition module includes a BN layer, a convolution layer, and a pooling layer. The classification module comprises a pooling layer, a fully connected layer, and a Softmax function.
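Because each layer's output is concatenated onto the inputs of all following layers, the channel count inside a dense module grows linearly. A small sketch of that bookkeeping (the initial channel count and growth rate are illustrative; the patent does not state them):

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Channel count at the input of each layer in a dense block.
    Every layer emits `growth_rate` channels that are concatenated
    onto the inputs of all later layers, so layer l receives
    in_channels + l * growth_rate channels."""
    return [in_channels + l * growth_rate for l in range(num_layers + 1)]
```

This is why a transition module (with its 1×1 convolution and pooling) is needed between dense modules: it shrinks the accumulated channel count back down before the next block.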
The two CBAM modules in the CBAM-DenseNet100 network are identical in structure; each is composed of a channel attention module and a spatial attention module. The channel attention module is composed of pooling layers, a shared multilayer perceptron (MLP), and a sigmoid activation function. Taking the feature map output by the dense module as the input feature map of the channel attention module, the input feature map is first passed through global max pooling and global average pooling, giving two 1×1×C feature maps; the two maps are each sent through the shared two-layer MLP; the features output by the MLP are then added element-wise, and a sigmoid activation produces the final channel attention map. Finally, an element-wise multiplication of the channel attention map with the input feature map generates the input features required by the spatial attention module.
The spatial attention module takes the feature map output by the channel attention module as its input. First, channel-wise global max pooling and global average pooling produce two H×W×1 feature maps, which are concatenated along the channel dimension. A 7×7 convolution then reduces the result back to a single channel, i.e. H×W×1, and a sigmoid generates the spatial attention map. Finally, the spatial attention map is multiplied with the module's input feature map to obtain the output feature map. The generated features are then fed into a transition module for dimensionality reduction, and the output of the transition module is passed to the next dense module for the subsequent convolution operations.
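The two attention steps above can be sketched in plain Python on a tiny C×H×W feature map. This is a deliberately simplified illustration: the shared MLP is an identity here, and the 7×7 convolution of the real spatial module is replaced by a simple sum of the two pooled maps.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap, mlp=lambda v: v):
    """CBAM-style channel attention on a C x H x W nested-list map:
    global max- and average-pool each channel, pass both through a
    shared MLP (identity here for illustration), add element-wise,
    apply sigmoid, and scale each channel by its attention weight."""
    weights = []
    for ch in fmap:
        flat = [v for row in ch for v in row]
        pooled = mlp(max(flat)) + mlp(sum(flat) / len(flat))
        weights.append(sigmoid(pooled))
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(fmap, weights)]

def spatial_attention(fmap):
    """CBAM-style spatial attention: max- and average-pool across
    channels at each position, combine the two H x W maps (a 7x7
    convolution in the real module; a plain sum here), apply sigmoid,
    and scale every channel by the resulting H x W attention map."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    attn = [[sigmoid(max(fmap[c][i][j] for c in range(C)) +
                     sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * attn[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

Both functions preserve the C×H×W shape, only rescaling values, which is what lets the CBAM block be dropped between a dense module and a transition module without changing any layer interfaces.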
In one embodiment, the features of a plurality of gray matter images are merged, the merged image is input into a multilayer perceptron and optimized through back propagation to obtain a fused gray matter image; the features of a plurality of white matter images are merged, the merged image is input into a multilayer perceptron and optimized through back propagation to obtain a fused white matter image; and the fused gray matter image and the fused white matter image are input together into the deep learning classification model and processed through each module of the model.
The multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps multiple input data sets onto a single output data set. Besides an input layer and an output layer, a multilayer perceptron may have several hidden layers in between. Feature merging refers to superimposing features from different sources to obtain merged features that are more favorable for the prediction and classification of medical images. Feature merging can use the add mode: when features are fused in add mode, they are superimposed element-wise with the number of channels unchanged, so the add mode increases the amount of information describing the image in each dimension without changing the dimensionality, which clearly benefits the final image classification.
Specifically, after obtaining brain medical images in multiple modalities and extracting the gray matter and white matter images from each, the computer device may merge the features of the multiple gray matter images in add mode, input the merged image into the multilayer perceptron, and optimize it through back propagation to obtain the fused gray matter image; likewise, the features of the multiple white matter images are merged in add mode, input into the multilayer perceptron, and optimized through back propagation to obtain the fused white matter image, increasing the amount of information describing the image in each dimension while keeping the dimensionality unchanged. The fused gray matter image and the fused white matter image are then merged in concat mode, which increases the feature dimensionality of the image and attends to multi-channel information.
In this embodiment, using brain medical images of multiple modalities both increases the amount of information in each dimension and increases the number of dimensions, facilitating subsequent classification based on richer information.
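A minimal sketch of the add-mode and concat-mode merging described above; the array shapes, the two-modality setup, and the stand-in fused images are illustrative assumptions, and the MLP optimization step is omitted:

```python
import numpy as np

# Two gray matter feature maps from different modalities (e.g., two MRI
# sequences), each with shape (channels, depth, height, width) -- sizes
# here are illustrative, not the patent's configuration.
gm_mod1 = np.random.rand(4, 8, 8, 8)
gm_mod2 = np.random.rand(4, 8, 8, 8)

# add mode: element-wise superposition; the channel count is unchanged,
# but each dimension now carries information from both modalities.
gm_add = gm_mod1 + gm_mod2
assert gm_add.shape == gm_mod1.shape

# concat mode: stack along the channel axis; the channel dimension
# grows, attending to the information of multiple channels.
gm_fused = np.random.rand(4, 8, 8, 8)  # stand-in for the MLP-fused gray matter image
wm_fused = np.random.rand(4, 8, 8, 8)  # stand-in for the MLP-fused white matter image
merged = np.concatenate([gm_fused, wm_fused], axis=0)
assert merged.shape == (8, 8, 8, 8)
```

The add mode keeps the tensor shape fixed, while the concat mode doubles the channel count here, matching the dimension-versus-information trade-off described above.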
In one embodiment, a first difference image between the two gray matter images and a second difference image between the two white matter images are extracted, and the two gray matter images, the two white matter images, the first difference image and the second difference image are input into the deep learning classification model together and processed through each module of the deep learning classification model.
Here, a difference image is an image reflecting the change information between two images. The change information can, on one hand, reflect the evolution of a lesion, and on the other hand avoid the contingency of a single-frame image. The difference image between two images can be obtained by an element-wise subtraction operation that leaves the number of channels unchanged.
Specifically, the computer device may extract a first difference image between the gray matter image of the historical brain medical image and the gray matter image of the real-time brain medical image, and likewise extract a second difference image between the white matter image of the historical brain medical image and the white matter image of the real-time brain medical image, thereby obtaining the information difference in each dimension without changing the dimensionality. The gray matter images of the historical and real-time brain medical images, the white matter images of the historical and real-time brain medical images, the first difference image and the second difference image are then superimposed along the channel axis and input into the deep learning classification model together, and the channel-superimposed images are processed through each module of the deep learning classification model.
In this embodiment, the information of brain medical images at multiple time points is synthesized for classification, which both adds the time dimension and captures the information difference along it, facilitating subsequent classification based on richer information.
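The difference-image extraction and channel superposition described above can be sketched as follows; the single-channel 3D volume shapes are assumptions for illustration:

```python
import numpy as np

# Historical and real-time gray/white matter images -- illustrative
# single-channel 3D volumes; the shapes are assumed for the sketch.
gm_hist, gm_now = np.random.rand(2, 16, 16, 16)
wm_hist, wm_now = np.random.rand(2, 16, 16, 16)

# Element-wise subtraction keeps the shape and channel count unchanged
# while exposing the change information between the two time points.
gm_diff = gm_now - gm_hist  # first difference image
wm_diff = wm_now - wm_hist  # second difference image

# Channel superposition: stack the four source images and the two
# difference images into one multi-channel input for the model.
model_input = np.stack(
    [gm_hist, gm_now, wm_hist, wm_now, gm_diff, wm_diff], axis=0
)
assert model_input.shape == (6, 16, 16, 16)
```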
In one embodiment, the global gray matter image and the global white matter image are input into a deep learning classification model, and the global gray matter image and the global white matter image are processed through modules of the deep learning classification model; acquiring an intermediate feature map output by a space attention mechanism module of the deep learning classification model, segmenting a global grey matter image based on the intermediate feature map to obtain a local grey matter image, and segmenting the global white matter image based on the intermediate feature map to obtain a local white matter image; and inputting the local gray matter image and the local white matter image into a deep learning classification model, and processing the local gray matter image and the local white matter image through each module of the deep learning classification model.
Specifically, the gray matter image and the white matter image directly segmented from the brain medical image by the computer device are global images based on the whole brain, and reflect the overall information distribution of the brain. A module of the deep learning classification model, such as the spatial attention mechanism module, outputs an intermediate feature map that carries high-level semantic features and focuses on the important regions of gray matter and white matter. By segmenting the global white matter image and/or the global gray matter image based on the intermediate feature map, a local image is obtained that reflects the information of the important regions of the brain.
In this embodiment, images are classified by integrating global and local information. On one hand, the global correlation information reflected by the global images is utilized; on the other hand, no additional model structure is introduced: the local images are cropped from the key regions that the original deep learning model attends to while processing the global images, and this local information on key regions further improves the classification accuracy of the model.
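One plausible way to obtain a local image from the intermediate feature map is to threshold the attention values and crop the bounding box of the high-attention region; the patent does not specify the exact cropping rule, so the threshold, shapes, and region below are assumptions:

```python
import numpy as np

# Spatial attention map for a global image -- illustrative values;
# in practice this is the module's intermediate feature map.
attn = np.zeros((16, 16, 16))
attn[4:10, 6:12, 5:9] = 1.0  # a hypothetical high-attention region
global_gm = np.random.rand(16, 16, 16)  # global gray matter image

# Threshold the attention map and take the bounding box of the
# high-attention voxels to crop the corresponding local image.
mask = attn > 0.5
coords = np.argwhere(mask)
lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
local_gm = global_gm[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
assert local_gm.shape == (6, 6, 4)
```

The local crop would then be fed back through the same model, so no additional network structure is needed, as the embodiment notes.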
And step 208, obtaining a classification result of the target classification task according to the output of the deep learning classification model.
Specifically, the output of the deep learning classification model is the probability that the target input belongs to each class. For example, if the deep learning classification model is a Q-classification model, it outputs probabilities corresponding to the Q classes.
For example, suppose the deep learning classification model predicts that the target input belongs to three classes with probabilities X0, X1 and X2 respectively. If X0 is the highest probability, the final prediction result is the class corresponding to X0; if X1 is the highest, the final prediction result is the class corresponding to X1; and if X2 is the highest, the final prediction result is the class corresponding to X2.
In one embodiment, step 208 includes: and obtaining a classification result of the brain medical image according to a first output of the deep learning classification model based on the global gray matter image and the global white matter image and a second output of the deep learning classification model based on the local gray matter image and the local white matter image.
Specifically, the computer device may take the weighted average of the two output probabilities for each class as the criterion, the class with the highest weighted probability being the final prediction result. The weights of the first output and the second output can be set according to the importance of the corresponding image information (global information/local information) to the classification result.
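The weighted combination of the two outputs can be sketched as follows; the probabilities and the weights are illustrative assumptions, not values given in the patent:

```python
import numpy as np

# Class probabilities from the global-image pass and the local-image pass
# of the model -- illustrative values for a three-class task.
p_global = np.array([0.5, 0.3, 0.2])  # first output
p_local = np.array([0.2, 0.6, 0.2])   # second output

# Hypothetical weights reflecting the relative importance of global
# vs. local information; they sum to 1.
w_global, w_local = 0.4, 0.6
p_final = w_global * p_global + w_local * p_local

pred = int(np.argmax(p_final))
# p_final = [0.32, 0.48, 0.20], so class 1 wins in this example.
assert pred == 1
```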
According to the brain medical image classification method above, after the brain medical image corresponding to the target classification task is obtained, the brain medical image is preprocessed to segment the gray matter image and the white matter image corresponding to it, and the gray matter image and the white matter image are input into the deep learning classification model together and processed through each module of the deep learning classification model. Classifying medical images with the deep learning classification model improves classification efficiency and avoids the subjectivity of manual classification, thereby improving classification accuracy. Classifying based on both the gray matter image and the white matter image makes the information the classification relies on richer and more complete, further improving accuracy. In addition, the deep learning classification model is a dense convolutional network into which at least one attention mechanism module is introduced, each comprising a spatial attention mechanism module and a channel attention mechanism module; introducing attention in both the spatial and channel aspects allows the model to focus more accurately on the key regions of the gray matter and white matter images during processing, further improving the classification accuracy of the model.
In one embodiment, the brain medical image classification method further comprises a model training step. The model training step comprises: acquiring an original sample set and an initial deep learning classification model, wherein the original sample set comprises original samples corresponding to N classes, each original sample having a corresponding classification label; randomly dividing the original sample set into K sample subsets; taking one of the K sample subsets as the test set and the remaining (K-1) sample subsets as the training set, and training the initial deep learning classification model in a supervised manner to obtain M trained deep learning classification models, where 2 ≤ M ≤ K; and, each time the initial deep learning classification model is trained, calculating the training loss with a class-weighted cross entropy loss function and updating the model parameters of the initial deep learning classification model by minimizing the training loss to obtain the corresponding trained deep learning classification model.
Specifically, the computer device can collect original samples of each classification class and randomly divide the original sample set into K sample subsets. Each time, one of the K sample subsets is used as the test set and the remaining (K-1) sample subsets as the training set, and the initial deep learning classification model is trained in a supervised manner, so that M trained deep learning classification models are obtained, where 2 ≤ M ≤ K; the M models can be obtained through parallel training. During model training the initial learning rate is 0.001 and the training loss is calculated with the class-weighted cross entropy loss function; the smaller the loss value, the better the classification performance of the model on the training data, so the optimal deep learning classification model under each pair of training and test sets can be obtained.
In this embodiment, considering that the amount of data of each class in each training set and test set is unbalanced during model training, the class-weighted cross entropy loss function gives each class a weight, which can improve the accuracy of the model.
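The K-fold style splitting described above can be sketched as follows; the sample identifiers and the values of K and the sample count are illustrative placeholders:

```python
import random

# Each of the K subsets serves once as the test set while the other
# (K-1) subsets form the training set, yielding up to K trained models.
samples = list(range(20))  # illustrative sample identifiers
K = 5

random.shuffle(samples)                       # random division
subsets = [samples[i::K] for i in range(K)]   # K roughly equal subsets

folds = []
for k in range(K):
    test_set = subsets[k]
    train_set = [s for i, sub in enumerate(subsets) if i != k for s in sub]
    folds.append((train_set, test_set))
    # one model would be trained (in a supervised manner) per fold

assert len(folds) == K
assert all(len(tr) + len(te) == len(samples) for tr, te in folds)
```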
In one embodiment, the method for brain medical image classification further comprises calculating a training loss using a class-weighted cross-entropy loss function each time an initial deep-learning classification model is trained by: determining the sample number of original samples of each category in a training set; taking the ratio of the number of samples of each category to the number of samples of the training set as the frequency of each category; determining the median of the frequencies of all categories; dividing the median by the frequency of each category to obtain the weight of each category; and constructing a cross entropy loss function according to the weight of each category to be used as training loss.
Specifically, the computer device may count the total number of samples N in each training set, and the number of samples Si of each class in that training set, where i denotes a class index with i ≥ 0. For example, S0 may represent the Alzheimer's disease (AD) class, S1 the frontotemporal dementia (FTD) class, and S2 the normal control (NC) class. The computer device may then divide the number of samples of each class by the total number of samples to obtain the frequency of each class: F0 = S0/N, F1 = S1/N, F2 = S2/N, …, and take the median M of these frequencies. Dividing this median by the frequency of each class, w0 = M/F0, w1 = M/F1, w2 = M/F2, …, yields the weight of each class. In this way, the computer device can construct a cross entropy loss function based on the weight of each class as the training loss.
In this embodiment, the class weights are determined reasonably based on the ratio of each class's sample count to the total in each training and test set and on the median of those frequencies, which can improve the reliability and accuracy of the model.
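The median-frequency weighting just described can be sketched as follows; the class names mirror the AD/FTD/NC example, but the sample counts are illustrative stand-ins:

```python
import statistics

# Samples per class in one training set -- illustrative counts.
counts = {"AD": 60, "FTD": 15, "NC": 45}
total = sum(counts.values())  # total number of samples N = 120

freq = {c: n / total for c, n in counts.items()}      # Fi = Si / N
median_f = statistics.median(freq.values())            # M = median frequency
weights = {c: median_f / f for c, f in freq.items()}   # wi = M / Fi

# Rare classes get a weight above 1 and frequent classes below 1,
# counteracting class imbalance in the cross entropy loss.
assert weights["FTD"] > 1 > weights["AD"]
assert abs(weights["NC"] - 1.0) < 1e-9  # NC holds the median frequency here
```

Each class's weight would then multiply that class's term in the cross entropy loss, so under-represented classes contribute proportionally more to the training loss.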
In one embodiment, the computer device obtains the final classification result of the models by soft voting. Soft voting takes, for each class, the average of the probabilities predicted by all models as the criterion; the class with the highest average probability is the final prediction result. Specifically, M trained deep learning classification models can be obtained through the training described in this application, and the output of each deep learning classification model is the probability that the input belongs to each class; for example, a Q-classification model outputs probabilities corresponding to the Q classes. Suppose the 5 trained deep learning classification models yield average probabilities X0, X1 and X2 that the target input belongs to the three classes. If X0 is the highest, the final prediction result is AD; if X1 is the highest, the final prediction result is FTD; and if X2 is the highest, the final prediction result is NC.
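The soft voting described above can be sketched as follows; the per-model probabilities are illustrative values, not outputs of the patent's trained models:

```python
import numpy as np

# Soft voting over M = 5 trained models. Each row is one model's
# predicted distribution over the classes (AD, FTD, NC).
model_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.8, 0.1, 0.1],
    [0.5, 0.2, 0.3],
])

avg = model_probs.mean(axis=0)  # average probability per class (X0, X1, X2)
classes = ["AD", "FTD", "NC"]
prediction = classes[int(np.argmax(avg))]

# avg = [0.60, 0.24, 0.16], so the ensemble predicts AD in this example.
assert prediction == "AD"
```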
It should be understood that although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a brain medical image classification apparatus including: an obtaining module 401, a segmentation module 402, and a classification module 403, wherein:
an obtaining module 401, configured to obtain a brain medical image corresponding to the target classification task;
a segmentation module 402, configured to pre-process the brain medical image to segment a gray matter image and a white matter image corresponding to the brain medical image;
the classification module 403 is configured to input the gray matter image and the white matter image into the deep learning classification model, and process the gray matter image and the white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolution network with at least one attention mechanism module, and the attention mechanism module comprises a space attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
In one embodiment, the dense convolutional network comprises one convolution module, three dense modules, two transition modules, one classification module, and two attention mechanism modules; and the attention mechanism module is positioned between the second dense module and the second transition module, and each attention mechanism module comprises a space attention mechanism module and a channel attention mechanism module.
In one embodiment, the acquisition module 401 is further configured to acquire medical images of the brain of at least two modalities corresponding to the target classification task; the segmentation module 402 is further configured to pre-process the brain medical images in each modality to segment a gray matter image and a white matter image corresponding to the brain medical image in each modality; the classification module 403 is further configured to perform feature merging on the multiple gray matter images, input the merged images into a multi-layer perceptron, and optimize the merged images through back propagation to obtain a fused gray matter image; combining the characteristics of the white matter images, inputting the combined images into a multilayer perceptron, and optimizing the combined images through back propagation to obtain a fused white matter image; and inputting the fusion gray matter image and the fusion white matter image into a deep learning classification model together, and processing through each module of the deep learning classification model.
In one embodiment, the obtaining module 401 is further configured to obtain historical brain medical images and real-time brain medical images corresponding to the target classification task; the segmentation module 402 is further configured to respectively pre-process the historical brain medical image and the real-time brain medical image to segment a gray matter image and a white matter image corresponding to each other; the classification module 403 is further configured to extract a first difference image between two gray matter images and a second difference image between two white matter images, input the two gray matter images, the two white matter images, the first difference image and the second difference image into the deep learning classification model, and perform processing through each module of the deep learning classification model.
In one embodiment, the segmentation module 402 is further configured to pre-process the brain medical image to segment a global gray matter image corresponding to the brain medical image and a global white matter image corresponding to the brain medical image; the classification module 403 is further configured to input the global gray matter image and the global white matter image into a deep learning classification model, and process the global gray matter image and the global white matter image through each module of the deep learning classification model; acquiring an intermediate feature map output by a space attention mechanism module of the deep learning classification model, segmenting a global grey matter image based on the intermediate feature map to obtain a local grey matter image, and segmenting the global white matter image based on the intermediate feature map to obtain a local white matter image; inputting the local gray matter image and the local white matter image into a deep learning classification model, and processing the local gray matter image and the local white matter image through each module of the deep learning classification model; and obtaining a classification result of the brain medical image according to a first output of the deep learning classification model based on the global gray matter image and the global white matter image and a second output of the deep learning classification model based on the local gray matter image and the local white matter image.
In one embodiment, the brain medical image classification device further includes a training module, where the training module is configured to obtain an original sample set and an initial deep learning classification model, the original sample set includes original samples corresponding to N categories, and each original sample has a corresponding classification label; randomly dividing an original sample set into K sample subsets; taking one sample subset of the K sample subsets as a test set, taking the rest (K-1) sample subsets as a training set, and training an initial deep learning classification model in a supervision manner to obtain M trained deep learning classification models, wherein M is more than or equal to 2 and less than or equal to K; and when the initial deep learning classification model is trained each time, calculating training loss by using a class-weighted cross entropy loss function, and updating model parameters of the initial deep learning classification model by minimizing the training loss to obtain a corresponding trained deep learning classification model.
In one embodiment, the training module is further configured to determine a sample number of the original samples of each category in the training set; taking the ratio of the number of samples of each category to the number of samples of the training set as the frequency of each category; determining the median of the frequencies of all categories; dividing the median by the frequency of each category to obtain the weight of each category; and constructing a cross entropy loss function according to the weight of each category to be used as training loss.
For the specific definition of the brain medical image classification apparatus, reference may be made to the definition of the brain medical image classification method above, which is not repeated here. All or part of the modules in the above brain medical image classification apparatus may be implemented by software, hardware, or a combination thereof. The modules may be embedded in or independent from a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure may be as shown in fig. 5. The computer device comprises a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a brain medical image classification method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the brain medical image classification method in the above embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the steps of the brain medical image classification method in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A method for classifying brain medical images, comprising:
acquiring brain medical images of at least two modalities corresponding to a target classification task;
preprocessing the brain medical images in each mode to segment gray matter images and white matter images corresponding to the brain medical images in each mode;
combining the characteristics of the gray matter images, inputting the combined images into a multilayer perceptron, and optimizing the combined images through back propagation to obtain a fused gray matter image;
combining the characteristics of the white matter images, inputting the combined images into a multilayer perceptron, and optimizing the combined images through back propagation to obtain a fused white matter image;
inputting the fusion gray matter image and the fusion white matter image into a deep learning classification model together, and processing through each module of the deep learning classification model; the deep learning classification model is a dense convolution network with at least one attention mechanism module introduced, and the attention mechanism module comprises a space attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
2. A method for classifying brain medical images, comprising:
acquiring historical brain medical images and real-time brain medical images corresponding to the target classification task;
respectively preprocessing the historical brain medical image and the real-time brain medical image to segment a grey matter image and a white matter image which respectively correspond to the historical brain medical image and the real-time brain medical image;
extracting a first difference image between the two gray matter images and a second difference image between the two white matter images, inputting the two gray matter images, the two white matter images, the first difference image and the second difference image into a deep learning classification model together, and processing through each module of the deep learning classification model; the deep learning classification model is a dense convolution network with at least one attention mechanism module introduced, and the attention mechanism module comprises a space attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
3. A method for classifying medical images of the brain, comprising:
acquiring a brain medical image corresponding to the target classification task;
preprocessing the brain medical image to segment a global gray matter image corresponding to the brain medical image and a global white matter image corresponding to the brain medical image;
inputting the global gray matter image and the global white matter image into a deep learning classification model, and processing the global gray matter image and the global white matter image through each module of the deep learning classification model;
acquiring an intermediate feature map output by a space attention mechanism module of the deep learning classification model, so as to segment the global grey matter image based on the intermediate feature map to obtain a local grey matter image, and segment the global white matter image based on the intermediate feature map to obtain a local white matter image;
inputting the local gray matter image and the local white matter image into a deep learning classification model, and processing the local gray matter image and the local white matter image through each module of the deep learning classification model; the deep learning classification model is a dense convolution network with at least one attention mechanism module introduced, and the attention mechanism module comprises a space attention mechanism module and a channel attention mechanism module;
and obtaining a classification result of the brain medical image according to a first output of the deep learning classification model based on the global gray matter image and the global white matter image and a second output of the deep learning classification model based on the local gray matter image and the local white matter image.
4. A medical brain image classification method according to any of claims 1 to 3, wherein the dense convolutional network comprises one convolutional module, three dense modules, two transition modules, one classification module and two attention mechanism modules; wherein a transition module exists between any two dense modules, one said attention module is located between a first said dense module and a first said transition module, and the other said attention module is located between a second said dense module and a second said transition module, each said attention module comprising a space attention module and a channel attention module.
5. The method for classifying medical images of the brain according to any one of claims 1 to 3, further comprising the step of model training; the step of model training comprises:
acquiring an original sample set and an initial deep learning classification model, wherein the original sample set comprises original samples corresponding to N categories, and classification labels exist in the original samples correspondingly;
randomly dividing the original sample set into K sample subsets;
one sample subset in the K sample subsets is used as a test set, the rest (K-1) sample subsets are used as a training set, an initial deep learning classification model is trained in a supervision mode, M trained deep learning classification models are obtained, and M is larger than or equal to 2 and smaller than or equal to K;
and when the initial deep learning classification model is trained each time, calculating training loss by using a category-weighted cross entropy loss function, and updating model parameters of the initial deep learning classification model by minimizing the training loss to obtain the corresponding trained deep learning classification model.
6. The brain medical image classification method according to claim 5, further comprising calculating a training loss using a class-weighted cross entropy loss function each time an initial deep learning classification model is trained, by:
determining the number of samples of each category in the training set;
taking the ratio of the number of samples of each category to the total number of samples in the training set as the frequency of each category;
determining the median of the frequencies of all categories;
dividing the median by the frequency of each category to obtain the weight of each category;
and constructing a cross entropy loss function weighted by the weight of each category to serve as the training loss.
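The four steps of claim 6 amount to median-frequency class balancing. A minimal sketch, assuming softmax probabilities as model output; the helper names are illustrative, not from the patent:

```python
import numpy as np

def median_frequency_weights(labels, n_classes):
    # Steps of claim 6: count samples per class, frequency = count / total,
    # then weight = median(frequencies) / frequency of each class.
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    return np.median(freq) / freq

def weighted_cross_entropy(probs, labels, weights):
    # probs: (N, n_classes) softmax outputs. Each sample's negative
    # log-likelihood is scaled by the weight of its true class.
    per_sample = -weights[labels] * np.log(probs[np.arange(len(labels)), labels])
    return per_sample.mean()
```

Because rare classes have low frequency, they receive weights above the median-frequency baseline, so errors on under-represented categories contribute more to the training loss.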
7. A brain medical image classification device, comprising:
the acquisition module is used for acquiring brain medical images of at least two modalities corresponding to the target classification task;
the segmentation module is used for preprocessing the brain medical images in each mode to segment gray matter images and white matter images corresponding to the brain medical images in each mode;
the classification module is used for: combining the features of the gray matter images, inputting the combined image into a multilayer perceptron, and optimizing it through back propagation to obtain a fused gray matter image; combining the features of the white matter images, inputting the combined image into the multilayer perceptron, and optimizing it through back propagation to obtain a fused white matter image; inputting the fused gray matter image and the fused white matter image together into a deep learning classification model for processing by each module of the deep learning classification model, wherein the deep learning classification model is a dense convolutional network into which at least one attention mechanism module is introduced, and each attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
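The multimodal fusion step of claim 7 can be sketched as a forward pass. Assumptions: features are combined by flattening and concatenating the per-modality images, and the perceptron uses random untrained weights; the back-propagation optimization the claim specifies is omitted, and the function name is hypothetical.

```python
import numpy as np

def fuse_modalities(images, hidden=32, seed=0):
    """Flatten and concatenate the per-modality images (e.g. the gray
    matter images from each modality), pass the concatenation through a
    small two-layer perceptron, and reshape the output back to a single
    fused image. Illustrative forward pass only; weights are random."""
    rng = np.random.default_rng(seed)
    shape = images[0].shape
    flat = np.concatenate([im.ravel() for im in images])
    w1 = rng.standard_normal((hidden, flat.size)) * 0.01
    w2 = rng.standard_normal((int(np.prod(shape)), hidden)) * 0.01
    fused = w2 @ np.maximum(w1 @ flat, 0)  # ReLU hidden layer
    return fused.reshape(shape)
```

The same routine would be applied twice, once to the gray matter images and once to the white matter images, producing the two fused inputs for the dense convolutional network.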
8. A brain medical image classification device, comprising:
the acquisition module is used for acquiring historical brain medical images and real-time brain medical images corresponding to the target classification tasks;
the segmentation module is used for respectively preprocessing the historical brain medical image and the real-time brain medical image to segment the gray matter image and the white matter image corresponding to each of them;
the classification module is used for: extracting a first difference image between the two gray matter images and a second difference image between the two white matter images; inputting the two gray matter images, the two white matter images, the first difference image and the second difference image together into a deep learning classification model for processing by each module of the deep learning classification model, wherein the deep learning classification model is a dense convolutional network into which at least one attention mechanism module is introduced, and each attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the target classification task according to the output of the deep learning classification model.
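The longitudinal input of claim 8 is easy to assemble. Two assumptions in this sketch: the "difference image" is taken as a voxel-wise subtraction, and the six images are stacked along a channel axis as the network input; the patent states neither detail explicitly.

```python
import numpy as np

def build_longitudinal_input(gray_hist, gray_now, white_hist, white_now):
    """First difference image: voxel-wise difference of the two gray
    matter images; second difference image: voxel-wise difference of
    the two white matter images. All six images are stacked along a
    leading channel axis as the classification-model input."""
    diff_gray = gray_now - gray_hist
    diff_white = white_now - white_hist
    return np.stack([gray_hist, gray_now, white_hist, white_now,
                     diff_gray, diff_white], axis=0)
```

Feeding the difference channels alongside the raw images lets the network attend directly to longitudinal change rather than inferring it from the two timepoints.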
9. A brain medical image classification device, comprising:
the acquisition module is used for acquiring brain medical images corresponding to the target classification task;
the segmentation module is used for preprocessing the brain medical image to segment a global gray matter image and a global white matter image corresponding to the brain medical image;
the classification module is used for: inputting the global gray matter image and the global white matter image into a deep learning classification model and processing them through each module of the deep learning classification model; acquiring an intermediate feature map output by a spatial attention mechanism module of the deep learning classification model, segmenting the global gray matter image based on the intermediate feature map to obtain a local gray matter image, and segmenting the global white matter image based on the intermediate feature map to obtain a local white matter image; inputting the local gray matter image and the local white matter image into the deep learning classification model and processing them through each module of the deep learning classification model, wherein the deep learning classification model is a dense convolutional network into which at least one attention mechanism module is introduced, and each attention mechanism module comprises a spatial attention mechanism module and a channel attention mechanism module; and obtaining a classification result of the brain medical image according to a first output of the deep learning classification model based on the global gray matter image and the global white matter image and a second output of the deep learning classification model based on the local gray matter image and the local white matter image.
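The global-to-local scheme of claim 9 can be sketched in two helpers. The patent only says the global image is segmented "based on the intermediate feature map" and that the result combines two outputs, so both the quantile-threshold/bounding-box rule and the softmax-averaging fusion below are assumptions made for illustration.

```python
import numpy as np

def crop_from_attention(global_img, attn_map, keep=0.25):
    """Threshold the spatial-attention feature map at its (1 - keep)
    quantile, take the bounding box of the retained pixels, and crop
    the global image to that box to obtain the 'local' image."""
    thr = np.quantile(attn_map, 1 - keep)
    ys, xs = np.nonzero(attn_map >= thr)
    return global_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def fuse_outputs(global_logits, local_logits):
    """Average the softmax outputs of the global and local passes; the
    fused classification result is the argmax of the averaged scores."""
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()
    p = 0.5 * (softmax(global_logits) + softmax(local_logits))
    return int(np.argmax(p))
```

The attention map thus serves double duty: it reweights features inside the network and, read out as an intermediate feature map, locates the region on which the second, local pass should focus.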
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202111655896.3A 2021-12-31 2021-12-31 Brain medical image classification method and device, computer equipment and storage medium Active CN114298234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111655896.3A CN114298234B (en) 2021-12-31 2021-12-31 Brain medical image classification method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114298234A CN114298234A (en) 2022-04-08
CN114298234B true CN114298234B (en) 2022-10-04

Family

ID=80973088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111655896.3A Active CN114298234B (en) 2021-12-31 2021-12-31 Brain medical image classification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114298234B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578370B (en) * 2022-10-28 2023-05-09 深圳市铱硙医疗科技有限公司 Brain image-based metabolic region abnormality detection method and device
CN116342582B (en) * 2023-05-11 2023-08-04 湖南工商大学 Medical image classification method and medical equipment based on deformable attention mechanism
CN117036894B (en) * 2023-10-09 2024-03-26 之江实验室 Multi-mode data classification method and device based on deep learning and computer equipment
CN117392124B (en) * 2023-12-08 2024-02-13 山东大学 Medical ultrasonic image grading method, system, server, medium and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016007518A1 (en) * 2014-07-07 2016-01-14 The Regents Of The University Of California Automatic segmentation and quantitative parameterization of brain tumors in mri
CN107506797A (en) * 2017-08-25 2017-12-22 电子科技大学 One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN109671054A (en) * 2018-11-26 2019-04-23 西北工业大学 The non-formaldehyde finishing method of multi-modal brain tumor MRI
CN111932529A (en) * 2020-09-10 2020-11-13 腾讯科技(深圳)有限公司 Image segmentation method, device and system
CN112164082A (en) * 2020-10-09 2021-01-01 深圳市铱硙医疗科技有限公司 Method for segmenting multi-modal MR brain image based on 3D convolutional neural network
CN112308835A (en) * 2020-10-27 2021-02-02 南京工业大学 Intracranial hemorrhage segmentation method integrating dense connection and attention mechanism
KR20210028321A (en) * 2019-09-03 2021-03-12 고려대학교 산학협력단 Apparatus for cortical atrophy disease hierarchical diagnosis based on brain thickness information
CN112686903A (en) * 2020-12-07 2021-04-20 嘉兴职业技术学院 Improved high-resolution remote sensing image semantic segmentation model
CN112741613A (en) * 2021-01-13 2021-05-04 武汉大学 Resting human brain default network function and structure coupling analysis method
CN113743484A (en) * 2021-08-20 2021-12-03 宁夏大学 Image classification method and system based on space and channel attention mechanism

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689536B (en) * 2019-09-30 2023-07-18 深圳大学 Brain grey matter and white matter tracking method and device based on multi-mode magnetic resonance image
CN112700434A (en) * 2021-01-12 2021-04-23 苏州斯玛维科技有限公司 Medical image classification method and classification device thereof


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Seyed Raein Hashemi et al.; Exclusive Independent Probability Estimation using Deep 3D Fully Convolutional DenseNets: Application to Isointense Infant Brain MRI Segmentation; Proceedings of Machine Learning Research; 2019-12-31; 260-274 *
Salma Al-qazzaz et al.; Image classification-based brain tumour tissue segmentation; Multimedia Tools and Applications; 2020-09-05; 993-1008 *
Wu Zongsheng et al.; Road Scene Understanding Based on Deep Convolutional Neural Network; Computer Engineering and Applications; 2017; Vol. 53, No. 22 *
Fan Zhongyue; Research on Key Technologies of Image Semantic Segmentation; China Master's Theses Full-text Database, Information Science and Technology; 2019-09-15; I138-1162 *
Wu Zongsheng et al.; Road Scene Understanding Based on Deep Convolutional Neural Network; Computer Engineering and Applications; 2017-11-15; Vol. 53, No. 22; Section 3.1 *


Similar Documents

Publication Publication Date Title
CN114298234B (en) Brain medical image classification method and device, computer equipment and storage medium
CN110321920B (en) Image classification method and device, computer readable storage medium and computer equipment
CN108784655B (en) Rapid assessment and outcome analysis for medical patients
US10499857B1 (en) Medical protocol change in real-time imaging
US10984905B2 (en) Artificial intelligence for physiological quantification in medical imaging
KR101908680B1 (en) A method and apparatus for machine learning based on weakly supervised learning
JP5650647B2 (en) System and method for fusing clinical and image features for computer-aided diagnosis
US20220122253A1 (en) Information processing device, program, trained model, diagnostic support device, learning device, and prediction model generation method
US20210151187A1 (en) Data-Driven Estimation of Predictive Digital Twin Models from Medical Data
JP6885517B1 (en) Diagnostic support device and model generation device
EP3712849B1 (en) Automated uncertainty estimation of lesion segmentation
CN110866909A (en) Training method of image generation network, image prediction method and computer equipment
CN115719328A (en) Method, system and apparatus for quantifying uncertainty in medical image evaluation
Chen et al. Contrastive learning for prediction of Alzheimer's disease using brain 18f-fdg pet
US20230154164A1 (en) Self-supervised learning for artificial intelligence-based systems for medical imaging analysis
US20230253116A1 (en) Estimating patient risk of cytokine storm using biomarkers
US11580390B2 (en) Data processing apparatus and method
US20230238141A1 (en) Subpopulation based patient risk prediction using graph attention networks
CN115619810B (en) Prostate partition segmentation method, system and equipment
US20230097895A1 (en) Multimodal analysis of imaging and clinical data for personalized therapy
US20230259820A1 (en) Smart selection to prioritize data collection and annotation based on clinical metrics
EP4198997A1 (en) A computer implemented method, a method and a system
CN111209946B (en) Three-dimensional image processing method, image processing model training method and medium
CN112037237B (en) Image processing method, image processing device, computer equipment and medium
US20240104719A1 (en) Multi-task learning framework for fully automated assessment of coronary arteries in angiography images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant