CN113205150B - Multi-time fusion-based multi-task classification system and method - Google Patents
- Publication number
- CN113205150B CN113205150B CN202110558478.6A CN202110558478A CN113205150B CN 113205150 B CN113205150 B CN 113205150B CN 202110558478 A CN202110558478 A CN 202110558478A CN 113205150 B CN113205150 B CN 113205150B
- Authority
- CN
- China
- Prior art keywords
- task
- tasks
- classification
- time phase
- correlation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/20—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Pathology (AREA)
- Databases & Information Systems (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a multi-time fusion-based multi-task classification system and method, and relates to the technical field of deep learning. The invention improves the prediction accuracy of a plurality of clinical indexes by dynamically updating the weight of each task. Mapping MRI radiological features to relevant clinical indices can improve the predictive performance of multiple tasks. By exploiting the correlations among radiological features, learning the tasks jointly and combining the prediction indexes, the method supports the optimal treatment of tumors and allows clinical decisions to be made according to a plurality of clinical indexes.
Description
Technical Field
The invention relates to the technical field of deep learning, in particular to a multi-time fusion-based multi-task classification system and method.
Background
Breast cancer is the most common cancer among women worldwide and has become the second leading cause of cancer-related death. Fusing the phases of a medical image can provide a holistic view for its classification. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a dynamic scan based on a rapid imaging sequence and provides high sensitivity for breast cancer prediction.
Dynamic contrast-enhanced magnetic resonance imaging includes eight phases of data. The first phase is an image acquired without contrast agent injection; the second through eighth phases are images acquired at intervals after contrast agent injection. The first and third phases are the most clinically meaningful, so the first and third phases of the DCE image undergo feature fusion. Feature fusion, the combination of features from different phases, occupies an important place in modern network architectures. Fusion is typically performed by simple weighting and concatenation operations.
The fusion operation can be performed at the input layer, the intermediate layers or the decision layer. Early fusion is mostly expressed as image fusion, while intermediate-layer fusion mixes features from the intermediate layers; machine learning studies show that intermediate-layer fusion performs better. However, intermediate-layer features from different phases may have mismatched or disordered spatial dimensions, which makes fusion challenging.
After medical image fusion, and given a limited number of samples, a multi-task learning mechanism is used to jointly predict several indexes. Most machine learning models are learned independently, i.e., single-task learning: a model is designed for a particular task and then iteratively optimized. A more complex task is broken down into multiple subtasks, each of which is modeled separately. However, when subtasks are modeled independently, the associations and constraints between tasks are easily ignored, so the overall effect on the whole task is not ideal. The goal of multi-task learning is to exploit the knowledge contained in the different tasks to improve the generalization performance of multiple related tasks. Compared with single-task learning, multi-task learning faces two major challenges in the training phase. The first is how to share network parameters, for which there are two approaches: hard parameter sharing and soft parameter sharing. The second is how to balance the learning processes of different tasks. Given n related learning tasks, multi-task learning aims to extract related information from all tasks, mine as much information as possible, share the common information among the tasks, and balance each task, so that the classification performance on the overall set of tasks improves. Multiple tasks can share one model, occupying little memory, and related tasks complement each other by sharing information, improving each other's performance.
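Hard parameter sharing, mentioned above, can be sketched with plain NumPy: one shared weight matrix feeds several task-specific heads. The dimensions and weights below are illustrative only, not those of the patented network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
n_samples, n_features, n_shared = 8, 16, 4

# Shared "trunk": one weight matrix reused by every task (hard parameter sharing).
W_shared = rng.normal(size=(n_features, n_shared))

# Task-specific "heads": separate weights per task.
W_task_a = rng.normal(size=(n_shared, 2))   # e.g. a binary task
W_task_b = rng.normal(size=(n_shared, 4))   # e.g. a four-class task

x = rng.normal(size=(n_samples, n_features))
h = np.maximum(x @ W_shared, 0.0)           # shared representation (ReLU)

logits_a = h @ W_task_a                     # both heads read the SAME h
logits_b = h @ W_task_b

print(logits_a.shape, logits_b.shape)       # (8, 2) (8, 4)
```

Because both heads backpropagate into `W_shared`, information learned for one task regularizes the others, which is the benefit the paragraph above describes.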
Disclosure of Invention
In order to solve the technical problems, the invention provides a multi-task classification system and a multi-task classification method based on multi-time fusion, which utilize multi-time image fusion to perform multi-task joint prediction and assign corresponding weights to different tasks.
A multi-temporal fusion-based multi-task classification system, comprising: the device comprises an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises a convolution layer, a pooling layer and a full-connection layer, wherein the convolution layer carries out deeper analysis on an input image so as to obtain features with higher abstraction degree; the pooling layer further reduces the characteristic nodes, so that the purpose of reducing parameters in the whole neural network is achieved; the full connection layer is used for carrying out multi-task classification;
the feature fusion module is used for splicing the features;
and the output module is used for outputting and displaying the task classification result.
On the other hand, the multi-time fusion-based multi-task classification method is realized by the multi-time fusion-based multi-task classification system, and comprises the following steps of:
step 1: preprocessing a first time phase and a third time phase of a Dynamic Contrast Enhancement (DCE) image of the nuclear magnetic resonance image, and converting data into a unified jpg format;
step 2: the DCE images of the first time phase and the third time phase are subjected to feature fusion in a feature fusion module, features of the first time phase and the third time phase are added, multiplied and the maximum value is taken, and then the results obtained by the addition, the multiplication and the maximum value are spliced;
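The fusion rule of step 2 (element-wise addition, multiplication and maximum, then concatenation of the three results) can be sketched in NumPy; the feature values are illustrative, not real DCE features.

```python
import numpy as np

# Feature maps of the two phases must share a shape for element-wise fusion.
f1 = np.array([[0.2, 0.5, 1.0],
               [0.1, 0.9, 0.3]])   # phase-1 features (illustrative values)
f3 = np.array([[0.4, 0.5, 0.2],
               [0.8, 0.1, 0.3]])   # phase-3 features

f_add = f1 + f3                    # element-wise addition
f_mul = f1 * f3                    # element-wise multiplication
f_max = np.maximum(f1, f3)         # element-wise maximum

# Concatenate the three fusion results along the feature axis.
fused = np.concatenate([f_add, f_mul, f_max], axis=1)
print(fused.shape)                 # (2, 9)
```

Note that concatenation triples the feature dimension, which the fully connected layer that follows must account for.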
step 3: calculating weight information of tasks according to the correlation and learning capacity of each task in the multi-task, changing the weight information of the tasks, wherein the weight changes along with the change of iteration times in the running process of the tasks, and balancing the learning progress among different tasks through a weight adjustment strategy;
the multiple tasks specifically include whether lymph node (LymphNode) metastasis has occurred, histological grading, molecular typing and Ki67 (a tumor marker and immunohistochemical marker) expression level; lymph node metastasis is divided into yes, expressed by 1, and no, expressed by 0; histological grading is divided into greater than 2, expressed by 1, and less than or equal to 2, expressed by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2 over-expression types, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%), expressed by 1, and low (less than 14%), expressed by 0;
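The label scheme above maps directly to integer codes. A minimal sketch follows; the function and key names are hypothetical, and "trinocular type" in the translation is read here as the triple-negative subtype.

```python
# Hypothetical encoders mirroring the label scheme described above.
encode = {
    "lymph_node": {"no": 0, "yes": 1},
    "grading":    {"<=2": 0, ">2": 1},
    "subtype":    {"luminal_A": 0, "luminal_B": 1,
                   "triple_negative": 2, "HER2_overexpression": 3},
    "ki67":       {"low": 0, "high": 1},   # threshold at 14% expression
}

def encode_case(lymph, grade, subtype, ki67_pct):
    """Map one patient's raw findings to the integer labels used in training."""
    return {
        "lymph_node": encode["lymph_node"]["yes" if lymph else "no"],
        "grading": encode["grading"][">2" if grade > 2 else "<=2"],
        "subtype": encode["subtype"][subtype],
        "ki67": encode["ki67"]["high" if ki67_pct > 14 else "low"],
    }

print(encode_case(True, 3, "triple_negative", 20.0))
```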
step 3.1: the linear correlation between tasks is measured by using a Pearson correlation coefficient method, and the calculation formula is as follows:
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} (1)

ρ_xy = Cov(X, Y) / (√D(X) · √D(Y)) (2)

In formula (1), E{[X - E(X)][Y - E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_xy is called the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
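Formulas (1) and (2) translate directly into a few lines of NumPy; the sample label vectors below are illustrative only.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: Cov(X, Y) / sqrt(D(X) * D(Y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # Cov(X, Y), formula (1)
    return cov / np.sqrt(x.var() * y.var())          # rho_xy, formula (2)

# Illustrative binary label vectors for two tasks.
task_a = [1, 0, 1, 1, 0, 1]
task_b = [1, 0, 1, 0, 0, 1]
print(round(pearson(task_a, task_b), 3))   # 0.707
```

The result matches `np.corrcoef(task_a, task_b)[0, 1]`, so the manual form and the library form agree.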
step 3.2: a rebalancing weighting strategy is designed for the task correlation coefficients; the sum of the positively correlated task correlation coefficients in task n is recorded as ρ_n^+, and the sum of the negatively correlated task correlation coefficients is recorded as ρ_n^-; the importance of different tasks is weighed by calculating the ratio r of the two and adding it as a weighting coefficient to the training;
the positively correlated tasks include histological grading, molecular typing and Ki67 expression level, and the sum of the positively correlated task correlation coefficients is:

ρ_n^+ = Σ_{i=1}^{z} ρ_i (3)

where z is the number of positively correlated tasks and i is the i-th task;
the negatively correlated task is whether lymph node metastasis has occurred, and the sum of the negatively correlated task correlation coefficients is:

ρ_n^- = Σ_{i=1}^{f} |ρ_i| (4)

where f is the number of negatively correlated tasks;
the importance of different tasks is weighed by calculating the ratio of formula (3) to formula (4) and adding it as a weighting coefficient to the training:

r = ρ_n^+ / ρ_n^- (5)
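A sketch of the rebalancing ratio follows. The correlation values are illustrative (the original garbled formulas are read here as per-group averages of the coefficients, which is an assumption), and the sign split follows the pattern of Table 1.

```python
# Illustrative pairwise correlation coefficients of one task against the others;
# the split into positive and negative follows Table 1's sign pattern.
rho = {"grading": 0.30, "subtype": 0.25, "ki67": 0.20, "lymph_node": -0.15}

pos = [v for v in rho.values() if v > 0]       # z positively correlated tasks
neg = [abs(v) for v in rho.values() if v < 0]  # f negatively correlated tasks

# Assumed form: per-group averages, then their ratio r as the weighting coefficient.
rho_pos = sum(pos) / len(pos)   # (1/z) * sum of positive rho_i
rho_neg = sum(neg) / len(neg)   # (1/f) * sum of |negative rho_i|
r = rho_pos / rho_neg           # weighting coefficient added to training
print(round(r, 3))              # 1.667
```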
step 4: training the designed neural network; in network training the learning rate is set to 0.001 and the batch size (batch_size) of each training step is set to 30; a binary cross-entropy loss function is used for the lymph node metastasis, histological grading and Ki67 high/low expression classification tasks, and a multi-class cross-entropy loss function is used for the molecular typing task; an Adam optimizer is used for all task training, and the network tends to converge after 60 epochs.
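The mixed loss of step 4 (binary cross-entropy for three tasks, multi-class cross-entropy for molecular typing, each scaled by a task weight) can be sketched in NumPy. All predictions and weights below are toy values, not the trained model's.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy for one task head (p: predicted prob. of class 1)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def cce(probs, y):
    """Multi-class cross-entropy (probs: rows of class probabilities)."""
    probs = np.clip(probs, 1e-7, 1.0)
    return float(-np.mean(np.log(probs[np.arange(len(y)), y])))

# Toy predictions for the three binary tasks and the four-class subtype task.
p_lymph = np.array([0.8, 0.3]); y_lymph = np.array([1, 0])
p_grade = np.array([0.6, 0.7]); y_grade = np.array([1, 1])
p_ki67  = np.array([0.9, 0.2]); y_ki67  = np.array([1, 0])
probs_subtype = np.array([[0.7, 0.1, 0.1, 0.1],
                          [0.1, 0.6, 0.2, 0.1]])
y_subtype = np.array([0, 1])

# Hypothetical per-task weights w; in the method they are updated dynamically.
w = {"lymph": 0.8, "grade": 1.1, "subtype": 1.1, "ki67": 1.0}
total = (w["lymph"] * bce(p_lymph, y_lymph) + w["grade"] * bce(p_grade, y_grade)
         + w["ki67"] * bce(p_ki67, y_ki67) + w["subtype"] * cce(probs_subtype, y_subtype))
print(total > 0)
```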
Step 5: and testing a test set by using a convolutional neural network, wherein the test set comprises data of a first time phase and a third time phase of the DCE image, and outputting a classification prediction result of histological classification, molecular classification, lymph node metastasis and Ki-67 high and low levels.
The beneficial effects of the invention are as follows:
the technical scheme provides a multi-time fusion-based multi-task classification system and a multi-time fusion-based multi-task classification method, and the prediction accuracy of a plurality of clinical indexes is improved by dynamically updating the weight of each task. Mapping MRI radiological to relevant clinical indices may improve the predictive performance of multiple tasks. By combining the correlation of radiology and by multi-task learning and combining the prediction indexes, the method is important for the optimal treatment of tumors, can also carry out clinical decision according to a plurality of clinical indexes, realize the complementation of medical image information and the combined prediction of the plurality of clinical indexes, and improve the prediction accuracy of the clinical indexes,
drawings
FIG. 1 is a schematic diagram of a feature fusion module according to an embodiment of the present invention;
fig. 2 is an overall flow chart of an embodiment of the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
A multi-time fusion-based multi-task classification system comprises an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises a convolution layer, a pooling layer and a full-connection layer, wherein the convolution layer carries out deeper analysis on an input image so as to obtain features with higher abstraction degree; the pooling layer further reduces the characteristic nodes, so that the purpose of reducing parameters in the whole neural network is achieved; the full connection layer is used for carrying out multi-task classification;
the feature fusion module is used for splicing the features; fig. 1 shows the feature fusion module, where F+ represents feature addition of the two time phases, F represents feature multiplication of the two time phases, Fm represents taking the maximum of the two time phases' features, Concat represents concatenating the three parts obtained by addition, multiplication and maximum, and FC represents the fully connected layer.
And the output module is used for outputting and displaying the task classification result.
On the other hand, the multi-time fusion-based multi-task classification method is realized by the multi-time fusion-based multi-task classification system, as shown in fig. 2, and comprises the following steps:
step 1: preprocessing the first and third phases of the dynamic contrast-enhanced (DCE) image of the nuclear magnetic resonance image, and converting the data into a unified JPG format; a pretrained ResNet-18 model is taken as the backbone network, and features of the first and third phases of the DCE image are extracted through its convolution layers.
Step 2: the DCE images of the first time phase and the third time phase are subjected to feature fusion in a feature fusion module, features of the first time phase and the third time phase are added, multiplied and the maximum value is taken, and then the results obtained by the addition, the multiplication and the maximum value are spliced;
step 3: calculating weight information of tasks according to the correlation and learning capacity of each task in the multi-task, changing the weight information of the tasks, wherein the weight changes along with the change of iteration times in the running process of the tasks, and balancing the learning progress among different tasks through a weight adjustment strategy;
the multiple tasks specifically include whether lymph node (LymphNode) metastasis has occurred, histological grading, molecular typing and Ki67 (a tumor marker and immunohistochemical marker) expression level; lymph node metastasis is divided into yes, expressed by 1, and no, expressed by 0; histological grading is divided into greater than 2, expressed by 1, and less than or equal to 2, expressed by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2 over-expression types, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%), expressed by 1, and low (less than 14%), expressed by 0;
step 3.1: the linear correlation between tasks is measured by using a Pearson correlation coefficient method, and the calculation formula is as follows:
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]} (1)

ρ_xy = Cov(X, Y) / (√D(X) · √D(Y)) (2)

In formula (1), E{[X - E(X)][Y - E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_xy is called the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
In this example, the correlation coefficient matrix shown in Table 1 was obtained, in which LymphNode denotes whether lymph node metastasis has occurred, HistologicalGrading denotes histological grading, MolecularTyping denotes molecular typing and Ki-67 denotes Ki-67 expression level. The table shows that histological grading, molecular typing and Ki67 are positively correlated with one another, while lymph node metastasis is negatively correlated with the other three tasks. Therefore, designing task weights using the correlations among tasks can improve the multi-task joint prediction accuracy.
TABLE 1 correlation coefficient matrix table
The tasks supplement each other: with few samples, the information contained in the medical images can still be fully mined, and during training all tasks adjust the network simultaneously according to their weight values. Different tasks have different learning capabilities and learning degrees, and may be in different learning stages; for example, task A may not yet be fully trained while task B is close to convergence. A method that fixes the weight of each loss therefore limits the learning of some task at some stage. A better weighting scheme for multi-task learning should be dynamic, adjusting according to each task's learning ability, learning difficulty, learning stage and even learning effect.
Step 3.2: a rebalancing weighting strategy is designed for the task correlation coefficients; the sum of the positively correlated task correlation coefficients in task n is recorded as ρ_n^+, and the sum of the negatively correlated task correlation coefficients is recorded as ρ_n^-; the importance of different tasks is weighed by calculating the ratio r of the two and adding it as a weighting coefficient to the training;
the positively correlated tasks include histological grading, molecular typing and Ki67 expression level, and the sum of the positively correlated task correlation coefficients is:

ρ_n^+ = Σ_{i=1}^{z} ρ_i (3)

where z is the number of positively correlated tasks and i is the i-th task;
the negatively correlated task is whether lymph node metastasis has occurred, and the sum of the negatively correlated task correlation coefficients is:

ρ_n^- = Σ_{i=1}^{f} |ρ_i| (4)

where f is the number of negatively correlated tasks;
the importance of different tasks is weighed by calculating the ratio of formula (3) to formula (4) and adding it as a weighting coefficient to the training:

r = ρ_n^+ / ρ_n^- (5)
During training, the weights of some tasks may become obviously larger than those of others, which can make some tasks train poorly; different weight values are therefore allocated to different tasks, and the weight of each task is dynamically adjusted so that the training effect of every task is as good as possible.
Step 4: training the designed neural network; in network training the learning rate is set to 0.001 and the batch size (batch_size) of each training step is set to 30; a binary cross-entropy loss function is used for the lymph node metastasis, histological grading and Ki67 high/low expression classification tasks, and a multi-class cross-entropy loss function is used for the molecular typing task; an Adam optimizer is used for all task training, and the network tends to converge after 60 epochs.
Step 5: testing a test set with the convolutional neural network, wherein the test set comprises data of the first and third phases of the DCE image, and outputting classification prediction results for histological grading, molecular typing, lymph node metastasis and Ki67 high/low expression level.
The multi-time fusion method proposed in this embodiment jointly predicts breast cancer histological grading, lymph node metastasis, molecular typing, Ki67 and other indexes; histological grading, lymph node metastasis and Ki67 expression level are binary classifications, molecular typing is a four-class classification, and different weight values are set for each task according to the correlations between the tasks. A multi-task joint prediction experiment was carried out on a small DCE dataset of 674 image samples using five-fold cross-validation; the evaluation index accuracy (Acc) is 0.597, and the evaluation indexes Recall and F1-score are shown in Table 2 below:
TABLE 2 Test results for each weighted task

| Task | Recall | F1-score |
| Lymph node metastasis | 0.386 | 0.516 |
| Histological grading | 0.754 | 0.654 |
| Molecular typing | 0.450 | 0.53 |
| Ki67 expression | 0.900 | 0.900 |
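Recall and F1-score values like those in Table 2 derive from confusion-matrix counts. A minimal sketch follows; the counts are illustrative, since the patent does not publish the underlying confusion matrices.

```python
def recall_f1(tp, fp, fn):
    """Recall and F1 from confusion-matrix counts of the positive class."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, f1

# Illustrative counts only: 45 true positives, 5 false positives, 5 false negatives.
r, f1 = recall_f1(tp=45, fp=5, fn=5)
print(round(r, 2), round(f1, 2))   # 0.9 0.9
```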
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions, which are defined by the scope of the appended claims.
Claims (5)
1. The multi-time fusion-based multi-task classification system is characterized by comprising an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the dynamic contrast enhanced DCE image of the nuclear magnetic resonance image input by a user and then inputting the data into the neural network module;
the neural network module comprises a convolution layer, a pooling layer and a full-connection layer, wherein the convolution layer analyzes an input image; the pooling layer further reduces characteristic nodes, and the full-connection layer is used for performing multi-task classification; the multitasking specifically comprises whether lymph nodes metastasize, histological grading, molecular typing and Ki67 expression level;
the feature fusion module splices the features; the DCE images of the first time phase and the third time phase are subjected to feature fusion in a feature fusion module, features of the first time phase and the third time phase are added, multiplied and the maximum value is taken, and then the results obtained by the addition, the multiplication and the maximum value are spliced;
and the output module is used for outputting and displaying the task classification result.
2. The multi-time fusion-based multi-task classification method is realized based on the multi-time fusion-based multi-task classification system according to claim 1, and is characterized by comprising the following steps:
step 1: preprocessing a first time phase and a third time phase of a dynamic contrast enhanced DCE image of the nuclear magnetic resonance image, and converting data into a unified jpg format;
step 2: the DCE images of the first time phase and the third time phase are subjected to feature fusion in a feature fusion module, features of the first time phase and the third time phase are added, multiplied and the maximum value is taken, and then the results obtained by the addition, the multiplication and the maximum value are spliced;
step 3: calculating weight information of tasks according to the correlation and learning capacity of each task in the multi-task, changing the weight information of the tasks, wherein the weight changes along with the change of iteration times in the running process of the tasks, and balancing the learning progress among different tasks through a weight adjustment strategy;
step 4: training a neural network;
step 5: the test set is tested by using a convolutional neural network, and the test set comprises data of a first time phase and a third time phase of the DCE image, and outputs classification prediction results of histological classification, molecular classification, lymph node metastasis and Ki67 expression level.
3. The multi-temporal fusion-based multi-task classification method according to claim 2, wherein the multiple tasks in step 3 specifically include whether the lymph node (LymphNode) metastasizes, histological grading, molecular typing and Ki67 expression level; lymph node metastasis is divided into yes and no, with yes expressed by 1 and no by 0; histological grade is divided into greater than 2 and less than or equal to 2, with greater than 2 represented by 1 and less than or equal to 2 by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2-overexpressing types, represented by 0, 1, 2 and 3 respectively; Ki67 is a tumor marker (an immunohistochemical marker) whose expression level is divided into high, i.e. greater than 14%, and low, i.e. less than 14%, expressed by 1 and 0 respectively.
4. The multi-temporal fusion-based multi-task classification method according to claim 2, wherein the step 3 specifically comprises the following steps:
step 3.1: the linear correlation between tasks is measured by using a Pearson correlation coefficient method, and the calculation formula is as follows:
Cov(X,Y) = E{[X − E(X)][Y − E(Y)]} (1)

ρ_xy = Cov(X,Y) / √(D(X)·D(Y)) (2)

in formula (1), E{[X − E(X)][Y − E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X,Y); in formula (2), ρ_xy is the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
step 3.2: a rebalancing weighting strategy is designed for the task correlation coefficients; the sum of the positively correlated task correlation coefficients for task n is recorded as ρ_n⁺, and the sum of the negatively correlated task correlation coefficients as ρ_n⁻; the importance of the different tasks is weighed by calculating the ratio r of the two and adding r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and Ki67 expression level, and the sum of their task correlation coefficients is:

ρ_n⁺ = Σ_{i=1}^{z} ρ_i (3)

wherein z is the number of positively correlated tasks and i indexes the ith task;
the negatively correlated task is whether the lymph node metastasizes, and the sum of the negatively correlated task correlation coefficients is:

ρ_n⁻ = Σ_{i=1}^{f} ρ_i (4)

wherein f is the number of negatively correlated tasks;
the importance of the different tasks is weighed by calculating the ratio of formulas (3) and (4) and adding it as a weighting coefficient into training:

r = ρ_n⁺ / ρ_n⁻ (5)
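A minimal sketch of the Pearson coefficient (formulas (1) and (2)) and the rebalancing ratio r: the function names are illustrative, and using the magnitude of the negative sum (so that r stays positive) is an assumption on our part, since the claim only names the ratio:

```python
import numpy as np

def pearson(x, y):
    # rho_xy = Cov(X, Y) / sqrt(D(X) * D(Y)), formulas (1)-(2)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())

def rebalance_ratio(coeffs):
    # Ratio of the summed positive correlation coefficients to the
    # magnitude of the summed negative ones (formulas (3)-(5))
    pos = sum(c for c in coeffs if c > 0)
    neg = sum(abs(c) for c in coeffs if c < 0)
    return pos / neg

# Hypothetical correlations of task n with the other three tasks
coeffs = [0.6, 0.3, -0.45]
r = rebalance_ratio(coeffs)  # (0.6 + 0.3) / 0.45 = 2.0
```

In use, r would be attached as the weighting coefficient of the corresponding task's loss term during training, emphasizing tasks whose positive correlations dominate.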
5. The multi-time fusion-based multi-task classification method according to claim 2, wherein the training of the neural network in step 4 sets the learning rate to 0.001 and the batch sample number batch_size of each training to 30; a binary cross-entropy loss function is used for the lymph node metastasis, histological grading and Ki67 expression level classification tasks, and a multi-class cross-entropy loss function is used for the molecular typing task; all tasks are trained with the Adam optimizer, and after 60 epochs the network tends to converge.
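The per-task losses named in claim 5 can be sketched as follows (binary cross-entropy for the three binary tasks, multi-class cross-entropy for molecular typing); the NumPy helpers, weight values and sample predictions are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def bce(p, y):
    # Binary cross-entropy for the metastasis, histological grade
    # and Ki67 tasks (y in {0, 1}, p = predicted P(y = 1))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def cce(probs, label):
    # Multi-class cross-entropy for the four-class molecular typing
    return -np.log(probs[label])

# One hypothetical sample; w holds the per-task weighting
# coefficients produced by the rebalancing strategy of claim 4
w = [1.0, 1.0, 1.0, 1.0]
total_loss = (w[0] * bce(0.9, 1)                                # metastasis: yes
              + w[1] * bce(0.2, 0)                              # grade <= 2
              + w[2] * bce(0.8, 1)                              # Ki67: high
              + w[3] * cce(np.array([0.1, 0.6, 0.2, 0.1]), 1))  # luminal B
```

In the setting of claim 5, a sum of this form would be minimized with the Adam optimizer at learning rate 0.001 and batch size 30 for about 60 epochs.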
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110558478.6A CN113205150B (en) | 2021-05-21 | 2021-05-21 | Multi-time fusion-based multi-task classification system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110558478.6A CN113205150B (en) | 2021-05-21 | 2021-05-21 | Multi-time fusion-based multi-task classification system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205150A CN113205150A (en) | 2021-08-03 |
CN113205150B true CN113205150B (en) | 2024-03-01 |
Family
ID=77022905
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110558478.6A Active CN113205150B (en) | 2021-05-21 | 2021-05-21 | Multi-time fusion-based multi-task classification system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205150B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102247144A (en) * | 2011-04-18 | 2011-11-23 | 大连理工大学 | Time intensity characteristic-based computer aided method for diagnosing benign and malignant breast lesions |
CN106529601A (en) * | 2016-11-16 | 2017-03-22 | 东北大学 | Image classification prediction method based on multi-task learning in sparse subspace |
CN107680088A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing medical image |
CN110728674A (en) * | 2019-10-21 | 2020-01-24 | 清华大学 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111488914A (en) * | 2020-03-17 | 2020-08-04 | 哈尔滨工业大学 | Alzheimer disease classification and prediction system based on multitask learning |
CN112687327A (en) * | 2020-12-28 | 2021-04-20 | 中山依数科技有限公司 | Cancer survival analysis system based on multitask and multi-mode |
CN112785605A (en) * | 2021-01-26 | 2021-05-11 | 西安电子科技大学 | Multi-temporal CT image liver tumor segmentation method based on semantic migration |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11657322B2 (en) * | 2018-08-30 | 2023-05-23 | Nec Corporation | Method and system for scalable multi-task learning with convex clustering |
Non-Patent Citations (3)
Title |
---|
Application research of DW-MRI radiomics in predicting breast cancer 21-gene assay results and molecular typing; Liu Ming; China Doctoral Dissertations Full-text Database, Medicine and Health Sciences; 20190115; E072-463 *
Spatiotemporal features of DCE-MRI for breast cancer diagnosis;Masood Banaie等;Computer Methods and Programs in Biomedicine;20171222;第155卷;153-164 * |
Stability analysis of radiomics features of non-small cell lung cancer based on four-dimensional computed tomography images; Zhang Bailin et al.; Chinese Journal of Medical Imaging; 20200725; Vol. 28, No. 07; 550-553 *
Also Published As
Publication number | Publication date |
---|---|
CN113205150A (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110210560A (en) | Increment training method, classification method and the device of sorter network, equipment and medium | |
US20230298172A1 (en) | Systems and methods for image classification using visual dictionaries | |
Ho et al. | Deep interactive learning: an efficient labeling approach for deep learning-based osteosarcoma treatment response assessment | |
CN111242233B (en) | Alzheimer disease classification method based on fusion network | |
Zewdie et al. | Classification of breast cancer types, sub-types and grade from histopathological images using deep learning technique | |
US20190311258A1 (en) | Data dependent model initialization | |
Weng et al. | Multimodal multitask representation learning for pathology biobank metadata prediction | |
CN111680575B (en) | Human epithelial cell staining classification device, equipment and storage medium | |
CN113269256A (en) | Construction method and application of Misrc-GAN model | |
CN107169264B (en) | complex disease diagnosis system | |
CN116894985A (en) | Semi-supervised image classification method and semi-supervised image classification system | |
Kissel et al. | Forward stability and model path selection | |
CN114580501A (en) | Bone marrow cell classification method, system, computer device and storage medium | |
CN113205150B (en) | Multi-time fusion-based multi-task classification system and method | |
van Loon et al. | View selection in multi-view stacking: choosing the meta-learner | |
CN117371511A (en) | Training method, device, equipment and storage medium for image classification model | |
CN116884597A (en) | Pathological image breast cancer molecular typing method and system based on self-supervision pre-training and multi-example learning | |
CN116129185A (en) | Fuzzy classification method for tongue-like greasy feature of traditional Chinese medicine based on collaborative updating of data and model | |
CN115131628A (en) | Mammary gland image classification method and equipment based on typing auxiliary information | |
CN114708347A (en) | Lung nodule CT image classification method based on self-adaptive selection dual-source-domain heterogeneous migration learning | |
CN110322055B (en) | Method and system for improving grading stability of data risk model | |
JavadiMoghaddam | A novel framework based on deep learning for COVID-19 diagnosis from X-ray images | |
Prabakaran et al. | Robust hyperparameter tuned deep Elman neural network for the diagnosis of osteosarcoma on histology images | |
Ma et al. | An equidistance index intuitionistic fuzzy c-means clustering algorithm based on local density and membership degree boundary | |
Walsh et al. | Evolution of convolutional neural networks for lymphoma classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||