CN113205150A - Multi-temporal fusion-based multi-task classification system and method - Google Patents
- Publication number
- CN113205150A (application CN202110558478.6A)
- Authority
- CN
- China
- Prior art keywords
- task
- tasks
- classification
- time phase
- fusion
- Prior art date
- Legal status: Granted (the status is an assumption, not a legal conclusion)
Classifications
- G06F18/2431 — Pattern recognition; classification techniques; multiple classes
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G16H10/20 — ICT for patient-related medical or healthcare data; electronic clinical trials or questionnaires
- G16H30/20 — ICT for handling medical images, e.g. DICOM, HL7 or PACS
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The invention provides a multi-task classification system and method based on multi-temporal fusion, in the technical field of deep learning. By dynamically updating the weight of each task, the invention improves the prediction accuracy of multiple clinical indicators. Mapping MRI radiomic features onto correlated clinical indicators improves the predictive performance of the multiple tasks. Exploiting the correlations among the radiomic features and jointly predicting the indicators through multi-task learning is important for optimal tumor treatment, and the multiple predicted clinical indicators can also support clinical decision-making.
Description
Technical Field
The invention relates to the technical field of deep learning, and in particular to a multi-task classification system and method based on multi-temporal fusion.
Background
Breast cancer is the most common cancer among women worldwide and has become the second leading cause of cancer-related death. Fusing multiple phases of a medical image provides a holistic view for medical-image classification. In particular, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) performs dynamic scanning on top of a fast imaging sequence and offers high sensitivity for breast-cancer prediction.
DCE-MRI contains eight phases of data. The first phase is an image acquired before contrast-agent injection; the second through eighth phases are images acquired at intervals after injection. Clinically, the first and third phases are the most informative, so the features of the first and third phases of the DCE image are fused. Feature fusion, the combination of features from different phases, plays an important role in modern network architectures; it is usually performed by operations such as simple weighting and concatenation.
Fusion can be performed at the input layer, at intermediate layers, or at the decision layer. Early fusion is mostly image fusion, while intermediate-layer fusion mixes features between intermediate layers; machine-learning research shows that intermediate-layer fusion works better. However, intermediate-layer features from different phases may have mismatched or unordered spatial dimensions, which makes their fusion a challenge.
After medical-image fusion, and given a limited number of samples, multiple indicators are predicted jointly with a multi-task learning mechanism. Most machine-learning models are trained independently, i.e., single-task learning: a model is designed for a specific task and then iteratively optimized. A complex task can also be decomposed into several sub-tasks, each modeled separately; however, modeling the sub-tasks in isolation easily ignores the associations and constraints between them, so the overall result is not ideal. The goal of multi-task learning is to leverage the knowledge contained in different tasks to improve the generalization performance of multiple related tasks. Compared with single-task learning, multi-task learning poses two major challenges in the training phase: first, how to share network parameters (hard parameter sharing versus soft parameter sharing); second, how to balance the learning processes of the different tasks. Given n related learning tasks, multi-task learning aims to extract relevant information from all tasks, mine as much information as possible, share the common information among tasks, and balance each task, thereby improving the overall classification performance. Multiple tasks can share one model, which occupies little memory, and related tasks complement each other by sharing information, improving each other's performance.
Disclosure of Invention
To solve the above technical problems, the invention provides a multi-task classification system and method based on multi-temporal fusion, which perform joint multi-task prediction using multi-temporal image fusion and assign corresponding weights to the different tasks.
A multi-temporal fusion-based multi-task classification system comprises: an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises convolutional layers, pooling layers and a fully connected layer; the convolutional layers analyze the input image in depth to obtain features with a higher degree of abstraction; the pooling layers reduce the number of feature nodes, thereby reducing the number of parameters in the whole neural network; the fully connected layer performs the multi-task classification;
the feature fusion module is used for fusing and concatenating the features;
and the output module is used for outputting and displaying the task classification result.
In another aspect, a multi-temporal fusion-based multi-task classification method, implemented by the above multi-temporal fusion-based multi-task classification system, comprises the following steps:
step 1: preprocess the first and third phases of the dynamic contrast-enhanced (DCE) magnetic resonance images, converting the data into a unified JPG format;
step 2: perform feature fusion on the first-phase and third-phase DCE images in the feature fusion module: element-wise add, multiply, and take the maximum of the first-phase and third-phase features, then concatenate the three results;
step 3: calculate the weight information of the tasks according to the correlation and learning capacity of each task; the weights change with the number of iterations as the tasks run, and a weight-adjustment strategy balances the learning progress of the different tasks;
the tasks specifically include lymph-node metastasis (LymphNode), histological grading, molecular typing, and Ki67 (a tumor marker and immunohistochemical marker) expression level; lymph-node metastasis has two cases: metastasis, represented by 1, and no metastasis, represented by 0; histological grade is divided into greater than 2, represented by 1, and less than or equal to 2, represented by 0; molecular typing is divided into the luminal A, luminal B, triple-negative and HER2-overexpression subtypes, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%), represented by 1, and low (less than 14%), represented by 0;
step 3.1: the Pearson correlation coefficient method is adopted to measure the linear correlation between tasks; the calculation formulas are:
Cov(X, Y) = E{[X − E(X)][Y − E(Y)]} (1)
ρ_XY = Cov(X, Y) / (√D(X) · √D(Y)) (2)
in equation (1), Cov(X, Y) is the covariance of the random variables X and Y; in equation (2), ρ_XY is the correlation coefficient of X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
step 3.2: a rebalancing weighting strategy is designed over the task correlation coefficients; for task n, the sum of the positively correlated task correlation coefficients is recorded as ρ_pos(n), and the sum of the negatively correlated task correlation coefficients as ρ_neg(n); the ratio r of the two is calculated and added to training as a weighting coefficient to weigh the importance of the different tasks;
the positively correlated tasks include histological grading, molecular typing and Ki67 expression level, and the sum of the positively correlated task correlation coefficients is:
ρ_pos(n) = Σ_{i=1}^{z} ρ_i (3)
in the formula, z is the number of positively correlated tasks and i indexes the i-th task;
the negatively correlated task is lymph-node metastasis, and the sum of the magnitudes of the negatively correlated task correlation coefficients is:
ρ_neg(n) = Σ_{j=1}^{f} |ρ_j| (4)
in the formula, f is the number of negatively correlated tasks;
the importance of the different tasks is weighed by calculating the ratio of equations (3) and (4) and adding it to training as a weighting coefficient:
r = ρ_pos(n) / ρ_neg(n) (5)
and 4, step 4: training a designed neural network, wherein the learning rate is set to be 0.001 in network training, the size of each training batch of samples batch _ size is set to be 30, a binary cross entropy loss function is used for lymph node metastasis classification tasks, histological classification tasks and ki-67 high-low level expression classification tasks, a multi-classification cross entropy loss function is used for molecular classification tasks, an Adam optimizer is used in all task training, and 60 epoch networks are trained to tend to converge.
step 5: test with the convolutional neural network on a test set containing the first-phase and third-phase DCE images, outputting classification predictions for histological grade, molecular typing, lymph-node metastasis, and Ki-67 expression level.
The invention has the following beneficial effects:
the technical scheme provides a multi-task classification system and method based on multi-temporal fusion, and the prediction accuracy of a plurality of clinical indexes is improved by dynamically updating the weight of each task. Mapping MRI radiology onto relevant clinical indicators can improve the predictive performance of multiple tasks. Combines the relevance of the radiology, is important for the optimal treatment of the tumor through the multitask learning joint prediction index, can carry out clinical decision according to a plurality of clinical indexes, realizes the information complementation of medical images and joint prediction of the plurality of clinical indexes, improves the prediction accuracy of the clinical indexes,
drawings
FIG. 1 is a schematic diagram of a feature fusion module according to an embodiment of the invention;
fig. 2 is an overall flow chart of an embodiment of the present invention.
Detailed Description
The following is a detailed description of embodiments of the invention with reference to the accompanying drawings and examples. The examples illustrate the invention but do not limit its scope.
A multi-temporal fusion-based multi-task classification system comprises an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises convolutional layers, pooling layers and a fully connected layer; the convolutional layers analyze the input image in depth to obtain features with a higher degree of abstraction; the pooling layers reduce the number of feature nodes, thereby reducing the number of parameters in the whole neural network; the fully connected layer performs the multi-task classification;
the feature fusion module is used for fusing and concatenating the features. FIG. 1 shows the feature fusion module, where F+ denotes the element-wise addition of the two phases' features, F denotes their element-wise multiplication, Fm denotes their element-wise maximum, concat denotes the concatenation that fuses the addition, multiplication and maximum results, and FC denotes a fully connected layer.
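As a concrete illustration of the fusion just described, the element-wise addition, multiplication and maximum followed by concatenation can be sketched in pure Python on plain feature vectors; this is a sketch of the fusion logic only, the function name is ours, and the actual module operates on convolutional feature maps followed by the FC layer.

```python
# Illustrative sketch of the feature-fusion module of FIG. 1:
# F+ (element-wise add), element-wise multiply, Fm (element-wise max),
# then concat. Works on flat feature vectors for clarity.

def fuse_phases(feat_phase1, feat_phase3):
    """Fuse first-phase and third-phase feature vectors."""
    added = [a + b for a, b in zip(feat_phase1, feat_phase3)]       # F+
    multiplied = [a * b for a, b in zip(feat_phase1, feat_phase3)]  # multiply
    maxed = [max(a, b) for a, b in zip(feat_phase1, feat_phase3)]   # Fm
    return added + multiplied + maxed                               # concat

fused = fuse_phases([1.0, 2.0], [3.0, 0.5])
# fused == [4.0, 2.5, 3.0, 1.0, 3.0, 2.0]
```

The fused vector is three times the length of either input and would then be passed to the fully connected (FC) layer for multi-task classification.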
And the output module is used for outputting and displaying the task classification result.
On the other hand, a multi-temporal fusion based multi-task classification method is implemented by the multi-temporal fusion based multi-task classification system, as shown in fig. 2, and includes the following steps:
step 1: preprocess the first and third phases of the dynamic contrast-enhanced (DCE) magnetic resonance images, converting the data into a unified JPG format, and extract features from the first and third phases of the DCE image with the convolutional layers of a ResNet-18 pre-trained model used as the backbone network.
step 2: perform feature fusion on the first-phase and third-phase DCE images in the feature fusion module: element-wise add, multiply, and take the maximum of the first-phase and third-phase features, then concatenate the three results;
step 3: calculate the weight information of the tasks according to the correlation and learning capacity of each task; the weights change with the number of iterations as the tasks run, and a weight-adjustment strategy balances the learning progress of the different tasks;
the tasks specifically include lymph-node metastasis (LymphNode), histological grading, molecular typing, and Ki67 (a tumor marker and immunohistochemical marker) expression level; lymph-node metastasis has two cases: metastasis, represented by 1, and no metastasis, represented by 0; histological grade is divided into greater than 2, represented by 1, and less than or equal to 2, represented by 0; molecular typing is divided into the luminal A, luminal B, triple-negative and HER2-overexpression subtypes, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%), represented by 1, and low (less than 14%), represented by 0;
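For reference, the label encodings just listed can be collected in a small lookup table; this is an illustrative sketch whose key names are ours (the subtype rendered "trigonal" in the machine-translated text is written here under the standard name, triple-negative).

```python
# Illustrative label encodings for the four tasks, as stated above.
TASK_LABELS = {
    "lymph_node_metastasis": {"metastasis": 1, "no_metastasis": 0},
    "histological_grading": {"grade_gt_2": 1, "grade_le_2": 0},
    "molecular_typing": {"luminal_A": 0, "luminal_B": 1,
                         "triple_negative": 2, "HER2_overexpression": 3},
    "ki67_expression": {"high": 1, "low": 0},  # high: > 14%
}
```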
step 3.1: the Pearson correlation coefficient method is adopted to measure the linear correlation between tasks; the calculation formulas are:
Cov(X, Y) = E{[X − E(X)][Y − E(Y)]} (1)
ρ_XY = Cov(X, Y) / (√D(X) · √D(Y)) (2)
in equation (1), Cov(X, Y) is the covariance of the random variables X and Y; in equation (2), ρ_XY is the correlation coefficient of X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
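The covariance and coefficient above are the standard Pearson correlation; a minimal pure-Python sketch of the computation (the function name is ours):

```python
import math

def pearson(x, y):
    """Pearson correlation: Cov(X, Y) = E{[X - E(X)][Y - E(Y)]},
    rho_XY = Cov(X, Y) / (sqrt(D(X)) * sqrt(D(Y)))."""
    n = len(x)
    ex, ey = sum(x) / n, sum(y) / n
    cov = sum((a - ex) * (b - ey) for a, b in zip(x, y)) / n  # Cov(X, Y)
    dx = sum((a - ex) ** 2 for a in x) / n                    # D(X)
    dy = sum((b - ey) ** 2 for b in y) / n                    # D(Y)
    return cov / (math.sqrt(dx) * math.sqrt(dy))

# Identical binary label sequences are perfectly positively correlated:
rho = pearson([0, 1, 1, 0], [0, 1, 1, 0])  # rho == 1.0
```

Applied to the binary and categorical task labels of the training set, this yields the pairwise entries of the correlation matrix discussed below.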
in this example, a correlation coefficient matrix table was obtained, in which LymphNode represents whether or not lymph node was metastasized, Hisologic Grading represents histological grade, molecular typing represents molecular typing and ki-67 represents ki-67 expression level, as shown in Table 1. As can be seen from the table: certain positive correlation exists among histology grading, molecular typing and Ki67, and whether lymph node metastasis is negatively correlated with other three tasks. Therefore, the weight design of the tasks is carried out by utilizing the correlation among the tasks, and the accuracy of the multi-task joint prediction can be further improved.
TABLE 1 Correlation-coefficient matrix
The tasks supplement each other: with few samples, the information contained in the medical images can be fully mined, and during training all tasks optimize the network simultaneously according to their weights. Different tasks have different learning capacities and degrees of learning, and may be at different learning stages; for example, task A may still be poorly trained while task B is close to convergence. Fixing the weight of each loss can limit task learning at some stage. A better weighting scheme for multi-task learning is therefore dynamic, adjusted according to the learning capacity of each task, its learning stage, its difficulty, and even its learning effect.
Step 3.2: by designing a rebalancing weighting strategy for the task correlation coefficients, the sum of the positively correlated task correlation coefficients in the task n is recorded asThe sum of the inversely related task correlation coefficients is recorded asThe importance degree of different tasks is weighed by calculating the ratio r of the two and adding the ratio r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and ki67 expression levels, with the sum of the positively correlated task correlation coefficients being:
in the formula, z is the number of positive tasks, and i is the ith task;
the negative correlation task is whether the lymph node is metastasized, and the sum of the negative correlation task correlation coefficients is as follows:
in the formula, f is the number of negative tasks;
the importance of different tasks is weighed by calculating the ratio of equations (3), (4) and adding it as a weighting factor to the training:
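Since the formulas of step 3.2 are not reproduced in this text, the sketch below encodes one natural reading of the strategy: a task's weight r is the sum of its positive correlation coefficients divided by the summed magnitude of its negative ones. This reading is an assumption, and the function name is ours.

```python
def task_weight(corrs):
    """Rebalancing weight r for one task, given its correlation
    coefficients with the other tasks (one reading of step 3.2)."""
    pos = sum(c for c in corrs if c > 0)       # sum over positive tasks
    neg = sum(abs(c) for c in corrs if c < 0)  # summed magnitude, negative tasks
    return pos / neg if neg else pos           # ratio r used as the weight

# E.g. a task positively correlated with two tasks (0.4, 0.3) and
# negatively with one (-0.35):
r = task_weight([0.4, 0.3, -0.35])  # r == 2.0 (up to float rounding)
```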
During training, the weight of some tasks may be significantly greater than that of others, which can degrade the training of the under-weighted tasks; assigning different weight values to different tasks and dynamically adjusting each task's weight makes the training of every task as good as possible.
step 4: train the designed neural network; the learning rate is set to 0.001 and the size of each training batch (batch_size) to 30; a binary cross-entropy loss function is used for the lymph-node-metastasis, histological-grading and Ki-67 expression-level classification tasks, and a multi-class cross-entropy loss function for the molecular-typing task; the Adam optimizer is used in all task training, and the network tends to converge after 60 epochs.
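The training objective in step 4 combines the per-task losses under the task weights. The following pure-Python sketch shows a binary cross-entropy term and the weighted sum; it is an illustration only (the actual system trains the CNN with Adam, learning rate 0.001 and batch_size 30), and the function names are ours.

```python
import math

def binary_cross_entropy(p, y):
    """Binary cross-entropy for one predicted probability p and label y."""
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def weighted_total_loss(task_losses, task_weights):
    """Weighted sum of per-task losses; the weights come from step 3
    and are updated over the training iterations."""
    return sum(w * l for w, l in zip(task_weights, task_losses))

bce = binary_cross_entropy(0.9, 1)  # small loss for a confident correct call
total = weighted_total_loss([bce, 0.5, 0.7, 1.2], [2.0, 1.0, 1.0, 0.5])
```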
step 5: test with the convolutional neural network on a test set containing the first-phase and third-phase DCE images, outputting classification predictions for histological grade, molecular typing, lymph-node metastasis, and Ki-67 expression level.
The multi-modal fusion method of this embodiment jointly predicts breast-cancer histological grading, lymph-node metastasis, molecular typing and Ki67: histological grading, lymph-node metastasis and Ki67 expression level are binary classifications, molecular typing is a four-class classification, and each task is weighted according to the inter-task correlations. A joint multi-task prediction experiment was performed on a small DCE dataset of 674 image samples with five-fold validation; the evaluation metrics were accuracy (Acc), which reached 0.597, together with Recall and F1-score, with results as shown in Table 2:
TABLE 2 Weighted test results for each task
Task | Recall | F1-score
Lymph node metastasis | 0.386 | 0.516
Histological grading | 0.754 | 0.654
Molecular typing | 0.450 | 0.53
Ki67 expression | 0.900 | 0.900
Finally, it should be noted that the above examples only illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions as defined in the appended claims.
Claims (5)
1. A multi-temporal fusion-based multi-task classification system, characterized by comprising an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of a Dynamic Contrast Enhanced (DCE) image of the nuclear magnetic resonance image input by a user and then inputting the data into the neural network module;
the neural network module comprises a convolutional layer, a pooling layer and a full-connection layer, wherein the convolutional layer analyzes an input image; the pooling layer further reduces the feature nodes, and the full connection layer is used for carrying out multi-task classification;
the feature fusion module splices the features;
and the output module is used for outputting and displaying the task classification result.
2. A multi-temporal fusion based multi-task classification method, which is implemented based on the multi-temporal fusion based multi-task classification system of claim 1, and is characterized by specifically comprising the following steps:
step 1: preprocessing a first time phase and a third time phase of a Dynamic Contrast Enhanced (DCE) image of the nuclear magnetic resonance image, and converting data into a unified jpg format;
step 2: performing feature fusion on the DCE images of the first time phase and the third time phase in a feature fusion module, respectively adding, multiplying and taking the maximum value of the features of the first time phase and the third time phase, and then splicing the results obtained by adding, multiplying and taking the maximum value;
and step 3: calculating the weight information of the tasks according to the relevance and the learning capacity of each task in the multiple tasks, changing the weight information of the tasks, wherein the weight changes along with the change of iteration times in the running process of the tasks, and balancing the learning progress among different tasks through a weight adjusting strategy;
and 4, step 4: training a neural network;
and 5: the test set, which contains data for the first and third phases of the DCE image, was tested using a convolutional neural network, outputting a classification prediction of histological grade, molecular typing, whether lymph nodes are metastasizing and Ki-67 high and low levels.
3. The multi-temporal fusion-based multi-task classification method as claimed in claim 2, wherein the tasks in step 3 specifically include lymph-node metastasis (LymphNode), histological grading, molecular typing and Ki67 (a tumor marker and immunohistochemical marker) expression level; lymph-node metastasis has two cases: metastasis, represented by 1, and no metastasis, represented by 0; histological grade is divided into greater than 2, represented by 1, and less than or equal to 2, represented by 0; molecular typing is divided into the luminal A, luminal B, triple-negative and HER2-overexpression subtypes, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%), represented by 1, and low (less than 14%), represented by 0.
4. The multi-temporal fusion-based multi-task classification method according to claim 2, wherein step 3 specifically comprises the following steps:
step 3.1: measuring the linear correlation between tasks with the Pearson correlation coefficient, calculated as:

Cov(X, Y) = E{[X − E(X)][Y − E(Y)]}    (1)

ρ_XY = Cov(X, Y) / √(D(X)·D(Y))    (2)

where, in formula (1), E{[X − E(X)][Y − E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_XY is the correlation coefficient of X and Y, D(X) and D(Y) are the variances of X and Y, and E(X) and E(Y) are their expectations;
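Formulas (1) and (2) can be checked numerically (a minimal sketch; the toy binary label vectors are invented for illustration):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: Cov(X, Y) / sqrt(D(X) * D(Y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))  # Cov(X, Y), formula (1)
    return cov / np.sqrt(x.var() * y.var())         # rho_XY, formula (2)

# Binary label vectors for two hypothetical tasks over 6 patients
grade = np.array([1, 0, 1, 1, 0, 0])
ki67  = np.array([1, 0, 1, 0, 0, 1])
r = pearson(grade, ki67)
print(round(r, 4))  # 0.3333

# Cross-check against NumPy's built-in correlation matrix
assert np.isclose(r, np.corrcoef(grade, ki67)[0, 1])
```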
step 3.2: by designing a rebalancing weighting strategy for the task correlation coefficient, the task correlation coefficient positively correlated in the task nThe sum is recorded asThe sum of the inversely related task correlation coefficients is recorded asThe importance degree of different tasks is weighed by calculating the ratio r of the two and adding the ratio r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and ki67 expression levels, with the sum of the positively correlated task correlation coefficients being:
in the formula, z is the number of positive tasks, and i is the ith task;
the negative correlation task is whether the lymph node is metastasized, and the sum of the negative correlation task correlation coefficients is as follows:
in the formula, f is the number of negative tasks;
the importance of different tasks is weighed by calculating the ratio of equations (3), (4) and adding it as a weighting factor to the training:
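The rebalancing ratio of formulas (3)–(5) can be illustrated as follows (the correlation values are invented for illustration; real values would come from formula (2) applied to the training labels):

```python
# Hypothetical correlation coefficient of each task: grade, subtype and
# ki67 correlate positively, lymph-node metastasis negatively
# (values are illustrative only, not from the patent).
task_corr = {"grade": 0.45, "subtype": 0.30, "ki67": 0.25, "lymph_node": -0.20}

pos_sum = sum(c for c in task_corr.values() if c > 0)   # formula (3)
neg_sum = sum(-c for c in task_corr.values() if c < 0)  # formula (4)
r = pos_sum / neg_sum                                   # formula (5)
print(round(pos_sum, 2), round(neg_sum, 2), round(r, 2))
```

The ratio r up-weights the group of tasks whose labels agree with each other, balancing it against the negatively correlated task during joint training.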
5. The multi-temporal fusion-based multi-task classification method according to claim 2, wherein in step 4 the neural network is trained with a learning rate of 0.001 and a batch size (batch_size) of 30; a binary cross-entropy loss function is used for the lymph node metastasis, histological grade and Ki-67 expression-level classification tasks, and a multi-class cross-entropy loss function is used for the molecular subtype task; the Adam optimizer is used for all tasks, and the network converges after 60 epochs of training.
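The two loss functions named in claim 5 can be sketched in NumPy (a minimal illustration of binary and multi-class cross-entropy; the probabilities below are toy values, and a real implementation would use a deep-learning framework's built-in losses):

```python
import numpy as np

def binary_cross_entropy(p, y):
    """BCE loss, used for the binary tasks (lymph node, grade, Ki-67)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def categorical_cross_entropy(probs, y):
    """Multi-class CE loss, used for the 4-class molecular subtype task."""
    probs = np.clip(probs, 1e-7, 1.0)
    # Pick each sample's predicted probability for its true class
    return float(np.mean(-np.log(probs[np.arange(len(y)), y])))

# Toy predictions for 2 samples each
bce = binary_cross_entropy(np.array([0.9, 0.2]), np.array([1, 0]))
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.6, 0.2, 0.1]])
cce = categorical_cross_entropy(probs, np.array([0, 1]))
print(round(bce, 4), round(cce, 4))
```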
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110558478.6A CN113205150B (en) | 2021-05-21 | 2021-05-21 | Multi-time fusion-based multi-task classification system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205150A true CN113205150A (en) | 2021-08-03 |
CN113205150B CN113205150B (en) | 2024-03-01 |
Family
ID=77022905
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110558478.6A Active CN113205150B (en) | 2021-05-21 | 2021-05-21 | Multi-time fusion-based multi-task classification system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205150B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102247144A (en) * | 2011-04-18 | 2011-11-23 | 大连理工大学 | Time intensity characteristic-based computer aided method for diagnosing benign and malignant breast lesions |
CN106529601A (en) * | 2016-11-16 | 2017-03-22 | 东北大学 | Image classification prediction method based on multi-task learning in sparse subspace |
CN107680088A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing medical image |
CN110728674A (en) * | 2019-10-21 | 2020-01-24 | 清华大学 | Image processing method and device, electronic equipment and computer readable storage medium |
US20200074341A1 (en) * | 2018-08-30 | 2020-03-05 | NEC Laboratories Europe GmbH | Method and system for scalable multi-task learning with convex clustering |
CN111488914A (en) * | 2020-03-17 | 2020-08-04 | 哈尔滨工业大学 | Alzheimer disease classification and prediction system based on multitask learning |
CN112687327A (en) * | 2020-12-28 | 2021-04-20 | 中山依数科技有限公司 | Cancer survival analysis system based on multitask and multi-mode |
CN112785605A (en) * | 2021-01-26 | 2021-05-11 | 西安电子科技大学 | Multi-temporal CT image liver tumor segmentation method based on semantic migration |
Non-Patent Citations (3)
Title |
---|
MASOOD BANAIE et al.: "Spatiotemporal features of DCE-MRI for breast cancer diagnosis", COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, vol. 155, 22 December 2017 (2017-12-22), pages 153 - 164 * |
LIU MING: "Application of DW-MRI radiomics in predicting breast cancer 21-gene assay results and molecular subtype", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences, 15 January 2019 (2019-01-15), pages 072 - 463 * |
ZHANG BAILIN et al.: "Stability analysis of radiomics features of non-small cell lung cancer based on four-dimensional computed tomography images", Chinese Journal of Medical Imaging, vol. 28, no. 07, 25 July 2020 (2020-07-25), pages 550 - 553 * |
Also Published As
Publication number | Publication date |
---|---|
CN113205150B (en) | 2024-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111882040B (en) | Convolutional neural network compression method based on channel number search | |
CN110210560A (en) | Increment training method, classification method and the device of sorter network, equipment and medium | |
CN113516230B (en) | Automatic convolutional neural network pruning method based on average rank importance ordering | |
US20230298172A1 (en) | Systems and methods for image classification using visual dictionaries | |
CN114841257B (en) | Small sample target detection method based on self-supervision comparison constraint | |
CN113298230B (en) | Prediction method based on unbalanced data set generated against network | |
CN112115967B (en) | Image increment learning method based on data protection | |
CN109787821B (en) | Intelligent prediction method for large-scale mobile client traffic consumption | |
CN113743353B (en) | Cervical cell classification method for space, channel and scale attention fusion learning | |
CN108509840A (en) | The hyperspectral remote sensing image band selection method of Optimization Mechanism is remembered based on quantum | |
CN113205150B (en) | Multi-time fusion-based multi-task classification system and method | |
CN116884597A (en) | Pathological image breast cancer molecular typing method and system based on self-supervision pre-training and multi-example learning | |
CN116757979A (en) | Embryo image fusion method, device, electronic equipment and storage medium | |
CN115131628A (en) | Mammary gland image classification method and equipment based on typing auxiliary information | |
Yi et al. | An Effective Approach for determining Rock Discontinuity sets using a modified Whale optimization Algorithm | |
Wen et al. | Short-term load forecasting based on feature mining and deep learning of big data of user electricity consumption | |
Xie et al. | Using SVM and PSO-NN Models to Predict Breast Cancer | |
Zhang | Artificial immune optimization system solving constrained omni-optimization | |
CN113341461B (en) | Earthquake velocity prediction method, earthquake velocity prediction device and server | |
Yun et al. | [Retracted] Quality Evaluation and Satisfaction Analysis of Online Learning of College Students Based on Artificial Intelligence | |
Ma et al. | An equidistance index intuitionistic fuzzy c-means clustering algorithm based on local density and membership degree boundary | |
CN113222044B (en) | Cervical fluid-based cell classification method based on ternary attention and scale correlation fusion | |
CN117216550A (en) | Classification model training method, device, equipment, medium and program product | |
Santos et al. | Glomerulosclerosis Identification Using a Modified Dense Convolutional Network | |
CN118199652A (en) | Test set compression method, system, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||