CN113205150A - Multi-temporal fusion-based multi-task classification system and method


Info

Publication number
CN113205150A
CN113205150A
Authority
CN
China
Prior art keywords
task
tasks
classification
time phase
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110558478.6A
Other languages
Chinese (zh)
Other versions
CN113205150B (en)
Inventor
栗伟
王珊珊
刘佳叶
冯朝路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202110558478.6A priority Critical patent/CN113205150B/en
Publication of CN113205150A publication Critical patent/CN113205150A/en
Application granted granted Critical
Publication of CN113205150B publication Critical patent/CN113205150B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/2431 Pattern recognition; classification techniques relating to the number of classes; multiple classes
    • G06F18/253 Pattern recognition; fusion techniques of extracted features
    • G06N3/045 Neural networks; architectures; combinations of networks
    • G06N3/08 Neural networks; learning methods
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data; electronic clinical trials or questionnaires
    • G16H30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G16H50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems


Abstract

The invention provides a multi-task classification system and method based on multi-temporal fusion, relating to the technical field of deep learning. The invention improves the prediction accuracy of multiple clinical indicators by dynamically updating the weight of each task. Mapping MRI radiomic features onto the relevant clinical indicators can improve the predictive performance of multiple tasks. Jointly predicting the indicators through multi-task learning while exploiting their radiomic correlations is important for optimal tumor treatment, and clinical decisions can also be made according to the multiple clinical indicators.

Description

Multi-temporal fusion-based multi-task classification system and method
Technical Field
The invention relates to the technical field of deep learning, in particular to a multi-task classification system and method based on multi-temporal fusion.
Background
Breast cancer is the most common cancer among women worldwide and has become the second leading cause of cancer-related death. Fusing multiple time phases of a medical image provides a holistic view for medical image classification. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) performs dynamic scanning on the basis of a fast imaging sequence and can provide higher sensitivity for breast cancer prediction.
Dynamic contrast enhanced magnetic resonance imaging contains eight phases of data. The first phase is an image acquired before contrast agent injection, and the second through eighth phases are images acquired at intervals after contrast agent injection. Clinically, the first and third phases are the most meaningful, so the features of the first and third phases of the DCE image are fused. Feature fusion, the combination of features from different phases, plays an important role in modern network architectures; fusion is usually performed by operations such as simple weighting and concatenation.
The fusion operation can be performed at the input layer, at intermediate layers, or at the decision layer. Early fusion mostly takes the form of image fusion, while intermediate-layer fusion mixes features of the intermediate layers; machine-learning research shows that intermediate-layer fusion performs better. However, intermediate-layer features of different phases may have mismatched or unordered spatial dimensions, which makes their fusion a challenge.
After medical image fusion, multiple indicators are jointly predicted by a multi-task learning mechanism, which is valuable when the number of samples is limited. Most machine learning models are trained independently, i.e., single-task learning: a model is designed for a specific task and then iteratively optimized. A complex task can be decomposed into several subtasks, each modeled separately; however, modeling the subtasks in isolation easily ignores the associations and constraints between them, so the overall result is not ideal. The goal of multi-task learning is to leverage the knowledge contained in different tasks to improve the generalization performance of multiple related tasks. Compared with single-task learning, multi-task learning presents two major challenges in the training phase: first, how to share network parameters, either by hard parameter sharing or by soft parameter sharing; second, how to balance the learning processes of the different tasks. Given n mutually related learning tasks, multi-task learning aims to extract relevant information from all tasks, mine as much information as possible, share the common information among tasks, and balance each task, so that the classification performance of the tasks as a whole can be improved. Multiple tasks share one model, so the memory footprint is small, and related tasks complement each other by sharing information, improving each other's performance.
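The hard-parameter-sharing scheme mentioned above can be sketched as one shared trunk with a small head per task. The following is a minimal numpy illustration; the layer sizes, task names, and activation are illustrative assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared trunk whose weights serve every task,
# plus a task-specific head per clinical indicator (toy sizes).
W_shared = rng.normal(size=(16, 8))          # shared trunk weights
heads = {
    "lymph_node": rng.normal(size=(8, 2)),   # binary task -> 2 logits
    "grading":    rng.normal(size=(8, 2)),
    "ki67":       rng.normal(size=(8, 2)),
    "molecular":  rng.normal(size=(8, 4)),   # four-class task -> 4 logits
}

def forward(x):
    h = np.tanh(x @ W_shared)                # shared representation
    return {task: h @ W for task, W in heads.items()}

outputs = forward(rng.normal(size=(5, 16)))  # batch of 5 samples
print({task: o.shape for task, o in outputs.items()})
```

Because all four heads read the same representation, gradients from every task update the shared trunk, which is how the tasks share information.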
Disclosure of Invention
In order to solve the technical problems, the invention provides a multi-task classification system and method based on multi-temporal fusion, which perform multi-task joint prediction by using multi-temporal image fusion and endow corresponding weights to different tasks.
A multi-temporal fusion based multi-task classification system, comprising: the device comprises an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises convolutional layers, pooling layers and a fully connected layer, wherein the convolutional layers analyze the input image in depth to obtain features with a higher degree of abstraction; the pooling layers further reduce the number of feature nodes, thereby reducing the number of parameters in the whole neural network; and the fully connected layer performs the multi-task classification;
the feature fusion module is used for splicing the features;
and the output module is used for outputting and displaying the task classification result.
In another aspect, a multi-task classification method based on multi-temporal fusion is implemented by the above multi-temporal fusion-based multi-task classification system and comprises the following steps:
step 1: preprocessing the first time phase and the third time phase of the dynamic contrast enhanced (DCE) image of the magnetic resonance scan, and converting the data into a uniform JPG format;
step 2: performing feature fusion on the DCE images of the first time phase and the third time phase in a feature fusion module, respectively adding, multiplying and taking the maximum value of the features of the first time phase and the third time phase, and then splicing the results obtained by adding, multiplying and taking the maximum value;
step 3: calculating task weight information according to the relevance and the learning capacity of each task among the multiple tasks, and updating the weight information during training, wherein the weights change with the number of iterations as the tasks run, and the learning progress of different tasks is balanced through a weight adjustment strategy;
the multiple tasks specifically include whether lymph nodes (LymphNode) have metastasized, histological grading (histopathological grading), molecular typing (molecular typing) and Ki67 (a tumor marker, an immunohistochemical marker) expression level; lymph node metastasis has two cases, where metastasis is represented by 1 and no metastasis by 0; histological grade is divided into greater than 2 and less than or equal to 2, where greater than 2 is represented by 1 and less than or equal to 2 by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2-overexpressing types, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%) and low (less than 14%), with a high level represented by 1 and a low level by 0;
step 3.1: a Pearson correlation coefficient method is adopted to measure linear correlation between tasks, and the calculation formula is as follows:
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]}   (1)
ρ_XY = Cov(X, Y) / √(D(X)·D(Y))   (2)
in formula (1), E{[X - E(X)][Y - E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_XY is the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
step 3.2: designing a rebalancing weighting strategy based on the task correlation coefficients: for task n, the sum of the positively correlated task correlation coefficients is recorded as r⁺, the sum of the negatively correlated task correlation coefficients is recorded as r⁻, and the importance of the different tasks is weighed by calculating the ratio r of the two and adding r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and ki67 expression levels, with the sum of the positively correlated task correlation coefficients being:
r⁺ = Σ_{i=1}^{z} ρ_i   (3)
in the formula, z is the number of positive tasks, and i is the ith task;
the negative correlation task is whether the lymph node is metastasized, and the sum of the negative correlation task correlation coefficients is as follows:
r⁻ = Σ_{i=1}^{f} |ρ_i|   (4)
in the formula, f is the number of negative tasks;
the importance of different tasks is weighed by calculating the ratio of equations (3), (4) and adding it as a weighting factor to the training:
r = r⁺ / r⁻   (5)
and 4, step 4: training a designed neural network, wherein the learning rate is set to be 0.001 in network training, the size of each training batch of samples batch _ size is set to be 30, a binary cross entropy loss function is used for lymph node metastasis classification tasks, histological classification tasks and ki-67 high-low level expression classification tasks, a multi-classification cross entropy loss function is used for molecular classification tasks, an Adam optimizer is used in all task training, and 60 epoch networks are trained to tend to converge.
step 5: testing the test set, which contains data of the first and third time phases of the DCE image, using the convolutional neural network, and outputting classification predictions for histological grade, molecular typing, lymph node metastasis, and Ki-67 expression level.
The invention has the following beneficial effects:
the technical scheme provides a multi-task classification system and method based on multi-temporal fusion, and the prediction accuracy of a plurality of clinical indexes is improved by dynamically updating the weight of each task. Mapping MRI radiology onto relevant clinical indicators can improve the predictive performance of multiple tasks. Combines the relevance of the radiology, is important for the optimal treatment of the tumor through the multitask learning joint prediction index, can carry out clinical decision according to a plurality of clinical indexes, realizes the information complementation of medical images and joint prediction of the plurality of clinical indexes, improves the prediction accuracy of the clinical indexes,
drawings
FIG. 1 is a schematic diagram of a feature fusion module according to an embodiment of the invention;
fig. 2 is an overall flow chart of an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
A multi-time phase fusion-based multi-task classification system comprises an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of the DCE image input by a user and then inputting the data into the neural network;
the neural network module comprises convolutional layers, pooling layers and a fully connected layer, wherein the convolutional layers analyze the input image in depth to obtain features with a higher degree of abstraction; the pooling layers further reduce the number of feature nodes, thereby reducing the number of parameters in the whole neural network; and the fully connected layer performs the multi-task classification;
the feature fusion module is used for splicing the features; FIG. 1 shows the feature fusion module, wherein F+ denotes the addition of the features of the two time phases, F* denotes the multiplication of the features of the two time phases, Fm denotes taking the element-wise maximum of the features of the two time phases, concat denotes concatenating the fused addition, multiplication and maximum results, and FC denotes a fully connected layer.
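The fusion of Fig. 1 can be sketched in a few lines. The following is a minimal numpy illustration of the add/multiply/maximum/concat operations; the inputs are toy vectors, whereas in the described system they would be the convolutional features of the two phases:

```python
import numpy as np

def fuse_phases(f1, f3):
    """Fuse phase-1 and phase-3 feature vectors as in Fig. 1:
    element-wise addition (F+), multiplication (F*), and maximum (Fm),
    followed by concatenation (concat) before the FC layer."""
    f_add = f1 + f3
    f_mul = f1 * f3
    f_max = np.maximum(f1, f3)
    return np.concatenate([f_add, f_mul, f_max], axis=-1)

f1 = np.array([1.0, 2.0, 3.0])   # toy phase-1 features
f3 = np.array([4.0, 1.0, 5.0])   # toy phase-3 features
fused = fuse_phases(f1, f3)
print(fused.tolist())  # [5.0, 3.0, 8.0, 4.0, 2.0, 15.0, 4.0, 2.0, 5.0]
```

Note that the concatenated vector is three times the per-phase feature dimension, so the FC layer's input width triples accordingly.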
And the output module is used for outputting and displaying the task classification result.
In another aspect, a multi-task classification method based on multi-temporal fusion is implemented by the multi-temporal fusion-based multi-task classification system, as shown in fig. 2, and comprises the following steps:
step 1: preprocessing the first time phase and the third time phase of the dynamic contrast enhanced (DCE) image of the magnetic resonance scan and converting the data into a uniform JPG format; then extracting features of the first and third phases of the DCE image through the convolutional layers, using a resnet18 pre-trained model as the backbone network.
Step 2: performing feature fusion on the DCE images of the first time phase and the third time phase in a feature fusion module, respectively adding, multiplying and taking the maximum value of the features of the first time phase and the third time phase, and then splicing the results obtained by adding, multiplying and taking the maximum value;
step 3: calculating task weight information according to the relevance and the learning capacity of each task among the multiple tasks, and updating the weight information during training, wherein the weights change with the number of iterations as the tasks run, and the learning progress of different tasks is balanced through a weight adjustment strategy;
the multiple tasks specifically include whether lymph nodes (LymphNode) have metastasized, histological grading (histopathological grading), molecular typing (molecular typing) and Ki67 (a tumor marker, an immunohistochemical marker) expression level; lymph node metastasis has two cases, where metastasis is represented by 1 and no metastasis by 0; histological grade is divided into greater than 2 and less than or equal to 2, where greater than 2 is represented by 1 and less than or equal to 2 by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2-overexpressing types, represented by 0, 1, 2 and 3 respectively; Ki67 expression level is divided into high (greater than 14%) and low (less than 14%), with a high level represented by 1 and a low level by 0;
step 3.1: a Pearson correlation coefficient method is adopted to measure linear correlation between tasks, and the calculation formula is as follows:
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]}   (1)
ρ_XY = Cov(X, Y) / √(D(X)·D(Y))   (2)
in formula (1), E{[X - E(X)][Y - E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_XY is the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
in this example, a correlation coefficient matrix was obtained, in which LymphNode denotes whether a lymph node has metastasized, Histologic Grading denotes histological grade, Molecular Typing denotes molecular typing, and Ki-67 denotes Ki-67 expression level, as shown in Table 1. As can be seen from the table, a certain positive correlation exists among histological grading, molecular typing and Ki67, while lymph node metastasis is negatively correlated with the other three tasks. Therefore, designing the task weights using the correlations between tasks can further improve the accuracy of the multi-task joint prediction.
TABLE 1 matrix table of correlation coefficients
(The correlation coefficient values appear as an image in the original publication and cannot be reproduced here.)
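A correlation matrix like Table 1 can be computed directly from the task label vectors, since np.corrcoef implements the same Pearson definition as equations (1) and (2). The label vectors below are fabricated for illustration only; they are not the data behind Table 1:

```python
import numpy as np

# Hypothetical label vectors for 8 patients (illustrative only):
tasks = {
    "LymphNode": [0, 1, 1, 1, 0, 0, 1, 0],
    "Grading":   [1, 0, 1, 1, 0, 1, 0, 1],
    "Molecular": [2, 0, 3, 1, 0, 2, 1, 3],
    "Ki67":      [1, 0, 1, 1, 0, 1, 0, 1],
}
labels = np.array(list(tasks.values()), dtype=float)

# Pearson correlation matrix between tasks, per equations (1)-(2):
# rho_XY = Cov(X, Y) / sqrt(D(X) * D(Y)), one row/column per task.
corr = np.corrcoef(labels)
print(np.round(corr, 3))
```

The resulting 4x4 matrix is symmetric with ones on the diagonal; the off-diagonal signs are what the weighting strategy of step 3.2 consumes.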
The tasks complement one another, so the information contained in the medical images can be fully mined even when samples are scarce; during training, all tasks optimize the network simultaneously according to their weight values. Different tasks have different learning abilities and reach different degrees of learning, and may be at different learning stages; for example, task A may be poorly trained while task B is close to convergence. Weighting each loss with a fixed coefficient can restrict the learning of some task at some stage. A better weighting scheme for multi-task learning is therefore dynamic, adjusted according to the learning ability of each task, its learning stage, the difficulty of learning, and even the learning effect.
Step 3.2: by designing a rebalancing weighting strategy for the task correlation coefficients, the sum of the positively correlated task correlation coefficients in the task n is recorded as
Figure BDA0003078031010000061
The sum of the inversely related task correlation coefficients is recorded as
Figure BDA0003078031010000062
The importance degree of different tasks is weighed by calculating the ratio r of the two and adding the ratio r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and ki67 expression levels, with the sum of the positively correlated task correlation coefficients being:
r⁺ = Σ_{i=1}^{z} ρ_i   (3)
in the formula, z is the number of positive tasks, and i is the ith task;
the negative correlation task is whether the lymph node is metastasized, and the sum of the negative correlation task correlation coefficients is as follows:
r⁻ = Σ_{i=1}^{f} |ρ_i|   (4)
in the formula, f is the number of negative tasks;
the importance of different tasks is weighed by calculating the ratio of equations (3), (4) and adding it as a weighting factor to the training:
r = r⁺ / r⁻   (5)
During training, the weight of some tasks may become significantly greater than that of others, which can lead to poor training of the remaining tasks; therefore, different weight values are allocated to different tasks and each task's weight is dynamically adjusted, so that every task trains as well as possible.
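The weighting of equations (3) and (4) and their ratio r reduce to a few lines. The following is a sketch under the stated assumption that the negative sum uses absolute values (the original formulas are rendered as images, so this detail is inferred); the index arguments and toy values are illustrative:

```python
import numpy as np

def rebalance_weight(corr_row, positive_idx, negative_idx):
    """Weight for one task: ratio of its summed positive correlations
    to its summed absolute negative correlations."""
    r_pos = corr_row[positive_idx].sum()          # equation (3)
    r_neg = np.abs(corr_row[negative_idx]).sum()  # equation (4)
    return r_pos / r_neg                          # ratio r

# toy correlation row for one task against the other three tasks
row = np.array([1.0, 0.4, 0.3, -0.2])  # self, two positive, one negative
r = rebalance_weight(row, positive_idx=[1, 2], negative_idx=[3])
print(round(r, 2))  # 3.5
```

Recomputing these weights as training progresses is what makes the scheme dynamic rather than a fixed per-loss coefficient.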
step 4: training the designed neural network, wherein the learning rate is set to 0.001 and the number of samples per training batch (batch_size) is set to 30; a binary cross entropy loss function is used for the lymph node metastasis, histological grading and Ki-67 expression-level classification tasks, a multi-class cross entropy loss function is used for the molecular typing task, an Adam optimizer is used for all tasks, and the network tends to converge after training for 60 epochs.
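The training step combines binary and multi-class cross entropy under the per-task weights. Below is a minimal numpy sketch of the combined loss only; the linear weighted sum and the helper names are assumptions (the text does not spell out how the weights enter the total loss), and a real implementation would use a framework's loss functions with the Adam optimizer:

```python
import numpy as np

def binary_ce(p, y):
    """Binary cross entropy for one predicted probability p and label y."""
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def multiclass_ce(probs, y):
    """Multi-class cross entropy for one probability vector and class y."""
    eps = 1e-12
    return -np.log(probs[y] + eps)

def total_loss(preds, targets, weights):
    """Weighted sum of the four task losses: binary CE for lymph node,
    grading and Ki-67; multi-class CE for molecular typing."""
    l_node  = binary_ce(preds["node"],  targets["node"])
    l_grade = binary_ce(preds["grade"], targets["grade"])
    l_ki67  = binary_ce(preds["ki67"],  targets["ki67"])
    l_mol   = multiclass_ce(preds["mol"], targets["mol"])
    return (weights["node"]  * l_node  + weights["grade"] * l_grade +
            weights["mol"]   * l_mol   + weights["ki67"]  * l_ki67)

preds   = {"node": 0.8, "grade": 0.6, "ki67": 0.9,
           "mol": np.array([0.1, 0.6, 0.2, 0.1])}
targets = {"node": 1, "grade": 1, "ki67": 1, "mol": 1}
weights = {"node": 0.5, "grade": 1.2, "ki67": 1.1, "mol": 1.3}  # toy weights
loss = total_loss(preds, targets, weights)
print(loss > 0)  # True
```

With dynamic weighting, the `weights` dictionary would be refreshed from the rebalancing ratio as the iterations proceed.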
step 5: testing the test set, which contains data of the first and third time phases of the DCE image, using the convolutional neural network, and outputting classification predictions for histological grade, molecular typing, lymph node metastasis, and Ki-67 expression level.
The multi-temporal fusion method provided in this embodiment jointly predicts breast cancer histological grading, lymph node metastasis, molecular typing and Ki67 expression. Histological grading, lymph node metastasis and Ki67 expression level are two-class tasks, and molecular typing is a four-class task; different weight values are set for each task according to the correlations between tasks. A multi-task joint prediction experiment was performed on a small DCE dataset of 674 image samples with five-fold cross-validation; the evaluation metric accuracy (Acc) was 0.597, and the Recall and F1-score metrics are shown in Table 2 below:
TABLE 2 weighted test results for each task
Task                     Recall    F1-score
Lymph node metastasis    0.386     0.516
Histological grading     0.754     0.654
Molecular typing         0.450     0.53
Ki67 expression          0.900     0.900
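The Recall and F1-score in Table 2 follow the standard definitions. A quick first-principles sketch with toy predictions (not the experiment's data):

```python
def recall_f1(y_true, y_pred, positive=1):
    """Recall and F1 for one binary task, computed from TP/FP/FN counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, f1

y_true = [1, 0, 1, 1, 0, 1]   # toy ground-truth labels
y_pred = [1, 0, 0, 1, 1, 1]   # toy predictions
rec, f1 = recall_f1(y_true, y_pred)
print(rec, f1)  # 0.75 0.75
```

For the four-class molecular typing task, such per-class scores would typically be averaged (e.g. macro-averaged), though the text does not state which averaging was used.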
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (5)

1. A multi-time phase fusion-based multi-task classification system is characterized by comprising an input module, a neural network module, a feature fusion module and an output module;
the input module is used for receiving data of a first time phase and a third time phase of a Dynamic Contrast Enhanced (DCE) image of the nuclear magnetic resonance image input by a user and then inputting the data into the neural network module;
the neural network module comprises a convolutional layer, a pooling layer and a full-connection layer, wherein the convolutional layer analyzes an input image; the pooling layer further reduces the feature nodes, and the full connection layer is used for carrying out multi-task classification;
the feature fusion module splices the features;
and the output module is used for outputting and displaying the task classification result.
2. A multi-temporal fusion based multi-task classification method, which is implemented based on the multi-temporal fusion based multi-task classification system of claim 1, and is characterized by specifically comprising the following steps:
step 1: preprocessing a first time phase and a third time phase of a Dynamic Contrast Enhanced (DCE) image of the nuclear magnetic resonance image, and converting data into a unified jpg format;
step 2: performing feature fusion on the DCE images of the first time phase and the third time phase in a feature fusion module, respectively adding, multiplying and taking the maximum value of the features of the first time phase and the third time phase, and then splicing the results obtained by adding, multiplying and taking the maximum value;
step 3: calculating task weight information according to the relevance and the learning capacity of each task among the multiple tasks, and updating the weight information during training, wherein the weights change with the number of iterations as the tasks run, and the learning progress of different tasks is balanced through a weight adjustment strategy;
step 4: training the neural network;
step 5: testing the test set, which contains data of the first and third time phases of the DCE image, using the convolutional neural network, and outputting classification predictions for histological grade, molecular typing, lymph node metastasis and Ki-67 expression level.
3. The multi-temporal fusion-based multi-task classification method as claimed in claim 2, wherein the multiple tasks in step 3 specifically include whether lymph nodes (LymphNode) have metastasized, histological grading (histologic grading), molecular typing (molecular typing) and Ki67 (a tumor marker, an immunohistochemical marker) expression level; lymph node metastasis has two cases, where metastasis is represented by 1 and no metastasis by 0; histological grade is divided into greater than 2 and less than or equal to 2, where greater than 2 is represented by 1 and less than or equal to 2 by 0; molecular typing is divided into luminal A, luminal B, triple-negative and HER2-overexpressing types, represented by 0, 1, 2 and 3 respectively; and Ki67 expression level is divided into high (greater than 14%) and low (less than 14%), with a high level represented by 1 and a low level by 0.
4. The multi-temporal fusion based multi-task classification method according to claim 2, wherein the step 3 specifically comprises the following steps:
step 3.1: a Pearson correlation coefficient method is adopted to measure linear correlation between tasks, and the calculation formula is as follows:
Cov(X, Y) = E{[X - E(X)][Y - E(Y)]}   (1)
ρ_XY = Cov(X, Y) / √(D(X)·D(Y))   (2)
in formula (1), E{[X - E(X)][Y - E(Y)]} is the covariance of the random variables X and Y, denoted Cov(X, Y); in formula (2), ρ_XY is the correlation coefficient of the random variables X and Y, D(X) and D(Y) are the variances of X and Y respectively, and E(X) and E(Y) are the expectations of X and Y respectively;
step 3.2: designing a rebalancing weighting strategy based on the task correlation coefficients: for task n, the sum of the positively correlated task correlation coefficients is recorded as r⁺, the sum of the negatively correlated task correlation coefficients is recorded as r⁻, and the importance of the different tasks is weighed by calculating the ratio r of the two and adding r as a weighting coefficient into training;
the positively correlated tasks include histological grading, molecular typing and ki67 expression levels, with the sum of the positively correlated task correlation coefficients being:
r⁺ = Σ_{i=1}^{z} ρ_i   (3)
in the formula, z is the number of positive tasks, and i is the ith task;
the negative correlation task is whether the lymph node is metastasized, and the sum of the negative correlation task correlation coefficients is as follows:
r⁻ = Σ_{i=1}^{f} |ρ_i|   (4)
in the formula, f is the number of negative tasks;
the importance of different tasks is weighed by calculating the ratio of equations (3), (4) and adding it as a weighting factor to the training:
Figure FDA0003078031000000026
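As a hedged illustration (not the patented implementation), the correlation measurement of step 3.1 and the rebalancing ratio of step 3.2 can be sketched in plain Python; the function names are assumptions, and the sketch assumes the negatively correlated coefficients enter the ratio by absolute value so that the weight is positive:

```python
import math

def pearson(x, y):
    """Correlation coefficient: Cov(X, Y) / (sqrt(D(X)) * sqrt(D(Y)))."""
    n = len(x)
    ex, ey = sum(x) / n, sum(y) / n
    cov = sum((xi - ex) * (yi - ey) for xi, yi in zip(x, y)) / n  # covariance
    dx = sum((xi - ex) ** 2 for xi in x) / n                      # variance D(X)
    dy = sum((yi - ey) ** 2 for yi in y) / n                      # variance D(Y)
    return cov / (math.sqrt(dx) * math.sqrt(dy))

def rebalance_ratio(correlations):
    """Ratio r: sum of positive coefficients over sum of |negative| coefficients."""
    pos = sum(c for c in correlations if c > 0)   # positively correlated tasks
    neg = sum(-c for c in correlations if c < 0)  # negatively correlated tasks
    return pos / neg
```

With hypothetical coefficients of 0.6 and 0.4 for two positively correlated tasks and -0.5 for one negatively correlated task, the ratio would be (0.6 + 0.4) / 0.5 = 2, which would then weight the task losses during training.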
5. The multi-temporal fusion-based multi-task classification method according to claim 2, wherein in step 4 the neural network is trained with the learning rate set to 0.001 and the training batch size (batch_size) set to 30; a binary cross-entropy loss function is used for the lymph node metastasis classification task, the histological grading task and the Ki67 high/low expression classification task; a multi-class cross-entropy loss function is used for the molecular subtyping task; an Adam optimizer is used in the training of all tasks; and the network converges after training for 60 epochs.
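The two loss forms named in claim 5 can be written out as a plain-Python sketch; this illustrates the standard cross-entropy definitions rather than the patented training code, and the constants simply restate the claimed hyperparameters:

```python
import math

LEARNING_RATE = 0.001  # per claim 5
BATCH_SIZE = 30
EPOCHS = 60

def binary_cross_entropy(p, y):
    """BCE for the lymph-node, grading and Ki67 binary tasks (y in {0, 1},
    p the predicted probability of class 1)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def categorical_cross_entropy(probs, label):
    """Multi-class CE for the four-way molecular subtyping task
    (probs a probability vector, label the true class index)."""
    return -math.log(probs[label])
```

In a framework such as PyTorch these would correspond to the built-in binary and multi-class cross-entropy losses combined with the Adam optimizer; the sketch above only shows the per-sample quantities being minimized.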
CN202110558478.6A 2021-05-21 2021-05-21 Multi-temporal fusion-based multi-task classification system and method Active CN113205150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110558478.6A CN113205150B (en) 2021-05-21 2021-05-21 Multi-temporal fusion-based multi-task classification system and method


Publications (2)

Publication Number Publication Date
CN113205150A true CN113205150A (en) 2021-08-03
CN113205150B CN113205150B (en) 2024-03-01

Family

ID=77022905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110558478.6A Active CN113205150B (en) 2021-05-21 2021-05-21 Multi-temporal fusion-based multi-task classification system and method

Country Status (1)

Country Link
CN (1) CN113205150B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102247144A (en) * 2011-04-18 2011-11-23 大连理工大学 Time intensity characteristic-based computer aided method for diagnosing benign and malignant breast lesions
CN106529601A (en) * 2016-11-16 2017-03-22 东北大学 Image classification prediction method based on multi-task learning in sparse subspace
CN107680088A (en) * 2017-09-30 2018-02-09 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
CN110728674A (en) * 2019-10-21 2020-01-24 清华大学 Image processing method and device, electronic equipment and computer readable storage medium
US20200074341A1 (en) * 2018-08-30 2020-03-05 NEC Laboratories Europe GmbH Method and system for scalable multi-task learning with convex clustering
CN111488914A (en) * 2020-03-17 2020-08-04 哈尔滨工业大学 Alzheimer disease classification and prediction system based on multitask learning
CN112687327A (en) * 2020-12-28 2021-04-20 中山依数科技有限公司 Cancer survival analysis system based on multitask and multi-mode
CN112785605A (en) * 2021-01-26 2021-05-11 西安电子科技大学 Multi-temporal CT image liver tumor segmentation method based on semantic migration


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MASOOD BANAIE et al.: "Spatiotemporal features of DCE-MRI for breast cancer diagnosis", COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, vol. 155, 22 December 2017 (2017-12-22), pages 153 - 164 *
LIU MING: "Application of DW-MRI radiomics in predicting breast cancer 21-gene assay results and molecular subtypes", China Doctoral Dissertations Full-text Database, Medicine & Health Sciences, 15 January 2019 (2019-01-15), pages 072 - 463 *
ZHANG BAILIN et al.: "Stability analysis of radiomics features of non-small cell lung cancer based on four-dimensional computed tomography images", Chinese Journal of Medical Imaging, vol. 28, no. 07, 25 July 2020 (2020-07-25), pages 550 - 553 *


Similar Documents

Publication Publication Date Title
CN111882040B (en) Convolutional neural network compression method based on channel number search
CN110210560A (en) Increment training method, classification method and the device of sorter network, equipment and medium
CN113516230B (en) Automatic convolutional neural network pruning method based on average rank importance ordering
US20230298172A1 (en) Systems and methods for image classification using visual dictionaries
CN114841257B (en) Small sample target detection method based on self-supervision comparison constraint
CN113298230B (en) Prediction method based on unbalanced data set generated against network
CN112115967B (en) Image increment learning method based on data protection
CN109787821B (en) Intelligent prediction method for large-scale mobile client traffic consumption
CN113743353B (en) Cervical cell classification method for space, channel and scale attention fusion learning
CN108509840A (en) The hyperspectral remote sensing image band selection method of Optimization Mechanism is remembered based on quantum
CN113205150B (en) Multi-time fusion-based multi-task classification system and method
CN116884597A (en) Pathological image breast cancer molecular typing method and system based on self-supervision pre-training and multi-example learning
CN116757979A (en) Embryo image fusion method, device, electronic equipment and storage medium
CN115131628A (en) Mammary gland image classification method and equipment based on typing auxiliary information
Yi et al. An Effective Approach for determining Rock Discontinuity sets using a modified Whale optimization Algorithm
Wen et al. Short-term load forecasting based on feature mining and deep learning of big data of user electricity consumption
Xie et al. Using SVM and PSO-NN Models to Predict Breast Cancer
Zhang Artificial immune optimization system solving constrained omni-optimization
CN113341461B (en) Earthquake velocity prediction method, earthquake velocity prediction device and server
Yun et al. [Retracted] Quality Evaluation and Satisfaction Analysis of Online Learning of College Students Based on Artificial Intelligence
Ma et al. An equidistance index intuitionistic fuzzy c-means clustering algorithm based on local density and membership degree boundary
CN113222044B (en) Cervical fluid-based cell classification method based on ternary attention and scale correlation fusion
CN117216550A (en) Classification model training method, device, equipment, medium and program product
Santos et al. Glomerulosclerosis Identification Using a Modified Dense Convolutional Network
CN118199652A (en) Test set compression method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant