CN112365980A - Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system - Google Patents

Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system

Info

Publication number
CN112365980A
CN112365980A (application CN202011279154.0A; granted as CN112365980B)
Authority
CN
China
Prior art keywords
brain tumor
treatment
image
target
growth
Prior art date
Legal status
Granted
Application number
CN202011279154.0A
Other languages
Chinese (zh)
Other versions
CN112365980B (en)
Inventor
于泽宽
耿道颖
刘晓
曹鑫
李郁欣
刘军
张军
尹波
刘杰
吴昊
耿岩
胡斌
张海燕
杜鹏
陆逸平
Current Assignee
Huashan Hospital of Fudan University
Original Assignee
Huashan Hospital of Fudan University
Priority date
Filing date
Publication date
Application filed by Huashan Hospital of Fudan University
Priority to CN202011279154.0A
Publication of CN112365980A
Application granted
Publication of CN112365980B
Legal status: Active
Anticipated expiration

Classifications

    • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; calculating health indices; individual health risk assessment
    • G06F 18/253 - Pattern recognition; fusion techniques of extracted features
    • G06N 3/045 - Neural networks; combinations of networks
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 - Neural networks; learning methods
    • G06T 7/0012 - Image analysis; biomedical image inspection
    • G06T 7/11 - Segmentation; region-based segmentation
    • G06T 7/45 - Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/25 - Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/10088 - Image acquisition modality: magnetic resonance imaging [MRI]
    • G06T 2207/20081 - Special algorithmic details: training; learning
    • G06T 2207/20084 - Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20221 - Image combination: image fusion; image merging
    • G06T 2207/30016 - Subject of image: brain
    • G06T 2207/30096 - Subject of image: tumor; lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Probability & Statistics with Applications (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system. The method comprises the following steps: acquiring and preprocessing paired pre- and post-treatment brain tumor multi-target multi-modal MRI data; segmenting the tumor regions of the preprocessed paired data with a 3D U-net convolutional neural network to obtain the paired tumor regions of interest I_ROI^original and I_ROI^later; deriving the growth feature label L = {l_1, l_2, l_3, ..., l_n} from I_ROI^original and I_ROI^later by a radiomics method; extracting features from I_ROI^original and I_ROI^later with a multi-channel convolutional neural network and fusing them by an SE (squeeze-and-excitation) operation to obtain the deep learning features F_DL^original and F_DL^later; inputting F_DL^original into the prediction model to obtain the brain tumor multi-target growth prediction label L̂; inputting F_DL^original and L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image, and inserting this image into the non-brain-tumor region I_background to complete the prospective brain tumor treatment visualization task. The invention has good clinical practicability.

Description

Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system.
Background
Brain tumors are common tumors of the human body, and central nervous system physicians diagnose a patient's condition from information such as brain magnetic resonance imaging (MRI). At present, the gold standard for the accurate diagnosis of brain tumors is still histopathological examination and genetic testing, but tumor puncture biopsy is an invasive procedure with high surgical risk, and it cannot accurately reflect the heterogeneity inside tumor tissue. In the era of digital medicine, computer-aided diagnosis (CAD) technology can fuse the effective information of multi-modal brain MRI images (including T1-weighted imaging (T1), contrast-enhanced T1-weighted imaging (T1C), T2-weighted imaging (T2), fluid-attenuated inversion recovery imaging (FLAIR), magnetic resonance perfusion-weighted imaging (PWI) and the apparent diffusion coefficient (ADC)), pathology and molecular genetics to construct an intelligent, non-invasive, multi-target molecular-imaging auxiliary diagnosis and treatment system for brain tumors, which is of great significance for the clinical diagnosis and precise treatment of brain tumors.
In recent years, the digital informatization of the medical industry has driven the rapid development of CAD technology, which can improve diagnosis and treatment efficiency by segmenting, classifying and making predictions on medical images to customize individualized precision treatment plans. Because convolutional neural networks (CNN) can learn and capture features at different levels, discover relations within the data, and map inputs to outputs (labels or predicted values), they have been widely applied to various classification tasks with good results. As gene and molecular markers play an increasingly important role in tumor diagnosis, researchers have used CNN models for the classification and auxiliary diagnosis of single genotypes and achieved satisfactory results. Subsequently, researchers have studied the accurate classification of brain tumor markers such as isocitrate dehydrogenase (IDH), 1p/19q co-deletion, epidermal growth factor receptor (EGFR), phosphatase and tensin homolog (PTEN), telomerase reverse transcriptase (TERT), tumor suppressor protein p53 (TP53), alpha thalassemia/mental retardation syndrome X-linked (ATRX) and anaplastic lymphoma kinase (ALK) [1], but only for single classification factors. The gene types targeted by these auxiliary diagnosis methods are relatively fixed and cannot fully meet clinical diagnosis and treatment needs; owing to the limitation of multi-task data for brain tumor multi-modal MRI, research on multi-task auxiliary diagnosis based on brain tumor multi-modal MRI is currently uncommon. Simple logistic regression, SVM and neural network models have been used for the auxiliary diagnosis of 4-5 genes, with a highest accuracy of 72% [2].
Although these models contribute to brain tumor auxiliary diagnosis, they have several shortcomings: (a) problems such as insufficient data mean that classification-based auxiliary diagnosis research covering multiple disease categories is scarce and classification results are poor; (b) a multi-target therapy assessment model is lacking; (c) in current brain tumor CAD systems, most auxiliary diagnosis systems focus only on diagnosis, and the auxiliary treatment systems perform only retrospective digital quantitative evaluation and cannot provide a prospective, visual evaluation of treatment efficacy before treatment.
[1] Zhou H, Chang K, Bai HX, et al. Machine learning reveals multimodal MRI patterns predictive of isocitrate dehydrogenase and 1p/19q status in diffuse low- and high-grade gliomas[J]. Journal of Neuro-Oncology, 2019, 142(2): 299-307.
[2] Korfiatis P, Kline TL, Lachance DH, et al. Residual Deep Convolutional Neural Network Predicts MGMT Methylation Status[J]. Journal of Digital Imaging, 2017, 30: 622-628.
Summary of the Invention
In view of the defects in the prior art, the invention aims to provide a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system.
The brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method provided by the invention comprises the following steps:
Step M1: acquiring paired pre- and post-treatment brain tumor multi-target multi-modal MRI data and preprocessing it to obtain standardized paired pre- and post-treatment data I_original and I_later;
Step M2: segmenting the tumor regions of the preprocessed paired brain tumor multi-target multi-modal MRI data I_original and I_later with a 3D U-net convolutional neural network to obtain the paired pre- and post-treatment tumor regions of interest I_ROI^original and I_ROI^later;
Step M3: deriving the growth feature label L = {l_1, l_2, l_3, ..., l_n} of the paired pre- and post-treatment brain tumors from the paired tumor regions of interest I_ROI^original and I_ROI^later by a radiomics method;
Step M4: extracting features from the paired tumor regions of interest I_ROI^original and I_ROI^later with a multi-channel convolutional neural network to obtain feature maps, and applying an SE fusion operation to the feature maps to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later;
Step M5: constructing a long short-term memory (LSTM) network based on multi-task learning from the pre-treatment deep learning feature F_DL^original and the corresponding growth feature label L = {l_1, l_2, l_3, ..., l_n}, training it by dynamically ordering the predicted label sequence, and obtaining the brain tumor multi-target growth prediction model after training; inputting the pre-treatment deep learning feature F_DL^original into the brain tumor multi-target growth prediction model to obtain the brain tumor multi-target growth prediction label L̂;
Step M6: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy; inputting the pre-treatment brain tumor region-of-interest deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image, and inserting this image into the non-brain-tumor region I_background of the pre-treatment multi-target multi-modal MRI data I_original to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
the prospective treatment visualization model comprises a text encoder module, a tumor growth prediction and generation visualization module and a tumor lesion insertion module; a GAN is used to generate the predicted tumor growth images: from the patient's existing multi-modal brain MRI images, the GAN predicts the brain tumor MRI images at the corresponding time after treatment, completing the prospective visualization of the treatment result;
the tumor growth prediction and generation visualization module comprises a generator G and a real/fake image discriminator D_RF.
Preferably, the preprocessing in step M1 comprises desensitization, cleaning, resampling and skull stripping of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data, yielding paired pre- and post-treatment data with uniform resolution and consistent gray-level distribution.
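The resampling and gray-level standardization part of such a preprocessing step can be illustrated with the minimal sketch below. It assumes SimpleITK and .nii.gz inputs, uses an illustrative 256 × 256 × 16 target grid, and leaves desensitization and skull stripping to external tools, so it is not the patented pipeline itself.

```python
# Minimal sketch (not the patented pipeline): resample one modality volume to a
# common grid and normalize intensities. TARGET_SIZE is an illustrative assumption.
import SimpleITK as sitk
import numpy as np

TARGET_SIZE = (256, 256, 16)  # illustrative target grid (x, y, z)

def resample_to_grid(path: str) -> np.ndarray:
    img = sitk.ReadImage(path)
    old_size = np.array(img.GetSize(), dtype=float)
    old_spacing = np.array(img.GetSpacing(), dtype=float)
    new_spacing = old_spacing * old_size / np.array(TARGET_SIZE, dtype=float)

    resampler = sitk.ResampleImageFilter()
    resampler.SetSize(TARGET_SIZE)
    resampler.SetOutputSpacing(tuple(float(s) for s in new_spacing))
    resampler.SetOutputOrigin(img.GetOrigin())
    resampler.SetOutputDirection(img.GetDirection())
    resampler.SetInterpolator(sitk.sitkLinear)
    resampled = resampler.Execute(img)

    arr = sitk.GetArrayFromImage(resampled).astype(np.float32)  # (z, y, x)
    # z-score normalization over non-zero (brain) voxels for comparable gray levels
    brain = arr[arr > 0]
    if brain.size > 0:
        arr = (arr - brain.mean()) / (brain.std() + 1e-8)
    return arr
```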
Preferably, step M3 comprises:
Step M3.1: computing the radiomics features of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Step M3.2: deriving the growth feature label from the radiomics features;
the radiomics features comprise the brain tumor target type, brain tumor volume, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy;
the growth feature label comprises the brain tumor target type, brain tumor volume change, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy.
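As an illustration of the listed radiomics quantities (not the patented implementation), the sketch below computes tumor volume, intensity mean and standard deviation, histogram entropy and one gray-level co-occurrence statistic for a single ROI; the growth label would then be derived by comparing the pre- and post-treatment values. The helper name and binning choices are assumptions.

```python
# Illustrative radiomics-style features for one ROI. `roi` is a 3D intensity array
# and `mask` a binary tumor mask of the same shape.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def radiomics_features(roi: np.ndarray, mask: np.ndarray, voxel_volume_mm3: float = 1.0) -> dict:
    voxels = roi[mask > 0]
    volume = float(mask.sum()) * voxel_volume_mm3
    mean, std = float(voxels.mean()), float(voxels.std())

    # Shannon entropy of the intensity histogram inside the tumor
    hist, _ = np.histogram(voxels, bins=64, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = float(-(p * np.log2(p)).sum())

    # Gray-level co-occurrence matrix on the central slice (2D here, for simplicity)
    mid = roi[:, :, roi.shape[2] // 2]
    q = np.uint8(np.interp(mid, (mid.min(), mid.max()), (0, 63)))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=64, symmetric=True, normed=True)
    contrast = float(graycoprops(glcm, "contrast")[0, 0])

    return {"volume": volume, "mean": mean, "std": std, "entropy": entropy, "glcm_contrast": contrast}
```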
Preferably, step M4 comprises:
Step M4.1: training a multi-channel convolutional neural network with a preset number of convolution-activation modules on the preprocessed paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Step M4.2: extracting the multi-channel convolutional feature maps of the paired tumor regions of interest I_ROI^original and I_ROI^later with the multi-channel convolutional neural network, and obtaining the feature maps F_concat^original and F_concat^later by a concat operation;
Step M4.3: applying an SE (squeeze-and-excitation) operation to F_concat^original and F_concat^later to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later.
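A minimal PyTorch sketch of per-modality convolution branches, the concat operation and SE re-weighting described in steps M4.1 to M4.3 is given below; the channel counts, depths and input size are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: re-weights channels of the concatenated feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, D, H, W)
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                           # suppress uninformative channels

class MultiChannelExtractor(nn.Module):
    """One small conv branch per MRI modality; outputs are concatenated, then SE-fused."""
    def __init__(self, n_modalities: int = 4, feat: int = 32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(1, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
                nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
            for _ in range(n_modalities)
        ])
        self.se = SEBlock(n_modalities * feat)

    def forward(self, x):                      # x: (B, n_modalities, D, H, W)
        maps = [branch(x[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        fused = torch.cat(maps, dim=1)         # concat operation across branches
        return self.se(fused)                  # fused deep learning feature F_DL

f_dl = MultiChannelExtractor()(torch.randn(1, 4, 16, 64, 64))
```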
preferably, the step M5 includes:
step M5.1: a preset plurality of long and short term memory networks are connected in parallel to construct a long and short term memory network based on multi-task learning;
step M5.2: multi-modal MRI (magnetic resonance imaging) image deep learning feature of brain tumor region of interest before treatment through full connection layer
Figure BDA0002780149590000049
Initialization results in
Figure BDA00027801495900000410
Step M5.3: will be provided with
Figure BDA00027801495900000411
Growth characteristics label L ═ { L ═1,l2,l3,...,lnAnd preset start and end marker inputsEntering a long-term and short-term memory network; at time step t, the prediction output from the previous time step t-1
Figure BDA00027801495900000412
Converting into feature vector by word embedding matrix
Figure BDA00027801495900000413
Feature vector sum
Figure BDA00027801495900000414
Inputting the long and short term memory network at time step t, and obtaining the prediction label by a dynamic sequencing method
Figure BDA00027801495900000415
Step M5.4: predicting a preset long-short term memory network to obtain a preset prediction vector pt, and forming a prediction matrix p ═ p1,p2,...,pn]Predicting a new label at each time step by a long-short term memory network based on multi-task learning until loss alignment loss function converges to obtain a trained brain tumor multi-target growth prediction model, and obtaining a predicted brain tumor growth characteristic label according to the trained brain tumor multi-target growth prediction model
Figure BDA00027801495900000416
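The label-sequence prediction loop of steps M5.2 to M5.4 can be sketched in PyTorch as below; the feature dimension, the number of labels and classes, and the omission of the dynamic ordering and start/end tokens are simplifying assumptions.

```python
import torch
import torch.nn as nn

class GrowthLabelPredictor(nn.Module):
    def __init__(self, feat_dim: int, n_labels: int = 6, n_classes: int = 4, hidden: int = 256):
        super().__init__()
        self.init_fc = nn.Linear(feat_dim, hidden)            # x_0 from F_DL^original (step M5.2)
        self.embed = nn.Embedding(n_classes, hidden)          # word-embedding matrix
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)              # prediction vector p_t per label task
        self.n_labels = n_labels

    def forward(self, f_dl: torch.Tensor) -> torch.Tensor:
        b = f_dl.size(0)
        x = self.init_fc(f_dl)
        h = torch.zeros(b, x.size(1), device=f_dl.device)
        c = torch.zeros_like(h)
        preds = []
        for _ in range(self.n_labels):                        # one time step per label
            h, c = self.lstm(x, (h, c))
            p_t = self.head(h)
            preds.append(p_t)
            x = self.embed(p_t.argmax(dim=1))                 # feed back the previous prediction
        return torch.stack(preds, dim=1)                      # prediction matrix p

p = GrowthLabelPredictor(feat_dim=512)(torch.randn(2, 512))   # shape (2, 6, 4)
```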
Preferably, step M6 comprises:
Step M6.1: establishing the prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy;
Step M6.2: inputting the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later and the brain tumor growth feature label L = {l_1, l_2, l_3, ..., l_n} into the prospective treatment visualization model, and training the generator G of the generative adversarial network; the generator G generates the post-treatment brain tumor region-of-interest deep learning feature F_G^later from the pre-treatment feature F_DL^original, and the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G is obtained through the upsampling network of the generator G;
Step M6.3: the real/fake image discriminator D_RF compares the deep learning feature F_G^later of the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G with the deep learning feature F_DL^later of the real post-treatment brain tumor region of interest to complete the adversarial learning, finally yielding the prospective treatment visualization model of brain tumor growth evolution;
Step M6.4: inputting the clinical pre-treatment brain tumor region-of-interest multi-modal MRI deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model of brain tumor growth evolution to obtain several different brain tumor growth evolution images, and selecting from them the brain tumor ROI growth evolution image that meets the preset requirement;
Step M6.5: inserting the selected brain tumor ROI growth evolution image into the non-brain-tumor region I_background of I_original by a Poisson image editing method to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task.
Preferably, step M6.2 comprises:
Step M6.2.1: obtaining the final text encoder feature F_0 from the noise vector z, sampled from a standard normal distribution, and the feature F_L̂ of the predicted brain tumor growth label L̂ extracted by the LSTM network;
Step M6.2.2: concatenating the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later with the final text encoder feature F_0 by a concat operation to form the input F_input of the tumor growth prediction generator G, which generates the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G.
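A minimal sketch of how the generator input of steps M6.2.1 and M6.2.2 could be assembled is shown below: a noise vector z is combined with the predicted-label feature to form F_0, which is concatenated with the ROI deep learning feature and decoded by an upsampling network. All shapes and layer sizes are assumptions for illustration only, not the patented generator.

```python
import torch
import torch.nn as nn

class GrowthGenerator(nn.Module):
    def __init__(self, feat_ch: int = 256, label_dim: int = 32, out_modalities: int = 4):
        super().__init__()
        self.decode = nn.Sequential(                                   # upsampling network of G
            nn.ConvTranspose3d(feat_ch + 2 * label_dim, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, out_modalities, 3, padding=1), nn.Tanh(),
        )

    def forward(self, f_dl: torch.Tensor, f_label: torch.Tensor) -> torch.Tensor:
        # f_dl: (B, feat_ch, D, H, W) ROI deep feature; f_label: (B, label_dim) predicted-label feature
        z = torch.randn_like(f_label)                                  # noise vector z
        f0 = torch.cat([z, f_label], dim=1)                           # text encoder feature F_0
        f0 = f0[:, :, None, None, None].expand(-1, -1, *f_dl.shape[2:])
        x = torch.cat([f_dl, f0], dim=1)                              # generator input F_input
        return self.decode(x)                                         # predicted post-treatment image I_G

i_g = GrowthGenerator()(torch.randn(1, 256, 4, 16, 16), torch.randn(1, 32))  # (1, 4, 16, 64, 64)
```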
Preferably, step M6.5 comprises:
Step M6.5.1: binarizing the obtained brain tumor growth evolution images to obtain the brain tumor mask I_G_mask;
Step M6.5.2: applying a position-wise AND operation between each modality image of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later and I_G_mask to obtain the non-brain-tumor background region I_background of the brain tumor multi-target multi-modal MRI data;
Step M6.5.3: inserting the target image into the source image I_background of the corresponding modality by minimizing the difference between the gradient field of the inserted region and the gradient field of the target image, so that the target image is seamlessly blended into the source image I_background of the corresponding modality according to the corresponding expected image f, completing the lesion insertion.
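Poisson image editing of this kind is available off the shelf. The sketch below (an illustration, not the patented 3D multi-modal implementation) blends a generated lesion into one background slice with OpenCV's seamlessClone, which solves the same gradient-domain minimization; the binarization threshold and the slice-wise, 8-bit handling are simplifying assumptions.

```python
import cv2
import numpy as np

def insert_lesion_slice(background: np.ndarray, lesion: np.ndarray, center_xy: tuple) -> np.ndarray:
    """background, lesion: 2D float arrays of equal size; center_xy: (x, y) insertion centre."""
    def to_u8(img):
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        return cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)

    src, dst = to_u8(lesion), to_u8(background)
    mask = (lesion > lesion.mean()).astype(np.uint8) * 255     # illustrative binarized mask (cf. I_G_mask)
    blended = cv2.seamlessClone(src, dst, mask, center_xy, cv2.NORMAL_CLONE)
    return cv2.cvtColor(blended, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
```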
The invention also provides a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization system, which comprises:
Module M1: acquiring paired pre- and post-treatment brain tumor multi-target multi-modal MRI data and preprocessing it to obtain standardized paired pre- and post-treatment data I_original and I_later;
Module M2: segmenting the tumor regions of the preprocessed paired brain tumor multi-target multi-modal MRI data I_original and I_later with a 3D U-net convolutional neural network to obtain the paired pre- and post-treatment tumor regions of interest I_ROI^original and I_ROI^later;
Module M3: deriving the growth feature label L = {l_1, l_2, l_3, ..., l_n} of the paired pre- and post-treatment brain tumors from the paired tumor regions of interest I_ROI^original and I_ROI^later by a radiomics method;
Module M4: extracting features from the paired tumor regions of interest I_ROI^original and I_ROI^later with a multi-channel convolutional neural network to obtain feature maps, and applying an SE fusion operation to the feature maps to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later;
Module M5: constructing a long short-term memory (LSTM) network based on multi-task learning from the pre-treatment deep learning feature F_DL^original and the corresponding growth feature label L = {l_1, l_2, l_3, ..., l_n}, training it by dynamically ordering the predicted label sequence, and obtaining the brain tumor multi-target growth prediction model after training; inputting the pre-treatment deep learning feature F_DL^original into the brain tumor multi-target growth prediction model to obtain the brain tumor multi-target growth prediction label L̂;
Module M6: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy; inputting the pre-treatment brain tumor region-of-interest deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image, and inserting this image into the non-brain-tumor region I_background of the pre-treatment multi-target multi-modal MRI data I_original to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
the prospective treatment visualization model comprises a text encoder module, a tumor growth prediction and generation visualization module and a tumor lesion insertion module; a GAN is used to generate the predicted tumor growth images: from the patient's existing multi-modal brain MRI images, the GAN predicts the brain tumor MRI images at the corresponding time after treatment, completing the prospective visualization of the treatment result;
the tumor growth prediction and generation visualization module comprises a generator G and a real/fake image discriminator D_RF.
Preferably, the preprocessing in module M1 comprises desensitization, cleaning, resampling and skull stripping of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data, yielding paired pre- and post-treatment data with uniform resolution and consistent gray-level distribution;
module M3 comprises:
Module M3.1: computing the radiomics features of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Module M3.2: deriving the growth feature label from the radiomics features;
the radiomics features comprise the brain tumor target type, brain tumor volume, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy;
the growth feature label comprises the brain tumor target type, brain tumor volume change, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy;
module M4 comprises:
Module M4.1: training a multi-channel convolutional neural network with a preset number of convolution-activation modules on the preprocessed paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Module M4.2: extracting the multi-channel convolutional feature maps of the paired tumor regions of interest I_ROI^original and I_ROI^later with the multi-channel convolutional neural network, and obtaining the feature maps F_concat^original and F_concat^later by a concat operation;
Module M4.3: applying an SE (squeeze-and-excitation) operation to F_concat^original and F_concat^later to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later;
module M5 comprises:
Module M5.1: connecting a preset number of long short-term memory networks in parallel to construct the LSTM network based on multi-task learning;
Module M5.2: initializing the pre-treatment brain tumor region-of-interest multi-modal MRI deep learning feature F_DL^original through a fully connected layer to obtain x_0;
Module M5.3: inputting x_0, the growth feature label L = {l_1, l_2, l_3, ..., l_n} and preset start and end tokens into the LSTM network; at time step t, converting the prediction l̂_{t-1} output at the previous time step t-1 into a feature vector x_{t-1} through a word-embedding matrix, inputting the feature vector x_{t-1} into the LSTM network at time step t, and obtaining the predicted label l̂_t by a dynamic ordering method;
Module M5.4: each LSTM predicts a prediction vector p_t, forming the prediction matrix p = [p_1, p_2, ..., p_n]; the LSTM network based on multi-task learning predicts a new label at each time step until the loss function converges, yielding the trained brain tumor multi-target growth prediction model, from which the predicted brain tumor growth feature label L̂ is obtained;
module M6 comprises:
Module M6.1: establishing the prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy;
Module M6.2: inputting the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later and the brain tumor growth feature label L = {l_1, l_2, l_3, ..., l_n} into the prospective treatment visualization model, and training the generator G of the generative adversarial network; the generator G generates the post-treatment brain tumor region-of-interest deep learning feature F_G^later from the pre-treatment feature F_DL^original, and the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G is obtained through the upsampling network of the generator G;
Module M6.3: the real/fake image discriminator D_RF compares the deep learning feature F_G^later of the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G with the deep learning feature F_DL^later of the real post-treatment brain tumor region of interest to complete the adversarial learning, finally yielding the prospective treatment visualization model of brain tumor growth evolution;
Module M6.4: inputting the clinical pre-treatment brain tumor region-of-interest multi-modal MRI deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model of brain tumor growth evolution to obtain several different brain tumor growth evolution images, and selecting from them the brain tumor ROI growth evolution image that meets the preset requirement;
Module M6.5: inserting the selected brain tumor ROI growth evolution image into the non-brain-tumor region I_background of I_original by a Poisson image editing method to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
module M6.2 comprises:
Module M6.2.1: obtaining the final text encoder feature F_0 from the noise vector z, sampled from a standard normal distribution, and the feature F_L̂ of the predicted brain tumor growth label L̂ extracted by the LSTM network;
Module M6.2.2: concatenating the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later with the final text encoder feature F_0 by a concat operation to form the input F_input of the tumor growth prediction generator G, which generates the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G;
module M6.5 comprises:
Module M6.5.1: binarizing the obtained brain tumor growth evolution images to obtain the brain tumor mask I_G_mask;
Module M6.5.2: applying a position-wise AND operation between each modality image of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later and I_G_mask to obtain the non-brain-tumor background region I_background of the brain tumor multi-target multi-modal MRI data;
Module M6.5.3: inserting the target image into the source image I_background of the corresponding modality by minimizing the difference between the gradient field of the inserted region and the gradient field of the target image, so that the target image is seamlessly blended into the source image I_background of the corresponding modality according to the corresponding expected image f, completing the lesion insertion.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses deep learning: by extracting brain tumor ROI multi-modal MRI deep learning features, building a brain tumor multi-target growth prediction model and a prospective treatment visualization model of brain tumor growth evolution, it solves the problem that existing brain tumor auxiliary diagnosis and treatment models cannot provide a prospective, visual evaluation of treatment efficacy before treatment. Compared with other brain tumor auxiliary diagnosis and treatment models, it changes the way doctors and patients communicate: by having the doctor select the most accurate brain tumor evolution visualization image, a more intuitive efficacy evaluation result can be provided to both doctor and patient, so the invention has good clinical practicability.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a flow chart of a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will help those skilled in the art to further understand the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention; all of these fall within the scope of the present invention.
Example 1
In order to construct a brain tumor multi-target auxiliary diagnosis model and a prospective visualization model of brain tumor growth evolution under treatment that meet the requirements of clinical application, the invention provides a brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method that overcomes the defects of conventional brain tumor auxiliary diagnosis and treatment technology.
The brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method provided by the invention comprises the following steps:
Step M1: acquiring paired pre- and post-treatment brain tumor multi-target multi-modal MRI data and preprocessing it to obtain standardized paired pre- and post-treatment data I_original and I_later;
Step M2: segmenting the tumor regions of the preprocessed paired brain tumor multi-target multi-modal MRI data I_original and I_later with a 3D U-net convolutional neural network to obtain the paired pre- and post-treatment tumor regions of interest I_ROI^original and I_ROI^later;
Step M3: deriving the growth feature label L = {l_1, l_2, l_3, ..., l_n} of the paired pre- and post-treatment brain tumors from the paired tumor regions of interest I_ROI^original and I_ROI^later by a radiomics method;
Step M4: extracting features from the paired tumor regions of interest I_ROI^original and I_ROI^later with a multi-channel convolutional neural network to obtain feature maps, and applying an SE fusion operation to the feature maps to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later;
Step M5: constructing a long short-term memory (LSTM) network based on multi-task learning from the pre-treatment deep learning feature F_DL^original and the corresponding growth feature label L = {l_1, l_2, l_3, ..., l_n}, training it by dynamically ordering the predicted label sequence, and obtaining the brain tumor multi-target growth prediction model after training; inputting the pre-treatment deep learning feature F_DL^original into the brain tumor multi-target growth prediction model to obtain the brain tumor multi-target growth prediction label L̂;
Step M6: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy; inputting the pre-treatment brain tumor region-of-interest deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image, and inserting this image into the non-brain-tumor region I_background of the pre-treatment multi-target multi-modal MRI data I_original to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
the prospective treatment visualization model comprises a text encoder module, a tumor growth prediction and generation visualization module and a tumor lesion insertion module; a GAN is used to generate the predicted tumor growth images: from the patient's existing multi-modal brain MRI images, the GAN predicts the brain tumor MRI images at the corresponding time after treatment, completing the prospective visualization of the treatment result;
the tumor growth prediction and generation visualization module comprises a generator G and a real/fake image discriminator D_RF.
Specifically, the preprocessing in step M1 comprises desensitization, cleaning, resampling and skull stripping of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data, yielding paired pre- and post-treatment data with uniform resolution and consistent gray-level distribution.
Specifically, step M3 comprises:
Step M3.1: computing the radiomics features of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Step M3.2: deriving the growth feature label from the radiomics features;
the radiomics features comprise the brain tumor target type, brain tumor volume, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy;
the growth feature label comprises the brain tumor target type, brain tumor volume change, intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy.
Specifically, step M4 comprises:
Step M4.1: training a multi-channel convolutional neural network with a preset number of convolution-activation modules on the preprocessed paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
Step M4.2: extracting the multi-channel convolutional feature maps of the paired tumor regions of interest I_ROI^original and I_ROI^later with the multi-channel convolutional neural network, and obtaining the feature maps F_concat^original and F_concat^later by a concat operation;
Step M4.3: applying an SE (squeeze-and-excitation) operation to F_concat^original and F_concat^later to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later.
specifically, the step M5 includes:
step M5.1: a preset plurality of long and short term memory networks are connected in parallel to construct a long and short term memory network based on multi-task learning;
step M5.2: multi-modal MRI (magnetic resonance imaging) image deep learning feature of brain tumor region of interest before treatment through full connection layer
Figure BDA0002780149590000121
Initialization results in
Figure BDA0002780149590000122
Step M5.3: will be provided with
Figure BDA0002780149590000123
Growth characteristics label L ═ { L ═1,l2,l3,...,lnInputting the symbol and the preset start and end marks into a long-short term memory network; at time step t, the prediction output from the previous time step t-1
Figure BDA0002780149590000124
Converting into feature vector by word embedding matrix
Figure BDA0002780149590000125
Feature vector sum
Figure BDA0002780149590000126
Inputting the long and short term memory network at time step t, and obtaining the prediction label by a dynamic sequencing method
Figure BDA0002780149590000127
Step M5.4: predicting a preset long-short term memory network to obtain a preset prediction vector pt, and forming a prediction matrix p ═ p1,p2,...,pn]Predicting a new label at each time step by a long-short term memory network based on multi-task learning until loss alignment loss function converges to obtain a trained brain tumor multi-target growth prediction model, and obtaining a predicted brain tumor growth characteristic label according to the trained brain tumor multi-target growth prediction model
Figure BDA0002780149590000128
Specifically, step M6 comprises:
Step M6.1: establishing the prospective treatment visualization model of brain tumor growth evolution based on a CNN with a generative adversarial strategy;
Step M6.2: inputting the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later and the brain tumor growth feature label L = {l_1, l_2, l_3, ..., l_n} into the prospective treatment visualization model, and training the generator G of the generative adversarial network; the generator G generates the post-treatment brain tumor region-of-interest deep learning feature F_G^later from the pre-treatment feature F_DL^original, and the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G is obtained through the upsampling network of the generator G;
Step M6.3: the real/fake image discriminator D_RF compares the deep learning feature F_G^later of the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G with the deep learning feature F_DL^later of the real post-treatment brain tumor region of interest to complete the adversarial learning, finally yielding the prospective treatment visualization model of brain tumor growth evolution;
Step M6.4: inputting the clinical pre-treatment brain tumor region-of-interest multi-modal MRI deep learning feature F_DL^original and the predicted brain tumor growth feature label L̂ into the trained prospective treatment visualization model of brain tumor growth evolution to obtain several different brain tumor growth evolution images, and selecting from them the brain tumor ROI growth evolution image that meets the preset requirement;
Step M6.5: inserting the selected brain tumor ROI growth evolution image into the non-brain-tumor region I_background of I_original by a Poisson image editing method to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task.
In particular, step M6.2 comprises:
Step M6.2.1: obtaining the final text encoder feature F_0 from the noise vector z, sampled from a standard normal distribution, and the feature F_L̂ of the predicted brain tumor growth label L̂ extracted by the LSTM network;
Step M6.2.2: concatenating the pre- and post-treatment brain tumor region-of-interest multi-modal MRI deep learning features F_DL^original and F_DL^later with the final text encoder feature F_0 by a concat operation to form the input F_input of the tumor growth prediction generator G, which generates the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G.
In particular, step M6.5 comprises:
Step M6.5.1: binarizing the obtained brain tumor growth evolution images to obtain the brain tumor mask I_G_mask;
Step M6.5.2: applying a position-wise AND operation between each modality image of the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later and I_G_mask to obtain the non-brain-tumor background region I_background of the brain tumor multi-target multi-modal MRI data;
Step M6.5.3: inserting the target image into the source image I_background of the corresponding modality by minimizing the difference between the gradient field of the inserted region and the gradient field of the target image, so that the target image is seamlessly blended into the source image I_background of the corresponding modality according to the corresponding expected image f, completing the lesion insertion.
Example 2
Example 2 is a modification of Example 1.
As shown in Fig. 1, the brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method provided by the invention comprises the following steps:
Step (1): preprocessing the brain tumor multi-target multi-modal MRI data (comprising T1, T1C, T2, FLAIR, PWI and ADC). According to recent WHO recommendations, clinical guidelines and pathological data, physicians assign the brain tumor multi-modal MRI data to multiple molecular gene target classes: IDH mutant/wildtype, 1p/19q co-deletion, EGFR, PTEN, TERT, TP53, ATRX, ALK, etc. The paired pre- and post-treatment brain tumor multi-target multi-modal MRI data with uniform resolution and approximately the same gray-level distribution are then obtained by preprocessing methods such as data desensitization, resampling and skull stripping, and are denoted I_original and I_later, respectively; each modality is stored as a .nii.gz volume of size 256 × 256 × 16.
Step (2): brain tumor ROI multi-modal MRI deep learning (DL) feature extraction. First, the I_original and I_later obtained in step (1) are used to segment the tumor region of interest (ROI), of size 256 × 256 × 4, with a 3D U-net network, yielding I_ROI^original and I_ROI^later. Then a trained 4-channel convolutional neural network with 5 convolution-activation modules (each module consisting of three convolution layers with 3 × 3 kernels and stride 1, a ReLU activation function and a max-pooling layer with stride 2) extracts the multi-channel convolutional feature maps of the ROIs, each channel yielding 512 feature maps of size 16 × 16; a squeeze-and-excitation (SE) operation is then applied to the obtained 4-channel feature maps to emphasize the most informative channel and spatial features and suppress unimportant ones, giving the final pre- and post-treatment brain tumor ROI multi-modal MRI deep learning features F_DL^original and F_DL^later, each of size 512 × 16 × 16 × 4. From I_ROI^original and I_ROI^later, the growth feature labels of the paired pre- and post-treatment brain tumors are obtained by a radiomics method (the invention defines 6 growth feature labels: brain tumor target type, brain tumor volume change (e.g. enlargement or shrinkage), intensity mean, intensity standard deviation, gray-level co-occurrence matrix and entropy), denoted L = {l_1, l_2, ..., l_6}.
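For the segmentation part of this step, a generic 3D U-Net such as MONAI's implementation can stand in as a sketch (the actual 3D U-net used by the invention is not specified here); the channel sizes and the two-class output are assumptions.

```python
# Minimal sketch: segment the tumor ROI from the four preprocessed modalities with a 3D U-Net.
import torch
from monai.networks.nets import UNet

unet = UNet(
    spatial_dims=3, in_channels=4, out_channels=2,        # 4 modalities in, tumor/background out
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)

volume = torch.randn(1, 4, 16, 256, 256)                   # (B, modality, D, H, W), as in step (1)
with torch.no_grad():
    mask = unet(volume).argmax(dim=1, keepdim=True)        # binary tumor mask
roi = volume * mask                                        # masked ROI passed on to feature extraction
```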
Step (3): brain tumor multi-target growth prediction model. In the training stage, the pre-treatment brain tumor ROI multi-modal MRI deep learning feature F_DL^original obtained in step (2) is used to construct and train a long short-term memory (LSTM) network based on multi-task learning; the LSTM is trained by dynamically ordering the predicted label sequence, yielding the final brain tumor multi-target growth prediction model. In the testing stage, the pre-treatment brain tumor ROI multi-modal MRI deep learning feature F_DL^original obtained in step (2) is used as the input of the brain tumor multi-target growth prediction model, which outputs the predicted brain tumor growth feature label L̂ covering the brain tumor target type, brain tumor volume change (e.g. enlargement/shrinkage), intensity change (e.g. mean and standard deviation) and texture change (e.g. gray-level co-occurrence matrix and entropy).
Step (4): prospective treatment visualization model of brain tumor growth evolution. The invention establishes the prospective treatment visualization model of brain tumor growth evolution based on a CNN network with a generative adversarial strategy. In the training stage, the model takes the pre- and post-treatment brain tumor ROI multi-modal MRI image deep learning features F_original^ROI and F_later^ROI obtained in step (2) and the brain tumor growth characteristic labels L = {l_1, l_2, …, l_6} as input; the generator G of the generative adversarial network is trained to generate the post-treatment brain tumor ROI multi-modal MRI image deep learning features F_G^ROI from the pre-treatment features F_original^ROI, and the predicted post-treatment brain tumor ROI multi-modal MRI image I_G is obtained through the up-sampling network of the generator G. The generated-image true/false discriminator D_RF completes adversarial learning by comparing the deep learning features F_G^ROI of I_G with the deep learning features F_later^ROI of the real post-treatment brain tumor ROI multi-modal MRI images, finally yielding the prospective treatment visualization model of brain tumor growth evolution. In the testing stage, the pre-treatment brain tumor ROI multi-modal MRI image deep learning features obtained in step (2) and the predicted brain tumor growth characteristic labels L̂ from step (3) are used as input, and the trained prospective treatment visualization model outputs 5 different brain tumor growth evolution images; a physician then manually selects the most appropriate image as the final brain tumor ROI growth evolution image. Finally, the predicted brain tumor ROI growth evolution image is inserted into the non-brain-tumor region I_background of I_original by the Poisson image editing method, and the final brain tumor multi-modal MRI image is obtained, completing the prospective treatment visualization task for the brain tumor.
FIG. 1 shows a method for visualizing brain tumor multi-target auxiliary diagnosis and prospective treatment evolution, wherein the preprocessing in step (1) comprises:
data desensitization and cleaning; sensitive information in the original brain tumor multi-modal MRI data collected by the hospital is transformed according to a desensitization rule;
data resampling; the invention resamples all data sets to a fixed, uniform resolution, resampling every sample to 256 × 256 × 16;
skull stripping and data storage; the resampled 256 × 256 × 16 data are skull-stripped with the FSL BET (brain extraction) tool to remove non-brain tissue, and each modality is uniformly saved as .nii.gz data of size 256 × 256 × 16.
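A minimal Python sketch of this preprocessing chain, assuming SimpleITK for the resampling step and FSL's `bet` command-line tool for skull stripping; the file paths, modality list and BET threshold are illustrative assumptions rather than values fixed by the invention.

```python
import subprocess
import SimpleITK as sitk

TARGET_SIZE = (256, 256, 16)  # in-plane x, y and number of slices

def resample_to_fixed_grid(img: sitk.Image, size=TARGET_SIZE) -> sitk.Image:
    """Resample a volume to a fixed 256 x 256 x 16 grid with linear interpolation."""
    new_spacing = [osz * osp / nsz for osz, osp, nsz
                   in zip(img.GetSize(), img.GetSpacing(), size)]
    return sitk.Resample(img, size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), new_spacing, img.GetDirection(),
                         0.0, img.GetPixelID())

def preprocess_modality(in_path: str, out_path: str) -> None:
    """Resample one modality, skull-strip it with FSL BET, save as .nii.gz."""
    resampled = resample_to_fixed_grid(sitk.ReadImage(in_path))
    tmp_path = out_path.replace(".nii.gz", "_resampled.nii.gz")
    sitk.WriteImage(resampled, tmp_path)
    # 'bet' is FSL's brain-extraction command; -f 0.5 is an illustrative default.
    subprocess.run(["bet", tmp_path, out_path, "-f", "0.5"], check=True)

for modality in ["T1", "T1C", "T2", "Flair", "PWI", "ADC"]:
    preprocess_modality(f"raw/{modality}.nii.gz", f"preprocessed/{modality}.nii.gz")
```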
Fig. 1 shows a brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method, wherein the brain tumor ROI multi-modal MRI image deep learning feature extraction in step (2) comprises:
Firstly, segmenting the brain tumor ROI region: the invention uses a 3D U-net segmentation network to segment the tumors in the I_original and I_later obtained in step (1), obtaining paired pre- and post-treatment brain tumor regions of interest (ROI) of size 256 × 256 × 4, denoted I_original^ROI and I_later^ROI. The invention randomly divides the brain tumor multi-modal MRI data with segmentation masks into a training set (0.8), a validation set (0.1) and a test set (0.1), feeds them into the 3D U-net segmentation network, and completes ROI segmentation through network training and network testing.
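The 0.8/0.1/0.1 partition described above can be realized, for example, by a random split of case identifiers; the routine below is an illustrative sketch, since the patent does not prescribe a particular splitting implementation.

```python
import numpy as np

def split_cases(case_ids, seed=0):
    """Randomly split case IDs into training (0.8), validation (0.1) and test (0.1) sets."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(case_ids)
    n_train, n_val = int(0.8 * len(ids)), int(0.1 * len(ids))
    return (ids[:n_train],                     # training set (0.8)
            ids[n_train:n_train + n_val],      # validation set (0.1)
            ids[n_train + n_val:])             # test set (0.1)
```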
Secondly, extracting the brain tumor multi-modal MRI deep image features: using the I_original^ROI and I_later^ROI obtained above, a 4-channel convolutional neural network with 5 convolution-activation layer modules (each module consists of 3 convolution layers with kernel size 3 × 3 and stride 1, a ReLU activation function and a max-pooling layer with stride 2) is trained to extract multi-channel convolution feature maps from the ROIs; each channel yields 512 feature maps of size 16 × 16, and a concat operation produces the 4-channel feature maps Z_original and Z_later, both of size 16 × 16 × 512 × 4.
Then, for a given Z_original or Z_later, a Squeeze-and-Excitation (SE) operation is performed to emphasize the channel and spatial features carrying the most information and to suppress unimportant features. The SE fusion operation comprises:
Compression operation (Squeeze): the feature map Z = {z_i,j}, where z_i,j ∈ R^(16×16), i ∈ {1, 2, …, 512}, j ∈ {1, 2, 3, 4}, undergoes global average pooling to obtain a weight matrix W_s of size 1 × 1 × 512 × 4; the compressed feature Z_S is:
Z_S(i, j) = (1 / (16 × 16)) Σ_{m=1..16} Σ_{n=1..16} z_i,j(m, n)
Excitation operation (Excitation): the compressed feature Z_S first passes through a fully connected (FC) layer of 2048 × 1024 neurons and a Sigmoid activation layer, giving the first activation feature weight W_C' (of size 1 × 1 × 2048 × 1024); a second FC layer of 2048 neurons and a Sigmoid activation layer then produce the channel weights W_C (of size 1 × 1 × 512 × 4); the excitation feature is:
Z_E = W_C = σ(W_2 · σ(W_1 · Z_S))
Finally, the brain tumor ROI multi-modal MRI image deep learning feature is obtained by rescaling the original feature map with the excitation weights:
F_original^ROI = Z_E ⊙ Z_original
In the same way, F_later^ROI can also be obtained.
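A compact PyTorch sketch of the channel squeeze-and-excitation step described above is given below; following the text, both fully connected layers are followed by Sigmoid activations, but the layer widths are simplified to a plain 512-channel path (the 2048 × 1024 sizing is not reproduced), so this is an assumption-laden illustration rather than the exact module.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Squeeze-and-excitation: global average pool, two FC layers, channel-wise rescale."""
    def __init__(self, channels: int = 512, reduced: int = 1024):
        super().__init__()
        self.fc1 = nn.Linear(channels, reduced)
        self.fc2 = nn.Linear(reduced, channels)
        self.act = nn.Sigmoid()                    # Sigmoid after both FC layers, as in the text

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, channels, H, W); squeeze = global average pooling per channel
        w = z.mean(dim=(2, 3))                     # compressed feature Z_S, shape (batch, channels)
        w = self.act(self.fc1(w))                  # first activation weight W_C'
        w = self.act(self.fc2(w))                  # channel weights W_C in [0, 1]
        return z * w[:, :, None, None]             # rescale the original feature map

# usage sketch: 512 feature maps of size 16 x 16 for one modality channel
feats = torch.randn(1, 512, 16, 16)
out = SqueezeExcite()(feats)                       # same shape, informative channels emphasised
```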
The imaging omics features of I_original^ROI and I_later^ROI are then obtained by the imaging omics method; 6 features are used: brain tumor target type, brain tumor volume, intensity mean, intensity standard deviation, gray level co-occurrence matrix and entropy. Comparing these features before and after treatment yields the 6 growth characteristic labels: brain tumor target type, brain tumor volume change (e.g., enlargement or reduction), intensity mean and intensity standard deviation, gray level co-occurrence matrix and entropy, recorded as L = {l_1, l_2, …, l_6}.
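For illustration only, the intensity and texture measurements listed above can be computed per ROI slice roughly as follows, assuming scikit-image (graycomatrix/graycoprops) for the gray level co-occurrence matrix; the quantization to 32 levels and the chosen distance/angle are assumptions, and the target-type and volume-change labels are derived separately from the clinical annotations and the paired volumes.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # assumes scikit-image >= 0.19

def slice_radiomics(roi_slice: np.ndarray, mask: np.ndarray) -> dict:
    """Illustrative intensity and texture features for a single ROI slice."""
    voxels = roi_slice[mask > 0].astype(np.float64)
    # quantize the slice to 32 gray levels for the co-occurrence matrix
    bins = np.linspace(voxels.min(), voxels.max(), 33)
    quantized = np.clip(np.digitize(roi_slice, bins) - 1, 0, 31).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0],
                        levels=32, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    p = p[p > 0]
    return {
        "area_px": int((mask > 0).sum()),                      # slice-wise proxy for tumor volume
        "intensity_mean": float(voxels.mean()),
        "intensity_std": float(voxels.std()),
        "glcm_contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```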
Fig. 1 shows a visualization method for brain tumor multi-target aided diagnosis and prospective treatment evolution, in which step (3) is the brain tumor multi-target growth prediction model. To generate the growth characteristic labels of the brain tumor, a Long Short-Term Memory (LSTM) network is introduced into the brain tumor multi-target growth prediction model, and the brain tumor growth characteristic labels are predicted with a dynamic ordering method for the predicted label sequence. The method comprises the following steps:
The invention takes the F_original^ROI obtained in step (2) and the 6 growth characteristic labels L = {l_1, l_2, …, l_6} obtained in step (2), together with start and end flags, as input; F_original^ROI is passed through a fully connected layer and initialized as the LSTM input h_0.
At time t, the forward propagation of the LSTM prediction is controlled as follows:
i_t = σ(W_i x_t + U_i h_{t-1} + b_i)
f_t = σ(W_f x_t + U_f h_{t-1} + b_f)
o_t = σ(W_o x_t + U_o h_{t-1} + b_o)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_c x_t + U_c h_{t-1} + b_c)
h_t = o_t ⊙ tanh(c_t)
where c_t and h_t are the model cell state and hidden state, i_t, f_t and o_t are the input, forget and output gates, x_t is the input at time t, W, U and b are the parameters to be learned, and σ and tanh are the sigmoid and tanh functions.
At time step t, the invention uses the prediction l̂_{t-1} output at the previous time step t-1 as input; E is a word-embedding matrix that converts the embedded label l̂_{t-1} into a vector, and the prediction p_t for the current time step t is computed from the current hidden state h_t through a fully connected layer with softmax activation.
The invention uses the predictions of 6 LSTMs to obtain 6 prediction vectors p_t, forming the prediction matrix P = [p_1, …, p_6]. The LSTM model predicts a new label at each time step until an end signal is generated, yielding the predicted brain tumor growth characteristic labels.
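A hedged PyTorch sketch of this label-sequence predictor follows: an LSTM cell is initialized from the pre-treatment deep feature through a fully connected layer, and at each of the 6 time steps the previous prediction is embedded and fed back in. The hidden size, label vocabulary size and greedy feedback are assumptions; the dynamic ordering of the labels is handled by the alignment loss described next.

```python
import torch
import torch.nn as nn

class GrowthLabelLSTM(nn.Module):
    """One LSTM cell emits one growth characteristic label per time step."""
    def __init__(self, feat_dim=16 * 16 * 512 * 4, hidden=256, vocab=16):
        super().__init__()
        self.init_fc = nn.Linear(feat_dim, hidden)   # initialise h_0 from F_original^ROI
        self.embed = nn.Embedding(vocab, hidden)     # word-embedding matrix E for the previous label
        self.cell = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab)          # per-step label logits (softmax applied in the loss)

    def forward(self, roi_feature: torch.Tensor, start_token: int = 0, n_labels: int = 6):
        h = self.init_fc(roi_feature.flatten(1))     # h_0
        c = torch.zeros_like(h)
        prev = torch.full((roi_feature.size(0),), start_token, dtype=torch.long)
        preds = []
        for _ in range(n_labels):                    # one step per growth characteristic label
            h, c = self.cell(self.embed(prev), (h, c))
            logits = self.out(h)
            prev = logits.argmax(dim=1)              # feed the previous prediction back in
            preds.append(logits)
        return torch.stack(preds, dim=1)             # prediction matrix P, shape (batch, 6, vocab)
```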
According to the brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method, in the brain tumor multi-target growth prediction model of step (3), the correct labels and the network predictions L̂ are aligned before the loss is computed; the minimum-loss alignment loss function is defined as follows: at time step t, the prediction is compared with the corresponding label at the same step of the gold-standard sequence, an assignment matrix T is computed that minimizes the summed cross-entropy loss, and the alignment is solved with the Hungarian algorithm to obtain the predicted brain tumor growth characteristic labels L̂.
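The minimum-loss alignment can be implemented with SciPy's Hungarian solver; the sketch below builds a cross-entropy cost matrix between prediction steps and gold-standard labels and returns the minimizing assignment (the log-probability input format is an assumption).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_loss_alignment(pred_log_probs: np.ndarray, gold_labels: np.ndarray) -> np.ndarray:
    """
    pred_log_probs: (n_steps, vocab) log-probabilities emitted at each LSTM time step.
    gold_labels:    (n_steps,) gold-standard label indices, order not assumed fixed.
    Returns, for each prediction step, the index of the gold label assigned to it so that
    the summed cross-entropy is minimal (Hungarian algorithm).
    """
    cost = -pred_log_probs[:, gold_labels]        # cost[i, j] = CE of predicting gold label j at step i
    row_ind, col_ind = linear_sum_assignment(cost)
    return col_ind[np.argsort(row_ind)]
```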
Fig. 1 shows a method for visualizing brain tumor multi-target aided diagnosis and prospective treatment evolution, where the prospective treatment visualization model of brain tumor growth evolution in step (4) comprises:
A text encoder module, the module comprising:
the text encoder adopts a pre-trained bidirectional LSTM network and extracts a semantic vector from the predicted brain tumor growth characteristic labels obtained in step (3). Here z is a noise vector sampled from a standard normal distribution and s is the sentence feature extracted by the LSTM network; the final feature F_0 of the text encoder is composed of z and the condition-augmented feature ŝ, expressed as:
F_0 = concat(z, ŝ)
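An illustrative PyTorch sketch of this conditioning step: the predicted label sequence is encoded with a bidirectional LSTM and concatenated with a noise vector z drawn from a standard normal distribution; the embedding and hidden sizes are assumptions.

```python
import torch
import torch.nn as nn

class TextCondition(nn.Module):
    """Encode the 6 predicted growth labels with a bi-LSTM and append a noise vector z."""
    def __init__(self, vocab=16, embed=64, hidden=128, z_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.bilstm = nn.LSTM(embed, hidden, batch_first=True, bidirectional=True)
        self.z_dim = z_dim

    def forward(self, label_seq: torch.Tensor) -> torch.Tensor:
        # label_seq: (batch, 6) predicted growth characteristic labels
        _, (h, _) = self.bilstm(self.embed(label_seq))
        s = torch.cat([h[0], h[1]], dim=1)               # sentence feature from both directions
        z = torch.randn(label_seq.size(0), self.z_dim)   # noise from a standard normal distribution
        return torch.cat([z, s], dim=1)                  # final text-encoder feature F_0
```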
A tumor growth prediction generation visualization module, the module comprising: a generator G and a discriminator D_RF for judging whether the generated image is true or false.
In the training stage, the model takes the pre- and post-treatment brain tumor ROI multi-modal MRI image deep learning features F_original^ROI and F_later^ROI obtained in step (2) and their brain tumor growth characteristic labels L = {l_1, l_2, …, l_6} as input; the generator G of the generative adversarial network is trained to generate the predicted post-treatment brain tumor ROI multi-modal MRI image deep learning features F_G^ROI from F_original^ROI, and the predicted post-treatment brain tumor ROI multi-modal MRI image I_G is obtained through the up-sampling network of the generator G. The generated-image true/false discriminator D_RF completes adversarial learning by comparing the deep learning features F_G^ROI of I_G with the deep learning features F_later^ROI of the real post-treatment brain tumor ROI multi-modal MRI images, finally yielding the prospective treatment visualization model of brain tumor growth evolution;
in the testing stage, the pre-treatment brain tumor ROI multi-modal MRI image deep learning features F_original^ROI obtained in step (2) and the predicted brain tumor growth characteristic labels L̂ from step (3) are used as input, and the trained prospective treatment visualization model of brain tumor growth evolution outputs 5 different brain tumor growth evolution images; the physician then manually selects the most suitable image as the final brain tumor ROI growth evolution image I_G.
The loss functions of the tumor growth prediction generation visualization module comprise:
the loss function of the discriminator D_RF, defined in the standard adversarial form as
L_D = −E[log D_RF(F_later^ROI)] − E[log(1 − D_RF(F_G^ROI))]
and the loss function of the generator G, defined as
L_G = −E[log D_RF(F_G^ROI)]
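A hedged PyTorch sketch of one adversarial training step using losses of this form (binary cross-entropy) follows; `G`, `D_RF`, the two optimizers and the feature tensors are placeholders assumed to be supplied by the surrounding training loop, and `D_RF` is assumed to output probabilities in [0, 1].

```python
import torch
import torch.nn.functional as F

def adversarial_step(G, D_RF, opt_G, opt_D, feat_pre, feat_post_real, cond):
    """One GAN step: D_RF separates real from generated post-treatment features, G tries to fool it."""
    # discriminator update
    feat_post_fake = G(feat_pre, cond).detach()
    d_real, d_fake = D_RF(feat_post_real), D_RF(feat_post_fake)
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # generator update
    d_fake = D_RF(G(feat_pre, cond))
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```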
A tumor lesion insertion module, the module comprising:
first, the I_G selected by the physician in step (4) is binarized to obtain the brain tumor mask, denoted I_G_mask; then, each modality image in the brain tumor multi-target multi-modal MRI data obtained in step (1) is combined with I_G_mask by a position-wise AND operation to obtain the background region I_background of the brain tumor multi-target multi-modal MRI data (i.e., the non-brain-tumor region); finally, I_G is inserted into the I_background of the corresponding modality according to the Poisson image editing method, obtaining the predicted brain tumor multi-modal MRI image and completing the prospective treatment visualization task for the brain tumor.
The brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method shown in fig. 1, wherein the Poisson editing in step (4) comprises:
The invention inserts the target image I_G into the source image I_background of the corresponding modality. Let D denote the image definition domain, let Ω be a closed subset of D with boundary ∂Ω, and denote the desired result of the blending over Ω as f (f* denotes the known image values outside Ω). Poisson editing is essentially a diffusion process, and the invention finds f by solving the following minimization problem:
min_f ∬_Ω |∇f|² , with f|_∂Ω = f*|_∂Ω
where ∇ denotes the gradient operator. To solve the actual image insertion problem, the invention introduces a guidance field v into the minimization problem:
min_f ∬_Ω |∇f − v|² , with f|_∂Ω = f*|_∂Ω
To realize image editing, the invention solves the unique solution of the following Poisson equation with Dirichlet boundary conditions:
Δf = div v over Ω , with f|_∂Ω = f*|_∂Ω
where div (∇·) denotes the divergence operator and Δ the Laplacian. The invention obtains the guidance vector field v as the gradient of the target image g being inserted:
v = ∇g
Finally, the pixel grid of the digital image is discretized, and the resulting discrete minimization problem is solved with an iterative method such as the Jacobi method or Gauss–Seidel iteration with successive over-relaxation.
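A small NumPy sketch of this discretized Poisson insertion, using Jacobi iteration with the target image's Laplacian as the guidance term (v = ∇g) and the background as the Dirichlet boundary; the iteration count is an arbitrary illustrative choice.

```python
import numpy as np

def poisson_insert(background: np.ndarray, target: np.ndarray,
                   mask: np.ndarray, n_iter: int = 2000) -> np.ndarray:
    """Solve the discrete Poisson equation inside the mask by Jacobi iteration."""
    f = background.astype(np.float64).copy()
    g = target.astype(np.float64)
    inside = mask > 0
    # discrete Laplacian of the target = divergence of the guidance field v = grad g
    lap_g = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
             np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    for _ in range(n_iter):
        neighbour_sum = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                         np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f_new = (neighbour_sum - lap_g) / 4.0     # Jacobi update of  Delta f = div v
        f[inside] = f_new[inside]                 # only pixels inside Omega change; boundary stays fixed
    return f
```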
Finally, it is noted that the disclosed embodiments are intended to aid in further understanding of the invention, but those skilled in the art will appreciate that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method is characterized by comprising the following steps:
step M1: obtaining brain tumor multi-target multi-modal MRI data paired before and after treatment and preprocessing them to obtain standardized paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
step M2: based on the preprocessed paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later, segmenting the tumor region with a 3D U-net convolutional neural network to obtain the paired pre- and post-treatment tumor regions of interest I_original^ROI and I_later^ROI;
step M3: obtaining growth characteristic labels L = {l_1, l_2, l_3, …, l_n} of the paired pre- and post-treatment brain tumors from the paired tumor regions of interest I_original^ROI and I_later^ROI by an imaging omics method;
step M4: extracting features from the paired pre- and post-treatment tumor regions of interest I_original^ROI and I_later^ROI through a multi-channel convolutional neural network to obtain feature maps, and performing an SE fusion operation on the feature maps to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI;
step M5: using the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and the corresponding growth characteristic labels L = {l_1, l_2, l_3, …, l_n} to construct and train a long short-term memory network based on multi-task learning, and training it with a dynamic ordering method for the predicted label sequence to obtain the trained brain tumor multi-target growth prediction model; inputting the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI into the brain tumor multi-target growth prediction model to obtain the predicted brain tumor multi-target growth labels L̂;
step M6: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN network with a generative adversarial strategy; inputting the pre-treatment brain tumor ROI multi-modal MRI image deep learning features F_original^ROI and the predicted brain tumor growth characteristic labels L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image; using the prospective treatment visualization model, inserting the brain tumor region-of-interest growth evolution image into the non-brain-tumor region I_background of the pre-treatment brain tumor multi-target multi-modal MRI data I_original to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
the prospective treatment visualization model comprises a text encoder module, a tumor growth prediction generation visualization module and a tumor lesion insertion module; a GAN network is used to generate predicted tumor growth images: from the patient's existing multi-modal brain MRI images, the GAN network predicts the brain tumor MRI image at the corresponding time after treatment, completing the prospective visualization of the treatment result;
the tumor growth prediction generation visualization module comprises a generator G and a discriminator D_RF for judging whether a generated image is true or false.
2. The brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method according to claim 1, wherein the preprocessing in the step M1 comprises performing data processing including desensitization, cleaning, resampling and skull stripping on the brain tumor multi-target multi-modal MRI data paired before and after treatment to obtain paired pre- and post-treatment brain tumor multi-target multi-modal MRI data with uniform resolution and the same gray-level distribution.
3. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 1, wherein the step M3 comprises:
step M3.1: obtaining the imaging omics features of the brain tumor multi-target multi-modal MRI data I_original and I_later paired before and after treatment by an imaging omics method;
step M3.2: obtaining the growth characteristic labels through imaging omics feature calculation;
the imaging omics features comprise brain tumor target type, brain tumor volume, intensity mean and intensity standard deviation, gray level co-occurrence matrix and entropy;
the growth characteristic labels comprise brain tumor target type, brain tumor volume change, intensity mean and intensity standard deviation, gray level co-occurrence matrix and entropy.
4. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 1, wherein the step M4 comprises:
step M4.1: training a multi-channel convolutional neural network with a preset number of convolution-activation layer modules using the preprocessed brain tumor multi-target multi-modal MRI data I_original and I_later paired before and after treatment;
step M4.2: using the multi-channel convolutional neural network to extract multi-channel convolution feature maps from the tumor regions of interest I_original^ROI and I_later^ROI paired before and after treatment, and obtaining the feature maps Z_original and Z_later through a concat operation;
step M4.3: performing an SE operation on the obtained Z_original and Z_later to finally obtain the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI.
5. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 1, wherein the step M5 comprises:
step M5.1: connecting a preset number of long short-term memory networks in parallel to construct a long short-term memory network based on multi-task learning;
step M5.2: initializing the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI through a fully connected layer to obtain h_0;
step M5.3: inputting h_0, the growth characteristic labels L = {l_1, l_2, l_3, …, l_n} and the preset start and end flags into the long short-term memory network; at time step t, converting the prediction l̂_{t-1} output at the previous time step t-1 into a feature vector through a word-embedding matrix, inputting the feature vector and the hidden state into the long short-term memory network at time step t, and obtaining the predicted label l̂_t by the dynamic ordering method;
step M5.4: obtaining a preset number of prediction vectors p_t from the preset long short-term memory networks to form a prediction matrix P = [p_1, p_2, …, p_n]; the multi-task-learning long short-term memory network predicts a new label at each time step until the minimum-loss alignment loss function converges, giving the trained brain tumor multi-target growth prediction model, and the predicted brain tumor growth characteristic labels L̂ are obtained from the trained brain tumor multi-target growth prediction model.
6. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 1, wherein the step M6 comprises:
step M6.1: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN network with a generative adversarial strategy;
step M6.2: inputting the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI and the brain tumor growth characteristic labels L = {l_1, l_2, l_3, …, l_n} into the prospective treatment visualization model; training the generator G of the generative adversarial network, generating the post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_G^ROI from the pre-treatment features F_original^ROI through the generator G, and obtaining the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G through the up-sampling network of the generator G;
step M6.3: the generated-image true/false discriminator D_RF compares the deep learning features F_G^ROI of the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G with the deep learning features F_later^ROI of the real post-treatment brain tumor ROI multi-modal MRI images to complete adversarial learning, finally obtaining the prospective treatment visualization model of brain tumor growth evolution;
step M6.4: inputting the obtained pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and the predicted brain tumor growth characteristic labels L̂ into the finally obtained prospective treatment visualization model of brain tumor growth evolution to obtain a plurality of different brain tumor growth evolution images, and selecting the brain tumor ROI growth evolution image that meets the preset requirement from the different brain tumor growth evolution images;
step M6.5: inserting the obtained brain tumor ROI growth evolution image that meets the preset requirement into the non-brain-tumor region I_background of I_original by the Poisson image editing method, finally obtaining the brain tumor multi-modal MRI image and completing the prospective brain tumor treatment visualization task.
7. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 6, wherein the step M6.2 comprises:
step M6.2.1: the final feature F_0 of the text encoder is F_0 = concat(z, ŝ), where z is a noise vector sampled from a standard normal distribution and ŝ is the feature of the predicted brain tumor growth characteristic labels L̂ extracted by the LSTM network;
step M6.2.2: the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI and the final text-encoder feature F_0 are combined by a concat operation and used as input to the tumor growth prediction generator G, and the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G is generated by the prediction generator G.
8. The brain tumor multi-target aided diagnosis and prospective treatment evolution visualization method according to claim 6, wherein the step M6.5 comprises:
step M6.5.1: binarizing the obtained plurality of different brain tumor growth evolution images to obtain the brain tumor mask I_G_mask;
step M6.5.2: performing a position-wise AND operation between each modality image in the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later and I_G_mask to obtain the background region (non-brain-tumor region) I_background of the brain tumor multi-target multi-modal MRI data;
step M6.5.3: inserting the target image into the source image I_background of the corresponding modality, setting the guidance field to the gradient of the target image and minimizing the gradient difference over the insertion region, thereby inserting the target image into the source image I_background of the corresponding modality and completing lesion insertion according to the corresponding expected image f.
9. A brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization system is characterized by comprising:
module M1: obtaining brain tumor multi-target multi-modal MRI data paired before and after treatment and preprocessing them to obtain standardized paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later;
module M2: based on the preprocessed paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later, segmenting the tumor region with a 3D U-net convolutional neural network to obtain the paired pre- and post-treatment tumor regions of interest I_original^ROI and I_later^ROI;
module M3: obtaining growth characteristic labels L = {l_1, l_2, l_3, …, l_n} of the paired pre- and post-treatment brain tumors from the paired tumor regions of interest I_original^ROI and I_later^ROI by an imaging omics method;
module M4: extracting features from the paired pre- and post-treatment tumor regions of interest I_original^ROI and I_later^ROI through a multi-channel convolutional neural network to obtain feature maps, and performing an SE fusion operation on the feature maps to obtain the final pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI;
module M5: using the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and the corresponding growth characteristic labels L = {l_1, l_2, l_3, …, l_n} to construct and train a long short-term memory network based on multi-task learning, and training it with a dynamic ordering method for the predicted label sequence to obtain the trained brain tumor multi-target growth prediction model; inputting the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI into the brain tumor multi-target growth prediction model to obtain the predicted brain tumor multi-target growth labels L̂;
module M6: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN network with a generative adversarial strategy; inputting the pre-treatment brain tumor ROI multi-modal MRI image deep learning features F_original^ROI and the predicted brain tumor growth characteristic labels L̂ into the trained prospective treatment visualization model to obtain the final brain tumor region-of-interest growth evolution image; using the prospective treatment visualization model, inserting the brain tumor region-of-interest growth evolution image into the non-brain-tumor region I_background of the pre-treatment brain tumor multi-target multi-modal MRI data I_original to obtain the final brain tumor multi-modal MRI image and complete the prospective brain tumor treatment visualization task;
the prospective treatment visualization model comprises a text encoder module, a tumor growth prediction generation visualization module and a tumor lesion insertion module; a GAN network is used to generate predicted tumor growth images: from the patient's existing multi-modal brain MRI images, the GAN network predicts the brain tumor MRI image at the corresponding time after treatment, completing the prospective visualization of the treatment result;
the tumor growth prediction generation visualization module comprises a generator G and a discriminator D_RF for judging whether a generated image is true or false.
10. The brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization system according to claim 9, wherein the preprocessing in the module M1 comprises performing data processing including desensitization, cleaning, resampling and skull stripping on the brain tumor multi-target multi-modal MRI data paired before and after treatment to obtain paired pre- and post-treatment brain tumor multi-target multi-modal MRI data with uniform resolution and the same gray-level distribution;
the module M3 comprises:
module M3.1: obtaining the imaging omics features of the brain tumor multi-target multi-modal MRI data I_original and I_later paired before and after treatment by an imaging omics method;
module M3.2: obtaining the growth characteristic labels through imaging omics feature calculation;
the imaging omics features comprise brain tumor target type, brain tumor volume, intensity mean and intensity standard deviation, gray level co-occurrence matrix and entropy;
the growth characteristic labels comprise brain tumor target type, brain tumor volume change, intensity mean and intensity standard deviation, gray level co-occurrence matrix and entropy;
the module M4 comprises:
module M4.1: training a multi-channel convolutional neural network with a preset number of convolution-activation layer modules using the preprocessed brain tumor multi-target multi-modal MRI data I_original and I_later paired before and after treatment;
module M4.2: using the multi-channel convolutional neural network to extract multi-channel convolution feature maps from the tumor regions of interest I_original^ROI and I_later^ROI paired before and after treatment, and obtaining the feature maps Z_original and Z_later through a concat operation;
module M4.3: performing an SE operation on the obtained Z_original and Z_later to finally obtain the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI;
the module M5 comprises:
module M5.1: connecting a preset number of long short-term memory networks in parallel to construct a long short-term memory network based on multi-task learning;
module M5.2: initializing the pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI through a fully connected layer to obtain h_0;
module M5.3: inputting h_0, the growth characteristic labels L = {l_1, l_2, l_3, …, l_n} and the preset start and end flags into the long short-term memory network; at time step t, converting the prediction l̂_{t-1} output at the previous time step t-1 into a feature vector through a word-embedding matrix, inputting the feature vector and the hidden state into the long short-term memory network at time step t, and obtaining the predicted label l̂_t by the dynamic ordering method;
module M5.4: obtaining a preset number of prediction vectors p_t from the preset long short-term memory networks to form a prediction matrix P = [p_1, p_2, …, p_n]; the multi-task-learning long short-term memory network predicts a new label at each time step until the minimum-loss alignment loss function converges, giving the trained brain tumor multi-target growth prediction model, and the predicted brain tumor growth characteristic labels L̂ are obtained from the trained brain tumor multi-target growth prediction model;
the module M6 comprises:
module M6.1: establishing a prospective treatment visualization model of brain tumor growth evolution based on a CNN network with a generative adversarial strategy;
module M6.2: inputting the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI and the brain tumor growth characteristic labels L = {l_1, l_2, l_3, …, l_n} into the prospective treatment visualization model; training the generator G of the generative adversarial network, generating the post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_G^ROI from the pre-treatment features F_original^ROI through the generator G, and obtaining the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G through the up-sampling network of the generator G;
module M6.3: the generated-image true/false discriminator D_RF compares the deep learning features F_G^ROI of the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G with the deep learning features F_later^ROI of the real post-treatment brain tumor ROI multi-modal MRI images to complete adversarial learning, finally obtaining the prospective treatment visualization model of brain tumor growth evolution;
module M6.4: inputting the obtained pre-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and the predicted brain tumor growth characteristic labels L̂ into the finally obtained prospective treatment visualization model of brain tumor growth evolution to obtain a plurality of different brain tumor growth evolution images, and selecting the brain tumor ROI growth evolution image that meets the preset requirement from the different brain tumor growth evolution images;
module M6.5: inserting the obtained brain tumor ROI growth evolution image that meets the preset requirement into the non-brain-tumor region I_background of I_original by the Poisson image editing method, finally obtaining the brain tumor multi-modal MRI image and completing the prospective brain tumor treatment visualization task;
the module M6.2 comprises:
module M6.2.1: the final feature F_0 of the text encoder is F_0 = concat(z, ŝ), where z is a noise vector sampled from a standard normal distribution and ŝ is the feature of the predicted brain tumor growth characteristic labels L̂ extracted by the LSTM network;
module M6.2.2: the pre- and post-treatment brain tumor region-of-interest multi-modal MRI image deep learning features F_original^ROI and F_later^ROI and the final text-encoder feature F_0 are combined by a concat operation and used as input to the tumor growth prediction generator G, and the predicted post-treatment brain tumor region-of-interest multi-modal MRI image I_G is generated by the prediction generator G;
the module M6.5 comprises:
module M6.5.1: binarizing the obtained plurality of different brain tumor growth evolution images to obtain the brain tumor mask I_G_mask;
module M6.5.2: performing a position-wise AND operation between each modality image in the paired pre- and post-treatment brain tumor multi-target multi-modal MRI data I_original and I_later and I_G_mask to obtain the background region (non-brain-tumor region) I_background of the brain tumor multi-target multi-modal MRI data;
module M6.5.3: inserting the target image into the source image I_background of the corresponding modality, setting the guidance field to the gradient of the target image and minimizing the gradient difference over the insertion region, thereby inserting the target image into the source image I_background of the corresponding modality and completing lesion insertion according to the corresponding expected image f.
CN202011279154.0A 2020-11-16 2020-11-16 Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system Active CN112365980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279154.0A CN112365980B (en) 2020-11-16 2020-11-16 Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011279154.0A CN112365980B (en) 2020-11-16 2020-11-16 Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system

Publications (2)

Publication Number Publication Date
CN112365980A true CN112365980A (en) 2021-02-12
CN112365980B CN112365980B (en) 2024-03-01

Family

ID=74515035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279154.0A Active CN112365980B (en) 2020-11-16 2020-11-16 Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system

Country Status (1)

Country Link
CN (1) CN112365980B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767420A (en) * 2021-02-26 2021-05-07 中国人民解放军总医院 Nuclear magnetic image segmentation method, device, equipment and medium based on artificial intelligence
CN113571203A (en) * 2021-07-19 2021-10-29 复旦大学附属华山医院 Multi-center federal learning-based brain tumor prognosis survival period prediction method and system
CN113628325A (en) * 2021-08-10 2021-11-09 海盐县南北湖医学人工智能研究院 Small organ tumor evolution model establishing method and computer readable storage medium
CN113717930A (en) * 2021-09-07 2021-11-30 复旦大学附属华山医院 Cranial carotid interlayer specific induced pluripotent stem cell line carrying FBN1 mutation
CN114927216A (en) * 2022-04-29 2022-08-19 中南大学湘雅医院 Method and system for predicting treatment effect of PD-1 of melanoma patient based on artificial intelligence
WO2022221991A1 (en) * 2021-04-19 2022-10-27 深圳市深光粟科技有限公司 Image data processing method and apparatus, computer, and storage medium
CN115861303A (en) * 2023-02-16 2023-03-28 四川大学 EGFR gene mutation detection method and system based on lung CT image

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
US20170357844A1 (en) * 2016-06-09 2017-12-14 Siemens Healthcare Gmbh Image-based tumor phenotyping with machine learning from synthetic data
CN109242860A (en) * 2018-08-21 2019-01-18 电子科技大学 Based on the brain tumor image partition method that deep learning and weight space are integrated
CN110136828A (en) * 2019-05-16 2019-08-16 杭州健培科技有限公司 A method of medical image multitask auxiliary diagnosis is realized based on deep learning
CN110265141A (en) * 2019-05-13 2019-09-20 上海大学 A kind of liver neoplasm CT images computer aided diagnosing method
EP3576020A1 (en) * 2018-05-30 2019-12-04 Siemens Healthcare GmbH Methods for generating synthetic training data and for training deep learning algorithms for tumor lesion characterization, method and system for tumor lesion characterization, computer program and electronically readable storage medium
WO2020028382A1 (en) * 2018-07-30 2020-02-06 Memorial Sloan Kettering Cancer Center Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy
US20200160997A1 (en) * 2018-11-02 2020-05-21 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
CN111210909A (en) * 2020-01-13 2020-05-29 青岛大学附属医院 Deep neural network-based rectal cancer T stage automatic diagnosis system and construction method thereof
CN111445946A (en) * 2020-03-26 2020-07-24 北京易康医疗科技有限公司 Calculation method for calculating lung cancer genotyping by using PET/CT (positron emission tomography/computed tomography) images
CN111584073A (en) * 2020-05-13 2020-08-25 山东大学 Artificial intelligence fusion multi-modal information-based diagnosis model for constructing multiple pathological types of benign and malignant pulmonary nodules
CN111599464A (en) * 2020-05-13 2020-08-28 吉林大学第一医院 Novel multi-modal fusion auxiliary diagnosis method based on rectal cancer imaging omics research
WO2020190821A1 (en) * 2019-03-15 2020-09-24 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography
KR20200114228A (en) * 2019-03-28 2020-10-07 한국과학기술원 Method and system for predicting isocitrate dehydrogenase (idh) mutation using recurrent neural network

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357844A1 (en) * 2016-06-09 2017-12-14 Siemens Healthcare Gmbh Image-based tumor phenotyping with machine learning from synthetic data
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
EP3576020A1 (en) * 2018-05-30 2019-12-04 Siemens Healthcare GmbH Methods for generating synthetic training data and for training deep learning algorithms for tumor lesion characterization, method and system for tumor lesion characterization, computer program and electronically readable storage medium
WO2020028382A1 (en) * 2018-07-30 2020-02-06 Memorial Sloan Kettering Cancer Center Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy
CN109242860A (en) * 2018-08-21 2019-01-18 电子科技大学 Based on the brain tumor image partition method that deep learning and weight space are integrated
US20200160997A1 (en) * 2018-11-02 2020-05-21 University Of Central Florida Research Foundation, Inc. Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
WO2020190821A1 (en) * 2019-03-15 2020-09-24 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography
KR20200114228A (en) * 2019-03-28 2020-10-07 한국과학기술원 Method and system for predicting isocitrate dehydrogenase (idh) mutation using recurrent neural network
CN110265141A (en) * 2019-05-13 2019-09-20 上海大学 A kind of liver neoplasm CT images computer aided diagnosing method
CN110136828A (en) * 2019-05-16 2019-08-16 杭州健培科技有限公司 A method of medical image multitask auxiliary diagnosis is realized based on deep learning
CN111210909A (en) * 2020-01-13 2020-05-29 青岛大学附属医院 Deep neural network-based rectal cancer T stage automatic diagnosis system and construction method thereof
CN111445946A (en) * 2020-03-26 2020-07-24 北京易康医疗科技有限公司 Calculation method for calculating lung cancer genotyping by using PET/CT (positron emission tomography/computed tomography) images
CN111584073A (en) * 2020-05-13 2020-08-25 山东大学 Artificial intelligence fusion multi-modal information-based diagnosis model for constructing multiple pathological types of benign and malignant pulmonary nodules
CN111599464A (en) * 2020-05-13 2020-08-28 吉林大学第一医院 Novel multi-modal fusion auxiliary diagnosis method based on rectal cancer imaging omics research

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU, X: "BTSC-TNAS: A neural architecture search-based transformer for brain tumor segmentation and classification", COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, pages 110 *
WANG Jincheng; YU Yun; YANG Kun; HU Xinhua: "Brain tumor MRI image segmentation based on BP neural network", Biomedical Engineering Research, no. 04, 15 December 2016 (2016-12-15), pages 76 - 79 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767420A (en) * 2021-02-26 2021-05-07 中国人民解放军总医院 Nuclear magnetic image segmentation method, device, equipment and medium based on artificial intelligence
CN112767420B (en) * 2021-02-26 2021-11-23 中国人民解放军总医院 Nuclear magnetic image segmentation method, device, equipment and medium based on artificial intelligence
WO2022221991A1 (en) * 2021-04-19 2022-10-27 深圳市深光粟科技有限公司 Image data processing method and apparatus, computer, and storage medium
CN113571203A (en) * 2021-07-19 2021-10-29 复旦大学附属华山医院 Multi-center federal learning-based brain tumor prognosis survival period prediction method and system
CN113571203B (en) * 2021-07-19 2024-01-26 复旦大学附属华山医院 Multi-center federal learning-based brain tumor prognosis survival prediction method and system
CN113628325A (en) * 2021-08-10 2021-11-09 海盐县南北湖医学人工智能研究院 Small organ tumor evolution model establishing method and computer readable storage medium
CN113628325B (en) * 2021-08-10 2024-03-26 海盐县南北湖医学人工智能研究院 Model building method for small organ tumor evolution and computer readable storage medium
CN113717930A (en) * 2021-09-07 2021-11-30 复旦大学附属华山医院 Cranial carotid interlayer specific induced pluripotent stem cell line carrying FBN1 mutation
CN114927216A (en) * 2022-04-29 2022-08-19 中南大学湘雅医院 Method and system for predicting treatment effect of PD-1 of melanoma patient based on artificial intelligence
CN115861303A (en) * 2023-02-16 2023-03-28 四川大学 EGFR gene mutation detection method and system based on lung CT image
CN115861303B (en) * 2023-02-16 2023-04-28 四川大学 EGFR gene mutation detection method and system based on lung CT image

Also Published As

Publication number Publication date
CN112365980B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
JP7143008B2 (en) Medical image detection method and device based on deep learning, electronic device and computer program
CN112365980B (en) Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system
Kim et al. Prospects of deep learning for medical imaging
Munawar et al. Segmentation of lungs in chest X-ray image using generative adversarial networks
Xu et al. Noisy labels are treasure: mean-teacher-assisted confident learning for hepatic vessel segmentation
US20230342918A1 (en) Image-driven brain atlas construction method, apparatus, device and storage medium
CN110097921B (en) Visualized quantitative method and system for glioma internal gene heterogeneity based on image omics
CN113569891A (en) Training data processing device, electronic equipment and storage medium of neural network model
Yang et al. Deep hybrid convolutional neural network for segmentation of melanoma skin lesion
Davamani et al. Biomedical image segmentation by deep learning methods
Patel An Overview and Application of Deep Convolutional Neural Networks for Medical Image Segmentation
Murmu et al. A novel Gateaux derivatives with efficient DCNN-Resunet method for segmenting multi-class brain tumor
CN117079801B (en) Colorectal cancer prognosis risk prediction system
Qin et al. Application of artificial intelligence in diagnosis of craniopharyngioma
CN115190999A (en) Classifying data outside of a distribution using contrast loss
Hu et al. Automatic detection of melanins and sebums from skin images using a generative adversarial network
CN116759076A (en) Unsupervised disease diagnosis method and system based on medical image
Luong et al. A computer-aided detection to intracranial hemorrhage by using deep learning: a case study
CN115458161A (en) Breast cancer progression analysis method, device, apparatus, and medium
CN112633405A (en) Model training method, medical image analysis device, medical image analysis equipment and medical image analysis medium
CN113796850A (en) Parathyroid MIBI image analysis system, computer device, and storage medium
Al-Eiadeh Automatic Lung Field Segmentation using Robust Deep Learning Criteria
Ramakrishnan et al. Automated lung cancer nodule detection
Li et al. CAGAN: Classifier‐augmented generative adversarial networks for weakly‐supervised COVID‐19 lung lesion localisation
Pepe et al. Deep learning and generative adversarial networks in oral and maxillofacial surgery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant