CN113077887B - Automatic quantitative analysis system and interpretation method for white matter lesions of brain - Google Patents
- Publication number
- CN113077887B CN113077887B CN202110315174.7A CN202110315174A CN113077887B CN 113077887 B CN113077887 B CN 113077887B CN 202110315174 A CN202110315174 A CN 202110315174A CN 113077887 B CN113077887 B CN 113077887B
- Authority
- CN
- China
- Prior art keywords
- white matter
- brain
- image
- data
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G16H50/20 — ICT specially adapted for computer-aided medical diagnosis, e.g. based on medical expert systems
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/08 — Neural network learning methods
- G06T7/0012 — Image analysis; biomedical image inspection
- G06T7/11 — Segmentation; region-based segmentation
- G06T2207/10088 — Image acquisition modality: tomographic images, magnetic resonance imaging [MRI]
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30016 — Subject of image: biomedical image processing, brain
Abstract
An automatic quantitative analysis system and interpretation method for brain white matter lesions. The method uses artificial intelligence to delineate lesions automatically and accurately and to compute brain white matter lesion volume precisely, making accurate assessment of disease progression possible. An automatic interpretation system is also provided: through human-machine interaction it determines multidimensional lesion information such as anatomical location, morphological structure, lesion size, time of onset and presence of bleeding, achieving visual lesion localization, quantified imaging indices, standardized report terminology and a user-friendly operating interface. Logical analysis is performed automatically to reach an accurate disease judgment, avoiding conclusions that contradict the description or omit important findings.
Description
Technical Field
The present application relates to the field of medical imaging, and in particular to a method for automatic quantitative lesion-volume analysis and intelligent disease interpretation based on a patient's multi-modal MRI images.
Background
The brain is the most complex organ of the human body and a core component of the nervous system. By tissue structure, the brain is divided into three major parts: gray matter, white matter and cerebrospinal fluid. Vascular, inflammatory, demyelinating and other diseases can all cause brain white matter lesions, producing symptoms such as limb weakness, cognitive impairment, aphasia and epilepsy; relying on clinical symptoms alone, patients are difficult to diagnose accurately in the early stage of disease. Magnetic resonance imaging (MRI) has high soft-tissue resolution and involves no radiation, so it plays a vital role in examining nervous system diseases, particularly brain lesions; identifying white matter lesions on MRI images supports early, accurate judgment and the formulation of treatment plans. However, the imaging appearances of these diseases are complex and partly similar, so image reading currently depends mainly on the subjective experience and judgment of radiologists, lacking quantitative analysis, information integration and intuitive visual display. Moreover, because physicians differ in seniority and writing habits, report content and terminology vary and a standardized report-writing format is lacking; reporting is time- and labor-consuming, accurate disease judgment is difficult, patient treatment may be delayed, and the value of imaging examination is not fully realized.
More importantly, many diseases underlying brain white matter lesions are chronic, and long-term follow-up and observation are needed to evaluate changes in disease state. Existing image reporting relies only on visual inspection, cannot measure lesion size objectively and accurately, and therefore falls short in disease follow-up, efficacy evaluation and prognosis prediction.
The present application uses artificial intelligence to delineate lesions automatically and accurately and to compute brain white matter lesion volume precisely, making accurate assessment of disease progression possible. It further provides an automatic interpretation system that determines multidimensional lesion information (anatomical location, morphological structure, lesion size, time of onset, bleeding and so on) through human-machine interaction, achieving visual lesion localization, quantified imaging indices, standardized report terminology and a user-friendly operating interface; logical analysis is performed automatically to reach an accurate disease judgment and to avoid conclusions that contradict the description or omit important findings.
Disclosure of Invention
An automated quantitative analysis system for brain white matter lesions comprises a clinical information knowledge base module, an anatomical pattern map module, an image feature standardized description module, an image data quantitative calculation module and a report generation module. The clinical information knowledge base module comprises a selection knowledge item base and a manual input unit: the selection knowledge item base provides commonly used clinical input options, while the manual input unit accepts clinical and medical history data related to the patient's images. The anatomical pattern map module comprises a visual intracranial tomographic map, a dot-diagram anatomical structure displayed on a planar map that precisely defines lesion distribution and position; the brain structures in the map comprise the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem. The image feature standardized description module comprises a human-computer interaction interface with a preset indication part and an input part. The image data quantitative calculation module comprises an intracranial white matter lesion segmentation module, a module that calculates the long and short diameters of the largest white matter lesion in the whole brain, and a module that calculates the total intracranial white matter lesion volume.
Preferably, the automated quantitative analysis system for brain white matter lesions further comprises a neural network unit and/or an image structure interpretation module.
In the analysis method of the automated quantitative analysis system for brain white matter lesions, the clinical information knowledge base module provides a selection knowledge item base and manual input: the selection knowledge item base offers commonly used clinical input options, with the manual input unit as supplementary content. Commonly used entries are first checked off in the knowledge item base; if they do not suffice, the manual input unit is used to supplement them. The intracranial white matter lesion segmentation module segments the brain white matter lesions on the MRI images; the corresponding calculation modules then compute the long and short diameters of the largest lesion and the total intracranial white matter lesion volume.
Preferably, the analysis method judges the patient's correct disease category through the image structure interpretation module.
Preferably, the analysis method outputs the disease name for the examined region through the neural network unit.
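The total-volume calculation performed by the intracranial white matter lesion volume module reduces, in principle, to counting segmented voxels and multiplying by the physical volume of one voxel. A minimal sketch of that arithmetic (numpy; the function name and the millilitre conversion are illustrative, not the patent's implementation):

```python
import numpy as np

def lesion_volume_ml(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Total lesion volume = number of segmented voxels x volume of one voxel.
    mask: binary segmentation output; voxel_dims_mm: acquisition spacing."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0   # mm^3 -> millilitres
```

For example, a segmentation of 1000 voxels at 1 mm isotropic spacing yields 1.0 ml; the same voxel count at 0.5 x 0.5 x 2.0 mm spacing yields 0.5 ml.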
1. The clinical information knowledge base module:
This module provides a selection knowledge item base and a manual input unit: the selection knowledge item base offers commonly used clinical input options, with the manual input unit as supplementary content. Commonly used entries in the knowledge item base are checked off first; if they do not suffice, the manual input unit is used to supplement them. The module supplies clinical and medical history data related to the patient's images, which are integrated with the imaging findings so that the final image structure interpretation module can reach the correct interpretation for the patient.
1) Structured report usage scope: MRI examination of leukoencephalopathy;
3) age: □ under 65 years □ 65 to 75 years □ above 75 years;
4) vascular risk factors: □ none □ hypertension □ hyperlipidemia □ diabetes □ smoking history □ obesity □ other (e.g. hypercoagulable blood state, vasculitis, migraine, etc. [ ]) □ unknown;
4) □ and others: [].
2. An anatomical pattern map module:
A visual intracranial tomographic map is designed on which a dot-diagram anatomical structure is displayed on a planar map, so that lesion distribution and position can be precisely defined. Operation is simple, easy to learn and produces standardized output; even a clinical beginner can master it quickly. Brain structures in the anatomical map include the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem, with specific contents shown in figs. 1-4:
FIG. 1 is a frontal lobe configuration tomographic view of the present application;
FIG. 2 is a tomographic view of the basal ganglia structure of the present application;
FIG. 3 is a frontal temporal occipital lobe configuration tomograph of the present application;
FIG. 4 is a diagram of the structure of the cerebellum and brainstem of the present application;
Lesion locations in the anatomical map include, for supratentorial structures: juxtacortical, subcortical or deep, and periventricular (fig. 5); and for infratentorial structures: peripheral and central;
fig. 5 is a lesion location distribution indicator map of the present application.
3. Image characteristic standardized description module
The human-computer interaction interface provides a preset indication part and an input part. The preset indication part shows preset guidance information; based on the patient's MRI images and that guidance, the user enters MRI evaluation parameters in the input part. A parameter may be a preset field-type parameter, displayed together with its corresponding input part according to the user's previous operation so that it can be selected intuitively in single-choice or multiple-choice mode, or a numerical parameter that the user fills in, whose input part is likewise displayed according to the previous operation. The parameters entered by the user can be stored as computer-readable data in a memory module attached to the system or in a stand-alone memory module. With the preset content of this patent, a physician can, by simple click operations in the image module, call standard fields from the database and generate report content in a standard format.
1) Overall assessment (ARWMC scale score): □ grade 0 (no abnormal signal) □ grade 1 (scattered punctate foci) □ grade 2 (partially confluent) □ grade 3 (confluent);
2) signal: image preprocessing defines the 3 acquired sequences or parameter maps as 3 modalities forming a structural modality set: T1-weighted imaging (T1WI), T2-weighted imaging (T2WI) and fluid-attenuated inversion recovery (FLAIR). These sequences clearly and intuitively present morphological features of brain white matter lesions such as position, size, boundary and shape. The contrast-enhanced T1WI (T1-CE) sequence after gadolinium contrast injection indirectly reflects the degree of lesion activity and invasion of surrounding tissue by evaluating contrast leakage caused by lesion disruption of the blood-brain barrier (BBB). Diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps reflect the pathological state of white matter by characterizing water molecule diffusion; they detect vascular lesions at an early stage and help identify lesions in the acute phase.
a) T1: □ high □ iso □ low;
b) T2: □ high □ iso □ low;
c) T2-FLAIR: □ high □ iso □ low □ central low with high rim;
d) DWI: □ high □ iso □ low;
e) ADC: □ high □ iso □ low;
3) morphology: □ round/punctate □ oval/fusiform (□ Dawson's finger) □ irregular;
4) enhancement pattern: □ no enhancement □ open ring □ uniform enhancement □ non-uniform enhancement □ punctate enhancement;
5) microbleeds: □ none □ fewer than 5 (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum)) □ multiple (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum)).
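The checklist items above are the kind of standard database fields the report generation module assembles. A hedged sketch of turning clicked selections into a fixed-format report string (the field names and template are illustrative, not the patent's actual schema):

```python
def build_report(fields):
    """Render the clicked checklist fields into a fixed-format report line.
    Field names and template are illustrative placeholders."""
    template = ("Overall ARWMC grade: {grade}. Signal: T1 {t1}, T2 {t2}, "
                "FLAIR {flair}, DWI {dwi}, ADC {adc}. Morphology: {shape}. "
                "Enhancement: {enhance}. Microbleeds: {microbleeds}.")
    return template.format(**fields)

report = build_report({
    "grade": "2 (partially confluent)",
    "t1": "low", "t2": "high", "flair": "high", "dwi": "iso", "adc": "iso",
    "shape": "oval (Dawson's finger)",
    "enhance": "none",
    "microbleeds": "none",
})
```

Because every physician's clicks map to the same template, the generated descriptions use identical terminology regardless of seniority or writing habits, which is the standardization goal stated above.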
4. Image data quantitative calculation module:
a) intracranial white matter lesion segmentation module:
the image segmentation method comprises the following steps:
Step 1: fuse the multi-modal brain white matter MRI images, preprocess the fused data, and sample three-dimensional small image blocks from each image volume as training data.
Brain MRI can be divided into four modalities according to acquisition conditions: the T1-weighted, T1ce (contrast-enhanced), T2-weighted and FLAIR modalities; different modalities display different features of brain white matter. Fusing brain images of different modalities before feeding them to the network for training enhances lesion features and improves the accuracy of white matter lesion detection.
The j-th image of the fused three-dimensional white matter image x can be represented as

x_j = [I_F(j), I_T1(j), I_T1ce(j), I_T2(j)]

where I denotes the brain white matter image in each modality and the subscripts F, T1, T1ce and T2 denote the four modalities. According to the characteristics of each modality, white matter images of different modalities are selected and fused to construct the whole training set T:

T = {(x_1, y_1), …, (x_k, y_k), …, (x_m, y_m)}

where (x_k, y_k) denotes the k-th training sample and y_k ∈ {0, 1} is the label of the k-th sample, indicating whether it contains a lesion.
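The construction of x_j and the training set T above can be sketched as follows (numpy; the tiny toy volumes stand in for real co-registered MRI data, and the helper names are illustrative):

```python
import numpy as np

def fuse_modalities(flair, t1, t1ce, t2):
    """Stack the four co-registered MRI modalities into one multi-channel
    volume, i.e. x_j = [I_F(j), I_T1(j), I_T1ce(j), I_T2(j)] per voxel."""
    return np.stack([flair, t1, t1ce, t2], axis=0)

def build_training_set(volumes, labels):
    """Pair each fused volume x_k with its lesion label y_k in {0, 1}."""
    return list(zip(volumes, labels))

# toy 4x4x4 single-modality volumes standing in for real MRI data
shape = (4, 4, 4)
mods = [np.full(shape, v, dtype=np.float32) for v in (0.0, 1.0, 2.0, 3.0)]
x = fuse_modalities(*mods)
T = build_training_set([x], [1])
print(x.shape)   # (4, 4, 4, 4): modality channels first, then the volume
```

Stacking the modalities channel-wise lets the first convolution layer see all four intensity patterns of the same voxel at once, which is what "fusing" buys over training on each sequence separately.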
When all input data share the same distribution, data jitter is reduced and model convergence is accelerated; the present application therefore normalizes the brain tissue gray levels in each image to mean 0 and variance 1. To reduce invalid image information, the large number of zero-valued background pixels around the brain tissue is removed, and only image slices containing pathological data are selected as training data.
To address class imbalance in the data set, the present application adopts three-dimensional small-image-block sampling for training: 70 three-dimensional image blocks of size 32 × 32 are sampled from each case as training data, with each block chosen randomly in the following proportions: background 1%, normal tissue 29%, diseased tissue 70%. The sampled three-dimensional blocks are additionally flipped along the sagittal plane, doubling the training set.
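The ratio-controlled patch sampling and sagittal-flip augmentation described above might be sketched as follows (numpy; the block size is configurable, the helper names are illustrative, and the choice of axis 0 as the sagittal axis is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patches(volume, label_map, n=70, size=32, ratios=(0.01, 0.29, 0.70)):
    """Sample n cubic patches per case, centred ~1% on background (class 0),
    ~29% on normal tissue (class 1) and ~70% on lesion (class 2).
    Assumes every class with a nonzero count is present in label_map."""
    counts = [int(round(n * r)) for r in ratios]
    counts[-1] += n - sum(counts)            # absorb rounding error
    half = size // 2
    patches = []
    for cls, cnt in enumerate(counts):
        idx = np.argwhere(label_map == cls)
        for _ in range(cnt):
            c = idx[rng.integers(len(idx))]
            # clamp the corner so the patch stays inside the volume
            lo = [min(max(ci - half, 0), s - size)
                  for ci, s in zip(c, volume.shape)]
            patches.append(volume[lo[0]:lo[0]+size,
                                  lo[1]:lo[1]+size,
                                  lo[2]:lo[2]+size])
    return patches

def flip_augment(patches, axis=0):
    """Flip every patch along the (assumed) sagittal axis: training set x2."""
    return patches + [np.flip(p, axis=axis) for p in patches]
```

Biasing 70% of patches onto lesion voxels counteracts the class imbalance noted above, since lesions occupy only a small fraction of intracranial volume.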
Step 2: and establishing a three-dimensional full-convolution neural network, and adding an example normalization layer. The DenseNet dense connection structure has the characteristic of high feature reuse rate, the invention adopts an improved three-dimensional full-convolution DenseNet model structure (called DenseNet _ Base in the invention) to extract and segment the features of white matter images of the brain, and an example normalization layer is added to relieve the problems of data shock and low model convergence speed in training.
The DenseNet _ Base network structure adopted by the invention is divided into a down-sampling channel and an up-sampling channel, and the two channels are connected by a dense connection block (DB). The densely packed block is the basic module of the DenseNet, and each DB used in the present invention consists of 4 convolution modules, and the input of each layer of the network includes the image features learned by all the previous layers. The up-sampling path and the down-sampling path are composed of 3 DBs and corresponding transition sampling modules, and one transition sampling module (TD or TU) is arranged between every two DBs. The initial input profile number and growth rate for the network are 48 and 12, respectively.
In deep neural networks, large distribution differences between different input data cause data jitter that makes network training hard to converge; data normalization effectively alleviates this oscillation. The original DenseNet model uses batch normalization, which works well when the training batch is large. However, because of memory and computational limits in the segmentation task, typically only 1 image can be processed at a time, rendering batch normalization ineffective in this case. The invention introduces instance normalization in place of the batch normalization layers of the original DenseNet model, solving the data-jitter problem and accelerating convergence of the brain white matter detection network. The instance normalization is calculated as follows:
First, the mean μ_c of each white matter image is calculated along the channel:

μ_c = (1/(H·W)) · Σ_{i=1..W} Σ_{j=1..H} F_{c,i,j}
where subscripts c, i, j denote the channel, width, and height indices of the input white matter image, respectively, F denotes the pixel values of the input white matter image, and W, H denote the width and height of the input white matter image, respectively.
Next, the variance σ_c² of each white matter image is calculated along the channel, and the image is normalized with it:

σ_c² = (1/(H·W)) · Σ_{i=1..W} Σ_{j=1..H} (F_{c,i,j} − μ_c)²

F′_{c,i,j} = (F_{c,i,j} − μ_c) / √(σ_c² + ε)
where ε is a small positive number that prevents the denominator from being 0. Adding instance normalization in each layer avoids vanishing and exploding gradients, reduces the network model's dependence on weight initialization and the like, accelerates network convergence, and also acts as a regularizer, reducing the network's need for other measures against overfitting.
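A minimal NumPy sketch of the instance-normalization computation above — per-sample, per-channel statistics over the spatial dimensions only, which is why it remains valid at batch size 1:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Mean and variance are taken over H and W only,
    # so each image and channel is normalized independently of the batch.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

For the 3-D volumes used here the same idea extends to shape (N, C, D, H, W) with statistics over axes (2, 3, 4).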
Step 3: improve the Dice loss function by increasing the weight of the brain white matter lesion region, so that model training focuses more on feature learning of that region; and construct a multi-loss-function structure that splits the segmentation of the different white matter MRI lesion classes into multiple output branches, so that the convolution kernels undergo refined learning and training.
In commonly used target segmentation tasks, the usual loss function is the Dice loss, calculated as:

L_Dice = 1 − (2·Σ_i p_i·g_i) / (Σ_i p_i + Σ_i g_i)
where p_i and g_i respectively denote the value of the i-th pixel in the brain white matter network's detection result and in the label. In three-dimensional MRI brain white matter images, because of the specificity of medical images, the lesion region occupies a much smaller share of the whole image than in natural images, while the non-lesion region dominates. With the traditional Dice loss, the network tends during training to learn the features of the non-lesion region and cannot effectively extract lesion-region features, causing false and missed detections. Therefore, to improve the network's ability to learn the brain white matter lesion region, the traditional Dice loss is improved; the improved loss is calculated as follows:
In the formula above, the region corresponding to g_i is the brain white matter lesion region; the g_i term is up-weighted so that the ratio of the prediction term to the label term in the loss function becomes 1:3. With this weighting, the loss function assigns a larger loss coefficient to the label, which strengthens the network's feature learning of the lesion region, weakens the loss assigned to non-lesion regions, reduces the interference of the brain MRI background with lesion feature learning, and improves the network's detection accuracy.
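One plausible reading of the weighted Dice loss — the patent's exact formula is an image, so applying the 1:3 prediction-to-label weighting to the label term of the denominator is an assumption:

```python
import numpy as np

def weighted_dice_loss(pred, label, label_weight=3.0, eps=1e-6):
    # pred, label: arrays of per-voxel values in [0, 1]. Up-weighting
    # label.sum() makes missed lesion voxels cost more than false positives.
    inter = float((pred * label).sum())
    denom = float(pred.sum()) + label_weight * float(label.sum()) + eps
    return 1.0 - 2.0 * inter / denom
```

Note that with this weighting a perfect prediction no longer yields a loss of exactly 0, which is acceptable for training since the gradient still favors lesion overlap.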
The invention uses a brain white matter segmentation data set whose goal is accurate segmentation of 3 nested lesion-region classes: the whole lesion region adds one class of edematous tissue to the lesion core region, and the lesion core region adds two classes, necrotic tissue and non-enhancing tissue, to the enhancing lesion region. With only gray-level features and fuzzy boundaries, accurate white matter segmentation is difficult, since a single convolution kernel would need to distinguish the features of all regions. To address the difficulty of learning multi-region features with a single convolution kernel, the invention improves the last layers of the network: 3 parallel network branches are added after the last DB of DenseNet_Base. Each branch consists of 2 DB layers and a 1×1 convolution kernel, corresponding respectively to the 3 regions to be segmented: the whole lesion region, the lesion core region, and the enhancing lesion region. Each branch uses the improved Dice loss of the invention as its loss function.
Step 4: using a suitable optimizer, learning rate, and other hyper-parameters, input the training data into the model for training until the loss function falls low enough; stop after the model converges and save the model. Inputting the training-set data into the saved model yields the output white matter MRI image segmentation result.
b) Calculation module for the major and minor diameters of the largest brain white matter lesion in the whole brain:
the implementation method comprises the following steps:
Calculation of the maximum lesion major diameter:
For each lesion region, let P be the set of voxels of the segmented lesion region and M = {m_1, m_2, m_3, …, m_n} the set of lesion-edge voxels, where m_i ∈ R³. The following steps are performed iteratively:
(1) Arbitrarily select two points m_i(x_1, y_1, z_1), m_j(x_2, y_2, z_2) ∈ M, with i, j = 1…n and i ≠ j, to form a segment m_i m_j.
(2) The longitudinal slices of the MRI image can be denoted z = n (n an integer). Suppose z_1 ≤ z_2; for n ∈ [z_1, z_2], take the intersections of segment m_i m_j with the longitudinal slices of the MRI image.
(4) Calculate the length |m_i m_j| of segment m_i m_j:

|m_i m_j| = √( (Δi·(x_2 − x_1))² + (Δi·(y_2 − y_1))² + (Δj·(z_2 − z_1))² )

where Δi denotes the in-plane resolution of the slice image and Δj denotes the slice thickness.
(5) Judge whether all point-pair combinations in the set M have been processed; if so, go to step (6); otherwise, return to step (1).
(6) Calculate the maximum segment length L_max = max(|m_i m_j|); L_max is the maximum lesion major diameter.
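The iterative point-pair search in steps (1)–(6) amounts to a brute-force farthest-pair computation over the edge voxels. A sketch using the Δi/Δj spacing symbols defined above (the per-slice intersection bookkeeping is folded into the single distance formula):

```python
import math
from itertools import combinations

def max_lesion_diameter(edge_voxels, di, dj):
    # edge_voxels: list of (x, y, z) integer indices on the lesion edge.
    # di: in-plane resolution (delta-i), dj: slice thickness (delta-j).
    best, ends = 0.0, None
    for (x1, y1, z1), (x2, y2, z2) in combinations(edge_voxels, 2):
        d = math.sqrt((di * (x2 - x1)) ** 2
                      + (di * (y2 - y1)) ** 2
                      + (dj * (z2 - z1)) ** 2)
        if d > best:
            best, ends = d, ((x1, y1, z1), (x2, y2, z2))
    return best, ends
```

This is O(n²) in the number of edge voxels, which is acceptable for single-lesion edge sets; for very large lesions a convex-hull prefilter would reduce the pair count.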
Minor-diameter calculation:
Let m_p(x_p, y_p, z_p) and m_q(x_q, y_q, z_q) be the two end points of the segment realizing the maximum lesion length. The midpoint m_c of segment m_p m_q can be expressed as:

m_c = ((x_p + x_q)/2, (y_p + y_q)/2, (z_p + z_q)/2)
The direction vector of line m_p m_q is:

v = (x_q − x_p, y_q − y_p, z_q − z_p)
Then the plane containing the minor diameter is:

(x_q − x_p)(x − x_c) + (y_q − y_p)(y − y_c) + (z_q − z_p)(z − z_c) = 0
Take the intersection S of the voxels lying on this plane with the voxels in the set P, let P ← S, and let M be the set of lesion-edge voxels in S; the lesion minor diameter L_min is then obtained in the same way as the maximum lesion length.
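The midpoint, direction vector, and perpendicular plane above can be sketched as follows; the distance tolerance used to decide which discrete voxels count as lying on the plane is an assumption:

```python
def short_axis_plane(p, q):
    # Midpoint of the major-axis segment and its direction vector;
    # the minor-diameter plane is n . (v - c) = 0.
    c = tuple((a + b) / 2.0 for a, b in zip(p, q))
    n = tuple(b - a for a, b in zip(p, q))
    return n, c

def on_plane(voxel, n, c, tol=0.5):
    # A voxel belongs to the cross-section S when its distance to the
    # plane is below tol (half a voxel here, by assumption).
    norm = sum(x * x for x in n) ** 0.5
    dist = abs(sum(ni * (vi - ci) for ni, vi, ci in zip(n, voxel, c))) / norm
    return dist <= tol
```

Filtering the lesion voxel set P with `on_plane` yields S, whose edge voxels feed back into the farthest-pair routine to give L_min.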
c) Calculation module for the total volume of intracranial white matter lesions:
The volume calculation formula is:

V_T = Σ_{i=1..n} S_i · (h + l)
where h is the slice thickness, S_i is the white matter lesion area of the i-th slice (i = 1, …, n), l is the inter-slice spacing, and V_T is the total volume.
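Reading the symbols above in the standard slice-summation way — each slice's lesion area times (slice thickness plus inter-slice gap); this reading is an assumption, since the formula itself is an image in the patent:

```python
def lesion_volume(slice_areas, h, l):
    # slice_areas: lesion area S_i on each slice, in cm^2;
    # h: slice thickness; l: inter-slice spacing (both in cm).
    # Returns the total volume V_T in cm^3.
    return sum(slice_areas) * (h + l)
```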
5. Image structure interpretation module:
The medical image information and manually entered information extracted from the clinical information knowledge base module, the anatomical pattern map module, and the image-feature standardized description module are logically analyzed and compared with preset information in a database to judge the patient's correct disease category; the result of the quantitative data calculation module is integrated, and the result is output in structured form through the report generation module.
Interpretation criteria:
(1) Overall evaluation is grade 1 or grade 2:
a) Age greater than 65 years, overall evaluation grade 1: judged as scattered punctate abnormal signals in the brain white matter, consistent with age-related changes; please correlate clinically.
b) Age greater than 75 years, overall evaluation grade 1 or 2: judged as scattered punctate abnormal signals in the brain white matter, consistent with age-related changes; please correlate clinically.
c) Grade 0: judged as no abnormal signal in the intracranial brain parenchyma.
(2) Overall evaluation was grade 3:
a) If the lesion location is periventricular with a "Dawson finger" shape; or the lesion distribution is cerebellum with a peripheral location; or brainstem with a peripheral location; or the lesion location is "juxtacortical"; or "corpus callosum"; or the enhancement pattern is "open ring" — if any of these is satisfied, judge as: tending toward a perivascular pattern, inflammatory demyelinating disease possible (□ multiple sclerosis (MS) □ acute disseminated encephalomyelitis (ADEM) □ neuromyelitis optica (NMO) □ Lyme disease □ other [ ]); please correlate clinically.
b) If the lesion location is periventricular, the shape is oval or fusiform, and the clinical data show no vascular risk factor, judge as: tending toward a vascular pattern, leukoaraiosis changes; please correlate clinically.
c) If the lesion location is periventricular, the shape is oval or fusiform, and the clinical data indicate age over 65 or a vascular risk factor, judge as: tending toward a vascular pattern, leukoaraiosis changes, suggesting possible ischemic changes; please correlate clinically.
d) If the lesion distribution is cerebellum or brainstem with a peripheral location; or the distribution is the "basal ganglia region"; or the location is "subcortical", "deep subcortical non-edge region", or "subcortical edge region"; or the clinical data include a "vascular risk factor"; or microbleeds are present — if any of these is satisfied, judge as: tending toward a vascular pattern, suggesting arteriolar-occlusive cerebral infarction (white matter changes associated with small-vessel disease); please correlate clinically.
e) On the basis of d), if the signals additionally show "DWI high signal" and "ADC low signal", judge as: tending toward a vascular pattern, suggesting recent arteriolar-occlusive cerebral infarction (white matter changes associated with small-vessel disease); please correlate clinically.
f) If the signal is "FLAIR low signal", judge as: □ tending toward perivascular space; □ tending toward a vascular pattern, suggesting old arteriolar-occlusive cerebral infarction (white matter changes associated with small-vessel disease).
g) If none of the above applies, select manually: □ tending toward a perivascular pattern, inflammatory demyelinating disease (□ multiple sclerosis (MS) □ acute disseminated encephalomyelitis (ADEM) □ neuromyelitis optica (NMO) □ Lyme disease □ other [ ]); □ tending toward a vascular pattern, suggesting arteriolar-occlusive cerebral infarction (white matter changes associated with small-vessel disease); □ other (e.g. diffuse axonal injury).
6. The neural network unit:
The options and numerical inputs of the clinical information knowledge base unit are encoded, an 8-layer BP neural network model is trained on the clinical information and outcome data of historical cases, the checked items of the knowledge question bank and manual input unit together with the encoded input result are fed into the trained neural network model, and the predicted disease name for the patient is output as an auxiliary function. It works as follows:
1) Encode the clinical information knowledge question bank and the doctor's input result. This patent uses a combination of one-hot codes and actual numerical values to jointly encode the options of the multiple-choice knowledge question bank, the numerical manual input units (length, area, volume, etc.), and the doctor's input result (disease name) into a multi-dimensional encoding vector. The dimension of the vector is the sum of the total number of options in the multiple-choice question bank, the number of numerical manual input units, and the number of diseases in the list of potential result disease names.
For the multiple-choice knowledge question bank, this patent encodes the options with one-hot codes. Suppose a question has n options in fixed order [s_0, s_1, s_2, …, s_{n−1}]; when the doctor selects the i-th option, set s_i = 1 and s_j = 0 (j ≠ i), generating an n-dimensional vector. For a numerical manual input unit, the actual value entered in standard units is used directly as its code. For the doctor's input result, one-hot coding is likewise used: suppose the fixed-order list of potential result disease names contains m diseases, expressed as [k_0, k_1, …, k_{m−1}]; when the doctor's interpretation is the p-th result, an m-dimensional vector is generated with k_p = 1 and k_q = 0 (q ≠ p).
The three encoding vectors are concatenated in the order of the clinical information question bank to form an ordered N-dimensional encoding vector, in which the first N−m dimensions are the clinical-information sample code and the last m dimensions are the sample label.
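A small sketch of the mixed one-hot/numeric encoding described in step 1; the helper names and the example question sizes are illustrative, not from the patent:

```python
def one_hot(index, size):
    # Fixed-order vector with a single 1 at the selected option.
    v = [0] * size
    v[index] = 1
    return v

def encode_case(choices, numeric_values, disease_index, num_diseases):
    # choices: list of (selected_option, option_count) per question;
    # numeric_values: raw values in standard units (length, volume, ...);
    # the last m dimensions are the one-hot disease label.
    vec = []
    for selected, count in choices:
        vec += one_hot(selected, count)
    vec += list(numeric_values)
    vec += one_hot(disease_index, num_diseases)
    return vec
```

Dropping the last `num_diseases` entries of the returned vector gives the N−m-dimensional clinical-information input; the dropped tail is the training label.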
2) Encode the historical cases using the scheme of step 1. The clinical knowledge question banks and corresponding results of a large number of historical cases are encoded as in step 1 to generate a clinical case data set, which is split into a training set and a test set in the ratio 8.5:1.5.
3) Establish the neural network model, then train and test it. This patent designs a feed-forward neural network of 8 layers of neurons, with layer sizes from input to output of N−m (input layer), 128, 256, 512, 1024, 512, m (output layer). Every layer except the output layer applies an affine computation followed by batch normalization and a nonlinear mapping. Dropout with probability 0.5 is added after the affine computations of layers 4–7 to prevent overfitting of the neural network. A cross-entropy loss is applied at the output layer. The optimizer is stochastic gradient descent with an initial learning rate of 0.01 and a cosine learning-rate decay schedule.
Each iteration, 32 not-yet-used samples are randomly drawn from the training set and input to the neural network for training; only the first N−m dimensions of each sample encoding vector are input, yielding an m-dimensional model prediction. The prediction is one-hot encoded by setting its largest component to 1 and all others to 0. Cross-entropy loss is computed between the one-hot-encoded prediction and the last m-dimensional label part of the corresponding sample encoding vector, and the optimizer updates the model parameters. After one pass over all training data, the learning rate is updated and the validation samples are input to the model: again only the first N−m dimensions are input, producing an m-dimensional one-hot prediction that is compared with the sample's last m-dimensional label; if the two match, the prediction is correct, otherwise it is wrong.
The training-set and test-set data are fed repeatedly into the neural network model for iterative training and testing, and the model and parameters achieving the highest test accuracy are saved. Test accuracy is the number of correctly predicted test samples divided by the total number of test samples.
4) Generate the clinical information code from the doctor's assessment of the case in the knowledge question bank, input it into the saved model, and output the predicted disease name. Each time the doctor fills in the clinical information question bank according to the case's clinical features, codes are generated from the question-bank entries as in step 1 and input to the neural network model saved in step 3, which outputs the encoded prediction result. If the z-th component of the model prediction code is the largest, the z-th entry of the list of potential result disease names is selected as the suggested result disease name.
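The one-hot decoding used in steps 3 and 4 — set the largest output component to 1 and the rest to 0, then look up the disease name in the fixed-order list — can be sketched as:

```python
def prediction_to_one_hot(scores):
    # Index of the largest output component becomes the single 1.
    top = max(range(len(scores)), key=scores.__getitem__)
    return [1 if i == top else 0 for i in range(len(scores))]

def suggested_disease(scores, disease_names):
    # Pick the z-th name from the fixed-order potential-result list.
    return disease_names[prediction_to_one_hot(scores).index(1)]
```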
7. Report generation module
The output includes typical lesion images and image-pattern diagrams; clinical information content; structured terms for lesion location; quantitative analysis values; and standardized report content. The preset anatomical structures, lesion morphology, lesion-signal terms, output results, etc. are set in the computer in advance, avoiding human input errors and non-standard wording, and pattern diagrams and typical images of the imaging findings are output. The anatomical structure and image features of the lesion are output layer by layer in a standardized way, and via human-computer interaction the report content and accurate lesion-size values are output in a standard format and standard writing style.
Drawings
FIG. 1 is a frontal lobe configuration tomographic view of the present application;
FIG. 2 is a tomographic view of the basal ganglia structure of the present application;
FIG. 3 is a frontal temporal occipital lobe structure tomographic view of the present application;
FIG. 4 is a diagram of the structure of the cerebellum and brainstem of the present application;
FIG. 5 is a lesion location distribution indicator map of the present application;
FIG. 6 is a schematic illustration of lesion distribution according to an example embodiment of the present application;
FIG. 7 is a schematic view of lesion location according to an example embodiment of the present application;
fig. 8 is a diagram of an example of an examination report form.
Description of reference numerals: in FIGS. 1-4: 1 right frontal lobe, 2 left frontal lobe, 3 right parietal lobe, 4 left parietal lobe, 5 right temporal lobe, 6 left temporal lobe, 7 right occipital lobe, 8 left occipital lobe, 9 right basal ganglia, 10 left basal ganglia, 11 right thalamus, 12 left thalamus, 13 corpus callosum, 14 right insular lobe, 15 left insular lobe, 16 right cerebellum, 17 left cerebellum, 18 brainstem, 19 subcortical, 20 subcortical or deep cortex, 21 periventricular.
Detailed Description
The invention is further illustrated by the following examples.
1. Establishing a clinical information knowledge base module:
The scope of use of this structured report is MRI examination of brain white matter lesions. The first step determines the patient's age, e.g. age: □ under 65 years (selected) □ 65 to 75 years □ over 75 years. The second step determines whether the patient has vascular risk factors: □ none (selected) □ hypertension □ hyperlipidemia □ diabetes □ smoking history □ obesity □ other (e.g. hypercoagulable blood, vasculitis, migraine, etc. [ ]) □ not specified. The third step determines the presence or absence of other relevant clinical history: other: [ ].
2. Anatomical pattern map module
The computer displays the pattern-map module showing tomographic schematics of each brain anatomical structure. After the radiologist reads the images, the distribution and position of the brain white matter lesion are clicked with the mouse; the corresponding brain region, beside the ventricles of the bilateral parietal lobes, is highlighted in color, accurately locating the lesion. The module is connected with the report generation module and outputs the lesion-location schematic, as shown in FIG. 7.
3. Image characteristic standard description module:
a. Signal:
T1: □ high □ iso □ low (selected);
T2: □ high (selected) □ iso □ low;
T2-FLAIR: □ high (selected) □ iso □ low □ center-low edge-high;
DWI: □ high □ iso (selected) □ low;
ADC: □ high □ iso (selected) □ low;
b. Morphology: □ round, punctate □ oval, fusiform (□ Dawson's finger sign) (selected) □ irregular;
c. Enhancement pattern: □ no enhancement □ open ring (selected) □ homogeneous enhancement □ heterogeneous enhancement □ punctate enhancement;
d. White matter lesion quantitative analysis results: the largest lesion is located in the [left parietal lobe], major diameter [2.0] cm, minor diameter [3.3] cm, total volume of intracranial lesions [13.3] cm³;
e. Microbleeds: □ none (selected) □ fewer than 5 (□ lobes (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum) — may be selected simultaneously) □ multiple (□ lobes (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum));
f. Other imaging findings: [none].
4. Image symptom interpretation module
The information of the clinical information knowledge base module, the anatomical pattern map module, and the image-feature standardized description module is collated and logically analyzed; the medical image information, automatically calculated lesion parameters, and manually entered information are extracted, automatically compared with the preset information in the computer module's database, and the disease judgment is output.
Specifically, in the first embodiment, the key information is: the patient's lesion is located beside the ventricles of the frontal lobe, the MRI signals are T1 low, T2 high, T2-FLAIR high, DWI iso, and ADC iso, the lesion shape is oval (Dawson's finger sign), and the enhancement pattern is open-ring. Compared against the built-in module, this matches the judgment "tending toward a perivascular pattern, inflammatory demyelinating disease possible (multiple sclerosis)", and the imaging findings and disease judgment are output to the report generation module.
5. A neural network module:
The options and numerical inputs of the clinical information knowledge base unit are encoded, an 8-layer BP neural network model is trained on the clinical information and outcome data of historical cases, the checked items of the knowledge question bank and manual input unit together with the encoded input result are fed into the trained neural network model, and the predicted disease name for the patient is output as an auxiliary function. It works as follows:
1) Encode the clinical information knowledge question bank and the doctor's input result. This patent uses a combination of one-hot codes and actual numerical values to jointly encode the options of the multiple-choice knowledge question bank, the numerical manual input units (length, area, volume, etc.), and the doctor's input result (disease name) into a multi-dimensional encoding vector. The dimension of the vector is the sum of the total number of options in the multiple-choice question bank, the number of numerical manual input units, and the number of diseases in the list of potential result disease names.
For the multiple-choice knowledge question bank, this patent encodes the options with one-hot codes. Suppose a question has n options in fixed order [s_0, s_1, s_2, …, s_{n−1}]; when the doctor selects the i-th option, set s_i = 1 and s_j = 0 (j ≠ i), generating an n-dimensional vector. For a numerical manual input unit, the actual value entered in standard units is used directly as its code. For the doctor's input result, one-hot coding is likewise used: suppose the fixed-order list of potential result disease names contains m diseases, expressed as [k_0, k_1, …, k_{m−1}]; when the doctor's interpretation is the p-th result, an m-dimensional vector is generated with k_p = 1 and k_q = 0 (q ≠ p).
The three encoding vectors are concatenated in the order of the clinical information question bank to form an ordered N-dimensional encoding vector, in which the first N−m dimensions are the clinical-information sample code and the last m dimensions are the sample label.
2) Encode the historical cases using the scheme of step 1. The clinical knowledge question banks and corresponding results of a large number of historical cases are encoded as in step 1 to generate a clinical case data set, which is split into a training set and a test set in the ratio 8.5:1.5.
3) Establish the neural network model, then train and test it. This patent designs a feed-forward neural network of 8 layers of neurons, with layer sizes from input to output of N−m (input layer), 128, 256, 512, 1024, 512, m (output layer). Every layer except the output layer applies an affine computation followed by batch normalization and a nonlinear mapping. Dropout with probability 0.5 is added after the affine computations of layers 4–7 to prevent overfitting of the neural network. A cross-entropy loss is applied at the output layer. The optimizer is stochastic gradient descent with an initial learning rate of 0.01 and a cosine learning-rate decay schedule.
Each iteration, 32 not-yet-used samples are randomly drawn from the training set and input to the neural network for training; only the first N−m dimensions of each sample encoding vector are input, yielding an m-dimensional model prediction. The prediction is one-hot encoded by setting its largest component to 1 and all others to 0. Cross-entropy loss is computed between the one-hot-encoded prediction and the last m-dimensional label part of the corresponding sample encoding vector, and the optimizer updates the model parameters. After one pass over all training data, the learning rate is updated and the validation samples are input to the model: again only the first N−m dimensions are input, producing an m-dimensional one-hot prediction that is compared with the sample's last m-dimensional label; if the two match, the prediction is correct, otherwise it is wrong.
The training-set and test-set data are fed repeatedly into the neural network model for iterative training and testing, and the model and parameters achieving the highest test accuracy are saved. Test accuracy is the number of correctly predicted test samples divided by the total number of test samples.
4) Generate the clinical information code from the doctor's assessment of the case in the knowledge question bank, input it into the saved model, and output the predicted disease name. Each time the doctor fills in the clinical information question bank according to the case's clinical features, codes are generated from the question-bank entries as in step 1 and input to the neural network model saved in step 3, which outputs the encoded prediction result. If the z-th component of the model prediction code is the largest, the z-th entry of the list of potential result disease names is selected as the suggested result disease name.
6. A report generation module:
The report generation module is connected with the clinical information knowledge base module, the anatomical pattern map module, the image-feature standardized description module, the neural network unit, and the image comparison module, and outputs the image pattern diagram; clinical information content; structured lesion-location terms; disease diagnosis; and standardized report content. Specifically, in the first embodiment, the generated report includes:
clinical data:
1. Age: ■ under 65 years □ 65 to 75 years □ over 75 years;
2. Vascular risk factors: ■ none □ hypertension □ hyperlipidemia □ diabetes □ smoking history □ obesity □ other (e.g. hypercoagulable blood, vasculitis, migraine, etc. [ ]) □ not specified;
3. and others: [].
The image is seen as follows:
1. Overall assessment (ARWMC scale score): □ grade 0 (no abnormal signal) □ grade 1 (scattered punctate) ■ grade 2 (partially confluent) □ grade 3 (confluent);
2. focal lesion:
2.1. distribution: as shown in fig. 6.
2.2. Position: as shown in fig. 7.
2.3. Signal:
T1: □ high □ iso ■ low;
T2: ■ high □ iso □ low;
T2-FLAIR: ■ high □ iso □ low □ center-low edge-high;
DWI: □ high ■ iso □ low;
ADC: □ high ■ iso □ low;
2.4. Morphology: □ round, punctate ■ oval, fusiform (■ Dawson's finger sign) □ irregular;
2.5. Enhancement pattern: □ no enhancement ■ open ring □ homogeneous enhancement □ heterogeneous enhancement □ punctate enhancement;
2.6. White matter lesion quantitative analysis results: the largest lesion is located in the [left parietal lobe], major diameter [2.0] cm, minor diameter [3.3] cm, total volume of intracranial lesions [13.3] cm³.
3. Microbleeds: ■ none □ fewer than 5 (□ lobes (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum) — may be selected simultaneously) □ multiple (□ lobes (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum));
4. Other imaging findings: [none].
Impression:
Multiple hyperintense lesions beside the bilateral lateral ventricles tend toward a perivascular pattern; multiple sclerosis is considered. The final report is generated as shown in fig. 8.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make possible variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above, and therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical essence of the present invention shall fall within the protection scope of the present invention.
Claims (5)
1. An automatic quantitative analysis system for white matter lesions of the brain is characterized by comprising a clinical information knowledge base module, an anatomical pattern graph module, an image characteristic standardized description module, an image data quantitative calculation module and a report generation module;
the clinical information knowledge base module comprises a selection knowledge question bank unit and a manual input unit; the selection knowledge question bank unit comprises clinical common entry options, and the manual input unit comprises clinical and medical history data related to the patient's images;
the anatomical pattern map module comprises a visual intracranial tomographic map, which is a dot-style anatomical structure displayed as a plane map and accurately defines the distribution and position of lesions; the brain structures in the visual intracranial tomographic map comprise the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem;
the image characteristic standardized description module comprises a human-computer interaction interface, and the human-computer interaction interface comprises a preset indication part and an input part;
the image data quantitative calculation module comprises an intracranial white matter lesion segmentation module, a calculation module for the long and short diameters of the largest brain white matter lesion, and a calculation module for the total volume of intracranial white matter lesions; the image segmentation performed by the intracranial white matter lesion segmentation module proceeds as follows:
Step 1: perform multi-modal fusion on brain white matter MRI images, perform data preprocessing on the fused images, and sample three-dimensional small image blocks from each image datum as training data; brain MRI imaging is divided into four modalities according to the imaging conditions: T1-weighted, T1ce-weighted, T2-weighted and FLAIR; brain images of the different modalities are fused and sent to the network for training together; the j-th image of the fused three-dimensional white matter image x is expressed as:
x_j = [I_F(j), I_T1(j), I_T1ce(j), I_T2(j)]
wherein I denotes the brain white matter image in each of the four modalities, and the subscripts F, T1, T1ce and T2 denote the four modalities respectively; according to the characteristics of each modality, white matter images of different modalities are selected and fused to construct the whole training set T, namely
T = {(x_1, y_1), …, (x_k, y_k), …, (x_m, y_m)};
wherein (x_k, y_k) denotes the k-th training sample, and y_k ∈ {0, 1} is the label of the k-th sample, indicating whether the k-th sample contains a lesion;
using the three-dimensional small-image-block sampling training method, 70 three-dimensional image blocks are sampled from each case's data as training data, each block of size 32 × 32 × 32; image blocks are randomly selected in the following proportions: background 1%, normal tissue 29%, diseased tissue 70%; the sampled three-dimensional image blocks are additionally flipped along the sagittal plane, doubling the training set;
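The per-case patch budget and sagittal-flip augmentation described above can be sketched as follows; function names are illustrative and not from the patent, and pushing the rounding remainder into the lesion class is an assumption:

```python
def patch_budget(total=70, ratios=(0.01, 0.29, 0.70)):
    """Split a per-case patch budget into [background, normal, lesion] counts."""
    counts = [round(total * r) for r in ratios]
    # push any rounding remainder into the largest class (lesion)
    counts[-1] += total - sum(counts)
    return counts

def sagittal_flip(patch):
    """Flip a nested-list patch [z][y][x] along the sagittal axis
    by reversing the x dimension of every row."""
    return [[row[::-1] for row in plane] for plane in patch]
```

With the defaults this yields 1 background, 20 normal and 49 lesion patches per case, and each flipped copy doubles the effective training set.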
Step 2: establish a three-dimensional fully convolutional neural network with instance normalization layers; features of the brain white matter image are extracted and segmented using an improved three-dimensional fully convolutional DenseNet model structure, and the added instance normalization layers address data oscillation and slow model convergence during training;
the DenseNet_Base network structure is divided into a down-sampling path and an up-sampling path, the two paths connected by dense blocks; each dense block consists of 4 convolution modules, and the input of each layer includes the image features learned by all preceding layers; the up-sampling and down-sampling paths each consist of 3 dense blocks and corresponding transition sampling modules, with one transition sampling module between every two dense blocks; the normalization algorithm is calculated as follows:
S21. calculate the mean μ of each white matter image along the channel:
μ_c = (1 / (W · H)) Σ_{i=1..W} Σ_{j=1..H} F_{c,i,j}
wherein the subscripts c, i, j respectively denote the channel, width and height indices of the input white matter image, F denotes the pixel values of the input white matter image, and W, H respectively denote the width and height of the input white matter image;
S22. calculate the variance σ² of each white matter image along the channel:
σ_c² = (1 / (W · H)) Σ_{i=1..W} Σ_{j=1..H} (F_{c,i,j} − μ_c)²
each pixel is then normalized as (F_{c,i,j} − μ_c) / √(σ_c² + ε), wherein ε is a very small positive number that prevents the denominator from being 0; adding instance normalization in each layer avoids vanishing and exploding gradients, reduces the network model's dependence on weight initialization, accelerates network convergence, and serves as a regularization means, reducing the network's need for anti-overfitting measures;
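The per-channel normalization of steps S21 and S22 can be sketched in plain Python; this is a minimal illustration, not the patent's implementation:

```python
import math

def instance_norm(image, eps=1e-5):
    """Normalize each channel of one image independently.
    image: nested list [channel][height][width]."""
    out = []
    for channel in image:
        pixels = [v for row in channel for v in row]
        mu = sum(pixels) / len(pixels)                          # S21: per-channel mean
        var = sum((v - mu) ** 2 for v in pixels) / len(pixels)  # S22: per-channel variance
        scale = 1.0 / math.sqrt(var + eps)                      # eps keeps the denominator > 0
        out.append([[(v - mu) * scale for v in row] for row in channel])
    return out
```

After normalization each channel has zero mean and (up to the eps term) unit variance, which is what stabilizes training.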
Step 3: improve the Dice loss function by increasing the weight of brain white matter lesion areas, so that model training focuses on feature learning of the white matter areas; construct a multi-loss-function structure that splits the segmentation of different types of white matter MRI images into multiple output branches, so that the convolution kernels undergo refined learning and training; the improved loss function is calculated as:
L_Dice = 1 − (2 Σ_i p_i · g_i) / (Σ_i p_i + 3 Σ_i g_i)
wherein p_i and g_i respectively denote the value of the i-th pixel in the brain white matter network's detection result and in the label; because g_i partly corresponds to brain white matter lesion areas, g_i is up-weighted so that the ratio of the prediction result to the label in the loss function becomes 1:3;
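One plausible reading of the weighted Dice loss described above, with the label term up-weighted 3:1, is sketched below; the exact placement of the 1:3 weighting is not specified in the claim, so the `w_label` factor is an assumption:

```python
def weighted_dice_loss(pred, label, w_label=3.0, smooth=1e-6):
    """Soft Dice loss with the label sum up-weighted (assumed reading
    of the 1:3 prediction-to-label ratio).
    pred, label: flat lists of per-pixel values."""
    inter = sum(p * g for p, g in zip(pred, label))        # overlap term
    denom = sum(pred) + w_label * sum(label)               # weighted denominator
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)
```

Note that with `w_label=1.0` this reduces to the standard soft Dice loss, where a perfect prediction gives a loss of 0.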
Step 4: input the image data into the saved model to obtain the output white matter MRI image segmentation result.
2. The automated quantitative analysis system for white matter lesions of the brain according to claim 1, further comprising a neural network unit and an image structure interpretation module;
the neural network unit mixed-encodes the options of the selection knowledge question bank, the numerical manual input units and the doctor's input result by combining one-hot codes with actual numerical values, generating a multi-dimensional coding vector; the dimension of the vector is the sum of the total number of options in the selection knowledge question bank, the number of numerical manual input units and the number of diseases in the potential-result disease name list;
the image structure interpretation module compares, through logic analysis, the medical image information and manual input information extracted from the clinical information knowledge base module, the anatomical pattern diagram module and the image characteristic standardized description module with preset information in a database, judges the correct disease category of the patient, integrates the result of the image data quantitative calculation module, and outputs it structurally through the report generation module.
3. The interpretation method of the automated quantitative analysis system of white matter lesions of the brain according to claim 1, characterized in that the selection knowledge question bank and manual input are provided by the clinical information knowledge base module, the selection knowledge question bank unit provides the clinical common entry options, and the manual input unit supplies supplementary content; the common entries in the selection knowledge question bank are checked first, and if they cannot meet the requirements, the manual input unit is used to supplement them; the MRI image of brain white matter lesions is segmented by the intracranial white matter lesion segmentation module, the long and short diameters of the largest lesion are calculated by the calculation module for the long and short diameters of the largest brain white matter lesion, and the total volume of intracranial white matter lesions is calculated by the calculation module for the total volume of intracranial white matter lesions.
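As a minimal sketch of the total-volume calculation named in this claim: once a binary segmentation mask exists, the total lesion volume is the lesion voxel count times the voxel volume. The voxel size and all names here are illustrative assumptions, not the patent's implementation:

```python
def lesion_volume_cm3(mask, voxel_mm3=1.0):
    """Total lesion volume from a binary segmentation mask.
    mask: nested list [z][y][x] of 0/1; voxel_mm3: voxel volume in mm^3."""
    voxels = sum(v for plane in mask for row in plane for v in row)
    return voxels * voxel_mm3 / 1000.0  # convert mm^3 to cm^3
```

For example, 4 segmented voxels of 500 mm³ each give 2.0 cm³.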
4. The interpretation method of the automated quantitative analysis system of white matter lesions of the brain as claimed in claim 3, wherein the correct disease category of the patient is judged by the image structure interpretation module according to the following interpretation criteria:
(1) Overall evaluation is grade 1 or grade 2:
a) age greater than 65 years, overall assessment grade 1: judged as scattered punctate abnormal signals in the brain white matter consistent with age-related change; please correlate clinically;
b) age greater than 75 years, overall assessment grade 1 or 2: judged as scattered punctate abnormal signals in the brain white matter consistent with age-related change; please correlate clinically;
c) grade 0: judged as no abnormal signal in the intracranial brain parenchyma;
(2) Overall evaluation is grade 3:
a) if the lesion position is periventricular with a "Dawson's finger" morphology; or the lesion distribution is cerebellum with a peripheral position; or the lesion distribution is brainstem with a peripheral position; or the lesion position is "juxtacortical"; or the lesion position is "corpus callosum"; or the enhancement pattern is open-ring; if any one of the above is satisfied, judged as: tending toward a perivascular pattern, inflammatory demyelinating lesion possible; please correlate clinically;
b) if the lesion position is periventricular, the morphology is oval or fusiform, and the clinical data contain no vascular risk factors, judged as: tending toward a vascular pattern, leukoaraiosis change; please correlate clinically;
c) if the lesion position is periventricular, the morphology is oval or fusiform, and the clinical data show age greater than 65 or vascular risk factors, judged as: tending toward a vascular pattern, leukoaraiosis change suggesting possible ischemic change; please correlate clinically;
d) if the lesion distribution is cerebellum or brainstem with a peripheral position; or the lesion distribution is the "basal ganglia region"; or the position is "subcortical", "deep subcortical non-edge region" or "subcortical edge region"; or the clinical data include "vascular risk factors"; or microbleeds exist; if any one of the above is satisfied, judged as: tending toward a vascular pattern, suggesting arteriolar occlusive cerebral infarction; please correlate clinically;
e) on the basis of d), with "DWI high signal" and "ADC low signal" among the signals, judged as: tending toward a vascular pattern, suggesting recent arteriolar occlusive cerebral infarction; please correlate clinically;
f) if the signal is "FLAIR low signal", judged as: tending toward perivascular space, or a vascular pattern suggesting old arteriolar occlusive cerebral infarction;
g) if none of the above applies, select manually.
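Purely as an illustration of the branching in criteria (2)b and (2)c above, not a clinical tool; the flag names and return strings are hypothetical simplifications:

```python
def periventricular_grade3_pattern(age, has_vascular_risk, shape):
    """Sketch of criteria (2)b/(2)c for grade-3 periventricular lesions."""
    if shape not in ("oval", "fusiform"):
        return None  # handled by other criteria, e.g. (2)a or (2)g
    if age > 65 or has_vascular_risk:
        # (2)c: vascular pattern, possible ischemic change
        return "vascular pattern, leukoaraiosis, possible ischemic change"
    # (2)b: vascular pattern, leukoaraiosis change
    return "vascular pattern, leukoaraiosis"
```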
5. The interpretation method of the automated quantitative analysis system of white matter lesions of the brain as claimed in claim 3, wherein the patient's disease name is output through the neural network unit, which works as follows:
1) encode the clinical information question bank and the doctor's input result; options of the selection knowledge question bank are one-hot encoded: suppose a question in the bank has n options in fixed order [s_0, s_1, s_2, …, s_{n−1}]; when the doctor selects the i-th option, set s_i = 1 and s_j = 0 for j ≠ i, generating an n-dimensional vector; each numerical manual input unit is encoded directly by its actual value, using the value entered in standard units as its code; the doctor's input result is one-hot encoded: suppose there are m disease names in the fixed-order list of potential result disease names, denoted [k_0, k_1, k_2, …, k_{m−1}]; when the doctor's interpretation is the p-th result, a corresponding m-dimensional vector is generated with k_p = 1 and k_q = 0 for q ≠ p;
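The mixed encoding of step 1) reduces to concatenating one-hot vectors, raw numeric values, and a one-hot label; a sketch with illustrative function and parameter names:

```python
def one_hot(n, i):
    """n-dimensional one-hot vector with the i-th item set to 1."""
    return [1 if j == i else 0 for j in range(n)]

def encode_sample(option_counts, selections, numeric_values, m, p):
    """Concatenate: one-hot per question, raw numeric values in standard
    units, then the m-dimensional result label with k_p = 1."""
    vec = []
    for n, i in zip(option_counts, selections):
        vec += one_hot(n, i)        # s_i = 1, s_j = 0 for j != i
    vec += list(numeric_values)     # actual values serve as their own codes
    vec += one_hot(m, p)            # label: k_p = 1, k_q = 0 for q != p
    return vec
```

A sample with one 3-option question (option 1 chosen), one numeric field (65.0) and 4 candidate diseases (result 2) yields an 8-dimensional vector whose last 4 dimensions are the label.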
the three encoding vectors are combined in the order of the clinical information question bank to form an ordered N-dimensional encoding vector, wherein the first N−m dimensions are the clinical information sample code and the last m dimensions are the sample label;
2) encode historical cases according to the encoding scheme of step 1); a large number of historical cases' clinical knowledge question banks and corresponding result information are encoded per step 1) to generate a clinical case data set, which is divided proportionally into a training set and a test set;
3) establish a feedforward neural network model of 8 layers of neurons, with neuron counts from the input layer to the output layer of: N−m, 128, 256, 512, 1024, 512, m; each layer of neurons undergoes batch normalization and a nonlinear mapping after its affine computation; dropout (random inactivation) with probability 0.5 is added after the affine computations of layers 4 to 7 to prevent overfitting of the neural network; a cross-entropy loss function is used; the optimizer is stochastic gradient descent with an initial learning rate of 0.01 and a cosine learning-rate decay strategy;
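The cosine learning-rate decay mentioned in step 3) is commonly implemented as below; the exact schedule used in the claim is unspecified, so this is a standard form rather than the patent's own:

```python
import math

def cosine_lr(epoch, total_epochs, lr0=0.01):
    """Cosine-annealed learning rate, starting from the 0.01 initial rate
    and decaying to 0 over total_epochs."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

The rate starts at 0.01, halves at the schedule midpoint, and reaches 0 at the final epoch.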
each iteration, 32 not-yet-trained sample data are randomly drawn from the training set and input into the neural network for model training; only the first N−m dimensions of each sample encoding vector are input during training, yielding an m-dimensional model prediction output; the model prediction output is one-hot encoded as follows: the maximum item is set to 1 and all other items to 0; cross-entropy loss is calculated from the one-hot-encoded model prediction output and the last m dimensions of sample-label data in the corresponding sample encoding vector, and the optimizer updates the model parameters; after all training-set data have been trained once, the learning rate is updated, and test-set sample data are input into the model to obtain prediction vectors; only the first N−m dimensions of each sample encoding vector are input during prediction, yielding an m-dimensional one-hot model prediction output, which is compared with the last m-dimensional sample label of the corresponding sample encoding vector; if they are identical, the prediction is correct; otherwise, it is incorrect;
training-set and test-set data are repeatedly input into the neural network model for iterative training and testing, and the model and parameters with the highest test accuracy are saved; test accuracy is calculated as the number of correctly predicted samples on the test set divided by the total number of samples in the test set;
4) generate the clinical information code from the doctor's evaluation of the case in the knowledge question bank, input it into the saved model, and output the predicted disease name; each time a doctor fills in the clinical information question bank according to the clinical characteristics of a case, codes are generated in sequence from the entries per step 1) and input into the neural network model saved in step 3), which outputs the code of the prediction result; according to the model's prediction code, assuming the z-th item is the maximum item, the disease name of the z-th result is selected from the list of potential result disease names as the suggested result.
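Step 4)'s rule, selecting the disease name at the index z of the maximum prediction item, is a plain argmax lookup; names are illustrative:

```python
def suggest_disease(pred_code, disease_names):
    """Pick the disease name at the index of the maximum prediction item."""
    z = max(range(len(pred_code)), key=lambda i: pred_code[i])  # argmax
    return disease_names[z]
```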
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110315174.7A CN113077887B (en) | 2021-03-24 | 2021-03-24 | Automatic quantitative analysis system and interpretation method for white matter lesions of brain |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113077887A CN113077887A (en) | 2021-07-06 |
CN113077887B true CN113077887B (en) | 2022-09-02 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113962958B (en) * | 2021-10-21 | 2023-05-05 | 四川大学华西医院 | Sign detection method and device |
CN114242175A (en) * | 2021-12-22 | 2022-03-25 | 香港中文大学深圳研究院 | Method and system for evaluating brain white matter high signal volume |
CN116612885B (en) * | 2023-04-26 | 2024-03-22 | 浙江大学 | Prediction device for acute exacerbation of chronic obstructive pulmonary disease based on multiple modes |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631930A (en) * | 2015-11-27 | 2016-06-01 | 广州聚普科技有限公司 | DTI (Diffusion Tensor Imaging)-based cranial nerve fiber bundle three-dimensional rebuilding method |
CN110288587A (en) * | 2019-06-28 | 2019-09-27 | 重庆同仁至诚智慧医疗科技股份有限公司 | A kind of lesion recognition methods of cerebral arterial thrombosis nuclear magnetic resonance image |
CN110710986A (en) * | 2019-10-25 | 2020-01-21 | 华院数据技术(上海)有限公司 | CT image-based cerebral arteriovenous malformation detection method and system |
CN111832644A (en) * | 2020-07-08 | 2020-10-27 | 北京工业大学 | Brain medical image report generation method and system based on sequence level |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11219402B2 (en) * | 2013-09-20 | 2022-01-11 | Mayo Foundation For Medical Education And Research | Systems and methods for producing imaging biomarkers indicative of a neurological disease state using gray matter suppressions via double inversion-recovery magnetic resonance imaging |
TWI536969B (en) * | 2015-01-05 | 2016-06-11 | 國立中央大學 | White matter hyperintensities region of magnetic resonance imaging recognizing method and system |
US10417788B2 (en) * | 2016-09-21 | 2019-09-17 | Realize, Inc. | Anomaly detection in volumetric medical images using sequential convolutional and recurrent neural networks |
CN107463786A (en) * | 2017-08-17 | 2017-12-12 | 王卫鹏 | Medical image Knowledge Base based on structured report template |
CN110322444B (en) * | 2019-05-31 | 2021-11-23 | 上海联影智能医疗科技有限公司 | Medical image processing method, medical image processing device, storage medium and computer equipment |
CN111223085A (en) * | 2020-01-09 | 2020-06-02 | 北京安德医智科技有限公司 | Head medical image auxiliary interpretation report generation method based on neural network |
CN111292821B (en) * | 2020-01-21 | 2024-02-13 | 上海联影智能医疗科技有限公司 | Medical diagnosis and treatment system |
CN111476774B (en) * | 2020-04-07 | 2023-04-18 | 广州柏视医疗科技有限公司 | Intelligent sign recognition device based on novel coronavirus pneumonia CT detection |
CN112085695A (en) * | 2020-07-21 | 2020-12-15 | 上海联影智能医疗科技有限公司 | Image processing method, device and storage medium |
Non-Patent Citations (2)
Title |
---|
Adaboost and Support Vector Machines for White Matter Lesion Segmentation in MR Images; A. Quddus et al.; 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference; 2006-08-10; full text *
Application of Different Radiomics Models Based on Conventional MRI Images in Preoperative Grading of Glioma; Mu Jianhua; Chinese Journal of Magnetic Resonance Imaging; 2020-01-19; vol. 11, no. 01, pp. 55-59 *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||