CN113077887A - Automatic quantitative analysis system and interpretation method for white matter lesions of brain - Google Patents

Automatic quantitative analysis system and interpretation method for white matter lesions of brain

Info

Publication number
CN113077887A
Authority
CN
China
Prior art keywords
white matter
module
brain
lesion
image
Prior art date
Legal status
Granted
Application number
CN202110315174.7A
Other languages
Chinese (zh)
Other versions
CN113077887B (en)
Inventor
姚骊
吕粟
曾嘉欣
胡娜
李思燚
张文静
Current Assignee
West China Hospital of Sichuan University
Original Assignee
West China Hospital of Sichuan University
Priority date
Filing date
Publication date
Application filed by West China Hospital of Sichuan University
Priority to CN202110315174.7A
Publication of CN113077887A
Application granted
Publication of CN113077887B
Legal status: Active
Anticipated expiration

Classifications

    • G16H 50/20 — Healthcare informatics; ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06T 7/11 — Image analysis; segmentation, edge detection; region-based segmentation
    • G06T 2207/10088 — Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
    • G06T 2207/20081 — Special algorithmic details; training, learning
    • G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30016 — Subject of image; biomedical image processing; brain

Abstract

An automatic quantitative analysis system and interpretation method for white matter lesions of the brain. The method uses artificial intelligence to automatically and accurately delineate lesions and to calculate the volume of white matter lesions, making accurate assessment of disease progression possible. An automatic interpretation system is also provided: multidimensional information about the lesions, such as anatomical location, morphological structure, lesion size, time of onset and presence of hemorrhage, is determined through human-computer interaction. The system achieves visualization of lesion location, quantification of imaging indices, standardization of report terminology and a user-friendly operating interface, performs logical analysis automatically, and arrives at an accurate disease judgment, avoiding conclusions that are inconsistent with the description or that omit important findings.

Description

Automatic quantitative analysis system and interpretation method for white matter lesions of brain
Technical Field
The application relates to the field of medical imaging, in particular to a method for automated quantitative analysis of lesion volume and intelligent disease interpretation based on multi-modal MRI images of a patient.
Background
The brain is the most complex organ of the human body and an important component of the nervous system. By tissue structure, the brain is divided into three major components: gray matter, white matter and cerebrospinal fluid. Vascular, inflammatory, demyelinating and other diseases can cause white matter lesions of the brain, producing symptoms such as limb weakness, cognitive impairment, aphasia and epilepsy, and patients are difficult to diagnose accurately in the early stage of disease on clinical symptoms alone. Magnetic resonance imaging (MRI) has high soft-tissue resolution and involves no radiation, so it plays a vital role in the examination of nervous system diseases, particularly brain lesions; identifying white matter lesions on MRI images helps with early accurate diagnosis and the formulation of treatment plans. However, the imaging appearance of these diseases is complex and shows certain similarities, so current interpretation relies mainly on the subjective experience and judgment of radiologists, lacking quantitative analysis, information integration and visual presentation of the image description. In addition, report content and terminology vary with the seniority and writing habits of individual physicians, so there is no standardized way of writing reports; reporting is time-consuming and labour-intensive, accurate disease judgment is difficult, patient treatment may be delayed, and the value of the imaging examination is not fully realized. More importantly, many white matter diseases are chronic and require long-term follow-up and observation to assess changes in the condition. Existing imaging reports rely only on visual inspection, so lesion volume cannot be measured objectively and accurately, and accurate assessment is difficult for disease follow-up, evaluation of treatment response, prognosis prediction and the like.
The method uses artificial intelligence to automatically and accurately delineate lesions and to calculate the volume of white matter lesions of the brain, making accurate assessment of disease progression possible. An automatic interpretation system is also provided: multidimensional information about the lesions, such as anatomical location, morphological structure, lesion size, time of onset and presence of hemorrhage, is determined through human-computer interaction. The system achieves visualization of lesion location, quantification of imaging indices, standardization of report terminology and a user-friendly operating interface, performs logical analysis automatically, and arrives at an accurate disease judgment, avoiding conclusions that are inconsistent with the description or that omit important findings.
Disclosure of Invention
An automated quantitative analysis system for white matter lesions of the brain comprises a clinical information knowledge base module, an anatomical pattern map module, a standardized image feature description module, a quantitative image data calculation module and a report generation module. The clinical information knowledge base module comprises a selection knowledge item base unit and a manual input unit: the selection knowledge item base unit provides common clinical input options, and the manual input unit receives clinical and medical history data related to the patient's images. The anatomical pattern map module comprises a visual intracranial tomographic map, a dot-map style anatomical diagram displayed as plane figures that precisely defines the distribution and position of lesions; the brain structures in the map comprise the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem. The standardized image feature description module comprises a human-computer interaction interface with a preset indication part and an input part. The quantitative image data calculation module comprises an intracranial white matter lesion segmentation module, a module for calculating the long and short diameters of the largest white matter lesion in the whole brain, and a module for calculating the total volume of intracranial white matter lesions.
Preferably, the automated quantitative analysis system for white matter lesions of the brain further comprises a neural network unit and/or an image structure interpretation module.
An analysis method of the automated quantitative analysis system for white matter lesions of the brain: the clinical information knowledge base module provides a selection knowledge item base and manual input; the selection knowledge item base unit provides common clinical input options, and the manual input unit serves for supplementary content. The common input contents of the knowledge item base are selected first; if they cannot meet the requirements, the manual input unit is used as a supplement. The MRI images of white matter lesions are segmented by the intracranial white matter lesion segmentation module, the long and short diameters of the largest lesion are calculated by the module for calculating the long and short diameters of the largest white matter lesion in the whole brain, and the total intracranial white matter lesion volume is calculated by the module for calculating the total volume of intracranial white matter lesions.
Preferably, the analysis method of the automated quantitative analysis system for white matter lesions of the brain determines the correct disease category of the patient through the image structure interpretation module.
Preferably, the analysis method of the automated quantitative analysis system for white matter lesions of the brain outputs a disease name for the patient through the neural network unit.
1. The clinical information knowledge base module:
This module provides a selection knowledge item base and a manual input unit: the selection knowledge item base unit provides common clinical input options, and the manual input unit serves for supplementary content. The physician first chooses among the common input contents of the knowledge item base; if these do not meet the requirements, the manual input unit is used as a supplement. The module provides clinical and medical history data related to the patient's images, to be integrated with the imaging findings so that the final image structure interpretation module can make the correct interpretation for the patient.
1) Scope of use of the structured report: MRI examination of leukoencephalopathy;
2) Age: □ under 65 years □ 65-75 years □ over 75 years;
3) Vascular risk factors: □ none □ hypertension □ hyperlipidaemia □ diabetes □ smoking history □ obesity □ others (e.g. hypercoagulable state of the blood, vasculitis, migraine, etc. [ ]) □ not specified;
4) Others: [ ].
2. An anatomical pattern map module:
A visual intracranial tomographic map is designed: a dot-map style anatomical diagram displayed on plane figures that precisely defines the distribution and position of lesions. It is simple to operate, easy to master and produces standardized output, so that even a beginner who has just entered clinical practice can use it easily. The brain structures in the anatomical map comprise the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem, as follows (FIGS. 1-4):
FIG. 1 is a frontal lobe configuration tomographic view of the present application;
FIG. 2 is a tomographic view of the basal ganglia structure of the present application;
FIG. 3 is a frontal temporal occipital lobe structure tomographic view of the present application;
FIG. 4 is a diagram of the structure of the cerebellum and brainstem of the present application;
The lesion locations in the anatomical map include, for supratentorial structures: juxtacortical, subcortical or deep subcortical, and periventricular (FIG. 5); and, for infratentorial structures: peripheral and central;
fig. 5 is a lesion location distribution indicator map of the present application.
3. Image characteristic standardized description module
The human-computer interaction interface provides a preset indication part and an input part. The preset indication part shows preset guidance to the user, and the user enters parameters for evaluating the MRI image in the input part, based on the patient's MRI images and the guidance provided by the interface. The parameters may be preset field-type parameters, displayed on the interface together with their corresponding input parts according to the user's previous operation so that the user can select them intuitively in single-choice or multiple-choice mode, or numeric parameters entered by the user in fill-in fields, whose corresponding input parts are likewise displayed according to the user's previous operation. The parameters entered by the user can be stored as computer-readable data in a memory module attached to the system or in a separate memory module. Using the preset content of this application, a physician can perform simple click operations in the image module, call up standard fields from the database, and generate report content in a standard format (a minimal sketch of such fields follows the checklist below).
1) Overall assessment (ARWMC scale score): □ grade 0 (no abnormal signal) □ grade 1 (scattered punctate) □ grade 2 (partially confluent) □ grade 3 (confluent);
2) Signal: in the preprocessed images, the 3 acquired sequences or parameter maps are defined as 3 modalities forming a structural modality group: T1-weighted imaging (T1WI), T2-weighted imaging (T2WI) and fluid-attenuated inversion recovery (FLAIR). Sequences such as T1WI, T2WI and FLAIR clearly and intuitively show morphological features of white matter lesions such as position, size, boundary and shape. The contrast-enhanced T1WI (T1-CE) sequence acquired after gadolinium contrast injection indirectly reflects the degree of lesion activity and invasion of surrounding tissue, by assessing contrast leakage resulting from disruption of the blood-brain barrier (BBB) by the lesion. Diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) images reflect the pathological state of the white matter by providing information on water-molecule diffusion, detect lesions in the early stage of vascular disease, and help identify lesions in the acute stage.
a) T1: □ high □ iso □ low;
b) T2: □ high □ iso □ low;
c) T2-FLAIR: □ high □ iso □ low □ low center with high rim;
d) DWI: □ high □ iso □ low;
e) ADC: □ high □ iso □ low;
3) Morphology: □ round/punctate □ oval/fusiform (□ Dawson's finger sign) □ irregular;
4) Enhancement pattern: □ no enhancement □ open ring □ homogeneous enhancement □ heterogeneous enhancement □ punctate enhancement;
5) Microbleeds: □ none □ present, fewer than 5 (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum)) □ multiple (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum)).
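A minimal sketch of how the checklist above could be held as computer-readable fields is given below, assuming hypothetical Python field names and option lists; the patent does not prescribe a particular storage format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChoiceField:
    name: str                 # e.g. "T2-FLAIR signal"
    options: List[str]        # preset options shown on the interface
    selected: List[str] = field(default_factory=list)  # single or multiple choice

@dataclass
class NumericField:
    name: str                 # e.g. "total lesion volume"
    unit: str                 # e.g. "cm^3"
    value: float = 0.0        # filled by the user or by the quantitative module

report_fields = [
    ChoiceField("ARWMC grade", ["0", "1", "2", "3"]),
    ChoiceField("T2-FLAIR signal", ["high", "iso", "low", "low center, high rim"]),
    ChoiceField("morphology", ["round/punctate", "oval/fusiform", "irregular"]),
    NumericField("maximum lesion long diameter", "cm"),
    NumericField("total lesion volume", "cm^3"),
]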
4. Image data quantitative calculation module:
a) intracranial white matter lesion segmentation module:
the image segmentation method comprises the following steps:
step 1: performing multi-modal fusion on white matter MRI images of the brain, performing data preprocessing on the fused images, and performing three-dimensional small image block sampling on each image data to serve as training data.
Brain MRI imaging can be divided into four modalities according to the differences in the imaging support conditions: t1-weighted modality, T1 ce-weighted modality, T2-weighted modality, and Flair modality, different modalities being capable of displaying different features of white matter of the brain. The brain images in different modes are fused and sent to the network for training, so that the focus characteristics can be improved, and the accuracy of white matter detection of the brain is improved.
Let the j-th element of the fused three-dimensional white matter image x be expressed as:

    x_j = [I_F(j), I_T1(j), I_T1ce(j), I_T2(j)]

where I denotes the brain white matter image in each of the four modalities and the subscripts F, T1, T1ce and T2 denote the four modalities, respectively. According to the characteristics of each modality, white matter images of different modalities are selected and fused to construct the whole training set T, i.e.

    T = {(x_1, y_1), ..., (x_k, y_k), ..., (x_m, y_m)}

where (x_k, y_k) denotes the k-th training sample and y_k ∈ {0, 1} is the label of the k-th sample, indicating whether the sample contains a lesion.
Giving all input data the same distribution reduces data jitter and speeds up model convergence; therefore the method normalizes the brain-tissue grey levels in each image to a mean of 0 and a variance of 1. To reduce invalid image information, the invention removes the large zero-valued background region around the brain tissue and selects only slices containing lesion data as training data.
To address class imbalance in the data set, the invention adopts a three-dimensional patch sampling strategy: 70 three-dimensional image patches are sampled from each case as training data, each patch having a size of 32 × 32, with each patch selected at random in the following proportions: 1% background, 29% normal tissue and 70% lesion tissue. The sampled three-dimensional patches are additionally flipped along the sagittal plane, doubling the training set.
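A minimal NumPy sketch of the preprocessing and patch-sampling step is given below. The sampling proportions and the sagittal flip follow the text above, while the helper names, the cubic patch geometry and the border handling are assumptions for illustration only.

import numpy as np

def normalize_brain(volume, brain_mask):
    # Normalize brain-tissue intensities to mean 0, variance 1.
    vals = volume[brain_mask > 0]
    return (volume - vals.mean()) / (vals.std() + 1e-8)

def sample_patches(volume, label, n_patches=70, size=32, rng=None):
    # Sample 3-D patches centered on ~1% background, 29% normal and 70% lesion voxels.
    rng = rng or np.random.default_rng()
    counts = {"background": round(0.01 * n_patches),
              "normal":     round(0.29 * n_patches),
              "lesion":     round(0.70 * n_patches)}
    masks = {"background": volume == 0,
             "normal":     (volume != 0) & (label == 0),
             "lesion":     label > 0}
    patches, half = [], size // 2
    for cls, n in counts.items():
        zs, ys, xs = np.nonzero(masks[cls])
        if len(zs) == 0:
            continue
        for idx in rng.choice(len(zs), size=n, replace=True):
            z, y, x = zs[idx], ys[idx], xs[idx]
            sl = tuple(slice(max(c - half, 0), c + half) for c in (z, y, x))
            patches.append((volume[sl], label[sl]))
    # Sagittal flip doubles the training set.
    patches += [(np.flip(v, axis=-1).copy(), np.flip(l, axis=-1).copy()) for v, l in patches]
    return patches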
Step 2: build a three-dimensional fully convolutional neural network and add instance normalization layers. The densely connected structure of DenseNet has a high feature reuse rate, so the invention adopts an improved three-dimensional fully convolutional DenseNet model structure (referred to here as DenseNet_Base) to extract features from and segment the white matter images; instance normalization layers are added to alleviate data oscillation and slow model convergence during training.
The DenseNet_Base network adopted by the invention is divided into a down-sampling path and an up-sampling path connected by a dense block (DB). The dense block is the basic module of DenseNet; each DB used in the invention consists of 4 convolution modules, and the input of each layer includes the image features learned by all preceding layers. The up-sampling and down-sampling paths each consist of 3 DBs and corresponding transition modules, with one transition module (TD or TU) between every two DBs. The initial number of input feature maps and the growth rate of the network are 48 and 12, respectively.
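A compact PyTorch sketch of a dense block with instance normalization, of the kind the DenseNet_Base description implies (4 convolution modules per DB, each layer seeing the concatenation of all earlier features), is shown below. The layer sizes and the exact encoder/decoder wiring are assumptions for illustration, not the patented architecture.

import torch
import torch.nn as nn

class ConvModule(nn.Module):
    # InstanceNorm -> ReLU -> 3x3x3 convolution producing `growth_rate` new feature maps.
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.block = nn.Sequential(
            nn.InstanceNorm3d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, growth_rate, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.block(x)

class DenseBlock(nn.Module):
    # 4 convolution modules; each one receives the concatenation of all previous outputs.
    def __init__(self, in_ch, growth_rate=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            ConvModule(in_ch + i * growth_rate, growth_rate) for i in range(n_layers)
        )
    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# e.g. a block after a stem producing 48 feature maps, with growth rate 12
block = DenseBlock(in_ch=48, growth_rate=12)
out = block(torch.randn(1, 48, 32, 32, 32))   # -> shape (1, 48 + 4*12, 32, 32, 32)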
In a deep neural network, a large distribution difference between different types of input data causes data jitter that makes model training hard to converge; data normalization effectively alleviates this oscillation. The original DenseNet model uses batch normalization, which works well when the training batch is large. However, because of memory and computation limits in the segmentation task, typically only 1 image can be processed at a time, which renders batch normalization ineffective in this case. The invention therefore introduces instance normalization in place of the batch normalization layers of the original DenseNet model, solving the data jitter problem and accelerating the convergence of the white matter detection network. The instance normalization is computed as follows:
First, the mean μ_c of each white matter image is calculated along each channel:

    μ_c = (1 / (W·H)) · Σ_{i=1..W} Σ_{j=1..H} F_{c,i,j}

where the subscripts c, i, j denote the channel, width and height indices of the input white matter image, F denotes the pixel values of the input image, and W and H denote its width and height, respectively.

Next, the variance σ_c² of each white matter image is calculated along each channel:

    σ_c² = (1 / (W·H)) · Σ_{i=1..W} Σ_{j=1..H} (F_{c,i,j} − μ_c)²

Finally, the input white matter image is normalized to obtain the normalized data F̂:

    F̂_{c,i,j} = (F_{c,i,j} − μ_c) / √(σ_c² + ε)
where ε is a very small positive number that prevents the denominator from being 0. Adding instance normalization to each layer avoids vanishing and exploding gradients, reduces the network's dependence on weight initialization, accelerates convergence, and also acts as a regularizer, reducing the network's need for other measures against overfitting.
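The instance-normalization formulas above can be written directly in NumPy; the sketch below normalizes a single image, with the constant ε taken as an assumed small value.

import numpy as np

def instance_norm(F, eps=1e-5):
    # Normalize each channel of one image (C, H, W) to zero mean and unit variance.
    mu = F.mean(axis=(1, 2), keepdims=True)        # per-channel mean over width and height
    var = F.var(axis=(1, 2), keepdims=True)        # per-channel variance
    return (F - mu) / np.sqrt(var + eps)

x = np.random.rand(3, 128, 128).astype(np.float32)
x_hat = instance_norm(x)
print(x_hat.mean(axis=(1, 2)), x_hat.std(axis=(1, 2)))   # approximately 0 and 1 per channel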
Step 3: improve the Dice loss function by increasing the weight of the white matter lesion region, so that model training focuses more on feature learning in the lesion region. A multi-loss structure is constructed: the segmentation of the different types of white matter lesion regions in the MRI images is split into multiple output branches, so that the convolution kernels are trained in a more specialized way.
In current target segmentation tasks, the commonly used loss function is the Dice loss, calculated as:

    L_Dice = 1 − (2 · Σ_i p_i·g_i) / (Σ_i p_i + Σ_i g_i)
where p_i and g_i denote, respectively, the detection result of the white matter network and the value of the i-th voxel of the label. In three-dimensional white matter MRI, because of the specific nature of medical images, the white matter lesion region occupies a much smaller proportion of the whole image than in natural images, while the non-lesion region occupies a much larger proportion. With the traditional Dice loss, the network tends to learn the features of the non-lesion region during training and cannot effectively extract the features of the lesion region, leading to false detections and missed detections. Therefore, to improve the network's ability to learn the white matter lesion region, the traditional Dice loss is improved; the improved loss is calculated as:
    L_Dice_w = 1 − (2 · Σ_i p_i·g_i) / (Σ_i p_i + 3 · Σ_i g_i)
In the formula above, the region where g_i is non-zero corresponds to the white matter lesion area; the weight of the g_i term is increased so that the ratio of the prediction term to the label term in the loss function becomes 1:3. With this weighting, the loss function assigns a larger loss coefficient to the label, which strengthens the network's feature learning in white matter lesion regions, weakens the loss assigned to non-lesion regions, reduces the interference of the MRI background with lesion feature learning, and improves detection accuracy.
The invention uses a brain white matter segmentation data set whose goal is accurate segmentation of 3 nested lesion regions: relative to the lesion core region, the whole lesion region additionally contains edematous tissue, and relative to the enhancing lesion region, the lesion core region additionally contains necrotic and non-enhancing tissue. Because the white matter has only grey-level features and fuzzy boundaries, a single convolution kernel would have to distinguish the features of all regions, which makes accurate segmentation difficult. To address the difficulty of learning multi-region features with a single set of convolution kernels, the invention restructures the last layers of the network: 3 parallel branches are added after the last DB of DenseNet_Base. Each branch consists of 2 DB layers and a 1 × 1 convolution kernel, corresponding respectively to the 3 regions to be segmented: the whole lesion region, the lesion core region and the enhancing lesion region. Each branch uses the improved Dice loss of the invention as its loss function.
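A minimal PyTorch sketch of the weighted Dice loss as reconstructed above is shown below, with the label term up-weighted 3:1 relative to the prediction term and one loss per output branch; the exact weighting is an assumption inferred from the text.

import torch

def weighted_dice_loss(pred, target, label_weight=3.0, eps=1e-6):
    # Dice loss with the label term up-weighted so lesion voxels dominate the loss.
    pred, target = pred.flatten(), target.flatten()
    intersection = (pred * target).sum()
    denom = pred.sum() + label_weight * target.sum()
    return 1.0 - 2.0 * intersection / (denom + eps)

# one loss per output branch: whole lesion, lesion core, enhancing lesion
preds = [torch.sigmoid(torch.randn(1, 1, 32, 32, 32)) for _ in range(3)]
labels = [torch.randint(0, 2, (1, 1, 32, 32, 32)).float() for _ in range(3)]
total_loss = sum(weighted_dice_loss(p, g) for p, g in zip(preds, labels))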
Step 4: train the model on the training data with a suitable optimizer, learning rate and other hyperparameters until the loss function is sufficiently low; stop training once the model has converged and save the model. Feeding data into the saved model yields the output white matter MRI segmentation result.
b) Module for calculating the long and short diameters of the largest white matter lesion in the whole brain:
The implementation is as follows.
Calculating the maximum lesion long diameter (an illustrative code sketch follows these steps):
For each lesion region, let P be the voxel set of the segmented lesion region and let M = {m_1, m_2, m_3, ..., m_n} be the set of voxels on the lesion edge, where m_i ∈ R³. The following steps are performed iteratively:
(1) Arbitrarily select two points m_i(x_1, y_1, z_1), m_j(x_2, y_2, z_2) ∈ M, with i, j = 1 to n and i ≠ j, forming the segment m_i m_j.
    The segment can be written parametrically as (x, y, z) = m_i + t·(m_j − m_i), with t ∈ [0, 1].
(2) The slices of the MRI image can be denoted by z = n, n ∈ Z. Assuming z_1 ≤ z_2, for n ∈ [z_1, z_2] take the set U of intersection points of the segment m_i m_j with the MRI slices.
(3) Determine whether U ⊆ P, i.e. whether every intersection point lies within the lesion voxel set; if so, perform step (4); otherwise, perform step (5).
(4) Calculate the length |m_i m_j| of the segment m_i m_j:

    |m_i m_j| = √( (Δi·(x_1 − x_2))² + (Δi·(y_1 − y_2))² + (Δj·(z_1 − z_2))² )

where Δi denotes the in-plane resolution of the slice and Δj denotes the slice thickness.
(5) Determine whether all point-pair combinations in the set M have been processed; if so, perform step (6); otherwise, return to step (1).
(6) Calculate the maximum segment length L_max = max(|m_i m_j|); L_max is the maximum lesion long diameter.
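A brute-force sketch of steps (1)-(6) is given below, assuming the edge voxels are given in index coordinates and that Δi (in-plane resolution) and Δj (slice thickness) convert indices to millimetres; the inclusion test of step (3) is simplified to checking sampled points along the segment against the lesion voxel set.

import itertools
import numpy as np

def max_lesion_diameter(edge_voxels, lesion_voxels, d_i, d_j, n_samples=50):
    # edge_voxels: (n, 3) int array of (x, y, z); lesion_voxels: (m, 3) int array.
    scale = np.array([d_i, d_i, d_j], dtype=float)
    lesion = set(map(tuple, lesion_voxels))
    best = 0.0
    for a, b in itertools.combinations(np.asarray(edge_voxels), 2):
        # step (3): sample points along the segment and keep it only if it stays in the lesion
        ts = np.linspace(0.0, 1.0, n_samples)[:, None]
        points = np.rint(a + ts * (b - a)).astype(int)
        if not all(tuple(p) in lesion for p in points):
            continue
        # step (4): physical length with anisotropic voxel size
        length = float(np.linalg.norm((a - b) * scale))
        best = max(best, length)      # step (6)
    return best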
Short diameter calculation:

Let (m_p, m_q) = argmax over all point pairs of |m_i m_j|; then m_p(x_p, y_p, z_p) and m_q(x_q, y_q, z_q) are the two endpoints of the segment with the maximum lesion length. The midpoint m_c of the segment m_p m_q can be expressed as:

    m_c = ( (x_p + x_q)/2, (y_p + y_q)/2, (z_p + z_q)/2 )

The direction vector of the line m_p m_q is:

    v = (x_q − x_p, y_q − y_p, z_q − z_p)

The plane containing the short diameter is then:

    (x_q − x_p)(x − x_c) + (y_q − y_p)(y − y_c) + (z_q − z_p)(z − z_c) = 0

Take the intersection S of the voxels lying in this plane with the voxels in the set P, set P ← S, take the lesion-edge voxels in S as M, and obtain the lesion short diameter L_min in the same way as the maximum lesion length.
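A sketch of the short-diameter step under the same assumptions is shown below: lesion voxels within roughly half a voxel of the plane through the midpoint of the long axis are retained, and the same search is re-run on them. It reuses max_lesion_diameter from the previous sketch; longest_segment_endpoints is a hypothetical helper that would return the endpoints found in the long-diameter search.

import numpy as np

def lesion_short_diameter(edge_voxels, lesion_voxels, d_i, d_j):
    # endpoints of the longest segment (m_p, m_q), assumed available from the long-diameter search
    m_p, m_q = longest_segment_endpoints(edge_voxels, lesion_voxels, d_i, d_j)  # hypothetical helper
    m_c = (np.asarray(m_p, float) + np.asarray(m_q, float)) / 2.0
    v = np.asarray(m_q, float) - np.asarray(m_p, float)          # normal of the cutting plane
    lesion = np.asarray(lesion_voxels, dtype=float)
    # keep lesion voxels within ~half a voxel of the plane through m_c with normal v
    dist = np.abs((lesion - m_c) @ v) / (np.linalg.norm(v) + 1e-8)
    in_plane = lesion[dist <= 0.5].astype(int)
    # candidate endpoints are the in-plane voxels; the inclusion test still uses the full lesion set
    return max_lesion_diameter(in_plane, np.asarray(lesion_voxels, int), d_i, d_j)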
c) Module for calculating the total volume of intracranial white matter lesions:
The volume calculation formula is as follows:

    V_T = Σ_{i=1..n} S_i · (h + l)

where h is the slice thickness, S_i is the white matter lesion area of the i-th slice (i = 1, ..., n), l is the inter-slice gap, and V_T is the total volume.
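The volume formula reduces to summing the per-slice lesion areas and multiplying by the slice spacing; the sketch below assumes a binary lesion mask and a known in-plane pixel area as inputs (both are assumptions about how the segmentation output is stored).

import numpy as np

def total_lesion_volume(mask, pixel_area_mm2, slice_thickness_mm, slice_gap_mm=0.0):
    # mask: (n_slices, H, W) binary array; returns total lesion volume in mm^3.
    slice_areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * pixel_area_mm2   # S_i
    return float(slice_areas.sum() * (slice_thickness_mm + slice_gap_mm))        # V_T

mask = np.zeros((20, 256, 256), dtype=np.uint8)
mask[8:12, 100:120, 100:130] = 1
print(total_lesion_volume(mask, pixel_area_mm2=0.25, slice_thickness_mm=5.0, slice_gap_mm=1.0))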
5. Image structure interpretation module:
The medical image information and manually entered information extracted from the clinical information knowledge base module, the anatomical pattern map module and the standardized image feature description module are logically analysed and compared with preset information in a database to determine the correct disease category of the patient; the results of the quantitative calculation module are integrated, and the result is output in structured form through the report generation module (a minimal rule sketch follows the interpretation criteria below).
Interpretation criteria:
(1) Overall assessment is grade 1 or grade 2:
a) Age over 65, overall assessment grade 1: judged as scattered punctate abnormal signals in the white matter, consistent with age-related changes; please correlate clinically.
b) Age over 75, overall assessment grade 1 or 2: judged as scattered punctate abnormal signals in the white matter, consistent with age-related changes; please correlate clinically.
c) Grade 0: judged as no abnormal signal in the intracranial brain parenchyma.
(2) Overall assessment is grade 3:
a) If the lesion location is periventricular and the morphology shows the "Dawson's finger" sign; or the lesion distribution is cerebellum with a peripheral position; or the lesion distribution is brainstem with a peripheral position; or the lesion location is "juxtacortical"; or the lesion location is the "corpus callosum"; or the enhancement pattern is "open ring" — if any one of these is satisfied, the judgment is: tends toward a perivascular pattern, an inflammatory demyelinating lesion is possible (□ multiple sclerosis (MS) □ acute disseminated encephalomyelitis (ADEM) □ neuromyelitis optica (NMO) □ Lyme disease □ others [ ]); please correlate clinically.
b) If the lesion location is periventricular, the morphology is oval or fusiform, and there are no vascular risk factors in the clinical data, the judgment is: tends toward a vascular pattern, leukoaraiosis-type changes; please correlate clinically.
c) If the lesion location is periventricular, the morphology is oval or fusiform, and the clinical data indicate age over 65 or the presence of vascular risk factors, the judgment is: tends toward a vascular pattern, leukoaraiosis-type changes, suggesting possible ischaemic change; please correlate clinically.
d) If the lesion distribution is cerebellum or brainstem with a peripheral position; or the lesion distribution is the "basal ganglia region"; or the location is "subcortical", "deep subcortical non-edge region" or "subcortical edge region"; or the clinical data include "vascular risk factors"; or microbleeds are present — if any one of these is satisfied, the judgment is: tends toward a vascular pattern, suggesting arteriolar occlusive cerebral infarction (white matter changes associated with small-vessel disease).
e) On the basis of d), if the signals additionally include "DWI high signal" and "ADC low signal", the judgment is: tends toward a vascular pattern, suggesting recent arteriolar occlusive cerebral infarction (white matter changes associated with small-vessel disease).
f) If the signal is "FLAIR low signal", the judgment is: □ tends toward a perivascular space; □ tends toward a vascular pattern, suggesting an old arteriolar occlusive cerebral infarction (white matter changes associated with small-vessel disease).
g) If none of the above applies, select manually: □ tends toward a perivascular pattern, an inflammatory demyelinating lesion is possible (□ multiple sclerosis (MS) □ acute disseminated encephalomyelitis (ADEM) □ neuromyelitis optica (NMO) □ Lyme disease □ others [ ]); □ tends toward a vascular pattern, suggesting arteriolar occlusive cerebral infarction (white matter changes associated with small-vessel disease); □ others (e.g. diffuse axonal injury) [ ].
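A minimal sketch of how criteria such as (2)a)-(2)d) above could be expressed as data-driven rules checked against the collected fields is shown below; the field names and the rule encoding are illustrative assumptions, not the patented database format.

# Each rule: any one of its (field, value) conditions being present triggers the conclusion.
RULES = [
    ({("location", "periventricular + Dawson's finger"), ("location", "juxtacortical"),
      ("location", "corpus callosum"), ("enhancement", "open ring")},
     "tends toward a perivascular pattern; inflammatory demyelinating lesion possible"),
    ({("location", "basal ganglia region"), ("clinical", "vascular risk factors"),
      ("microbleeds", "present")},
     "tends toward a vascular pattern; arteriolar occlusive cerebral infarction suggested"),
]

def interpret(findings):
    # findings: set of (field, value) pairs extracted from the modules.
    conclusions = [text for conditions, text in RULES if conditions & findings]
    return conclusions or ["no preset rule matched; manual selection required"]

print(interpret({("enhancement", "open ring"), ("location", "periventricular + Dawson's finger")}))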
6. The neural network unit:
The options and the numerical input content of the clinical information knowledge base unit are encoded; an 8-layer BP neural network model is trained on a data set of clinical information and results from historical cases; the encodings of the knowledge item base selections, the manual input unit and the entered result are fed into the trained neural network model, which outputs a disease name for the patient as an auxiliary function. The working mode is as follows:
1) Encode the clinical information knowledge item base and the physician's entered result. One-hot codes combined with actual numerical values are used to jointly encode the options of the selection knowledge item base, the numerical manual input units (length, area, volume, etc.) and the physician's entered result (disease name), producing a multidimensional encoding vector. The dimension of the vector is the sum of the total number of options in the selection knowledge item base, the number of numerical manual input units, and the number of diseases in the list of potential result disease names.
For the selection knowledge item base, one-hot coding is used for its options. Suppose a question in the item base has n options in a fixed order [s_0, s_1, s_2, ..., s_{n-1}]; when the physician selects the i-th option, set s_i = 1 and s_j = 0 (j ≠ i), producing an n-dimensional vector. For a numerical manual input unit, the actual value is encoded directly: the value entered in the standard unit is used as the unit's code. For the physician's entered result, one-hot coding is used as well. Suppose the fixed-order list of potential result disease names contains m disease names, expressed as [k_0, k_1, ..., k_{m-1}]; when the physician's interpretation is the p-th result, a corresponding m-dimensional vector is produced by setting k_p = 1 and k_q = 0 (q ≠ p).
The three encoding vectors are concatenated in the order they appear in the clinical information item base to form an ordered N-dimensional encoding vector, of which the first N − m dimensions are the clinical information sample encoding and the last m dimensions are the sample label.
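A sketch of the mixed encoding described in step 1) is given below: one-hot vectors for the checklist selections, raw values for numeric entries, and a one-hot label for the entered disease name. The option counts and field layout are illustrative assumptions.

import numpy as np

def one_hot(index, length):
    v = np.zeros(length, dtype=np.float32)
    v[index] = 1.0
    return v

def encode_case(choice_selections, choice_sizes, numeric_values, disease_index, n_diseases):
    # choice_selections[i] is the selected option index for question i (choice_sizes[i] options).
    parts = [one_hot(sel, size) for sel, size in zip(choice_selections, choice_sizes)]
    parts.append(np.asarray(numeric_values, dtype=np.float32))       # lengths, areas, volumes ...
    parts.append(one_hot(disease_index, n_diseases))                 # last m dimensions: the label
    return np.concatenate(parts)

# e.g. 3 questions with 4/8/3 options, 3 numeric fields, 12 candidate diseases
vec = encode_case([1, 0, 2], [4, 8, 3], [2.0, 3.3, 13.3], disease_index=5, n_diseases=12)
print(vec.shape)   # (4 + 8 + 3) + 3 + 12 = 30 dimensions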
2) Encode historical cases in the manner of step 1). A large number of historical cases (clinical knowledge item base entries and the corresponding results) are encoded as in step 1) to produce a clinical case data set, which is split into a training set and a test set at a ratio of 8.5:1.5.
3) Build the neural network model, then train and test it. A feed-forward neural network of 8 layers of neurons is designed, with the following numbers of neurons from the input layer to the output layer: N − m (input layer), 128, 256, 512, 1024, 512, m (output layer). Every layer of neurons except the output layer performs an affine computation followed by batch normalization and a non-linear mapping. Dropout with probability 0.5 is applied after the affine computation in layers 4-7 to prevent overfitting of the neural network. A cross-entropy loss function is used at the output layer. The optimizer is stochastic gradient descent with an initial learning rate of 0.01 and a cosine learning-rate decay schedule.
32 not-yet-trained samples are randomly drawn from the training set each time and fed into the neural network for model training; during training, only the first N − m dimensions of each sample encoding vector are input, producing an m-dimensional model prediction. The model prediction is one-hot encoded as follows: the largest element is set to 1 and all other elements to 0. Cross-entropy loss is computed between the one-hot-encoded prediction and the last m dimensions (the sample label) of the corresponding encoding vector, and the optimizer updates the model parameters. After all training data have been passed through once, the learning rate is updated and the test-set samples are fed to the model to obtain predictions; only the first N − m dimensions are input, yielding an m-dimensional one-hot prediction that is compared with the last m dimensions of the corresponding encoding vector. If the two are identical, the prediction is correct; otherwise it is wrong.
The training-set and test-set data are repeatedly fed into the neural network model for iterative training and testing, and the model and its parameters are saved at the point of maximum test accuracy. The test accuracy is the number of correctly predicted samples in the test set divided by the total number of test samples.
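A PyTorch sketch of the feed-forward classifier and training loop of step 3) is given below. The hidden sizes follow the text, while the batch-normalization placement, the dropout layers and the scheduler call are standard library components used here as assumptions.

import torch
import torch.nn as nn

def build_model(n_inputs, n_diseases):
    sizes = [n_inputs, 128, 256, 512, 1024, 512]
    layers = []
    for idx, (a, b) in enumerate(zip(sizes[:-1], sizes[1:]), start=1):
        layers += [nn.Linear(a, b), nn.BatchNorm1d(b), nn.ReLU()]
        if idx >= 4:                      # dropout after the affine step in the deeper layers
            layers.append(nn.Dropout(0.5))
    layers.append(nn.Linear(sizes[-1], n_diseases))   # output layer
    return nn.Sequential(*layers)

N_minus_m, m = 60, 12
model = build_model(N_minus_m, m)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, N_minus_m)                 # 32 samples, first N - m dimensions
y = torch.randint(0, m, (32,))                 # index of the labelled disease
for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                           # cosine learning-rate decay per epoch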
4) Generate the clinical information encoding from the physician's entries for the case in the knowledge item base, feed it into the saved model, and output the predicted disease name. Each time the physician fills in the clinical information item base according to the clinical features of a case, the entries are encoded in sequence as in step 1) and input into the neural network model saved in step 3), which outputs the encoding of the predicted result. Given the model's predictive encoding, if the z-th element is the maximum, the z-th disease name in the list of potential result disease names is selected as the suggested result.
7. Report generation module
The output content includes a typical lesion image and an image pattern map; the clinical information content; structured terms for the lesion location; the quantitative analysis result values; and standardized report content. The preset anatomical structures, lesion morphology, lesion signal terms, output results and the like are set in the computer in advance, which avoids typing errors and non-standard wording, and a pattern map and a typical image of the imaging findings are output. The anatomical structure and imaging features of the lesion are output layer by layer in a standard form, and through human-computer interaction the report content and the precise lesion size values are output in a standard format and a standardized writing style.
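A small sketch of standardized report assembly from the stored fields, using a fixed template, is shown below; the wording and field names are illustrative, not the patented report format.

REPORT_TEMPLATE = (
    "Clinical data: age {age}; vascular risk factors: {risk_factors}.\n"
    "Imaging findings: ARWMC grade {grade}; largest lesion in the {location}, "
    "long diameter {long_d} cm, short diameter {short_d} cm; "
    "total intracranial lesion volume {volume} cm^3.\n"
    "Impression: {impression}"
)

fields = {
    "age": "under 65", "risk_factors": "none", "grade": 2,
    "location": "left parietal lobe", "long_d": 2.0, "short_d": 3.3,
    "volume": 13.3, "impression": "tends toward a perivascular pattern; consider multiple sclerosis",
}
print(REPORT_TEMPLATE.format(**fields))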
Drawings
FIG. 1 is a frontal lobe configuration tomographic view of the present application;
FIG. 2 is a tomographic view of the basal ganglia structure of the present application;
FIG. 3 is a frontal temporal occipital lobe structure tomographic view of the present application;
FIG. 4 is a diagram of the structure of the cerebellum and brainstem of the present application;
FIG. 5 is a lesion location distribution indicator map of the present application;
FIG. 6 is a schematic illustration of lesion distribution according to an example embodiment of the present application;
FIG. 7 is a schematic view of lesion location according to an example embodiment of the present application;
FIG. 8 is a diagram of an example of an examination report form.
Description of reference numerals: in FIGS. 1-4: 1 right frontal lobe, 2 left frontal lobe, 3 right parietal lobe, 4 left parietal lobe, 5 right temporal lobe, 6 left temporal lobe, 7 right occipital lobe, 8 left occipital lobe, 9 right basal ganglia, 10 left basal ganglia, 11 right thalamus, 12 left thalamus, 13 corpus callosum, 14 right insular lobe, 15 left insular lobe, 16 right cerebellum, 17 left cerebellum, 18 brainstem, 19 juxtacortical, 20 subcortical or deep subcortical, 21 periventricular.
Detailed Description
The invention is further illustrated by the following examples.
1. Establishing a clinical information knowledge base module:
The scope of use of this structured report is MRI examination of white matter lesions of the brain. The first step is to determine the patient's age, e.g. age: ■ under 65 years (selected) □ 65-75 years □ over 75 years. The second step is to determine whether the patient has vascular risk factors: ■ none (selected) □ hypertension □ hyperlipidaemia □ diabetes □ smoking history □ obesity □ others (e.g. hypercoagulable state of the blood, vasculitis, migraine, etc. [ ]) □ not specified. The third step is to determine whether there is any other relevant clinical history: others: [ ].
2. Anatomical pattern map module
The computer displays the pattern map module, showing tomographic schematic diagrams of each brain anatomical structure. After reading the images, the radiologist clicks with the mouse on the distribution and position of the white matter lesions; the brain regions beside the ventricles of both parietal lobes are highlighted in colour, the lesion position is precisely located, and the module is connected to the report generation module, which outputs a schematic diagram of the lesion position, as shown in FIG. 7.
3. Image characteristic standard description module:
a. Signal:
T1: □ high □ iso ■ low (selected);
T2: ■ high (selected) □ iso □ low;
T2-FLAIR: ■ high (selected) □ iso □ low □ low center with high rim;
DWI: □ high ■ iso (selected) □ low;
ADC: □ high ■ iso (selected) □ low;
b. Morphology: □ round/punctate ■ oval/fusiform (■ Dawson's finger sign) (selected) □ irregular;
c. Enhancement pattern: □ no enhancement ■ open ring (selected) □ homogeneous enhancement □ heterogeneous enhancement □ punctate enhancement;
d. Quantitative analysis of white matter lesions: the largest lesion is located in the [left parietal lobe], long diameter [2.0] cm, short diameter [3.3] cm, total volume of intracranial lesions [13.3] cm³;
e. Microbleeds: ■ none (selected) □ present, fewer than 5 (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum); may be selected simultaneously) □ multiple (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum));
f. Other imaging findings: [none].
4. Image symptom interpretation module
The information from the clinical information knowledge base module, the anatomical pattern map module and the standardized image feature description module is collated and logically analysed; the medical image information, the automatically calculated lesion parameters and the manually entered information are extracted, automatically compared with the preset information in the database of the computer module, and the disease judgment is output.
Specifically, in this first embodiment the key information is: the patient's lesions are located beside the ventricles of the frontal lobes; the MRI signals are T1 low signal, T2 high signal, T2-FLAIR high signal, DWI iso signal and ADC iso signal; the lesion morphology is oval with the Dawson's finger sign; and the enhancement pattern is open-ring enhancement. Compared against the built-in module, this matches the judgment "tends toward a perivascular pattern, an inflammatory demyelinating lesion is possible (multiple sclerosis)", and the imaging findings and the disease judgment are output to the report generation module.
5. A neural network module:
The options and the numerical input content of the clinical information knowledge base unit are encoded, and an 8-layer BP neural network model trained on the clinical information and results of historical cases outputs a suggested disease name for the patient as an auxiliary function. The encoding scheme, data set construction, model architecture, training and testing procedure, and prediction step are identical to steps 1)-4) of the neural network unit described in section 6 above.
6. A report generation module:
The report generation module is connected to the clinical information knowledge base module, the anatomical pattern map module, the standardized image feature description module, the neural network unit and the image interpretation module, and outputs the image pattern map; the clinical information content; the structured terms for the lesion location; the disease diagnosis; and the standardized report content. Specifically, in this first embodiment the generated report includes:
Clinical data:
1. Age: ■ under 65 years □ 65-75 years □ over 75 years;
2. Vascular risk factors: ■ none □ hypertension □ hyperlipidaemia □ diabetes □ smoking history □ obesity □ others (e.g. hypercoagulable state of the blood, vasculitis, migraine, etc. [ ]) □ not specified;
3. Others: [ ].
Imaging findings:
1. Overall assessment (ARWMC scale score): □ grade 0 (no abnormal signal) □ grade 1 (scattered punctate) ■ grade 2 (partially confluent) □ grade 3 (confluent);
2. focal lesion:
2.1. distribution: as shown in fig. 6.
2.2. Position: as shown in fig. 7.
2.3. Signal:
T1: □ high □ iso ■ low;
T2: ■ high □ iso □ low;
T2-FLAIR: ■ high □ iso □ low □ low center with high rim;
DWI: □ high ■ iso □ low;
ADC: □ high ■ iso □ low;
2.4. Morphology: □ round/punctate ■ oval/fusiform (■ Dawson's finger sign) □ irregular;
2.5. Enhancement pattern: □ no enhancement ■ open ring □ homogeneous enhancement □ heterogeneous enhancement □ punctate enhancement;
2.6. Quantitative analysis of white matter lesions: the largest lesion is located in the [left parietal lobe], long diameter [2.0] cm, short diameter [3.3] cm, total volume of intracranial lesions [13.3] cm³;
3. Microbleeds: ■ none □ present, fewer than 5 (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum); may be selected simultaneously) □ multiple (□ lobar (including centrum semiovale) □ deep (basal ganglia, thalamus, brainstem, cerebellum));
4. Other imaging findings: [none].
Impression:
Multiple high-signal lesions beside the ventricles of both parietal lobes tend toward a perivascular pattern and are considered to represent multiple sclerosis. The final report is generated as shown in FIG. 8.
Although the present invention has been described with reference to preferred embodiments, it is not limited to them. Those skilled in the art may make possible variations and modifications using the methods and technical content disclosed above without departing from the spirit and scope of the invention; therefore any simple modification, equivalent change or adaptation of the above embodiments based on the technical essence of the present invention falls within the scope of protection of the present invention.

Claims (5)

1. An automatic quantitative analysis system for white matter lesions of the brain, characterized in that it comprises a clinical information knowledge base module, an anatomical pattern map module, a standardized image feature description module, a quantitative image data calculation module and a report generation module; the clinical information knowledge base module comprises a selection knowledge item base unit and a manual input unit, the selection knowledge item base unit providing common clinical input options and the manual input unit receiving clinical and medical history data related to the patient's images; the anatomical pattern map module comprises a visual intracranial tomographic map, a dot-map style anatomical diagram displayed as plane figures that precisely defines the distribution and position of lesions, the brain structures in the map comprising the left frontal lobe, left parietal lobe, left occipital lobe, left temporal lobe, left insular lobe, left basal ganglia, left thalamus, right frontal lobe, right parietal lobe, right occipital lobe, right temporal lobe, right insular lobe, right basal ganglia, right thalamus, left cerebellum, right cerebellum, corpus callosum and brainstem; the standardized image feature description module comprises a human-computer interaction interface with a preset indication part and an input part; and the quantitative image data calculation module comprises an intracranial white matter lesion segmentation module, a module for calculating the long and short diameters of the largest white matter lesion in the whole brain, and a module for calculating the total volume of intracranial white matter lesions.
2. The automatic quantitative analysis system for white matter lesions of the brain according to claim 1, further comprising a neural network unit and/or an image structure interpretation module.
3. The analysis method of the automatic quantitative analysis system for white matter lesions of the brain according to claim 1, characterized in that the clinical information knowledge base module provides a selection knowledge item base and manual input, the selection knowledge item base unit provides common clinical input options, and the manual input unit supplies supplementary content; common input content is first chosen from the selection knowledge item base, and if it cannot meet the requirements, the manual input unit is used as a supplement; the intracranial white matter lesion segmentation module segments the white matter lesions in the MRI image, the calculation module for the long and short diameters of the largest white matter lesion of the whole brain calculates the long diameter and short diameter of the largest lesion, and the calculation module for the total volume of intracranial white matter lesions calculates the total intracranial white matter lesion volume.
4. The method as claimed in claim 3, wherein the correct disease category of the patient is determined by an image structure interpretation module.
5. The method as claimed in claim 3, wherein the name of the disease of the patient is outputted through a neural network unit.
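As an informal illustration of the method of claim 3 (not the patented implementation), the sketch below chains a crude stand-in for the intracranial white matter lesion segmentation module, here a simple intensity threshold on a skull-stripped FLAIR volume, with a total-volume computation; the resulting mask could equally be passed to a diameter/volume routine such as the one sketched after the report example above. The threshold value and the assumption of a skull-stripped input are illustrative only.

    import numpy as np

    def analyse_flair(flair, voxel_size_mm, z_thresh=3.0):
        """Stand-in pipeline: 'segment' white matter hyperintensities by
        thresholding, then compute the total lesion volume in cm^3."""
        brain = flair > 0                                   # assumes skull-stripped FLAIR
        mu = float(flair[brain].mean())
        sigma = float(flair[brain].std())
        mask = brain & (flair > mu + z_thresh * sigma)      # hyperintense voxels
        total_cm3 = mask.sum() * float(np.prod(voxel_size_mm)) / 1000.0
        return mask, total_cm3

    # usage with a synthetic volume (1 mm isotropic voxels)
    volume = np.abs(np.random.randn(32, 64, 64))
    mask, total_cm3 = analyse_flair(volume, (1.0, 1.0, 1.0))
    print(round(total_cm3, 2))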
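Similarly, for claim 5, the toy PyTorch network below illustrates the kind of neural network unit that could map an MRI volume to a disease name; the label set, architecture and input sizes are assumptions for demonstration, not the network disclosed in the patent.

    import torch
    import torch.nn as nn

    DISEASES = ["multiple sclerosis", "ischemic white matter lesion", "other"]  # assumed labels

    class LesionClassifier(nn.Module):
        """Toy 3-D CNN: single-channel MRI volume -> disease-name logits."""
        def __init__(self, n_classes=len(DISEASES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):                      # x: (batch, 1, depth, height, width)
            return self.classifier(self.features(x).flatten(1))

    # usage: predicted disease name from one (untrained) forward pass
    model = LesionClassifier().eval()
    with torch.no_grad():
        logits = model(torch.randn(1, 1, 32, 64, 64))
    print(DISEASES[logits.argmax(dim=1).item()])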
CN202110315174.7A 2021-03-24 2021-03-24 Automatic quantitative analysis system and interpretation method for white matter lesions of brain Active CN113077887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110315174.7A CN113077887B (en) 2021-03-24 2021-03-24 Automatic quantitative analysis system and interpretation method for white matter lesions of brain

Publications (2)

Publication Number Publication Date
CN113077887A true CN113077887A (en) 2021-07-06
CN113077887B (en) 2022-09-02

Family

ID=76610049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110315174.7A Active CN113077887B (en) 2021-03-24 2021-03-24 Automatic quantitative analysis system and interpretation method for white matter lesions of brain

Country Status (1)

Country Link
CN (1) CN113077887B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160220168A1 (en) * 2013-09-20 2016-08-04 John D. Port Systems and methods for producing imaging biomarkers indicative of a neurological disease state using gray matter suppressions via double inversion-recovery magnetic resonance imaging
TW201625182A (en) * 2015-01-05 2016-07-16 國立中央大學 White matter hyperintensities region of magnetic resonance imaging recognizing method and system
CN105631930A (en) * 2015-11-27 2016-06-01 广州聚普科技有限公司 DTI (Diffusion Tensor Imaging)-based cranial nerve fiber bundle three-dimensional rebuilding method
US20180033144A1 (en) * 2016-09-21 2018-02-01 Realize, Inc. Anomaly detection in volumetric images
CN107463786A (en) * 2017-08-17 2017-12-12 王卫鹏 Medical image Knowledge Base based on structured report template
CN110322444A (en) * 2019-05-31 2019-10-11 上海联影智能医疗科技有限公司 Medical image processing method, device, storage medium and computer equipment
CN110288587A (en) * 2019-06-28 2019-09-27 重庆同仁至诚智慧医疗科技股份有限公司 A kind of lesion recognition methods of cerebral arterial thrombosis nuclear magnetic resonance image
CN110710986A (en) * 2019-10-25 2020-01-21 华院数据技术(上海)有限公司 CT image-based cerebral arteriovenous malformation detection method and system
CN111223085A (en) * 2020-01-09 2020-06-02 北京安德医智科技有限公司 Head medical image auxiliary interpretation report generation method based on neural network
CN111292821A (en) * 2020-01-21 2020-06-16 上海联影智能医疗科技有限公司 Medical diagnosis and treatment system
CN111476774A (en) * 2020-04-07 2020-07-31 广州柏视医疗科技有限公司 Intelligent sign recognition device based on novel coronavirus pneumonia CT detection
CN111832644A (en) * 2020-07-08 2020-10-27 北京工业大学 Brain medical image report generation method and system based on sequence level
CN112085695A (en) * 2020-07-21 2020-12-15 上海联影智能医疗科技有限公司 Image processing method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. QUDDUS et al.: "Adaboost and Support Vector Machines for White Matter Lesion Segmentation in MR Images", 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, 10 August 2006 (2006-08-10) *
MU Jianhua: "Application of different radiomics models based on conventional MRI images in preoperative grading of brain glioma", 《磁共振成像》 (Chinese Journal of Magnetic Resonance Imaging), vol. 11, no. 01, 19 January 2020 (2020-01-19), pages 55-59 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962958A (en) * 2021-10-21 2022-01-21 四川大学华西医院 Symptom detection method and device
CN113962958B (en) * 2021-10-21 2023-05-05 四川大学华西医院 Sign detection method and device
CN116612885A (en) * 2023-04-26 2023-08-18 浙江大学 Prediction device for acute exacerbation of chronic obstructive pulmonary disease based on multiple modes
CN116612885B (en) * 2023-04-26 2024-03-22 浙江大学 Prediction device for acute exacerbation of chronic obstructive pulmonary disease based on multiple modes

Also Published As

Publication number Publication date
CN113077887B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN104414636B (en) Cerebral microbleeds computer-aided detection system based on MRI
CN113077887B (en) Automatic quantitative analysis system and interpretation method for white matter lesions of brain
WO2013088144A1 (en) Probability mapping for visualisation and analysis of biomedical images
US11241190B2 (en) Predicting response to therapy for adult and pediatric crohn's disease using radiomic features of mesenteric fat regions on baseline magnetic resonance enterography
CN112735569B (en) System and method for outputting glioma operation area result before multi-modal MRI of brain tumor
CN113284126B (en) Method for predicting hydrocephalus shunt operation curative effect by artificial neural network image analysis
CN114881968A (en) OCTA image vessel segmentation method, device and medium based on deep convolutional neural network
Savaş et al. Comparison of deep learning models in carotid artery intima-media thickness ultrasound images: Caimtusnet
CN112633416A (en) Brain CT image classification method fusing multi-scale superpixels
CN115311193A (en) Abnormal brain image segmentation method and system based on double attention mechanism
CN112508902A (en) White matter high signal grading method, electronic device and storage medium
CN116864104A (en) Chronic thromboembolic pulmonary artery high-pressure risk classification system based on artificial intelligence
Wang et al. Adaptive Weights Integrated Convolutional Neural Network for Alzheimer's Disease Diagnosis
CN111971751A (en) System and method for evaluating dynamic data
Wang et al. Assessment of stroke risk using MRI-VPD with automatic segmentation of carotid plaques and classification of plaque properties based on deep learning
CN113096796B (en) Intelligent prediction system and method for cerebral hemorrhage hematoma expansion risk
CN112863648B (en) Brain tumor postoperative MRI (magnetic resonance imaging) multi-mode output system and method
CN111184948B (en) Vascular targeted photodynamic therapy-based nevus flammeus treatment method and system
Wang et al. DSA image analysis of clinical features and nursing care of cerebral aneurysm patients based on the deep learning algorithm
Nugroho et al. Quad Convolutional Layers (QCL) CNN Approach for Classification of Brain Stroke in Diffusion Weighted (DW)-Magnetic Resonance Images (MRI).
CN112599216B (en) Brain tumor MRI multi-mode standardized report output system and method
CN113160256B (en) MR image placenta segmentation method for multitasking countermeasure model
CN112863649B (en) System and method for outputting intravitreal tumor image result
Jiang et al. Automatic Visual Acuity Loss Prediction in Children with Optic Pathway Gliomas using Magnetic Resonance Imaging
Abonyi et al. Texture analysis of sonographic image of placenta in pregnancies with normal and adverse outcomes, a pilot study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant