CN113077479A - Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus - Google Patents
Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus
- Publication number
- CN113077479A (application number CN202110320238.2A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- image
- brain parenchyma
- sagittal plane
- stroke
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11 - Region-based segmentation
- G06F18/2135 - Feature extraction based on approximation criteria, e.g. principal component analysis
- G06N3/045 - Combinations of networks
- G06N3/084 - Backpropagation, e.g. using gradient descent
- G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
- G06T2207/10081 - Computed x-ray tomography [CT]
- G06T2207/30016 - Brain
Abstract
The invention discloses an automatic segmentation method for acute ischemic stroke lesions, comprising the following steps: registering the head CT image; removing the skull to obtain a three-dimensional brain-parenchyma mask; fitting an ellipse to the mask via PCA feature dimensionality reduction to obtain mid-sagittal-plane data; computing a left-right brain symmetry parameter map; applying several window width/window level settings to the mid-sagittal-plane data to obtain multi-level windowed image data; labeling training data; training a segmentation convolutional neural network model; and feeding test data into the trained model to obtain, for each voxel, segmentation probability maps for brain parenchyma, cerebrospinal fluid and stroke lesion, together with a per-slice prediction probability that a stroke lesion is present. The segmentation result is then output according to that prediction probability. By combining prior knowledge of brain anatomy in a multi-task training scheme, the method improves the performance of convolutional neural networks on automatic NCCT stroke-lesion segmentation and can segment acute ischemic stroke lesions accurately.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to an automatic segmentation method, system, terminal and medium for acute ischemic stroke lesions.
Background
"Cerebral stroke" is an acute cerebrovascular disease: a group of disorders in which brain tissue is damaged either by the sudden rupture of a cerebral vessel or because vascular occlusion prevents blood from reaching the brain. It comprises ischemic and hemorrhagic stroke.
In 2018 the stroke mortality rate in China was 149.49 per 100,000, accounting for 22.3% of all resident deaths, making stroke the leading cause of premature death and disease burden. Hospitalized stroke patients have an average age of 66 years; the predominant types are cerebral infarction, cerebral hemorrhage and subarachnoid hemorrhage, accounting for 81.9%, 14.9% and 3.2% of cases respectively.
Acute ischemic stroke (acute cerebral infarction) is the most common type of stroke, accounting for 69.6-70.9% of strokes in China. The definition of the acute phase is not uniform; it generally means within 2 weeks of onset, within 1 week for mild cases and within 1 month for severe ones. Among hospitalized acute ischemic stroke patients in China, the case-fatality rate within 1 month of onset is about 2.3-3.2%; at 3 months the fatality rate is 9-9.6% and the combined fatality/disability rate is 34.5-37.1%; at 1 year the fatality rate is 14.4-15.4% and the fatality/disability rate is 33.4-33.8%.
Acute non-contrast CT (NCCT) can accurately identify the vast majority of intracranial hemorrhages and helps to rule out non-vascular lesions (such as brain tumors). It is the preferred imaging examination for patients with suspected stroke and the most common one for assessing stroke lesion volume. However, lesion segmentation still relies on manual delineation by radiologists, which is time-consuming and subject to large inter-observer variability.
Existing semi-automatic lesion segmentation tools require human-machine interaction, which can introduce bias. Deep convolutional neural networks (CNNs) show excellent performance in a variety of medical-image segmentation tasks because they can learn complex patterns and relationships in the data. However, most existing work on CNN-based stroke-lesion segmentation only considers adding three-dimensional spatial structure information and makes little use of prior knowledge about NCCT images and stroke.
Disclosure of Invention
To address these shortcomings of the prior art, the invention provides an automatic segmentation method, system, terminal and medium for acute ischemic stroke lesions. Multi-task training combined with prior knowledge of brain anatomy improves the performance of the convolutional neural network on automatic NCCT stroke-lesion segmentation, so that acute ischemic stroke lesions can be segmented accurately.
In a first aspect, an embodiment of the present invention provides an automatic segmentation method for acute ischemic stroke lesions, comprising the following steps:
rigidly registering the patient's head CT image to a standard space;
removing the skull to obtain a three-dimensional brain-parenchyma mask;
fitting an ellipse to the brain-parenchyma mask via PCA (principal component analysis) feature dimensionality reduction to obtain mid-sagittal-plane data;
horizontally flipping the mid-sagittal-plane data to obtain a mirrored image, and subtracting the mirrored image from the mid-sagittal-plane data pixel by pixel to obtain a left-right brain symmetry parameter map;
applying different window width/window level settings to the mid-sagittal-plane data to obtain multi-level windowed image data;
manually labeling the mid-sagittal-plane data to obtain training data for a convolutional neural network, marking brain parenchyma, cerebrospinal fluid and stroke lesions within the brain-parenchyma region on each two-dimensional image, and deriving a binary label indicating whether each two-dimensional image contains a stroke lesion;
constructing a segmentation convolutional neural network model and training it with the symmetry parameter map and the multi-level windowed image data as inputs and the two-dimensional labeling results as targets, to obtain a trained segmentation model;
feeding the symmetry parameter map and the multi-level windowed image data into the trained model to obtain, for each voxel, segmentation probability maps for brain parenchyma, cerebrospinal fluid and stroke lesion, together with a prediction probability that the corresponding two-dimensional image contains a stroke lesion;
and outputting the segmentation result according to that prediction probability.
In a second aspect, an embodiment of the present invention provides an automatic segmentation system for acute ischemic stroke lesions, comprising a rigid registration module, a brain-parenchyma extraction module, a mid-sagittal-plane data acquisition module, a symmetry-parameter-map calculation module, a multi-level windowed-image generation module, a segmentation-model prediction module and a result output module.
The rigid registration module rigidly registers the patient's head CT image to a standard space.
The brain-parenchyma extraction module removes the skull to obtain a three-dimensional brain-parenchyma mask.
The mid-sagittal-plane data acquisition module fits an ellipse to the brain-parenchyma mask via PCA (principal component analysis) feature dimensionality reduction to obtain mid-sagittal-plane data.
The symmetry-parameter-map calculation module horizontally flips the mid-sagittal-plane data to obtain a mirrored image and subtracts it pixel by pixel from the mid-sagittal-plane data to obtain a left-right brain symmetry parameter map.
The multi-level windowed-image generation module applies different window width/window level settings to the mid-sagittal-plane data to obtain multi-level windowed image data.
The segmentation-model prediction module constructs a segmentation convolutional neural network model, trains it with the symmetry parameter map and the multi-level windowed image data as inputs and the two-dimensional labeling results as targets, and then feeds the symmetry parameter map and the multi-level windowed image data into the trained model to obtain, for each voxel, segmentation probability maps for brain parenchyma, cerebrospinal fluid and stroke lesion, together with a prediction probability that the corresponding two-dimensional image contains a stroke lesion.
The result output module outputs the segmentation result according to that prediction probability.
In a third aspect, an intelligent terminal provided in an embodiment of the present invention includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiment.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions, which, when executed by a processor, cause the processor to execute the method described in the above embodiment.
The beneficial effects of the invention are as follows:
The method, system, terminal and medium for automatic segmentation of acute ischemic stroke lesions provided by the embodiments of the invention combine prior knowledge of brain anatomy in a multi-task training scheme, improving the performance of the convolutional neural network on automatic NCCT stroke-lesion segmentation so that acute ischemic stroke lesions can be segmented accurately.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart illustrating an automatic segmentation method for an acute ischemic stroke lesion according to a first embodiment of the present invention;
FIG. 2 shows a schematic diagram of spatial normalization-rigid body registration in a first embodiment of the present invention;
FIG. 3 is a diagram showing a process of extracting brain parenchyma according to a first embodiment of the present invention;
FIG. 4 shows a process diagram of median sagittal plane data obtained in a first embodiment of the present invention;
FIG. 5 is a process diagram of a calculated symmetry parameter map according to a first embodiment of the present invention;
FIG. 6 illustrates a multi-level window-width level image obtained in a first embodiment of the present invention;
FIG. 7 is a diagram illustrating labeling of training data according to a first embodiment of the present invention;
FIG. 8 is a diagram showing the construction of a segmented convolutional neural network model in the first embodiment of the present invention;
FIG. 9 shows a graph of raw test data in a first embodiment of the invention;
fig. 10 is a diagram illustrating a primary segmentation result of a stroke focus according to a first embodiment of the present invention;
FIG. 11 is a diagram showing the adjusted segmentation result in the first embodiment of the present invention;
fig. 12 is a block diagram illustrating an automatic segmentation system for an acute ischemic stroke lesion according to a second embodiment of the present invention;
fig. 13 shows a block diagram of an intelligent terminal according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Fig. 1 is a flowchart illustrating an automatic segmentation method for an acute ischemic stroke lesion according to a first embodiment of the present invention, including the following steps:
and S1, rigid body registration of the patient skull CT image to a standard space.
Specifically, the CT of the skull of the patient is panned to the original image and the rigid body is registered to the template, so that all the panned data have a uniform positioning, as shown in fig. 2.
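The patent applies rigid-body registration (fig. 2) without specifying an implementation. As a minimal sketch, the rigid-body model itself can be expressed in numpy as a rotation plus a translation in homogeneous coordinates; the function names are illustrative, and a real pipeline would estimate the six parameters (three rotations, three translations) against a CT template with a registration toolkit such as SimpleITK or ANTs.

```python
import numpy as np

def rigid_transform(angle_z_deg, translation):
    """Build a 4x4 rigid-body matrix: rotation about the z-axis plus a
    translation. A full registration would optimize all six parameters
    (three rotations, three translations) against the template."""
    t = np.deg2rad(angle_z_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    mat = np.eye(4)
    mat[:3, :3] = rot
    mat[:3, 3] = translation
    return mat

def apply_to_points(mat, pts):
    """Map an Nx3 array of voxel coordinates through the rigid matrix."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ mat.T)[:, :3]
```

Composing such matrices is what gives every registered scan "a uniform positioning" in the standard space.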
And S2, removing the skull to obtain a brain parenchyma three-dimensional mask.
Specifically, the skull is extracted by thresholding at the minimum HU value of bone: the image is binarized with the skull as foreground and everything else as background; the foreground is dilated to close the cranial cavity; a seed point is placed inside the skull and region growing merges pixels similar to the seed to obtain the brain parenchyma; finally, a filling algorithm completes the brain cavity and the mask is dilated back toward the original skull, as shown in fig. 3.
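The threshold-and-grow idea above can be sketched on a 2D toy slice. The bone threshold of 300 HU is an assumption (the patent only says "the minimum HU value of the skull"), and a real implementation would work in 3D with morphological dilation and hole filling (e.g. scipy.ndimage); here a simple 4-connected flood fill stands in for the region-growing step.

```python
import numpy as np
from collections import deque

BONE_HU = 300  # assumed minimum HU of skull bone (not specified in the patent)

def region_grow(growable, seed):
    """4-connected flood fill over `growable` (True = may be merged),
    starting from `seed`. Returns the grown boolean region."""
    grown = np.zeros_like(growable, dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if (0 <= r < growable.shape[0] and 0 <= c < growable.shape[1]
                and growable[r, c] and not grown[r, c]):
            grown[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grown

def brain_mask(ct_slice, seed):
    """Threshold out bone, then grow the intracranial region from a seed
    placed inside the skull; the bone ring stops the growth."""
    not_bone = ct_slice < BONE_HU
    return region_grow(not_bone, seed)
```

On a slice where a ring of bone (about 1000 HU) encloses soft tissue (about 30 HU), growing from an interior seed recovers only the intracranial pixels.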
And S3, performing ellipse fitting on the brain parenchyma three-dimensional mask by PCA characteristic dimensionality reduction to obtain median sagittal plane data.
Specifically, based on the brain-parenchyma mask, the parenchymal volume of each image slice is computed and the slice with the largest parenchymal area is selected. From its mask, the coordinate matrix X of the non-zero pixels is collected, and the mean vector m_X of the horizontal and vertical coordinates and the covariance matrix P_X are computed. Solving the eigen-equation of the covariance matrix yields the eigenvalues λ1, λ2 (the semi-major and semi-minor axes of the fitted ellipse) and the corresponding eigenvectors e1, e2 (the directions of those axes); the rotation angle θ in the image coordinate system, i.e. the angle of the mid-sagittal plane (relative to the ideal coordinate system) perpendicular to the reference slice, is then obtained with an inverse trigonometric function. The major-axis coordinates of the fitted ellipse are located from the non-zero pixel coordinate matrix X, and the three-dimensional image is rotated using the major-axis coordinates and the angle θ to obtain the mid-sagittal-plane data, as shown in fig. 4.
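The eigen-decomposition step can be sketched as follows. This toy version returns only the angle θ of the ellipse's major axis; a full implementation would also use the eigenvalues λ1, λ2 as the ellipse semi-axes and rotate the 3D volume by θ about the located axis.

```python
import numpy as np

def midsagittal_angle(mask):
    """PCA ellipse fit of a binary brain-parenchyma slice mask.
    Returns the in-plane angle (degrees, in [0, 180)) of the major
    axis, i.e. the tilt of the mid-sagittal plane in image coords."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)  # coordinate matrix X
    mean = coords.mean(axis=0)                         # mean vector m_X
    cov = np.cov((coords - mean).T)                    # covariance matrix P_X
    lam, vecs = np.linalg.eigh(cov)                    # eigenvalues lam1 <= lam2
    e_major = vecs[:, np.argmax(lam)]                  # major-axis direction
    theta = np.degrees(np.arctan2(e_major[1], e_major[0]))
    return theta % 180.0
```

For a mask elongated along the image's vertical axis the recovered angle is 90 degrees, as expected.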
And S4, calculating a symmetry parameter graph.
Specifically, the mid-sagittal-plane data is horizontally flipped to obtain a mirrored image, and the mirrored image is subtracted pixel by pixel from the mid-sagittal-plane data to obtain a left-right brain symmetry parameter map, as shown in fig. 5.
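The flip-and-subtract step is one line in numpy. Values near zero mark left-right symmetric tissue, while asymmetric hypodensities (candidate infarcts) stand out with a large signed difference, which is what makes the map useful as a prior for the network.

```python
import numpy as np

def symmetry_map(slice2d):
    """Left-right symmetry parameter map: the slice minus its horizontal
    mirror (flip along the column axis, i.e. across the mid-sagittal line)."""
    mirrored = slice2d[:, ::-1]
    return slice2d - mirrored
```

A perfectly symmetric row yields all zeros; an asymmetric pixel produces a matched positive/negative pair on opposite sides of the midline.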
And S5, acquiring the data of the multi-stage window width and window level.
Different window settings are applied to the mid-sagittal-plane data, with window width/window level pairs of (30, 60), (40, 80) and (50, 100) respectively, yielding image data at multiple window settings, as shown in fig. 6.
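CT windowing linearly maps HU values in [L - W/2, L + W/2] to the display range and clips everything outside. A sketch follows; note that reading the patent's pairs (30, 60), (40, 80), (50, 100) as (width, level) is an assumption, since the text does not state the order.

```python
import numpy as np

def apply_window(hu, width, level):
    """Map HU values through a CT window (width W, level L) to [0, 1]:
    values below L - W/2 clip to 0, above L + W/2 clip to 1."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def multilevel_windows(hu, pairs=((30, 60), (40, 80), (50, 100))):
    """Stack the three (assumed width, level) settings from the patent
    into a 3-channel array, matching the CNN's channel-spliced input."""
    return np.stack([apply_window(hu, w, l) for w, l in pairs], axis=0)
```

Stacking the three windowed copies along the channel axis is exactly the "channel splicing" used as the main network input in step S7.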
Steps S1-S5 are to preprocess the data.
And S6, training data labeling.
The mid-sagittal-plane data is manually labeled to produce training data for the convolutional neural network: brain parenchyma (gray and white matter), cerebrospinal fluid and stroke lesions within the brain-parenchyma region are annotated on each two-dimensional image, and a binary label is derived for each image indicating whether it contains a stroke lesion (0 for no lesion, 1 for lesion), as shown in fig. 7.
And S7, building a CNN model and training multiple tasks.
A segmentation convolutional neural network is built by combining a U-Net segmentation backbone with the SE-Block attention convolution module, as shown in fig. 8. The multi-level windowed data, concatenated along the channel dimension, forms the main input, from which the encoder extracts deep image features. The corresponding symmetry parameter map serves as an auxiliary input and is channel-concatenated directly with the decoder output; three independent segmentation heads are then obtained by convolution, corresponding to the segmentation of brain parenchyma, cerebrospinal fluid and stroke lesion respectively. Predictions are evaluated with the Dice coefficient, i.e. the Dice similarity between the network's segmentation and the manual annotation, which drives the network's back-propagation. In particular, for a two-dimensional image labeled as containing no stroke lesion, the lesion-segmentation Dice of that image does not participate in back-propagation. Meanwhile, in the encoder the deep features are passed through a Flatten layer and a fully connected (FC) layer to produce the binary prediction of whether the input image contains a stroke lesion.
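The Dice evaluation and the rule that lesion-free slices are excluded from the lesion loss can be sketched in numpy (the training loop itself, U-Net and SE-Block are omitted; function names are illustrative):

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient between a binary prediction and label:
    2|A intersect B| / (|A| + |B|), with eps to avoid division by zero."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def batch_lesion_dice(preds, targets, has_lesion):
    """Mean lesion Dice over a batch, skipping slices whose binary label
    says 'no lesion' - mirroring the patent's rule that those slices do
    not contribute to the lesion-segmentation back-propagation."""
    scores = [dice(p, t)
              for p, t, h in zip(preds, targets, has_lesion) if h]
    return float(np.mean(scores)) if scores else None
```

In a training loop, 1 minus this masked Dice would serve as the lesion branch's loss term alongside the parenchyma, CSF and classification losses.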
And S8, stroke focus segmentation based on the CNN model.
Fig. 9 shows an original test image. The test data is preprocessed with steps S1-S5, and the preprocessed data is fed into the trained segmentation model to obtain, for each voxel, segmentation probability maps for brain parenchyma, cerebrospinal fluid and stroke lesion, together with the prediction probability that each two-dimensional image contains a stroke lesion.
The stroke-lesion segmentation probability map is then corrected using the slice-level lesion classification probability and the parenchyma and cerebrospinal-fluid probability maps. When the model predicts a lesion probability greater than 0.5 for an image, the segmentation result is output directly from the model's lesion-segmentation probability map, as shown in fig. 10, where the region marked B corresponds to the stroke lesion. Conversely, when the prediction probability is below 0.5, each voxel is assigned to whichever of brain parenchyma, cerebrospinal fluid or stroke lesion has the highest segmentation probability, yielding the adjusted lesion segmentation result shown in fig. 11, where the region marked B corresponds to the stroke lesion.
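This post-processing rule can be sketched as a small function returning the final binary lesion mask (a simplification: the patent outputs the probability map directly in the high-confidence branch; here it is thresholded at 0.5 to yield a mask, and the channel order parenchyma/CSF/lesion is an assumption):

```python
import numpy as np

def adjust_segmentation(prob_maps, lesion_prob, threshold=0.5):
    """prob_maps: (3, H, W) per-voxel probabilities, channels assumed to be
    0 = brain parenchyma, 1 = cerebrospinal fluid, 2 = stroke lesion.
    If the slice-level lesion probability clears the threshold, keep the
    lesion map thresholded at 0.5; otherwise assign each voxel to its
    argmax class, which suppresses weak lesion voxels on slices the
    classifier deems lesion-free."""
    if lesion_prob >= threshold:
        return (prob_maps[2] > 0.5).astype(np.uint8)
    labels = prob_maps.argmax(axis=0)
    return (labels == 2).astype(np.uint8)
```

A voxel with lesion probability 0.55 survives the high-confidence branch but is dropped by the argmax branch when, say, parenchyma scores 0.6 at the same location.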
The automatic segmentation method for the acute ischemic stroke focus provided by the embodiment of the invention combines cerebral anatomy prior knowledge, optimizes the NCCT stroke focus automatic segmentation algorithm based on the CNN, and improves the automatic segmentation accuracy of the CNN method on the acute ischemic stroke focus in the NCCT image.
In the first embodiment, an automatic acute ischemic stroke lesion segmentation method is provided, and accordingly, an automatic acute ischemic stroke lesion segmentation system is also provided. Fig. 12 is a block diagram illustrating an automatic segmentation system for an acute ischemic stroke focus according to a second embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 12, a block diagram of an automatic segmentation system for an acute ischemic stroke lesion according to a second embodiment of the present invention is shown, and the system includes: a rigid body registration module, a brain parenchyma extraction module, a median sagittal plane data acquisition module, a symmetry parameter graph calculation module, a multi-level window width and window level image generation module, a segmentation model prediction module and a result output module,
the rigid body registration module registers the patient skull CT image to a standard space in a rigid body manner;
the brain parenchyma extraction module is used for removing the skull to obtain a brain parenchyma three-dimensional mask;
the median sagittal plane data acquisition module is used for carrying out ellipse fitting on PCA (principal component analysis) feature dimensionality reduction on a brain parenchyma three-dimensional mask to obtain median sagittal plane data;
the symmetry parameter map calculation module is used for horizontally turning the median sagittal plane data to obtain a turned symmetrical image, and obtaining a left and right symmetrical parameter map of the brain in a mode of subtracting the turned symmetrical image from the median sagittal plane data pixel by pixel;
the multi-level window width/window level image generation module sets different window widths and window levels for the median sagittal plane data, respectively adopting (30, 60), (40, 80) and (50, 100) as the window width and window level, to obtain image data at multiple window width/window level settings;
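The text is ambiguous as to whether each pair lists (window width, window level) or the reverse; the sketch below reads the pairs as (width, level), which is an assumption, and shows the standard clip-and-rescale windowing operation applied per pair:

```python
import numpy as np

def apply_window(ct_hu, width, level):
    """Clip HU values to [level - width/2, level + width/2] and rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)

slice_hu = np.array([-100.0, 40.0, 200.0])
# Pairs from the text, read here as (width, level); this reading is an assumption.
channels = [apply_window(slice_hu, w, l) for w, l in [(30, 60), (40, 80), (50, 100)]]
assert channels[0][0] == 0.0  # far below the window maps to 0
assert channels[0][2] == 1.0  # far above the window maps to 1
```

Using several narrow windows gives the network multiple contrast renderings of the same tissue range, which is useful because early ischemic hypodensity involves only small HU differences.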
the segmentation model prediction module constructs a segmentation convolutional neural network model, trains it with the left-right symmetry parameter map and the image data of the multi-level window widths/window levels as input and the two-dimensional image labeling results as output, and thereby obtains a trained segmentation model;
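How the symmetry parameter map and the windowed images are combined before entering the network is not specified here; a common convention, assumed purely for illustration, is to stack them as input channels of one tensor:

```python
import numpy as np

# Hypothetical inputs: one symmetry map plus three windowed renderings, all HxW.
h, w = 4, 6
symmetry_map = np.zeros((h, w))
windowed = [np.full((h, w), v) for v in (0.1, 0.5, 0.9)]

# Stack into a (channels, H, W) tensor, one plausible CNN input layout
# (channel order and layout are assumptions, not stated in the patent).
model_input = np.stack([symmetry_map] + windowed, axis=0)
assert model_input.shape == (4, h, w)
```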
inputting the left-right symmetry parameter map and the image data of the multi-level window widths/window levels into the trained segmentation model for segmentation, to obtain a segmentation probability map over brain parenchyma, cerebrospinal fluid and stroke focus for each voxel, together with a predicted probability of whether the corresponding two-dimensional image contains a stroke focus;
and the result output module outputs different segmentation results according to different prediction probabilities.
The result output module comprises a judging unit and an output unit. The judging unit judges whether the prediction probability is greater than or equal to 0.5. When the prediction probability is greater than or equal to 0.5, the output unit outputs the stroke focus segmentation probability map of the segmentation model as the segmentation result; when the prediction probability is less than 0.5, the output unit outputs, for each voxel, the class with the highest segmentation probability among brain parenchyma, cerebrospinal fluid and stroke focus as the segmentation result, obtaining an adjusted stroke focus segmentation result.
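The two-branch rule of the judging and output units can be sketched as follows; treating channel index 2 as the stroke focus class is an assumption made only for the example:

```python
import numpy as np

def select_segmentation(class_probs, lesion_present_prob):
    """class_probs: (3, H, W) voxelwise probabilities for
    (brain parenchyma, CSF, stroke focus); lesion_present_prob: scalar
    slice-level probability that the slice contains a lesion."""
    if lesion_present_prob >= 0.5:
        # Slice judged to contain a lesion: output the lesion channel's probability map.
        return class_probs[2]
    # Otherwise output the highest-probability class per voxel.
    return np.argmax(class_probs, axis=0)

probs = np.array([[[0.7]], [[0.2]], [[0.1]]])  # 3 classes, 1x1 image
assert select_segmentation(probs, 0.9).item() == 0.1  # lesion probability map
assert select_segmentation(probs, 0.2).item() == 0    # argmax -> parenchyma class
```

The design intent is that the slice-level classifier gates the voxel-level output: when the classifier sees no lesion, the per-voxel argmax suppresses spurious low-probability lesion voxels.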
The median sagittal plane data acquisition module performs ellipse fitting on the brain parenchyma three-dimensional mask using PCA (principal component analysis) feature dimensionality reduction; obtaining the median sagittal plane data specifically comprises the following steps:
calculating the brain parenchyma area of each image slice from the brain parenchyma three-dimensional mask to obtain the slice with the largest brain parenchyma area;
obtaining the coordinate matrix of the non-zero pixels from the corresponding brain parenchyma three-dimensional mask;
calculating the mean vector of the pixel coordinates and the corresponding covariance matrix;
solving the eigenvalue equation of the covariance matrix to obtain eigenvalues, which give the semi-major axis and semi-minor axis of the fitted ellipse;
computing the eigenvectors corresponding to the major and minor axes of the ellipse from the eigenvalues;
calculating the rotation angle through an inverse trigonometric function, and locating the major-axis coordinates of the fitted ellipse from the non-zero pixel coordinate matrix;
and rotating the three-dimensional image according to the major-axis coordinates and the rotation angle to obtain the median sagittal plane data.
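The PCA steps above map onto a short sketch. Note that, strictly speaking, the covariance eigenvalues give squared axis scales (the semi-axes are proportional to their square roots); the leading eigenvector gives the major-axis direction:

```python
import numpy as np

def fit_ellipse_pca(mask_2d):
    """PCA on the non-zero pixel coordinates of a binary slice: the
    covariance eigenvalues give the squared semi-axis scales, and the
    leading eigenvector gives the major-axis direction (rotation angle)."""
    coords = np.argwhere(mask_2d).astype(float)  # non-zero pixel coordinate matrix
    centered = coords - coords.mean(axis=0)      # subtract the mean vector
    cov = np.cov(centered.T)                     # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # solve the eigenvalue equation
    major = eigvecs[:, np.argmax(eigvals)]       # major-axis eigenvector
    angle = np.arctan2(major[1], major[0])       # rotation angle via arctangent
    return eigvals, angle

# An axis-aligned elongated blob: the major axis lies along rows,
# so the recovered angle should be 0 (or pi, since eigenvector sign is arbitrary).
mask = np.zeros((20, 20), dtype=bool)
mask[2:18, 9:11] = True
eigvals, angle = fit_ellipse_pca(mask)
assert min(abs(angle), abs(abs(angle) - np.pi)) < 1e-6
```

Rotating the volume by the negative of this angle about the fitted major axis is then what brings the midline into a canonical sagittal orientation.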
The above description is directed to an embodiment of an automatic segmentation system for an acute ischemic stroke lesion according to a second embodiment of the present invention.
The system for automatically segmenting the acute ischemic stroke lesion provided by the invention and the method for automatically segmenting the acute ischemic stroke lesion have the same inventive concept and the same beneficial effects, and are not repeated herein.
As shown in fig. 13, a third embodiment of the present invention further provides an intelligent terminal. The terminal comprises a processor, an input device, an output device and a memory, which are connected to one another. The memory is used for storing a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiments.
It should be understood that in the embodiments of the present invention, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device may include a display (LCD, etc.), a speaker, etc.
The memory may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
In a specific implementation, the processor, the input device, and the output device described in the embodiments of the present invention may execute the implementation described in the method embodiments provided in the embodiments of the present invention, and may also execute the implementation described in the system embodiments in the embodiments of the present invention, which is not described herein again.
The invention also provides an embodiment of a computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions that, when executed by a processor, cause the processor to carry out the method described in the above embodiment.
The computer readable storage medium may be an internal storage unit of the terminal described in the foregoing embodiment, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and should be construed as falling within the scope of the claims.
Claims (10)
1. An automatic segmentation method for an acute ischemic stroke lesion is characterized by comprising the following steps:
rigid body registration of a patient skull CT image to a standard space;
removing the skull to obtain a brain parenchyma three-dimensional mask;
performing ellipse fitting on the brain parenchyma three-dimensional mask by means of PCA (principal component analysis) feature dimensionality reduction to obtain median sagittal plane data;
horizontally flipping the median sagittal plane data to obtain a flipped symmetric image, and obtaining a left-right symmetry parameter map of the brain by subtracting the flipped symmetric image from the median sagittal plane data pixel by pixel;
setting different window widths and window levels for the median sagittal plane data to obtain image data at multiple window width/window level settings;
manually labeling the median sagittal plane data to obtain training data for the convolutional neural network, labeling the brain parenchyma, cerebrospinal fluid and stroke foci within the brain parenchyma region by way of two-dimensional image annotation, and obtaining a binary classification label for each two-dimensional image indicating whether it contains a stroke focus;
constructing a segmentation convolutional neural network model, training it with the left-right symmetry parameter map and the image data of the multi-level window widths/window levels as input and the two-dimensional image labeling results as output, and thereby obtaining a trained segmentation model;
inputting the left-right symmetry parameter map and the image data of the multi-level window widths/window levels into the trained segmentation model for segmentation, to obtain a segmentation probability map over brain parenchyma, cerebrospinal fluid and stroke focus for each voxel, together with a predicted probability of whether the corresponding two-dimensional image contains a stroke focus;
and outputting different segmentation results according to different prediction probabilities.
2. The method of claim 1, wherein the outputting the different segmentation results according to the difference of the prediction probabilities specifically comprises:
when the prediction probability is more than or equal to 0.5, outputting a stroke focus segmentation probability graph of the segmentation model as a segmentation result;
and when the prediction probability is less than 0.5, outputting the class with the highest segmentation probability of the brain parenchyma, the cerebrospinal fluid and the stroke focus of the corresponding voxel as a segmentation result.
3. The method of claim 1, wherein obtaining the median sagittal plane data by performing an ellipse fitting on the brain parenchyma three-dimensional mask with PCA feature dimensionality reduction specifically comprises:
calculating the brain parenchyma area of each image slice from the brain parenchyma three-dimensional mask to obtain the slice with the largest brain parenchyma area;
obtaining the coordinate matrix of the non-zero pixels from the corresponding brain parenchyma three-dimensional mask;
calculating the mean vector of the pixel coordinates and the corresponding covariance matrix;
solving the eigenvalue equation of the covariance matrix to obtain eigenvalues, which give the semi-major axis and semi-minor axis of the fitted ellipse;
computing the eigenvectors corresponding to the major and minor axes of the ellipse from the eigenvalues;
calculating the rotation angle through an inverse trigonometric function, and locating the major-axis coordinates of the fitted ellipse from the non-zero pixel coordinate matrix;
and rotating the three-dimensional image according to the major-axis coordinates and the rotation angle to obtain the median sagittal plane data.
4. The method of claim 1, wherein setting different window widths and window levels specifically comprises: respectively using (30, 60), (40, 80) and (50, 100) as the window width and window level.
5. An automatic segmentation system for an acute ischemic stroke lesion, comprising: a rigid body registration module, a brain parenchyma extraction module, a median sagittal plane data acquisition module, a symmetry parameter graph calculation module, a multi-level window width and window level image generation module, a segmentation model prediction module and a result output module,
the rigid body registration module registers the patient skull CT image to a standard space in a rigid body manner;
the brain parenchyma extraction module is used for removing the skull to obtain a brain parenchyma three-dimensional mask;
the median sagittal plane data acquisition module is used for performing ellipse fitting on the brain parenchyma three-dimensional mask by means of PCA (principal component analysis) feature dimensionality reduction to obtain median sagittal plane data;
the symmetry parameter map calculation module is used for horizontally flipping the median sagittal plane data to obtain a flipped symmetric image, and obtaining a left-right symmetry parameter map of the brain by subtracting the flipped symmetric image from the median sagittal plane data pixel by pixel;
the multi-level window width/window level image generation module sets different window widths and window levels for the median sagittal plane data to obtain image data at multiple window width/window level settings;
the segmentation model prediction module constructs a segmentation convolutional neural network model, trains it with the left-right symmetry parameter map and the image data of the multi-level window widths/window levels as input and the two-dimensional image labeling results as output, and thereby obtains a trained segmentation model;
inputting the left-right symmetry parameter map and the image data of the multi-level window widths/window levels into the trained segmentation model for segmentation, to obtain a segmentation probability map over brain parenchyma, cerebrospinal fluid and stroke focus for each voxel, together with a predicted probability of whether the corresponding two-dimensional image contains a stroke focus;
and the result output module outputs different segmentation results according to different prediction probabilities.
6. The system of claim 5, wherein the result output module comprises a judging unit for judging whether the prediction probability is greater than or equal to 0.5, and an output unit for outputting a stroke focus segmentation probability map of the segmentation model as the segmentation result when the prediction probability is greater than or equal to 0.5; when the prediction probability is less than 0.5, the output unit outputs, for each voxel, the class with the highest segmentation probability among brain parenchyma, cerebrospinal fluid and stroke focus as the segmentation result.
7. The system of claim 5, wherein the median sagittal plane data acquisition module performs ellipse fitting on the brain parenchyma three-dimensional mask using PCA feature dimensionality reduction, and obtaining the median sagittal plane data specifically comprises:
calculating the brain parenchyma area of each image slice from the brain parenchyma three-dimensional mask to obtain the slice with the largest brain parenchyma area;
obtaining the coordinate matrix of the non-zero pixels from the corresponding brain parenchyma three-dimensional mask;
calculating the mean vector of the pixel coordinates and the corresponding covariance matrix;
solving the eigenvalue equation of the covariance matrix to obtain eigenvalues, which give the semi-major axis and semi-minor axis of the fitted ellipse;
computing the eigenvectors corresponding to the major and minor axes of the ellipse from the eigenvalues;
calculating the rotation angle through an inverse trigonometric function, and locating the major-axis coordinates of the fitted ellipse from the non-zero pixel coordinate matrix;
and rotating the three-dimensional image according to the major-axis coordinates and the rotation angle to obtain the median sagittal plane data.
8. The system of claim 5, wherein the multi-level window width/window level image generation module respectively adopts (30, 60), (40, 80) and (50, 100) as the window width and window level.
9. An intelligent terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being adapted to store a computer program, the computer program comprising program instructions, characterized in that the processor is configured to invoke the program instructions to perform the method according to any of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110320238.2A CN113077479A (en) | 2021-03-25 | 2021-03-25 | Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113077479A true CN113077479A (en) | 2021-07-06 |
Family
ID=76610749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110320238.2A Pending CN113077479A (en) | 2021-03-25 | 2021-03-25 | Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077479A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409503A (en) * | 2018-09-27 | 2019-03-01 | 深圳市铱硙医疗科技有限公司 | Training method, image conversion method, device, equipment and the medium of neural network |
CN111445443A (en) * | 2020-03-11 | 2020-07-24 | 北京深睿博联科技有限责任公司 | Method and device for detecting early acute cerebral infarction |
CN111584066A (en) * | 2020-04-13 | 2020-08-25 | 清华大学 | Brain medical image diagnosis method based on convolutional neural network and symmetric information |
CN111667458A (en) * | 2020-04-30 | 2020-09-15 | 杭州深睿博联科技有限公司 | Method and device for detecting early acute cerebral infarction in flat-scan CT |
CN111861989A (en) * | 2020-06-10 | 2020-10-30 | 杭州深睿博联科技有限公司 | Method, system, terminal and storage medium for detecting midline of brain |
CN112164082A (en) * | 2020-10-09 | 2021-01-01 | 深圳市铱硙医疗科技有限公司 | Method for segmenting multi-modal MR brain image based on 3D convolutional neural network |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538464A (en) * | 2021-07-22 | 2021-10-22 | 脑玺(苏州)智能科技有限公司 | Brain image segmentation model training method, segmentation method and device |
CN113409456A (en) * | 2021-08-19 | 2021-09-17 | 江苏集萃苏科思科技有限公司 | Modeling method, system, device and medium for three-dimensional model before craniocerebral puncture operation |
CN114332532A (en) * | 2021-12-24 | 2022-04-12 | 深圳市铱硙医疗科技有限公司 | Cerebral stroke classification system and method based on brain image |
WO2023133933A1 (en) * | 2022-01-14 | 2023-07-20 | 汕头市超声仪器研究所股份有限公司 | Ultrasonic brain standard plane imaging and abnormal area automatic detection and display method |
CN114463288A (en) * | 2022-01-18 | 2022-05-10 | 深圳市铱硙医疗科技有限公司 | Brain medical image scoring method, device, computer equipment and storage medium |
CN114463288B (en) * | 2022-01-18 | 2023-01-10 | 深圳市铱硙医疗科技有限公司 | Brain medical image scoring method and device, computer equipment and storage medium |
CN114601486A (en) * | 2022-03-07 | 2022-06-10 | 深圳市澈影医生集团有限公司 | Detection system and method for acute ischemic stroke |
CN114638843A (en) * | 2022-03-18 | 2022-06-17 | 北京安德医智科技有限公司 | Method and device for identifying high-density characteristic image of middle cerebral artery |
CN114638843B (en) * | 2022-03-18 | 2022-09-06 | 北京安德医智科技有限公司 | Method and device for identifying high-density characteristic image of middle cerebral artery |
CN115272206A (en) * | 2022-07-18 | 2022-11-01 | 深圳市医未医疗科技有限公司 | Medical image processing method, medical image processing device, computer equipment and storage medium |
CN115272206B (en) * | 2022-07-18 | 2023-07-04 | 深圳市医未医疗科技有限公司 | Medical image processing method, medical image processing device, computer equipment and storage medium |
CN118097245A (en) * | 2024-02-20 | 2024-05-28 | 深圳市儿童医院 | Brain nodule load generation method based on MRI and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113077479A (en) | Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus | |
Namburete et al. | Fully-automated alignment of 3D fetal brain ultrasound to a canonical reference space using multi-task learning | |
CN111798462B (en) | Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image | |
CN108416802B (en) | Multimode medical image non-rigid registration method and system based on deep learning | |
Qiu et al. | Automatic segmentation approach to extracting neonatal cerebral ventricles from 3D ultrasound images | |
WO2021030629A1 (en) | Three dimensional object segmentation of medical images localized with object detection | |
US20230104173A1 (en) | Method and system for determining blood vessel information in an image | |
CN110599528A (en) | Unsupervised three-dimensional medical image registration method and system based on neural network | |
Oghli et al. | Automatic fetal biometry prediction using a novel deep convolutional network architecture | |
CN112164082A (en) | Method for segmenting multi-modal MR brain image based on 3D convolutional neural network | |
Li et al. | Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images | |
CN113298831B (en) | Image segmentation method and device, electronic equipment and storage medium | |
Nie et al. | Automatic detection of standard sagittal plane in the first trimester of pregnancy using 3-D ultrasound data | |
CN112991363A (en) | Brain tumor image segmentation method and device, electronic equipment and storage medium | |
CN113298856B (en) | Image registration method, device, equipment and medium | |
Qiu et al. | 3D MR ventricle segmentation in pre-term infants with post-hemorrhagic ventricle dilatation (PHVD) using multi-phase geodesic level-sets | |
CN117809122B (en) | Processing method, system, electronic equipment and medium for intracranial large blood vessel image | |
CN114463288B (en) | Brain medical image scoring method and device, computer equipment and storage medium | |
KR20190068254A (en) | Method, Device and Program for Estimating Time of Lesion Occurrence | |
CN110992310A (en) | Method and device for determining partition where mediastinal lymph node is located | |
CN116862930B (en) | Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes | |
EP3853814B1 (en) | Analyzing symmetry in image data | |
CN113012127A (en) | Cardiothoracic ratio measuring method based on chest medical image | |
CN115631194B (en) | Method, device, equipment and medium for identifying and detecting intracranial aneurysm | |
Delmoral et al. | Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210706 ||