CN114565572A - Cerebral hemorrhage CT image classification method based on image sequence analysis - Google Patents
- Publication number
- CN114565572A (application number CN202210162764.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- brain
- hemorrhage
- cerebral hemorrhage
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis)
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Classification techniques
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/08 — Learning methods (neural networks)
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30016 — Brain (biomedical image processing)
Abstract
The invention discloses a cerebral hemorrhage CT image classification method based on image sequence analysis. The method belongs to the technical field of medical image analysis and comprises obtaining brain CT image data, performing imaging processing on the data, constructing a model, and implementing the classification method. The specific operation steps are as follows: obtain a brain CT scanning result as the raw brain CT image data; perform imaging processing on the raw data to form a three-channel image sequence as the input of the subsequent model; construct a model comprising a channel attention module and a spatial attention module to extract features of the three-channel image sequence; train the model to optimize its parameters, identify cerebral-hemorrhage-positive cases and distinguish cerebral hemorrhage subtypes, thereby classifying cerebral hemorrhage CT images. The model makes full use of the CT scan image information and increases attention to key information in the image sequence; the judgment process of the whole method accords with the diagnostic workflow of a doctor in a real scenario, improving the cerebral hemorrhage classification effect.
Description
Technical Field
The invention belongs to the technical field of medical image analysis, and relates to a cerebral hemorrhage CT image classification method based on image sequence analysis.
Background
With the continuous development of computer-related technologies, artificial intelligence methods are increasingly applied in many fields; medical image analysis based on machine learning and deep learning can improve the diagnostic efficiency for clinically relevant diseases and reduce the workload of doctors. In clinical diagnosis of cerebral hemorrhage, the basis is usually the computed tomography (CT) image of the patient's brain: the scanner performs tomography of the brain at preset intervals and obtains a number of image slices; scanning is fast and the images are sufficiently clear, so clinical CT examination generally prefers a non-contrast-enhanced plain-scan mode. Cerebral hemorrhage usually has a sudden onset, causes severe harm to the patient and has a high fatality rate, so rapid diagnosis is of great significance. Meanwhile, the number of brain CT images to be examined is large; in this important and repetitive work, the radiologist responsible for reading the images must have considerable professional knowledge and rich experience while maintaining recognition speed and work efficiency. Artificial intelligence techniques are gradually playing a role in the intelligent identification of CT images and the rapid diagnosis of cerebral hemorrhage; many of them show good performance, high accuracy and fast diagnosis speed, and are expected to assist doctors in practical applications, so that cerebral hemorrhage can be screened quickly and symptomatic treatment started as early as possible.
Currently, work related to cerebral hemorrhage diagnosis based on brain CT images falls mainly into two categories: 1) identifying and segmenting the details and morphology of a patient's hemorrhage region; 2) cerebral hemorrhage screening and subtype analysis for a subject to be diagnosed, which mainly uses classification methods to judge whether hemorrhage appears in the imaging result and to determine the specific hemorrhage type from its state and characteristics. In recognition and segmentation work based on artificial intelligence, training and validation of the model generally require image-level or even pixel-level annotations, which are often difficult to obtain: on the one hand the workload is large; on the other hand the procedure itself is cumbersome and rigid and requires experienced physicians to perform the annotation, consuming much time and effort. Meanwhile, recognition and segmentation models let doctors quickly obtain the hemorrhage location, but their judgment of the hemorrhage type is not intuitive enough, so a model dedicated to directly diagnosing the presence of hemorrhage and its type is often needed; by contrast, a classification model better meets this requirement. In cerebral hemorrhage classification tasks, the specific goal divides into binary hemorrhage detection and hemorrhage subtype classification. Furthermore, regarding label assignment, some works annotate each scan slice of a single subject, while others assign one label per scanned subject; after the scans are converted into image representations suitable for artificial intelligence techniques, the former trains a model on single images and produces a diagnosis for each independent image, whereas the latter considers the multiple scan planes of a whole scan, trains the model on the image sequence, and can obtain a diagnosis for the entire scan; by fusing the image features within the sequence it avoids discarding the contextual image information of a scan sample, and it is also more consistent with the way clinical experts diagnose. In addition, when the DICOM standard format of CT medical images is converted into a common image representation, most conventional methods fix one specific window level and window width to display the whole-brain state; this single fixed value has no established standard and loses the information in other windows. Some methods obtain image displays by setting several windows and fuse the features in the model, but in fact the windows differ in importance and should receive different degrees of attention. In summary, to improve clinical diagnostic efficiency and reduce the workload of doctors, it is desirable to provide direct assistance for cerebral hemorrhage diagnosis through an artificial-intelligence-aided method, and to provide a cerebral hemorrhage CT image classification method based on image sequence analysis that uses more comprehensive and more appropriate intra-slice and inter-slice scan information.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to solve problems in existing work on cerebral hemorrhage diagnosis based on brain CT images, including: the fine annotations for recognition and segmentation tasks are laborious to obtain; diagnosis methods aimed at a single scan slice ignore the contextual relations contained in the whole scan sequence of a sample; information is easily lost when a window level and window width are fixed during the imaging display of DICOM data; and model methods lack special attention to important features. Classifying cerebral hemorrhage CT images based on image sequence analysis is significant for assisting the clinical diagnosis of cerebral hemorrhage with artificial intelligence, and the present invention solves the above problems to some extent. The method improves the utilization of image information through multiple windows and attention over the whole sequence of scan slices, introduces a convolutional long short-term memory (CLSTM) network with channel attention and spatial attention to extract and selectively attend to the features in the multi-channel image sequence, trains an end-to-end cerebral hemorrhage classification model on a reasonable global feature representation extracted from the scan sample, intelligently diagnoses the occurrence of cerebral hemorrhage and distinguishes its subtypes, and provides help for the clinical screening and detection of cerebral hemorrhage.
The technical scheme is as follows: the cerebral hemorrhage CT image classification method based on image sequence analysis of the invention classifies the cerebral hemorrhage CT images at two levels; specifically, this comprises the following:
determining from the brain CT scan whether cerebral hemorrhage has occurred, i.e., judging whether the brain CT scan result is cerebral-hemorrhage positive or cerebral-hemorrhage negative; and identifying the cerebral hemorrhage subtype for each cerebral-hemorrhage-positive sample, i.e., judging whether the hemorrhage shown in the brain CT scan result is intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage or subdural hemorrhage.
Further, the operation process of the cerebral hemorrhage CT image classification method based on image sequence analysis comprises the steps of obtaining cerebral CT image data, carrying out imaging processing on the data, constructing a model and realizing the classification method;
specifically, the method comprises the following steps: obtaining a brain CT scanning result to obtain brain CT image original data; carrying out imaging processing on original data of the brain CT image to form a three-channel image sequence as the input of a subsequent model; constructing a model comprising a channel attention module and a space attention module to extract the characteristics of a three-channel image sequence; and training model optimization parameters, identifying cerebral hemorrhage positive and distinguishing cerebral hemorrhage subtypes, and realizing classification of cerebral hemorrhage CT images.
Further, the operation steps are as follows:
(1) and processing brain CT image data:
(1.1) obtaining a brain CT scanning result, storing the obtained brain CT scanning result in a standard mode of a DICOM medical image format, and taking the stored brain CT scanning result as original data of a brain CT image;
(1.2) determining the coarse-grained type labels of all samples in the stored original data of the brain CT image,
marking original data of the CT brain image as positive brain hemorrhage and negative brain hemorrhage, namely ICH positive and ICH negative, according to whether the image shows high-density images representing the occurrence of the brain hemorrhage;
(1.3) determining fine-grained class labels of all samples according to the marked ICH positive brain CT image original data;
(1.4) carrying out imaging processing on all the brain CT image original data in the DICOM format, and extracting each sample in the brain CT image original data into a three-channel image sequence;
(2) and constructing a model:
(2.1) weighting the features extracted from the three-channel image sequence by using a channel attention module, and fusing the weighted features;
(2.2) generating spatial region attention for the fused features by using a CLSTM network with a spatial attention module, extracting space-time features of the three-channel image sequence and classifying the features;
(3) and the classification method is realized as follows:
and taking a three-channel image sequence obtained after the imaging processing as the input of the constructed model, and then training parameters in the model by utilizing the determined coarse-grained and fine-grained class labels to obtain the model finally used for classifying the cerebral hemorrhage CT images.
Further, in step (1.3), determining the fine-grained class label of each sample for the ICH-positive data specifically comprises: according to the position and shape of the high-density shadow shown in the image, labeling the ICH-positive raw brain CT image data with the five cerebral hemorrhage subtypes: intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage, and subdural hemorrhage, i.e., the IVH, IPH, SAH, EDH, and SDH classes.
Further, in step (1.4), performing the imaging processing on all the raw brain CT image data in DICOM format specifically comprises: setting different window levels and window widths, respectively selecting a whole-brain window, a hematoma (blood) window and a skull window, and converting the information in the corresponding HU value ranges into common image representations, so that each sample in the raw brain CT image data is extracted into a three-channel image sequence; the channels obtained after processing are in the PNG image format, and the sequence length is the number of tomographic slices in the sample's raw brain CT image data.
Advantageous effects: compared with the prior art, the invention is characterized as follows: the method splits the CT image information and attends to it selectively, and improves the diagnostic capability of the model by obtaining high-level feature representations of channel key information, spatial key information and time-axis context information. On the whole it simulates the way a doctor views imaging results: it considers the key CT window channels and key regions in the images while attending to every scan slice, captures the spatial information contained in the medical image data and the contextual relations of the scans along the time axis, and finally obtains a reasonable diagnostic result. In practical application, after the training set is divided, corresponding models are trained according to the label information, for the presence or absence of hemorrhage and for each of the five diagnosable subtypes, and the models are used in their respective diagnostic tasks. Specifically, for a sample to be diagnosed, the coarse-grained model first determines whether cerebral hemorrhage is present in the images; if so, the five fine-grained models perform the subtype diagnosis, and the outputs of the models are integrated into the final brain CT image classification result for cerebral hemorrhage.
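The two-stage inference logic described above can be sketched as follows. This is an illustrative outline only, not the patent's implementation: `diagnose`, the model callables, and the 0.5 threshold are all assumed names and values standing in for the trained coarse and fine-grained networks.

```python
import numpy as np

def diagnose(sequence, coarse_model, subtype_models, threshold=0.5):
    """Two-stage diagnosis: coarse ICH screening first, then, for
    positive samples, the five fine-grained subtype models."""
    p_ich = coarse_model(sequence)
    if p_ich < threshold:
        return {"ICH": False, "subtypes": {}}
    # Positive sample: run each subtype model and threshold its probability.
    subtypes = {name: model(sequence) >= threshold
                for name, model in subtype_models.items()}
    return {"ICH": True, "subtypes": subtypes}

# Toy stand-in models (hypothetical; real ones are the trained networks).
coarse = lambda s: 0.9
probs = {"IVH": 0.7, "IPH": 0.2, "SAH": 0.6, "EDH": 0.1, "SDH": 0.4}
fines = {k: (lambda s, p=p: p) for k, p in probs.items()}
result = diagnose(np.zeros((10, 3, 32, 32)), coarse, fines)
```

A negative coarse result short-circuits the pipeline, mirroring the description: the five subtype models are only consulted when hemorrhage is detected.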
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a schematic diagram illustrating the operation of the imaging process of the CT brain image data according to the present invention;
FIG. 3 is a schematic diagram of an operation of extracting three-channel image sequence features by using a channel attention module in model construction according to the present invention;
FIG. 4 is a schematic diagram of the operation of extracting features to complete classification by using a spatial attention module in the model construction of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
As shown in the figures, the cerebral hemorrhage CT image classification method based on image sequence analysis classifies the cerebral hemorrhage CT images at two levels; specifically, this comprises the following:
determining from the brain CT scan whether cerebral hemorrhage has occurred, i.e., judging whether the brain CT scan result is cerebral-hemorrhage positive or cerebral-hemorrhage negative; and identifying the cerebral hemorrhage subtype for each cerebral-hemorrhage-positive sample, i.e., judging whether the hemorrhage shown in the brain CT scan result is intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage or subdural hemorrhage.
Further, the operation process of the cerebral hemorrhage CT image classification method based on image sequence analysis comprises the steps of obtaining cerebral CT image data, carrying out imaging processing on the data, constructing a model and realizing the classification method;
specifically, the method comprises the following steps: obtaining a brain CT scanning result to obtain brain CT image original data; carrying out imaging processing on original data of the brain CT image to form a three-channel image sequence as the input of a subsequent model; constructing a model comprising a channel attention module and a space attention module to extract the characteristics of a three-channel image sequence; and training model optimization parameters, identifying cerebral hemorrhage positive and distinguishing cerebral hemorrhage subtypes, and realizing classification of cerebral hemorrhage CT images.
Specifically, in the brain CT image data processing part, the obtained brain CT scans are stored as brain CT imaging data based on the DICOM medical image format standard, and a class label is determined for each subject sample; at the coarse-grained level, according to whether cerebral hemorrhage can be observed in the brain CT image data, the samples are divided into cerebral-hemorrhage positive and cerebral-hemorrhage negative, i.e., ICH (intracranial hemorrhage) positive and ICH negative;
whereas cerebral hemorrhage comprises five specific subtypes: intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage, and subdural hemorrhage;
according to the fine-grained hemorrhage subtype, ICH-positive samples were labeled as the IVH (intraventricular hemorrhage), IPH (intraparenchymal hemorrhage), SAH (subarachnoid hemorrhage), EDH (epidural hematoma), and SDH (subdural hematoma) classes, respectively;
then, so that the data can be recognized by the model, the DICOM-format brain CT imaging data are converted to images in accordance with the clinical film-reading mode;
in the medical field, different tissues absorb X-rays to different degrees because of their different densities; this degree is measured in Hounsfield units (HU). Setting a pair of window level (WL) and window width (WW) defines a window whose lower bound W_min and upper bound W_max are respectively:

W_min = WL - WW/2,  W_max = WL + WW/2

Tissues and lesions with HU values above this range appear as white shadows in the gray-scale image, while parts below this range appear as black shadows; HU values within the window correspond to gray levels in the range 0 to 255, brighter the closer to 255, mapped as:

Gray(HU) = 255 · [ (HU - W_min)/WW · I(W_min ≤ HU ≤ W_max) + I(HU > W_max) ]

wherein I is an indicator function;
in this way, the tissue structures within the range can be distinguished by their different gray levels, and a gray-scale image corresponding to the window is obtained. A whole-brain window (WL: 50, WW: 100), a hematoma window (WL: 70, WW: 50) and a skull window (WL: 500, WW: 3000) are selected respectively, and the information in the corresponding HU value ranges is converted into common image representations, as shown in fig. 2, forming a three-channel medical image sequence for each sample, where the sequence length L_i equals the number of tomographic slices contained in the CT scan of subject i;
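The windowing and three-channel construction above can be sketched in a few lines. The mapping and the three window settings (WL/WW pairs) come from the description; the function and variable names are mine, and a real pipeline would read the HU volume from DICOM files rather than generate it randomly.

```python
import numpy as np

def window_to_gray(hu, wl, ww):
    """Map HU values in [WL - WW/2, WL + WW/2] linearly to gray 0..255;
    values above the window saturate to white, below to black."""
    w_min = wl - ww / 2.0
    gray = (hu - w_min) / ww * 255.0
    return np.clip(gray, 0, 255).astype(np.uint8)

# Window settings taken from the description: whole brain, hematoma, skull.
WINDOWS = {"brain": (50, 100), "hematoma": (70, 50), "skull": (500, 3000)}

def to_three_channel(hu_volume):
    """Stack one gray image per window into a 3-channel image per slice.
    hu_volume: (num_slices, H, W) array of HU values."""
    chans = [window_to_gray(hu_volume, wl, ww) for wl, ww in WINDOWS.values()]
    return np.stack(chans, axis=1)  # (num_slices, 3, H, W)

vol = np.random.randint(-1000, 2000, size=(4, 64, 64)).astype(np.float32)
seq = to_three_channel(vol)
```

The sequence length of `seq` equals the number of tomographic slices, matching the L_i defined above.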
on the other hand, samples may have different spacings between tomographic slices; to keep the model's diagnostic task as undisturbed as possible, a resampling technique is used to fix the spacing along the z axis, so that all data are sampled to a consistent standard interval. Since the architecture of the network is fixed, the resampled image sequences of different lengths must be cut to a fixed length: N windows of length L are randomly cropped simultaneously from the three channels of each sequence, giving N subsequences of length L per sample whose labels are consistent with the source sequence; finally, fixed-length three-channel image sequence samples are obtained from all diagnostic subjects and used to train the channel-and-spatial dual-attention CLSTM network model;
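The z-axis resampling and fixed-length cropping can be sketched as below. This is a minimal nearest-neighbour sketch under assumptions of my own: the 5.0 mm target spacing is an example value (the patent does not state one), and a production pipeline would more likely interpolate rather than pick nearest slices.

```python
import numpy as np

def resample_z(volume, spacing, target=5.0):
    """Nearest-neighbour resampling along z to a fixed slice spacing (mm).
    volume: (num_slices, ...) array; spacing: original slice spacing."""
    n = volume.shape[0]
    new_n = max(1, int(round(n * spacing / target)))
    idx = np.clip(np.round(np.linspace(0, n - 1, new_n)).astype(int), 0, n - 1)
    return volume[idx]

def random_subsequences(seq, sub_len, n_crops, rng=None):
    """Randomly crop N fixed-length windows from a three-channel sequence;
    each crop inherits the label of its source sequence."""
    rng = rng or np.random.default_rng(0)
    starts = rng.integers(0, seq.shape[0] - sub_len + 1, size=n_crops)
    return [seq[s:s + sub_len] for s in starts]

seq = np.zeros((30, 3, 64, 64), dtype=np.uint8)
crops = random_subsequences(resample_z(seq, spacing=2.5), sub_len=8, n_crops=4)
```

Cropping the same window across all three channels at once keeps the whole-brain, hematoma and skull views of each slice aligned.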
in the construction of the model, the main body consists of the following parts: 1) the channel attention module assigns corresponding weights to the features extracted from the processed three-channel image sequence, and the features are fused according to their different degrees of attention; its specific structure is shown in FIG. 3;
2) a CLSTM network with a spatial attention module generates spatial region attention for the fused feature maps and extracts the spatio-temporal features of the sequence for classification; its specific structure is shown in FIG. 4;
after the data are preprocessed, N labeled subsequences are obtained in total, denoted {X_n}, where X_n represents a subsequence with n ∈ {1, 2, …, N}; fixed-length subsequences serve as the image-sequence input of the model, each of length L, written {X_n: X_n,1, X_n,2, …, X_n,L}, n = 1, 2, …, N, where each image of a subsequence consists of the three channel images taken under the whole-brain, hematoma and skull windows.
First, each channel of each image in the sequence is passed through a convolution with batch normalization (BN) and ReLU activation, with kernel size 2 × 2 and stride 2, yielding a feature map f_n,t for each of the three channels. Then channel attention assigns weights in the channel dimension to the feature maps of a single scan slice; the module consists of a global max pooling (GMP) and a global average pooling (GAP) branch in parallel, a multi-layer perceptron (MLP) and a sigmoid activation function. After GMP and GAP, the input feature maps are each compressed into a one-dimensional vector along the spatial dimensions; the two vectors are fed into the ReLU-activated multi-layer perceptron and the results are fused. Finally, the channel weight vector is obtained after sigmoid activation, its length equal to the number of input channels; the whole process is expressed as:
CW_n,t = CAtt(f_n,t) = Sigmoid(MLP(GMP(f_n,t)) + MLP(GAP(f_n,t)))
The attention weight of each channel is applied to its respective features, and the weighted feature maps are superimposed pixel by pixel to obtain the feature representation of the current scan slice fusing the three channels:

F_n,t = Σ_c CW_n,t(c) ⊙ f_n,t(c)
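A minimal numpy sketch of this channel attention and fusion follows. It implements CW = Sigmoid(MLP(GMP(f)) + MLP(GAP(f))) with a shared two-layer MLP; the random weights and tiny feature size are illustrative only (a trained network would learn them, typically in a deep-learning framework).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f, w1, w2):
    """Channel weights from parallel global max/average pooling passed
    through a shared ReLU MLP, then sigmoid.  f: (C, H, W) feature map."""
    gmp = f.max(axis=(1, 2))     # (C,) global max pooling
    gap = f.mean(axis=(1, 2))    # (C,) global average pooling
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)  # shared two-layer MLP
    return sigmoid(mlp(gmp) + mlp(gap))         # (C,) channel weights

def fuse(f, cw):
    """Weight each channel map and sum pixel-wise into one fused map."""
    return (cw[:, None, None] * f).sum(axis=0)  # (H, W)

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 8, 8))              # three window channels
w1, w2 = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))
cw = channel_attention(f, w1, w2)
fused = fuse(f, cw)
```

The fused map corresponds to F_n,t above: one feature representation per scan slice with the three window channels weighted by attention.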
will be composed of L Fn,tInputting the formed characteristic image sequence into a CLSTM network capable of capturing space-time characteristics; the network comprises a CLSTM layer with spatial attention, a common CLSTM layer, a global average pooling layer, a Full Connected (FC) layer and a sigmoid activation function;
specifically, in the CLSTM layer, for a subsequence sample X_n at time step t, the current two-dimensional feature map F_n,t is input into the network, and the CLSTM computes the hidden state H_n,t of the first layer as follows:

i_n,t = σ(Conv(F_n,t; W_xi) + Conv(H_n,t-1; W_hi) + b_i)
g_n,t = σ(Conv(F_n,t; W_xf) + Conv(H_n,t-1; W_hf) + b_f)
o_n,t = σ(Conv(F_n,t; W_xo) + Conv(H_n,t-1; W_ho) + b_o)
C̃_n,t = tanh(Conv(F_n,t; W_xc) + Conv(H_n,t-1; W_hc) + b_c)
C_n,t = g_n,t ⊙ C_n,t-1 + i_n,t ⊙ C̃_n,t
H_n,t = o_n,t ⊙ tanh(C_n,t)

where i_n,t denotes the input gate, g_n,t the forget gate, o_n,t the output gate, C̃_n,t the new memory, and C_n,t the finally updated memory cell; ⊙ denotes the Hadamard product between matrices and Conv a convolution operation; W_* denotes the weight parameters to be trained in the convolution of the respective gate and b_* the bias within the gate.

The CLSTM layer integrating the spatial attention mechanism obtains the spatial weights at each time step from the initial input F_n,t and the current hidden state H_n,t produced by the plain CLSTM computation: F_n,t and H_n,t are concatenated in series into a two-channel feature, a hidden convolution layer with a kernel of size 1 × 1 × 2 (a 1 × 1 convolution over the two channels) and stride 1 is applied, and after sigmoid activation the spatial attention weights SW_n,t corresponding to the feature map are obtained; the computation is expressed as:

SW_n,t = Sigmoid(Conv([F_n,t; H_n,t]; W_s) + b_s)

where [·; ·] denotes the concatenation of two features along the channel dimension, and W_s and b_s are the parameters to be trained in the convolution. The attention weight SW_n,t has the same size as F_n,t; SW_n,t and F_n,t are combined by the Hadamard product to obtain the spatially attended feature

F̃_n,t = SW_n,t ⊙ F_n,t

which is input to the next, plain CLSTM layer to compute its hidden state.
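The spatial attention gate alone is small enough to sketch directly in numpy. Because the kernel acts as a 1 × 1 convolution over the two stacked channels, it reduces to a per-pixel weighted sum; the weights `w` and bias `b` below are arbitrary stand-ins for the trained parameters W_s and b_s.

```python
import numpy as np

def spatial_attention(F, H, w, b):
    """SW = Sigmoid(Conv([F; H])): a 1x1 conv over the two stacked
    channels is a per-pixel weighted sum of F and H plus a bias.
    F, H: (H, W) feature map and CLSTM hidden state of one time step."""
    sw = 1.0 / (1.0 + np.exp(-(w[0] * F + w[1] * H + b)))
    return sw * F  # Hadamard product: spatially re-weighted features

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 8))
H = rng.standard_normal((8, 8))
out = spatial_attention(F, H, w=np.array([0.5, -0.3]), b=0.1)
```

Since each sigmoid weight lies in (0, 1), the gate can only attenuate features, never amplify them, which is what lets the model suppress irrelevant spatial regions.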
The two-dimensional feature maps with channel attention in the sequence are input sequentially by time step; by modeling the attention weights and spatio-temporal features, the model obtains the hidden state H_n,L of the last time step, which contains the overall information of the image sequence. As the high-level information extracted from the whole CT scan sequence, H_n,L undergoes a GAP operation, the result is fed into the FC layer, and the classification result of each sequence, expressed as class membership probabilities, is obtained through the sigmoid activation function.
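The final classification head (GAP, then FC, then sigmoid) applied to the last hidden state can be sketched as follows; the channel count, class count and random weights are placeholders for the trained network's.

```python
import numpy as np

def classify(h_last, w_fc, b_fc):
    """Global average pooling over the spatial dims of the last hidden
    state, then a fully connected layer and sigmoid, giving per-class
    probabilities for the whole sequence."""
    pooled = h_last.mean(axis=(1, 2))        # (C,) global average pooling
    logits = w_fc @ pooled + b_fc            # (num_classes,)
    return 1.0 / (1.0 + np.exp(-logits))     # class probabilities

rng = np.random.default_rng(2)
h = rng.standard_normal((16, 8, 8))          # H_n,L: (channels, H, W)
probs = classify(h, rng.standard_normal((2, 16)), np.zeros(2))
```

With sigmoid rather than softmax, each output is an independent probability, which suits both the binary coarse model and per-subtype fine models described above.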
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.
Claims (5)
1. A cerebral hemorrhage CT image classification method based on image sequence analysis is characterized in that cerebral hemorrhage CT images are classified firstly; the method specifically comprises the following steps:
determining from the brain CT scan whether cerebral hemorrhage has occurred, i.e., judging whether the brain CT scan result is cerebral-hemorrhage positive or cerebral-hemorrhage negative; and identifying the cerebral hemorrhage subtype for each cerebral-hemorrhage-positive sample, i.e., judging whether the hemorrhage shown in the brain CT scan result is intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage or subdural hemorrhage.
2. The method for classifying the cerebral hemorrhage CT image based on the image sequence analysis according to claim 1, wherein the operation process comprises the acquisition of the brain CT image data, the imaging processing of the data, the construction of the model and the implementation of the classification method;
specifically, the method comprises the following steps: obtaining a brain CT scanning result to obtain brain CT image original data; carrying out imaging processing on original data of the brain CT image to form a three-channel image sequence as the input of a subsequent model; constructing a model comprising a channel attention module and a space attention module to extract the characteristics of a three-channel image sequence; and training model optimization parameters, identifying cerebral hemorrhage positive and distinguishing cerebral hemorrhage subtypes, and realizing classification of cerebral hemorrhage CT images.
3. The method for classifying cerebral hemorrhage CT images based on image sequence analysis according to claim 2, characterized in that the operation steps are as follows:
(1) processing the brain CT image data:
(1.1) obtaining brain CT scan results, storing them in the standard DICOM medical image format, and taking the stored scan results as the raw brain CT image data;
(1.2) determining a coarse-grained class label for every sample in the stored raw brain CT image data:
according to whether an image shows the high-density shadow characteristic of cerebral hemorrhage, marking the raw brain CT image data as hemorrhage-positive or hemorrhage-negative, i.e., ICH-positive or ICH-negative;
(1.3) determining a fine-grained class label for every sample among the raw brain CT image data marked ICH-positive;
(1.4) performing imaging processing on all the raw brain CT image data in DICOM format, extracting each sample into a three-channel image sequence;
(2) constructing the model:
(2.1) weighting the features extracted from the three-channel image sequence with a channel attention module, and fusing the weighted features;
(2.2) generating spatial-region attention over the fused features with a CLSTM (convolutional LSTM) network equipped with a spatial attention module, extracting the spatio-temporal features of the three-channel image sequence, and performing classification on those features;
(3) implementing the classification method:
taking the three-channel image sequence obtained from the imaging processing as the input of the constructed model, then training the model parameters with the determined coarse-grained and fine-grained class labels to obtain the final model for classifying cerebral hemorrhage CT images.
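As an illustration of step (2), a minimal NumPy sketch of the two attention operations follows. The squeeze-and-excitation-style channel weighting and the channel-pooled spatial map are generic stand-ins: the weight matrices `w1`/`w2` and all shapes are made up for the example, and the CLSTM recurrence itself is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Weight each of the C feature maps by a learned scalar in (0, 1).

    feats: array of shape (C, H, W); w1, w2: toy (C, C) weight matrices.
    Returns the reweighted maps and the per-channel weights.
    """
    squeeze = feats.mean(axis=(1, 2))              # global average pool -> (C,)
    weights = sigmoid(w2 @ np.tanh(w1 @ squeeze))  # per-channel attention
    return feats * weights[:, None, None], weights

def spatial_attention(feats):
    """Produce an (H, W) map in (0, 1) highlighting above-average regions."""
    pooled = feats.mean(axis=0)                    # collapse the channel axis
    return sigmoid(pooled - pooled.mean())

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8, 8))                 # 3 fused channel maps, 8x8
w1 = 0.1 * rng.normal(size=(3, 3))                 # hypothetical learned weights
w2 = 0.1 * rng.normal(size=(3, 3))
weighted, cw = channel_attention(feats, w1, w2)
smap = spatial_attention(weighted)
```

In the claimed pipeline these weights would be learned end-to-end during step (3), with the spatial map applied inside each CLSTM step rather than once over the fused features.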
4. The method for classifying cerebral hemorrhage CT images based on image sequence analysis according to claim 3, characterized in that
in step (1.3), determining the fine-grained class label of each ICH-positive sample specifically comprises: according to the position and shape of the high-density shadow shown in the image, marking the ICH-positive raw brain CT image data as one of five cerebral hemorrhage subtypes: intraventricular hemorrhage, intraparenchymal hemorrhage, subarachnoid hemorrhage, epidural hemorrhage, and subdural hemorrhage, i.e., the IVH, IPH, SAH, EDH, and SDH classes.
5. The method for classifying cerebral hemorrhage CT images based on image sequence analysis according to claim 3, characterized in that
in step (1.4), performing the imaging processing on all the raw brain CT image data in DICOM format specifically comprises: setting different window levels and window widths to select a whole-brain window, a blood window, and a skull window respectively, and converting the information within each window's HU (Hounsfield unit) value range into an ordinary image representation, so that each sample in the raw brain CT image data is extracted into a three-channel image sequence; the channels obtained after processing are stored in PNG image format, and the sequence length equals the number of tomographic slices in the sample's raw brain CT image data.
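The windowing in step (1.4) maps each Hounsfield-unit slice into three 8-bit channels. Below is a sketch under assumed window settings: the claim names the three windows but gives no numeric levels or widths, so the `(level, width)` values are common radiology defaults, not values from the patent, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical (level, width) settings in Hounsfield units for the three
# windows named in the claim; actual values are not specified by the patent.
WINDOWS = {
    "brain": (40, 80),     # whole-brain soft-tissue window
    "blood": (50, 130),    # acute hemorrhage appears hyperdense here
    "bone":  (600, 2800),  # skull window
}

def apply_window(hu_slice, level, width):
    """Map a HU-valued CT slice into an 8-bit image for one channel."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu_slice, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def slice_to_three_channels(hu_slice):
    """Stack the three windowed views into an H x W x 3 array (one frame)."""
    return np.stack(
        [apply_window(hu_slice, *WINDOWS[k]) for k in ("brain", "blood", "bone")],
        axis=-1,
    )
```

Applied slice by slice, a CT series of N tomographic slices becomes a sequence of N such three-channel frames, each of which can be saved as one PNG; the sequence length then equals the slice count, as the claim requires.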
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210162764.5A CN114565572A (en) | 2022-02-22 | 2022-02-22 | Cerebral hemorrhage CT image classification method based on image sequence analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114565572A true CN114565572A (en) | 2022-05-31 |
Family
ID=81713448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210162764.5A Pending CN114565572A (en) | 2022-02-22 | 2022-02-22 | Cerebral hemorrhage CT image classification method based on image sequence analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114565572A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114913383A (en) * | 2022-06-24 | 2022-08-16 | 北京赛迈特锐医疗科技有限公司 | Model training method for identifying image sequence type and method for configuring image equipment |
CN115439686A (en) * | 2022-08-30 | 2022-12-06 | 一选(浙江)医疗科技有限公司 | Method and system for detecting attention object based on scanned image |
CN115439686B (en) * | 2022-08-30 | 2024-01-09 | 一选(浙江)医疗科技有限公司 | Method and system for detecting object of interest based on scanned image |
CN116245951A (en) * | 2023-05-12 | 2023-06-09 | 南昌大学第二附属医院 | Brain tissue hemorrhage localization and classification and hemorrhage quantification method, device, medium and program |
CN116245951B (en) * | 2023-05-12 | 2023-08-29 | 南昌大学第二附属医院 | Brain tissue hemorrhage localization and classification and hemorrhage quantification method, device, medium and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||