CN112614145B - Deep learning-based intracranial hemorrhage CT image segmentation method - Google Patents


Info

Publication number
CN112614145B
CN112614145B
Authority
CN
China
Prior art keywords
image
stage
intracranial hemorrhage
layer
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011633932.1A
Other languages
Chinese (zh)
Other versions
CN112614145A (en)
Inventor
胡凯
侯媛媛
张园
高协平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangtan University
Original Assignee
Xiangtan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangtan University filed Critical Xiangtan University
Priority to CN202011633932.1A priority Critical patent/CN112614145B/en
Publication of CN112614145A publication Critical patent/CN112614145A/en
Application granted granted Critical
Publication of CN112614145B publication Critical patent/CN112614145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/11 Region-based segmentation (G06T Image data processing > G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection)
    • G06N3/045 Combinations of networks (G06N Computing arrangements based on specific computational models > G06N3/00 Biological models > G06N3/02 Neural networks > G06N3/04 Architecture)
    • G06N3/08 Learning methods (G06N3/02 Neural networks)
    • G06T2207/10081 Computed x-ray tomography [CT] (G06T2207/10 Image acquisition modality > G06T2207/10072 Tomographic images)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/30016 Brain (G06T2207/30 Subject of image > G06T2207/30004 Biomedical image processing)
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular (G06T2207/30004 Biomedical image processing)

Abstract

The invention provides a deep learning-based intracranial hemorrhage CT image segmentation method comprising the following steps: acquiring intracranial hemorrhage CT images; preprocessing the CT images and taking part of the preprocessed images as training samples; training a deep convolutional neural network with the training samples to obtain a trained network; and inputting the preprocessed intracranial hemorrhage CT images into the trained deep convolutional neural network for segmentation, outputting the segmented images, and displaying the hemorrhage-region segmentation results through a GUI interface. By automatically extracting high-level image features with a deep convolutional neural network and segmenting the hemorrhage region, the invention effectively alleviates the data-imbalance problem caused by large variation in hemorrhage-region size and achieves high-precision segmentation.

Description

Deep learning-based intracranial hemorrhage CT image segmentation method
Technical Field
The invention relates to the fields of artificial intelligence and medical image processing, and in particular to a deep learning-based intracranial hemorrhage CT image segmentation method.
Background
Intracranial hemorrhage (ICH) is bleeding caused by the rupture of cerebral blood vessels; the escaped blood compresses the surrounding nervous tissue and induces functional disorders. Because car accidents, trauma, hypertension, vascular disease, brain tumors and the like can all cause intracranial hemorrhage, it has become a common condition; it deteriorates rapidly and carries a high risk of disability or death, so timely diagnosis and treatment of intracranial hemorrhage patients is of the utmost importance. With the development of imaging technology, clinical diagnosis of intracranial hemorrhage mainly relies on a radiologist examining computed tomography (CT) images to detect and locate the hemorrhage region. However, because of the complex structure of the brain, the inconsistent size and shape of hemorrhage regions, the low contrast of brain CT images and the blurred boundaries of hemorrhage regions, manual delineation of the hemorrhage region is time-consuming and labor-intensive and is subject to subjective, inter-observer error.
Disclosure of Invention
To remedy the shortcomings of existing intracranial hemorrhage CT image segmentation techniques and to address the adverse effect that large variation in hemorrhage-region size has on segmentation, the invention provides a deep learning-based intracranial hemorrhage CT image segmentation method.
The technical scheme of the invention comprises the following steps:
1) acquiring an intracranial hemorrhage CT image;
2) preprocessing the intracranial hemorrhage CT images, and taking part of the preprocessed images as training samples;
3) training the deep convolutional neural network by using a training sample to obtain a trained deep convolutional neural network;
4) inputting the preprocessed intracranial hemorrhage CT image into the trained deep convolutional neural network for image segmentation, and outputting the segmented intracranial hemorrhage CT image.
In the above deep learning-based intracranial hemorrhage CT image segmentation method, step 2) comprises the following specific steps:
For the intracranial hemorrhage CT images received by the image preprocessing module, each image is adjusted to a preset size: images smaller than the preset size are edge-padded with 0-value pixels, and images larger than the preset size are center-cropped.
For each resized intracranial hemorrhage CT image, the region containing the intracranial hematoma is extracted by a region growing method, avoiding interference during segmentation from the skull and other tissues with high CT values. The region growing method comprises the following steps:
generating a zero matrix of the same size as each original intracranial hemorrhage CT image;
selecting, on the original image, seed points whose pixel values satisfy the seeding condition, adding them to the growth region, and setting the value at the corresponding positions of the zero matrix to 1;
selecting an unmarked pixel from the growth region and computing the gray-value difference between it and each of its neighborhood pixels; if the difference satisfies the threshold condition, adding the neighborhood pixel to the growth region and setting the value at the corresponding position of the zero matrix to 1; once all neighborhood pixels of the selected pixel have been processed, marking it as processed;
if no unmarked pixels remain in the growth region, the region growing ends; otherwise another pixel is selected and the previous step is repeated;
finally, a morphological opening with a 5 × 5 rectangular structuring element is applied to the generated image matrix to smooth the boundary and remove fine protrusions, and the resulting matrix is multiplied element-wise with the original image to obtain an image containing only the intracranial structures.
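The region growing steps above can be sketched as follows. This is a minimal illustration in numpy, not the patent's implementation: the 4-neighborhood, the toy image, the single seed and the threshold of 10 are all invented for the example, and the morphological opening step is omitted.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow a binary mask from `seed`, adding each neighborhood pixel
    whose gray value differs from the current pixel by less than
    `thresh`; the mask plays the role of the zero matrix."""
    h, w = img.shape
    mask = np.zeros_like(img, dtype=np.uint8)   # zero matrix, same size
    mask[seed] = 1                              # seed position set to 1
    queue = deque([seed])                       # unmarked pixels of the region
    while queue:                                # growing ends when none remain
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 0:
                if abs(int(img[ny, nx]) - int(img[y, x])) < thresh:
                    mask[ny, nx] = 1            # same position in zero matrix
                    queue.append((ny, nx))
    return mask

# Bright 2x2 block (hematoma stand-in) on a dark background
img = np.array([[100, 102,  10],
                [101,  99,  12],
                [ 11,  10,  11]], dtype=np.uint8)
mask = region_grow(img, seed=(0, 0), thresh=10)
masked = img * mask   # element-wise product keeps only the grown region
```

The element-wise product in the last line corresponds to the final step of the method: pixels outside the grown mask are zeroed out, leaving only the intracranial structure.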
In the above deep learning-based intracranial hemorrhage CT image segmentation method, the specific procedure of step 3) is: the image training samples are converted into tensors and fed to the input layer of the network; the input variables are propagated forward layer by layer to produce a prediction; the error between the prediction and the ground truth is computed and back-propagated layer by layer, updating the weights and biases of the neurons in each layer; forward propagation is then repeated, and this process is iterated until the network fits the training data, yielding the trained deep convolutional neural network.
In the above deep learning-based intracranial hemorrhage CT image segmentation method, the specific procedure of step 4) is: the deep convolutional neural network extracts high-level semantic information from the image, decides at the pixel level whether each pixel belongs to an intracranial hemorrhage region, and thereby segments the intracranial hemorrhage CT image.
The invention has the following beneficial effects. Addressing the difficulties of intracranial hemorrhage CT image segmentation, the invention segments the images automatically with an attention-augmented deep convolutional neural network: the attention mechanism focuses on and enhances the target region, while the multi-scale convolutional layers extract image features at different receptive-field sizes, which is effective for target regions of extreme size. The hemorrhage region can thus be segmented accurately and efficiently, meeting basic clinical requirements, avoiding the subjective errors of manual segmentation, and saving labor cost; this is of positive significance in assisting radiologists in diagnosing intracranial hemorrhage.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a structural diagram of a deep convolutional neural network in the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in fig. 1, the present invention provides a method for segmenting an intracranial hemorrhage CT image based on deep learning, which specifically comprises the following steps:
1) acquiring an intracranial hemorrhage CT image;
2) preprocessing the intracranial hemorrhage CT images. Because the data set comes from different imaging devices in different hospitals, images of different cases differ in size, so each intracranial hemorrhage CT image must be adjusted to the preset size expected by the network input. The procedure is: each acquired image is adjusted to the preset size, images smaller than the preset size are edge-padded with 0-value pixels, and images larger than the preset size are center-cropped;
for each resized intracranial hemorrhage CT image, the region containing the intracranial hematoma is extracted by a region growing method, avoiding interference during segmentation from the skull and other tissues with high CT values. The region growing method comprises the following steps:
generating a zero matrix of the same size as each original intracranial hemorrhage CT image;
selecting, on the original image, seed points whose pixel values satisfy the seeding condition, adding them to the growth region, and setting the value at the corresponding positions of the zero matrix to 1;
selecting an unmarked pixel from the growth region and computing the gray-value difference between it and each of its neighborhood pixels; if the difference satisfies the threshold condition, adding the neighborhood pixel to the growth region and setting the value at the corresponding position of the zero matrix to 1; once all neighborhood pixels of the selected pixel have been processed, marking it as processed;
if no unmarked pixels remain in the growth region, the region growing ends; otherwise another pixel is selected and the previous step is repeated;
finally, a morphological opening with a 5 × 5 rectangular structuring element is applied to the generated image matrix to smooth the boundary and remove fine protrusions, and the resulting matrix is multiplied element-wise with the original image to obtain an image containing only the intracranial structures.
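The size-normalization step above (zero-padding images smaller than the preset size, center-cropping larger ones) can be sketched as below. The 512 × 512 preset size is an assumption for illustration; the patent does not state the actual value.

```python
import numpy as np

def fit_to_size(img, target=(512, 512)):
    """Zero-pad images smaller than the preset size and center-crop
    images larger than it, one axis at a time."""
    out = img
    for axis, t in enumerate(target):
        s = out.shape[axis]
        if s < t:                           # edge-fill with 0-value pixels
            before = (t - s) // 2
            pad = [(0, 0), (0, 0)]
            pad[axis] = (before, t - s - before)
            out = np.pad(out, pad, mode="constant")
        elif s > t:                         # center crop along this axis
            start = (s - t) // 2
            out = np.take(out, range(start, start + t), axis=axis)
    return out

# A 600x400 scan is cropped in height and padded in width to 512x512
resized = fit_to_size(np.ones((600, 400)))
```

Handling each axis independently covers the mixed case where one dimension needs padding and the other needs cropping.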
Part of the preprocessed intracranial hemorrhage CT images is then taken as training samples.
3) training the deep convolutional neural network with the training samples to obtain a trained deep convolutional neural network. The procedure is: the image training samples are converted into tensors and fed to the input layer of the network; the input variables are propagated forward layer by layer to produce a prediction; the error between the prediction and the ground truth is computed and back-propagated layer by layer, updating the weights and biases of the neurons in each layer; forward propagation is then repeated, and this process is iterated until the network fits the training data, yielding the trained deep convolutional neural network.
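The forward-propagation / error back-propagation / weight-update cycle described above can be illustrated on a toy one-layer network. This is a minimal numpy sketch, not the patent's network: the data, layer size, learning rate and iteration count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                     # training samples as tensors
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)   # toy ground-truth labels

W = rng.normal(scale=0.1, size=(4, 1))           # weights of the layer
b = np.zeros((1, 1))                             # bias of the layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                  # repeat the forward/backward cycle
    p = sigmoid(X @ W + b)            # forward propagation -> prediction
    err = p - y                       # error vs. the actual values
    dW = X.T @ err / len(X)           # error fed back to the layer's weights
    db = err.mean(axis=0, keepdims=True)
    W -= 0.5 * dW                     # update weights and bias
    b -= 0.5 * db

acc = ((p > 0.5) == (y > 0.5)).mean() # training accuracy after fitting
```

The same cycle, applied layer by layer through the convolutional stages, is what fits the segmentation network to the training data.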
The deep convolutional neural network is shown in fig. 2. It has an encoder-decoder structure. The encoder consists of five stages, labeled encoding stages 1 to 5 in the figure (see the "encoding stage X" detail on the right of fig. 2); each encoder stage consists of two convolutional layers with 3 × 3 kernels, each followed by a ReLU activation function. Successive encoder stages are connected by max-pooling layers with 2 × 2 filters; the pooling layers progressively reduce the spatial resolution of the output feature maps and extract positional information and deep semantic information from the image. The decoder consists of four stages, labeled decoding stages 1 to 4 in the figure (see the "decoding stage X" detail on the right of fig. 2); each decoder stage likewise consists of two 3 × 3 convolutional layers, each followed by a ReLU activation function. Each decoder stage is connected to the preceding stage by an upsampling layer, the first decoder stage being connected to the fifth encoder stage; the upsampling layers restore the spatial resolution of the feature maps layer by layer until the original image size is recovered.
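The resolution schedule implied by the architecture above (five encoder stages separated by 2 × 2 pooling, four decoder stages each preceded by a 2× upsampling layer) can be traced numerically. The 512-pixel input size is an assumption for illustration, not stated in the patent.

```python
def stage_sizes(n=512):
    """Trace the feature-map side length through the network:
    pooling halves it between the five encoder stages, upsampling
    doubles it before each of the four decoder stages."""
    enc = [n]
    for _ in range(4):          # four pooling layers between five stages
        enc.append(enc[-1] // 2)
    dec = []
    cur = enc[-1]               # bottleneck: fifth encoder stage
    for _ in range(4):          # four upsampling layers
        cur *= 2
        dec.append(cur)
    return enc, dec

enc, dec = stage_sizes(512)
# enc traces 512 -> 256 -> 128 -> 64 -> 32; dec restores 64 -> ... -> 512
```

Note that encoder stage k and decoder stage 5-k have matching resolutions, which is what makes the skip-connection concatenations described next dimensionally consistent.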
The first four encoder stages of the deep convolutional neural network are concatenated with the corresponding decoder stages through skip connections containing attention units, as shown in fig. 2: an attention unit connects encoding stage 1 and decoding stage 3, and its output is concatenated with the feature map of decoding stage 4; an attention unit connects encoding stage 2 and decoding stage 2, and its output is concatenated with the feature map of decoding stage 3; an attention unit connects encoding stage 3 and decoding stage 1, and its output is concatenated with the feature map of decoding stage 2; and an attention unit connects encoding stage 4 and encoding stage 5, and its output is concatenated with the feature map of decoding stage 1.
The attention unit uses high-level semantic information to compute an attention coefficient that filters useless information out of the low-level feature maps and helps the decoder restore the detailed features of the target region. Specifically:
The feature map of the stage preceding a given decoder stage is restored by an upsampling layer to the same size as the feature map of the corresponding encoder stage. Each of the two feature maps is passed through a 1 × 1 convolution, and the results are added element-wise. The sum is passed through a ReLU activation function into a parallel multi-scale convolution module, where it goes through convolutional layers with 1 × 1, 3 × 3 and 5 × 5 kernels whose outputs are concatenated. The fused result then passes through another 1 × 1 convolution and a Sigmoid activation function to produce the attention coefficient α, which takes values in [0, 1]. The coefficient α is multiplied with the encoder-stage feature map to suppress irrelevant regions and increase the weight of the target region; finally, the output is concatenated with the feature map of the decoder stage to replenish the spatial information lost when the feature-map resolution was reduced.
4) Inputting the preprocessed intracranial hemorrhage CT image into the trained deep convolutional neural network for image segmentation, and outputting the segmented intracranial hemorrhage CT image.
Finally, it should be noted that the above embodiments are intended only to illustrate the design concept and features of the present invention, not to limit it; those skilled in the art will understand that other modifications or equivalent substitutions of the technical solution of the present invention fall within the scope defined by the claims of the present application.

Claims (3)

1. A deep learning-based intracranial hemorrhage CT image segmentation method is characterized by comprising the following steps:
1) acquiring an intracranial hemorrhage CT image;
2) preprocessing the intracranial hemorrhage CT images and taking part of the preprocessed images as training samples, with the following specific steps: adjusting each acquired intracranial hemorrhage CT image to a preset size, edge-padding images smaller than the preset size with 0-value pixels and center-cropping images larger than the preset size;
for each resized intracranial hemorrhage CT image, extracting the region containing the intracranial hematoma by a region growing method, so as to avoid interference during segmentation from the skull or other tissues with high CT values; the region growing method comprising:
generating a zero matrix of the same size as each original intracranial hemorrhage CT image;
selecting, on the original image, seed points whose pixel values satisfy the seeding condition, adding them to the growth region, and setting the value at the corresponding positions of the zero matrix to 1;
selecting an unmarked pixel from the growth region and computing the gray-value difference between it and each of its neighborhood pixels; if the difference satisfies the threshold condition, adding the neighborhood pixel to the growth region and setting the value at the corresponding position of the zero matrix to 1; once all neighborhood pixels of the selected pixel have been processed, marking it as processed;
if no unmarked pixels remain in the growth region, ending the region growing; otherwise continuing to select pixels and repeating the previous step;
applying a morphological opening with a 5 × 5 rectangular structuring element to the generated image matrix to smooth the boundary and remove fine protrusions, then multiplying the resulting matrix element-wise with the original image to obtain an image containing only the intracranial structures;
3) training the deep convolutional neural network with the training samples to obtain a trained deep convolutional neural network; the deep convolutional neural network having an encoder-decoder structure, the encoder consisting of five stages, each encoder stage consisting of two convolutional layers with 3 × 3 kernels, each convolutional layer being followed by a ReLU activation function, successive encoder stages being connected by max-pooling layers with 2 × 2 filters, the pooling layers progressively reducing the spatial resolution of the output feature maps to extract positional information and deep semantic information from the image; the decoder consisting of four stages, each decoder stage likewise consisting of two convolutional layers with 3 × 3 kernels, each convolutional layer being followed by a ReLU activation function, each decoder stage being connected to the preceding stage by an upsampling layer, the first decoder stage being connected to the fifth encoder stage by an upsampling layer, the upsampling layers restoring the spatial resolution of the feature maps layer by layer until the original image size is recovered; the first four encoder stages being concatenated with the corresponding decoder stages through skip connections containing attention units, wherein the first encoder stage corresponds to the fourth decoder stage, the second encoder stage to the third decoder stage, the third encoder stage to the second decoder stage, and the fourth encoder stage to the first decoder stage; the attention unit using high-level semantic information to obtain an attention coefficient that filters useless information out of the low-level feature maps and helps the decoder restore the detailed features of the target region, wherein: the feature map of the stage preceding a given decoder stage is restored by an upsampling layer to the same size as the feature map of the corresponding encoder stage; each of the two feature maps is passed through a 1 × 1 convolution, and the results are added element-wise; the sum is passed through a ReLU activation function into a parallel multi-scale convolution module, where it goes through convolutional layers with 1 × 1, 3 × 3 and 5 × 5 kernels whose outputs are concatenated; the fused result then passes through a 1 × 1 convolution and a Sigmoid activation function to obtain the attention coefficient α, which takes values in [0, 1]; the attention coefficient α is multiplied with the encoder-stage feature map to suppress irrelevant regions and increase the weight of the target region; and finally the output is concatenated with the feature map of the decoder stage to replenish the spatial information lost when the feature-map resolution was reduced;
4) inputting the preprocessed intracranial hemorrhage CT image into the trained deep convolutional neural network for image segmentation, and outputting the segmented intracranial hemorrhage CT image.
2. The deep learning-based intracranial hemorrhage CT image segmentation method according to claim 1, wherein the specific steps of step 3) are: the image training samples are converted into tensors and fed to the input layer of the network; the input variables are propagated forward layer by layer to produce a prediction; the error between the prediction and the ground truth is computed and back-propagated layer by layer, updating the weights and biases of the neurons in each layer; forward propagation is then repeated, and this process is iterated until the network fits the training data, yielding the trained deep convolutional neural network.
3. The deep learning-based intracranial hemorrhage CT image segmentation method according to claim 1, wherein the specific steps of step 4) are: the deep convolutional neural network extracts high-level semantic information from the image, decides at the pixel level whether each pixel belongs to an intracranial hemorrhage region, and segments the intracranial hemorrhage CT image.
CN202011633932.1A 2020-12-31 2020-12-31 Deep learning-based intracranial hemorrhage CT image segmentation method Active CN112614145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011633932.1A CN112614145B (en) 2020-12-31 2020-12-31 Deep learning-based intracranial hemorrhage CT image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011633932.1A CN112614145B (en) 2020-12-31 2020-12-31 Deep learning-based intracranial hemorrhage CT image segmentation method

Publications (2)

Publication Number Publication Date
CN112614145A (en) 2021-04-06
CN112614145B (en) 2022-04-12

Family

ID=75252972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011633932.1A Active CN112614145B (en) 2020-12-31 2020-12-31 Deep learning-based intracranial hemorrhage CT image segmentation method

Country Status (1)

Country Link
CN (1) CN112614145B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155232A (en) * 2021-12-08 2022-03-08 中国科学院深圳先进技术研究院 Intracranial hemorrhage area detection method and device, computer equipment and storage medium
CN115187600B (en) * 2022-09-13 2022-12-09 杭州涿溪脑与智能研究所 Brain hemorrhage volume calculation method based on neural network
CN116245951B (en) * 2023-05-12 2023-08-29 南昌大学第二附属医院 Brain tissue hemorrhage localization and classification and hemorrhage quantification method, device, medium and program
CN116385977B (en) * 2023-06-06 2023-08-15 首都医科大学附属北京安贞医院 Intraoperative bleeding point detection system based on deep learning
CN116703948B (en) * 2023-08-03 2023-11-14 杭州脉流科技有限公司 Intracranial vessel tree segmentation method and device based on deep neural network
CN117197162B (en) * 2023-09-27 2024-04-09 东北林业大学 Intracranial hemorrhage CT image segmentation method based on differential convolution

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3107930A1 (en) * 2018-07-30 2020-02-06 Memorial Sloan Kettering Cancer Center Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN112116605A (en) * 2020-09-29 2020-12-22 西北工业大学深圳研究院 Pancreas CT image segmentation method based on integrated depth convolution neural network
CN112132817A (en) * 2020-09-29 2020-12-25 汕头大学 Retina blood vessel segmentation method for fundus image based on mixed attention mechanism

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3719745A1 (en) * 2019-04-01 2020-10-07 Siemens Healthcare GmbH Processing a medical image
CN111627019B (en) * 2020-06-03 2023-03-14 西安理工大学 Liver tumor segmentation method and system based on convolutional neural network
CN112102321B (en) * 2020-08-07 2023-09-01 深圳大学 Focal image segmentation method and system based on depth convolution neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3107930A1 (en) * 2018-07-30 2020-02-06 Memorial Sloan Kettering Cancer Center Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy
WO2020028382A1 (en) * 2018-07-30 2020-02-06 Memorial Sloan Kettering Cancer Center Multi-modal, multi-resolution deep learning neural networks for segmentation, outcomes prediction and longitudinal response monitoring to immunotherapy and radiotherapy
CN111754520A (en) * 2020-06-09 2020-10-09 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN112116605A (en) * 2020-09-29 2020-12-22 西北工业大学深圳研究院 Pancreas CT image segmentation method based on integrated depth convolution neural network
CN112132817A (en) * 2020-09-29 2020-12-25 汕头大学 Retina blood vessel segmentation method for fundus image based on mixed attention mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Automatic segmentation of intracerebral hemorrhage in CT images using encoder–decoder convolutional neural network; Kai Hu et al.; 《Information Processing & Management》; 2020-11-30; Vol. 57, No. 6; Sections 2-3 *
Segmenting Hemorrhagic and Ischemic Infarct Simultaneously From Follow-Up Non-Contrast CT Images in Patients With Acute Ischemic Stroke; Hulin Kuang et al.; 《IEEE Access》; 2019-03-22; full text *
Forced recall feature attention network for image segmentation (用于图像分割的强制召回特征注意力网络); 魏建华 et al.; 《集成技术》; 2020-11-30; Vol. 9, No. 6; full text *

Also Published As

Publication number Publication date
CN112614145A (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN112614145B (en) Deep learning-based intracranial hemorrhage CT image segmentation method
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
Saad et al. Image segmentation for lung region in chest X-ray images using edge detection and morphology
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
Qian et al. Digital mammography: comparison of adaptive and nonadaptive CAD methods for mass detection
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN114048806A (en) Alzheimer disease auxiliary diagnosis model classification method based on fine-grained deep learning
CN113576508A (en) Cerebral hemorrhage auxiliary diagnosis system based on neural network
CN114998265A (en) Liver tumor segmentation method based on improved U-Net
CN111091575B (en) Medical image segmentation method based on reinforcement learning method
CN116188479A (en) Hip joint image segmentation method and system based on deep learning
CN113539402B (en) Multi-mode image automatic sketching model migration method
CN114882048A (en) Image segmentation method and system based on wavelet scattering learning network
CN116309647B (en) Method for constructing craniocerebral lesion image segmentation model, image segmentation method and device
CN109919098B (en) Target object identification method and device
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN116310535A (en) Multi-scale multi-region thyroid nodule prediction method
CN114049315B (en) Joint recognition method, electronic device, storage medium, and computer program product
Alzoubi et al. Automatic segmentation and detection system for varicocele in supine position
CN115294023A (en) Liver tumor automatic segmentation method and device
Taş et al. Detection of retinal diseases from ophthalmological images based on convolutional neural network architecture.
CN112754511A (en) CT image intracranial thrombus detection and property classification method based on deep learning
CN113781636B (en) Pelvic bone modeling method and system, storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant