CN111724397B - Automatic segmentation method for craniocerebral CT image bleeding area - Google Patents

Automatic segmentation method for craniocerebral CT image bleeding area

Info

Publication number
CN111724397B
CN111724397B (application number CN202010559693.3A)
Authority
CN
China
Prior art keywords
craniocerebral
training
image
layers
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010559693.3A
Other languages
Chinese (zh)
Other versions
CN111724397A (en)
Inventor
曹国刚
王一杰
朱信玉
李梦雪
曹聪
刘顺堃
毛红东
孔德卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Technology
Original Assignee
Shanghai Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technology
Priority to CN202010559693.3A
Publication of CN111724397A
Application granted
Publication of CN111724397B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Abstract

The invention discloses an automatic segmentation method for the bleeding area of a craniocerebral CT image, which comprises the following steps. S1: acquiring a craniocerebral CT image whose bleeding area needs to be automatically segmented. S2: passing the craniocerebral CT image through an improved U-Net convolutional neural network constructed and trained in advance; the overall structure of the network is three downsampling layers and three upsampling layers, and in the skip-connection step the feature maps obtained by downsampling are each copied, convolved and cropped, then concatenated with the feature map output by the layer preceding the corresponding upsampling layer, which upsamples and convolves the concatenated feature map. The improved U-Net convolutional neural network comprises seven layers in total, so that information loss is reduced while feature extraction is preserved, time is saved, and overall efficiency is improved. Adding a convolution operation to the downsampled feature map in the skip-connection step provides more information to the upsampling layer without changing the number of model layers, improving the subsequent segmentation of craniocerebral hemorrhage CT images.

Description

Automatic segmentation method for craniocerebral CT image bleeding area
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to an automatic segmentation method for a bleeding area of a craniocerebral CT image.
Background
According to the Global Burden of Disease study, stroke is the second leading cause of death and one of the main causes of disability worldwide. A stroke occurs when blood flow to a specific area of the brain is interrupted; deprived of oxygen, brain cells in that area die, and the functions controlled by that region of the brain (e.g., memory and muscle control) are lost. Hemorrhagic stroke occurs when a blood vessel ruptures, while ischemic stroke occurs when a blood vessel is blocked; the damage caused by hemorrhagic stroke is greater than that of ischemic stroke.
In the medical field, to support diagnosis and the formulation of a treatment plan, it is often necessary to scan a patient to determine the condition of internal organs. Computed tomography (CT), which is faster and cheaper than other examinations, is the most common examination method for assessing suspected stroke patients. From such a scan, an expert can judge the severity of the damage by visual assessment. In the prior art, the bleeding region of a craniocerebral CT image is segmented by a conventional U-Net convolutional neural network to facilitate identification; however, the conventional U-Net suffers from long image processing time, substantial loss of image information, and low precision.
Disclosure of Invention
The invention aims to provide an automatic segmentation method for the bleeding area of a craniocerebral CT image, so as to solve the technical problems of slow image recognition and low accuracy.
In order to solve the above problems, the technical scheme of the invention is as follows:
An automatic segmentation method for a craniocerebral CT image bleeding area comprises the following steps:
S1: acquiring a craniocerebral CT image whose bleeding area needs to be automatically segmented;
S2: inputting the craniocerebral CT image into a pre-constructed model of an improved U-Net convolutional neural network, whose structure comprises three downsampling layers followed by three upsampling layers, so that the bleeding area is automatically segmented and a segmentation result is obtained; the downsampling layers correspond one-to-one to the upsampling layers, the feature maps obtained by the three downsamplings are each copied, convolved and cropped, then concatenated with the feature map output by the layer preceding the corresponding upsampling layer, and that upsampling layer upsamples and convolves the concatenated feature map.
Further preferably, step S1 is preceded by a step S0:
S01: constructing an improved U-Net convolutional neural network;
S02: collecting craniocerebral CT training images, and training the improved U-Net convolutional neural network on the craniocerebral CT training images.
Specifically, step S02 further includes:
A1: collecting craniocerebral CT training images;
A2: screening the collected craniocerebral CT training images to retain those that are clear;
A3: preprocessing the screened craniocerebral CT training images to obtain craniocerebral CT standardized data;
A4: inputting the craniocerebral CT standardized data into the improved U-Net convolutional neural network for training.
Specifically, in step A3, preprocessing the screened craniocerebral CT training images further includes:
A31: performing a patch-extraction (image block) operation on the screened craniocerebral CT training images to obtain craniocerebral CT augmented images with an increased data volume;
A32: applying contrast-limited adaptive histogram equalization (CLAHE) to the craniocerebral CT augmented images and normalizing them to [0,1] to obtain craniocerebral CT standardized data.
Specifically, step A4 further includes:
dividing the craniocerebral CT standardized data into training set data and test set data, the amount of training set data being greater than that of the test set data;
inputting the training set data into the improved U-Net convolutional neural network to obtain a plurality of training models, and selecting and retaining the training model with the best training performance as the optimal training model;
inputting the test set data into the optimal training model for automatic segmentation of the bleeding area, obtaining the automatic segmentation performance index of the optimal training model and the segmentation result of the bleeding area of the test set data.
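A sketch of step A4 under stated assumptions: scikit-learn for the split, the 80/20 ratio used in the embodiment below, an Adam optimizer with cross-entropy loss (the text specifies neither), arrays `patches` and `masks` holding the standardized data and labels, and a hypothetical `build_improved_unet` constructor for the network detailed later:

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Divide the standardized data; more training data than test data (80/20).
x_train, x_test, y_train, y_test = train_test_split(
    patches, masks, test_size=0.2, random_state=0)

model = build_improved_unet()  # hypothetical constructor, sketched below
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Keep only the best-performing of the models produced during training.
keep_best = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_loss", save_best_only=True)
model.fit(x_train, y_train, validation_split=0.1,
          epochs=50, batch_size=8, callbacks=[keep_best])

# Evaluate the retained optimal model on the held-out test set.
model.evaluate(x_test, y_test)
```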
Specifically, step S2 further includes:
S21: passing the craniocerebral CT image sequentially through the three downsamplings of the improved U-Net convolutional neural network, each downsampling comprising two convolutions and one max pooling;
S22: performing two convolutions on the feature map output by the last downsampling in step S21;
S23: copying, convolving and cropping each of the feature maps obtained by the three downsamplings, concatenating it with the feature map output by the layer preceding the corresponding upsampling layer, and having that upsampling layer perform one upsampling and two convolutions on the concatenated feature map.
The convolution kernels in step S21 are all 3×3, the pooling windows of the max-pooling operations are all 2×2, the activation functions of the downsampling layers are all ReLU, and the downsampling layers use no padding (valid convolution).
The convolution kernels in step S22 are all 3×3, the activation functions of the skip-connection layers are all ReLU, and the skip-connection layers use no padding.
The convolution kernels of the convolution operations in step S23 are all 3×3, the upsampling convolution kernels are all 2×2, the activation functions of the upsampling layers are all ReLU, and the upsampling layers use no padding.
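In code, the modified skip connection of step S23 (an added 3×3 convolution on the copied encoder feature map before the usual crop-and-concatenate) could be sketched as follows in Keras; shapes are assumed static, and the function is an illustration rather than a definitive implementation:

```python
from tensorflow.keras import layers

def improved_skip(encoder_map, decoder_map, filters):
    # Copy + convolve: the extra 3x3 valid convolution on the downsampled map.
    skip = layers.Conv2D(filters, 3, activation="relu",
                         padding="valid")(encoder_map)
    # Crop the convolved map to the decoder map's spatial size.
    dh = skip.shape[1] - decoder_map.shape[1]
    dw = skip.shape[2] - decoder_map.shape[2]
    skip = layers.Cropping2D(((dh // 2, dh - dh // 2),
                              (dw // 2, dw - dw // 2)))(skip)
    # Concatenate with the upsampled decoder feature map.
    return layers.concatenate([skip, decoder_map])
```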
By adopting the technical scheme, the invention has the following advantages and positive effects compared with the prior art:
(1) The invention adopts an improved U-Net convolutional neural network that differs from the classical U-Net model structure of the prior art in two respects: the number of model layers and the implementation of the upsampling path.
The improved U-Net convolutional neural network comprises seven layers in total, whereas the classical U-Net model has a ten-layer structure. The more layers a neural network has, the deeper it is, the more parameters it carries, and the more time its computation takes. More layers also mean more convolution and pooling operations; although convolution extracts feature points, pooling loses information. Compared with the traditional ten layers, the seven-layer network structure therefore reduces information loss while guaranteeing feature extraction, saves time and improves overall efficiency;
the convolution operation is added to the feature map of the downsampling in the upsampling step, namely a series of operations such as copying, convolution and cutting are performed, and the feature map copying and cutting operation in the prior art is not only performed, but also image features can be better extracted without changing the model layer number, more information is provided for the upsampling layer, and therefore the subsequent image segmentation of the craniocerebral hemorrhage CT image is improved; in addition, the invention also increases the data volume through data preprocessing, and is suitable for medical images with smaller sample data sets.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIG. 1 is a flow chart of the automatic segmentation method for a craniocerebral CT image bleeding area according to the present invention;
FIG. 2 is a schematic diagram of constructing and training the improved U-Net convolutional neural network of the present invention;
FIG. 3 is a schematic diagram of the improved U-Net convolutional neural network structure of the automatic segmentation method for a craniocerebral CT image bleeding area of the present invention;
FIG. 4 is a schematic diagram of a conventional U-Net convolutional neural network.
Detailed Description
In order to clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the specific embodiments of the present invention are described below with reference to the accompanying drawings. The drawings in the following description are merely examples of the invention, from which a person skilled in the art can obtain other drawings and other embodiments without inventive effort.
For simplicity of illustration, only the parts relevant to the present invention are shown schematically in the figures; they do not represent the actual structure of the product. In addition, to keep the drawings easy to understand, components having the same structure or function in some figures are drawn or labeled only once. Herein, "a" does not mean "only one" and also covers the case of "more than one".
The invention provides an automatic craniocerebral CT image bleeding area segmentation method which is further described in detail below with reference to the accompanying drawings and the specific embodiments. Advantages and features of the invention will become more apparent from the following description and from the claims.
Referring to fig. 1, an automatic segmentation method for a craniocerebral CT image bleeding area comprises the following steps:
S1: acquiring a craniocerebral CT image whose bleeding area needs to be automatically segmented. S2: inputting the craniocerebral CT image into the pre-constructed and pre-trained model of an improved U-Net convolutional neural network, whose structure comprises three downsampling layers followed by three upsampling layers, so that the bleeding area is automatically segmented and a segmentation result is obtained; the downsampling layers correspond one-to-one to the upsampling layers, the feature maps obtained by the three downsamplings are each copied, convolved and cropped, then concatenated with the feature map output by the layer preceding the corresponding upsampling layer, and that upsampling layer upsamples and convolves the concatenated feature map.
Referring to fig. 2 and 3, the improved U-Net convolutional neural network is preferably constructed and trained before the present embodiment is carried out.
Specifically, referring to fig. 3, this embodiment constructs an improved U-Net convolutional neural network whose structure consists mainly of a compression channel on the left and an expansion channel on the right. The compression channel is a typical convolutional neural network structure that repeatedly applies 2 convolutional layers followed by 1 max-pooling layer, and the number of channels of the feature map is doubled after each pooling operation. In the expansion channel, 1 deconvolution (upsampling) is performed, which halves the number of channels of the feature map and reconstructs a feature map of twice the spatial size; the result is then concatenated with the feature map supplied by the corresponding compression stage, which is obtained by copying, convolving and cropping the feature map of the downsampling that corresponds to this upsampling. Feature extraction is then carried out by 2 convolutions, and this structure is repeated. At the final output layer, a 1×1 convolution maps the 64-channel feature map to a 2-channel output map.
Specifically, the seven layers of the improved U-Net convolutional neural network are structured as follows:
The first layer is the input layer for the training set data; it applies two consecutive convolutions and one max pooling to the training set data to obtain first output data. Specifically, the kernel size of both convolutions is 3×3 and the number of kernels is 64, the max-pooling window is 2×2, the activation function of the first layer is ReLU, and no padding is used. During the downsampling of this layer, the two consecutive convolutions yield the first feature map data;
The second layer applies two consecutive convolutions and one max pooling to the first output data to obtain second output data. Specifically, the kernel size of both convolutions is 3×3 and the number of kernels is 128, the pooling window is 2×2, the activation function of the second layer is ReLU, and no padding is used. During the downsampling of this layer, the two consecutive convolutions yield the second feature map data;
The third layer applies two consecutive convolutions and one max pooling to the second output data to obtain third output data. Specifically, the kernel size of both convolutions is 3×3 and the number of kernels is 256, the pooling window is 2×2, the activation function of the third layer is ReLU, and no padding is used. During the downsampling of this layer, the two consecutive convolutions yield the third feature map data;
The fourth layer applies two consecutive convolutions to the third output data to obtain fourth output data; the kernel size of both convolutions is 3×3, the number of kernels is 512, and the activation function is ReLU;
In the fifth layer, the third feature map data is copied, convolved with a 3×3 kernel and cropped, then concatenated with the fourth output data to form a combined map; the combined map is upsampled and passed through two consecutive convolutions to obtain fifth output data. The kernel size of the two consecutive convolutions is 3×3 and the number of kernels is 256; the upsampling kernel of this layer is 2×2, the activation function of the fifth layer is ReLU, and no padding is used;
In the sixth layer, the second feature map data is copied, convolved with a 3×3 kernel and cropped, then concatenated with the fifth output data to form a combined map; the combined map is upsampled and passed through two consecutive convolutions to obtain sixth output data. The kernel size of the two consecutive convolutions is 3×3 and the number of kernels is 128; the upsampling kernel of this layer is 2×2, the activation function of the sixth layer is ReLU, and no padding is used;
In the seventh layer, the first feature map data is copied, convolved with a 3×3 kernel and cropped, then concatenated with the sixth output data to form a combined map; the combined map is upsampled and passed through two consecutive convolutions to obtain seventh output data. The kernel size of the two consecutive convolutions is 3×3 and the number of kernels is 64; the upsampling kernel of this layer is 2×2, the activation function of the seventh layer is ReLU, and no padding is used;
Finally, the seventh-layer output data is convolved once more to obtain the final output data; the kernel size is 1×1, the number of kernels is 2, and the activation function is ReLU.
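Putting the seven layers together, a minimal Keras sketch could look as follows. Assumptions beyond the text: 188×188 single-channel input patches (chosen so the valid convolutions and poolings yield even sizes, giving a 100×100 output), a single 3×3 convolution on each copied skip map, and the conventional U-Net ordering in which upsampling precedes concatenation, consistent with the channel description above. Kernel sizes, kernel counts, ReLU activations and the absence of padding follow the text, including the ReLU stated for the final 1×1 convolution (a softmax would be the more usual choice in practice):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def double_conv(x, filters):
    x = layers.Conv2D(filters, 3, activation="relu", padding="valid")(x)
    return layers.Conv2D(filters, 3, activation="relu", padding="valid")(x)

def crop_to(src, tgt):
    """Center-crop src to tgt's spatial size (static shapes assumed)."""
    dh, dw = src.shape[1] - tgt.shape[1], src.shape[2] - tgt.shape[2]
    return layers.Cropping2D(((dh // 2, dh - dh // 2),
                              (dw // 2, dw - dw // 2)))(src)

def up_block(x, skip, filters):
    # 2x2 up-convolution, then the modified skip connection:
    # copy -> 3x3 convolution -> crop -> concatenate.
    x = layers.Conv2DTranspose(filters, 2, strides=2, activation="relu")(x)
    skip = layers.Conv2D(filters, 3, activation="relu", padding="valid")(skip)
    x = layers.concatenate([crop_to(skip, x), x])
    return double_conv(x, filters)

def build_improved_unet():
    inputs = layers.Input((188, 188, 1))
    # Layers 1-3: two 3x3 convolutions and one 2x2 max pooling each.
    f1 = double_conv(inputs, 64); p1 = layers.MaxPooling2D(2)(f1)
    f2 = double_conv(p1, 128);    p2 = layers.MaxPooling2D(2)(f2)
    f3 = double_conv(p2, 256);    p3 = layers.MaxPooling2D(2)(f3)
    # Layer 4: two 3x3 convolutions with 512 kernels.
    b = double_conv(p3, 512)
    # Layers 5-7: upsampling with the convolved skip connections.
    u5 = up_block(b,  f3, 256)
    u6 = up_block(u5, f2, 128)
    u7 = up_block(u6, f1, 64)
    # Final 1x1 convolution with 2 kernels (ReLU as stated in the text).
    outputs = layers.Conv2D(2, 1, activation="relu")(u7)
    return Model(inputs, outputs)

model = build_improved_unet()
model.summary()  # input 188x188 -> output 100x100 with valid padding
```

Because no padding is used, the 100×100 output corresponds to the center of the 188×188 input, so label masks would have to be cropped accordingly before computing the loss.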
Specifically, referring to FIG. 2, the improved U-Net convolutional neural network is trained as follows. First, in step A1, CT images of craniocerebral hemorrhage areas are collected as craniocerebral CT training images. In step A2, the clarity of each craniocerebral CT training image is checked so that only clear images are retained; this eliminates or reduces the interference of blurred training images with subsequent model training and thus improves both the accuracy of the generated model and the accuracy of its segmentation of craniocerebral CT images after training. In step A3, the screened images are preprocessed, because a convolutional neural network needs a large amount of image data for training while the number of collected craniocerebral hemorrhage CT images is insufficient; too small a sample set would give the network too few feature points and greatly reduce the segmentation precision of the bleeding area. Concretely, a patch-extraction operation is performed on the screened experimental data by sliding a window over the data range from left to right and from top to bottom, yielding craniocerebral CT augmented images with an increased data volume; contrast-limited adaptive histogram equalization is then applied to the augmented images and the data is normalized and mapped to [0,1], producing the craniocerebral CT standardized data. Finally, in step A4, the preprocessed standardized data is divided: a proportion of the standardized experimental data is randomly selected as training set data, with the amount of training set data greater than that of the test set data; in this embodiment the training set accounts for 80% of the standardized experimental data and the test set for 20%. The training set data is used to train the improved U-Net convolutional neural network under various training parameters, a plurality of training models is selected and retained, and the one with the best training performance is kept as the optimal training model.
The test set data is then input into the optimal training model to obtain its performance index and the segmentation result of the bleeding area of the test set data. After the test set has been evaluated, if the accuracy does not meet the requirement, the training parameters are changed and training continues until a better-performing model is obtained; if the accuracy is high enough, an exogenous data set is used for testing, that is, craniocerebral hemorrhage CT images different from both the test set data and the training set data (the craniocerebral CT images whose bleeding areas need to be automatically segmented), to further confirm the performance of the optimal training model. If the model accurately segments the bleeding areas of these craniocerebral CT images, it is applied in step S2 of this embodiment to automatically segment the bleeding area of a craniocerebral CT image.
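The text does not name the automatic segmentation performance index; the Dice similarity coefficient below is a common choice for evaluating hemorrhage segmentation and is shown purely as an illustrative assumption:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity between two binary masks (1.0 means perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```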
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments. Various changes may be made to the invention without departing from the scope of the appended claims and their equivalents.

Claims (5)

1. An automatic segmentation method for a craniocerebral CT image bleeding area is characterized by comprising the following steps:
S1: acquiring a craniocerebral CT image whose bleeding area needs to be automatically segmented;
S2: inputting the craniocerebral CT image into a pre-constructed model of an improved U-Net convolutional neural network, whose structure comprises three downsampling layers followed by three upsampling layers, so that the bleeding area is automatically segmented and a segmentation result is obtained; the downsampling layers correspond one-to-one to the upsampling layers, the feature maps obtained by the three downsamplings are each copied, convolved and cropped and then concatenated with the feature map output by the layer preceding the corresponding upsampling layer, and that upsampling layer upsamples and convolves the concatenated feature map;
the step S1 is preceded by a step S0:
S01: constructing the improved U-Net convolutional neural network;
S02: collecting craniocerebral CT training images, and training the improved U-Net convolutional neural network on the craniocerebral CT training images;
the step S02 further comprises:
A1: collecting the craniocerebral CT training images;
A2: screening the collected craniocerebral CT training images to retain those that are clear;
A3: preprocessing the screened craniocerebral CT training images to obtain craniocerebral CT standardized data;
A4: inputting the craniocerebral CT standardized data into the improved U-Net convolutional neural network for training;
in the step A3, the preprocessing of the screened craniocerebral CT training images further comprises:
A31: performing a patch-extraction operation on the screened craniocerebral CT training images to obtain craniocerebral CT augmented images with an increased data volume;
A32: applying contrast-limited adaptive histogram equalization to the craniocerebral CT augmented images and normalizing them to [0,1] to obtain the craniocerebral CT standardized data;
the step S2 further comprises:
S21: passing the craniocerebral CT image sequentially through the three downsamplings of the improved U-Net convolutional neural network, each downsampling comprising two convolutions and one max pooling;
S22: performing two convolutions on the feature map output by the last downsampling in the step S21;
S23: copying, convolving and cropping each of the feature maps obtained by the three downsamplings, concatenating it with the feature map output by the layer preceding its corresponding upsampling layer, and having that upsampling layer perform one upsampling and two convolutions on the concatenated feature map.
2. The automatic segmentation method for a bleeding area of a craniocerebral CT image according to claim 1, wherein the step A4 further comprises:
dividing the craniocerebral CT standardized data into training set data and test set data, the amount of training set data being greater than that of the test set data;
inputting the training set data into the improved U-Net convolutional neural network to obtain a plurality of training models, and selecting and retaining the training model with the best training performance as the optimal training model;
inputting the test set data into the optimal training model to perform automatic segmentation of the bleeding area, obtaining the automatic segmentation performance index of the optimal training model and the segmentation result of the bleeding area of the test set data.
3. The automatic segmentation method for a bleeding area of a craniocerebral CT image according to claim 1, wherein the convolution kernels in the step S21 are all 3×3, the pooling windows of the max-pooling operations are all 2×2, the activation functions of the downsampling layers are all ReLU, and the downsampling layers use no padding.
4. The automatic segmentation method for a bleeding area of a craniocerebral CT image according to claim 1, wherein the convolution kernels in the step S22 are all 3×3, the activation functions of the skip-connection layers are all ReLU, and the skip-connection layers use no padding.
5. The automatic segmentation method for a bleeding area of a craniocerebral CT image according to claim 1, wherein the convolution kernels of the convolution operations in the step S23 are all 3×3, the upsampling convolution kernels are all 2×2, the activation functions of the upsampling layers are all ReLU, and the upsampling layers use no padding.
CN202010559693.3A 2020-06-18 2020-06-18 Automatic segmentation method for craniocerebral CT image bleeding area Active CN111724397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010559693.3A CN111724397B (en) 2020-06-18 2020-06-18 Automatic segmentation method for craniocerebral CT image bleeding area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010559693.3A CN111724397B (en) 2020-06-18 2020-06-18 Automatic segmentation method for craniocerebral CT image bleeding area

Publications (2)

Publication Number Publication Date
CN111724397A CN111724397A (en) 2020-09-29
CN111724397B true CN111724397B (en) 2024-04-16

Family

ID=72567460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010559693.3A Active CN111724397B (en) 2020-06-18 2020-06-18 Automatic segmentation method for craniocerebral CT image bleeding area

Country Status (1)

Country Link
CN (1) CN111724397B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348796B (en) * 2020-11-06 2024-01-30 上海应用技术大学 Cerebral hemorrhage segmentation method and system based on multi-model combination
CN116547671A (en) * 2020-11-25 2023-08-04 香港大学 Dissimilar paired neural network architecture for data segmentation
CN113298830B (en) * 2021-06-22 2022-07-15 西南大学 Acute intracranial ICH region image segmentation method based on self-supervision
CN113538348B (en) * 2021-06-29 2024-03-26 沈阳东软智能医疗科技研究院有限公司 Processing method of craniocerebral magnetic resonance diffusion weighted image and related products
CN115187600B (en) * 2022-09-13 2022-12-09 杭州涿溪脑与智能研究所 Brain hemorrhage volume calculation method based on neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
CN110097545A (en) * 2019-04-29 2019-08-06 南京星程智能科技有限公司 Eye fundus image generation method based on deep learning
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network
CN111292338A (en) * 2020-01-22 2020-06-16 苏州大学 Method and system for segmenting choroidal neovascularization from fundus OCT image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
CN110097545A (en) * 2019-04-29 2019-08-06 南京星程智能科技有限公司 Eye fundus image generation method based on deep learning
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network
CN111292338A (en) * 2020-01-22 2020-06-16 苏州大学 Method and system for segmenting choroidal neovascularization from fundus OCT image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Long Faning; Zhu Xiaoshu; Gan Jingzhong. A segmentation method for brachial plexus ultrasound images based on convolutional neural networks. Journal of Hefei University of Technology (Natural Science Edition), 2018, (09), full text. *

Also Published As

Publication number Publication date
CN111724397A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN111724397B (en) Automatic segmentation method for craniocerebral CT image bleeding area
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN111696126B (en) Multi-view-angle-based multi-task liver tumor image segmentation method
CN111815766B (en) Processing method and system for reconstructing three-dimensional model of blood vessel based on 2D-DSA image
CN112508953B (en) Meningioma rapid segmentation qualitative method based on deep neural network
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN113793348B (en) Retinal blood vessel segmentation method and device
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN113393469A (en) Medical image segmentation method and device based on cyclic residual convolutional neural network
Castro et al. Convolutional neural networks for detection intracranial hemorrhage in CT images.
CN113576508A (en) Cerebral hemorrhage auxiliary diagnosis system based on neural network
CN115471470A (en) Esophageal cancer CT image segmentation method
CN112348839A (en) Image segmentation method and system based on deep learning
CN116188485A (en) Image processing method, device, computer equipment and storage medium
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN115035127A (en) Retinal vessel segmentation method based on generative confrontation network
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN113808085A (en) Training method, segmentation method and training device for segmentation model of brain CT image
CN112950611A (en) Liver blood vessel segmentation method based on CT image
CN114742802B (en) Pancreas CT image segmentation method based on 3D transform mixed convolution neural network
CN112348796B (en) Cerebral hemorrhage segmentation method and system based on multi-model combination
CN115147404A (en) Intracranial aneurysm segmentation method with dual-feature fusion MRA image
CN113139627B (en) Mediastinal lump identification method, system and device
CN113658700B (en) Gate pulse high-pressure noninvasive evaluation method and system based on machine learning
CN115049682A (en) Retina blood vessel segmentation method based on multi-scale dense network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant