CN111161273B - Medical ultrasonic image segmentation method based on deep learning - Google Patents


Info

Publication number
CN111161273B
CN111161273B (application CN201911416817.6A)
Authority
CN
China
Prior art keywords: data, module, convolution, layer, attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911416817.6A
Other languages
Chinese (zh)
Other versions
CN111161273A (en)
Inventor
罗亮
袁浩瀚
车博
陈智
方俊
巨学明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Original Assignee
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Sichuan Provincial People's Hospital
Priority to CN201911416817.6A
Publication of CN111161273A
Application granted
Publication of CN111161273B
Legal status: Active

Classifications

    • G PHYSICS · G06 COMPUTING; CALCULATING OR COUNTING · G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis · G06T 7/10 Segmentation; Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality · G06T 2207/10132 Ultrasound image
    • G06T 2207/20 Special algorithmic details · G06T 2207/20081 Training; Learning · G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination · G06T 2207/20221 Image fusion; Image merging

Abstract

The invention belongs to the technical fields of deep-learning computer vision and medical information processing, and in particular relates to a medical ultrasound image segmentation method based on deep learning. On the basis of a general image segmentation neural network model, the disclosed method integrates multi-input multi-output design, an attention mechanism, dilated (hole) convolution, small-sample medical data enhancement and other novel techniques, chiefly addressing the difficult pain points of small-sample learning, low ultrasound image contrast and blurred nodule edges, and obtains the optimal segmentation strategy disclosed by the invention.

Description

Medical ultrasonic image segmentation method based on deep learning
Technical Field
The invention belongs to the technical fields of deep-learning computer vision and medical information processing, and in particular relates to a medical ultrasound image segmentation method based on deep learning.
Background
With advances in science and technology, medical imaging has developed greatly, and ultrasound imaging in particular, thanks to its simple operation, absence of radiation damage and low cost, has important value in prevention, diagnosis and treatment. Currently, segmenting a region of interest in a medical image is the basis of image analysis and lesion identification. Clinically, ultrasound images are still widely segmented by hand: experienced clinicians manually delineate the region of interest according to their professional knowledge. However, manual segmentation is not only time-consuming and highly dependent on a physician's skill and experience; ultrasound images also have blurred edges, low contrast and other characteristics that make visual discrimination by the human eye very difficult. Therefore, how to segment ultrasound images automatically and efficiently has become a problem in urgent need of a solution.
In recent years, deep neural network models, namely convolutional neural networks (CNNs), have provided great technical support for improving the segmentation performance of biomedical images. A convolutional neural network can automatically learn low-level visual features and high-level semantic features in an image, avoiding the complex process of manually designing and extracting image features in traditional algorithms. However, a conventional CNN cannot reasonably propagate low-level features to higher layers. The semantic segmentation model U-Net realizes channel fusion of low-dimensional and high-dimensional features through skip connections and similar methods, achieving a good segmentation effect.
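The channel fusion that U-Net performs through skip connections can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function name, tensor shapes and channel-first layout are assumptions chosen for the example, not details from the disclosure:

```python
import numpy as np

def skip_fusion(encoder_feat, decoder_feat):
    """U-Net style skip connection: an encoder (low-level) feature map and
    an up-sampled decoder (high-level) feature map of equal spatial size
    are fused by concatenation along the channel axis, so fine edge detail
    and high-level semantics reach the same subsequent layer."""
    # (C, H, W) layout; spatial sizes must already match.
    assert encoder_feat.shape[1:] == decoder_feat.shape[1:]
    return np.concatenate([encoder_feat, decoder_feat], axis=0)
```

A following 3 × 3 convolution would then mix the concatenated channels back down.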
Disclosure of Invention
The invention aims to provide an ultrasound image segmentation design scheme for ultrasound medical image processing, based on a deep-learning Feature-Pyramid-Attention-Unet (FPA-Unet) network, so as to obtain better segmentation performance.
The technical scheme adopted by the invention is as follows:
a medical ultrasonic image segmentation method based on deep learning comprises the following steps:
step 1, preprocessing ultrasonic image data to be segmented to obtain training set and verification set data;
step 2, performing data enhancement on the training set and the verification set data, including:
1) Off-line enhancement to increase the amount of training data: rotation and horizontal flipping transformations are applied for 10-fold enhancement;
2) On-line enhancement to improve the generalization of the network model: rotation, scale, scaling, translation and color-contrast transformations are applied, and an on-line iterator enhances data diversity while reducing memory pressure;
step 3, constructing a multi-input multi-output dilated (hole) convolution U-shaped network, comprising:
1) A multi-input down-sampling module: the down-sampling module has 4 layers in total; the multiple inputs follow an image multi-scale method, scaling the input data into four pairs of data at the scale ratio 8:4:2:1; the down-sampling module uses convolution layers and max-pooling layers to acquire bottom-level features and obtain feature maps in sequence; the convolution kernel size of each layer is 3 × 3, with dilated (hole) convolution at rate r = 2, i.e., gaps are inserted into the conventional convolution kernel to enlarge the image receptive field; the numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively;
2) An up-sampling module: the up-sampling module has 4 layers in total and uses deconvolution for up-sampling; each layer concatenates the attention feature map with the up-sampled feature map along the channel dimension for multi-dimensional feature fusion; the up-sampling module successively enlarges the feature-map size and reduces the number of channels, finally producing a prediction map the same size as the input data; the convolution kernel size of each layer is 3 × 3, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively; the up-sampling module further comprises a channel attention mechanism module for improving the accuracy and effectiveness of the network, the channel attention mechanism module comprising:
1) Computing a gate signal map: a 1 × 1 convolution is applied to the high-dimensional features to obtain the U-shaped network gate signal;
2) Computing an attention coefficient: the low-dimensional features are up-sampled by a factor of 2 and added to the gate signal; after global average pooling, the result is fed into a 1 × 1 convolution and up-sampled to obtain the attention coefficient;
3) Computing an attention representation: the attention coefficient is multiplied by the low-dimensional features to obtain the feature result of the attention module;
3) A deeply supervised multi-output module: the label undergoes 4 size transformations, forming four pairs of data at the output scale ratio 8:4:2:1;
step 4, designing a feature pyramid attention module as the bottom-layer central module of the U-shaped network, with the following formula:
F = H(X) · X1 + X2
wherein the input X passes through a 1 × 1 convolution that reduces the channel dimension of the feature map while keeping its size unchanged, giving X1; convolution kernels of different sizes (5 × 5, 3 × 3, 2 × 2) are used to construct a multi-receptive-field pyramid structure H(X) through a down-sampling and up-sampling U-shaped network, which is multiplied with X1 to better fuse the features of neighboring regions; the input X, after global average pooling and up-sampling, serves as a branch X2, which is added channel-wise to the product to obtain the final pyramid feature attention output result F;
step 5, inputting training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and performing parameter adjustment on a verification set until an optimal model and corresponding parameters thereof are obtained to obtain a trained U-shaped network;
step 6, inputting the preprocessed ultrasonic image data to be segmented into the trained U-shaped network to obtain a segmentation result.
The invention has the beneficial effects that:
the invention provides a segmentation method for an ultrasonic medical image, which integrates multiple input multiple output technology, attention mechanism, hole convolution technology, small sample medical data enhancement and other novel technologies on the basis of a general image segmentation neural network model, and mainly solves the difficult and pain point problems of small sample learning, low ultrasonic image contrast, fuzzy nodule edge and the like to obtain the optimal segmentation strategy.
In addition, by combining the attention mechanism with the up-sampling module, the features can be extracted better aiming at different data, so that the required features can be extracted from huge data more quickly and better, and the network parameters are trained better by utilizing the features, so that the accuracy and the effectiveness of the network are improved.
Drawings
Fig. 1 is a design diagram of a medical image segmentation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the overall structure of the FPA-Unet in step 3 according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the feature pyramid attention module of step 4 according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the step 5 channel attention up-sampling module according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the accuracy and loss of the training set and the verification set according to the embodiment of the present invention, where (a) is a schematic diagram of a loss function of the training set and the verification set obtained by training using the FPA-Unet network, and (b) is a schematic diagram of the accuracy of the training set and the verification set.
Fig. 6 is a schematic diagram illustrating an original label and a segmentation image provided by an embodiment of the present invention, where the left side of fig. 6 is the label image, and the right side of fig. 6 is the segmentation result.
Detailed Description
The invention is described in detail below with reference to the following figures and simulations:
the invention provides a network segmentation method based on thyroid nodule ultrasonic images, which comprises 6 steps and mainly comprises 5 modules of data set acquisition, image preprocessing, network model construction, network training, network testing and evaluation, and is shown in figure 1.
In this embodiment, the specific steps are as follows:
1. Preprocess the ultrasound image data to be segmented to obtain training, validation and test set data.
1) Remove private information and medical imaging instrument marks, and screen out original ultrasound images not yet annotated by an imaging physician;
2) Manually annotate the labels under the guidance of an ultrasound imaging physician;
3) Enhance image quality while preserving the detailed texture characteristics of the images:
3-1) reduce noise and non-uniform patches using adaptive mean filtering;
3-2) improve the filtering effect using the two morphological operations of opening and closing;
3-3) histogram equalization;
3-4) Sobel-operator edge enhancement;
4) Divide the data into a training set, a validation set and a test set at a ratio of 6:2:2;
5) Convert the images to grayscale and apply scale normalization to unify the resolution to 256 × 256;
6) Binarize the data labels and normalize them into the [0, 1] interval.
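As an illustrative NumPy sketch only (the function names, the luminosity grayscale weights, the nearest-neighbour resizing and the threshold of 127 are assumptions of this example, not part of the disclosure), steps 5) and 6) of the preprocessing might look like:

```python
import numpy as np

def to_gray(rgb):
    """Decolorize an H x W x 3 image to grayscale (luminosity weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, size=256):
    """Scale normalisation: nearest-neighbour resize to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def binarize_label(label, thresh=127):
    """Binarize a label image and normalize it into the [0, 1] interval."""
    return (label > thresh).astype(np.float32)
```

A real pipeline would use a library resizer with interpolation; the index arithmetic above just keeps the sketch self-contained.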
2. Data enhancement of training set small sample data
Because deep-learning results are closely tied to data quality and quantity, while medical samples are hard to collect and the data volume is small, two combined enhancement modes are adopted to increase the data volume, avoid overfitting and improve segmentation accuracy, compensating for the shortage of small-sample data.
1) Off-line enhancement increases the amount of training data, mainly through 10-fold enhancement with rotation and horizontal flipping transformations.
2) On-line enhancement improves the generalization of the network model, mainly using rotation, scale, scaling, translation and color-contrast transformations, with an on-line iterator that enhances data diversity while reducing memory pressure.
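A minimal sketch of such an on-line iterator in NumPy, assuming random 90-degree rotations and horizontal flips as the transformations (the function name and the specific transforms are illustrative choices, not the patented augmentation set). The generator yields one augmented (image, label) pair at a time, so the full augmented set is never held in memory:

```python
import numpy as np

def augment_stream(images, labels, rng=None):
    """On-line augmentation iterator: applies the same random transform
    to an image and its label mask, one sample per step."""
    rng = rng or np.random.default_rng(0)
    while True:
        i = rng.integers(len(images))
        img, lab = images[i], labels[i]
        k = rng.integers(4)               # random 90-degree rotation
        img, lab = np.rot90(img, k), np.rot90(lab, k)
        if rng.random() < 0.5:            # random horizontal flip
            img, lab = img[:, ::-1], lab[:, ::-1]
        yield img, lab
```

Feeding a training loop from this generator (e.g. via `next`) keeps memory use constant regardless of the enhancement factor.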
3. Building multi-input multi-output hole convolution U-shaped network
The overall network structure is shown in fig. 2.
1) Multi-input down-sampling module
The multi-input down-sampling module is shown in the left half of the U-shaped network in Fig. 2.
1-1) The multiple inputs follow the image multi-scale idea: the input data is size-scaled into four pairs of data at the ratio 8:4:2:1.
1-2) The down-sampling module has 4 layers in total and mainly uses convolution layers and max-pooling layers to acquire bottom-level features, successively producing feature maps with more channels and smaller size. The convolution kernel size of each layer is 3 × 3, with dilated (hole) convolution at rate r = 2, i.e., gaps are inserted into the conventional kernel to enlarge the image receptive field. The numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively.
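The two ideas of this module, multi-scale inputs and dilated (hole) convolution with r = 2, can be sketched in NumPy as follows. This is a simplified single-channel illustration; the 'valid' padding, the average-pooling downscaling and the function names are assumptions of the sketch, not details from the disclosure:

```python
import numpy as np

def dilated_conv2d(x, kernel, r=2):
    """'Valid' 2D convolution with dilation rate r: gaps of r-1 pixels are
    inserted between kernel taps, enlarging the receptive field of a 3x3
    kernel from 3 to 2*r+1 pixels without adding parameters."""
    kh, kw = kernel.shape
    eff_h, eff_w = (kh - 1) * r + 1, (kw - 1) * r + 1  # effective size
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff_h:r, j:j + eff_w:r]    # sampled with gaps
            out[i, j] = np.sum(patch * kernel)
    return out

def input_pyramid(x, levels=4):
    """Multi-scale inputs: halve the spatial size per level with 2x2
    average pooling, giving four scales in the ratio 8:4:2:1."""
    pyr = [x]
    for _ in range(levels - 1):
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyr.append(x)
    return pyr
```

With r = 2 a 3 × 3 kernel covers a 5 × 5 window, which is why the dilated layer sees a larger context at the same parameter count.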
2) Upsampling module
The up-sampling module structure is shown in the right half of the U-shaped network in Fig. 2.
2-1) design of up-sampling structure
The up-sampling module has 4 layers in total and uses deconvolution for up-sampling. Each layer concatenates the attention feature map with the up-sampled feature map along the channel dimension for multi-dimensional feature fusion.
The up-sampling module successively enlarges the feature-map size and reduces the number of channels, finally producing a prediction map the same size as the input data. The convolution kernel size of each layer is 3 × 3, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively.
2-2) Design of the up-sampling channel attention module, shown in Fig. 4.
First, a 1 × 1 convolution is applied to the high-dimensional features to compute the Unet gate signal, which is then used to compute the attention coefficients. The low-dimensional features are up-sampled by a factor of 2 and added to the gate signal; after global average pooling, the result is fed into a 1 × 1 convolution and up-sampled to obtain the attention coefficient, from which the attention representation is computed. Finally, the attention coefficient is multiplied by the low-dimensional features to obtain the final feature result of the attention module, completing the design of the up-sampling channel attention module.
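A simplified single-sample NumPy sketch of such a channel attention gate. The 1 × 1 convolutions are modelled as channel-mixing matrices, and the sigmoid squashing, nearest-neighbour upsampling and weight shapes are assumptions of this illustration, not details from the disclosure:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def upsample2x(x):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) tensor."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def attention_gate(low, high, w_gate, w_att):
    """Simplified channel attention gate: a 1x1 convolution of the
    high-level features forms the gate signal; it is combined with the
    low-level features and squashed to per-pixel coefficients in (0, 1),
    which re-weight the low-level features before channel fusion.
    A 1x1 convolution on (C, H, W) is a matmul over the channel axis."""
    gate = np.einsum('oc,chw->ohw', w_gate, upsample2x(high))
    att = sigmoid(np.einsum('oc,chw->ohw', w_att, gate + low))
    return att * low
```

In the network the coefficients would be learned; here the weight matrices are plain arrays so the mechanism can be inspected directly.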
3) Deep supervision multi-output module
The label undergoes 4 size transformations, forming four pairs of data at the output scale ratio 8:4:2:1, so that each output scale is supervised by a size-matched label.
4. Constructing a feature pyramid attention module as a network bottom Center module
The Center module is shown in figure 3.
The formula for the design of this module is as follows:
F = H(X) · X1 + X2
1) A 1 × 1 convolution reduces the channel dimension of the feature map while keeping the size of the input X unchanged, giving X1.
2) Convolution kernels of different sizes (5 × 5, 3 × 3, 2 × 2) are used to construct a multi-receptive-field pyramid structure H(X) through a down-sampling and up-sampling U-shaped network, which is multiplied with X1 to better fuse the features of neighboring regions.
3) The input X, after global average pooling and up-sampling, serves as a branch X2, which is added channel-wise to the product to obtain the final pyramid feature attention output result F.
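The combination rule F = H(X) · X1 + X2 described above can be illustrated in NumPy. The pyramid map H(X) is taken as given; only the global-average-pooling branch and the final combination are sketched, with illustrative function names:

```python
import numpy as np

def global_pool_branch(x):
    """Global-average-pool branch X2: pool each channel of a (C, H, W)
    tensor to a scalar and broadcast (up-sample) it back to full size."""
    C, H, W = x.shape
    pooled = x.mean(axis=(1, 2))
    return np.broadcast_to(pooled[:, None, None], (C, H, W)).copy()

def fpa_combine(x1, h_x, x2):
    """Feature-pyramid-attention output F = H(X) * X1 + X2: the pyramid
    map H(X) re-weights the 1x1-reduced features X1 pixel-wise, and the
    global-context branch X2 is added channel-wise on top."""
    return h_x * x1 + x2
```

The multiplication injects local multi-scale attention, while the added branch restores a global context signal that pooling would otherwise discard.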
5. Input the training set data into the designed network for training to obtain the learned convolutional neural network model.
1) Record the loss and segmentation accuracy of each training run.
2) Modify the parameters and retrain the network according to the loss and accuracy on the validation set, until the best model and its corresponding parameters are selected.
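The record-evaluate-retrain loop of this step can be sketched generically in Python. `train_step`, `evaluate` and the parameter grid are placeholders for illustration, not the actual training routine of the disclosure:

```python
def select_best(train_step, evaluate, param_grid, epochs=3):
    """Model-selection sketch: for each candidate parameter setting,
    train for a few epochs, record the validation loss after every epoch,
    and keep the parameters with the lowest validation loss seen."""
    best_loss, best_params = float('inf'), None
    history = []                          # per-epoch validation losses
    for params in param_grid:
        state = None                      # fresh model per setting
        for _ in range(epochs):
            state = train_step(state, params)
            loss = evaluate(state)
            history.append(loss)
            if loss < best_loss:
                best_loss, best_params = loss, params
    return best_params, best_loss, history
```

In practice `evaluate` would run the validation set through the network and `history` would also track accuracy, mirroring sub-steps 1) and 2).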
6. Input the preprocessed ultrasound image data to be segmented into the learned convolutional neural network model to obtain the segmentation result for each pixel.
The final effect of implementing the invention is shown in Figs. 5 and 6.

Claims (2)

1. A medical ultrasonic image segmentation method based on deep learning is characterized by comprising the following steps:
step 1, preprocessing ultrasonic image data to be segmented to obtain training set and verification set data;
step 2, performing data enhancement on the training set and the verification set data, including:
1) Off-line enhancement to increase the amount of training data: rotation and horizontal flipping transformations are applied for 10-fold enhancement;
2) On-line enhancement to improve the generalization of the network model: rotation, scale, scaling, translation and color-contrast transformations are applied, and an on-line iterator enhances data diversity while reducing memory pressure;
step 3, constructing a multi-input multi-output dilated (hole) convolution U-shaped network, comprising:
1) A multi-input down-sampling module: the down-sampling module has 4 layers in total; the multiple inputs follow an image multi-scale method, scaling the input data into four pairs of data at the scale ratio 8:4:2:1; the down-sampling module uses convolution layers and max-pooling layers to acquire bottom-level features and obtain feature maps in sequence; the convolution kernel size of each layer is 3 × 3, with dilated (hole) convolution at rate r = 2, i.e., gaps are inserted into the conventional convolution kernel to enlarge the image receptive field; the numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively;
2) An up-sampling module: the up-sampling module has 4 layers in total and uses deconvolution for up-sampling; each layer concatenates the attention feature map with the up-sampled feature map along the channel dimension for multi-dimensional feature fusion; the up-sampling module successively enlarges the feature-map size and reduces the number of channels, finally producing a prediction map the same size as the input data; the convolution kernel size of each layer is 3 × 3, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively;
3) A deeply supervised multi-output module: the label undergoes 4 size transformations, forming four pairs of data at the output scale ratio 8:4:2:1;
step 4, designing a feature pyramid attention module as the bottom-layer central module of the U-shaped network, with the following formula:
F = H(X) · X1 + X2
wherein the input X passes through a 1 × 1 convolution that reduces the channel dimension of the feature map while keeping its size unchanged, giving X1; convolution kernels of different sizes (5 × 5, 3 × 3, 2 × 2) are used to construct a multi-receptive-field pyramid structure H(X) through a down-sampling and up-sampling U-shaped network, which is multiplied with X1 to better fuse the features of neighboring regions; the input X, after global average pooling and up-sampling, serves as a branch X2, which is added channel-wise to the product of H(X) and X1 to obtain the final pyramid feature attention output result F;
step 5, inputting training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and performing parameter adjustment on a verification set until an optimal model and corresponding parameters thereof are obtained to obtain a trained U-shaped network;
step 6, inputting the preprocessed ultrasonic image data to be segmented into the trained U-shaped network to obtain a segmentation result.
2. The deep-learning-based medical ultrasound image segmentation method according to claim 1, wherein the up-sampling module further comprises a channel attention mechanism module, the channel attention mechanism module comprising:
1) Computing a gate signal map: a 1 × 1 convolution is applied to the high-dimensional features to obtain the U-shaped network gate signal;
2) Computing an attention coefficient: the low-dimensional features are up-sampled by a factor of 2 and added to the gate signal; after global average pooling, the result is fed into a 1 × 1 convolution and up-sampled to obtain the attention coefficient;
3) Computing an attention representation: the attention coefficient is multiplied by the low-dimensional features to obtain the feature result of the attention module.
CN201911416817.6A · Priority/Filing date 2019-12-31 · Medical ultrasonic image segmentation method based on deep learning · Active · Granted as CN111161273B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416817.6A CN111161273B (en) 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method based on deep learning

Publications (2)

Publication Number Publication Date
CN111161273A CN111161273A (en) 2020-05-15
CN111161273B true CN111161273B (en) 2023-03-21

Family

ID=70560203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416817.6A Active CN111161273B (en) 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN111161273B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932550B (en) * 2020-07-01 2021-04-30 浙江大学 3D ventricle nuclear magnetic resonance video segmentation system based on deep learning
CN111784721B (en) * 2020-07-01 2022-12-13 华南师范大学 Ultrasonic endoscopic image intelligent segmentation and quantification method and system based on deep learning
CN111738262A (en) * 2020-08-21 2020-10-02 北京易真学思教育科技有限公司 Target detection model training method, target detection model training device, target detection model detection device, target detection equipment and storage medium
CN112085741B (en) * 2020-09-04 2024-03-26 厦门大学 Gastric cancer pathological section segmentation algorithm based on deep learning
CN112150428B (en) * 2020-09-18 2022-12-02 青岛大学 Medical image segmentation method based on deep learning
CN112561876B (en) * 2020-12-14 2024-02-23 中南大学 Image-based water quality detection method and system for ponds and reservoirs
CN112750142A (en) * 2021-01-26 2021-05-04 上海工程技术大学 Ultrasonic image segmentation system and method based on side window attention mechanism
CN112773508A (en) * 2021-02-04 2021-05-11 清华大学 Robot operation positioning method and device
CN112906780A (en) * 2021-02-08 2021-06-04 中国科学院计算技术研究所 Fruit and vegetable image classification system and method
CN113936006A (en) * 2021-10-29 2022-01-14 天津大学 Segmentation method and device for processing high-noise low-quality medical image
CN114004836B (en) * 2022-01-04 2022-04-01 中科曙光南京研究院有限公司 Self-adaptive biomedical image segmentation method based on deep learning
CN114092477A (en) * 2022-01-21 2022-02-25 浪潮云信息技术股份公司 Image tampering detection method, device and equipment
CN116402906B (en) * 2023-06-08 2023-08-11 四川省医学科学院·四川省人民医院 Signal grade coding method and system based on kidney echo

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109087327A (en) * 2018-07-13 2018-12-25 天津大学 A kind of thyroid nodule ultrasonic image division method cascading full convolutional neural networks
CN110060235A (en) * 2019-03-27 2019-07-26 天津大学 A kind of thyroid nodule ultrasonic image division method based on deep learning

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
CN102737382B (en) * 2012-06-22 2017-02-08 刘怡光 Automatic precise partition method for prostate ultrasonic image
CN103955702B (en) * 2014-04-18 2017-02-15 西安电子科技大学 SAR image terrain classification method based on depth RBF network
CN105825509A (en) * 2016-03-17 2016-08-03 电子科技大学 Cerebral vessel segmentation method based on 3D convolutional neural network
US11832969B2 (en) * 2016-12-22 2023-12-05 The Johns Hopkins University Machine learning approach to beamforming
US20190205758A1 (en) * 2016-12-30 2019-07-04 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN108573491A (en) * 2017-03-10 2018-09-25 南京大学 A kind of three-dimensional ultrasound pattern dividing method based on machine learning
CN107832807B (en) * 2017-12-07 2020-08-07 上海联影医疗科技有限公司 Image processing method and system
CN108268870B (en) * 2018-01-29 2020-10-09 重庆师范大学 Multi-scale feature fusion ultrasonic image semantic segmentation method based on counterstudy
CN108335313A (en) * 2018-02-26 2018-07-27 阿博茨德(北京)科技有限公司 Image partition method and device
CN108460764B (en) * 2018-03-31 2022-02-15 华南理工大学 Ultrasonic image intelligent segmentation method based on automatic context and data enhancement
US11080857B2 (en) * 2018-04-26 2021-08-03 NeuralSeg Ltd. Systems and methods for segmenting an image
CN109191476B (en) * 2018-09-10 2022-03-11 重庆邮电大学 Novel biomedical image automatic segmentation method based on U-net network structure
CN109711413B (en) * 2018-12-30 2023-04-07 陕西师范大学 Image semantic segmentation method based on deep learning
CN109816657A (en) * 2019-03-03 2019-05-28 哈尔滨理工大学 A kind of brain tumor medical image cutting method based on deep learning
CN110189334B (en) * 2019-05-28 2022-08-09 南京邮电大学 Medical image segmentation method of residual error type full convolution neural network based on attention mechanism
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network


Also Published As

Publication number Publication date
CN111161273A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN111145170B (en) Medical image segmentation method based on deep learning
CN109886986B (en) Dermatoscope image segmentation method based on multi-branch convolutional neural network
CN109063712B (en) Intelligent diagnosis method for multi-model liver diffuse diseases based on ultrasonic images
CN111179275B (en) Medical ultrasonic image segmentation method
CN110930416A (en) MRI image prostate segmentation method based on U-shaped network
CN111161271A (en) Ultrasonic image segmentation method
CN110889852A (en) Liver segmentation method based on residual error-attention deep neural network
CN112508953B (en) Meningioma rapid segmentation qualitative method based on deep neural network
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN115170582A (en) Liver image segmentation method based on multi-scale feature fusion and grid attention mechanism
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN109118487A (en) Bone age assessment method based on non-down sampling contourlet transform and convolutional neural networks
CN114037714A (en) 3D MR and TRUS image segmentation method for prostate system puncture
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN113744271A (en) Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN114511502A (en) Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium
CN109919216B (en) Counterlearning method for computer-aided diagnosis of prostate cancer
CN111383759A (en) Automatic pneumonia diagnosis system
CN116912253B (en) Lung cancer pathological image classification method based on multi-scale mixed neural network
CN116664590B (en) Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN113362360B (en) Ultrasonic carotid plaque segmentation method based on fluid velocity field
CN115661029A (en) Pulmonary nodule detection and identification system based on YOLOv5
CN115409857A (en) Three-dimensional hydrocephalus CT image segmentation method based on deep learning

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant