CN111179275A - A Medical Ultrasound Image Segmentation Method - Google Patents

A Medical Ultrasound Image Segmentation Method

Info

Publication number
CN111179275A
CN111179275A (application CN201911409096.6A)
Authority
CN
China
Prior art keywords
data
layer
image
network
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911409096.6A
Other languages
Chinese (zh)
Other versions
CN111179275B (en)
Inventor
车博
袁浩瀚
罗亮
陈智
方俊
熊雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Sichuan Provincial Peoples Hospital
Original Assignee
University of Electronic Science and Technology of China
Sichuan Provincial Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China, Sichuan Provincial Peoples Hospital filed Critical University of Electronic Science and Technology of China
Priority to CN201911409096.6A priority Critical patent/CN111179275B/en
Publication of CN111179275A publication Critical patent/CN111179275A/en
Application granted granted Critical
Publication of CN111179275B publication Critical patent/CN111179275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention belongs to the technical fields of deep-learning computer vision and medical information processing, and particularly relates to a medical ultrasound image segmentation method. Building on a general image-segmentation neural network model, the disclosed method integrates a multi-input multi-output design, dilated (hole) convolution, data augmentation for small medical samples, and other techniques. It mainly addresses the difficult problems of small-sample learning, low ultrasound image contrast, and blurred nodule edges, and yields the segmentation strategy disclosed by the invention.

Description

Medical ultrasonic image segmentation method
Technical Field
The invention belongs to the technical field of deep learning computer vision and medical information processing, and particularly relates to a medical ultrasonic image segmentation method.
Background
With the progress of science and technology, medical imaging has developed rapidly, and ultrasound imaging in particular is valuable for prevention, diagnosis, and treatment because it is simple to operate, free of radiation damage, and inexpensive. Segmenting regions of interest in medical images is the basis for image analysis and lesion identification. In clinical practice, ultrasound images are still widely segmented by hand: experienced clinicians manually delineate the region of interest based on their professional knowledge. However, manual segmentation is time-consuming and depends heavily on a physician's skill and experience, and ultrasound images have blurred edges, low contrast, and similar characteristics that make visual discrimination by the human eye very difficult. How to segment ultrasound images automatically and efficiently has therefore become a problem that urgently needs to be solved.
In recent years, the convolutional neural network (CNN), a deep neural network model, has provided strong technical support for improving segmentation performance on biomedical images. A CNN can automatically learn both low-level visual features and high-level semantic features from an image, avoiding the complex process of manually designing and extracting image features required by traditional algorithms. However, a conventional CNN cannot reasonably propagate low-level features to higher layers. The semantic segmentation model U-Net fuses low-dimensional and high-dimensional feature channels through skip connections and similar mechanisms, achieving a good segmentation effect.
Disclosure of Invention
The invention aims to provide an ultrasound image segmentation scheme based on a deep-learning network, Multi-scaled-Unet (MD-Unet), for ultrasound medical image processing, so as to obtain better segmentation performance.
The technical scheme adopted by the invention is as follows:
a medical ultrasound image segmentation method, comprising the steps of:
step 1, preprocessing the ultrasound image data to be segmented to obtain training set and validation set data;
step 2, performing data augmentation on the training set and validation set data, including:
1) increasing the amount of training data by offline augmentation: applying rotation and horizontal flipping to produce a 10-fold augmentation;
2) improving the generalization of the network model by online augmentation: applying rotation, scale, zoom, translation, and color-contrast transformations through an online iterator, which increases data diversity while reducing memory pressure;
step 3, constructing a multi-input multi-output dilated-convolution U-shaped network, comprising:
1) a multi-input down-sampling module: the down-sampling module has 4 layers in total; following the image multi-scale idea, the input data is rescaled into four copies at a ratio of 8:4:2:1, which are fused with the first, second, third, and fourth down-sampling layers of the network respectively; the down-sampling module uses convolution layers and max-pooling layers to extract low-level features and obtain feature maps in turn; each layer uses 3 × 3 convolution kernels with dilated (hole) convolution r = 2, i.e. gaps are inserted into the conventional kernel to enlarge the image receptive field, and the numbers of convolution kernels in the first to fourth layers are 32, 64, 128, and 256 respectively;
2) an up-sampling module: the up-sampling module has 4 layers in total and uses deconvolution for up-sampling; it successively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data; each layer uses 3 × 3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64, and 32 respectively;
3) a deeply supervised multi-output module: the label is resized 4 times into four copies at a ratio of 8:4:2:1, which are used in turn as the training labels for the output layers of the four up-sampling stages;
step 4, inputting the training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and tuning parameters on the validation set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
step 5, inputting the preprocessed ultrasound image data to be segmented into the trained U-shaped network to obtain the segmentation result for each pixel.
Beneficial effects of the invention: the invention provides a segmentation method for ultrasound medical images that, on the basis of a general image-segmentation neural network model, integrates a multi-input multi-output design, dilated (hole) convolution, data augmentation for small medical samples, and other techniques. It mainly addresses the difficult problems of small-sample learning, low ultrasound image contrast, and blurred nodule edges, and obtains the optimal segmentation strategy.
Drawings
Fig. 1 is a design diagram of a medical image segmentation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a data processing module in step 1 according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a data enhancement module in step 2 according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating an overall structure of the MD-Unet in step 3 according to an embodiment of the present invention.
Fig. 5 shows the accuracy and loss of the training and validation sets according to an embodiment of the present invention, where (a) shows the loss curves of the training and validation sets obtained by training the MD-Unet network, and (b) shows the accuracy curves of the training and validation sets.
Fig. 6 is a schematic diagram illustrating an original label and a segmentation image provided by an embodiment of the present invention, where the left side of fig. 6 is the label image, and the right side of fig. 6 is the segmentation result.
Detailed Description
The invention is described in detail below with reference to the accompanying figures and an example embodiment:
The invention provides a segmentation method for thyroid nodule ultrasound images that comprises 5 steps, corresponding to 5 modules: data set acquisition, image preprocessing, network model construction, network training, and network testing and evaluation; the flow chart is shown in fig. 1. In this embodiment, the specific steps are as follows:
1. Preprocess the ultrasound image data to be segmented to obtain training, validation, and test set data; the data processing flow is shown in fig. 2.
1) Remove private information and instrument markings, and screen out original ultrasound images that have not yet been annotated by an imaging physician;
2) Manually annotate the labels under the guidance of an ultrasound imaging physician;
3) Enhance image quality while preserving image detail and texture characteristics (a minimal preprocessing sketch follows this list):
3-1) Reduce noise and non-uniform speckle patches using adaptive mean filtering;
3-2) Improve the filtering effect using two morphological operations, opening and closing;
3-3) Apply histogram equalization;
3-4) Apply Sobel-operator edge enhancement;
4) Divide the data into a training set, a validation set, and a test set in a ratio of 6:2:2;
5) Convert the images to grayscale and unify the resolution to 256 × 256 by scale normalization;
6) Binarize the data labels and normalize them to the [0, 1] interval.
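The following is a minimal sketch of this preprocessing pipeline, assuming Python with OpenCV and NumPy (the patent does not name an implementation library). A plain median filter stands in for the adaptive mean filter of step 3-1, and the Sobel blending weight, kernel sizes, and function names are illustrative assumptions rather than the patented implementation.

```python
import cv2
import numpy as np

def preprocess_image(path, size=256):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)          # decolorize to grayscale
    img = cv2.medianBlur(img, 5)                          # noise/speckle reduction (stand-in filter)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)   # opening
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)  # closing
    img = cv2.equalizeHist(img)                           # histogram equalization
    edges = cv2.Sobel(img, cv2.CV_64F, 1, 1, ksize=3)     # Sobel edge map
    img = cv2.addWeighted(img.astype(np.float64), 1.0, np.abs(edges), 0.3, 0)  # edge enhancement
    img = cv2.resize(img, (size, size))                   # scale normalization to 256 x 256
    return (img / img.max()).astype(np.float32)

def preprocess_label(path, size=256):
    lab = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    lab = cv2.resize(lab, (size, size), interpolation=cv2.INTER_NEAREST)
    return (lab > 127).astype(np.float32)                 # binarize to {0, 1}
```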
2. Perform data augmentation on the small-sample training data; the flow is shown in fig. 3.
Because deep-learning results depend closely on both the quality and the quantity of data, while medical samples are hard to acquire and therefore scarce, two complementary augmentation modes are combined to increase the data volume, avoid overfitting, and improve segmentation precision, compensating for the shortage of small-sample data.
1) Offline augmentation increases the amount of training data: a 10-fold augmentation is performed, mainly using rotation and horizontal flipping.
2) Online augmentation improves the generalization of the network model: rotation, scale, zoom, translation, and color-contrast transformations are applied through an online iterator, which increases data diversity while reducing memory pressure (see the augmentation sketch below).
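Below is a sketch of the two-stage augmentation, assuming PyTorch/torchvision (no framework is named in the patent). Offline augmentation produces ten stored variants per sample from rotations and horizontal flips; online augmentation applies random rotation, scaling, translation, and contrast transforms inside the dataset so that the data iterator generates variants on the fly instead of keeping them in memory. The transform ranges, angle set, and class names are assumptions.

```python
import random
import torch
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms.functional as TF

def offline_augment(img, msk):
    """10-fold offline augmentation of one sample: five rotations, each also flipped."""
    out = []
    for angle in (0, 72, 144, 216, 288):                  # hypothetical angle set
        r_img, r_msk = TF.rotate(img, angle), TF.rotate(msk, angle)
        out += [(r_img, r_msk), (TF.hflip(r_img), TF.hflip(r_msk))]
    return out                                            # 5 rotations x 2 flips = 10 samples

class UltrasoundSet(Dataset):
    """Online augmentation: random transforms are applied per access, so the
    iterator yields fresh variants without storing them in memory."""
    def __init__(self, images, masks, online_aug=True):
        self.images, self.masks, self.online_aug = images, masks, online_aug

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = torch.from_numpy(self.images[i])[None]      # 1 x H x W float tensor
        msk = torch.from_numpy(self.masks[i])[None]
        if self.online_aug:
            angle = random.uniform(-15, 15)               # rotation
            img, msk = TF.rotate(img, angle), TF.rotate(msk, angle)
            if random.random() < 0.5:                     # horizontal flip
                img, msk = TF.hflip(img), TF.hflip(msk)
            scale = random.uniform(0.9, 1.1)              # scale / zoom
            dx, dy = random.randint(-10, 10), random.randint(-10, 10)  # translation
            img = TF.affine(img, 0.0, [dx, dy], scale, [0.0, 0.0])
            msk = TF.affine(msk, 0.0, [dx, dy], scale, [0.0, 0.0])
            img = TF.adjust_contrast(img, random.uniform(0.8, 1.2))    # contrast jitter
        return img, msk

# train_imgs / train_masks: the preprocessed arrays from step 1 (hypothetical names)
# loader = DataLoader(UltrasoundSet(train_imgs, train_masks), batch_size=8, shuffle=True)
```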
3. Construct a multi-input multi-output dilated-convolution U-shaped network; the overall structure of the network is shown in fig. 4.
1) Multi-input down-sampling module
The multi-input down-sampling module is shown in the left half of the U-network of fig. 4.
1-1) Following the multi-input, multi-scale image idea, the input data is first rescaled into four copies at a ratio of 8:4:2:1, which are fused with the first, second, third, and fourth down-sampling layers of the network respectively.
1-2) The down-sampling module has 4 layers in total and mainly uses convolution layers and max-pooling layers to extract low-level features, successively producing feature maps with more channels and smaller size. Each layer uses 3 × 3 convolution kernels with dilated (hole) convolution r = 2, i.e. gaps are inserted into the conventional kernel to enlarge the image receptive field. The numbers of convolution kernels in the first to fourth layers are 32, 64, 128, and 256 respectively (a sketch of one down-sampling stage follows).
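A sketch of one such down-sampling stage is given below, assuming PyTorch. It shows 3 × 3 convolutions with dilation r = 2, max pooling, and channel-wise fusion of a rescaled copy of the input image, which is one straightforward reading of the multi-input design; the exact fusion operator is not spelled out in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedDown(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # padding=2 with dilation=2 keeps the 3x3 convolutions size-preserving
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x, scaled_input=None):
        if scaled_input is not None:
            # fuse the rescaled raw image with the incoming features (channel concat)
            x = torch.cat([x, scaled_input], dim=1)
        feat = self.conv(x)               # kept as the skip connection for the decoder
        return feat, self.pool(feat)

# usage: four stages with 32, 64, 128, 256 kernels; the raw image is resized to
# 1/2 (and likewise 1/4, 1/8) and concatenated at the matching stage
x = torch.randn(1, 1, 256, 256)
half = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
down1, down2 = DilatedDown(1, 32), DilatedDown(32 + 1, 64)   # +1 channel for the fused input copy
f1, p1 = down1(x)                  # f1: 32 x 256 x 256, p1: 32 x 128 x 128
f2, p2 = down2(p1, half)           # fuses the half-size input with the stage-2 features
```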
2) Upsampling module
The structure of the up-sampling module is shown in the right half of the U-shaped network in fig. 4. The up-sampling module has 4 layers in total and uses deconvolution for up-sampling. It successively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data. Each layer uses 3 × 3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64, and 32 respectively (see the sketch below).
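A sketch of one up-sampling stage, again assuming PyTorch: a 2 × 2 transposed convolution (deconvolution) doubles the spatial size, the matching encoder feature map is concatenated U-Net style, and 3 × 3 convolutions reduce the channel count; a final 1 × 1 convolution with a sigmoid produces the per-pixel prediction map. The channel counts in the example instantiation are one possible reading of the description.

```python
import torch
import torch.nn as nn

class Up(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2x2 transposed convolution ("deconvolution") doubles H and W
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                       # enlarge the feature map, reduce channels
        x = torch.cat([x, skip], dim=1)      # U-Net style fusion with the encoder features
        return self.conv(x)

# example instantiation: 256-channel features in, 128-channel features out
up = Up(in_ch=256, out_ch=128)
head = nn.Sequential(nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid())  # per-pixel prediction map
```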
3) Deep supervision multi-output module
The label is resized 4 times into four copies at a ratio of 8:4:2:1, which are used in turn as the training labels for the output layers of the four up-sampling stages (a sketch of this deep-supervision loss follows).
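A sketch of the deep-supervision target and loss, assuming PyTorch: the ground-truth mask is resized to the 8:4:2:1 scales and a loss is computed at each decoder output, then summed. Binary cross-entropy and equal weights per scale are assumptions; the patent does not specify the loss or the weighting.

```python
import torch.nn.functional as F

def deep_supervision_loss(outputs, mask):
    """outputs: four sigmoid prediction maps, from coarsest (1/8 size) to full size.
    mask: full-size binary ground truth of shape N x 1 x H x W."""
    total = 0.0
    for k, pred in enumerate(outputs):
        scale = 1.0 / 2 ** (len(outputs) - 1 - k)        # 1/8, 1/4, 1/2, 1
        target = mask if scale == 1.0 else F.interpolate(mask, scale_factor=scale, mode="nearest")
        total = total + F.binary_cross_entropy(pred, target)
    return total
```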
4. Input the training set data into the designed network for training to obtain the learned convolutional neural network model.
1) Record the loss and segmentation accuracy of every training run.
2) Adjust the parameters and retrain the network according to the loss and accuracy on the validation set, until the best model and its corresponding parameters are selected (a training-loop sketch follows).
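A sketch of this training procedure, assuming PyTorch and reusing the deep_supervision_loss function sketched above: each epoch runs the training set, then evaluates loss and pixel accuracy on the validation set and keeps the checkpoint with the lowest validation loss. The optimizer, learning rate, epoch count, and checkpoint filename are assumptions.

```python
import torch

def train(model, train_loader, val_loader, epochs=100, lr=1e-4, device="cuda"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)    # optimizer choice is an assumption
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for img, msk in train_loader:
            img, msk = img.to(device), msk.to(device)
            outputs = model(img)                         # list of four multi-scale outputs
            loss = deep_supervision_loss(outputs, msk)   # reuses the loss sketched above
            opt.zero_grad()
            loss.backward()
            opt.step()
        # validation pass: record loss and pixel accuracy, keep the best checkpoint
        model.eval()
        val_loss, correct, total = 0.0, 0, 0
        with torch.no_grad():
            for img, msk in val_loader:
                img, msk = img.to(device), msk.to(device)
                outputs = model(img)
                val_loss += deep_supervision_loss(outputs, msk).item()
                pred = (outputs[-1] > 0.5).float()       # full-resolution prediction
                correct += (pred == msk).sum().item()
                total += msk.numel()
        print(f"epoch {epoch}: val_loss={val_loss:.4f} pixel_acc={correct / total:.4f}")
        if val_loss < best_val:                          # keep the best model and its parameters
            best_val = val_loss
            torch.save(model.state_dict(), "md_unet_best.pth")
```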
5. Inputting the preprocessed ultrasonic image data to be segmented into the learned convolutional neural network model to obtain the segmentation result of each pixel.
The results of practicing the invention are shown in figures 5 and 6. Fig. 5 shows the accuracy and loss of the training and validation sets provided in the embodiment of the present invention, where (a) shows the loss curves of the training and validation sets obtained by training the MD-Unet network, and (b) shows the accuracy curves of the training and validation sets. Fig. 6 shows an original label and a segmented image provided by the embodiment of the present invention, where the left side of fig. 6 is the label image and the right side is the segmentation result.

Claims (2)

1. A medical ultrasound image segmentation method, characterized by comprising the following steps:
Step 1: preprocess the ultrasound image data to be segmented to obtain training set and validation set data;
Step 2: perform data augmentation on the training set and validation set data, including:
1) offline augmentation to increase the amount of training data: rotation and horizontal flipping are applied to produce a 10-fold augmentation;
2) online augmentation to improve the generalization of the network model: rotation, scale, zoom, translation, and color-contrast transformations are applied through an online iterator, increasing data diversity while reducing memory pressure;
Step 3: construct a multi-input multi-output dilated-convolution U-shaped network, including:
1) a multi-input down-sampling module: the down-sampling module has 4 layers in total; following the image multi-scale idea, the input data is rescaled into four copies at a ratio of 8:4:2:1, which are fused with the first, second, third, and fourth down-sampling layers of the network respectively; the down-sampling module uses convolution layers and max-pooling layers to extract low-level features and obtain feature maps in turn; each layer uses 3 × 3 convolution kernels with dilated (hole) convolution r = 2, i.e. gaps are inserted into the conventional kernel to enlarge the image receptive field, and the numbers of convolution kernels in the first to fourth layers are 32, 64, 128, and 256 respectively;
2) an up-sampling module: the up-sampling module has 4 layers in total and uses deconvolution for up-sampling; it successively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data; each layer uses 3 × 3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64, and 32 respectively;
3) a deeply supervised multi-output module: the label is resized 4 times into four copies at a ratio of 8:4:2:1, which are used in turn as the training labels for the output layers of the four up-sampling stages;
Step 4: input the training set data into the constructed U-shaped network for training to obtain the learned convolutional neural network model, and tune the parameters on the validation set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
Step 5: input the preprocessed ultrasound image data to be segmented into the trained U-shaped network to obtain the segmentation result for each pixel.
2. The medical ultrasound image segmentation method according to claim 1, characterized in that, regarding the data augmentation and dilated-convolution U-shaped network modules, the data augmentation comprises:
1) improving data utilization through offline augmentation of the original data;
2) further enhancing the robustness of the network while reducing server memory pressure through online augmentation of the original data;
and the dilated-convolution U-shaped network module comprises:
1) scaling the image data through the multi-input module and fusing it with the down-sampling layers, thereby further improving image utilization and the network's ability to extract image features;
2) adding dilated convolution layers in the down-sampling and up-sampling stages, thereby enlarging the receptive field and alleviating the loss of image detail caused by convolution.
CN201911409096.6A 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method Active CN111179275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911409096.6A CN111179275B (en) 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911409096.6A CN111179275B (en) 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method

Publications (2)

Publication Number Publication Date
CN111179275A true CN111179275A (en) 2020-05-19
CN111179275B CN111179275B (en) 2023-04-25

Family

ID=70650617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911409096.6A Active CN111179275B (en) 2019-12-31 2019-12-31 Medical ultrasonic image segmentation method

Country Status (1)

Country Link
CN (1) CN111179275B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861929A (en) * 2020-07-24 2020-10-30 深圳开立生物医疗科技股份有限公司 Ultrasound image optimization processing method, system and device
CN111915626A (en) * 2020-08-14 2020-11-10 大连东软教育科技集团有限公司 Automatic segmentation method and device for ventricle area of heart ultrasonic image and storage medium
CN113034507A (en) * 2021-05-26 2021-06-25 四川大学 CCTA image-based coronary artery three-dimensional segmentation method
CN113610859A (en) * 2021-06-07 2021-11-05 东北大学 Automatic thyroid nodule segmentation method based on ultrasonic image
CN113920129A (en) * 2021-09-16 2022-01-11 电子科技大学长三角研究院(衢州) Medical image segmentation method and device based on multi-scale and global context information
CN116824146A (en) * 2023-07-05 2023-09-29 深圳技术大学 A small sample CT image segmentation method, system, terminal and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080081998A1 (en) * 2006-10-03 2008-04-03 General Electric Company System and method for three-dimensional and four-dimensional contrast imaging
US20110201931A1 (en) * 2010-02-16 2011-08-18 Palmeri Mark L Ultrasound Methods, Systems and Computer Program Products for Imaging Contrasting Objects Using Combined Images
CN107680678A (en) * 2017-10-18 2018-02-09 北京航空航天大学 Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108090904A (en) * 2018-01-03 2018-05-29 深圳北航新兴产业技术研究院 A kind of medical image example dividing method and device
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108734694A (en) * 2018-04-09 2018-11-02 华南农业大学 Thyroid tumors ultrasonoscopy automatic identifying method based on faster r-cnn
CN108898606A (en) * 2018-06-20 2018-11-27 中南民族大学 Automatic division method, system, equipment and the storage medium of medical image
CN109064455A (en) * 2018-07-18 2018-12-21 清华大学深圳研究生院 A kind of classification method of the breast ultrasound Image Multiscale fusion based on BI-RADS
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure
CN109671086A (en) * 2018-12-19 2019-04-23 深圳大学 A kind of fetus head full-automatic partition method based on three-D ultrasonic
CN109816657A (en) * 2019-03-03 2019-05-28 哈尔滨理工大学 A segmentation method of brain tumor medical images based on deep learning
CN110189334A (en) * 2019-05-28 2019-08-30 南京邮电大学 Medical Image Segmentation Method Based on Residual Fully Convolutional Neural Network Based on Attention Mechanism
CN110415253A (en) * 2019-05-06 2019-11-05 南京大学 A point-based interactive medical image segmentation method based on deep neural network
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 A Medical Image Segmentation Method Based on Improved Convolutional Neural Network

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080081998A1 (en) * 2006-10-03 2008-04-03 General Electric Company System and method for three-dimensional and four-dimensional contrast imaging
US20110201931A1 (en) * 2010-02-16 2011-08-18 Palmeri Mark L Ultrasound Methods, Systems and Computer Program Products for Imaging Contrasting Objects Using Combined Images
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN107680678A (en) * 2017-10-18 2018-02-09 北京航空航天大学 Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
CN108090904A (en) * 2018-01-03 2018-05-29 深圳北航新兴产业技术研究院 A kind of medical image example dividing method and device
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108734694A (en) * 2018-04-09 2018-11-02 华南农业大学 Thyroid tumors ultrasonoscopy automatic identifying method based on faster r-cnn
CN108898606A (en) * 2018-06-20 2018-11-27 中南民族大学 Automatic division method, system, equipment and the storage medium of medical image
CN109064455A (en) * 2018-07-18 2018-12-21 清华大学深圳研究生院 A kind of classification method of the breast ultrasound Image Multiscale fusion based on BI-RADS
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure
CN109671086A (en) * 2018-12-19 2019-04-23 深圳大学 A kind of fetus head full-automatic partition method based on three-D ultrasonic
CN109816657A (en) * 2019-03-03 2019-05-28 哈尔滨理工大学 A segmentation method of brain tumor medical images based on deep learning
CN110415253A (en) * 2019-05-06 2019-11-05 南京大学 A point-based interactive medical image segmentation method based on deep neural network
CN110189334A (en) * 2019-05-28 2019-08-30 南京邮电大学 Medical Image Segmentation Method Based on Residual Fully Convolutional Neural Network Based on Attention Mechanism
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 A Medical Image Segmentation Method Based on Improved Convolutional Neural Network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
梁舒 (Liang Shu): "Research on tumor segmentation of breast ultrasound images based on a residual-learning U-shaped convolutional neural network", South China University of Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861929A (en) * 2020-07-24 2020-10-30 深圳开立生物医疗科技股份有限公司 Ultrasound image optimization processing method, system and device
CN111915626A (en) * 2020-08-14 2020-11-10 大连东软教育科技集团有限公司 Automatic segmentation method and device for ventricle area of heart ultrasonic image and storage medium
CN111915626B (en) * 2020-08-14 2024-02-02 东软教育科技集团有限公司 Automatic segmentation method, device and storage medium for heart ultrasonic image ventricular region
CN113034507A (en) * 2021-05-26 2021-06-25 四川大学 CCTA image-based coronary artery three-dimensional segmentation method
CN113610859A (en) * 2021-06-07 2021-11-05 东北大学 Automatic thyroid nodule segmentation method based on ultrasonic image
CN113610859B (en) * 2021-06-07 2023-10-31 东北大学 Automatic thyroid nodule segmentation method based on ultrasonic image
CN113920129A (en) * 2021-09-16 2022-01-11 电子科技大学长三角研究院(衢州) Medical image segmentation method and device based on multi-scale and global context information
CN116824146A (en) * 2023-07-05 2023-09-29 深圳技术大学 A small sample CT image segmentation method, system, terminal and storage medium
CN116824146B (en) * 2023-07-05 2024-06-07 深圳技术大学 Small sample CT image segmentation method, system, terminal and storage medium

Also Published As

Publication number Publication date
CN111179275B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN111145170B (en) Medical image segmentation method based on deep learning
CN111179275A (en) A Medical Ultrasound Image Segmentation Method
CN109886986B (en) Dermatoscope image segmentation method based on multi-branch convolutional neural network
CN110930416B (en) A U-shaped network-based MRI image prostate segmentation method
CN110276402B (en) Salt body identification method based on deep learning semantic boundary enhancement
CN110555835B (en) A brain slice image region division method and device
CN109063712A (en) A kind of multi-model Hepatic diffused lesion intelligent diagnosing method and system based on ultrasound image
CN109993735A (en) Image segmentation method based on concatenated convolution
WO2020211530A1 (en) Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium
Cao et al. Gastric cancer diagnosis with mask R-CNN
Hervella et al. Learning the retinal anatomy from scarce annotated data using self-supervised multimodal reconstruction
CN105654141A (en) Isomap and SVM algorithm-based overlooked herded pig individual recognition method
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN111445474A (en) Kidney CT image segmentation method based on bidirectional complex attention deep network
CN111951288A (en) A deep learning-based segmentation method for skin cancer lesions
CN108564570A (en) A kind of method and apparatus of intelligentized pathological tissues positioning
CN114565620A (en) A method for blood vessel segmentation in fundus images based on skeleton prior and contrast loss
Chen Medical image segmentation based on u-net
Kareem et al. Skin lesions classification using deep learning techniques
CN113947805A (en) A classification method of nystagmus type based on video images
CN117876690A (en) A method and system for multi-tissue segmentation of ultrasound images based on heterogeneous UNet
CN109919216B (en) An adversarial learning method for computer-aided diagnosis of prostate cancer
CN118154627B (en) Heart super image domain adaptive segmentation method based on eye movement and attention drive
CN113362360B (en) Ultrasonic carotid plaque segmentation method based on fluid velocity field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant