CN111161235A - Breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding


Info

Publication number
CN111161235A
Authority
CN
China
Prior art keywords: image, segmentation, network, anatomical, semantic
Prior art date
Legal status
Granted
Application number
CN201911364557.2A
Other languages
Chinese (zh)
Other versions
CN111161235B (en)
Inventor
Luo Yaozhong (罗耀忠)
Huang Qinghua (黄庆华)
Jin Lianwen (金连文)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN201911364557.2A
Publication of CN111161235A
Application granted
Publication of CN111161235B
Legal status: Active

Classifications

    • G06T7/0012: Image analysis; inspection of images; biomedical image inspection
    • G06T7/10: Image analysis; segmentation, edge detection
    • G06T2207/10132: Image acquisition modality; ultrasound image
    • G06T2207/20081: Special algorithmic details; training, learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T2207/30068: Subject of image; mammography, breast
    • Y02T10/40: Climate change mitigation technologies related to transportation; engine management systems

Abstract

The invention discloses a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding. The method comprises the following steps: preprocessing a breast ultrasound image to obtain image intensity information; extracting the basic visual features of the image with a VGGNet-based feature-extraction base network; constructing an initial segmentation network that performs semantic segmentation of the different anatomical-structure regions in the image to obtain a semantic segmentation result; constructing an RNN, taking the semantic segmentation result and the image intensity information as the RNN's input, and establishing spatial correlation over the image by building a graph on its pixels; connecting the initial segmentation network with the RNN to construct an end-to-end segmentation network; once constructed, training the segmentation network as a whole to obtain a trained segmentation network; and segmenting breast ultrasound images with the trained network. Through transfer learning, the invention addresses the small data volume, difficult labeling, and high labeling threshold of medical ultrasound images.

Description

Breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding
Technical Field
The invention relates to breast ultrasound image understanding and segmentation, in particular to a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding.
Background
Ultrasound imaging is one of the four most important medical imaging modalities at present. It is radiation-free, low-cost, and safe, and it is one of the most important imaging means for diagnosing breast disease (J. A. Noble and D. Boukerroui. Ultrasound Image Segmentation: A Survey. IEEE Transactions on Medical Imaging, 25(8): 987-1010, 2006). Segmentation of breast ultrasound images is a major research hotspot; existing approaches include clustering-based, graph-theoretic, threshold-based, watershed-based, level-set-based, and deep-learning-based methods (Huang QH, Luo YZ, and Zhang QZ. Breast Ultrasound Image Segmentation: A Survey. International Journal of Computer Assisted Radiology and Surgery, 12(3): 493-507, 2017). However, past research has been limited to segmenting tumors, ignoring the information and content of the other human tissue structures, even though a breast ultrasound image contains rich anatomical semantic information. The technical difficulty addressed by this application is that the different anatomical semantic regions of an ultrasound image do not differ greatly in low-level features such as texture and intensity; the key is the relationship between the regions, i.e., the semantic association of the various anatomical regions (Luo YZ, Liu LZ, Huang QH, and Li XL. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images. BioMed Research International, Article ID 9157341, 2017). The method mines the semantic information of each tissue and region, constructs the semantic associations between the regions, and thereby obtains and optimizes the segmentation of each anatomical tissue region.
Disclosure of Invention
The invention aims to overcome the shortcomings of whole-image breast ultrasound understanding based only on texture and intensity information, and provides a method that constructs the semantic associations between regions to perform fine-grained anatomical semantic understanding and segmentation of the image.
The purpose of the invention is realized by at least one of the following technical solutions.
A breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding comprises the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information;
S2, adopting a VGGNet-based feature-extraction base network, transferring its network parameters pre-trained on ImageNet, and extracting the basic visual features of the image;
S3, adopting a U-net encoder-decoder structure, taking the feature-extraction base network of step S2 as the encoder, constructing a decoder symmetric to the encoder, adding skip connections between encoder and decoder, and thereby constructing an initial segmentation network that semantically segments the different anatomical-structure regions in the image to obtain a semantic segmentation result;
S4, constructing an RNN, taking the preceding semantic segmentation result and the image intensity information as the RNN's input, and establishing spatial correlation over the image by building a graph on its pixels;
S5, connecting the initial segmentation network with the RNN to construct an end-to-end segmentation network and, once constructed, training it as a whole to obtain a trained segmentation network;
S6, segmenting the breast ultrasound image with the segmentation network trained in step S5.
Further, in step S1, the normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1]; these gray values constitute the image intensity information.
Further, since the purpose of the invention is image understanding of the whole ultrasound image, visual features must be extracted to represent the image. In step S2, given the relatively small amount of ultrasound image data and VGGNet's strength at extracting features from small-sample data, VGG19 (published in 2015) is adopted as the U-net encoder in the model, and its network parameters pre-trained on ImageNet are transferred; the transferred parameters are those of the VGG19 feature layers. Because parameters trained on the ImageNet dataset are not entirely suitable for ultrasound images, the network parameters are fine-tuned during the final training after the transfer, the fine-tuning being back-propagation training with a small learning rate.
Further, in step S3, the segmentation adopts a U-net encoder-decoder structure; to preserve the details of the image segmented into anatomical semantic regions, skip connections between encoding and decoding layers are introduced, each skip connection copying the encoding layer at a given level of the encoder-decoder structure and superimposing it onto the decoding layer at the corresponding level.
Further, in step S4, to construct the semantic associations between the anatomical regions of the image, a recurrent neural network (RNN) is introduced. A neighborhood graph is built first, connecting the pixels of the image according to their neighborhoods; the image intensity information and the initial segmentation result are then used as the RNN's input. Each pixel's hidden-layer information is used for that pixel's category discrimination and is simultaneously passed in parallel, along the paths of the constructed graph, to the hidden layers of the other pixels' discrimination structures, optimizing the segmentation result of the whole image.
Further, in step S5, the initial segmentation network and the RNN are connected in series, i.e., the output of the initial segmentation network serves as the input of the RNN, yielding an end-to-end segmentation network for fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; the segmentation network is trained as a whole by back-propagation.
Compared with the prior art, the invention has the following advantages and effects:
1) The invention can understand and segment each anatomical tissue structure in the image.
2) Through transfer learning, the invention addresses the small data volume, difficult labeling, and high labeling threshold of medical ultrasound images.
3) The invention avoids the shortcoming of understanding and segmenting only from texture and intensity features; by iteratively training an RNN it constructs the spatial relationships and semantic associations of the tissue regions, yielding better image understanding and segmentation.
4) The segmentation network constructed in the invention is end-to-end, with no intermediate steps between training and testing; the network is simple and clear, and the method can also be used to segment objects in naturally captured images.
Drawings
Fig. 1 is an overall schematic diagram of a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to the present invention.
Fig. 2 is a schematic diagram of an embodiment of the present invention.
Fig. 3 is a schematic diagram of the pixel-graph construction method in an embodiment of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example:
a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding, as shown in fig. 1, includes the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information;
The normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1]; these gray values constitute the image intensity information.
In this embodiment, in the preprocessing step, the ultrasound images are unified to a size of 516 × 516. During intensity normalization, the mean gray value of the image is computed first, and the difference between each pixel's gray value and this mean is then taken, so as to avoid the influence of overall image brightness on understanding.
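A minimal sketch of this preprocessing step follows. The patent specifies skimage's resize and the mapping into [-1, 1] but not the exact scaling, so the division by the maximum absolute deviation below is one plausible choice, and the function name preprocess is illustrative:

```python
import numpy as np
from skimage import io
from skimage.transform import resize

def preprocess(path, size=(516, 516)):
    """Step S1: size normalization and intensity normalization (sketch)."""
    img = io.imread(path, as_gray=True).astype(np.float64)
    img = resize(img, size, anti_aliasing=True)   # unify image size to 516 x 516
    img = img - img.mean()                        # subtract mean gray value (brightness invariance)
    img = img / (np.abs(img).max() + 1e-8)        # assumed scaling into [-1, 1]
    return img                                    # the image intensity information
```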
S2, adopting a VGGNet-based feature-extraction base network, transferring its network parameters pre-trained on ImageNet, and extracting the basic visual features of the image;
Since the purpose of the invention is image understanding of the whole ultrasound image, visual features must be extracted to represent the image. Given the relatively small amount of ultrasound image data and VGGNet's strength at extracting features from small-sample data, VGG19 (published in 2015) is adopted as the U-net encoder in the model, and its network parameters pre-trained on ImageNet are transferred; the transferred parameters are those of the VGG19 feature layers. Because parameters trained on the ImageNet dataset are not entirely suitable for ultrasound images, the network parameters are fine-tuned during the final training after the transfer, the fine-tuning being back-propagation training with a small learning rate.
In this embodiment, the fine-tuning learning rate is set to 0.0001.
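The patent does not name a deep-learning framework, so the following PyTorch sketch only illustrates how the transfer and fine-tuning could look: torchvision's ImageNet-pretrained VGG19 feature layers serve as the encoder, and the optimizer uses the learning rate of 0.0001 stated above (the choice of SGD is an assumption):

```python
import torch
import torchvision.models as models

# Transfer the ImageNet-pretrained VGG19 feature layers (step S2);
# these become the U-net encoder of the initial segmentation network.
vgg19 = models.vgg19(weights="IMAGENET1K_V1")  # older torchvision: models.vgg19(pretrained=True)
encoder = vgg19.features

# Fine-tune the transferred parameters by back-propagation with the
# small learning rate given in this embodiment (0.0001).
optimizer = torch.optim.SGD(encoder.parameters(), lr=1e-4, momentum=0.9)
```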
S3, adopting a U-net encoder-decoder structure, taking the feature-extraction base network of step S2 as the encoder, constructing a decoder symmetric to the encoder, adding skip connections between encoder and decoder, and thereby constructing an initial segmentation network that semantically segments the different anatomical-structure regions in the image to obtain a semantic segmentation result;
In the segmentation process a U-net encoder-decoder structure is adopted, and skip connections between encoding and decoding layers are introduced to preserve the details of the image segmented into anatomical semantic regions. In the design of the skip-connection layers, a skip connection between encoder and decoder is added after each small module, i.e., after every few convolution-pooling layers; each skip connection copies the encoding layer at a given level of the encoder-decoder structure and superimposes it onto the decoding layer at the corresponding level.
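The sketch below illustrates one decoder stage with such a copy-and-superimpose (concatenate) skip connection; the channel arguments are placeholders, not values taken from the patent:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder stage of the U-net in step S3 (illustrative sketch)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                   # upsample the decoder features
        x = torch.cat([x, skip], dim=1)  # copy the matching encoder layer and superimpose it
        return self.conv(x)
```

A full decoder would stack one such block per encoder level, mirroring the pooling stages of the VGG19 encoder.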
S4, constructing an RNN, taking the preceding semantic segmentation result and the image intensity information as the RNN's input, and establishing spatial correlation over the image by building a graph on its pixels;
In order to construct the semantic associations between the anatomical regions of the image, a recurrent neural network (RNN) is introduced. A neighborhood graph is built first, connecting the pixels of the image according to their neighborhoods; the image intensity information and the initial segmentation result are then used as the RNN's input. Each pixel's hidden-layer information is used for that pixel's category discrimination and is simultaneously passed in parallel, along the paths of the constructed graph, to the hidden layers of the other pixels' discrimination structures, optimizing the segmentation result of the whole image.
For the image-pixel neighborhood graph, the construction shown in Fig. 3 is chosen: the hidden layer of each pixel is propagated, along the arrow directions in Fig. 3, to the pixel to its right, the pixel below it, and the pixel to its lower right. A sketch of this propagation follows.
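The sketch below renders this directed neighborhood graph as a two-dimensional RNN pass. The patent gives no recurrence equations, so the tanh update, the linear layers, and the class name GridRNN are assumptions, and the per-pixel Python loop favors clarity over speed:

```python
import torch
import torch.nn as nn

class GridRNN(nn.Module):
    """Step S4 sketch: a per-pixel RNN over the graph of Fig. 3, where each
    pixel's hidden state flows to its right, lower, and lower-right neighbours."""
    def __init__(self, in_ch, hidden_ch, n_classes):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.inp = nn.Linear(in_ch, hidden_ch)
        self.rec = nn.Linear(3 * hidden_ch, hidden_ch)  # three incoming neighbours
        self.out = nn.Linear(hidden_ch, n_classes)

    def forward(self, feats):  # feats: (H, W, in_ch) = intensity + initial segmentation
        H, W, _ = feats.shape
        h = feats.new_zeros(H + 1, W + 1, self.hidden_ch)  # zero-padded hidden grid
        for i in range(H):
            for j in range(W):
                # receive the hidden states sent by the left, upper, and
                # upper-left pixels (the arrows of Fig. 3, seen from the receiver)
                incoming = torch.cat([h[i + 1, j], h[i, j + 1], h[i, j]])
                h[i + 1, j + 1] = torch.tanh(self.inp(feats[i, j]) + self.rec(incoming))
        return self.out(h[1:, 1:])  # per-pixel category scores
```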
S5, connecting the initial segmentation network with the RNN to construct an end-to-end segmentation network and, once constructed, training it as a whole to obtain a trained segmentation network;
The initial segmentation network and the RNN are connected in series, i.e., the output of the initial segmentation network serves as the input of the RNN, yielding an end-to-end segmentation network whose overall structure is shown in Fig. 2 and which is used for fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; the segmentation network is trained as a whole by back-propagation.
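A sketch of the series connection and the unified back-propagation training is given below. Here UNetVGG19 stands for the initial segmentation network of steps S2-S3 and GridRNN for the module sketched under step S4; the names, the channel bookkeeping, and the cross-entropy loss are assumptions, since the patent does not specify a loss function:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EndToEndSegNet(nn.Module):
    """Step S5 sketch: initial segmentation network and RNN connected in series."""
    def __init__(self, unet, rnn):
        super().__init__()
        self.unet = unet   # e.g. a hypothetical UNetVGG19 instance
        self.rnn = rnn     # e.g. the GridRNN sketched for step S4

    def forward(self, image):                    # image: (1, 1, H, W)
        coarse = self.unet(image)                # initial semantic segmentation maps
        x = torch.cat([coarse, image], dim=1)    # append the image intensity information
        return self.rnn(x[0].permute(1, 2, 0))   # (H, W, n_classes) refined scores

def train_step(model, optimizer, image, target):
    """One unified back-propagation update through both stages."""
    optimizer.zero_grad()
    logits = model(image)
    loss = F.cross_entropy(logits.reshape(-1, logits.shape[-1]), target.reshape(-1))
    loss.backward()                              # gradients flow end to end
    optimizer.step()
    return loss.item()
```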
S6, segmenting the breast ultrasound image with the segmentation network trained in step S5.
As described above, the present invention can be satisfactorily implemented.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited thereto; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the scope of the present invention.

Claims (6)

1. A breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding is characterized by comprising the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information;
S2, adopting a VGGNet-based feature-extraction base network, transferring its network parameters pre-trained on ImageNet, and extracting the basic visual features of the image;
S3, adopting a U-net encoder-decoder structure, taking the feature-extraction base network of step S2 as the encoder, constructing a decoder symmetric to the encoder, adding skip connections between encoder and decoder, and thereby constructing an initial segmentation network that semantically segments the different anatomical-structure regions in the image to obtain a semantic segmentation result;
S4, constructing an RNN, taking the preceding semantic segmentation result and the image intensity information as the RNN's input, and establishing spatial correlation over the image by building a graph on its pixels;
S5, connecting the initial segmentation network with the RNN to construct an end-to-end segmentation network and, once constructed, training it as a whole to obtain a trained segmentation network;
S6, segmenting the breast ultrasound image with the segmentation network trained in step S5.
2. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S1 the normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1], these gray values being the image intensity information.
3. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S2, given the relatively small amount of ultrasound image data and VGGNet's strength at extracting features from small-sample data, VGG19 is adopted as the U-net encoder in the model and its network parameters pre-trained on ImageNet are transferred, the transferred parameters being those of the VGG19 feature layers.
4. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S3 a U-net encoder-decoder structure is adopted during segmentation and, to preserve the details of the image segmented into anatomical semantic regions, skip connections between encoding and decoding layers are introduced, each skip connection copying the encoding layer at a given level of the encoder-decoder structure and superimposing it onto the decoding layer at the corresponding level.
5. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S4, to construct the semantic associations between the anatomical regions of the image, a recurrent neural network (RNN) is introduced; a neighborhood graph is built first, connecting the pixels of the image according to their neighborhoods; the image intensity information and the initial segmentation result are then used as the RNN's input; and each pixel's hidden-layer information is used for that pixel's category discrimination while also being passed in parallel, along the paths of the constructed graph, to the hidden layers of the other pixels' discrimination structures, optimizing the segmentation result of the whole image.
6. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S5 the initial segmentation network and the RNN are connected in series, i.e., the output of the initial segmentation network serves as the input of the RNN, yielding an end-to-end segmentation network for fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; and the segmentation network is trained as a whole by back-propagation.
CN201911364557.2A (filed 2019-12-26, priority 2019-12-26): Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding. Status: Active. Granted as CN111161235B.

Priority Applications (1)

Application Number: CN201911364557.2A; Priority Date: 2019-12-26; Filing Date: 2019-12-26; Title: Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Publications (2)

Publication Number and Publication Date
CN111161235A: published 2020-05-15
CN111161235B (granted): published 2023-05-23

Family

ID=70558057

Family Applications (1)

Application Number: CN201911364557.2A (Active; granted as CN111161235B); Priority Date and Filing Date: 2019-12-26; Title: Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Country Status (1)

CN: CN111161235B

Patent Citations (2)

* Cited by examiner, † Cited by third party
CN110276402A * (Beijing University of Technology; priority date 2019-06-25, published 2019-09-24): A salt body recognition method based on deep learning semantic boundary enhancement
CN110458844A * (Dalian University of Technology; priority date 2019-07-22, published 2019-11-15): A semantic segmentation method for low-illumination scenes

Also Published As

Publication number Publication date
CN111161235B: 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant