CN111161235B - Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding - Google Patents

Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Info

Publication number
CN111161235B
Authority
CN
China
Prior art keywords
image
network
segmentation
anatomical
semantic
Prior art date
Legal status
Active
Application number
CN201911364557.2A
Other languages
Chinese (zh)
Other versions
CN111161235A (en)
Inventor
罗耀忠 (Luo Yaozhong)
黄庆华 (Huang Qinghua)
金连文 (Jin Lianwen)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201911364557.2A
Publication of CN111161235A
Application granted
Publication of CN111161235B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30068: Mammography; Breast
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding. The method comprises the following steps: preprocessing the breast ultrasound image to obtain image intensity information; extracting the basic visual features of the image with a VGGNet-based feature extraction base network; constructing an initial segmentation network that semantically segments the different anatomical structure regions of the image to obtain a semantic segmentation result; constructing an RNN and feeding it the preceding semantic segmentation result together with the image intensity information, so that spatial associations are established across the image by composing a graph over its pixels; connecting the initial segmentation network and the RNN to build an end-to-end segmentation network, which is then trained as a whole to obtain a trained segmentation network; and segmenting breast ultrasound images with the trained segmentation network. Through transfer learning, the method copes with the small data volume of medical ultrasound images and with the difficulty and high expertise threshold of annotating them.

Description

Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding
Technical Field
The invention relates to breast ultrasound image understanding and segmentation, in particular to a breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding.
Background
Ultrasound imaging is one of the four major medical imaging modalities. It is radiation-free, low-cost and safe, and it is one of the most important imaging means for diagnosing breast disease (J. A. Noble, D. Boukerroui. Ultrasound image segmentation: A survey. IEEE Transactions on Medical Imaging, 25(8):987-1010, 2006). The segmentation of breast ultrasound images is a major research topic; existing approaches include clustering-based, graph-theory-based, threshold-based, watershed-based, level-set-based and deep-learning-based segmentation methods (Huang QH, Luo YZ, and Zhang QZ. Breast Ultrasound Image Segmentation: A Survey. International Journal of Computer Assisted Radiology and Surgery, 12(3):493-507, 2017). However, previous research is limited to segmenting the tumor and ignores the information and content of the other human tissue structures, even though a breast ultrasound image contains rich anatomical semantic information. The technical problem addressed by this application is that the different anatomical semantic regions of an ultrasound image do not differ significantly in low-level features such as texture and intensity; the key is the relationship between the regions, i.e. the semantic association of the individual anatomical regions (Luo YZ, Liu LZ, Huang QH, and Li XL. A Novel Segmentation Approach Combining Region- and Edge-based Information for Ultrasound Images. BioMed Research International, Article ID 9157341, 2017). The method mines the semantic information of each tissue and region, constructs the semantic associations between regions, and thereby obtains and optimizes the segmentation of each anatomical tissue region.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of performing full-image breast ultrasound understanding based only on texture and intensity information, and provides a method that performs fine-grained anatomical semantic understanding and segmentation of the image by constructing semantic associations between regions.
The object of the invention is achieved by at least one of the following technical solutions.
A breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding comprises the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information;
S2, adopting a VGGNet-based feature extraction base network and migrating its network parameters pretrained on ImageNet to extract the basic visual features of the image;
S3, adopting a U-net encoder-decoder structure, using the feature extraction base network of step S2 as the encoder, constructing the decoder symmetrically to the encoder, adding skip connections between encoder and decoder, and thus building an initial segmentation network that semantically segments the different anatomical structure regions of the image to obtain a semantic segmentation result;
S4, constructing an RNN, then taking the preceding semantic segmentation result and the image intensity information as the input of the RNN, and establishing spatial associations across the image by composing a graph over its pixels;
S5, connecting the initial segmentation network and the RNN to build an end-to-end segmentation network; once the network is built, it is trained as a whole to obtain a trained segmentation network;
S6, segmenting breast ultrasound images with the segmentation network trained in step S5.
Further, in step S1, the normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1]; these gray values are the image intensity information.
Further, since the purpose of the invention is full-image understanding of the ultrasound image, visual features must be extracted to represent the image. In step S2, given the small data volume of ultrasound images and the proven ability of VGGNet to extract features from small-sample data, VGG19 (from the paper Very Deep Convolutional Networks for Large-Scale Image Recognition, published 2015) is adopted as the U-net encoder of the model, and the network parameters from its ImageNet pretraining are migrated; the migrated parameters are those of the VGG19 feature layers. Because parameters trained on the ImageNet dataset are not entirely suited to ultrasound images, the migrated parameters are fine-tuned during the final training, the fine-tuning being back-propagation training with a small learning rate.
Further, in step S3, a U-net encoder-decoder structure is adopted for segmentation. To preserve the image details needed to segment the anatomical semantic regions, skip connections between the encoding and decoding layers are introduced: the encoding layer of a given level in the encoder-decoder structure is copied and superimposed onto the decoding layer of the corresponding level.
Further, in step S4, to construct the semantic associations of the anatomical regions in the image, a recurrent neural network RNN (see the paper Recurrent Neural Network Regularization, 2015) is introduced. A neighborhood graph is first composed by connecting each pixel of the image to its neighbors; the image intensity information and the initial segmentation result are then taken as the input of the RNN. The hidden-layer information is used to discriminate the category of each pixel and is passed in parallel, along the paths of the constructed graph, to the hidden layers of the discrimination structures of the other pixels, optimizing the segmentation result of the whole image.
Further, in step S5, the initial segmentation network and the RNN are connected in series, that is, the output of the initial segmentation network is used as the input of the RNN, yielding an end-to-end segmentation network for fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; this segmentation network is trained as a whole by back propagation.
Compared with the prior art, the invention has the following advantages and effects:
1) The invention can understand and segment each anatomical structure in the image.
2) Through transfer learning, the method copes with the small data volume of medical ultrasound images and with the difficulty and high expertise threshold of annotating them.
3) The invention avoids the drawback of understanding and segmenting based only on texture and intensity features: it constructs the spatial relationships and semantic associations of the tissue regions through iterative training of the RNN, giving better image understanding and segmentation results.
4) The segmentation network is end-to-end, with no intermediate steps in training or testing; the network is simple and clear, and it can also be used to segment objects in natural photographs.
Drawings
Fig. 1 is an overall schematic diagram of a breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding of the present invention.
Fig. 2 is a schematic diagram of an implementation in an embodiment of the invention.
Fig. 3 is a schematic diagram of the graph composition method in an embodiment of the present invention.
Detailed Description
Specific embodiments of the present invention will be described in further detail below with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Examples:
A breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding, as shown in Fig. 1, comprises the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information.
The normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1]; these gray values are the image intensity information.
In this embodiment, during preprocessing the ultrasound images are unified to 516 × 516. During intensity normalization, the mean gray value of the image is first computed and then subtracted from the gray value of each pixel, so as to avoid the influence of overall image brightness on understanding.
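As an illustration, a minimal preprocessing sketch following this embodiment is given below; it assumes a single-channel grayscale input, and the exact combination of mean subtraction with scaling into [-1, 1] is one plausible reading of the description, not a prescription of the patent:

```python
import numpy as np
from skimage import io, transform

def preprocess(path, size=(516, 516)):
    """Resize a breast ultrasound image and normalize its gray values."""
    image = io.imread(path, as_gray=True).astype(np.float32)
    # Size normalization: unify all images to 516 x 516 (this embodiment).
    image = transform.resize(image, size, anti_aliasing=True)
    # Subtract the mean gray value to remove the influence of overall brightness.
    image = image - image.mean()
    # Map the centered gray values into [-1, 1]; these serve as the image
    # intensity information fed to the segmentation network.
    image = image / (np.abs(image).max() + 1e-8)
    return image
```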
S2, adopting a VGGNet-based feature extraction base network and migrating its network parameters pretrained on ImageNet to extract the basic visual features of the image.
Since the invention aims at full-image understanding of the ultrasound image, visual features must be extracted to represent the image. In step S2, given the small data volume of ultrasound images and the proven ability of VGGNet to extract features from small-sample data, VGG19 (from the paper Very Deep Convolutional Networks for Large-Scale Image Recognition, published 2015) is adopted as the U-net encoder of the model, and the network parameters from its ImageNet pretraining are migrated; the migrated parameters are those of the VGG19 feature layers. Because parameters trained on the ImageNet dataset are not entirely suited to ultrasound images, the migrated parameters are fine-tuned during the final training, the fine-tuning being back-propagation training with a small learning rate.
In this embodiment, the learning rate of fine tuning is set to 0.0001.
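A sketch of this transfer-learning setup in PyTorch, assuming a recent torchvision (older versions use pretrained=True instead of the weights argument); the choice of Adam as optimizer is illustrative:

```python
import torch
import torchvision.models as models

# Migrate the ImageNet-pretrained parameters; only the convolutional
# feature layers of VGG19 are transferred, as described above.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
encoder = vgg19.features  # VGG19 feature layers, used as the U-net encoder

# Fine-tune the migrated parameters by back propagation with the small
# learning rate of this embodiment (0.0001).
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
```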
S3, adopting a U-net encoder-decoder structure, using the feature extraction base network of step S2 as the encoder, constructing the decoder symmetrically to the encoder, adding skip connections between encoder and decoder, and thus building an initial segmentation network that semantically segments the different anatomical structure regions of the image to obtain a semantic segmentation result.
In the segmentation process, a U-net encoder-decoder structure is adopted. To preserve the image details needed to segment the anatomical semantic regions, skip connections between the encoding and decoding layers are introduced: after each small module, i.e. after every group of convolution-pooling layers, a skip connection between encoder and decoder is added, whereby the encoding layer of a given level in the encoder-decoder structure is copied and superimposed onto the decoding layer of the corresponding level.
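One decoder stage with such a skip connection can be sketched as follows, reading "copied and superimposed" as the usual U-net channel-wise concatenation; the channel counts and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One U-net decoder stage: upsample, then fuse the copied encoder
    feature map of the corresponding level via a skip connection."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        # Skip connection: the encoder layer of the corresponding level is
        # copied and superimposed (concatenated along channels) onto the
        # decoder layer of the same level.
        x = torch.cat([x, skip], dim=1)
        return self.conv(x)
```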
S4, constructing an RNN, then taking the preceding semantic segmentation result and the image intensity information as the input of the RNN, and establishing spatial associations across the image by composing a graph over its pixels.
To construct the semantic associations of the anatomical regions in the image, a recurrent neural network RNN (see the paper Recurrent Neural Network Regularization, 2015) is introduced. A neighborhood graph is first composed by connecting each pixel of the image to its neighbors; the image intensity information and the initial segmentation result are then taken as the input of the RNN. The hidden-layer information is used to discriminate the category of each pixel and is passed in parallel, along the paths of the constructed graph, to the hidden layers of the discrimination structures of the other pixels, optimizing the segmentation result of the whole image.
For the pixel-neighborhood graph composition, the scheme shown in Fig. 3 is chosen: the hidden state of each pixel is passed, along the arrow directions of Fig. 3, to the pixel on its right, the pixel below it, and the pixel to its lower right.
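A sketch of this graph composition and hidden-state propagation follows. It assumes a GRU cell as the RNN unit and a simple mean over the incoming hidden states; neither choice is fixed by the description, and the per-pixel Python loop is written for clarity rather than speed:

```python
import torch
import torch.nn as nn

class PixelRNNRefiner(nn.Module):
    """Refines the initial segmentation over the pixel graph of Fig. 3:
    every pixel passes its hidden state to the pixels to its right, below
    and below-right, so each pixel aggregates the states of its left,
    upper and upper-left neighbors in one raster-order sweep."""

    def __init__(self, n_classes, hidden_size=32):
        super().__init__()
        # Per-pixel input: gray value plus the initial per-class scores.
        self.cell = nn.GRUCell(1 + n_classes, hidden_size)
        self.classify = nn.Linear(hidden_size, n_classes)
        self.hidden_size = hidden_size

    def forward(self, intensity, init_seg):
        # intensity: (H, W) tensor; init_seg: (H, W, n_classes) tensor.
        H, W = intensity.shape
        h = [[None] * W for _ in range(H)]  # hidden state per pixel
        out = intensity.new_zeros(H, W, self.classify.out_features)
        for i in range(H):
            for j in range(W):
                x = torch.cat([intensity[i, j].reshape(1),
                               init_seg[i, j]]).unsqueeze(0)
                # Incoming graph edges from the left, upper and upper-left
                # pixels (the senders along the arrows of Fig. 3).
                prev = [h[i][j - 1] if j > 0 else None,
                        h[i - 1][j] if i > 0 else None,
                        h[i - 1][j - 1] if (i > 0 and j > 0) else None]
                prev = [p for p in prev if p is not None]
                # Averaging the incoming states is one simple aggregation.
                h_in = (torch.stack(prev).mean(dim=0) if prev
                        else x.new_zeros(1, self.hidden_size))
                h[i][j] = self.cell(x, h_in)
                out[i, j] = self.classify(h[i][j]).squeeze(0)
        return out  # refined per-pixel class scores
```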
S5, connecting the initial segmentation network and the RNN to build an end-to-end segmentation network; once the network is built, it is trained as a whole to obtain a trained segmentation network.
The initial segmentation network and the RNN are connected in series, that is, the output of the initial segmentation network is used as the input of the RNN, yielding an end-to-end segmentation network whose overall structure is shown in Fig. 2 and which performs fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; this segmentation network is trained as a whole by back propagation.
S6, segmenting breast ultrasound images with the segmentation network trained in step S5.
The present invention can be preferably implemented as described above.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited to it; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included in the protection scope of the present invention.

Claims (4)

1. A breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding, characterized by comprising the following steps:
S1, preprocessing the breast ultrasound image and normalizing its size to obtain image intensity information;
S2, adopting a VGGNet-based feature extraction base network and migrating its network parameters pretrained on ImageNet to extract the basic visual features of the image; given the small data volume of ultrasound images and the proven ability of VGGNet to extract features from small-sample data, VGG19 is adopted as the U-net encoder of the model, and the network parameters from its ImageNet pretraining are migrated, the migrated parameters being those of the VGG19 feature layers;
S3, adopting a U-net encoder-decoder structure, using the feature extraction base network of step S2 as the encoder, constructing the decoder symmetrically to the encoder, adding skip connections between encoder and decoder, and thus building an initial segmentation network that semantically segments the different anatomical structure regions of the image to obtain a semantic segmentation result;
S4, constructing an RNN, then taking the preceding semantic segmentation result and the image intensity information as the input of the RNN, and establishing spatial associations across the image by composing a graph over its pixels; to construct the semantic associations of the anatomical regions in the image, a recurrent neural network RNN is introduced, a neighborhood graph is first composed by connecting each pixel of the image to its neighbors, the image intensity information and the initial segmentation result are then taken as the input of the RNN, and the hidden-layer information is used to discriminate the category of each pixel while also being passed in parallel, along the paths of the constructed graph, to the hidden layers of the discrimination structures of the other pixels, optimizing the segmentation result of the whole image;
S5, connecting the initial segmentation network and the RNN to build an end-to-end segmentation network; once the network is built, it is trained as a whole to obtain a trained segmentation network;
S6, segmenting breast ultrasound images with the segmentation network trained in step S5.
2. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S1 the normalization uses the resize function of the transform module of skimage in Python, after which the gray values are mapped into the interval [-1, 1], the gray values being the image intensity information.
3. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S3 a U-net encoder-decoder structure is adopted in the segmentation process, and, to preserve the image details needed to segment the anatomical semantic regions, skip connections between the encoding and decoding layers are introduced, whereby the encoding layer of a given level in the encoder-decoder structure is copied and superimposed onto the decoding layer of the corresponding level.
4. The breast ultrasound image segmentation method based on fine-grained anatomical semantic understanding according to claim 1, wherein in step S5 the initial segmentation network and the RNN are connected in series, that is, the output of the initial segmentation network is used as the input of the RNN, yielding an end-to-end segmentation network for fine-grained anatomical semantic understanding and segmentation of breast ultrasound images; the segmentation network is trained as a whole by back propagation.
CN201911364557.2A 2019-12-26 2019-12-26 Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding Active CN111161235B (en)

Priority Applications (1)

Application number: CN201911364557.2A (granted as CN111161235B)
Priority date: 2019-12-26; filing date: 2019-12-26
Title: Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Applications Claiming Priority (1)

Application number: CN201911364557.2A (granted as CN111161235B)
Priority date: 2019-12-26; filing date: 2019-12-26
Title: Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Publications (2)

Publication Number Publication Date
CN111161235A (en) 2020-05-15
CN111161235B (en) 2023-05-23

Family

ID=70558057

Family Applications (1)

Application number: CN201911364557.2A (granted as CN111161235B); status: Active
Priority date: 2019-12-26; filing date: 2019-12-26
Title: Breast ultrasound image segmentation method based on fine granularity anatomical semantic understanding

Country Status (1)

Country Link
CN (1) CN111161235B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276402A (en) * 2019-06-25 2019-09-24 北京工业大学 A kind of salt body recognition methods based on the enhancing of deep learning semanteme boundary
CN110458844A (en) * 2019-07-22 2019-11-15 大连理工大学 A kind of semantic segmentation method of low illumination scene

Also Published As

Publication number Publication date
CN111161235A (en) 2020-05-15

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant