CN111784701A - Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information - Google Patents

Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information

Info

Publication number
CN111784701A
CN111784701A
Authority
CN
China
Prior art keywords
image
boundary
segmentation
feature
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010523520.6A
Other languages
Chinese (zh)
Inventor
张海
王伟明
朱磊
吴韵竹
张若昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Peoples Hospital
Original Assignee
Shenzhen Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Peoples Hospital filed Critical Shenzhen Peoples Hospital
Priority to CN202010523520.6A
Publication of CN111784701A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention discloses an ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information. Given a breast ultrasound image I, I is first downsampled to obtain an image J, and I and J are simultaneously input into a feature pyramid network to obtain a group of feature maps with different spatial resolutions. Then, by learning the boundary map of the breast lesion region, a boundary-guided feature enhancement module is developed to enhance the feature map of each FPN layer. Next, upsampling and concatenation operations are performed on the enhanced feature maps, and a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J are predicted. Finally, to exploit image information of different scales, S_I and S_J are fused to obtain the segmentation result of the breast lesion. By combining the enhanced boundary features and multi-scale image information into a unified framework, the method can accurately segment the breast lesion region from the ultrasound image and effectively remove false detection regions caused by various imaging artifacts.

Description

Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information
Technical Field
The invention belongs to the field of image processing, relates to an ultrasonic image segmentation method, and particularly relates to an ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information.
Background
Breast cancer is the most common cancer in women, and ultrasound examination has become a very attractive imaging method for breast lesion detection and analysis due to its many advantages, such as safety, flexibility, and versatility. However, since ultrasound images are difficult to interpret and quantitative measurement of breast lesion areas is a cumbersome and difficult task, clinical diagnosis of breast lesions based on ultrasound imaging typically requires a trained and experienced radiologist. Therefore, automatic localization of breast lesions would greatly facilitate clinical detection and analysis, make diagnosis more efficient, and achieve higher sensitivity and specificity. Unfortunately, accurately segmenting breast lesions from ultrasound images is very challenging due to strong imaging artifacts such as speckle noise, low contrast, and intensity non-uniformity. For some samples of ultrasound images, please refer to Fig. 1.
In the literature, breast lesion segmentation algorithms for ultrasound images have been extensively studied. Early methods, which mainly inferred the boundary of the breast lesion region with manually characterized segmentation models, can be divided into three major categories: region growing methods, deformable models, and graphical models.
The region growing method starts with a set of manually or automatically selected seeds that are gradually expanded to capture the boundaries of the target region according to predefined growth rules. An efficient method was developed by Shan et al to automatically generate regions of interest for segmentation of breast lesion areas, and Kwak et al defined growth rules using smoothness of contour lines and similarity of regions (average intensity and size).
Deformable models first construct an initial model and then deform it based on internal and external energies to reach the object boundary. Madabhushi et al initialize a deformable model with boundary points and use balloon forces to define the external energy, and Chang et al use rod filters to remove speckle noise from the ultrasound image and then deform the model to segment breast lesion regions.
Graphical models achieve segmentation of breast lesion regions by efficient energy optimization using Markov random fields or graph-cut frameworks. Chiang et al use a pre-trained probabilistic boosting tree classifier to determine the data term of the graph-cut energy, and Xian et al define the energy function by modeling frequency- and spatial-domain information. While many prior models have been designed to aid in the segmentation of breast lesion regions, these methods lack the ability to acquire high-level semantic features to identify weak boundaries in ambiguous regions, so boundary leakage is prone to occur in low-contrast ultrasound images.
In contrast, learning-based approaches train classifiers with a set of manually designed features and then perform the segmentation task with these classifiers. Liu et al extract 18 local image features to train an SVM classifier and use it to segment breast lesion regions. Jiang et al use 24 Haar-like features and a trained AdaBoost classifier for breast tumor segmentation. In recent years, convolutional neural networks have achieved great success in many medical applications by learning high-level semantic features from labeled data through a series of deep convolutional layers. Currently, several convolutional neural network frameworks have been developed for segmenting breast lesion regions from ultrasound images. For example, Yap et al investigated the performance of three networks for breast lesion detection, including a patch-based LeNet, U-Net, and a transfer learning method based on a pre-trained FCN-AlexNet. Lei et al proposed a deep convolutional encoder-decoder network with deep boundary supervision and adaptive domain transfer for segmenting breast anatomical layers. Hu et al segmented breast tumors by combining a dilated fully convolutional network with an active contour model. Although convolutional neural network-based methods improve the performance of breast lesion segmentation in low-contrast ultrasound images, they still suffer from artifacts such as speckle noise and brightness non-uniformity, which are common in clinical practice and can therefore produce inaccurate segmentation results.
Disclosure of Invention
In order to solve the above problems, we propose a feature pyramid network (FPN)-based boundary-guided multi-scale network (BGM-Net) to improve the performance of breast lesion segmentation in ultrasound images. Specifically, we first develop a boundary-guided feature enhancement module that enhances the feature map of each FPN layer by learning the boundary map of the breast lesion region. This step is particularly important for the performance of BGM-Net because it improves the ability of the FPN framework to detect the boundaries of breast lesion regions in low-contrast ultrasound images, thus alleviating the boundary leakage problem in ambiguous regions. Then, we devise a multi-scale scheme that copes with ultrasound artifacts by fusing image information of different scales: a test image is first downsampled, then the test image and the downsampled image are simultaneously input into BGM-Net to predict a fine-scale segmentation map and a coarse-scale segmentation map, respectively, and finally the segmentation result is generated by fusing the two maps.
To achieve the above purpose, the invention provides the following technical scheme: an ultrasonic image segmentation method combining boundary feature enhancement and multi-scale information, comprising the following steps:
step 1, obtaining an ultrasound image training data set, labeling the lesion area in each training image in the training data set to obtain an annotation mask, and performing edge detection on the annotation mask with a Canny detector to obtain a boundary map of the lesion area;
step 2, down-sampling each training image I in the training data set to obtain an image J, simultaneously inputting I and J into an improved feature pyramid network FPN for training until a preset number of iterations is reached, predicting a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J, and obtaining a trained network model, wherein the improved feature pyramid network FPN is based on the existing feature pyramid network framework, enhances the feature map of each FPN layer through a boundary-guided feature enhancement module, and then performs upsampling and concatenation operations on the enhanced feature maps; finally, S_I and S_J are fused to obtain the segmentation result of the lesion region;
step 3, given a test ultrasound image I', first down-sampling I' to obtain an image J', and simultaneously inputting I' and J' into the trained network model to obtain a fine segmentation map S_I' and a coarse segmentation map S_J' corresponding to I' and J'; finally, S_I' and S_J' are fused to obtain the segmentation result of the lesion region.
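As an illustrative sketch of how the boundary map in step 1 can be derived from the annotation mask, the snippet below uses a simple morphological boundary (mask minus its 4-neighbour erosion) as a dependency-free stand-in for the Canny detector named in the patent; the function name and the toy mask are ours, not the patent's:

```python
import numpy as np

def boundary_map(mask: np.ndarray) -> np.ndarray:
    """Extract a 1-pixel boundary map from a binary lesion mask.

    Stand-in for the Canny detector used in the patent: a pixel is on the
    boundary if it belongs to the mask but at least one 4-neighbour does not.
    """
    mask = (mask > 0).astype(np.uint8)
    padded = np.pad(mask, 1, mode="edge")
    eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask - eroded

# Toy 6x6 annotation mask with a 4x4 square lesion region.
m = np.zeros((6, 6), dtype=np.uint8)
m[1:5, 1:5] = 1
b = boundary_map(m)  # 12-pixel ring around the 2x2 interior
```

On a real annotation mask the Canny detector would be applied instead; this sketch only illustrates that the boundary map is a thin contour of the labeled lesion region.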
Further, the specific processing procedure of the boundary guiding feature enhancing module in step 2 is as follows,
given a feature map F, a 3×3 convolutional layer is first applied to F to obtain a first intermediate image X, and a 1×1 convolutional layer is then applied to obtain a second intermediate image Y, which is used to learn the boundary map B of the lesion region, wherein the boundary map B is obtained as follows: the lesion region in the training image is labeled to obtain an annotation mask, and edge detection is performed on the annotation mask with a Canny detector to obtain the boundary map B of the lesion region;
then, a 3×3 convolutional layer is applied to Y to obtain a third intermediate image Z, and each channel of Z is multiplied pixel-wise by B; finally, X and Z are connected and a 1×1 convolutional layer is applied to obtain the enhanced feature map F̂, whose c-th channel is calculated as follows:

F̂_c = f_conv(concate(X, Z_c ⊗ B)),

wherein f_conv is the 1×1 convolution parameter, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate denotes the concatenation operation on feature maps.
Further, the total loss function L of the improved feature pyramid network FPN in step 2 is calculated as follows:

L = D_seg + α·D_edge,

wherein D_seg and D_edge are the segmentation loss and the boundary loss, respectively, and α is a parameter used to balance D_seg and D_edge; D_seg and D_edge are calculated as follows:

D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),

D_edge = Σ_k Φ(B_k, G_e),

wherein G_s and G_e are the annotation mask and the boundary map of the lesion region, respectively; S_I and S_J are the fine segmentation map corresponding to I and the coarse segmentation map corresponding to J, respectively; S_f is the final segmentation result of training image I; B_k is the boundary map of the lesion region predicted by the k-th boundary-guided feature enhancement module. The function Φ combines the Dice loss and the cross-entropy loss and is defined as follows:

Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),

wherein Φ_CE and Φ_Dice are the cross-entropy loss function and the Dice loss function, respectively, and β is a parameter used to balance Φ_CE and Φ_Dice.
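The combined function Φ can be sketched as follows. The exact reductions (mean for cross-entropy, global sums for soft Dice) and the smoothing term eps are our assumptions; the patent only states that Φ combines the cross-entropy and Dice losses with weight β:

```python
import numpy as np

def dice_loss(S, G, eps=1e-6):
    """Soft Dice loss between a predicted map S and gold standard G in [0, 1]."""
    inter = (S * G).sum()
    return 1.0 - (2.0 * inter + eps) / (S.sum() + G.sum() + eps)

def ce_loss(S, G, eps=1e-6):
    """Pixel-wise binary cross-entropy loss, averaged over pixels."""
    S = np.clip(S, eps, 1.0 - eps)
    return float(-(G * np.log(S) + (1 - G) * np.log(1 - S)).mean())

def phi(S, G, beta=0.5):
    """Combined loss Phi(S, G) = Phi_CE + beta * Phi_Dice (beta = 0.5 as in the patent)."""
    return ce_loss(S, G) + beta * dice_loss(S, G)

G = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = phi(G, G)                    # near-zero for a perfect prediction
uniform = phi(np.full((2, 2), 0.5), G) # -log(0.5) + 0.5 * 0.5
```

A perfect prediction drives both terms to (nearly) zero, while an uninformative uniform prediction is penalized by both the cross-entropy and Dice components.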
Further, the ultrasound image is a breast ultrasound image.
The invention also provides an ultrasonic image segmentation system combining boundary feature enhancement and multi-scale information, which comprises the following modules:
the training data set acquisition module is used for acquiring an ultrasound image training data set, labeling the lesion area in each training image in the training data set to obtain an annotation mask, and performing edge detection on the annotation mask with a Canny detector to obtain a boundary map of the lesion area;
an improved feature pyramid network FPN training module, used for down-sampling each training image I in the training data set to obtain an image J, simultaneously inputting I and J into the improved feature pyramid network FPN for training until a certain number of iterations is reached, thereby predicting a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J and obtaining a trained network model, wherein the improved feature pyramid network FPN is based on the existing feature pyramid network framework, enhances the feature map of each FPN layer through a boundary-guided feature enhancement module, and then performs upsampling and concatenation operations on the enhanced feature maps; finally, S_I and S_J are fused to obtain the segmentation result of the lesion region;
a segmentation result obtaining module, used for giving a test ultrasound image I', first down-sampling I' to obtain an image J', and simultaneously inputting I' and J' into the trained network model to obtain a fine segmentation map S_I' and a coarse segmentation map S_J' corresponding to I' and J'; finally, S_I' and S_J' are fused to obtain the segmentation result of the lesion region.
Further, the specific processing procedure of the boundary guide feature enhancement module in the improved feature pyramid network FPN training module is as follows,
given a feature map F, a 3×3 convolutional layer is first applied to F to obtain a first intermediate image X, and a 1×1 convolutional layer is then applied to obtain a second intermediate image Y, which is used to learn the boundary map B of the lesion region, wherein the boundary map B is obtained as follows: the lesion region in the training image is labeled to obtain an annotation mask, and edge detection is performed on the annotation mask with a Canny detector to obtain the boundary map B of the lesion region;
then, a 3×3 convolutional layer is applied to Y to obtain a third intermediate image Z, and each channel of Z is multiplied pixel-wise by B; finally, X and Z are connected and a 1×1 convolutional layer is applied to obtain the enhanced feature map F̂, whose c-th channel is calculated as follows:

F̂_c = f_conv(concate(X, Z_c ⊗ B)),

wherein f_conv is the 1×1 convolution parameter, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate denotes the concatenation operation on feature maps.
Further, the total loss function L of the improved feature pyramid network FPN in the improved feature pyramid network FPN training module is calculated as follows:

L = D_seg + α·D_edge,

wherein D_seg and D_edge are the segmentation loss and the boundary loss, respectively, and α is a parameter used to balance D_seg and D_edge; D_seg and D_edge are calculated as follows:

D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),

D_edge = Σ_k Φ(B_k, G_e),

wherein G_s and G_e are the annotation mask and the boundary map of the lesion region, respectively; S_I and S_J are the fine segmentation map corresponding to I and the coarse segmentation map corresponding to J, respectively; S_f is the final segmentation result of training image I; B_k is the boundary map of the lesion region predicted by the k-th boundary-guided feature enhancement module. The function Φ combines the Dice loss and the cross-entropy loss and is defined as follows:

Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),

wherein Φ_CE and Φ_Dice are the cross-entropy loss function and the Dice loss function, respectively, and β is a parameter used to balance Φ_CE and Φ_Dice.
Further, the ultrasound image is a breast ultrasound image.
Compared with the prior art, the invention has the following advantages and beneficial effects: the boundary-guided feature enhancement module enhances the feature map of each FPN layer, and the multi-scale scheme effectively removes false detection regions caused by various imaging artifacts, so the method can accurately segment breast lesion regions from ultrasound images. Finally, two highly challenging breast ultrasound datasets are used to verify the performance of the method, and experimental results show that the method outperforms existing methods.
Drawings
Fig. 1 shows breast ultrasound image samples, where in (a)-(c) the boundaries between lesion and non-lesion areas are blurred, and in (d)-(f) the brightness within the lesion areas is non-uniform.
Fig. 2 is a flowchart of processing a training image of an improved feature pyramid network FPN according to an embodiment of the present invention.
FIG. 3 is a flowchart of a boundary guiding feature enhancing module according to an embodiment of the present invention.
FIG. 4 is a comparison of breast lesion segmentation results between different methods: (a) test images, (b) gold standards, (c)-(h) our segmentation results (BGM-Net) and those of ConvEDNet, DeeplabV3+, FPN, U-Net++, and U-Net, respectively. The first three rows of images are from the BUSZPH dataset and the last three rows from the BUSI dataset.
Fig. 5 is a comparison of breast lesion segmentation results between different reference networks: (a) test images, (b) gold standards, (c)-(f) our results and the results of the three reference networks, respectively.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
FIG. 2 shows how the improved feature pyramid network FPN of the present invention processes a training image. Given a breast ultrasound image I, I is first down-sampled to obtain an image J, and I and J are simultaneously input into the feature pyramid network to obtain a group of feature maps with different spatial resolutions. Then, by learning the boundary map of the breast lesion region, we develop a boundary-guided feature enhancement module to optimize the feature map of each FPN layer. Next, we perform upsampling and concatenation operations on the enhanced feature maps and predict a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J. Finally, to exploit the image information of different scales, we fuse S_I and S_J to obtain the segmentation result of the breast lesion. By combining the enhanced boundary features and the multi-scale image information into a unified framework, the method can accurately segment breast lesion regions from the ultrasound image and effectively remove false detection regions caused by various imaging artifacts. Two key points of the present invention are described in detail below:
1. Boundary-guided feature enhancement
The FPN framework first uses a convolutional neural network to extract a group of feature maps with different spatial resolutions, and then, starting from the last layer, iteratively merges adjacent layers until the first layer is reached. Although FPN improves the performance of breast lesion segmentation, the accuracy of its boundary detection still leaves room for improvement due to the presence of ultrasound artifacts. To solve this problem, we develop a boundary-guided feature enhancement module that improves the boundary detection capability of each FPN layer by learning the boundary map of the breast lesion region.
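The iterative merging of adjacent layers described above can be sketched numerically. The snippet below implements a common FPN-style top-down pass (nearest-neighbour 2× upsampling followed by element-wise addition); the patent does not specify the backbone or the merge operator, so this is a generic illustration with names of our choosing:

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x spatial upsampling of a (H, W) feature map."""
    return np.repeat(np.repeat(f, 2, axis=0), 2, axis=1)

def top_down_merge(pyramid):
    """Merge a fine-to-coarse list of feature maps FPN-style: starting from
    the coarsest (last) map, repeatedly upsample and add to the next finer
    level, producing one merged map per pyramid level (finest first)."""
    merged = [pyramid[-1]]
    for finer in reversed(pyramid[:-1]):
        merged.append(finer + upsample2x(merged[-1]))
    return merged[::-1]

# Three-level toy pyramid of constant feature maps.
p2 = np.ones((4, 4))
p3 = np.ones((2, 2))
p4 = np.ones((1, 1))
out = top_down_merge([p2, p3, p4])  # coarse context accumulates toward the finest level
```

In the real network each level also passes through convolutional layers; the sketch only shows the coarse-to-fine information flow that the boundary-guided enhancement module later refines.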
FIG. 3 illustrates the flow of the boundary-guided feature enhancement module. Given a feature map F, we first apply a 3×3 convolutional layer to F to obtain a first intermediate image X, and then apply a 1×1 convolutional layer to obtain a second intermediate image Y, which is used to learn the boundary map B of the breast lesion region. Next, we apply a 3×3 convolutional layer to Y to obtain a third intermediate image Z, and multiply each channel of Z pixel-wise by B. Finally, we connect X and Z and apply a 1×1 convolutional layer to obtain the enhanced feature map F̂, whose c-th channel is calculated as follows:

F̂_c = f_conv(concate(X, Z_c ⊗ B)),

where f_conv is the 1×1 convolution parameter, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate denotes the concatenation operation on feature maps.
2. Multiscale scheme
To cope with various ultrasound artifacts, we have designed a multi-scale scheme that generates the final segmentation result by fusing image information of different scales. Specifically, for each tested breast ultrasound image, we first down-sample it to a coarse-scale image with a resolution of 320×320. In our experiments, the resolution of all training images was adjusted to 416×416 based on past experience, and the test images were adjusted to the same resolution accordingly. Then, the test image and the down-sampled image are simultaneously input into the network, the corresponding fine and coarse segmentation maps are respectively predicted, and finally the two are fused to obtain the final segmentation result. Since false detection regions on the fine-scale image can be removed by the information on the coarse-scale image, the method can accurately segment the breast lesion region.
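The fusion step can be illustrated as follows. The patent does not spell out the fusion operator, so the sketch assumes pixel-wise averaging of the fine map and the upsampled coarse map followed by thresholding, which is one simple way a false detection on the fine scale gets suppressed when the coarse scale disagrees; all names and the toy maps are ours:

```python
import numpy as np

def upsample_nn(seg, factor):
    """Nearest-neighbour upsampling of a (H, W) segmentation probability map."""
    return np.repeat(np.repeat(seg, factor, axis=0), factor, axis=1)

def fuse_multiscale(S_fine, S_coarse, thresh=0.5):
    """Fuse fine-scale and coarse-scale segmentation maps (assumed operator:
    pixel-wise average of the two probability maps, then thresholding)."""
    factor = S_fine.shape[0] // S_coarse.shape[0]
    avg = 0.5 * (S_fine + upsample_nn(S_coarse, factor))
    return (avg >= thresh).astype(np.uint8)

# Fine map: true lesion in the top-left 2x2 block plus one spurious
# high response at (3, 3); coarse map sees only the true lesion.
S_fine = np.zeros((4, 4))
S_fine[:2, :2] = 0.9
S_fine[3, 3] = 0.9
S_coarse = np.zeros((2, 2))
S_coarse[0, 0] = 0.9
fused = fuse_multiscale(S_fine, S_coarse)
```

The spurious fine-scale detection averages with a zero coarse-scale response and falls below the threshold, mirroring the false-detection removal described above.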
In our study, each training image has an annotation mask corresponding to the breast lesion region, which will serve as a gold standard for breast lesion region segmentation. Furthermore, we applied a canny detector to the mask to obtain a boundary map of the breast lesion area, which will serve as a gold standard for boundary detection. Based on the two gold criteria above, we combine the segmentation and boundary detection losses to calculate the overall loss function L as follows:
L = D_seg + α·D_edge,

where D_seg and D_edge are the segmentation loss and the boundary detection loss, respectively, and α is a parameter used to balance D_seg and D_edge, empirically set to 0.1. D_seg and D_edge are defined as:

D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),

D_edge = Σ_k Φ(B_k, G_e),

where G_s and G_e are the gold standards for breast lesion segmentation and boundary detection, respectively; S_I and S_J are the segmentation maps of I and J, respectively; S_f is the final segmentation result; B_k is the boundary map of the breast lesion region predicted by the k-th boundary-guided feature enhancement module. The function Φ combines the Dice loss and the cross-entropy loss and is defined as follows:

Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),

where Φ_CE and Φ_Dice are the cross-entropy loss function and the Dice loss function, respectively, and β is a parameter used to balance them, empirically set to 0.5.
In the training dataset, breast ultrasound images are randomly rotated, cropped, and horizontally flipped to increase the amount of data. We trained the entire framework for 10000 iterations using the Adam optimizer. We initialized the learning rate to 0.0001 and reduced it to 0.00001 after 5000 iterations. We implemented the proposed method on Keras and ran it on a single GPU with a batch size of 8.
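The learning-rate schedule described above reduces to a simple step function (the behaviour exactly at iteration 5000 is our assumption; the patent only says the rate is reduced after 5000 iterations):

```python
def learning_rate(iteration: int) -> float:
    """Step schedule used in training: 1e-4 for the first 5000 iterations,
    then 1e-5 for the remainder of the 10000-iteration run."""
    return 1e-4 if iteration < 5000 else 1e-5
```

In Keras such a schedule would typically be wired in via a learning-rate scheduler callback passed to the Adam optimizer.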
The embodiment of the invention also provides an ultrasonic image segmentation system combining boundary characteristic enhancement and multi-scale information, which comprises the following modules:
the training data set acquisition module is used for acquiring an ultrasound image training data set, labeling the lesion area in each training image in the training data set to obtain an annotation mask, and performing edge detection on the annotation mask with a Canny detector to obtain a boundary map of the lesion area;
an improved feature pyramid network FPN training module, used for down-sampling each training image I in the training data set to obtain an image J, simultaneously inputting I and J into the improved feature pyramid network FPN for training until a certain number of iterations is reached, thereby predicting a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J and obtaining a trained network model, wherein the improved feature pyramid network FPN is based on the existing feature pyramid network framework, enhances the feature map of each FPN layer through a boundary-guided feature enhancement module, and then performs upsampling and concatenation operations on the enhanced feature maps; finally, S_I and S_J are fused to obtain the segmentation result of the lesion region;
a segmentation result obtaining module, used for giving a test ultrasound image I', first down-sampling I' to obtain an image J', and simultaneously inputting I' and J' into the trained network model to obtain a fine segmentation map S_I' and a coarse segmentation map S_J' corresponding to I' and J'; finally, S_I' and S_J' are fused to obtain the segmentation result of the lesion region.
Wherein, the specific processing procedure of the boundary guide feature enhancement module in the improved feature pyramid network FPN training module is as follows,
given a feature map F, a 3×3 convolutional layer is first applied to F to obtain a first intermediate image X, and a 1×1 convolutional layer is then applied to obtain a second intermediate image Y, which is used to learn the boundary map B of the lesion region, wherein the boundary map B is obtained as follows: the lesion region in the training image is labeled to obtain an annotation mask, and edge detection is performed on the annotation mask with a Canny detector to obtain the boundary map B of the lesion region;
then, a 3×3 convolutional layer is applied to Y to obtain a third intermediate image Z, and each channel of Z is multiplied pixel-wise by B; finally, X and Z are connected and a 1×1 convolutional layer is applied to obtain the enhanced feature map F̂, whose c-th channel is calculated as follows:

F̂_c = f_conv(concate(X, Z_c ⊗ B)),

wherein f_conv is the 1×1 convolution parameter, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate denotes the concatenation operation on feature maps.
Wherein, the total loss function L of the improved feature pyramid network FPN in the improved feature pyramid network FPN training module is calculated as follows:

L = D_seg + α·D_edge,

wherein D_seg and D_edge are the segmentation loss and the boundary loss, respectively, and α is a parameter used to balance D_seg and D_edge; D_seg and D_edge are calculated as follows:

D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),

D_edge = Σ_k Φ(B_k, G_e),

wherein G_s and G_e are the annotation mask and the boundary map of the lesion region, respectively; S_I and S_J are the fine segmentation map corresponding to I and the coarse segmentation map corresponding to J, respectively; S_f is the final segmentation result of training image I; B_k is the boundary map of the lesion region predicted by the k-th boundary-guided feature enhancement module. The function Φ combines the Dice loss and the cross-entropy loss and is defined as follows:

Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),

wherein Φ_CE and Φ_Dice are the cross-entropy loss function and the Dice loss function, respectively, and β is a parameter used to balance Φ_CE and Φ_Dice.
To validate the effectiveness of the proposed method, we used two very challenging breast ultrasound datasets for evaluation. The first dataset (BUSI; Dataset of Breast Ultrasound Images, Data in Brief 2020, 28, 104863) comes from Baheya Hospital in Cairo, Egypt, which is mainly devoted to the early detection and treatment of women's cancers. This dataset contains 780 tumor images collected from 600 patients; we randomly selected 661 images as the training set and the remaining 119 images as the test set. The second dataset comes from Shenzhen People's Hospital (BUSZPH for short) and contains 632 breast ultrasound images, of which 500 were randomly selected as the training set and the remaining 132 as the test set. The breast lesion regions in all images were manually segmented by experienced radiologists, and each annotation was confirmed by three clinicians.
We verified the performance of the proposed method by comparing it with five recent methods: U-Net, U-Net++, Feature Pyramid Network (FPN), DeeplabV3+, and ConvEDNet. Tables 1 and 2 list the measurements of the different segmentation methods on the two datasets, respectively. Our method obtains higher measurements on the four metrics of Dice, Jaccard, accuracy, and recall, and a lower ADB measurement, which demonstrates that the proposed method can better segment breast lesion regions from ultrasound images.
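For reference, the overlap metrics mentioned above have standard definitions for binary masks; the patent does not give formulas, so the sketch below uses the conventional ones (ADB, a boundary-distance metric, is omitted for brevity):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Compute the four overlap metrics reported above (Dice, Jaccard,
    accuracy, recall) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    return dice, jaccard, accuracy, recall

# Toy 2x2 masks: the prediction misses one of two lesion pixels.
gt = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [0, 0]])
dice, jaccard, accuracy, recall = seg_metrics(pred, gt)
```

Higher Dice, Jaccard, accuracy, and recall indicate better overlap with the gold standard, while ADB (lower is better) measures the average distance between predicted and gold-standard boundaries.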
TABLE 1 Measurements of the different segmentation methods on the BUSZPH dataset
TABLE 2 Measurements of the different segmentation methods on the BUSI dataset
Figure 4 compares the segmentation results of our method with those of the other five segmentation methods. As shown, our method accurately segments breast lesion regions from ultrasound images despite the presence of significant artifacts, whereas the other methods often produce over-segmentation or under-segmentation by wrongly including part of a non-lesion region or missing part of a lesion region. In the first and second rows, our results have the highest similarity to the gold standard despite strong speckle noise; this is because the boundary detection term in our loss function regularizes the boundary shape of the detected region using the boundary information of the gold standard. In the third and fourth rows, because our multi-scale scheme fuses image information at different scales, our method removes most of the non-lesion areas even though the blurred regions have weak boundaries. Finally, in the fifth and sixth rows, benefiting from the boundary-guided feature enhancement module, our method accurately locates the boundary of the breast lesion region in ultrasound images with uneven brightness. In contrast, the other methods produce poorer segmentation results because they cannot effectively cope with the various ultrasound artifacts.
In addition, we performed an ablation analysis to evaluate the key components of the proposed method. Specifically, we considered three reference networks and compared their results with ours on the two datasets. The first reference network, denoted Basic, uses neither boundary-guided feature enhancement nor multi-scale information fusion; removing both components reduces the proposed method to a standard feature pyramid network. The second reference network, denoted Basic+Multiscale, uses only the multi-scale scheme without boundary-guided feature enhancement. The third reference network, denoted Basic+BGFE, uses only boundary-guided feature enhancement without the multi-scale scheme. Tables 3 and 4 list the measurements of the different reference networks on the two datasets. As shown, both Basic+BGFE and Basic+Multiscale outperform Basic, with higher Dice, Jaccard, accuracy and recall scores and a lower ADB score, which fully demonstrates the contribution of the boundary-guided feature enhancement module and the multi-scale scheme. Moreover, the proposed method outperforms all three reference networks, which verifies the benefit of combining boundary feature enhancement and multi-scale information fusion in a unified framework.
TABLE 3 Measurements of the different reference networks on the BUSZPH dataset
TABLE 4 Measurements of the different reference networks on the BUSI dataset
Figure 5 compares the segmentation results of our method and the three reference networks. Clearly, our method segments breast lesions better. False-detection regions caused by speckle noise are still visible in the results of Basic+BGFE, which also mistakenly segments a large part of the non-lesion region because of the weak boundary in the blurred region. In contrast, our method relies on the boundary-guided feature enhancement module to accurately locate the boundary of the breast lesion, while the multi-scale scheme effectively suppresses the false-detection regions. As a result, our segmentations have the highest similarity to the gold standard.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit of the invention or the scope defined by the appended claims.

Claims (8)

1. An ultrasonic image segmentation method combining boundary feature enhancement and multi-scale information is characterized by comprising the following steps:
step 1, obtaining an ultrasonic image training data set, labeling the lesion area in each training image of the training data set to obtain a labeling mask, and performing edge detection on the labeling mask with a Canny detector to obtain a boundary map of the lesion area;
step 2, down-sampling each training image I in the training data set to obtain an image J, and inputting I and J simultaneously into an improved feature pyramid network FPN for training until a certain number of iterations is reached, thereby predicting a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J and obtaining a trained network model, wherein the improved feature pyramid network FPN is based on the existing feature pyramid network framework, enhances the feature map of each FPN layer through a boundary-guided feature enhancement module, and then performs up-sampling and concatenation operations on the enhanced feature maps; finally, the segmentation result of the lesion area is obtained by fusing S_I and S_J;
step 3, given a test ultrasonic image I', first down-sampling I' to obtain an image J', then inputting I' and J' simultaneously into the trained network model to obtain a fine segmentation map S_I' corresponding to I' and a coarse segmentation map S_J' corresponding to J', and finally obtaining the segmentation result of the lesion area by fusing S_I' and S_J'.
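A minimal sketch of the dual-scale inference in step 3, assuming a generic `model` callable and simple averaging as the fusion operator (the claim does not fix the fusion rule, so the averaging here is an assumption):

```python
import numpy as np

def downsample2(img):
    """Naive 2x down-sampling by taking every other pixel."""
    return img[::2, ::2]

def upsample2(img):
    """Nearest-neighbour 2x up-sampling back to the original grid."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def dual_scale_segment(img, model):
    """Run the model on I' and its down-sampled copy J', then fuse the two maps."""
    s_fine = model(img)                             # fine map S_I'
    s_coarse = upsample2(model(downsample2(img)))   # coarse map S_J', resized to I'
    return (s_fine + s_coarse) / 2.0                # assumed fusion: pixel-wise average
```

The coarse branch sees a larger effective receptive field, so averaging the two maps suppresses small false detections that appear only at the fine scale.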
2. A method of ultrasound image segmentation in combination with border feature enhancement and multiscale information as claimed in claim 1, wherein the specific processing procedure of the boundary-guided feature enhancement module in step 2 is as follows:
given a feature map F, a 3x3 convolution layer is first applied to F to obtain a first intermediate image X, and a 1x1 convolution layer is then applied to obtain a second intermediate image Y, which is used to learn the boundary map B of the lesion region, wherein the boundary map B is obtained by labeling the lesion area in the training image to obtain a labeling mask and performing edge detection on the labeling mask with a Canny detector;
then, a 3x3 convolution layer is applied to Y to obtain a third intermediate image Z, and each channel of Z is multiplied by B pixel by pixel; finally, X and Z are concatenated, and a 1x1 convolution layer is applied to obtain the enhanced feature map F̂, the c-th channel of which is calculated as follows:
F̂_c = f_conv(concate(X, Z_c ⊗ B)),
wherein f_conv denotes the parameters of the 1x1 convolution, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate represents the concatenation operation on feature maps.
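The per-channel enhancement of claim 2 (multiply a channel of Z by the boundary map B, concatenate with X, then apply a 1x1 convolution) can be sketched in NumPy, with the 1x1 convolution written as a per-pixel weighted sum over the concatenated channels; all shapes and weights below are illustrative, not values from the patent:

```python
import numpy as np

def bgfe_channel(X, Zc, B, w):
    """One output channel of the boundary-guided enhancement.

    X  : (Cx, H, W) first intermediate features
    Zc : (H, W)     one channel of the third intermediate image Z
    B  : (H, W)     predicted boundary map of the lesion region
    w  : (Cx + 1,)  1x1-convolution weights for this output channel
    """
    # Gate the channel by the boundary map, then stack it onto X.
    stacked = np.concatenate([X, (Zc * B)[None]], axis=0)   # (Cx + 1, H, W)
    # A 1x1 convolution is a weighted sum across channels at each pixel.
    return np.tensordot(w, stacked, axes=1)                  # (H, W)
```

Gating by B amplifies responses near the predicted lesion boundary before the features are merged back, which is the intent of the enhancement module.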
3. A method of ultrasound image segmentation in combination with border feature enhancement and multiscale information as claimed in claim 1, wherein the overall loss function L of the improved feature pyramid network FPN in step 2 is calculated as follows:
L = D_seg + α·D_edge,
wherein D_seg and D_edge are respectively the segmentation loss and the boundary loss, and α is a parameter used to balance D_seg and D_edge; D_seg and D_edge are calculated as follows:
D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),
D_edge = Σ_k Φ_CE(B_k, G_e),
wherein G_s and G_e are respectively the labeling mask and the boundary map of the lesion region; S_I and S_J are respectively the fine segmentation map corresponding to I and the coarse segmentation map corresponding to J; S_f is the final segmentation result of the training image I; B_k is the predicted boundary map of the lesion region at the k-th boundary-guided feature enhancement module, the sum running over all such modules; the function Φ, which comprises a Dice loss and a cross-entropy loss, is defined as follows:
Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),
wherein Φ_CE and Φ_Dice are respectively the cross-entropy loss function and the Dice loss function, and β is a parameter used to balance Φ_CE and Φ_Dice.
4. A method of ultrasound image segmentation in combination with border feature enhancement and multiscale information as claimed in claim 1, wherein: the ultrasound image is a breast ultrasound image.
5. An ultrasound image segmentation system combining border feature enhancement and multi-scale information, comprising the following modules:
the training data set acquisition module, used for obtaining an ultrasonic image training data set, labeling the lesion area in each training image of the training data set to obtain a labeling mask, and performing edge detection on the labeling mask with a Canny detector to obtain a boundary map of the lesion area;
the improved feature pyramid network FPN training module, used for down-sampling each training image I in the training data set to obtain an image J, and inputting I and J simultaneously into the improved feature pyramid network FPN for training until a certain number of iterations is reached, thereby predicting a fine segmentation map S_I corresponding to I and a coarse segmentation map S_J corresponding to J and obtaining a trained network model, wherein the improved feature pyramid network FPN is based on the existing feature pyramid network framework, enhances the feature map of each FPN layer through a boundary-guided feature enhancement module, and then performs up-sampling and concatenation operations on the enhanced feature maps; finally, the segmentation result of the lesion area is obtained by fusing S_I and S_J;
the segmentation result obtaining module, used for, given a test ultrasonic image I', first down-sampling I' to obtain an image J', then inputting I' and J' simultaneously into the trained network model to obtain a fine segmentation map S_I' corresponding to I' and a coarse segmentation map S_J' corresponding to J', and finally obtaining the segmentation result of the lesion area by fusing S_I' and S_J'.
6. The ultrasound image segmentation system in combination with border feature enhancement and multiscale information as set forth in claim 5 wherein: the specific processing procedure of the boundary-guided feature enhancement module in the improved feature pyramid network FPN training module is as follows,
given a feature map F, a 3x3 convolution layer is first applied to F to obtain a first intermediate image X, and a 1x1 convolution layer is then applied to obtain a second intermediate image Y, which is used to learn the boundary map B of the lesion region, wherein the boundary map B is obtained by labeling the lesion area in the training image to obtain a labeling mask and performing edge detection on the labeling mask with a Canny detector;
then, a 3x3 convolution layer is applied to Y to obtain a third intermediate image Z, and each channel of Z is multiplied by B pixel by pixel; finally, X and Z are concatenated, and a 1x1 convolution layer is applied to obtain the enhanced feature map F̂, the c-th channel of which is calculated as follows:
F̂_c = f_conv(concate(X, Z_c ⊗ B)),
wherein f_conv denotes the parameters of the 1x1 convolution, Z_c is the c-th channel of Z, ⊗ denotes pixel-wise multiplication, and concate represents the concatenation operation on feature maps.
7. The ultrasound image segmentation system in combination with border feature enhancement and multiscale information as set forth in claim 5, wherein the overall loss function L of the improved feature pyramid network FPN in the improved feature pyramid network FPN training module is calculated as follows:
L = D_seg + α·D_edge,
wherein D_seg and D_edge are respectively the segmentation loss and the boundary loss, and α is a parameter used to balance D_seg and D_edge; D_seg and D_edge are calculated as follows:
D_seg = Φ(S_I, G_s) + Φ(S_J, G_s) + Φ(S_f, G_s),
D_edge = Σ_k Φ_CE(B_k, G_e),
wherein G_s and G_e are respectively the labeling mask and the boundary map of the lesion region; S_I and S_J are respectively the fine segmentation map corresponding to I and the coarse segmentation map corresponding to J; S_f is the final segmentation result of the training image I; B_k is the predicted boundary map of the lesion region at the k-th boundary-guided feature enhancement module, the sum running over all such modules; the function Φ, which comprises a Dice loss and a cross-entropy loss, is defined as follows:
Φ(S, G) = Φ_CE(S, G) + β·Φ_Dice(S, G),
wherein Φ_CE and Φ_Dice are respectively the cross-entropy loss function and the Dice loss function, and β is a parameter used to balance Φ_CE and Φ_Dice.
8. The ultrasound image segmentation system in combination with border feature enhancement and multiscale information as set forth in claim 5 wherein: the ultrasound image is a breast ultrasound image.
CN202010523520.6A 2020-06-10 2020-06-10 Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information Pending CN111784701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010523520.6A CN111784701A (en) 2020-06-10 2020-06-10 Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information


Publications (1)

Publication Number Publication Date
CN111784701A true CN111784701A (en) 2020-10-16

Family

ID=72755834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010523520.6A Pending CN111784701A (en) 2020-06-10 2020-06-10 Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information

Country Status (1)

Country Link
CN (1) CN111784701A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785602A (en) * 2021-01-08 2021-05-11 重庆兆琨智医科技有限公司 Multi-modal brain tumor image segmentation system and method
CN112785598A (en) * 2020-11-05 2021-05-11 南京天智信科技有限公司 Ultrasonic breast tumor automatic segmentation method based on attention enhancement improved U-shaped network
CN113506310A (en) * 2021-07-16 2021-10-15 首都医科大学附属北京天坛医院 Medical image processing method and device, electronic equipment and storage medium
CN113781440A (en) * 2020-11-25 2021-12-10 北京医准智能科技有限公司 Ultrasonic video focus detection method and device
CN114723670A (en) * 2022-03-10 2022-07-08 苏州鸿熙融合智能医疗科技有限公司 Intelligent processing method for breast cancer lesion picture

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081712A1 (en) * 2005-10-06 2007-04-12 Xiaolei Huang System and method for whole body landmark detection, segmentation and change quantification in digital images
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN111145170A (en) * 2019-12-31 2020-05-12 电子科技大学 Medical image segmentation method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG Xueqi; WU Yueqing; YAO Yu: "Segmentation of the left ventricle in ultrasound images based on an active shape model", Computer Applications, no. 1, 15 June 2017 (2017-06-15) *


Similar Documents

Publication Publication Date Title
CN111784701A (en) Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information
CN108898595B (en) Construction method and application of positioning model of focus region in chest image
CN108053417B (en) lung segmentation device of 3D U-Net network based on mixed rough segmentation characteristics
CN108416360B (en) Cancer diagnosis system and method based on breast molybdenum target calcification features
CN107644420B (en) Blood vessel image segmentation method based on centerline extraction and nuclear magnetic resonance imaging system
CN110929789A (en) Liver tumor automatic classification method and device based on multi-stage CT image analysis
CN111476793B (en) Dynamic enhanced magnetic resonance imaging processing method, system, storage medium and terminal
Le et al. Liver tumor segmentation from MR images using 3D fast marching algorithm and single hidden layer feedforward neural network
KR20230059799A (en) A Connected Machine Learning Model Using Collaborative Training for Lesion Detection
CN101103924A (en) Galactophore cancer computer auxiliary diagnosis method based on galactophore X-ray radiography and system thereof
CN110706225B (en) Tumor identification system based on artificial intelligence
Huang et al. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image
Rela et al. Liver tumor segmentation and classification: A systematic review
Hille et al. Joint liver and hepatic lesion segmentation in MRI using a hybrid CNN with transformer layers
Ganvir et al. Filtering method for pre-processing mammogram images for breast cancer detection
Nayan et al. A deep learning approach for brain tumor detection using magnetic resonance imaging
Hu et al. A multi-instance networks with multiple views for classification of mammograms
Mastouri et al. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study
Guo et al. Thyroid nodule ultrasonic imaging segmentation based on a deep learning model and data augmentation
Chen et al. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review
Chen et al. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform
Song et al. Segmentation of bone metastasis in CT images based on modified HED
Amritha et al. Liver tumor segmentation and classification using deep learning
Radhi et al. An automatic segmentation of breast ultrasound images using U-Net model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination