CN114332572B - Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Info

Publication number
CN114332572B
CN114332572B
Authority
CN
China
Prior art keywords
fusion
feature
foreground
background
map
Prior art date
Legal status
Active
Application number
CN202111532955.8A
Other languages
Chinese (zh)
Other versions
CN114332572A (en)
Inventor
张煜
宁振源
邸小慧
钟升洲
Current Assignee
Southern Medical University
Original Assignee
Southern Medical University
Priority date
Filing date
Publication date
Application filed by Southern Medical University filed Critical Southern Medical University
Priority to CN202111532955.8A priority Critical patent/CN114332572B/en
Publication of CN114332572A publication Critical patent/CN114332572A/en
Application granted granted Critical
Publication of CN114332572B publication Critical patent/CN114332572B/en


Abstract

The invention provides a method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images using a saliency map-guided hierarchical dense feature fusion network. The method combines a linear spectral clustering superpixel method with a multi-scale region grouping method to obtain feature representation maps, avoiding the loss of useful information, and then builds a three-branch hierarchical dense feature fusion network to extract and fuse foreground and background features, from which the multi-scale fusion feature parameters of the breast lesion ultrasound image are obtained. The foreground and background progressive dense feature extraction branch networks take the original image and the corresponding saliency map as joint input, so that the foreground and background features relevant to the classification task are each extracted effectively. Exploiting the correlation and complementary information between foreground and background, the hierarchical feature fusion branch network fuses foreground and background information at multiple scales to obtain more accurate and more discriminative multi-scale fusion feature parameters.

Description

Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
Technical Field
The invention relates to the technical field of computer vision and medical imaging, and provides a method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images based on a saliency map-guided hierarchical dense feature fusion network.
Background
At present, breast ultrasound imaging is the most commonly used examination technique in the clinic, with advantages such as being radiation-free, non-invasive and low-cost. However, continuously reviewing ultrasound images takes operators a great deal of time, and considerable experience is required of the operator to ensure accurate results. In addition, interpretations may differ between operators. Therefore, to help operators interpret breast ultrasound images better, researchers have proposed many computer-aided diagnosis methods that support operators' decisions and improve the accuracy of breast ultrasound image feature extraction.
Among existing deep learning methods, many can be successfully applied to prediction on regions of interest in breast ultrasound images. However, building a robust model remains challenging because of the relatively complex patterns, low contrast and blurred boundaries between the region of interest (foreground) and the surrounding tissue (background) in breast ultrasound images. It is therefore necessary to provide a method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images based on a saliency map-guided hierarchical dense feature fusion network to overcome the defects of the prior art.
Disclosure of Invention
The invention aims to provide a method for extracting multiscale fusion characteristic parameters of an ultrasonic image of breast lesions by using a hierarchical dense characteristic fusion network based on saliency map guidance. According to the method, the foreground saliency map and the background saliency map are used as a priori information to guide the network to learn the discriminative characteristic representation, so that the accuracy of extracting the breast ultrasonic image characteristics is improved.
The above object of the present invention is achieved by the following technical means.
The method for extracting the multiscale fusion characteristic parameters of the breast lesion ultrasonic image based on the hierarchical dense characteristic fusion network guided by the saliency map comprises the following steps:
s1, reading an original ultrasonic image of the breast lesion.
S2, randomly selecting at least three marking points in a lesion area on an original ultrasonic image of the breast lesion.
S3, processing the original ultrasonic image of the breast lesion in the step S1 by using a linear spectral clustering super-pixel method to obtain a low-level characteristic representation diagram.
Processing the original ultrasonic image of the breast lesion in the step S1 by using a multi-scale region grouping method to obtain a high-level characteristic representation diagram.
S4, respectively selecting target areas in the low-level characteristic representation diagram and the high-level characteristic representation diagram obtained in the step S3 by using the marking points in the step S2, and carrying out weighted summation on the target areas in the two selected images to obtain a foreground saliency map.
S5, executing the reverse operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map.
S6, inputting the foreground saliency map obtained in the step S4 and the original ultrasonic image of the breast lesion in the step S1 into a foreground feature extraction network branch together, and extracting foreground salient features to obtain a foreground salient feature map.
Inputting the background saliency map obtained in the step S5 and the original ultrasonic image of the breast lesion in the step S1 into a background feature extraction network branch together, and extracting background salient features to obtain a background salient feature map.
And S7, in the hierarchical dense feature fusion branch network, the foreground salient feature map and the background salient feature map extracted in the step S6 are fused and learned to obtain a multi-scale fusion feature map.
And S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to perform training of a classifier, and obtaining multi-scale fusion feature parameters.
Specifically, in step S1, the original ultrasound image of breast lesions is a single-channel two-dimensional image.
Specifically, in step S2, the area formed by the at least three marker points on the original ultrasound image of the breast lesion is a target area containing breast lesion information.
In a specific embodiment, three marking points are provided, and in step S3, the original ultrasonic image of breast lesions is processed by using a linear spectral clustering super-pixel method, and the original ultrasonic image of breast lesions is denoted as I.
First, different numbers of super-pixel blocks n_i are set, and three super-pixel images p(I, n_i) are obtained. Then, the three marking points selected in the step S2 are used to select target areas on each super-pixel image respectively, so as to obtain three target area images. The three target area images are weighted and summed with weights of 1:1:1, and the low-level characteristic representation diagram, denoted y_l, is obtained according to formula (1):
y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1),
wherein ⊙ denotes the operation of selecting the target area with the marking points; i denotes the i-th super-pixel clustering performed on the original ultrasonic image I of the breast lesion; n_i is the number of super-pixel blocks used for the i-th clustering, with n_1 = 8, n_2 = 15, n_3 = 50 and i = 1, 2, 3; and b_j denotes the three marking points b_1, b_2, b_3 selected in the step S2, with j = 1, 2, 3.
Further, in step S3, the original ultrasonic image I of the breast lesion is processed by using a multi-scale region grouping method to obtain three object suggestion diagrams q(I, m_i) at three different scales, wherein m_i ∈ M{q(I, m_i)} and i = 1, 2, 3. The multi-scale diagrams are then normalized to the same scale and integrated into a complete multi-scale cluster map. Finally, the three marking points selected in the step S2 are used to select and fuse target areas on the multi-scale cluster map, and the high-level characteristic representation diagram, denoted y_h, is obtained according to formula (2),
wherein ⊙ denotes the operation of selecting the target area with the marking points, and b_j denotes the three marking points b_1, b_2, b_3 selected in the step S2, with j = 1, 2, 3.
Further, in step S4, the low-level characteristic representation diagram y_l and the high-level characteristic representation diagram y_h are weighted and summed with a weight coefficient of 1:2 according to formula (3), so as to obtain the foreground saliency map, denoted y_f:
y_f = w_1 y_l + w_2 y_h …… formula (3),
wherein w_1 denotes the weight of the low-level characteristic representation diagram y_l in the foreground saliency map y_f, and w_2 denotes the weight of the high-level characteristic representation diagram y_h in the foreground saliency map y_f.
The foreground saliency map y_f is an image containing breast lesion information.
Further, in step S5, the foreground saliency map y_f is subjected to a negation operation according to formula (4), so as to obtain the background saliency map, denoted y_b:
y_b = ¬(y_f) …… formula (4),
wherein ¬(·) denotes the negation operation.
Further, in step S6, the specific process of foreground salient feature map extraction is as follows:
S61-1, in the foreground feature extraction branch network, the original ultrasonic image I of the breast lesion and the foreground saliency map y_f are used together as input, and the foreground low-order features are extracted by a foreground shallow feature extraction module. The foreground shallow feature extraction module consists of a convolution operation with a convolution kernel size of 5×5, a convolution operation with a convolution kernel size of 1×1, a normalization operation and an activation operation.
S61-2, the foreground low-order features obtained in the step S61-1 sequentially pass through a first foreground progressive dense feature extraction module, a first foreground transition module, a second foreground progressive dense feature extraction module, a second foreground transition module and a third foreground progressive dense feature extraction module to extract foreground high-order features.
Each foreground progressive dense feature extraction module comprises three convolution units, wherein the convolution units consist of convolution operation with a convolution kernel size of 1×1, convolution operation with a convolution kernel size of 3×3, convolution operation with a convolution kernel size of 1×1, normalization operation and activation operation.
Each foreground transition module is composed of convolution operation with the convolution kernel size of 3×3, normalization operation, activation operation and maximum pooling operation.
Further, in step S6, the specific process of extracting the background saliency feature image is as follows:
S62-1, in the background feature extraction branch network, the original ultrasonic image I of the breast lesion and the background saliency map y_b are used together as input, and the background low-order features are extracted by a background shallow feature extraction module. The background shallow feature extraction module consists of a convolution operation with a convolution kernel size of 5×5, a convolution operation with a convolution kernel size of 1×1, a normalization operation and an activation operation.
S62-2, the background low-order features obtained in the step S62-1 sequentially pass through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module to extract background high-order features.
Each background progressive dense feature extraction module comprises three convolution units, wherein the convolution units consist of convolution operation with a convolution kernel size of 1×1, convolution operation with a convolution kernel size of 3×3 and convolution operation with a convolution kernel size of 1×1, normalization operation and activation operation.
Each background transition module is respectively composed of convolution operation with the convolution kernel size of 3×3, normalization operation, activation operation and maximum pooling operation.
Further, in step S7, in the hierarchical feature fusion branch network, the specific process of fusing the foreground salient feature map and the background salient feature map is as follows:
s7-1, inputting an output feature map of the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module, wherein the first feature fusion module specifically comprises:
firstly, respectively executing convolution operation with convolution kernel size of 1 x 1 and convolution operation with convolution kernel size of 3 x 3, normalization operation and activation operation on an output feature map of the first foreground progressive dense feature extraction module and an output feature map of the first background progressive dense feature extraction module, and then integrating on a channel to obtain a fusion feature map;
then, the fusion feature map is continuously subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, convolution operation with the convolution kernel size of 3 multiplied by 3, convolution operation with the convolution kernel size of 1 multiplied by 1, normalization operation and activation operation, and after the fusion feature is learned, the fusion feature map is divided into two paths, wherein one path is input into a second fusion feature module, and the other path is input into a multi-scale fusion unit after the maximum pooling operation of 4 multiplied by 4.
S7-2, inputting output feature graphs of the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module into a second feature fusion module, wherein the output feature graphs specifically comprise:
firstly, the output feature map of the second foreground progressive dense feature extraction module and the output feature map of the second background progressive dense feature extraction module are respectively subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, convolution operation with the convolution kernel size of 3 multiplied by 3, normalization operation and activation operation, and then are connected with the feature map output by the first feature fusion module in a channel manner to obtain a fusion feature map;
then, the fusion feature map is divided into two paths after convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, convolution operation with the convolution kernel size of 1×1, normalization operation and activation operation, wherein one path is input to a third fusion feature module, and the other path is input to a multi-scale fusion unit.
S7-3, inputting the feature graphs output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into a third feature fusion module, wherein the feature graphs specifically comprise:
firstly, the output feature map of the third foreground progressive dense feature extraction module and the output feature map of the third background progressive dense feature extraction module are respectively subjected to convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, normalization operation and activation operation, and then are connected with the feature map output by the second feature fusion module in a channel manner to obtain a fusion feature map;
then, the fusion feature map is input into the multi-scale fusion module after convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, convolution operation with the convolution kernel size of 1×1, normalization operation and activation operation.
S7-4, performing channel connection on the three fusion feature images obtained in the step S7-1, the step S7-2 and the step S7-3 in a multi-scale fusion unit to obtain a multi-scale fusion feature image, wherein the method specifically comprises the following steps:
and (3) carrying out multi-scale fusion on the three fusion feature images obtained in the steps S7-1, S7-2 and S7-3 in the channel direction to obtain a multi-scale fusion feature image. The multi-scale fusion feature map comprises multi-scale information features of foreground and background information.
Further, in step S8, foreground and background multiscale information features in the multiscale fusion feature map are processed and integrated, and the multiscale fusion feature map continues to undergo convolution operation with a convolution kernel size of 1×1, convolution operation with a convolution kernel size of 3×3, convolution operation with a convolution kernel size of 1×1, normalization operation and activation operation, so as to obtain local information features.
And then, generating global information features by using global average pooling.
Finally, the global information features pass through a dropout layer with a dropout rate of 0.2 and a fully connected layer to obtain the multi-scale fusion characteristic parameter of the original ultrasonic image of the breast lesion, and the range of the multi-scale fusion characteristic parameter is [0,1].
The invention provides a method for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images by using a saliency map-guided hierarchical dense characteristic fusion network, which comprises the following steps: S1, reading an original ultrasonic image of the breast lesion; S2, randomly selecting at least three marking points in the lesion area on the original ultrasonic image of the breast lesion; S3, processing the original ultrasonic image of the breast lesion in the step S1 by using a linear spectral clustering super-pixel method to obtain a low-level characteristic representation diagram, and processing the original ultrasonic image of the breast lesion in the step S1 by using a multi-scale region grouping method to obtain a high-level characteristic representation diagram; S4, respectively selecting target areas in the low-level characteristic representation diagram and the high-level characteristic representation diagram obtained in the step S3 by using the marking points in the step S2, and carrying out weighted summation on the target areas in the two selected images to obtain a foreground saliency map; S5, performing a reversal operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map; S6, inputting the foreground saliency map obtained in the step S4 and the original ultrasonic image of the breast lesion in the step S1 into a foreground feature extraction network branch together, and extracting foreground salient features to obtain a foreground salient feature map; inputting the background saliency map obtained in the step S5 and the original ultrasonic image of the breast lesion in the step S1 into a background feature extraction network branch together, and extracting background salient features to obtain a background salient feature map; S7, in the hierarchical dense feature fusion branch network, fusing and learning the foreground salient feature map and the background salient feature map extracted in the step S6 to obtain a multi-scale fusion feature map; and S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to train a classifier and obtain the multi-scale fusion characteristic parameters. Through the hierarchical feature fusion branch network, the invention makes full use of the complementarity and correlation between the extracted foreground (breast lesion) features and background (surrounding tissue) features and integrates them, guiding the network to obtain the multi-scale fusion characteristic parameters of the original ultrasonic image of the breast lesion more accurately, which improves the salient feature extraction capability for the original ultrasonic image of the breast lesion and thus the accuracy of breast ultrasound image feature extraction.
Drawings
The invention is further illustrated by the accompanying drawings, the content of which does not constitute any limitation of the invention.
Fig. 1 is a flow chart of the invention for extracting multiscale fusion characteristic parameters of an ultrasonic image of breast lesions based on a hierarchical dense characteristic fusion network guided by a saliency map.
Fig. 2 is a three-branch network frame diagram of the present invention.
Fig. 3 is a frame diagram of a single feature fusion module in the hierarchical dense feature fusion branch network of the present invention, where there is no output feature diagram of the previous feature fusion module when the module is the first feature fusion module.
Fig. 4 shows the results of feature extraction on dataset A obtained by applying the method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network.
Fig. 5 is a result of feature extraction of a dataset B by applying the method for extracting multiscale fusion feature parameters of a breast lesion ultrasound image based on a saliency map-guided hierarchical dense feature fusion network of the present invention.
Detailed Description
The invention is further illustrated with reference to the following examples.
Example 1
A method for extracting multiscale fusion characteristic parameters of a breast lesion ultrasonic image based on a hierarchical dense characteristic fusion network guided by a saliency map is used for improving accuracy of the multiscale fusion characteristic parameters of the breast lesion ultrasonic image. As shown in fig. 1, the method comprises the steps of:
s1, reading an original ultrasonic image I of the breast lesion. The image data is acquired by special ultrasonic imaging equipment and is a single-channel two-dimensional image.
S2, randomly selecting at least three marking points in the lesion area on the original ultrasonic image I of the breast lesion. The number of marking points is not strictly limited; in general, the more marking points there are, the more accurately the region of the breast lesion can be determined. In practice, however, as the number of marking points increases, the complexity and the running time of the operation also increase. Three marking points are the minimum needed to determine a region. In the method, three marking points are selected to determine the lesion area, so that a satisfactory result can be obtained with little time and computational cost. The marking points are selected by an experienced medical image analysis professional, which ensures that the area enclosed by the selected marking points on the original ultrasonic image of the breast lesion is a target area containing breast lesion information.
S3, processing the original ultrasonic image I of the breast lesion in the step S1 by using a linear spectral clustering super-pixel method to obtain the low-level characteristic representation diagram y_l. The specific process is as follows:
Because different human tissues differ in size, in order to ensure that the generated characteristic representation diagram can cover the complete lesion area, different numbers of super-pixel blocks n_i are first set and three super-pixel images p(I, n_i) are obtained. Then, the three marking points selected in the step S2 are used to select target areas on each super-pixel image respectively, the three target area images are weighted and summed with weights of 1:1:1, and the low-level characteristic representation diagram y_l is obtained according to formula (1):
y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1),
wherein ⊙ denotes the operation of selecting the target area with the marking points; i denotes the i-th super-pixel clustering performed on the original ultrasonic image I of the breast lesion; n_i is the number of super-pixel blocks set in the experiment for the i-th clustering, with n_1 = 8, n_2 = 15, n_3 = 50 and i = 1, 2, 3; and b_j denotes the three marking points b_1, b_2, b_3 selected in the step S2, with j = 1, 2, 3.
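To make this construction concrete, the following is a minimal Python sketch, not the patented implementation: scikit-image's SLIC superpixels stand in for the linear spectral clustering method, the helper select_target_region and the marker coordinates (given as (row, column) pixel positions of b_1, b_2, b_3) are hypothetical names, and the block numbers (8, 15, 50) and 1:1:1 weights follow the embodiment.

```python
import numpy as np
from skimage.segmentation import slic  # stand-in for linear spectral clustering (LSC)

def select_target_region(label_map, markers):
    """Union of the superpixel blocks that contain any marker point (the ⊙ selection)."""
    mask = np.zeros(label_map.shape, dtype=np.float32)
    for r, c in markers:
        mask[label_map == label_map[r, c]] = 1.0
    return mask

def low_level_map(image, markers, block_numbers=(8, 15, 50), weights=(1.0, 1.0, 1.0)):
    """Formula (1): y_l = sum_i sum_j b_j ⊙ p(I, n_i), weighted 1:1:1 over three clusterings."""
    y_l = np.zeros(image.shape, dtype=np.float32)
    for n_i, w in zip(block_numbers, weights):
        labels = slic(image, n_segments=n_i, channel_axis=None)  # superpixel image p(I, n_i)
        y_l += w * select_target_region(labels, markers)
    return y_l / max(y_l.max(), 1e-8)  # normalize to [0, 1]
```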
From experimental observation, the characteristic representation diagram obtained by the linear spectral clustering super-pixel method alone cannot perfectly cover the whole lesion area and may cause the loss of part of the useful information. The invention therefore uses a multi-scale region grouping method to remedy this defect. The specific process is as follows: the original ultrasonic image I of the breast lesion in the step S1 is processed by using a multi-scale region grouping method to obtain three object suggestion diagrams q(I, m_i) at three different scales, wherein m_i ∈ M{q(I, m_i)} and i = 1, 2, 3. The multi-scale diagrams are then normalized to the same scale and integrated into a complete multi-scale cluster map. Finally, the three marking points selected in the step S2 are used to select and fuse target areas on the multi-scale cluster map, and the high-level characteristic representation diagram y_h is obtained according to formula (2),
wherein ⊙ denotes the operation of selecting the target area with the marking points, and b_j denotes the three marking points b_1, b_2, b_3 selected in the step S2, with j = 1, 2, 3.
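A companion sketch for the high-level representation map is given below; it reuses select_target_region from the sketch above. Felzenszwalb graph-based segmentation at three scale parameters is used as a stand-in for the patent's multi-scale region grouping, the scale values are illustrative, and summing the per-scale selections is an assumption about how the proposal maps are integrated into the multi-scale cluster map.

```python
import numpy as np
from skimage.segmentation import felzenszwalb  # stand-in for multi-scale region grouping

def high_level_map(image, markers, scales=(50, 150, 300)):
    """Approximate y_h: select and fuse marker-covered regions over three proposal scales."""
    y_h = np.zeros(image.shape, dtype=np.float32)
    for m_i in scales:
        proposal = felzenszwalb(image, scale=m_i)      # object proposal map q(I, m_i)
        y_h += select_target_region(proposal, markers)  # ⊙ selection with the marker points
    return y_h / max(y_h.max(), 1e-8)                  # normalize to [0, 1]
```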
S4, the marking points in the step S2 are used to select the target lesion areas in the low-level characteristic representation diagram y_l and the high-level characteristic representation diagram y_h obtained in the step S3. The low-level characteristic representation diagram y_l and the high-level characteristic representation diagram y_h are then weighted and summed with a weight coefficient of 1:2 according to formula (3) to obtain the foreground saliency map y_f:
y_f = w_1 y_l + w_2 y_h …… formula (3),
wherein w_1 denotes the weight of the low-level characteristic representation diagram y_l in the foreground saliency map y_f, and w_2 denotes the weight of the high-level characteristic representation diagram y_h in the foreground saliency map y_f.
S5, according to formula (4), the foreground saliency map y_f obtained in the step S4 is subjected to a negation operation to obtain the background saliency map y_b:
y_b = ¬(y_f) …… formula (4),
wherein ¬(·) denotes the negation operation.
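The saliency maps of formulas (3) and (4) then follow directly. A minimal sketch is shown below; treating both representation maps as already normalized to [0, 1], and rescaling the weighted sum back into [0, 1] before negation, are assumptions of this sketch.

```python
import numpy as np

def saliency_maps(y_l, y_h, w1=1.0, w2=2.0):
    """Formula (3): 1:2 weighted sum; formula (4): negation of the foreground map."""
    y_f = w1 * y_l + w2 * y_h
    y_f = y_f / max(y_f.max(), 1e-8)   # keep the foreground saliency map in [0, 1]
    y_b = 1.0 - y_f                    # background saliency map by negation
    return y_f, y_b
```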
S6, the foreground saliency map y_f obtained in the step S4 and the original ultrasonic image I of the breast lesion in the step S1 are input together into the foreground feature extraction network branch, and foreground salient features are extracted to obtain the foreground salient feature map. The specific process is as follows:
S61-1, in the foreground feature extraction branch network, the original ultrasonic image I of the breast lesion and the foreground saliency map y_f are used together as input, and the foreground low-order features are preliminarily extracted by a foreground shallow feature extraction module. The foreground shallow feature extraction module consists of a convolution operation with a convolution kernel size of 5×5, a convolution operation with a convolution kernel size of 1×1, a normalization operation and an activation operation, and can obtain more texture features within a larger receptive field.
S61-2, the foreground low-order features obtained in the step S61-1 sequentially pass through a first foreground progressive dense feature extraction module, a first foreground transition module, a second foreground progressive dense feature extraction module, a second foreground transition module and a third foreground progressive dense feature extraction module to extract foreground high-order features;
each foreground progressive dense feature extraction module comprises three convolution units, wherein the convolution units consist of convolution operation with a convolution kernel size of 1×1, convolution operation with a convolution kernel size of 3×3, convolution operation with a convolution kernel size of 1×1, normalization operation and activation operation, and can effectively extract foreground and background specific useful features. Dense links acting between convolution units are mainly used for progressive propagation of features, continuously extracting useful features from low order to high order.
Each foreground transition module consists of convolution operation with convolution kernel size of 3×3, normalization operation, activation operation, and maximum pooling operation. The transition module is mainly used for solving the problem of feature redundancy, reducing the number of channels and the spatial resolution of features, reducing the calculation cost and improving the calculation efficiency.
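As an illustration of how such a branch could be assembled, the following PyTorch sketch implements the three building blocks described above: the shallow feature extraction module, a progressive dense feature extraction module with three densely linked convolution units, and a transition module. It is an assumption about the implementation rather than the patented code; the channel widths, the growth rate and the 2×2 pooling size are illustrative choices.

```python
import torch
import torch.nn as nn

class ShallowFeatures(nn.Module):
    """5x5 conv, 1x1 conv, BN, ReLU; input is the image stacked with its saliency map."""
    def __init__(self, in_ch=2, out_ch=32):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class ConvUnit(nn.Module):
    """1x1, 3x3 and 1x1 convolutions followed by BN and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class ProgressiveDenseBlock(nn.Module):
    """Three convolution units with dense links: each unit sees all earlier outputs."""
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.units = nn.ModuleList(
            [ConvUnit(in_ch + i * growth, growth) for i in range(3)])

    def forward(self, x):
        feats = [x]
        for unit in self.units:
            feats.append(unit(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)  # concatenated outputs of the three units

class Transition(nn.Module):
    """3x3 conv, BN, ReLU and max pooling to cut channels and spatial resolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))

    def forward(self, x):
        return self.block(x)
```

The background branch described next has the same structure in this sketch, differing only in that the background saliency map, rather than the foreground one, is stacked with the original image as input.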
Meanwhile, the background saliency map y_b obtained in the step S5 and the original ultrasonic image I of the breast lesion in the step S1 are input together into the background feature extraction network branch, and background salient features are extracted to obtain the background salient feature map. The specific process is as follows:
S62-1, in the background feature extraction branch network, the original ultrasonic image I of the breast lesion and the background saliency map y_b are used together as input, and the background low-order features are extracted by a background shallow feature extraction module. The background shallow feature extraction module consists of a convolution operation with a convolution kernel size of 5×5, a convolution operation with a convolution kernel size of 1×1, a normalization operation and an activation operation.
S62-2, the background low-order features obtained in the step S62-1 sequentially pass through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module to extract background high-order features.
The parameter configuration of the invention for the structure of the progressive dense feature extraction module is shown in table 1.
TABLE 1 parameter configuration of progressive dense feature extraction module architecture
In table 1, BN represents a normalization operation, and ReLU represents an activation operation.
S7, in the layered feature fusion branch network, the foreground salient feature map and the background salient feature map extracted in the step S6 are fused and learned by utilizing the correlation or complementation existing between the foreground salient feature map and the background salient feature map, so as to obtain a multi-scale fusion feature map, as shown in FIG. 3, and the specific process is as follows:
s7-1, inputting an output feature map of the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module, wherein the first feature fusion module specifically comprises:
firstly, respectively executing convolution operation with convolution kernel size of 1 x 1 and convolution operation with convolution kernel size of 3 x 3, normalization operation and activation operation on an output feature map of the first foreground progressive dense feature extraction module and an output feature map of the first background progressive dense feature extraction module, and then integrating on a channel to obtain a fusion feature map;
then, the fusion feature map is continuously subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, convolution operation with the convolution kernel size of 3 multiplied by 3, convolution operation with the convolution kernel size of 1 multiplied by 1, normalization operation and activation operation, and after the fusion feature is learned, the fusion feature map is divided into two paths, wherein one path is input into a second fusion feature module, and the other path is input into a multi-scale fusion unit after the maximum pooling operation of 4 multiplied by 4.
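A PyTorch sketch of a single feature fusion module of this kind follows; it is an assumption about the implementation, with illustrative channel widths. Each branch's feature map goes through its own 1×1/3×3 convolution path with BN and ReLU, the two results are concatenated on the channel axis together with the previous fusion output when one exists, and the concatenation is refined by a 1×1, 3×3, 1×1 bottleneck. Aligning the previous fusion output spatially by adaptive pooling is also an assumption, since the patent leaves that detail to Fig. 3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch):
    """1x1 conv, 3x3 conv, BN, ReLU, applied to each incoming branch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class FeatureFusionModule(nn.Module):
    def __init__(self, fg_ch, bg_ch, prev_ch=0, out_ch=64):
        super().__init__()
        self.fg_path = conv_bn_relu(fg_ch, out_ch)
        self.bg_path = conv_bn_relu(bg_ch, out_ch)
        self.refine = nn.Sequential(                     # 1x1-3x3-1x1 bottleneck + BN + ReLU
            nn.Conv2d(2 * out_ch + prev_ch, out_ch, kernel_size=1),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, fg, bg, prev=None):
        parts = [self.fg_path(fg), self.bg_path(bg)]
        if prev is not None:                             # absent for the first fusion module
            prev = F.adaptive_max_pool2d(prev, fg.shape[-2:])  # spatial alignment (assumed)
            parts.append(prev)
        return self.refine(torch.cat(parts, dim=1))
```

In this sketch, the first module's output would additionally pass through nn.MaxPool2d(4), and the second module's through nn.MaxPool2d(2), on the paths that feed the multi-scale fusion unit.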
S7-2, inputting output feature graphs of the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module into a second feature fusion module, wherein the output feature graphs specifically comprise:
firstly, the output feature map of the second foreground progressive dense feature extraction module and the output feature map of the second background progressive dense feature extraction module are respectively subjected to convolution operation with the convolution kernel size of 1 multiplied by 1, convolution operation with the convolution kernel size of 3 multiplied by 3, normalization operation and activation operation, and then are connected with the feature map output by the first feature fusion module in a channel manner to obtain a fusion feature map;
then, the fusion feature map is divided into two paths after convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, convolution operation with the convolution kernel size of 1×1, normalization operation and activation operation, wherein one path is input to a third fusion feature module, and the other path is input to a multi-scale fusion unit.
S7-3, inputting the feature graphs output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into a third feature fusion module, wherein the feature graphs specifically comprise:
firstly, the output feature map of the third foreground progressive dense feature extraction module and the output feature map of the third background progressive dense feature extraction module are respectively subjected to convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, normalization operation and activation operation, and then are connected with the feature map output by the second feature fusion module in a channel manner to obtain a fusion feature map;
then, the fusion feature map is input into the multi-scale fusion module after convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, convolution operation with the convolution kernel size of 1×1, normalization operation and activation operation.
S7-4, performing channel connection on the three fusion feature images obtained in the step S7-1, the step S7-2 and the step S7-3 in a multi-scale fusion unit to obtain a multi-scale feature image, wherein the method specifically comprises the following steps:
and (3) carrying out multi-scale fusion on the three fusion feature images obtained in the steps S7-1, S7-2 and S7-3 in the channel direction to obtain a multi-scale fusion feature image. The multi-scale fusion feature map comprises multi-scale information features of foreground and background information.
Further, in step S8, foreground and background multiscale information features in the multiscale fusion feature map are processed and integrated, and the multiscale fusion feature map continues to undergo convolution operation with a convolution kernel size of 1×1, convolution operation with a convolution kernel size of 3×3, convolution operation with a convolution kernel size of 1×1, normalization operation and activation operation, so as to obtain local information features.
And then, generating global information features by using global average pooling.
Finally, the global information features pass through a dropout layer with a dropout rate of 0.2 and a fully connected layer to obtain the multi-scale fusion characteristic parameter of the original ultrasonic image of the breast lesion, and the range of the multi-scale fusion characteristic parameter is [0,1].
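A PyTorch sketch of the multi-scale fusion unit and classification head described here is given below, including the 4×4 and 2×2 max pooling applied to the first and second fusion maps as described in step S7 and in the following paragraphs; the channel width and the final sigmoid that keeps the output in [0, 1] are assumptions of the sketch.

```python
import torch
import torch.nn as nn

class MultiScaleFusionUnit(nn.Module):
    def __init__(self, ch=64, num_outputs=1):
        super().__init__()
        self.pool1 = nn.MaxPool2d(4)   # first fusion map: 4x4 max pooling
        self.pool2 = nn.MaxPool2d(2)   # second fusion map: 2x2 max pooling
        self.refine = nn.Sequential(   # 1x1-3x3-1x1 bottleneck + BN + ReLU -> local features
            nn.Conv2d(3 * ch, ch, kernel_size=1),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.Conv2d(ch, ch, kernel_size=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.head = nn.Sequential(     # global average pooling -> dropout (0.2) -> FC layer
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.2), nn.Linear(ch, num_outputs), nn.Sigmoid())

    def forward(self, f1, f2, f3):
        x = torch.cat([self.pool1(f1), self.pool2(f2), f3], dim=1)  # channel connection
        return self.head(self.refine(x))
```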
In step S7, the three feature fusion modules in the hierarchical feature fusion branch network respectively receive complementary multi-scale feature maps from different stages of the foreground feature extraction branch network and the background feature extraction branch network. The feature maps of the foreground and background branches are subjected to a convolution operation with a convolution kernel size of 1×1 and a convolution operation with a convolution kernel size of 3×3, and normalization and activation operations are performed to prevent vanishing gradients and to enhance the sparsity of the network.
In different feature fusion modules, except the first feature fusion module, each feature fusion module connects the foreground feature image and the background feature image with the feature image obtained by the previous feature fusion module on a channel, and the output fusion feature image further extracts high-order features related to tasks through convolution operation with the convolution kernel size of 1×1, convolution operation with the convolution kernel size of 3×3, convolution operation with the convolution kernel size of 1×1, normalization operation, activation and other operations.
The high-order feature images obtained by the first and second feature fusion modules are respectively subjected to maximum pooling operation of 4 multiplied by 4 and 2 multiplied by 2, and are input into a multi-scale fusion unit together with the feature images in the third feature fusion module to be connected on a channel, so that more detail texture features are reserved in an auxiliary mode, and operation cost is reduced.
The robustness of the method can be evaluated by applying the finally obtained multi-scale fusion characteristic parameters to breast ultrasound image texture analysis and measuring six performance indexes: accuracy, precision, F1-score, sensitivity, specificity and AUC. Accuracy is the ratio of the number of correctly classified samples to the total number of samples; the closer the value is to 1, the better the result. Precision is the probability that a sample predicted to be positive is actually positive. F1-score is the harmonic mean of precision and recall; it rewards increasing both while reducing the difference between them, and the closer it is to 1, the more accurate the result. Sensitivity is the ratio of true positive samples to the sum of true positive and false negative samples. Specificity is the ratio of true negative samples to the sum of false positive and true negative samples. Higher sensitivity and higher specificity indicate a lower missed-detection rate and a lower false-alarm rate, respectively. AUC is the area under the Receiver Operating Characteristic (ROC) curve; the closer it is to 1, the more reliable the method. Table 2 shows the performance index values obtained by the method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network (HDFA-Net) and by other methods for feature extraction on dataset A. Fig. 4 shows, on actual medical images, the results of feature extraction on dataset A by the HDFA-Net method of the invention and by other methods.
Table 2 comparison of performance index values for feature extraction of dataset a by this and other methods
As can be seen from the results in Table 2, the method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network (HDFA-Net) is superior, in all performance index values, to methods based on other network structures such as ML-Net, MT-Net, F2-Net, MG-Net and FCN-Net.
The method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network constructs a three-branch hierarchical dense characteristic fusion network to extract and fuse foreground features and background features, and thereby extracts the multi-scale fusion characteristic parameters of the breast lesion ultrasonic image. The foreground and background progressive dense feature extraction branch networks take the original image and the corresponding saliency map as joint input and are used to effectively extract the foreground and background features relevant to the classification task, respectively. Using the known correlation and complementary information between the foreground and the background, the hierarchical feature fusion branch network fuses the foreground and background information at multiple scales to obtain more accurate and more discriminative multi-scale fusion characteristic parameters.
The method requires little computation time and computational cost, and the characteristic representation diagrams obtained by combining the linear spectral clustering super-pixel method with the multi-scale region grouping method can cover the characteristic information of the whole region of interest to the greatest extent, avoiding the loss of useful information. In the hierarchical dense feature fusion network, the dense links acting between the convolution units are used for the progressive propagation of features, continuously extracting effective features from low order to high order. In the hierarchical dense feature fusion network, the transition modules are used to address feature redundancy and to reduce the number of channels and the spatial resolution of the features, thereby reducing the computational cost and improving the computational efficiency.
Example 2
The multi-scale fusion characteristic parameters obtained by applying the method of Example 1 to dataset B were used for breast ultrasound image texture analysis. Table 3 shows the performance index values obtained by the method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network (HDFA-Net) and by other methods for feature extraction on dataset B. Fig. 5 shows, on actual medical images, the results of feature extraction on dataset B by the HDFA-Net method of the invention and by other methods.
Table 3 comparison of performance index values for feature extraction of dataset B by this and other methods
From the results in Table 3, the method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network (HDFA-Net) is superior, in all performance index values, to methods based on other network structures such as ML-Net, MT-Net, F2-Net, MG-Net and FCN-Net.
The method of the invention for extracting multi-scale fusion characteristic parameters of breast lesion ultrasonic images based on the saliency map-guided hierarchical dense characteristic fusion network constructs a three-branch hierarchical dense characteristic fusion network to extract and fuse foreground features and background features, and thereby extracts the multi-scale fusion characteristic parameters of the breast lesion ultrasonic image. The foreground and background progressive dense feature extraction branch networks take the original image and the corresponding saliency map as joint input and are used to effectively extract the foreground and background features relevant to the classification task, respectively. Using the known correlation and complementary information between the foreground and the background, the hierarchical feature fusion branch network fuses the foreground and background information at multiple scales to obtain more accurate and more discriminative multi-scale fusion characteristic parameters.
The method requires little computation time and computational cost, and the characteristic representation diagrams obtained by combining the linear spectral clustering super-pixel method with the multi-scale region grouping method can cover the characteristic information of the whole region of interest to the greatest extent, avoiding the loss of useful information. In the hierarchical dense feature fusion network, the dense links acting between the convolution units are used for the progressive propagation of features, continuously extracting effective features from low order to high order. In the hierarchical dense feature fusion network, the transition modules are used to address feature redundancy and to reduce the number of channels and the spatial resolution of the features, thereby reducing the computational cost and improving the computational efficiency.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. The method for extracting the multiscale fusion characteristic parameters of the breast lesion ultrasonic image based on the hierarchical dense characteristic fusion network guided by the saliency map is characterized by comprising the following steps of:
S1, reading an original ultrasonic image of the breast lesion;
S2, randomly selecting at least three marking points in a lesion area on the original ultrasonic image of the breast lesion;
S3, processing the original ultrasonic image of the breast lesion in the step S1 by using a linear spectral clustering super-pixel method to obtain a low-level feature representation map;
processing the original ultrasonic image of the breast lesion in the step S1 by using a multi-scale region grouping method to obtain a high-level feature representation map;
S4, respectively selecting target areas in the low-level feature representation map and the high-level feature representation map obtained in the step S3 by using the marking points in the step S2, and carrying out weighted summation on the target areas in the two selected images to obtain a foreground saliency map;
S5, performing a reversal operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map;
S6, inputting the foreground saliency map obtained in the step S4 and the original ultrasonic image of the breast lesion in the step S1 into a foreground feature extraction network branch together, and extracting foreground salient features to obtain a foreground salient feature map;
inputting the background saliency map obtained in the step S5 and the original ultrasonic image of the breast lesion in the step S1 into a background feature extraction network branch together, and extracting background salient features to obtain a background salient feature map;
S7, in the hierarchical dense feature fusion branch network, fusing and learning the foreground salient feature map and the background salient feature map extracted in the step S6 to obtain a multi-scale fusion feature map;
and S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to train a classifier and obtain multi-scale fusion feature parameters.
2. The method for extracting multiscale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps according to claim 1, wherein in step S1, the breast lesion original ultrasound image is a single-channel two-dimensional image.
3. The method for extracting multiscale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps according to claim 2, wherein in step S2, the region formed by the at least three marker points on the breast lesion original ultrasound image is a target region containing breast lesion information.
4. The method for extracting multiscale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps of claim 3,
in step S3, the original ultrasonic image of the breast lesion in step S1 is processed by using a linear spectral clustering super-pixel method, so as to obtain a low-level characteristic representation, which specifically comprises the following steps:
S31-1, setting the number of super-pixel blocks to obtain a super-pixel image p(I, n_i);
S31-2, selecting target areas on the super-pixel images by using all the marking points selected in the step S2, so as to obtain target area images;
S31-3, carrying out weighted summation on the target area images, and obtaining the low-level feature representation map, denoted y_l, according to formula (1):
y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1);
wherein the original ultrasonic image of the breast lesion is denoted I; ⊙ denotes the operation of selecting the target area with the marking points; i denotes the i-th super-pixel clustering performed on the original ultrasonic image of the breast lesion; n_i denotes the number of super-pixel blocks used for the i-th super-pixel clustering, i being a positive integer; and b_j denotes the j-th marking point selected in the step S2, j being a positive integer.
5. The method for extracting multi-scale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps according to claim 3, wherein in step S3, the original ultrasound image of the breast lesion in step S1 is processed by using a multi-scale region grouping method to obtain a high-level feature representation map, specifically:
S32-1, setting the scales of the object suggestion maps to obtain object suggestion maps q(I, m_i);
S32-2, normalizing the multi-scale maps to the same scale and integrating them into a complete multi-scale cluster map;
S32-3, selecting and fusing target areas on the multi-scale cluster map by using all the marking points selected in the step S2, and obtaining the high-level feature representation map, denoted y_h, according to formula (2);
wherein ⊙ denotes the operation of selecting the target area with all the marking points in the step S2; b_j denotes the j-th marking point selected in the step S2, j being a positive integer; and m_i ∈ M{q(I, m_i)}.
6. The method for extracting multiscale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps according to claim 4 or 5, wherein in step S4, the low-level feature representation map y_l and the high-level feature representation map y_h are weighted and summed with a weight coefficient of 1:2 according to formula (3), so as to obtain a foreground saliency map, denoted y_f:
y_f = w_1 y_l + w_2 y_h …… formula (3);
wherein w_1 denotes the weight of the low-level feature representation map y_l in the foreground saliency map y_f, and w_2 denotes the weight of the high-level feature representation map y_h in the foreground saliency map y_f;
the foreground saliency map y_f is an image containing breast lesion information.
7. The method for extracting multi-scale fusion feature parameters of a breast lesion ultrasound image based on a hierarchical dense feature fusion network guided by saliency maps according to claim 6, wherein in step S5, the foreground saliency map y_f is subjected to a negation operation according to formula (4), so as to obtain a background saliency map, denoted y_b:
y_b = ¬(y_f) …… formula (4);
wherein ¬(·) denotes the negation operation.
8. The method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images based on a hierarchical dense feature fusion network guided by saliency maps according to claim 7, wherein in step S6, the specific process of foreground saliency feature map extraction is as follows:
S61-1, in the foreground feature extraction branch network, the original ultrasound image I of the breast lesion and the foreground saliency map y_f are jointly used as input, and the foreground low-order features are extracted by a foreground shallow feature extraction module;
s61-2, the foreground low-order features obtained in the step S61-1 sequentially pass through a first foreground progressive dense feature extraction module, a first foreground transition module, a second foreground progressive dense feature extraction module, a second foreground transition module and a third foreground progressive dense feature extraction module to extract foreground high-order features.
9. The method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images based on a hierarchical dense feature fusion network guided by saliency maps according to claim 8, wherein in step S6, the specific process of extracting background saliency feature maps is as follows:
S62-1, in the background feature extraction branch network, the original ultrasound image I of the breast lesion and the background saliency map y_b are taken together as input, and background low-order features are extracted through a background shallow feature extraction module;
S62-2, the background low-order features obtained in step S62-1 sequentially pass through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module to extract background high-order features.
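The foreground branch of claim 8 and the background branch of claim 9 share the same layout: a shallow feature extraction module followed by three progressive dense feature extraction modules separated by two transition modules, with the original image and the corresponding saliency map concatenated as input. The PyTorch sketch below only mirrors that ordering; the module internals (plain convolutions and pooling), the channel width, and the assumption of a single-channel ultrasound image are placeholder choices, not the claimed module designs.

```python
import torch
import torch.nn as nn

class SaliencyGuidedBranch(nn.Module):
    """Structural sketch of one extraction branch (foreground or background):
    shallow module -> dense1 -> transition1 -> dense2 -> transition2 -> dense3.
    """
    def __init__(self, channels: int = 64):
        super().__init__()
        # assumes a single-channel image plus a single-channel saliency map
        self.shallow = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        dense = lambda c: nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True))
        trans = lambda c: nn.Sequential(nn.Conv2d(c, c, 1), nn.AvgPool2d(2))
        self.dense1, self.trans1 = dense(channels), trans(channels)
        self.dense2, self.trans2 = dense(channels), trans(channels)
        self.dense3 = dense(channels)

    def forward(self, image: torch.Tensor, saliency: torch.Tensor):
        x = self.shallow(torch.cat([image, saliency], dim=1))  # low-order features
        f1 = self.dense1(x)                # fed to the first feature fusion module
        f2 = self.dense2(self.trans1(f1))  # fed to the second feature fusion module
        f3 = self.dense3(self.trans2(f2))  # fed to the third feature fusion module
        return f1, f2, f3
```

Two such branches would be instantiated, one receiving (I, y_f) and the other (I, y_b), producing the three pairs of feature maps consumed by the hierarchical feature fusion branch of claim 10.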
10. The method for extracting multi-scale fusion feature parameters of breast lesion ultrasound images based on the hierarchical dense feature fusion network guided by saliency maps according to claim 9, wherein in step S7, in the hierarchical feature fusion branch network, the specific process of fusing the foreground saliency feature map and the background saliency feature map is as follows:
S7-1, inputting the output feature maps of the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module to learn fusion features;
the obtained fusion feature map is divided into two paths: one path is input into the second feature fusion module, and the other path is input into the multi-scale fusion unit;
S7-2, inputting the output feature maps of the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module into the second feature fusion module to learn fusion features;
the obtained fusion feature map is divided into two paths: one path is input into the third feature fusion module, and the other path is input into the multi-scale fusion unit;
S7-3, inputting the feature maps output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into the third feature fusion module to learn fusion features, and inputting the obtained fusion feature map into the multi-scale fusion unit;
S7-4, performing channel concatenation on the three fusion feature maps obtained in steps S7-1, S7-2 and S7-3 in the multi-scale fusion unit to obtain the multi-scale fusion feature map.
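A structural sketch of the hierarchical feature fusion branch of claim 10, taking the foreground and background branch outputs (f1, f2, f3) as inputs. Each feature fusion module is stood in for by channel concatenation plus a 1x1 convolution, and the multi-scale fusion unit by bilinear resizing plus channel concatenation; these stand-ins, and the pooling used to pass each fusion map on to the next fusion module, are assumptions rather than the claimed module designs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusionBranch(nn.Module):
    """Sketch of steps S7-1 to S7-4."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse1 = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # fusion modules 2 and 3 also receive the previous fusion map
        # (one of the two paths described in steps S7-1 and S7-2)
        self.fuse2 = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.fuse3 = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, fg, bg):
        # fg / bg: tuples (f1, f2, f3) from the foreground / background branches,
        # where f2 and f3 are at 1/2 and 1/4 of the resolution of f1
        m1 = self.fuse1(torch.cat([fg[0], bg[0]], dim=1))                       # S7-1
        m2 = self.fuse2(torch.cat([fg[1], bg[1], F.avg_pool2d(m1, 2)], dim=1))  # S7-2
        m3 = self.fuse3(torch.cat([fg[2], bg[2], F.avg_pool2d(m2, 2)], dim=1))  # S7-3
        # S7-4: multi-scale fusion unit -- resize to a common resolution and
        # connect the three fusion maps along the channel dimension
        size = m1.shape[-2:]
        m2u = F.interpolate(m2, size=size, mode="bilinear", align_corners=False)
        m3u = F.interpolate(m3, size=size, mode="bilinear", align_corners=False)
        return torch.cat([m1, m2u, m3u], dim=1)  # multi-scale fusion feature map
```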
CN202111532955.8A 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network Active CN114332572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532955.8A CN114332572B (en) 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Publications (2)

Publication Number Publication Date
CN114332572A CN114332572A (en) 2022-04-12
CN114332572B (en) 2024-03-26

Family

ID=81052895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532955.8A Active CN114332572B (en) 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Country Status (1)

Country Link
CN (1) CN114332572B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423806B (en) * 2022-11-03 2023-03-24 南京信息工程大学 Breast mass detection method based on multi-scale cross-path feature fusion
CN116630680B (en) * 2023-04-06 2024-02-06 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2034439A1 (en) * 2007-09-07 2009-03-11 Thomson Licensing Method for establishing the saliency map of an image
CN107680106A (en) * 2017-10-13 2018-02-09 南京航空航天大学 A kind of conspicuousness object detection method based on Faster R CNN
CN107992874A (en) * 2017-12-20 2018-05-04 武汉大学 Image well-marked target method for extracting region and system based on iteration rarefaction representation
WO2020211522A1 (en) * 2019-04-15 2020-10-22 京东方科技集团股份有限公司 Method and device for detecting salient area of image
CN113379691A (en) * 2021-05-31 2021-09-10 南方医科大学 Breast lesion deep learning segmentation method based on prior guidance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-scale image saliency detection fusing contextual information; Chen Nan'er, Chen Ying; Journal of Chinese Computer Systems; 2017-09-15 (09); full text *

Also Published As

Publication number Publication date
CN114332572A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
CN112270660B (en) Nasopharyngeal carcinoma radiotherapy target area automatic segmentation method based on deep neural network
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
JP2022518446A (en) Medical image detection methods and devices based on deep learning, electronic devices and computer programs
Deng et al. Classification of breast density categories based on SE-Attention neural networks
CN105913086A (en) Computer-aided mammary gland diagnosing method by means of characteristic weight adaptive selection
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
CN110853011B (en) Method for constructing convolutional neural network model for pulmonary nodule detection
CN104484886B (en) A kind of dividing method and device of MR images
Balamurugan et al. Brain tumor segmentation and classification using hybrid deep CNN with LuNetClassifier
Narayanan et al. Understanding deep neural network predictions for medical imaging applications
CN112700461B (en) System for pulmonary nodule detection and characterization class identification
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images
CN114565572A (en) Cerebral hemorrhage CT image classification method based on image sequence analysis
CN116580394A (en) White blood cell detection method based on multi-scale fusion and deformable self-attention
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115546466A (en) Weak supervision image target positioning method based on multi-scale significant feature fusion
Solanki et al. Brain tumour detection and classification by using deep learning classifier
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN113379691B (en) Breast lesion deep learning segmentation method based on prior guidance
Parraga et al. A review of image-based deep learning algorithms for cervical cancer screening
CN113870194B (en) Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics
CN115775252A (en) Magnetic resonance image cervical cancer tumor segmentation method based on global local cascade
Perkonigg et al. Detecting bone lesions in multiple myeloma patients using transfer learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant