CN114332572A - Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map guided hierarchical dense characteristic fusion network

Info

Publication number
CN114332572A
Authority
CN
China
Prior art keywords: feature, fusion, foreground, background, scale
Prior art date
Legal status
Granted
Application number
CN202111532955.8A
Other languages
Chinese (zh)
Other versions
CN114332572B (en)
Inventor
张煜
宁振源
邸小慧
钟升洲
Current Assignee
Southern Medical University
Original Assignee
Southern Medical University
Priority date
Filing date
Publication date
Application filed by Southern Medical University filed Critical Southern Medical University
Priority to CN202111532955.8A
Publication of CN114332572A
Application granted
Publication of CN114332572B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a saliency map guided hierarchical dense characteristic fusion network. The method combines a linear spectral clustering superpixel method with a multi-scale region grouping method to obtain feature representation maps, avoiding the loss of useful information, and then builds a three-branch hierarchical dense feature fusion network that extracts and fuses foreground and background features to extract the breast lesion ultrasonic image multi-scale fusion characteristic parameters. The two progressive dense feature extraction branch networks for the foreground and the background take the original image and the corresponding saliency map together as input and are used, respectively, to extract the foreground and background features relevant to the classification task. Exploiting the known correlation and complementary information between the foreground and the background, the hierarchical feature fusion branch network performs multi-scale fusion of the foreground and background information to obtain more accurate and more discriminative multi-scale fusion characteristic parameters.

Description

Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map guided hierarchical dense characteristic fusion network
Technical Field
The patent relates to the technical field of computer vision and medical imaging, and provides a method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a layered dense characteristic fusion network guided by a saliency map.
Background
Currently, breast ultrasound imaging is the most common examination technique used clinically, with the advantages of no radiation, no wound and low cost. However, continuously reviewing ultrasound images takes significant operator time, and accurate, consistent results place strict demands on the operator's experience. In addition, judgments may differ between operators. Therefore, to help operators better interpret breast ultrasound images, researchers have proposed many computer-aided diagnosis methods to support decision-making and improve the accuracy of breast ultrasound image feature extraction.
Among the existing deep learning methods, many methods can be successfully applied to the prediction of the region of interest of the breast ultrasound image. However, establishing a robust model still presents certain challenges due to the characteristics of relatively complex patterns, low contrast, and fuzzy boundaries of the region of interest (foreground) and the surrounding tissues (background) in the breast ultrasound image. Therefore, it is necessary to provide a method for extracting breast lesion ultrasound image multi-scale fusion feature parameters based on a saliency map-guided hierarchical dense feature fusion network to overcome the defects of the prior art.
Disclosure of Invention
The invention aims to provide a method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a saliency map-guided hierarchical dense characteristic fusion network. The method uses the foreground saliency map and the background saliency map as a priori information to guide the feature representation of network learning discriminability, and improves the accuracy of breast ultrasound image feature extraction.
The above object of the present invention is achieved by the following technical measures.
The method for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the saliency map guided hierarchical dense characteristic fusion network comprises the following steps:
and S1, reading the breast lesion original ultrasonic image.
S2, at least three marking points in the lesion area are randomly selected on the original ultrasonic image of the breast lesion.
And S3, processing the breast lesion original ultrasonic image in the step S1 by using a linear spectral clustering superpixel method to obtain a low-level feature representation.
The original ultrasound image of the breast lesion in step S1 is processed using a multi-scale region grouping method to obtain a high-level feature representation.
And S4, in the low-level feature representation diagram and the high-level feature representation diagram obtained in the step S3, respectively selecting target areas by using the marking points in the step S2, and performing weighted summation on the target areas in the two selected images to obtain a foreground significant diagram.
And S5, performing negation operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map.
And S6, inputting the foreground significant map obtained in the step S4 and the breast lesion original ultrasonic image obtained in the step S1 into a foreground feature extraction network branch together, and extracting foreground significant features to obtain a foreground significant feature map.
The background saliency map obtained in step S5 and the breast lesion original ultrasound image in step S1 are input into a background feature extraction network branch together, and the background salient features are extracted to obtain a background salient feature map.
And S7, fusing and learning the foreground significance characteristic diagram and the background significance characteristic diagram extracted in the step S6 in a layered dense characteristic fusion branch network to obtain a multi-scale fusion characteristic diagram.
And S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to execute training of a classifier, and obtaining multi-scale fusion feature parameters.
Specifically, in step S1, the original ultrasound image of the breast lesion is a single-channel two-dimensional image.
Specifically, in step S2, the area formed by the at least three marker points on the original ultrasound image of the breast lesion is a target area containing breast lesion information.
In an embodiment, three of the above-mentioned marker points are set. In step S3, the breast lesion original ultrasound image, denoted I, is processed using the linear spectral clustering superpixel method.
First, different numbers of superpixel blocks n_i ∈ N = {n_1, n_2, n_3} are set, and three superpixel images p(I, n_i) are obtained. Then the three marker points b_j ∈ B = {b_1, b_2, b_3} selected in step S2 are used to select the target area on each superpixel image, giving three target area images; these are weighted and summed with weights 1:1:1, and the low-level feature representation map, denoted y_l, is obtained according to formula (1):

y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1)

where ⊙ denotes the operation of selecting the target area with the three marker points; i indexes the i-th superpixel clustering of the breast lesion original ultrasound image I; n_i is the number of superpixel blocks required for image classification, with n_1 = 8, n_2 = 15, n_3 = 50, i = 1, 2, 3; and b_j denotes the three marker points b_1, b_2, b_3 selected in step S2, j = 1, 2, 3.
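For illustration only, the computation of formula (1) can be sketched in a few lines of Python; the superpixel routine lsc_superpixels stands in for the linear spectral clustering step and is a hypothetical helper, markers holds the coordinates of the three marker points from step S2, and the block counts follow n_1 = 8, n_2 = 15, n_3 = 50 given above. This sketch is not part of the claimed method.

```python
import numpy as np

def select_region(label_map, markers):
    """Binary mask of the superpixel blocks touched by the marker points (b_j ⊙ p(I, n_i))."""
    hit = {int(label_map[r, c]) for (r, c) in markers}
    return np.isin(label_map, list(hit)).astype(np.float32)

def low_level_representation(image, markers, lsc_superpixels, block_counts=(8, 15, 50)):
    """Formula (1): 1:1:1 weighted sum of the marker-selected target regions over the
    three superpixel clusterings p(I, n_i)."""
    y_l = np.zeros(image.shape[:2], dtype=np.float32)
    for n_i in block_counts:
        label_map = lsc_superpixels(image, n_i)   # hypothetical LSC routine returning a label map
        y_l += select_region(label_map, markers)  # target area selected by the three marker points
    return y_l
```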
Further, in step S3, the breast lesion original ultrasound image I is processed using a multi-scale region grouping method to obtain three object suggestion maps q(I, m_i) of different scales, m_i ∈ M = {m_1, m_2, m_3}, where the elements of M are the three different scales of the object suggestion maps. The multi-scale maps are then normalized to the same scale and integrated into a complete multi-scale cluster map, denoted ỹ_M, where m_i ∈ M{q(I, m_i)}. Finally, the target area is selected and fused on the multi-scale cluster map using the three marker points selected in step S2, and the high-level feature representation map, denoted y_h, is obtained according to formula (2):

y_h = ∑_j b_j ⊙ ỹ_M …… formula (2)

where ⊙ denotes the operation of selecting the target area with the three marker points, and b_j denotes the three marker points b_1, b_2, b_3 selected in step S2, j = 1, 2, 3.
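A matching sketch for formula (2) is given below; region_proposals (the multi-scale region grouping step) and select_region (marker-based selection of the target area on the cluster map) are hypothetical helpers, and averaging the normalized suggestion maps is only one possible way to integrate them into the multi-scale cluster map ỹ_M.

```python
import numpy as np
from skimage.transform import resize

def high_level_representation(image, markers, region_proposals, select_region, scales=(1, 2, 3)):
    """Formula (2) sketch: integrate the object suggestion maps q(I, m_i) into one
    multi-scale cluster map and select the marker-defined target area on it."""
    h, w = image.shape[:2]
    maps = [resize(region_proposals(image, m), (h, w), preserve_range=True) for m in scales]
    cluster_map = np.mean(maps, axis=0)          # integration into the complete cluster map (mean is an assumption)
    return select_region(cluster_map, markers)   # y_h: target area picked by the three marker points
```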
Further, in step S4, the low-level feature representation map y_l and the high-level feature representation map y_h are weighted and summed with weight coefficients 1:2 according to formula (3) to obtain the foreground saliency map, denoted y_f:

y_f = w_1·y_l + w_2·y_h …… formula (3)

where w_1 is the proportion of the low-level feature representation map y_l in the foreground saliency map y_f, and w_2 is the proportion of the high-level feature representation map y_h in the foreground saliency map y_f.
The foreground saliency map y_f is an image containing breast lesion information.
Further, in step S5, the foreground saliency map y_f is inverted according to formula (4) to obtain the background saliency map, denoted y_b:

y_b = ¬ y_f …… formula (4)

where ¬ denotes the inversion operation.
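Formulas (3) and (4) amount to a weighted sum and a complement; a minimal sketch follows, in which the rescaling to [0, 1] and the reading of the inversion as 1 − y_f are assumptions made for illustration.

```python
import numpy as np

def saliency_maps(y_l, y_h, w1=1.0, w2=2.0):
    """Formulas (3) and (4): 1:2 weighted fusion, then inversion of the foreground map."""
    y_f = w1 * y_l + w2 * y_h          # foreground saliency map, formula (3)
    y_f = y_f / (y_f.max() + 1e-8)     # rescale to [0, 1] (assumption, so the inversion is meaningful)
    y_b = 1.0 - y_f                    # background saliency map, formula (4)
    return y_f, y_b
```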
Further, in step S6, the specific process of foreground significant feature map extraction is as follows:
S61-1. In the foreground feature extraction branch network, the breast lesion original ultrasound image I and the foreground saliency map y_f are taken together as input, and the foreground low-order features are extracted through a foreground shallow feature extraction module. The foreground shallow feature extraction module consists of a convolution operation with kernel size 5 × 5, a convolution operation with kernel size 1 × 1, a normalization operation and an activation operation.
S61-2, the foreground low-order features obtained in the step S61-1 are sequentially subjected to a first foreground progressive dense feature extraction module, a first foreground transition module, a second foreground progressive dense feature extraction module, a second foreground transition module and a third foreground progressive dense feature extraction module to extract foreground high-order features.
Each foreground progressive dense feature extraction module comprises three convolution units, wherein each convolution unit consists of convolution operation with a convolution kernel size of 1 × 1, convolution operation of 3 × 3, convolution operation of 1 × 1, normalization operation and activation operation.
Each foreground transition module consists of convolution operation with convolution kernel size of 3 x 3, normalization operation, activation operation, and maximum pooling operation.
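A minimal PyTorch sketch of the three module types described above (and reused, with the same structure, by the background branch below) is given here. The channel widths, the exact dense-link wiring and the 2 × 2 pooling window of the transition module are illustrative assumptions; the parameter configuration actually used is listed in Table 1 of the embodiment.

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, k):
    """Convolution + normalization + activation, the basic operation of every module."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class ShallowFeatureModule(nn.Module):
    """Shallow feature extraction: 5 × 5 then 1 × 1 convolution, each with normalization
    and activation; the input is the original image stacked with its saliency map."""
    def __init__(self, c_in=2, c_out=32):
        super().__init__()
        self.block = nn.Sequential(conv_bn_relu(c_in, c_out, 5), conv_bn_relu(c_out, c_out, 1))
    def forward(self, x):
        return self.block(x)

class ProgressiveDenseModule(nn.Module):
    """Three 1 × 1 / 3 × 3 / 1 × 1 convolution units with dense links between the units."""
    def __init__(self, channels=32):
        super().__init__()
        def unit(c_in):
            return nn.Sequential(conv_bn_relu(c_in, channels, 1),
                                 conv_bn_relu(channels, channels, 3),
                                 conv_bn_relu(channels, channels, 1))
        self.u1 = unit(channels)
        self.u2 = unit(2 * channels)   # dense link: input + output of unit 1
        self.u3 = unit(3 * channels)   # dense link: input + outputs of units 1 and 2
    def forward(self, x):
        f1 = self.u1(x)
        f2 = self.u2(torch.cat([x, f1], dim=1))
        f3 = self.u3(torch.cat([x, f1, f2], dim=1))
        return f3

class TransitionModule(nn.Module):
    """3 × 3 convolution + normalization + activation + max pooling to cut channels and resolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(conv_bn_relu(c_in, c_out, 3), nn.MaxPool2d(2))
    def forward(self, x):
        return self.block(x)
```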
Further, in step S6, the specific process of extracting the background saliency feature map is as follows:
S62-1. In the background feature extraction branch network, the breast lesion original ultrasound image I and the background saliency map y_b are taken together as input, and the background low-order features are extracted through a background shallow feature extraction module. The background shallow feature extraction module consists of a convolution operation with kernel size 5 × 5, a convolution operation with kernel size 1 × 1, a normalization operation and an activation operation.
And S62-2, sequentially extracting background high-order features from the background low-order features obtained in the step S62-1 through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module.
Each background progressive dense feature extraction module comprises three convolution units, wherein each convolution unit consists of convolution operation with a convolution kernel size of 1 × 1, convolution operation of 3 × 3 and convolution operation of 1 × 1, normalization operation and activation operation.
Each background transition module consists of convolution operation with convolution kernel size of 3 x 3, normalization operation, activation operation and maximum pooling operation.
Further, in step S7, in the hierarchical feature fusion branch network, the specific process of fusing the foreground significant feature map and the background significant feature map is as follows:
s7-1, inputting the output feature maps of the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module, specifically:
firstly, after the output feature map of the first foreground progressive dense feature extraction module and the output feature map of the first background progressive dense feature extraction module respectively execute convolution operation with convolution kernel size of 1 × 1 and convolution operation, normalization operation and activation operation of 3 × 3, integrating on channels to obtain a fusion feature map;
and then, the fusion feature map is continuously subjected to convolution operation with the convolution kernel size of 1 × 1, convolution operation with the convolution kernel size of 3 × 3, convolution operation with the convolution kernel size of 1 × 1, normalization operation and activation operation, and is divided into two paths after learning of fusion features, wherein one path is input into a second fusion feature module, and the other path is input into a multi-scale fusion unit after being subjected to maximum pooling operation with the convolution kernel size of 4 × 4.
S7-2, the output feature maps of the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module are input to a second feature fusion module, which specifically is:
firstly, the output feature map of the second foreground progressive dense feature extraction module and the output feature map of the second background progressive dense feature extraction module are respectively subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, normalization operation and activation operation, and then are subjected to channel connection with the feature map output by the first feature fusion module to obtain a fusion feature map;
then, the fusion characteristic diagram is divided into two paths after convolution operation with convolution kernel size of 1 × 1, convolution operation of 3 × 3, convolution operation of 1 × 1, normalization operation and activation operation, wherein one path is input to the third fusion characteristic module, and the other path is input to the multi-scale fusion unit.
S7-3, inputting the feature maps output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into a third feature fusion module, specifically:
firstly, the output feature map of the third foreground progressive dense feature extraction module and the output feature map of the third background progressive dense feature extraction module are respectively subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, normalization operation and activation operation, and then are subjected to channel connection with the feature map output by the second feature fusion module to obtain a fusion feature map;
then, the fusion feature map is input to the multi-scale fusion module after being subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation of 3 × 3, convolution operation of 1 × 1, normalization operation and activation operation.
S7-4, performing channel connection on the three fusion feature maps obtained in the steps S7-1, S7-2 and S7-3 in a multi-scale fusion unit to obtain a multi-scale fusion feature map, which specifically comprises the following steps:
and performing multi-scale fusion on the three fusion feature maps obtained in the steps S7-1, S7-2 and S7-3 in the channel direction to obtain a multi-scale fusion feature map. The multi-scale fusion feature map comprises multi-scale information features of foreground and background information.
Further, in step S8, the foreground and background multi-scale information features in the multi-scale fusion feature map are processed and integrated, and the multi-scale fusion feature map is continuously subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, convolution operation with a convolution kernel size of 1 × 1, normalization operation, and activation operation, so as to obtain local information features.
And then, generating a global information characteristic from the local information characteristic by using global average pooling.
And finally, the global information feature passes through a temporary regression (dropout) layer with a rate of 0.2 and a fully connected layer to obtain a multi-scale fusion characteristic parameter of the breast lesion original ultrasonic image, wherein the range of the multi-scale fusion characteristic parameter is [0, 1].
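Under the assumption that the temporary regression layer with rate 0.2 corresponds to a dropout layer and that a sigmoid is what constrains the output to [0, 1], the step-S8 head could look as follows; the channel widths are illustrative.

```python
import torch
import torch.nn as nn

class MultiScaleFusionHead(nn.Module):
    """Step S8 sketch: 1 × 1 / 3 × 3 / 1 × 1 convolutions for the local information features,
    global average pooling for the global information feature, then dropout and a fully
    connected layer producing the multi-scale fusion feature parameter in [0, 1]."""
    def __init__(self, c_in, c_mid=128):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1), nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c_mid, 3, padding=1), nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
            nn.Conv2d(c_mid, c_mid, 1), nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True))
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.drop = nn.Dropout(p=0.2)         # assumed reading of the temporary regression layer
        self.fc = nn.Linear(c_mid, 1)
    def forward(self, x):
        x = self.local(x)
        x = self.pool(x).flatten(1)
        x = self.drop(x)
        return torch.sigmoid(self.fc(x))      # parameter constrained to [0, 1]
```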
The invention provides a method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a saliency map guided hierarchical dense characteristic fusion network, which comprises the following steps: s1, reading the breast lesion original ultrasonic image; s2, randomly selecting at least three marking points in the lesion area on the breast lesion original ultrasonic image; s3, processing the breast lesion original ultrasonic image in the step S1 by using a linear spectral clustering superpixel method to obtain a low-level feature representation; processing the breast lesion original ultrasonic image in the step S1 by using a multi-scale region grouping method to obtain a high-level feature representation; s4, in the low-level feature representation diagram and the high-level feature representation diagram obtained in the step S3, the marking points in the step S2 are used for respectively selecting target areas, and the target areas in the two selected images are weighted and summed to obtain a foreground significant diagram; s5, performing negation operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map; s6, inputting the foreground significant map obtained in the step S4 and the breast lesion original ultrasonic image in the step S1 into a foreground feature extraction network branch together, and extracting foreground significant features to obtain a foreground significant feature map; inputting the background saliency map obtained in the step S5 and the breast lesion original ultrasound image obtained in the step S1 into a background feature extraction network branch together, and performing extraction of background saliency features to obtain a background saliency feature map; s7, fusing and learning the foreground significance characteristic diagram and the background significance characteristic diagram extracted in the step S6 in a layered dense characteristic fusion branch network to obtain a multi-scale fusion characteristic diagram; and S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to execute training of a classifier, and obtaining multi-scale fusion feature parameters. The invention fully utilizes the complementarity and the correlation between the extracted foreground (breast lesion) and background (surrounding tissues) features through two layered feature fusion branch networks, integrates the extracted foreground (breast lesion) and background (surrounding tissues) features, and guides the networks to more accurately obtain the multi-scale fusion feature parameters of the breast lesion original ultrasonic image so as to improve the remarkable feature extraction capability of the breast lesion original ultrasonic image and further improve the accuracy of the feature extraction of the breast ultrasonic image.
Drawings
The invention is further illustrated by means of the attached drawings, the content of which does not limit the invention in any way.
FIG. 1 is a flow chart of the method for extracting the multi-scale fusion characteristic parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense characteristic fusion network.
Fig. 2 is a diagram of a three-branch network framework of the present invention.
Fig. 3 is a block diagram of a single feature fusion module in the hierarchical dense feature fusion branch network of the present invention; when the module is the first feature fusion module, the output feature map of the previous feature fusion module does not exist.
Fig. 4 is a result of feature extraction performed on the data set a by applying the method for extracting breast lesion ultrasonic image multi-scale fusion feature parameters based on the saliency map-guided hierarchical dense feature fusion network of the present invention.
Fig. 5 is a result of feature extraction performed on the data set B by applying the method for extracting breast lesion ultrasonic image multi-scale fusion feature parameters based on the saliency map-guided hierarchical dense feature fusion network of the present invention.
Detailed Description
The invention is further illustrated by the following examples.
Example 1
A method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a saliency map guided hierarchical dense characteristic fusion network is used for improving the accuracy of obtaining the multi-scale fusion characteristic parameters of a breast lesion ultrasonic image. As shown in fig. 1, the method comprises the steps of:
and S1, reading the breast lesion original ultrasonic image I. The image data is acquired by a special ultrasonic imaging device and is a single-channel two-dimensional image.
S2, at least three marker points in the lesion area are randomly selected on the original ultrasonic image I of the breast lesion. The exact number of marker points is not critical; in general, more marker points locate the breast lesion region more accurately, but they also make the operation more complex and time-consuming. Three marker points are the minimum needed to determine an area. In practice, selecting three marker points to determine the lesion area gives a satisfactory result with little time and operating cost. The marker points should be selected by an experienced medical image analysis professional to ensure that the area they define on the breast lesion ultrasound original image is a target area containing breast lesion information.
S3, processing the breast lesion original ultrasound image I from step S1 with the linear spectral clustering superpixel method to obtain the low-level feature representation map y_l. The specific process is as follows:

Because different human tissues vary in size, and to ensure that the generated feature representation map covers the complete lesion region, different numbers of superpixel blocks n_i ∈ N = {n_1, n_2, n_3} are first set, and three superpixel images p(I, n_i) are obtained. Then the three marker points b_j ∈ B = {b_1, b_2, b_3} selected in step S2 are used to select the target area on each superpixel image, the three target area images are weighted and summed with weights 1:1:1, and the low-level feature representation map y_l is obtained according to formula (1):

y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1)

where ⊙ denotes the operation of selecting the target area with the three marker points; i indexes the i-th superpixel clustering of the breast lesion original ultrasound image I; n_i is the number of superpixel blocks set by experiment, with n_1 = 8, n_2 = 15, n_3 = 50, i = 1, 2, 3; and b_j denotes the three marker points b_1, b_2, b_3 selected in step S2, j = 1, 2, 3.
From experimental observation, the feature representation map obtained by the linear spectral clustering superpixel method alone cannot completely cover the whole lesion region and may lose part of the useful information. The invention therefore adds a multi-scale region grouping method to compensate for this shortcoming. The specific process is as follows: the breast lesion original ultrasound image I from step S1 is processed with the multi-scale region grouping method to obtain three object suggestion maps q(I, m_i) of different scales, m_i ∈ M = {m_1, m_2, m_3}. The multi-scale maps are then normalized to the same scale and integrated into a complete multi-scale cluster map, denoted ỹ_M, where m_i ∈ M{q(I, m_i)}. Finally, the target area is selected and fused on the multi-scale cluster map using the three marker points selected in step S2, and the high-level feature representation map y_h is obtained according to formula (2):

y_h = ∑_j b_j ⊙ ỹ_M …… formula (2)

where ⊙ denotes the operation of selecting the target area with the three marker points, and b_j denotes the three marker points b_1, b_2, b_3 selected in step S2, j = 1, 2, 3.
S4, the marker points from step S2 are used to select the target lesion area on the low-level feature representation map y_l and the high-level feature representation map y_h obtained in step S3, respectively. The low-level feature representation map y_l and the high-level feature representation map y_h are then weighted and summed with weight coefficients 1:2 according to formula (3) to obtain the foreground saliency map y_f:

y_f = w_1·y_l + w_2·y_h …… formula (3)

where w_1 is the proportion of the low-level feature representation map y_l in the foreground saliency map y_f, and w_2 is the proportion of the high-level feature representation map y_h in the foreground saliency map y_f.
S5, according to formula (4), the foreground saliency map y_f obtained in step S4 is inverted to obtain the background saliency map y_b:

y_b = ¬ y_f …… formula (4)

where ¬ denotes the inversion operation.
S6, the foreground saliency map y_f obtained in step S4 and the breast lesion original ultrasound image I from step S1 are input together into the foreground feature extraction network branch, and the foreground salient features are extracted to obtain a foreground salient feature map. The specific process is as follows:
S61-1. In the foreground feature extraction branch network, the breast lesion original ultrasound image I and the foreground saliency map y_f are taken together as input, and the foreground low-order features are preliminarily extracted through a foreground shallow feature extraction module. The foreground shallow feature extraction module consists of a convolution operation with kernel size 5 × 5, a convolution operation with kernel size 1 × 1, a normalization operation and an activation operation, which captures richer texture features over a larger receptive field.
S61-2, sequentially extracting foreground high-order features from the foreground low-order features obtained in the step S61-1 through a first foreground progressive intensive feature extraction module, a first foreground transition module, a second foreground progressive intensive feature extraction module, a second foreground transition module and a third foreground progressive intensive feature extraction module;
each foreground progressive dense feature extraction module comprises three convolution units, each convolution unit comprises convolution operation with convolution kernel size of 1 x 1, convolution operation of 3 x 3, convolution operation of 1 x 1, normalization operation and activation operation, and specific useful features of the foreground and the background can be effectively extracted. The dense links acting between convolution units are mainly used for the progressive propagation of features, and useful features of low order to high order are extracted continuously.
Each foreground transition module consists of convolution operation with convolution kernel size of 3 × 3, normalization operation, activation operation, and maximum pooling operation. The transition module is mainly used for solving the problem of feature redundancy, reducing the number of channels and the spatial resolution of features, reducing the calculation cost and improving the calculation efficiency.
Meanwhile, the background saliency map y_b obtained in step S5 and the breast lesion original ultrasound image I from step S1 are input together into the background feature extraction network branch, and the background salient features are extracted to obtain a background salient feature map. The specific process is as follows:
S62-1. In the background feature extraction branch network, the breast lesion original ultrasound image I and the background saliency map y_b are taken together as input, and the background low-order features are extracted through a background shallow feature extraction module. The background shallow feature extraction module consists of a convolution operation with kernel size 5 × 5, a convolution operation with kernel size 1 × 1, a normalization operation and an activation operation.
And S62-2, sequentially extracting background high-order features from the background low-order features obtained in the step S62-1 through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module.
The parameter configuration for the progressive dense feature extraction module structure of the present invention is shown in table 1.
TABLE 1 parameter configuration for progressive dense feature extraction Module architecture
In table 1, BN denotes a normalization operation and ReLU denotes an activation operation.
S7, in the hierarchical feature fusion branch network, fusing and learning the foreground significant feature map and the background significant feature map extracted in step S6 by using the correlation or the complementarity existing between the foreground significant feature map and the background significant feature map, so as to obtain a multi-scale fusion feature map, as shown in fig. 3, the specific process is as follows:
s7-1, inputting the output feature maps of the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module, specifically:
firstly, after the output feature map of the first foreground progressive dense feature extraction module and the output feature map of the first background progressive dense feature extraction module respectively execute convolution operation with convolution kernel size of 1 × 1 and convolution operation, normalization operation and activation operation of 3 × 3, integrating on channels to obtain a fusion feature map;
and then, the fusion feature map is continuously subjected to convolution operation with the convolution kernel size of 1 × 1, convolution operation with the convolution kernel size of 3 × 3, convolution operation with the convolution kernel size of 1 × 1, normalization operation and activation operation, and is divided into two paths after learning of fusion features, wherein one path is input into a second fusion feature module, and the other path is input into a multi-scale fusion unit after being subjected to maximum pooling operation with the convolution kernel size of 4 × 4.
S7-2, the output feature maps of the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module are input to a second feature fusion module, which specifically is:
firstly, the output feature map of the second foreground progressive dense feature extraction module and the output feature map of the second background progressive dense feature extraction module are respectively subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, normalization operation and activation operation, and then are subjected to channel connection with the feature map output by the first feature fusion module to obtain a fusion feature map;
then, the fusion characteristic diagram is divided into two paths after convolution operation with convolution kernel size of 1 × 1, convolution operation of 3 × 3, convolution operation of 1 × 1, normalization operation and activation operation, wherein one path is input to the third fusion characteristic module, and the other path is input to the multi-scale fusion unit.
S7-3, inputting the feature maps output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into a third feature fusion module, specifically:
firstly, the output feature map of the third foreground progressive dense feature extraction module and the output feature map of the third background progressive dense feature extraction module are respectively subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, normalization operation and activation operation, and then are subjected to channel connection with the feature map output by the second feature fusion module to obtain a fusion feature map;
then, the fusion feature map is input to the multi-scale fusion module after being subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation of 3 × 3, convolution operation of 1 × 1, normalization operation and activation operation.
S7-4, performing channel connection on the three fusion feature maps obtained in the steps S7-1, S7-2 and S7-3 in a multi-scale fusion unit to obtain a multi-scale feature map, which specifically comprises the following steps:
and performing multi-scale fusion on the three fusion feature maps obtained in the steps S7-1, S7-2 and S7-3 in the channel direction to obtain a multi-scale fusion feature map. The multi-scale fusion feature map comprises multi-scale information features of foreground and background information.
Further, in step S8, the foreground and background multi-scale information features in the multi-scale fusion feature map are processed and integrated, and the multi-scale fusion feature map is continuously subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, convolution operation with a convolution kernel size of 1 × 1, normalization operation, and activation operation, so as to obtain local information features.
And then, generating a global information characteristic from the local information characteristic by using global average pooling.
And finally, the global information feature passes through a temporary regression (dropout) layer with a rate of 0.2 and a fully connected layer to obtain a multi-scale fusion characteristic parameter of the breast lesion original ultrasonic image, wherein the range of the multi-scale fusion characteristic parameter is [0, 1].
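Since step S8 only states that a classifier is trained, the following training-loop sketch fills in common choices as assumptions: binary benign/malignant labels, a binary cross-entropy loss, the Adam optimizer and an arbitrary epoch count; model is assumed to take the original image together with its foreground and background saliency maps and to return the fusion feature parameter in [0, 1].

```python
import torch
import torch.nn as nn

def train_classifier(model, loader, epochs=50, lr=1e-4, device="cuda"):
    """Minimal training loop for step S8 (loss, optimizer and schedule are assumptions)."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()                  # output in [0, 1] vs. binary label
    for _ in range(epochs):
        for image, fg_map, bg_map, label in loader:
            pred = model(image.to(device), fg_map.to(device), bg_map.to(device))
            loss = loss_fn(pred.squeeze(1), label.float().to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```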
It should be noted that, in step S7, the three feature fusion modules in the hierarchical feature fusion branch network respectively receive the multi-scale, mutually complementary feature maps from different stages of the foreground feature extraction branch network and the background feature extraction branch network. The feature maps of the foreground branch and the background branch each undergo a convolution operation with kernel size 1 × 1 and a convolution operation with kernel size 3 × 3, and the normalization and activation operations are performed to prevent gradient vanishing and to enhance the sparsity of the network.
In different feature fusion modules, except for a first feature fusion module, each feature fusion module connects a foreground feature map and a background feature map with a feature map obtained by a previous feature fusion module on a channel, and the output fusion feature maps are subjected to convolution operation with a convolution kernel size of 1 × 1, convolution operation with a convolution kernel size of 3 × 3, convolution operation with a convolution kernel size of 1 × 1, normalization operation, activation and other operations, so as to further extract high-order features related to tasks.
The high-order feature maps obtained by the first feature fusion module and the second feature fusion module undergo 4 × 4 and 2 × 2 max pooling operations, respectively, and are input to the multi-scale fusion unit together with the feature map from the third feature fusion module for channel connection, which helps retain more detailed texture features while reducing the operation cost.
The effect of the finally obtained multi-scale fusion characteristic parameters applied to breast ultrasound image texture analysis can be evaluated for robustness with six performance indices: accuracy, precision, F1-score, sensitivity, specificity and AUC. Accuracy is the ratio of correctly classified samples to the total number of samples; the closer the value is to 1, the better the result. Precision is the probability that a sample predicted to be positive is actually positive. F1-score is the harmonic mean of precision and recall, balancing the two; the closer it is to 1, the better. Sensitivity is the ratio of true positive samples to the sum of true positives and false negatives. Specificity is the ratio of true negative samples to the sum of false positives and true negatives. Higher sensitivity and specificity indicate lower missed-detection and false-positive rates, respectively. AUC is the area under the receiver operating characteristic (ROC) curve; a value closer to 1 indicates higher authenticity of the method. Table 2 shows the performance index values of feature extraction on dataset A by the method of the present invention for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the saliency map-guided hierarchical dense feature fusion network (HDFA-Net) and by other methods. Fig. 4 shows, from an actual medical image, the results of feature extraction on dataset A by the HDFA-Net method of the present invention and by other methods.
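For reference, the six indices can be computed from binary labels and the predicted fusion feature parameters as sketched below; the 0.5 decision threshold is an assumption and is not taken from the description above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    """Accuracy, precision, F1-score, sensitivity, specificity and AUC from binary labels
    (y_true) and predicted fusion feature parameters (y_score)."""
    y_true = np.asarray(y_true).astype(int)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    auc = roc_auc_score(y_true, y_score)
    return {"accuracy": accuracy, "precision": precision, "F1-score": f1,
            "sensitivity": sensitivity, "specificity": specificity, "AUC": auc}
```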
TABLE 2 comparison of Performance index values for feature extraction of dataset A by the present method and other methods
As can be seen from the results in Table 2, the method of the present invention for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the hierarchical dense feature fusion network guided by the saliency map (HDFA-Net) is superior in all performance index values to ML-Net, MT-Net, F2-Net, MG-Net, FCN-Net and other methods based on different network structures.
The method for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the hierarchical dense characteristic fusion network guided by the saliency map builds a three-branch hierarchical dense characteristic fusion network to extract and fuse foreground characteristics and background characteristics, and is used for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters. The two progressive dense feature extraction branch networks of the foreground and the background take the original image and the corresponding saliency map as common input and are respectively used for effectively extracting the foreground and the background features relevant to the classification task. According to the known correlation and complementary information between the foreground and the background, the hierarchical feature fusion branch network performs multi-scale fusion on the foreground and the background information to obtain more accurate and more obvious multi-scale fusion feature parameters.
The method has the advantages that the operation time and the operation cost are less, the characteristic representation graph obtained by processing the linear spectral clustering superpixel method and the multi-scale region grouping method is jointly used, the characteristic information of the whole region of interest can be covered to the maximum extent, and the loss of useful information is avoided. In the layered dense feature fusion network, dense links acting between convolution units are used for the progressive propagation of features, and effective features from low order to high order are continuously extracted. In the hierarchical dense feature fusion network, the transition module is used for solving the problem of feature redundancy and reducing the number of channels and the spatial resolution of features, thereby reducing the calculation overhead and improving the calculation efficiency.
Example 2
The multi-scale fusion feature parameters obtained by applying the method of example 1 to the data set B are applied in breast ultrasound image texture analysis. Table 3 shows performance index values of feature extraction on the data set B by the method (HDFA-Net) for extracting breast lesion ultrasonic image multi-scale fusion feature parameters based on the saliency map-guided hierarchical dense feature fusion network and other methods of the present invention. Fig. 5 visually shows the result of feature extraction on the data set B by the HDFA-Net method and other methods of the present invention from an actual medical image.
TABLE 3 comparison of Performance index values for feature extraction of data set B by this and other methods
As can be seen from the results in Table 3, the method of the present invention for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the saliency map-guided hierarchical dense feature fusion network (HDFA-Net) is superior in all performance index values to ML-Net, MT-Net, F2-Net, MG-Net, FCN-Net and other methods based on different network structures.
The method for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters based on the hierarchical dense characteristic fusion network guided by the saliency map builds a three-branch hierarchical dense characteristic fusion network to extract and fuse foreground characteristics and background characteristics, and is used for extracting the breast lesion ultrasonic image multi-scale fusion characteristic parameters. The two progressive dense feature extraction branch networks of the foreground and the background take the original image and the corresponding saliency map as common input and are respectively used for effectively extracting the foreground and the background features relevant to the classification task. According to the known correlation and complementary information between the foreground and the background, the hierarchical feature fusion branch network performs multi-scale fusion on the foreground and the background information to obtain more accurate and more obvious multi-scale fusion feature parameters.
The method has the advantages that the operation time and the operation cost are less, the characteristic representation graph obtained by processing the linear spectral clustering superpixel method and the multi-scale region grouping method is jointly used, the characteristic information of the whole region of interest can be covered to the maximum extent, and the loss of useful information is avoided. In the layered dense feature fusion network, dense links acting between convolution units are used for the progressive propagation of features, and effective features from low order to high order are continuously extracted. In the hierarchical dense feature fusion network, the transition module is used for solving the problem of feature redundancy and reducing the number of channels and the spatial resolution of features, thereby reducing the calculation overhead and improving the calculation efficiency.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the protection scope of the present invention, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on a saliency map guided hierarchical dense characteristic fusion network is characterized by comprising the following steps:
s1, reading the breast lesion original ultrasonic image;
s2, randomly selecting at least three marking points in the lesion area on the breast lesion original ultrasonic image;
s3, processing the breast lesion original ultrasonic image in the step S1 by using a linear spectral clustering superpixel method to obtain a low-level feature representation;
processing the breast lesion original ultrasonic image in the step S1 by using a multi-scale region grouping method to obtain a high-level feature representation;
s4, in the low-level feature representation diagram and the high-level feature representation diagram obtained in the step S3, the marking points in the step S2 are used for respectively selecting target areas, and the target areas in the two selected images are weighted and summed to obtain a foreground significant diagram;
s5, performing negation operation on the foreground saliency map obtained in the step S4 to obtain a background saliency map;
s6, inputting the foreground significant map obtained in the step S4 and the breast lesion original ultrasonic image in the step S1 into a foreground feature extraction network branch together, and extracting foreground significant features to obtain a foreground significant feature map;
inputting the background saliency map obtained in the step S5 and the breast lesion original ultrasound image obtained in the step S1 into a background feature extraction network branch together, and performing extraction of background saliency features to obtain a background saliency feature map;
s7, fusing and learning the foreground significance characteristic diagram and the background significance characteristic diagram extracted in the step S6 in a layered dense characteristic fusion branch network to obtain a multi-scale fusion characteristic diagram;
and S8, inputting the multi-scale fusion feature map obtained in the step S7 into a multi-scale fusion unit to execute training of a classifier, and obtaining multi-scale fusion feature parameters.
2. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 1, wherein in step S1, the breast lesion original ultrasonic image is a single-channel two-dimensional image.
3. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 2, wherein in step S2, the region formed by the at least three marker points on the breast lesion original ultrasonic image is a target region containing breast lesion information.
4. The method for extracting breast lesion ultrasonic image multi-scale fusion feature parameters based on the saliency map-guided hierarchical dense feature fusion network according to claim 3,
in step S3, the linear spectral clustering superpixel method is used to process the breast lesion original ultrasound image in step S1 to obtain a low-level feature representation, specifically:
S31-1, setting superpixel blocks to obtain superpixel images p(I, n_i);
S31-2, respectively selecting a target area on each superpixel image by using all the marker points selected in step S2 to obtain target area images;
S31-3, carrying out weighted summation on the target area images, and obtaining a low-level feature representation map, denoted y_l, according to formula (1):
y_l = ∑_i ∑_j b_j ⊙ p(I, n_i) …… formula (1);
wherein the breast lesion original ultrasound image is denoted I; ⊙ denotes the operation of selecting the target area with all the marker points in step S2; i denotes the i-th superpixel clustering of the breast lesion original ultrasound image; n_i denotes the number of superpixel blocks set for the i-th superpixel clustering on the breast lesion original image, where i is a positive integer; and b_j denotes the j-th marker point selected in step S2, where j is a positive integer.
5. The method for extracting breast lesion ultrasonic image multi-scale fusion feature parameters based on the saliency map guided hierarchical dense feature fusion network of claim 3, wherein in step S3, the original breast lesion ultrasonic image in step S1 is processed by using a multi-scale region grouping method to obtain a high-level feature representation, specifically:
S32-1, setting the scales of the object suggestion maps to obtain object suggestion maps q(I, m_i);
S32-2, normalizing the multi-scale maps to the same scale and integrating them into a complete multi-scale cluster map, denoted ỹ_M;
S32-3, selecting and fusing target areas on the multi-scale cluster map by using all the marker points selected in step S2, and obtaining a high-level feature representation map, denoted y_h, according to formula (2):
y_h = ∑_j b_j ⊙ ỹ_M …… formula (2);
wherein ⊙ denotes the operation of selecting the target area with all the marker points in step S2; b_j denotes the j-th marker point selected in step S2, where j is a positive integer; and m_i ∈ M{q(I, m_i)}.
6. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 4 or 5, wherein in step S4, the low-level feature representation map y_l and the high-level feature representation map y_h are weighted and summed with weight coefficients 1:2 according to formula (3) to obtain the foreground saliency map, denoted y_f:
y_f = w_1·y_l + w_2·y_h …… formula (3);
wherein w_1 is the proportion of the low-level feature representation map y_l in the foreground saliency map y_f, and w_2 is the proportion of the high-level feature representation map y_h in the foreground saliency map y_f;
the foreground saliency map y_f is an image containing breast lesion information.
7. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 6, wherein in step S5, the foreground saliency map y_f is inverted according to formula (4) to obtain the background saliency map, denoted y_b:
y_b = ¬ y_f …… formula (4);
wherein ¬ denotes the inversion operation.
8. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 7, wherein in step S6, the foreground saliency feature map extraction specifically includes the following steps:
S61-1, in the foreground feature extraction branch network, taking the breast lesion original ultrasound image I and the foreground saliency map y_f together as input, and extracting the foreground low-order features through a foreground shallow feature extraction module;
s61-2, the foreground low-order features obtained in the step S61-1 are sequentially subjected to a first foreground progressive dense feature extraction module, a first foreground transition module, a second foreground progressive dense feature extraction module, a second foreground transition module and a third foreground progressive dense feature extraction module to extract foreground high-order features.
9. The method for extracting the multi-scale fusion feature parameters of the breast lesion ultrasonic image based on the saliency map-guided hierarchical dense feature fusion network according to claim 8, wherein in step S6, the specific process of extracting the background saliency feature map is as follows:
S62-1, in the background feature extraction branch network, taking the breast lesion original ultrasound image I and the background saliency map y_b together as input, and extracting the background low-order features through a background shallow feature extraction module;
and S62-2, sequentially extracting background high-order features from the background low-order features obtained in the step S62-1 through a first background progressive dense feature extraction module, a first background transition module, a second background progressive dense feature extraction module, a second background transition module and a third background progressive dense feature extraction module.
10. The method for extracting breast lesion ultrasound image multi-scale fusion feature parameters based on the saliency map guided hierarchical dense feature fusion network according to claim 9, wherein in step S7, in the hierarchical feature fusion branch network, the specific process of fusing the foreground saliency feature map and the background saliency feature map is as follows:
S7-1, inputting the feature maps output by the first foreground progressive dense feature extraction module and the first background progressive dense feature extraction module into a first feature fusion module to learn fusion features;
the obtained fusion feature map is split into two paths: one path is input into the second feature fusion module, and the other path is input into the multi-scale fusion unit;
S7-2, inputting the feature maps output by the second foreground progressive dense feature extraction module and the second background progressive dense feature extraction module into a second feature fusion module to learn fusion features;
the obtained fusion feature map is split into two paths: one path is input into the third feature fusion module, and the other path is input into the multi-scale fusion unit;
S7-3, inputting the feature maps output by the third foreground progressive dense feature extraction module and the third background progressive dense feature extraction module into a third feature fusion module to learn fusion features, and inputting the obtained fusion feature map into the multi-scale fusion unit;
S7-4, performing channel concatenation, in the multi-scale fusion unit, on the three fusion feature maps obtained in steps S7-1, S7-2 and S7-3 to obtain the multi-scale fusion feature map.
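A hypothetical sketch of the hierarchical fusion wiring of claim 10, continuing the branch sketch above: each feature fusion module combines the foreground and background maps of one scale (and, after the first, the previous fused map), and the multi-scale fusion unit channel-concatenates the three fused maps. The 1x1-convolution fusion and the adaptive pooling used to align spatial sizes are assumptions, not the patent's definition.

```python
# Hypothetical wiring of claim 10; only the data flow follows the claim text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionModule(nn.Module):
    def __init__(self, fg_ch, bg_ch, prev_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(fg_ch + bg_ch + prev_ch, out_ch, 1)  # assumed 1x1 fusion

    def forward(self, fg, bg, prev=None):
        feats = [fg, bg]
        if prev is not None:
            # Align the previous fused map to the current (smaller) spatial size.
            feats.append(F.adaptive_avg_pool2d(prev, fg.shape[-2:]))
        return self.conv(torch.cat(feats, dim=1))

def multi_scale_fusion_unit(fused1, fused2, fused3):
    """Channel-concatenate the three fused maps after resizing to the coarsest scale."""
    size = fused3.shape[-2:]
    return torch.cat([F.adaptive_avg_pool2d(fused1, size),
                      F.adaptive_avg_pool2d(fused2, size),
                      fused3], dim=1)
```

The three fused maps would come from the (f1, f2, f3) outputs of the foreground and background branch sketch given after claim 9.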
CN202111532955.8A 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network Active CN114332572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532955.8A CN114332572B (en) 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111532955.8A CN114332572B (en) 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Publications (2)

Publication Number Publication Date
CN114332572A true CN114332572A (en) 2022-04-12
CN114332572B CN114332572B (en) 2024-03-26

Family

ID=81052895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532955.8A Active CN114332572B (en) 2021-12-15 2021-12-15 Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network

Country Status (1)

Country Link
CN (1) CN114332572B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2034439A1 (en) * 2007-09-07 2009-03-11 Thomson Licensing Method for establishing the saliency map of an image
CN107680106A (en) * 2017-10-13 2018-02-09 南京航空航天大学 A kind of conspicuousness object detection method based on Faster R CNN
CN107992874A (en) * 2017-12-20 2018-05-04 武汉大学 Image well-marked target method for extracting region and system based on iteration rarefaction representation
WO2020211522A1 (en) * 2019-04-15 2020-10-22 京东方科技集团股份有限公司 Method and device for detecting salient area of image
CN113379691A (en) * 2021-05-31 2021-09-10 南方医科大学 Breast lesion deep learning segmentation method based on prior guidance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Nan'er; CHEN Ying: "Multi-scale image saliency detection fusing contextual information", Journal of Chinese Computer Systems, no. 09, 15 September 2017 (2017-09-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423806A (en) * 2022-11-03 2022-12-02 南京信息工程大学 Breast mass detection method based on multi-scale cross-path feature fusion
CN116630680A (en) * 2023-04-06 2023-08-22 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound
CN116630680B (en) * 2023-04-06 2024-02-06 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound
CN117392428A (en) * 2023-09-04 2024-01-12 深圳市第二人民医院(深圳市转化医学研究院) Skin disease image classification method based on three-branch feature fusion network

Also Published As

Publication number Publication date
CN114332572B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
JP7143008B2 (en) Medical image detection method and device based on deep learning, electronic device and computer program
Malathi et al. Brain tumour segmentation using convolutional neural network with tensor flow
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN110309860B (en) Method for classifying malignancy degree of lung nodule based on convolutional neural network
CN110021425B (en) Comparison detector, construction method thereof and cervical cancer cell detection method
CN109447998B (en) Automatic segmentation method based on PCANet deep learning model
Deng et al. Classification of breast density categories based on SE-Attention neural networks
CN114332572A (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map guided hierarchical dense characteristic fusion network
CN112700461B (en) System for pulmonary nodule detection and characterization class identification
CN114529516B (en) Lung nodule detection and classification method based on multi-attention and multi-task feature fusion
JP2022547722A (en) Weakly Supervised Multitask Learning for Cell Detection and Segmentation
CN114842238A (en) Embedded mammary gland ultrasonic image identification method
CN114550169A (en) Training method, device, equipment and medium for cell classification model
CN114549462A (en) Focus detection method, device, equipment and medium based on visual angle decoupling Transformer model
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN113781387A (en) Model training method, image processing method, device, equipment and storage medium
CN114565572A (en) Cerebral hemorrhage CT image classification method based on image sequence analysis
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN117474876A (en) Deep learning-based kidney cancer subtype auxiliary diagnosis and uncertainty evaluation method
Alisha et al. Cervical Cell Nuclei Segmentation On Pap Smear Images Using Deep Learning Technique
Rajkumar et al. Darknet-53 convolutional neural network-based image processing for breast cancer detection
Park et al. Classification of cervical cancer using deep learning and machine learning approach
Chhabra et al. Comparison of different edge detection techniques to improve quality of medical images
CN108154107B (en) Method for determining scene category to which remote sensing image belongs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant