CN116485791A - Automatic detection method and system for double-view breast tumor lesion area based on absorbance - Google Patents
- Publication number: CN116485791A (application number CN202310715680.4A)
- Authority: CN (China)
- Prior art keywords
- image
- tumor
- double
- breast
- absorbance
- Prior art date: 2023-06-16
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012—Biomedical image inspection
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/806—Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V10/82—Image or video recognition or understanding using neural networks
- G06T2207/10132—Ultrasound image
- G06T2207/30068—Mammography; Breast
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses an absorbance-based double-view breast tumor lesion area automatic detection method and system, relating to the field of medical image processing and comprising the following steps: S1, acquiring a breast ultrasound tumor gray-scale image dataset, labeling the data, and performing image preprocessing; S2, carrying out absorbance transformation on the preprocessed images to obtain ultrasound absorbance images; S3, taking each breast ultrasound tumor gray-scale image and its corresponding ultrasound absorbance image as a double view and inputting it into a double-view detection model; S4, the double-view detection model performs feature extraction on the two views separately to effectively reflect the tumor region of interest in the double view. Step S4 comprises embedding the feature maps of the double view at different scales into DFT units for feature fusion. The invention combines the breast ultrasound tumor gray-scale image with the absorbance image to make up for the insufficient information of the gray-scale image alone, and uses the DFT units to dynamically learn the binary relation between the gray-scale image and the absorbance image, fusing and interacting the two to strengthen their relevance and complementarity.
Description
Technical Field
The invention relates to the technical field of medical image processing, in particular to an absorbance-based double-view breast tumor lesion area automatic detection method and system.
Background
The correct interpretation of ultrasound images requires a large accumulation of clinical experience, which is time-consuming and subject to subjectivity; meanwhile, breast ultrasound tumor images suffer from speckle noise, low contrast and similar problems, so judging them by eye alone easily produces misjudgment. To improve the accuracy of tumor detection, in keeping with the development of the digital age, computer-aided diagnosis (CAD) technology should be applied to breast ultrasound tumor image analysis. Detection of the breast tumor lesion area is one of the most important steps in CAD, and achieving efficient and accurate detection of the lesion area has important significance and application value.
In recent years, research on deep-learning-based breast CAD has advanced. Zhang et al. introduced multi-scale and multi-resolution extraction of candidate boundaries on top of the Faster R-CNN network to improve detection of smaller breast tumors, with accuracy up to 91.30% (Zhang Z, Zhang X, Lin X, et al. Ultrasonic Diagnosis of Breast Nodules Using Modified Faster R-CNN [J]. Ultrasonic Imaging, 2019, 41(6): 353-367). Xu Lifang et al. constructed an SE-Res2Net network on the basis of YOLOv3 and designed a novel downsampling module to alleviate the blurred boundaries, heavy noise and low contrast of breast ultrasound tumor images, which make feature extraction difficult and easily cause false and missed detections, improving accuracy by 4.56 percentage points over the base network (Xu Lifang, Fu Zhijie, Mo Hongwei. Breast ultrasound tumor recognition based on improved YOLOv3 algorithm [J]. CAAI Transactions on Intelligent Systems, 2021, 16(01): 21-29). In conclusion, deep learning has made good research progress in the detection of breast tumor lesion areas. However, most research addresses single-view ultrasound images, whose imaging mode yields little distinction between background gray values and the features of the lesion area, so small tumors are easily missed, and gray-similar tissues and overlapping glands during imaging are hard to distinguish, making breast ultrasound tumor detection inaccurate.
Disclosure of Invention
The invention aims to solve the problem of inaccurate breast ultrasonic tumor image detection in the prior art.
The technical scheme adopted for solving the technical problems is as follows: the method for automatically detecting the lesion area of the double-view breast tumor based on the absorbance comprises the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the two views separately through a dual-stream Backbone network, and effectively reflects the tumor region of interest in each view through multi-layer convolution;
the step S4 comprises the following: embedding the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales into DFT units for feature fusion, where i = 2, 3, 4; adding the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ back to the original feature maps $F_V^i$ and $F_A^i$; and sequentially outputting the feature maps P2, P3 and P4 at the different scales, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
Preferably, in the step S1, the preprocessing mainly includes removing the annotation marks around the ultrasound image and enhancing contrast; the contrast-enhancement calculation formula is:

$$ s = C \cdot \log(1 + r) $$

where $r$ is the pixel value of the original image, $s$ is the corresponding pixel value after enhancement, $\log$ denotes the logarithmic function, and the constant $C$ is chosen so that the transformed image fills the gray dynamic range.
Contrast is enhanced mainly because the logarithmic transformation expands low gray values and compresses high gray values, which makes details of the breast ultrasound tumor image easier to see and improves the accuracy of breast ultrasound tumor detection.
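As an illustration only, a minimal NumPy sketch of this logarithmic transform is given below; the function name and the choice of C = 255 / log(1 + r_max), which keeps the output inside the 8-bit gray range, are assumptions rather than values fixed by the invention.

```python
import numpy as np

def log_contrast_enhance(img: np.ndarray) -> np.ndarray:
    """Apply s = C * log(1 + r): expand low gray values, compress high ones."""
    r = img.astype(np.float64)
    # Assumed choice of C: scale so the brightest input pixel maps to 255.
    c = 255.0 / np.log1p(r.max()) if r.max() > 0 else 1.0
    return np.clip(c * np.log1p(r), 0, 255).astype(np.uint8)
```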
Preferably, the step S2 includes the following steps:
S21, converting the gray value of each pixel in the preprocessed image into an absorbance value, with the calculation formula:

$$ A = -\lg\!\left(\frac{I}{I_0}\right) $$

where $A$ is the absorbance value, $I$ is the gray value of the pixel, and $I_0$ is the average gray value of the background;
S22, mapping the absorbance value $A$ into the range [0, 255] by a linear transformation, converting the absorbance values into a depth-information image and thereby obtaining the ultrasound absorbance image.
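A sketch of steps S21-S22 might look as follows; treating the mean gray value of the whole image as the background level $I_0$ and using min-max scaling as the linear map onto [0, 255] are assumptions, since neither choice is fixed above.

```python
import numpy as np

def to_absorbance_image(gray_img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """S21: A = -lg(I / I0); S22: linearly map A into [0, 255]."""
    i = gray_img.astype(np.float64) + eps            # avoid log(0) on black pixels
    i0 = i.mean()                                    # assumed background gray value I0
    a = -np.log10(i / i0)                            # per-pixel absorbance
    a = (a - a.min()) / (a.max() - a.min() + eps)    # assumed linear map to [0, 1]
    return (a * 255.0).astype(np.uint8)
```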
Preferably, in the step S3, mosaic enhancement and adaptive image scaling are applied to the double views before they are input into the double-view detection model; mosaic enhancement randomly selects four pictures, randomly crops and rotates them, and then splices them into a synthesized image.
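For illustration, a sketch of the mosaic step on single-channel images is given below; the 640-pixel canvas, the quadrant layout, and the omission of bounding-box remapping (which a real detection pipeline also needs) are simplifying assumptions.

```python
import random
import numpy as np

def mosaic4(images, out_size=640):
    """Randomly crop and rotate 4 images, then splice them into one sample."""
    canvas = np.zeros((out_size, out_size), dtype=np.uint8)
    half = out_size // 2
    for img, (y, x) in zip(random.sample(images, 4),
                           [(0, 0), (0, half), (half, 0), (half, half)]):
        img = np.rot90(img, random.randint(0, 3))    # random 90-degree rotation
        h, w = min(half, img.shape[0]), min(half, img.shape[1])
        y0 = random.randint(0, img.shape[0] - h)     # random crop offset
        x0 = random.randint(0, img.shape[1] - w)
        canvas[y:y + h, x:x + w] = img[y0:y0 + h, x0:x0 + w]
    return canvas
```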
Preferably, in the step S4, a YOLOv5 backbone feature-extraction network is adopted to extract features from the breast ultrasound tumor gray-scale image and the absorbance image separately so as to effectively reflect the lesion area; the dual-stream Backbone network adopts CSPDarknet.
Preferably, the step S4 includes the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and strengthening feature extraction with a CSP1 structure through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area from each view with CBM and CSP2 structures to obtain the feature maps $F_V^2$ and $F_A^2$, which are input into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the lesion area, outputting the first-stage fused feature maps $\hat{F}_V^2$ and $\hat{F}_A^2$; adding $\hat{F}_V^2$ and $\hat{F}_A^2$ to the original feature maps $F_V^2$ and $F_A^2$, and outputting P2;
S43, further extracting the feature information of the lesion area with CBM and CSP2 structures to obtain the feature maps $F_V^3$ and $F_A^3$, which are input into a DFT unit to further suppress the influence of noise and strengthen detection of the lesion area, outputting the second-stage fused feature maps $\hat{F}_V^3$ and $\hat{F}_A^3$; adding $\hat{F}_V^3$ and $\hat{F}_A^3$ to the original feature maps $F_V^3$ and $F_A^3$, and outputting P3;
S44, further extracting the second-stage fused lesion-area feature information with a CBM structure; extracting it at different sizes with an SPP structure; then extracting the lesion-area features with a CBM structure to obtain the feature maps $F_V^4$ and $F_A^4$, which are input into a DFT unit, realizing the dual-view enhancement and compensation of the lesion-area features and outputting the third-stage fused feature maps $\hat{F}_V^4$ and $\hat{F}_A^4$; adding $\hat{F}_V^4$ and $\hat{F}_A^4$ to the original feature maps $F_V^4$ and $F_A^4$, and outputting P4;
S45, fusing the feature maps P2, P3 and P4 and outputting the result as the prediction.
Preferably, in the step S4, feature maps of different scales of the breast ultrasound tumor gray level image and the absorbance image are obtainedAndthe process of embedding the DFT unit for fusion comprises the following steps: feature maps of different scales for double viewsAndinputting a DFT unit, mutually fusing and interacting information between an ultrasonic absorbance image and a gray level image based on a reflection mechanism and a transmission mechanism, adding the fused characteristic images back to the original characteristic images, and sequentially outputting P2, P3 and P4; usingRepresenting a fusion feature, expressed as:
wherein Is a breast ultrasonic tumor gray level image characteristic diagram,in order to make the image feature map of the absorbance,as a function of the fusion,represented as mammary gland hyperfunctionAcoustic tumor gray image feature mapAnd an absorbance image feature mapThrough the feature map output after fusionAnd。
Preferably, the feature fusion in the DFT unit comprises the following steps:
S51, inputting the i-th-layer feature maps $F_V^i, F_A^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-scale image features and the absorbance image features respectively, where i = 2, 3, 4, C is the number of channels, H the height and W the width; flattening the feature maps and arranging them in matrix order to obtain the corresponding sequences $T_V^i, T_A^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I \in \mathbb{R}^{2HW \times C}$; the position embedding is a trainable parameter of dimension $2HW \times C$ that lets the model distinguish the spatial information of the different tokens during training;
S53, applying Layer Norm normalization to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying Layer Norm normalization again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I, with the calculation formula:

$$ O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z' $$

where $\mathrm{MLP}$ denotes the nonlinear transformation, $Z'$ is the result of adding the multi-head attention output Z, which encodes the complex relations among the different positions of the input sequence I, back to I, and $O \in \mathbb{R}^{2HW \times C}$;
S57, converting the output sequence O back into the recalibration results $\hat{F}_V^i$ and $\hat{F}_A^i$ by the inverse of the operation in step S51, and adding them as supplementary information to the original feature maps $F_V^i$ and $F_A^i$ to serve as the input of layer i+1.
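Read as a pre-norm transformer encoder block, steps S51-S57 could be sketched in PyTorch as below; the class name, the MLP expansion factor of 4, the head count, and the use of nn.MultiheadAttention in place of the explicit per-head formulas (detailed in the following paragraph) are assumptions, and a fixed spatial size H×W per scale is assumed for the position embedding.

```python
import torch
import torch.nn as nn

class DFTFusion(nn.Module):
    """Sketch of one DFT unit, steps S51-S57 (pre-norm transformer reading)."""

    def __init__(self, channels: int, hw: int, num_heads: int = 8):
        super().__init__()
        # S52: learnable position embedding for the 2*H*W tokens of both views
        self.pos_embed = nn.Parameter(torch.zeros(1, 2 * hw, channels))
        self.norm1 = nn.LayerNorm(channels)                          # S53
        self.attn = nn.MultiheadAttention(channels, num_heads,
                                          batch_first=True)          # S54
        self.norm2 = nn.LayerNorm(channels)                          # S55
        self.mlp = nn.Sequential(nn.Linear(channels, 4 * channels),  # S56 MLP
                                 nn.GELU(),
                                 nn.Linear(4 * channels, channels))

    def forward(self, f_v: torch.Tensor, f_a: torch.Tensor):
        b, c, h, w = f_v.shape
        # S51: flatten each C x H x W map into an HW x C token sequence
        t_v = f_v.flatten(2).transpose(1, 2)
        t_a = f_a.flatten(2).transpose(1, 2)
        # S52: concatenate the two sequences and add the position embedding
        seq = torch.cat([t_v, t_a], dim=1) + self.pos_embed
        # S53-S54 with residual: Z' = MSA(LN(I)) + I
        x = self.norm1(seq)
        z = self.attn(x, x, x)[0] + seq
        # S55-S56: O = MLP(LN(Z')) + Z'
        out = self.mlp(self.norm2(z)) + z
        # S57: split, reshape, and add back to the original feature maps
        o_v = out[:, :h * w].transpose(1, 2).reshape(b, c, h, w)
        o_a = out[:, h * w:].transpose(1, 2).reshape(b, c, h, w)
        return f_v + o_v, f_a + o_a
```

For example, a P3-level pair of 256 × 40 × 40 maps would use DFTFusion(channels=256, hw=1600); these sizes are illustrative only.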
Preferably, in the step S54, the input sequence I is first projected onto three weight matrices to compute a set of queries Q, keys K and values V, with the calculation formula:

$$ Q = I W^Q, \quad K = I W^K, \quad V = I W^V $$

where $W^Q$, $W^K$ and $W^V$ are learnable weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times C}$;
second, the self-attention layer $\mathrm{Attention}(Q, K, V)$ computes the attention-weighted output Z using the scaled dot product between Q and K, with the formula:

$$ Z = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V $$

where $\sqrt{d_k}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions with extremely small gradients as the dot products grow;
finally, the multi-head attention mechanism $\mathrm{MSA}$ is adopted to compute multiple complex relations expressing different positions, with the calculation formula:

$$ Z = \mathrm{MSA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O, \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V) $$

where h denotes the number of attention heads, $\mathrm{head}_i$ is the attention output of the i-th head, the $\mathrm{Concat}$ function concatenates the features, $W^O$ is the projection matrix of $\mathrm{MSA}$, and $W_i^Q$, $W_i^K$ and $W_i^V$ are the weight matrices of the query Q, key K and value V for the i-th head.
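For concreteness, a sketch of the scaled dot-product and multi-head computation on a single (2HW × C) sequence follows; splitting the channel dimension evenly across the h heads and using square C × C weight matrices are assumptions consistent with, but not stated in, the formulas above.

```python
import math
import torch

def scaled_dot_product(q, k, v):
    """Z = softmax(Q K^T / sqrt(d_k)) V, computed per head."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

def multi_head_attention(seq, w_q, w_k, w_v, w_o, h):
    """MSA(Q, K, V) = Concat(head_1, ..., head_h) W_O on an (N, C) sequence."""
    n, c = seq.shape
    q, k, v = seq @ w_q, seq @ w_k, seq @ w_v                  # Q = I W_Q, etc.
    split = lambda t: t.reshape(n, h, c // h).transpose(0, 1)  # (h, N, C/h)
    heads = scaled_dot_product(split(q), split(k), split(v))
    return heads.transpose(0, 1).reshape(n, c) @ w_o           # concat, project
```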
The invention also provides an absorbance-based dual-view breast tumor lesion area automatic detection system, which comprises:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model is used for extracting features of the two views separately through a dual-stream Backbone network and for effectively reflecting the tumor region of interest in each view through multi-layer convolution;
wherein the detection module comprises DFT units: the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales are embedded into the DFT units for feature fusion, where i = 2, 3, 4; the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ are added back to the original feature maps $F_V^i$ and $F_A^i$; and the feature maps P2, P3 and P4 at the different scales are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
The invention has the following beneficial effects:
the reflection mechanism and the transmission mechanism based on the gray level image and the absorbance image are utilized, and the two are combined to play a good complementary role on the characteristic information of the lesion area, so that the ultrasonic absorbance image is incorporated into the ultrasonic gray level image to serve as the basis of a network, the defect of insufficient information of the single-view ultrasonic gray level image is overcome, and the enhancement and the supplementation of the characteristics of the lesion area by the double views are realized.
In the process of extracting the features of the double views by the double-flow backsheene, the binary relation between the gray level image and the absorbance image is dynamically learned by the DFT module, and the information between the double views is mutually fused and interacted, so that the relevance and complementarity between the information of different views can be enhanced, the potential interaction between the gray level image and the absorbance image can be robustly captured, the possibility that the lesion area of the breast tumor is regarded as noise and other tissues is reduced, the lesion area can be effectively reflected, and the accuracy of detecting the breast ultrasonic tumor image is improved.
The present invention will be described in further detail with reference to the drawings and examples, but the present invention is not limited to the examples.
Drawings
FIG. 1 is a step diagram of an absorbance-based dual-view breast tumor lesion automatic detection method according to an embodiment of the present invention;
FIG. 2 is a diagram of a network structure of a dual view detection model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature fusion flow of a DFT unit according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an absorbance-based dual-view breast tumor lesion automatic detection system according to an embodiment of the invention.
Detailed Description
Referring to fig. 1, the absorbance-based double-view breast tumor lesion area automatic detection method according to an embodiment of the invention comprises the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the two views separately through a dual-stream Backbone network, and effectively reflects the tumor region of interest in each view through multi-layer convolution;
the step S4 comprises the following: embedding the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales into DFT units for feature fusion, where i = 2, 3, 4; adding the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ back to the original feature maps $F_V^i$ and $F_A^i$; and sequentially outputting the feature maps P2, P3 and P4 at the different scales, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
Referring to fig. 2, which is a network structure diagram of a dual-view detection model according to an embodiment of the present invention, the step S4 includes the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and strengthening feature extraction with a CSP1 structure through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area from each view with CBM and CSP2 structures to obtain the feature maps $F_V^2$ and $F_A^2$, which are input into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the lesion area, outputting the first-stage fused feature maps $\hat{F}_V^2$ and $\hat{F}_A^2$; adding $\hat{F}_V^2$ and $\hat{F}_A^2$ to the original feature maps $F_V^2$ and $F_A^2$, and outputting P2;
S43, further extracting the feature information of the lesion area with CBM and CSP2 structures to obtain the feature maps $F_V^3$ and $F_A^3$, which are input into a DFT unit to further suppress the influence of noise and strengthen detection of the lesion area, outputting the second-stage fused feature maps $\hat{F}_V^3$ and $\hat{F}_A^3$; adding $\hat{F}_V^3$ and $\hat{F}_A^3$ to the original feature maps $F_V^3$ and $F_A^3$, and outputting P3;
S44, further extracting the second-stage fused lesion-area feature information with a CBM structure; extracting it at different sizes with an SPP structure; then extracting the lesion-area features with a CBM structure to obtain the feature maps $F_V^4$ and $F_A^4$, which are input into a DFT unit, realizing the dual-view enhancement and compensation of the lesion-area features and outputting the third-stage fused feature maps $\hat{F}_V^4$ and $\hat{F}_A^4$; adding $\hat{F}_V^4$ and $\hat{F}_A^4$ to the original feature maps $F_V^4$ and $F_A^4$, and outputting P4;
S45, fusing the feature maps P2, P3 and P4 and outputting the result as the prediction.
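Putting steps S41-S45 together, the dual-stream data flow of fig. 2 could be wired as in the sketch below; the stage containers, the function name, and the element-wise sum used to combine the two fused streams into each P_i are assumptions made only to illustrate the flow, not the definitive implementation.

```python
def dual_view_forward(gray, absorb, stages_v, stages_a, dft_units):
    """Run both Backbone streams stage by stage with DFT fusion at i = 2, 3, 4."""
    outputs = []                                 # collects P2, P3, P4
    f_v, f_a = gray, absorb
    for stage_v, stage_a, dft in zip(stages_v, stages_a, dft_units):
        f_v, f_a = stage_v(f_v), stage_a(f_a)    # CBM/CSP blocks of each view
        f_v, f_a = dft(f_v, f_a)                 # DFT fusion, residual add inside
        outputs.append(f_v + f_a)                # assumed combination into P_i
    return outputs                               # fused later for the prediction
```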
Referring to fig. 3, the feature fusion flow of a DFT unit according to an embodiment of the invention comprises the following steps:
S51, inputting the i-th-layer feature maps $F_V^i, F_A^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-scale image features and the absorbance image features respectively, where i = 2, 3, 4, C is the number of channels, H the height and W the width; flattening the feature maps and arranging them in matrix order to obtain the corresponding sequences $T_V^i, T_A^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I \in \mathbb{R}^{2HW \times C}$; the position embedding is a trainable parameter of dimension $2HW \times C$ that lets the model distinguish the spatial information of the different tokens during training;
S53, applying Layer Norm normalization to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying Layer Norm normalization again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I, with the calculation formula:

$$ O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z' $$

where $\mathrm{MLP}$ denotes the nonlinear transformation, $Z'$ is the result of adding the multi-head attention output Z, which encodes the complex relations among the different positions of the input sequence I, back to I, and $O \in \mathbb{R}^{2HW \times C}$;
S57, converting the output sequence O back into the recalibration results $\hat{F}_V^i$ and $\hat{F}_A^i$ by the inverse of the operation in step S51, and adding them as supplementary information to the original feature maps $F_V^i$ and $F_A^i$ to serve as the input of layer i+1.
Specifically, in S54, the input sequence I is first projected onto three weight matrices to compute a set of queries Q, keys K and values V, with the calculation formula:

$$ Q = I W^Q, \quad K = I W^K, \quad V = I W^V $$

where $W^Q$, $W^K$ and $W^V$ are learnable weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times C}$;
second, the self-attention layer $\mathrm{Attention}(Q, K, V)$ computes the attention-weighted output Z using the scaled dot product between Q and K, with the formula:

$$ Z = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V $$

where $\sqrt{d_k}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions with extremely small gradients as the dot products grow;
finally, the multi-head attention mechanism $\mathrm{MSA}$ is adopted to compute multiple complex relations expressing different positions, with the calculation formula:

$$ Z = \mathrm{MSA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O, \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V) $$

where h denotes the number of attention heads, $\mathrm{head}_i$ is the attention output of the i-th head, the $\mathrm{Concat}$ function concatenates the features, $W^O$ is the projection matrix of $\mathrm{MSA}$, and $W_i^Q$, $W_i^K$ and $W_i^V$ are the weight matrices of the query Q, key K and value V for the i-th head.
Referring to fig. 4, the absorbance-based double-view breast tumor lesion area automatic detection system according to an embodiment of the invention comprises:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model is used for extracting features of the two views separately through a dual-stream Backbone network and for effectively reflecting the tumor region of interest in each view through multi-layer convolution;
wherein the detection module comprises DFT units: the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales are embedded into the DFT units for feature fusion, where i = 2, 3, 4; the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ are added back to the original feature maps $F_V^i$ and $F_A^i$; and the feature maps P2, P3 and P4 at the different scales are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
Therefore, in the absorbance-based double-view breast tumor lesion area automatic detection method and system, combining the gray-scale image with the absorbance image has a good complementary effect on the feature information of the lesion area, better makes up for the insufficient information of the ultrasound gray-scale image, and realizes the dual-view enhancement and supplementation of the lesion-area features based on the absorbance transformation; in the process of extracting the dual-view features through the dual-stream Backbone, the DFT modules dynamically learn the binary relation between the gray-scale image and the absorbance image, and the information of the two views is fused and interacts mutually, which strengthens the relevance and complementarity between the information of different views, robustly captures the potential interaction between the gray-scale image and the absorbance image, reduces the possibility that the breast tumor lesion area is treated as noise or other tissue, effectively reflects the lesion area, and improves the accuracy of breast ultrasound tumor detection.
The foregoing is only illustrative of the present invention and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., within the spirit and principles of the present invention.
Claims (10)
1. An absorbance-based double-view breast tumor lesion area automatic detection method, characterized by comprising the following steps:
s1, acquiring a breast ultrasonic tumor gray image dataset, marking the position of a breast tumor in the dataset, preprocessing the dataset, and generating a preprocessed breast ultrasonic tumor image dataset;
s2, carrying out absorbance transformation on the preprocessed image according to an ultrasonic transmission principle to obtain an ultrasonic absorbance image;
s3, taking the preprocessed breast ultrasonic tumor gray level image and the corresponding ultrasonic absorbance image as double views, and inputting a double-view detection model;
S4, the double-view detection model performs feature extraction on the two views separately through a dual-stream Backbone network, and effectively reflects the tumor region of interest in each view through multi-layer convolution;
the step S4 comprises the following: embedding the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales into DFT units for feature fusion, where i = 2, 3, 4; adding the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ back to the original feature maps $F_V^i$ and $F_A^i$; and sequentially outputting the feature maps P2, P3 and P4 at the different scales, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
2. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 1, wherein in the step S1 the preprocessing mainly comprises removing the annotation marks around the ultrasound image and enhancing contrast, the contrast-enhancement calculation formula being:

$$ s = C \cdot \log(1 + r) $$

where $r$ is the pixel value of the original image, $s$ is the corresponding pixel value after enhancement, $\log$ denotes the logarithmic function, and the constant $C$ is chosen so that the transformed image fills the gray dynamic range.
3. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 1, wherein the step S2 comprises the following steps:
S21, converting the gray value of each pixel in the preprocessed image into an absorbance value, with the calculation formula:

$$ A = -\lg\!\left(\frac{I}{I_0}\right) $$

where $A$ is the absorbance value, $I$ is the gray value of the pixel, and $I_0$ is the average gray value of the background;
S22, mapping the absorbance value $A$ into the range [0, 255] by a linear transformation, converting the absorbance values into a depth-information image and thereby obtaining the ultrasound absorbance image.
4. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 1, wherein in the step S3, mosaic enhancement and adaptive image scaling are applied to the double views before they are input into the double-view detection model; mosaic enhancement randomly selects four pictures, randomly crops and rotates them, and then splices them into a synthesized image.
5. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 1, wherein in the step S4, a YOLOv5 backbone feature-extraction network is adopted to extract features from the breast ultrasound tumor gray-scale image and the absorbance image separately so as to effectively reflect the lesion area; the dual-stream Backbone network adopts CSPDarknet.
6. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 5, wherein the step S4 comprises the following steps:
S41, slicing the double views with a Focus module; extracting the feature information of the double views with a CBM structure; and strengthening feature extraction with a CSP1 structure through its cross-stage hierarchical structure;
S42, extracting features of the breast tumor lesion area from each view with CBM and CSP2 structures to obtain the feature maps $F_V^2$ and $F_A^2$, which are input into a DFT unit; the DFT unit maps the different views into the same feature space and strengthens attention to the feature information of the lesion area, outputting the first-stage fused feature maps $\hat{F}_V^2$ and $\hat{F}_A^2$; adding $\hat{F}_V^2$ and $\hat{F}_A^2$ to the original feature maps $F_V^2$ and $F_A^2$, and outputting P2;
S43, further extracting the feature information of the lesion area with CBM and CSP2 structures to obtain the feature maps $F_V^3$ and $F_A^3$, which are input into a DFT unit to further suppress the influence of noise and strengthen detection of the lesion area, outputting the second-stage fused feature maps $\hat{F}_V^3$ and $\hat{F}_A^3$; adding $\hat{F}_V^3$ and $\hat{F}_A^3$ to the original feature maps $F_V^3$ and $F_A^3$, and outputting P3;
S44, further extracting the second-stage fused lesion-area feature information with a CBM structure; extracting it at different sizes with an SPP structure; then extracting the lesion-area features with a CBM structure to obtain the feature maps $F_V^4$ and $F_A^4$, which are input into a DFT unit, realizing the dual-view enhancement and compensation of the lesion-area features and outputting the third-stage fused feature maps $\hat{F}_V^4$ and $\hat{F}_A^4$; adding $\hat{F}_V^4$ and $\hat{F}_A^4$ to the original feature maps $F_V^4$ and $F_A^4$, and outputting P4;
S45, fusing the feature maps P2, P3 and P4 and outputting the result as the prediction.
7. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 1, wherein in the step S4 the process of embedding the breast ultrasound tumor gray-scale image and absorbance image feature maps $F_V^i$ and $F_A^i$ at different scales into the DFT units for fusion comprises: inputting the dual-view feature maps $F_V^i$ and $F_A^i$ at each scale into a DFT unit; fusing and interacting the information of the ultrasound absorbance image and the gray-scale image with each other based on the reflection mechanism and the transmission mechanism; adding the fused feature maps back to the original feature maps; and outputting P2, P3 and P4 in sequence; using $f_{\mathrm{DFT}}$ to denote the fusion, it is expressed as:

$$ (\hat{F}_V^i, \hat{F}_A^i) = f_{\mathrm{DFT}}(F_V^i, F_A^i) $$

where $F_V^i$ is the breast ultrasound tumor gray-scale image feature map, $F_A^i$ is the absorbance image feature map, $f_{\mathrm{DFT}}$ is the fusion function, and $\hat{F}_V^i$ and $\hat{F}_A^i$ are the feature maps output after fusion.
8. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 7, wherein the feature fusion in the DFT unit comprises the following steps:
S51, inputting the i-th-layer feature maps $F_V^i, F_A^i \in \mathbb{R}^{C \times H \times W}$ of the breast ultrasound tumor gray-scale image features and the absorbance image features respectively, where i = 2, 3, 4, C is the number of channels, H the height and W the width; flattening the feature maps and arranging them in matrix order to obtain the corresponding sequences $T_V^i, T_A^i \in \mathbb{R}^{HW \times C}$;
S52, concatenating the sequences and adding a learnable position embedding to obtain the input sequence $I \in \mathbb{R}^{2HW \times C}$; the position embedding is a trainable parameter of dimension $2HW \times C$ that lets the model distinguish the spatial information of the different tokens during training;
S53, applying Layer Norm normalization to keep the distribution of the data features stable;
S54, encapsulating multiple complex relations from different representation subspaces at different positions through a multi-head attention mechanism;
S55, applying Layer Norm normalization again to keep the distribution of the data features stable;
S56, computing the output sequence O with a nonlinear transformation, where O has the same shape as the input sequence I, with the calculation formula:

$$ O = \mathrm{MLP}(\mathrm{LN}(Z')) + Z' $$

where $\mathrm{MLP}$ denotes the nonlinear transformation, $Z'$ is the result of adding the multi-head attention output Z, which encodes the complex relations among the different positions of the input sequence I, back to I, and $O \in \mathbb{R}^{2HW \times C}$;
S57, converting the output sequence O back into the recalibration results $\hat{F}_V^i$ and $\hat{F}_A^i$ by the inverse of the operation in step S51, and adding them as supplementary information to the original feature maps $F_V^i$ and $F_A^i$ to serve as the input of layer i+1.
9. The absorbance-based double-view breast tumor lesion area automatic detection method according to claim 8, wherein in the step S54, the input sequence I is first projected onto three weight matrices to compute a set of queries Q, keys K and values V, with the calculation formula:

$$ Q = I W^Q, \quad K = I W^K, \quad V = I W^V $$

where $W^Q$, $W^K$ and $W^V$ are learnable weight matrices and $Q, K, V \in \mathbb{R}^{2HW \times C}$;
second, the self-attention layer $\mathrm{Attention}(Q, K, V)$ computes the attention-weighted output Z using the scaled dot product between Q and K, with the formula:

$$ Z = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V $$

where $\sqrt{d_k}$ is a scale factor that prevents the normalized exponential (softmax) function from converging to regions with extremely small gradients as the dot products grow;
finally, the multi-head attention mechanism $\mathrm{MSA}$ is adopted to compute multiple complex relations expressing different positions, with the calculation formula:

$$ Z = \mathrm{MSA}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O, \qquad \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V) $$

where h denotes the number of attention heads, $\mathrm{head}_i$ is the attention output of the i-th head, the $\mathrm{Concat}$ function concatenates the features, $W^O$ is the projection matrix of $\mathrm{MSA}$, and $W_i^Q$, $W_i^K$ and $W_i^V$ are the weight matrices of the query Q, key K and value V for the i-th head.
10. An absorbance-based double-view breast tumor lesion area automatic detection system, characterized by comprising:
the image acquisition module is used for acquiring a breast ultrasonic tumor gray image data set, marking the position of the breast tumor in the data set, preprocessing the data set and generating a preprocessed breast ultrasonic tumor image data set;
the absorbance conversion module is used for carrying out absorbance conversion on the preprocessed image according to the ultrasonic transmission principle to obtain an ultrasonic absorbance image;
the double-view module takes the preprocessed breast ultrasonic tumor gray level image and the preprocessed breast ultrasonic tumor absorbance image as double views, and inputs the images into the detection module;
the double-view detection model, which is used for extracting features of the two views separately through a dual-stream Backbone network and for effectively reflecting the tumor region of interest in each view through multi-layer convolution;
wherein the detection module comprises DFT units: the breast ultrasound tumor gray-scale image features $F_V^i$ and the absorbance image features $F_A^i$ at different scales are embedded into the DFT units for feature fusion, where i = 2, 3, 4; the fused feature maps $\hat{F}_V^i$ and $\hat{F}_A^i$ are added back to the original feature maps $F_V^i$ and $F_A^i$; and the feature maps P2, P3 and P4 at the different scales are output in sequence, so that the feature information between different views is fully utilized and detection of the breast tumor lesion area is improved; finally the feature maps P2, P3 and P4 are fused and output as the predicted image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310715680.4A (granted as CN116485791B) | 2023-06-16 | 2023-06-16 | Automatic detection method and system for double-view breast tumor lesion area based on absorbance |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116485791A | 2023-07-25 |
| CN116485791B | 2023-09-29 |
Family
ID=87227132
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310715680.4A (CN116485791B, Active) | 2023-06-16 | 2023-06-16 | Automatic detection method and system for double-view breast tumor lesion area based on absorbance |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN116485791B (en) |
Patent Citations (10)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN104143101A * | 2014-07-01 | 2014-11-12 | Method for automatically identifying breast tumor area based on ultrasound image |
| WO2018180386A1 * | 2017-03-30 | 2018-10-04 | Ultrasound imaging diagnosis assistance method and system |
| JPWO2018180386A1 * | 2017-03-30 | 2019-11-07 | Ultrasound image diagnosis support method and system |
| CN110264461A * | 2019-06-25 | 2019-09-20 | Microcalcification point automatic detection method based on ultrasonic breast tumor image |
| CN110264462A * | 2019-06-25 | 2019-09-20 | A kind of breast ultrasound tumour recognition method based on deep learning |
| CN111832563A * | 2020-07-17 | 2020-10-27 | Intelligent breast tumor identification method based on ultrasonic image |
| CN112529878A * | 2020-12-15 | 2021-03-19 | Multi-view semi-supervised lymph node classification method, system and equipment |
| CN113870194A * | 2021-09-07 | 2021-12-31 | Deep layer characteristic and superficial layer LBP characteristic fused breast tumor ultrasonic image processing device |
| CN115409832A * | 2022-10-28 | 2022-11-29 | Triple negative breast cancer classification method based on ultrasound image and omics big data |
| CN116109610A * | 2023-02-23 | 2023-05-12 | Method and system for segmenting breast tumor in ultrasonic examination report image |
Non-Patent Citations (1)
| Title |
|---|
| Bao Lingyun et al., "Research Progress of Deep Learning in Breast Ultrasound" (深度学习在乳腺超声中的研究进展), Zhejiang Medical Journal (浙江医学), no. 8, pp. 785-790 * |
Cited By (4)
| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN117392119A * | 2023-12-07 | 2024-01-12 | Tumor lesion area detection method and device based on position priori and feature perception |
| CN117392119B * | 2023-12-07 | 2024-03-12 | Tumor lesion area detection method and device based on position priori and feature perception |
| CN118379285A * | 2024-06-21 | 2024-07-23 | Method and system for detecting breast tumor lesion area based on feature difference dynamic fusion |
| CN118379285B * | 2024-06-21 | 2024-09-17 | Method and system for detecting breast tumor lesion area based on feature difference dynamic fusion |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116485791B | 2023-09-29 |
Similar Documents
| Publication | Title |
|---|---|
| CN111652321B | Marine ship detection method based on improved YOLOV3 algorithm |
| CN111627019B | Liver tumor segmentation method and system based on convolutional neural network |
| CN116485791B | Automatic detection method and system for double-view breast tumor lesion area based on absorbance |
| US20220239844A1 | Neural 3D Video Synthesis |
| Fang et al. | GroupTransNet: Group transformer network for RGB-D salient object detection |
| CN111091010A | Similarity determination method, similarity determination device, network training device, network searching device and storage medium |
| Zhou et al. | A superior image inpainting scheme using Transformer-based self-supervised attention GAN model |
| García-González et al. | Background subtraction by probabilistic modeling of patch features learned by deep autoencoders |
| Guo et al. | 3D semantic segmentation based on spatial-aware convolution and shape completion for augmented reality applications |
| Correia et al. | 3D reconstruction of human bodies from single-view and multi-view images: A systematic review |
| CN117934824A | Target region segmentation method and system for ultrasonic image and electronic equipment |
| CN117291895A | Image detection method, device, equipment and storage medium |
| Yuan et al. | A full-set tooth segmentation model based on improved PointNet++ |
| Afzal et al. | Discriminative feature abstraction by deep L2 hypersphere embedding for 3D mesh CNNs |
| Zhong et al. | Hierarchical attention-guided multiscale aggregation network for infrared small target detection |
| CN117974693B | Image segmentation method, device, computer equipment and storage medium |
| CN113034371A | Infrared and visible light image fusion method based on feature embedding |
| US12079948B2 | Multidimentional image editing from an input image |
| CN117253034A | Image semantic segmentation method and system based on differentiated context |
| Shanqing et al. | A multi-level feature weight fusion model for salient object detection |
| Qin et al. | Self-supervised single-image 3D face reconstruction method based on attention mechanism and attribute refinement |
| CN115688234A | Building layout generation method, device and medium based on conditional convolution |
| Zhang | [Retracted] An Intelligent and Fast Dance Action Recognition Model Using Two-Dimensional Convolution Network Method |
| Gunasekaran et al. | An efficient technique for three-dimensional image visualization through two-dimensional images for medical data |
| CN116343019A | Target detection method for remote sensing image |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |