CN113870194B - Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics - Google Patents
- Publication numbers: CN113870194B, CN202111045957A
- Authority: CN (China)
- Prior art keywords: image, lbp, unit, shallow, features
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods (neural networks)
- G06T5/40 — Image enhancement or restoration by the use of histogram techniques
- G06T5/70
- G06T7/13 — Edge detection
- G06T2207/10132 — Ultrasound image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20104 — Interactive definition of region of interest [ROI]
- G06T2207/30068 — Mammography; Breast
- G06T2207/30096 — Tumor; Lesion
Abstract
The invention provides a breast tumor ultrasonic image processing device fusing deep features and shallow LBP features, comprising an image database establishing unit, an image preprocessing unit, a region-of-interest extraction unit, a deep feature extraction unit, a shallow LBP feature extraction unit, a feature fusion unit and a classification unit, which are connected in communication. Depth feature extraction based on a deep learning network describes the high-level complex features of the tumor image, feature extraction based on the rotation-invariant local binary pattern describes its low-level simple features, and feature fusion combines this information to obtain more discriminative regional features, so the classification performance is effectively improved.
Description
Technical Field
The invention relates to the field of image processing and recognition, in particular to a breast tumor ultrasonic image processing device fusing deep features and shallow LBP features.
Background
Breast cancer is one of the most common malignant tumors in women; early diagnosis of breast lesions and discrimination of malignant from benign lesions are therefore extremely important for patient prognosis. Compared with other breast tumor imaging methods, including digital mammography (DM), magnetic resonance (MR) imaging and computed tomography (CT), ultrasound examination (US) offers real-time imaging, low cost, absence of radiation and high repeatability. It has become an important component of clinical medical examination, provides the necessary imaging information for clinically suspicious breast tumors, and has become the main screening method for early breast cancer. Diagnostic analysis of breast ultrasound images is currently performed mainly by imaging physicians and standardized according to the Breast Imaging Reporting and Data System (BI-RADS), which describes and classifies breast lesions. However, identification of benign and malignant breast nodules depends primarily on ultrasound characteristics such as echogenicity, composition, shape, margins and calcification. Different features correlate differently with malignancy, and the morphological features of benign and malignant lesions overlap substantially. Furthermore, image diagnosis depends largely on the radiologist's experience, so considerable variation may occur between observers. With the popularization of breast cancer screening, the number of patients to be screened increases year by year, and improving screening efficiency has become a major problem for radiologists. The advent of computer-aided diagnosis (CAD) technology makes it possible to solve these problems.
With the continuous development of machine learning and deep learning, computer-aided diagnosis is widely applied in medical image processing. Previous research focused on manually extracting ultrasound image features such as echogenicity, composition, shape, margins and calcifications, then running a feature selection algorithm to refine the extracted features into an optimal combination, and finally classifying with classifiers such as support vector machines (SVM) and random forests (RF). Traditional machine learning relies on manually designed, complex feature extraction and selection methods; the extracted features are high-dimensional, the workload is large and the efficiency is low. Recently, some classical convolutional neural networks have been applied to automatic feature extraction from breast tumor ultrasound images. However, medical images suffer from small data volume, inconspicuous lesion regions and heavy noise, and existing convolutional neural network models were all proposed for large-scale two-dimensional natural image data and do not adapt well to medical image data; such general networks are insensitive to the features of breast tumor images and struggle to extract representative features.
Disclosure of Invention
In order to overcome the above defects in the prior art, a breast tumor ultrasonic image processing device based on fusion of deep features and shallow LBP features is provided. The device adopts a feature-fusion strategy and classifies benign and malignant breast tumor nodules using depth features extracted by an improved ResNet50 model together with shallow LBP features. The device can effectively mitigate quality problems of tumor images such as unclear edges and noise, and extract representative features of breast tumor nodules.
Specifically, the invention provides a breast tumor ultrasonic image processing device fusing deep features and shallow LBP features, which comprises an image database establishing unit, an image preprocessing unit, a region-of-interest extraction unit, a deep feature extraction unit, a shallow LBP feature extraction unit, a feature fusion unit and a classification unit;
the image database establishing unit is used for establishing a breast tumor ultrasonic image database and obtaining breast tumor nodule images and diagnosis results;
the image preprocessing unit is used for carrying out data enhancement on the ultrasonic image, simultaneously carrying out histogram equalization enhancement and Sobel operator edge extraction on the tumor image respectively, and fusing the original tumor image, the histogram-equalized image and the edge-extraction image;
the image preprocessing unit performs an image preprocessing operation including the steps of:
s11, carrying out histogram equalization on an original ultrasonic image to enhance the contrast of the image with a smaller dynamic range;
s12, carrying out a Sobel operator operation on the original ultrasonic image to approximate the gradient of the image brightness function and provide accurate edge direction information, wherein the Sobel operator is used for detecting the image edge, and the Sobel extraction calculation formula is as follows:

$$G_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}*A,\qquad G_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\+1&+2&+1\end{bmatrix}*A,\qquad G=\sqrt{G_x^{2}+G_y^{2}}$$

wherein A is the original image, $G_x$ is the convolution in the x-direction, and $G_y$ is the convolution in the y-direction;
s13, fusing the original ultrasonic image, the histogram-equalized image and the edge-extraction image;
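As an illustrative sketch of the S11–S13 preprocessing (a NumPy-only illustration, not part of the patent disclosure; the function names are the editor's own):

```python
import numpy as np

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image (step S11)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255).astype(np.uint8)
    return lut[img]

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the 3x3 Sobel kernels (step S12)."""
    a = img.astype(np.float64)
    p = np.pad(a, 1, mode="edge")
    # Horizontal and vertical Sobel responses built from shifted views.
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    g = np.sqrt(gx ** 2 + gy ** 2)
    if g.max() > 0:
        g = g / g.max() * 255
    return np.clip(g, 0, 255).astype(np.uint8)

def fuse_channels(img: np.ndarray) -> np.ndarray:
    """Step S13: stack original, equalized and edge images as 3 channels."""
    return np.stack([img, equalize_hist(img), sobel_edges(img)], axis=-1)
```

The three-channel output has the same layout as an RGB image, so it can be fed directly to a backbone pre-trained on color images.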
the region-of-interest extraction unit is used for extracting the region of interest to obtain the lesion area on which image analysis focuses, and specifically comprises the following steps:
extracting the region of interest by a method combining maximum between-class variance threshold segmentation (Otsu) and morphological processing;
extracting the color information of the ultrasonic image and converting it into the a component of Lab color space;
adopting the Otsu algorithm to realize automatic threshold segmentation of the image, dividing the image into a background part and a target part according to its gray-level characteristics;
converting the gray-level image into a binary image, and obtaining the boundary contour curve and the region of interest of the image by morphological processing;
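A minimal sketch of the Otsu thresholding step above, assuming an 8-bit grayscale input (the morphological post-processing is omitted; the names are illustrative, not from the patent):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the gray level maximizing the between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = prob.cumsum()                    # class-0 probability up to each level
    mu = (prob * np.arange(256)).cumsum()    # cumulative mean up to each level
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0     # undefined where a class is empty
    return int(np.argmax(sigma_b))

def binarize(img: np.ndarray) -> np.ndarray:
    """Foreground mask: pixels above the Otsu threshold (the 'target' part)."""
    return img > otsu_threshold(img)
```

The binary mask would then be cleaned with morphological opening/closing to obtain the boundary contour.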
the deep feature extraction unit is used for training a pre-trained convolutional neural network model and automatically extracting image depth features of different receptive fields and different scales;
the shallow LBP feature extraction unit is used for extracting rotation-invariant local binary pattern (LBP) features serving as shallow texture features of the image;
the feature fusion unit is used for fusing the depth features and the shallow features to obtain final feature vectors;
the classification unit is used for inputting the finally obtained characteristics into the SVM to realize benign and malignant classification of the breast tumor ultrasonic image.
Preferably, the deep feature extraction unit performs image depth feature extraction, specifically including the following steps:
s21, building the basic model ResNet50, loading ImageNet pre-trained weights via transfer learning, and removing the top fully connected layer;
s22, adding a global average pooling layer, a batch normalization (BN) layer and a fully connected layer;
s23, applying Dropout regularization to prevent overfitting.
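The forward pass of the added head (global average pooling, BN, fully connected layer) can be sketched in NumPy as follows. This illustrates the operations only, not the patented ResNet50 implementation; the batch size, 7×7×2048 feature-map shape and output width are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_average_pool(fmap: np.ndarray) -> np.ndarray:
    """(N, H, W, C) feature maps -> (N, C) vectors by averaging over space."""
    return fmap.mean(axis=(1, 2))

def batch_norm(x: np.ndarray, gamma=1.0, beta=0.0, eps=1e-5) -> np.ndarray:
    """Per-feature normalization over the batch, as a BN layer does in training."""
    return gamma * (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps) + beta

def dense(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fully connected layer."""
    return x @ w + b

# Features shaped as a ResNet50-like backbone would emit them: 7x7 maps, 2048 channels.
fmap = rng.standard_normal((4, 7, 7, 2048))
w, b = rng.standard_normal((2048, 2)) * 0.01, np.zeros(2)
logits = dense(batch_norm(global_average_pool(fmap)), w, b)
```

In a real training setup these layers would be appended to the truncated backbone in a deep learning framework, with Dropout inserted after the fully connected layer.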
Preferably, the shallow LBP feature extraction unit performs shallow LBP feature extraction, specifically including the following steps:
s31, partitioning the LBP feature image by dividing the detection window into 16×16 cells;
s32, for each pixel in a cell, comparing the gray values of its 8 neighbors with its own: if a surrounding pixel value is not smaller than the central pixel value, that position is marked 1, otherwise 0. Comparing the 8 points in the 3×3 neighborhood thus produces an 8-bit binary number, which is the LBP value of the window's central pixel. The LBP calculation formula is as follows:

$$LBP(c)=\sum_{p=0}^{7} s\big(I(p)-I(c)\big)\cdot 2^{p}$$

wherein p denotes the p-th pixel other than the central pixel in the 3×3 window, I(c) denotes the gray value of the central pixel, I(p) denotes the gray value of the p-th pixel, and s(x) is calculated as follows:

$$s(x)=\begin{cases}1,&x\ge 0\\0,&x<0\end{cases}$$
s33, computing the histogram of each cell's feature image and normalizing it;
s34, concatenating the cell histograms in spatial order into a single row to form the LBP feature vector.
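The steps above can be sketched as follows (a plain NumPy illustration; the basic 8-neighbour LBP is shown rather than the rotation-invariant variant, and the grid size is a parameter, so the names and defaults are the editor's assumptions):

```python
import numpy as np

def lbp_image(img: np.ndarray) -> np.ndarray:
    """8-neighbour LBP code for each interior pixel of a grayscale image."""
    a = img.astype(np.int32)
    h, w = a.shape
    center = a[1:-1, 1:-1]
    # Neighbour offsets visited in a fixed order around the centre pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = a[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # s(x): 1 where the neighbour is >= the centre, else 0.
        code |= (neighbour >= center).astype(np.int32) << bit
    return code

def lbp_feature_vector(img: np.ndarray, grid=(16, 16)) -> np.ndarray:
    """Blockwise normalized LBP histograms concatenated into one vector."""
    codes = lbp_image(img)
    gh, gw = grid
    bh, bw = codes.shape[0] // gh, codes.shape[1] // gw
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
            feats.append(hist / max(hist.sum(), 1.0))   # per-block normalization
    return np.concatenate(feats)
```

With a 16×16 grid the vector has 16·16·256 entries; a rotation-invariant mapping would shrink each 256-bin histogram to 36 (or 10 uniform) bins.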
Preferably, the feature fusion unit performs feature fusion specifically including the following steps:
s41, adopting an early-fusion strategy: fuse the multi-layer features first, then train the classifier on the fused features;
s42, applying BN-layer parameter normalization separately to the depth features extracted by the transfer-learned ResNet50 and to the shallow LBP features, then performing the concat operation. For a single output channel, if the two groups of input channels are $X_1,X_2,\ldots,X_C$ and $Y_1,Y_2,\ldots,Y_C$, the single output channel of concat is:

$$Z_{\mathrm{concat}}=\sum_{i=1}^{C}X_i*K_i+\sum_{i=1}^{C}Y_i*K_{i+C}$$

wherein $K_i$ denotes the convolution kernel applied to the i-th input channel;
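A minimal sketch of this fusion step, with BN-style normalization followed by channelwise concatenation (the feature widths and variable names are illustrative assumptions, not the patent's dimensions):

```python
import numpy as np

def bn_normalize(f: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Zero-mean, unit-variance normalization of each feature over the batch."""
    return (f - f.mean(axis=0)) / np.sqrt(f.var(axis=0) + eps)

def fuse(deep: np.ndarray, lbp: np.ndarray) -> np.ndarray:
    """Early fusion: normalize each feature set, then concatenate (concat)."""
    return np.concatenate([bn_normalize(deep), bn_normalize(lbp)], axis=1)

deep = np.random.default_rng(1).standard_normal((8, 2048))   # stand-in depth features
lbp = np.random.default_rng(2).standard_normal((8, 4096))    # stand-in LBP histograms
fused = fuse(deep, lbp)
```

Normalizing both sets before concatenation keeps either feature family from dominating the SVM's distance computations.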
preferably, the specific steps of classifying by the classifying unit are as follows:
s51, taking the fusion characteristic extracted in the S42 as the input of the SVM;
s52, performing simple scaling operation on the data, and selecting a kernel function;
s53, selecting the optimal parameters C and g by five-fold cross-validation;
s54, classifying benign and malignant breast tumor ultrasonic images by using the obtained optimal parameters.
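Steps S51–S54 map naturally onto scikit-learn. The sketch below uses synthetic stand-in features and an assumed parameter grid, not the patent's data or tuned values:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the fused features: two separable classes.
X = np.vstack([rng.normal(0, 1, (40, 16)), rng.normal(3, 1, (40, 16))])
y = np.array([0] * 40 + [1] * 40)

X_scaled = MinMaxScaler().fit_transform(X)            # S52: simple scaling
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}   # candidate C and g
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)  # S53: five-fold CV
search.fit(X_scaled, y)
best = search.best_params_                            # S54 uses these parameters
```

`search.predict` then classifies new fused feature vectors with the best (C, g) refit on all training data.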
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention is based on channel-fusion preprocessing: histogram equalization improves the contrast of the ultrasonic image and the Sobel operator extracts its edge information, so fusing the original image, the histogram-equalized image and the Sobel edge image alleviates quality problems of ultrasonic images such as blurred edges and heavy noise, helps the depth feature extractor respond sensitively to breast tumor ultrasound features, and improves the precision and accuracy of feature extraction.
(2) According to the invention, the depth feature extraction of the image based on the deep learning network can describe the high-level complex features of the tumor image, the feature extraction based on the rotation-invariant local binary mode can describe the low-level simple features of the tumor image, and the feature fusion can combine the information to obtain more regional features, so that the classification performance can be effectively improved.
(3) The invention adopts an end-to-end process without the large number of handcrafted features used in traditional image recognition, making image processing faster than traditional feature-based training methods. In addition, the device is simple to operate: directly inputting a breast tumor ultrasonic image yields the classification result, image processing is fast and accurate, and a large amount of manpower and material resources are saved.
Drawings
FIG. 1 is a schematic block diagram of the structure of the present invention;
FIG. 2 is a schematic flow chart of the method of the present invention;
FIG. 3a is an original image in example 2;
FIG. 3b is the Sobel operator edge-extraction image of the original image in example 2;
FIG. 3c is the histogram-equalized image of the original image in example 2;
FIG. 3d is the fused image in example 2;
FIG. 4 is a schematic diagram of the residual structure of example 3;
FIG. 5 is a feature extraction and fusion flow chart of embodiment 5.
Detailed Description
Exemplary embodiments, features and aspects of the present invention will be described in detail below with reference to the attached drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
Specifically, the invention provides a breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics, which comprises an image database establishing unit 1, an image preprocessing unit 2, a region of interest extracting unit 3, a deep layer characteristic extracting unit 4, a shallow layer LBP characteristic extracting unit 5, a characteristic fusion unit 6 and a classifying unit 7.
The image database establishing unit 1 is used for establishing a breast tumor ultrasonic image database and acquiring breast tumor nodule images and diagnosis results;
the image preprocessing unit 2 is used for carrying out data enhancement on the ultrasonic image, simultaneously carrying out histogram equalization enhancement and Sobel operator edge extraction on the tumor image respectively, and fusing the original tumor image, the histogram-equalized image and the edge-extraction image;
the region-of-interest extraction unit 3 is used for extracting the region of interest to obtain the lesion area on which image analysis focuses;
the deep feature extraction unit 4 is used for training a pre-trained convolutional neural network model and automatically extracting image depth features of different receptive fields and different scales;
the shallow LBP feature extraction unit 5 is configured to extract a rotation invariance Local Binary Pattern (LBP) feature as a shallow texture feature of an image;
the feature fusion unit 6 is used for fusing the depth features and the shallow features to obtain final feature vectors;
the classification unit 7 is used for inputting the finally obtained features into the SVM to realize benign and malignant classification of the breast tumor ultrasonic image.
Preferably, the image preprocessing unit performs an image preprocessing operation including the steps of:
s11, carrying out histogram equalization on an original ultrasonic image to enhance the contrast of the image with a smaller dynamic range;
s12, performing the Sobel operator operation on the original ultrasonic image to approximate the gradient of the image brightness function and provide accurate edge direction information. The Sobel operator is used for detecting the image edge, and the Sobel extraction calculation formula is as follows:

$$G_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}*A,\qquad G_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\+1&+2&+1\end{bmatrix}*A,\qquad G=\sqrt{G_x^{2}+G_y^{2}}$$

wherein A is the original image, $G_x$ is the convolution in the x-direction, and $G_y$ is the convolution in the y-direction.
S13, fusing the original ultrasonic image, the histogram-equalized image and the edge-extraction image.
Preferably, the deep feature extraction unit performs image depth feature extraction specifically including the steps of:
s21, building the basic model ResNet50, loading ImageNet pre-trained weights via transfer learning, and removing the top fully connected layer;
s22, adding a global average pooling layer, a batch normalization (BN) layer and a fully connected layer;
s23, applying Dropout regularization to prevent overfitting.
Preferably, the shallow LBP feature extraction unit performs shallow LBP feature extraction, specifically including the following steps:
s31, partitioning the LBP feature image by dividing the detection window into 16×16 cells;
s32, for each pixel in a cell, comparing the gray values of its 8 neighbors with its own: if a surrounding pixel value is not smaller than the central pixel value, that position is marked 1, otherwise 0. Comparing the 8 points in the 3×3 neighborhood thus produces an 8-bit binary number, which is the LBP value of the window's central pixel. The LBP calculation formula is as follows:

$$LBP(c)=\sum_{p=0}^{7} s\big(I(p)-I(c)\big)\cdot 2^{p}$$

wherein p denotes the p-th pixel other than the central pixel in the 3×3 window, I(c) denotes the gray value of the central pixel, I(p) denotes the gray value of the p-th pixel, and s(x) is calculated as follows:

$$s(x)=\begin{cases}1,&x\ge 0\\0,&x<0\end{cases}$$
s33, computing the histogram of each cell's feature image and normalizing it;
s34, concatenating the cell histograms in spatial order into a single row to form the LBP feature vector.
Preferably, the feature fusion unit performs feature fusion specifically including the following steps:
s41, adopting an early-fusion strategy: fuse the multi-layer features first, then train the classifier on the fused features;
s42, applying BN-layer parameter normalization separately to the depth features extracted by the transfer-learned ResNet50 and to the shallow LBP features, then performing the concat operation. For a single output channel, if the two groups of input channels are $X_1,X_2,\ldots,X_C$ and $Y_1,Y_2,\ldots,Y_C$, the single output channel of concat is:

$$Z_{\mathrm{concat}}=\sum_{i=1}^{C}X_i*K_i+\sum_{i=1}^{C}Y_i*K_{i+C}$$

wherein $K_i$ denotes the convolution kernel applied to the i-th input channel;
preferably, the specific steps of classifying by the classifying unit are as follows:
s51, taking the fusion characteristic extracted in the S42 as the input of the SVM;
s52, performing simple scaling operation on the data, and selecting a kernel function;
s53, selecting the optimal parameters C and g by five-fold cross-validation;
s54, classifying benign and malignant breast tumor ultrasonic images by using the obtained optimal parameters.
Examples
Establishing a breast tumor ultrasonic image database, and obtaining breast tumor nodule images and diagnosis results
The raw dataset was collected from breast nodule patients at the Second Hospital of Hebei Medical University; all samples were labeled and classified by experienced radiologists. The entire dataset contains 2422 original images, including 1907 malignant tumor images and 515 benign tumor images.
Data preprocessing:
Aiming at quality problems of ultrasonic images such as low contrast, noise and artifacts, the invention adopts a three-channel fusion method, fusing the original image, the histogram-equalized image and the Sobel operator edge-extraction image. Taking one image as an example, FIG. 3a is the original ultrasound image, FIG. 3b the Sobel operator edge-extraction image, FIG. 3c the histogram-equalized image, and FIG. 3d the fused image.
Depth feature extraction:
(1) Residual structure
The residual unit has two main design elements: the skip connection and the identity mapping. The skip connection makes residual learning possible, and the identity mapping allows the network to be deepened. FIG. 4 is a schematic diagram of the residual structure.
When the input is x, the feature to be learned is denoted H(x); ideally the network learns the residual F(x) = H(x) − x, so the learned feature becomes F(x) + x. When the residual is 0, the stacked layers perform only an identity mapping and network performance at least does not degrade; in practice the residual is not 0, so the stacked layers learn new features on top of the input features and achieve better performance.
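The identity-mapping argument can be illustrated with a toy fully connected residual block (an illustration only; the blocks in ResNet50 are convolutional with batch normalization):

```python
import numpy as np

def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """y = ReLU(F(x) + x): two weight layers form the residual branch F(x),
    and the skip connection adds the input back."""
    f = np.maximum(x @ w1, 0.0) @ w2   # residual branch F(x)
    return np.maximum(f + x, 0.0)

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal((2, 8)))  # non-negative toy activations
w_zero = np.zeros((8, 8))
# With zero weights F(x) = 0, so the block reduces to the identity mapping:
# stacking such a block cannot reduce network performance.
assert np.allclose(residual_block(x, w_zero, w_zero), x)
```

With non-zero weights F(x) ≠ 0, so the block learns new features on top of the input, which is the "better performance" case described above.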
(2) Training ResNet50 model
The method uses a ResNet50 model with the top fully connected layer removed as the base model, loads weights pre-trained on ImageNet, and adds a global average pooling layer, a batch normalization (BN) layer and a fully connected layer;
after the fully connected layer is added, Dropout regularization is adopted to prevent overfitting. The Dropout ratio is set to 0.5: half of the hidden neurons in the network are first randomly (and temporarily) deleted while the input and output neurons remain unchanged; the input x is then propagated forward through the modified network, and the resulting loss is propagated backward through it. After a small batch of training samples has gone through this process, the corresponding parameters of the neurons that were not deleted are updated by stochastic gradient descent.
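The described Dropout procedure (ratio 0.5, random temporary deletion, gradients flowing only through surviving neurons) can be sketched as inverted dropout; the function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout_forward(x: np.ndarray, rate: float = 0.5, train: bool = True):
    """Zero a random fraction `rate` of units during training and rescale the
    survivors so the expected activation is unchanged at test time."""
    if not train:
        return x, None               # at test time all neurons are kept
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate), mask

def dropout_backward(grad_out: np.ndarray, mask: np.ndarray, rate: float = 0.5):
    """Gradients flow only through the neurons that were not deleted."""
    return grad_out * mask / (1.0 - rate)
```

After each mini-batch, only the parameters feeding surviving neurons receive stochastic-gradient-descent updates, matching the training process described above.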
Shallow LBP feature extraction:
the specific steps of shallow LBP feature extraction are as follows:
partitioning the LBP characteristic image, and dividing a detection window into 16 x 16 small areas;
for one pixel in each cell, the gray values of its 8 neighbors are compared with it; if a surrounding pixel value is not smaller than the central pixel value, that position is marked 1, otherwise 0. Comparing the 8 points in the 3×3 neighborhood thus produces an 8-bit binary number, namely the LBP value of the window's central pixel;
calculating a histogram of each regional characteristic image, and normalizing the histogram;
sequentially arranging the histograms of each block of region into a row according to a space sequence to form LBP characteristic vectors;
feature fusion:
The depth features extracted by the transfer-learned ResNet50 are fused with the shallow texture features extracted by LBP: specifically, the two groups of features are normalized after the pooling operation and then cascaded (concat) to obtain the fused features.
SVM classification:
Taking the fusion features extracted in embodiment 5 as the input of the SVM; performing a simple scaling operation on the data and selecting a kernel function; selecting the optimal parameters C and g by five-fold cross-validation; and classifying benign and malignant breast tumor ultrasonic images with the obtained optimal parameters.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical scheme described in the foregoing embodiments may still be modified, or some or all of its technical features replaced by equivalents, without such modifications and substitutions departing from the spirit of the invention.
Claims (5)
1. A breast tumor ultrasound image processing device fusing deep features and shallow LBP features, characterized in that: the device comprises an image database establishing unit, an image preprocessing unit, a region-of-interest extraction unit, a deep feature extraction unit, a shallow LBP feature extraction unit, a feature fusion unit and a classification unit;
the image database establishing unit is used for establishing a breast tumor ultrasonic image database and acquiring breast tumor nodule images;
the image preprocessing unit is used for performing data enhancement on the ultrasound image, applying histogram equalization enhancement and Sobel-operator edge extraction to the tumor image respectively, and fusing the original tumor image, the histogram-equalized image and the edge-extraction image;
the image preprocessing unit performs image preprocessing operation, specifically including the following steps:
S11, performing histogram equalization on the original ultrasound image to enhance the contrast of images with a small dynamic range;
S12, applying the Sobel operator to the original ultrasound image to obtain an approximation of the gradient of the image brightness function, thereby providing accurate edge direction information; the Sobel operator is used to detect image edges, and its extraction formula is:

$$G_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}*A,\qquad G_y=\begin{bmatrix}+1&+2&+1\\0&0&0\\-1&-2&-1\end{bmatrix}*A,\qquad G=\sqrt{G_x^{2}+G_y^{2}}$$

where $A$ is the original image, $G_x$ is the convolution in the x-direction and $G_y$ is the convolution in the y-direction;
S13, fusing the original ultrasound image, the histogram-equalized image and the edge-extraction image;
the region-of-interest extraction unit is used for extracting the region of interest of the image to obtain the focus of image analysis, and specifically comprises the following steps:
extracting the region of interest using the maximum between-class variance (Otsu) threshold segmentation algorithm and morphological processing;
extracting the color information of the ultrasound image and converting it into the a component of Lab space;
dividing the image into a background part and a target part according to its gray-level characteristics with the Otsu threshold segmentation algorithm, realizing automatic threshold segmentation of the image;
converting the gray-level image into a binary image, and obtaining the boundary contour curve and the region of interest of the image by morphological processing;
the deep feature extraction unit is used for training a pre-trained convolutional neural network model and automatically extracting image deep features of different regions of interest and different scales;
the shallow LBP feature extraction unit is used for extracting rotation-invariant local binary pattern features as the shallow texture features of the image; specifically, the LBP feature image is first partitioned by dividing the detection window into a number of 16×16 cells;
next, each pixel in a cell is taken as a centre pixel and the gray values of its 8 neighbours are compared with it: if a neighbouring gray value is larger than the centre gray value, that position is marked 1, otherwise 0; comparing the 8 pixels in each 3×3 neighbourhood yields an 8-bit binary number, the LBP value of the centre pixel in each 16×16 cell window;
then the histogram of each cell's feature image is computed and normalized; finally, the cell histograms are concatenated into a single row in spatial order to form the LBP feature vector;
the feature fusion unit is used for fusing the depth features and the shallow features to obtain final feature vectors;
the classification unit is used for inputting the final features into an SVM to classify breast tumor ultrasound images as benign or malignant.
2. The breast tumor ultrasound image processing device fusing deep features and shallow LBP features according to claim 1, characterized in that: the deep feature extraction unit performs image deep feature extraction, specifically comprising the following steps:
S21, building the base model ResNet50, loading ImageNet pre-trained weights using transfer learning, and removing the top fully connected layer;
S22, adding a global average pooling layer, a BN layer and a fully connected layer;
S23, adopting the Dropout regularization method to prevent overfitting.
3. The breast tumor ultrasound image processing device fusing deep features and shallow LBP features according to claim 2, characterized in that: the shallow LBP feature extraction unit performs shallow LBP feature extraction, specifically comprising the following steps:
S31, partitioning the LBP feature image by dividing the detection window into 16×16 cells;
S32, for each pixel in a cell, comparing the gray values of its 8 neighbours with the centre pixel value: if a neighbouring pixel value is larger than the centre pixel value, that position is marked 1, otherwise 0; comparing the 8 points in the 3×3 neighbourhood yields an 8-bit binary number, the LBP value of the window's centre pixel, calculated as:

$$LBP=\sum_{p=0}^{7} s\big(I(p)-I(c)\big)\cdot 2^{p}$$

where $p$ denotes the $p$-th pixel other than the centre pixel in the 3×3 window, $I(c)$ is the gray value of the centre pixel, $I(p)$ is the gray value of the $p$-th pixel, and $s(x)$ is defined as:

$$s(x)=\begin{cases}1, & x\geq 0\\ 0, & x<0\end{cases}$$
S33, calculating the histogram of each cell's feature image and normalizing it;
S34, concatenating the cell histograms into a single row in spatial order to form the LBP feature vector.
4. The breast tumor ultrasound image processing device fusing deep features and shallow LBP features according to claim 3, characterized in that: the feature fusion unit performs feature fusion, specifically comprising the following steps:
S41, adopting an early-fusion strategy: the multi-layer features are fused first, and a classifier is then trained on the fused features;
S42, normalizing the deep features extracted by the transfer-learned ResNet50 and the shallow LBP features with the BN-layer parameters respectively, then performing the concat operation; for a single-channel output, if the two input paths have channels $X_1, X_2, \ldots, X_C$ and $Y_1, Y_2, \ldots, Y_C$ respectively, the single output channel of concat is:

$$Z_{concat}=\sum_{i=1}^{C} X_i * K_i + \sum_{i=1}^{C} Y_i * K_{i+C}$$

where $Z_{concat}$ is the single output channel of concat and $K_i$ is the $i$-th convolution kernel.
5. The breast tumor ultrasound image processing device fusing deep features and shallow LBP features according to claim 4, characterized in that: the classification unit performs classification in the following specific steps:
S51, using the fused features extracted in step S42 as the input to the SVM;
S52, performing a simple scaling operation on the data and selecting a kernel function;
S53, selecting the optimal parameters C and g by five-fold cross-validation;
S54, classifying breast tumor ultrasound images as benign or malignant using the obtained optimal parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111045957.4A CN113870194B (en) | 2021-09-07 | 2021-09-07 | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113870194A CN113870194A (en) | 2021-12-31 |
CN113870194B true CN113870194B (en) | 2024-04-09 |
Family
ID=78994716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111045957.4A Active CN113870194B (en) | 2021-09-07 | 2021-09-07 | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113870194B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116485791B (en) * | 2023-06-16 | 2023-09-29 | 华侨大学 | Automatic detection method and system for double-view breast tumor lesion area based on absorbance |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780448A (en) * | 2016-12-05 | 2017-05-31 | 清华大学 | A kind of pernicious sorting technique of ultrasonic Benign Thyroid Nodules based on transfer learning Yu Fusion Features |
CN109886986A (en) * | 2019-01-23 | 2019-06-14 | 北京航空航天大学 | A kind of skin lens image dividing method based on multiple-limb convolutional neural networks |
CN111275116A (en) * | 2020-01-20 | 2020-06-12 | 太原理工大学 | Breast tumor ultrasonic image classification method based on three-dimensional convolution neural network |
AU2020103938A4 (en) * | 2020-12-07 | 2021-02-11 | Capital Medical University | A classification method of diabetic retinopathy grade based on deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222700A (en) * | 2019-05-30 | 2019-09-10 | 五邑大学 | SAR image recognition methods and device based on Analysis On Multi-scale Features and width study |
Non-Patent Citations (1)
Title |
---|
Stereo matching fusing color segmentation and weighted non-parametric transform; Chen Hua et al.; 《计量学报》 (Acta Metrologica Sinica); 2017-07-22; full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
George et al. | Remote computer-aided breast cancer detection and diagnosis system based on cytological images | |
CN111986150B | Interactive annotation refining method for digital pathological images | |
WO2018120942A1 (en) | System and method for automatically detecting lesions in medical image by means of multi-model fusion | |
Mahmood et al. | Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach | |
CN111028206A (en) | Prostate cancer automatic detection and classification system based on deep learning | |
Ben-Ari et al. | Domain specific convolutional neural nets for detection of architectural distortion in mammograms | |
Balamurugan et al. | Brain tumor segmentation and classification using hybrid deep CNN with LuNetClassifier | |
Fanizzi et al. | Hough transform for clustered microcalcifications detection in full-field digital mammograms | |
CN112700461B (en) | System for pulmonary nodule detection and characterization class identification | |
Xu et al. | Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients | |
Patel | Predicting invasive ductal carcinoma using a reinforcement sample learning strategy using deep learning | |
CN110766670A (en) | Mammary gland molybdenum target image tumor localization algorithm based on deep convolutional neural network | |
Unni et al. | Tumour detection in double threshold segmented mammograms using optimized GLCM features fed SVM | |
JP2022547722A (en) | Weakly Supervised Multitask Learning for Cell Detection and Segmentation | |
Mahmood et al. | Breast mass detection and classification using deep convolutional neural networks for radiologist diagnosis assistance | |
Chen et al. | Breast tumor classification in ultrasound images by fusion of deep convolutional neural network and shallow LBP feature | |
CN113870194B (en) | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics | |
Zebari et al. | CNN-based Deep Transfer Learning Approach for Detecting Breast Cancer in Mammogram Images | |
Shankara et al. | Detection of lung cancer using convolution neural network | |
Thapa et al. | Deep learning for breast cancer classification: Enhanced tangent function | |
CN115880245A (en) | Self-supervision-based breast cancer disease classification method | |
Vijayarajan et al. | A novel comparative study on breast cancer detection using different types of classification techniques | |
Balanica et al. | Breast cancer diagnosis based on spiculation feature and neural network techniques | |
Kalsoom et al. | An efficient liver tumor detection using machine learning | |
Cai et al. | A novel approach to segment and classify regional lymph nodes on computed tomography images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |