CN113450309A - Breast cancer ultrasonic image processing method, electronic device and storage medium - Google Patents
- Publication number
- CN113450309A (application CN202110590945.3A)
- Authority
- CN
- China
- Prior art keywords
- features
- ultrasonic image
- interpretable
- abstract
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 7/0012 — Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
- G06T 7/45 — Analysis of texture based on statistical description of texture using co-occurrence matrix computation
- G06T 7/49 — Analysis of texture based on structural texture description, e.g. using primitives or placement rules
- G06T 2207/10132 — Image acquisition modality: ultrasound image
- G06T 2207/20081 — Special algorithmic details: training; learning
- G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
- G06T 2207/30068 — Subject of image: mammography; breast
Abstract
The invention discloses a semantically interpretable breast cancer ultrasound image processing method, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: first, extracting interpretable features from an ultrasound image of a breast; second, extracting abstract features of the ultrasound image; and third, computing, by a semantic low-rank regression algorithm operating on the interpretable features and the abstract features, a weight coefficient matrix W of the different features in the mapping process, and selecting the interpretable features whose weights exceed a preset value to explain the information attended to by the deep learning discrimination model. The method and the embodiments of the invention can semantically explain the decision rules of the neural network and reduce manual intervention, can map the uninterpretable abstract features learned by deep learning into an interpretable semantic feature space, and can accurately assist the recognition of breast cancer ultrasound images according to the importance of each feature.
Description
Technical Field
The present invention relates to the field of ultrasound image processing, and more particularly to a semantically interpretable breast cancer ultrasound image processing method, an electronic device, and a computer-readable storage medium.
Background
In recent years, with the development of computer technology, artificial-intelligence-assisted diagnosis systems have become a major direction of research and development, with active work in ophthalmology, oncology, medical imaging and other fields. The clinical diagnosis of breast cancer lends itself to the design of computer-aided processing systems, and advancing breast ultrasound-aided processing systems is of great significance for clinical application.
Traditional auxiliary processing systems developed algorithms such as posterior probability, decision trees and logical reasoning based on theories such as expert systems and pattern recognition. The essence of deep learning, i.e. the deep neural network, is a stack of many neuron layers whose parameters are learned automatically from big data, which can improve the accuracy of image processing. However, a deep learning method reaches conclusions without giving reasons: the learned model is the result of stacking unknown parameters, and the deep neural network cannot explain how it acquired its knowledge. If a model is completely uninterpretable, its use in many fields is limited, because it cannot provide sufficiently reliable supporting information. Especially in medical application scenarios, the output of a neural network model carries no clear pathological meaning, and such unpredictability and unexplainability fail to meet the diagnosis and treatment requirements of doctors and patients.
Current analytical studies of ultrasound images for ultrasound-assisted diagnosis have not solved the interpretability problem for breast cancer ultrasound images. For example, Chinese patent application No. CN202010860099.8 discloses a dynamically interpretable reasoning method for assisting diagnosis during medical ultrasound examination, which uses a medical ultrasound knowledge graph to identify the entities involved in the examination process in real time and achieves interpretability of the reasoning process through the reasoning paths of those entities; step-by-step scanning guidance for the sonographer and dynamic reasoning diagnosis of diseases are realized through path migration and ranking of effective reasoning paths of entities in the knowledge graph. That method assists and guides the operator during ultrasound scanning, but offers no solution for accurately discriminating the target object in the ultrasound image. As another example, Chinese patent application No. CN201310065959.9 discloses an ultrasound image processing method and apparatus and a breast cancer diagnosis device, which over-segments a received ultrasound image to obtain segments with a multi-layer structure. That device is a segmentation method for breast images and cannot provide an auxiliary analysis function.
In summary, current computer methods for processing breast cancer ultrasound images suffer from insufficient interpretability of the discrimination result, and an ultrasound image processing method with a definite medical interpretation is urgently needed.
Disclosure of Invention
The invention aims to provide a semantically interpretable method for processing breast cancer ultrasound images.
The semantically interpretable breast cancer ultrasound image processing method comprises the following steps:
in the first step, interpretable features are extracted from the ultrasound image of the breast, including: extracting a gray-level co-occurrence matrix and texture features of the ultrasound image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and characterize the shape, orientation, boundary and internal texture of the tumor;
in the second step, abstract features of the ultrasound image are extracted, including: for the deep neural network model in use, learning the features extracted by each convolution kernel of the model by means of a regression algorithm to form feature maps, representing each feature map by its global mean, and taking the vector of feature-map means of the last convolutional layer as the abstract feature vector;
and in the third step, a weight coefficient matrix W of the different features in the mapping process is computed by a semantic low-rank regression algorithm from the interpretable features and the abstract features, and the interpretable features whose weights exceed a preset value are selected to explain the information attended to by the deep learning discrimination model; this information is used for assisted discrimination of the breast cancer ultrasound image, and the selected features improve the credibility of the feature test model.
Optionally, the first step comprises the following specific steps:
s11, extracting a gray level co-occurrence matrix from the ultrasound image, specifically: describing textures by analyzing the characteristic that the gray distribution repeatedly and alternately changes on a spatial position, wherein the characteristic comprises the steps of determining the frequency of occurrence of a pixel pair, of which the value of a position (i, j) in a gray co-occurrence matrix is along the direction theta and one pixel value which is d away from the direction theta is i and the other pixel value is j, on the basis of the direction theta and the step length d so as to calculate the comprehensive information of the ultrasonic image on the direction, the interval, the change amplitude and the speed, wherein the theta is 0 DEG, 45 DEG, 90 DEG or 135 DEG;
s12, extracting a gray gradient co-occurrence matrix of pixel gray and gradient based on the ultrasonic image, wherein the gray gradient co-occurrence matrix of the image is the joint distribution of image gray pixels and image gradient size, and the image gradient is obtained through a differential operator and is used for detecting a gray jumping part in the ultrasonic image;
fusing the pixel gray scale and the gray scale gradient co-occurrence matrix of the gradient to acquire the texture arrangement and the pixel change information of the ultrasonic image;
s13, extracting texture features from the ultrasonic image to quantitatively describe information in the ultrasonic image, wherein the texture features comprise features of extracting gray standard deviation, energy, gray entropy, gray mean and correlation from the ultrasonic image;
s14, extracting morphological features from the ultrasound image to describe the shape, orientation, boundary and internal texture features of the tumor, including describing at least one of:
tumor circularity, aspect ratio, and angle of skin growth, leaf formation, edge roughness, edge needling, number of internal calcifications, border ambiguity.
Optionally, the specific steps of the second step include:
extracting abstract features of the ultrasound image with a deep neural network model, including extracting the abstract features using the CAM (Class Activation Mapping) and/or Grad-CAM methods.
Optionally, the third step includes:
extracting the abstract features through a semantic low-rank regression algorithm to improve their credibility; wherein
the abstract features $X = [x_1, \ldots, x_N] \in \mathbb{R}^{N \times d}$ are defined as the input data of the semantic low-rank regression algorithm, $x_t$ ($1 \le t \le N$) being the deep-learning abstract feature of the $t$-th sample and $d$ the abstract feature dimension; the interpretable features $Y = [y_1, \ldots, y_N] \in \mathbb{R}^{N \times c}$ are the output of the algorithm, $c$ being the interpretable feature dimension and $y_t$ the interpretable feature of the $t$-th sample;
the regression model of the semantic low-rank regression algorithm is described mathematically as
$$\min_{W} \; \|XW - Y\|_2^2 + \theta\,\mathrm{Rank}(W)$$
wherein $\|XW - Y\|_2^2$ is the $l_2$ norm representing the mapping loss: the $l_2$ norm measures the discrepancy between the abstract features after passing through the mapping matrix and the interpretable features; the mapping loss grows with this discrepancy, and the performance of the mapping matrix degrades as the mapping loss grows; $\theta$ is a regularization penalty coefficient for the low-rank penalty term $\mathrm{Rank}(W)$;
since the deep-learning abstract features of different samples are correlated, the matrix $W$ is low-rank; the rank of the matrix is approximated by its trace norm, $\mathrm{Tr}(W)$, defined as
$$\mathrm{Tr}(W) = \mathrm{Tr}\!\left(\sqrt{W^{T}W}\right) = \sum_{i} \sigma_i$$
where $\sigma_i$ is the $i$-th singular value of $W$ and $W^{T}$ is the transpose of $W$.
The weight coefficient matrix W holds the weights of the mapping process; specifically, the total weight of each interpretable feature, aggregated over the N sample mapping operations, measures the influence of that feature on the discriminator over the sample set X.
The interpretable features whose weights exceed the preset value are selected to explain the information attended to by the deep learning discrimination model; this information acts on the pattern classification discriminator to improve the credibility of the feature test model.
Optionally, the deep neural network model is a convolutional neural network (CNN) model.
Optionally, the method further comprises: selecting the interpretable features according to their weights, and adjusting the trade-off between ultrasound image processing quality and computational resources based on the number of selected features.
Another object of the present invention is to provide an electronic device comprising a memory and a processor, wherein the memory stores an executable program, and the processor executes the executable program to implement the following steps:
firstly, extracting interpretable features from an ultrasonic image of a breast;
secondly, extracting abstract features of the ultrasonic image;
and thirdly, computing a weight coefficient matrix W of the different features in the mapping process by a semantic low-rank regression algorithm from the interpretable features and the abstract features, and selecting the interpretable features whose weights exceed a preset value to explain the information attended to by the deep learning discrimination model, the information being used to improve the credibility of the feature test model during assisted discrimination of the breast cancer ultrasound image.
Optionally, the electronic device processor of the present application executes the executable program, and the specific steps implemented include:
the first step, comprising: extracting a gray-level co-occurrence matrix and texture features of the ultrasound image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and characterize the shape, orientation, boundary and internal texture of the tumor;
the second step, comprising: for the deep neural network in use, learning the features extracted by each convolution kernel of the model by means of a regression algorithm to form feature maps, representing each feature map by its global mean, and taking the vector of feature-map means of the last convolutional layer as the abstract feature vector.
Yet another object of the present invention is to provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement:
the semantically interpretable breast cancer ultrasound image processing method described above.
Compared with the prior art, the semantically interpretable breast cancer ultrasound image processing method, electronic device and computer-readable storage medium of the present invention have the following advantages:
the invention utilizes the gray level co-occurrence matrix, the textural features and the morphological features of the breast cancer ultrasonic image and uses a regression algorithm to extract the features of the deep learning mode to form a feature map, and the feature map is represented by the global mean value of the feature map. And finally, calculating by using a low-rank regression algorithm to obtain the importance of different features in image discrimination, realizing the interpretability of breast cancer ultrasonic image auxiliary discrimination, and having semantic meanings consistent with clinical features.
Addressing the lack of medical interpretability of the output in existing ultrasound breast-assisted recognition technology, the method solves the problem that the output cannot be interpreted when deep learning is used for ultrasound-assisted recognition of breast cancer, and improves clinical credibility. While improving discrimination performance with deep learning, the method selects features in a targeted manner, gives the result a semantically interpretable discrimination process, and builds a bridge between physician knowledge and computer-algorithm knowledge that helps doctors understand and accept the classification result. The consistency of the deep learning method's decision process and output with the concept of evidence-based medicine promotes the application of deep learning algorithms in breast tumor diagnosis.
Drawings
Fig. 1 is a flowchart of a semantically interpretable breast cancer ultrasound image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating gray level co-occurrence matrix calculation using a sliding window for an original image according to an embodiment of the present application;
fig. 3 is a flowchart of an embodiment of an ultrasound image processing method according to the present application.
Detailed Description
The invention aims to provide a semantically interpretable breast cancer ultrasound image processing method, an electronic device and a computer-readable storage medium, which can semantically interpret the discrimination rules of a neural network and reduce manual intervention. BI-RADS (Breast Imaging Reporting and Data System) serves as the basis for diagnosis from breast ultrasound images and has the best clinical interpretability; its descriptions agree with physicians' professional knowledge, so doctors understand them and their meaning. Extracting the features in the BI-RADS description specification as interpretable features and establishing a mapping from the deep-learning abstract features to those interpretable features helps doctors understand the decision process of the classification model and improves the credibility of the deep learning classifier. The regression model can map the uninterpretable abstract features of deep learning into the interpretable semantic feature space, where the deviation of the values and the importance of each feature can be observed, achieving a medical interpretation of the ultrasound breast cancer diagnosis result.
The invention is described in detail below with reference to the drawings and specific examples.
The breast cancer ultrasound image processing method of the invention, based on semantic interpretability, satisfies the need to interpret the output of assisted recognition in clinical practice by interpreting both the image features and the deep learning features, thereby enhancing comprehensibility for doctors. The basic idea is to extract the image gray-level co-occurrence matrix, texture features and morphological features, combine them with the feature maps generated by an interpretable deep learning method, use a low-rank regression model to map the uninterpretable abstract features of deep learning into an interpretable semantic feature space, and perform interpretable breast cancer ultrasound-assisted recognition in that space according to the importance of each feature. With reference to fig. 1 and 3, the specific contents include:
In a first step S1, interpretable features are extracted from the ultrasound image of the breast, including: extracting a gray-level co-occurrence matrix and texture features of the ultrasound image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS, so that they conform to the clinical consensus on breast cancer features and provide a diagnostic explanation that doctors can understand; the morphological features characterize the shape, orientation, boundary and internal texture of the tumor;
the first part comprises the following specific steps:
1.1, extracting a gray-level co-occurrence matrix from the ultrasound image, specifically: texture is described by analyzing how the gray-level distribution repeats and alternates over spatial positions; for an image represented in matrix form, given a direction θ and a step length d, the value at position (i, j) of the matrix is the frequency of pixel pairs in which one pixel has gray value i and the pixel at distance d along direction θ has gray value j, so as to compute comprehensive information about the direction, interval, amplitude and speed of gray-level change in the ultrasound image, where θ is 0°, 45°, 90° or 135°. For example, fig. 2 is a schematic diagram of gray-level co-occurrence matrix calculation on an original ultrasound image using a sliding window, where the left matrix shows the state before the window slides and the right matrix the state after it slides one pixel to the right.
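As a minimal sketch of step 1.1 (plain NumPy, with an illustrative 4×4 toy image quantized to 4 gray levels rather than a real ultrasound frame), a normalized co-occurrence matrix for one (θ, d) pair can be computed as follows:

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Gray-level co-occurrence matrix for direction theta (degrees) and step d.

    Counts, for each gray pair (i, j), how often a pixel of value i has a
    pixel of value j at offset d along theta; theta in {0, 45, 90, 135}.
    """
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[theta]
    m = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    return m / max(m.sum(), 1)   # normalize to joint probabilities

# Toy 4x4 image with 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, d=1, theta=0, levels=4)
```

In practice one matrix is computed per (θ, d) combination and the resulting statistics are averaged or concatenated.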
1.2, extracting a gray-gradient co-occurrence matrix of pixel gray level and gradient from the ultrasound image: this matrix is the joint distribution of image gray levels and image gradient magnitudes, where the image gradient, obtained with a differential operator, detects the gray-level jump regions in the ultrasound image. Fusing the two helps to reveal information on the image texture arrangement and pixel variation.
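A sketch of step 1.2 under stated assumptions: `np.gradient` stands in for "a differential operator", and both channels are quantized into a fixed number of bins before the joint histogram is accumulated (bin counts are illustrative choices, not prescribed by the source):

```python
import numpy as np

def gray_gradient_matrix(img, gray_levels=8, grad_levels=8):
    """Joint histogram of quantized gray level and gradient magnitude."""
    f = img.astype(np.float64)
    gy, gx = np.gradient(f)            # simple differential operator
    mag = np.hypot(gx, gy)             # large magnitude = gray-level 'jump'
    g = np.minimum((f / (f.max() + 1e-12) * gray_levels).astype(int),
                   gray_levels - 1)
    m = np.minimum((mag / (mag.max() + 1e-12) * grad_levels).astype(int),
                   grad_levels - 1)
    H = np.zeros((gray_levels, grad_levels))
    np.add.at(H, (g.ravel(), m.ravel()), 1)   # accumulate joint counts
    return H / H.sum()                        # normalize to a distribution

img = np.zeros((16, 16))
img[:, 8:] = 255.0                     # a hard vertical edge (a gray 'jump')
H = gray_gradient_matrix(img)
```

For the toy edge image, mass concentrates in the low-gradient bins of the two flat regions, with the edge pixels landing in the high-gradient bins.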
1.3, extracting texture features from the breast ultrasound image to quantitatively describe its information, wherein five features are extracted: the gray-level standard deviation, energy, gray-level entropy, gray-level mean and correlation.
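One common way to obtain the five descriptors of step 1.3 is from a normalized co-occurrence matrix P (the standard GLCM statistics are assumed here; the patent does not spell out its exact formulas):

```python
import numpy as np

def texture_features(P):
    """Five GLCM texture descriptors from a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    s_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    nz = P[P > 0]
    return {
        "mean": mu_i,                          # gray-level mean
        "std": s_i,                            # gray-level standard deviation
        "energy": (P ** 2).sum(),              # angular second moment
        "entropy": -(nz * np.log2(nz)).sum(),  # gray-level entropy
        "correlation": (((i - mu_i) * (j - mu_j) * P).sum()
                        / (s_i * s_j + 1e-12)),
    }

P = np.full((4, 4), 1 / 16)   # toy uniform co-occurrence matrix
feats = texture_features(P)
```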
1.4, extracting morphological features from the breast ultrasound image to describe the shape, orientation, boundary and internal texture of the tumor, including the tumor's circularity, aspect ratio, growth angle relative to the skin, lobulation, edge roughness, edge spiculation, number of internal calcifications, boundary ambiguity and the like.
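Two of the listed morphological features (circularity and aspect ratio) can be sketched on a binary tumor mask; the perimeter estimate below (foreground pixels touching background, 4-connectivity) and the square "tumor" are illustrative assumptions, not the patent's method:

```python
import numpy as np

def circularity_and_aspect(mask):
    """Circularity 4*pi*A/P^2 and bounding-box aspect ratio of a binary mask."""
    area = mask.sum()
    padded = np.pad(mask, 1)
    nb = (padded[:-2, 1:-1] + padded[2:, 1:-1] +      # up/down neighbors
          padded[1:-1, :-2] + padded[1:-1, 2:])       # left/right neighbors
    perimeter = ((mask == 1) & (nb < 4)).sum()        # boundary pixels
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return 4 * np.pi * area / perimeter ** 2, width / height

mask = np.zeros((20, 20), dtype=int)
mask[5:15, 5:15] = 1                  # hypothetical square 'tumor' region
circ, aspect = circularity_and_aspect(mask)
```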
In a second step S2, abstract features of the ultrasound image are extracted, including: for the deep neural network model in use, learning the features extracted by each convolution kernel of the model by means of a regression algorithm to form feature maps, representing each feature map by its global mean, and taking the vector of feature-map means of the last convolutional layer as the abstract feature vector.
2.1, abstract features of the ultrasound image are extracted with a deep neural network; methods that can be used include, but are not limited to, CAM (Class Activation Mapping) and/or Grad-CAM.
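A minimal NumPy sketch of this step, with a random array standing in for the last convolutional layer's output (a real pipeline would take that tensor, and the class weights, from the trained CNN):

```python
import numpy as np

def gap_vector(fmaps):
    """Global average pooling: (C, H, W) feature maps -> C-dim abstract vector."""
    return fmaps.mean(axis=(1, 2))

def cam(fmaps, class_weights):
    """Class Activation Map: the class weights re-weight and sum the feature
    maps into an H x W heat map showing where the network attends."""
    return np.tensordot(class_weights, fmaps, axes=1)

rng = np.random.default_rng(0)
fmaps = rng.random((64, 7, 7))        # stand-in for last-conv-layer output
v = gap_vector(fmaps)                 # abstract feature vector of step 2
heat = cam(fmaps, rng.random(64))     # CAM attention map
```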
In a third step S3, a weight coefficient matrix W of the different features in the mapping process is computed by a semantic low-rank regression algorithm from the interpretable features and the abstract features, and the interpretable features whose weights exceed a preset value are selected to explain the information attended to in the deep learning discrimination model; this information is used for assisted discrimination of the breast cancer ultrasound image, and the selected features improve the credibility of the feature test model used to test them.
3.1, the abstract features are processed by the semantic low-rank regression algorithm to find the features that play an important role in the assisted discrimination of breast cancer ultrasound images, thereby improving the credibility of the abstract features. Here, the abstract features $X = [x_1, \ldots, x_N] \in \mathbb{R}^{N \times d}$ are the input data of the algorithm, $x_t$ ($1 \le t \le N$) being the deep-learning abstract feature of the $t$-th sample and $d$ the abstract feature dimension; the interpretable features $Y = [y_1, \ldots, y_N] \in \mathbb{R}^{N \times c}$ are the output of the algorithm, $c$ being the interpretable feature dimension and $y_t$ the interpretable feature of the $t$-th sample. The regression model of the semantic low-rank regression algorithm is described mathematically as
$$\min_{W} \; \|XW - Y\|_2^2 + \theta\,\mathrm{Rank}(W)$$
wherein the first term on the right, $\|XW - Y\|_2^2$, is the $l_2$ (squared) norm representing the mapping loss: it measures the discrepancy between the abstract features after passing through the mapping matrix and the interpretable features, and the larger the discrepancy, the larger the loss and the worse the mapping matrix performs. $\theta$ is a regularization penalty coefficient for the low-rank penalty term $\mathrm{Rank}(W)$. Since the deep-learning abstract features of different samples are generally correlated, by this prior knowledge the matrix $W$ is low-rank, and by rank-approximation theory the rank of the matrix is approximated by its trace norm, $\mathrm{Tr}(W)$, which can be defined as
$$\mathrm{Tr}(W) = \mathrm{Tr}\!\left(\sqrt{W^{T}W}\right) = \sum_{i} \sigma_i$$
where $\sigma_i$ is the $i$-th singular value of $W$ and $W^{T}$ is the transpose of $W$.
The weight coefficient matrix W holds the weights of the mapping process; specifically, the total weight of each interpretable feature, aggregated over the N sample mapping operations, gives the influence W' of that feature on the discriminator over the sample set X.
From the computed W', the interpretable features whose weights exceed the preset value are selected to explain the information the deep learning discrimination model attends to most; this information acts on the pattern classification discriminator, playing a larger role there, so as to improve the credibility of the feature test model.
An embodiment of the present application further provides an electronic device, including a memory and a processor, where the memory stores an executable program, and the processor executes the executable program to implement the following steps:
the first step, extracting interpretable features from the ultrasonic image of the breast, includes: extracting a gray-level co-occurrence matrix and texture features of the ultrasonic image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and represent the shape, orientation, boundary and internal texture of the tumor;
secondly, extracting abstract features of the ultrasonic image, which includes: for the deep neural network used, learning the features to be extracted by each convolution kernel with a regression algorithm to form feature maps, representing each feature map by its global mean value, and taking the feature-map mean vector formed by the last convolutional layer as the abstract feature vector;
and thirdly, calculating a weight coefficient matrix W for the different features in the mapping process by a semantic low-rank regression algorithm according to the interpretable features and the abstract features, and selecting the interpretable features with weights higher than a preset value to explain the information attended to by the deep learning discrimination model, wherein the information is used for improving the credibility of the feature test model when assisting the discrimination of breast cancer ultrasonic images.
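The second step's global-mean representation can be sketched as follows. This is a minimal illustration with a hypothetical function name; any deep learning framework that exposes the last convolutional layer's output would supply the input array.

```python
import numpy as np

def abstract_feature_vector(last_conv_maps):
    """Turn the last convolutional layer's output into the abstract feature vector.

    last_conv_maps: (channels, H, W) feature maps; each map is
    represented by its global mean, giving one value per convolution
    kernel, so the result has one entry per channel.
    """
    return last_conv_maps.mean(axis=(1, 2))
```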
The invention also discloses an electronic device, which comprises a memory and a processor, wherein the memory stores an executable program and the processor executes the executable program to implement the following steps:
firstly, extracting interpretable features from an ultrasonic image of a breast;
secondly, extracting abstract features of the ultrasonic image;
and thirdly, calculating a weight coefficient matrix W for the different features in the mapping process by a semantic low-rank regression algorithm according to the interpretable features and the abstract features, and selecting the interpretable features with weights higher than a preset value to explain the information attended to by the deep learning discrimination model, wherein the information is used for improving the credibility of the feature test model when assisting the discrimination of breast cancer ultrasonic images.
As a specific implementation, in combination with the foregoing, when the processor of the electronic device according to the embodiment of the present invention executes the executable program, the steps may be implemented as follows:
the first step includes: extracting a gray-level co-occurrence matrix and texture features of the ultrasonic image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and represent the shape, orientation, boundary and internal texture of the tumor;
the second step includes: for the deep neural network used, learning the features to be extracted by each convolution kernel with a regression algorithm to form feature maps, representing each feature map by its global mean value, and taking the feature-map mean vector formed by the last convolutional layer as the abstract feature vector.
The present invention also discloses a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the semantic-interpretability-based breast cancer ultrasonic image processing method described above.
The invention and its embodiments have been described above schematically and without limitation; the invention may be embodied in other specific forms without departing from its spirit or essential characteristics. What is shown in the drawings is only one embodiment of the invention, the actual construction is not limited thereto, and any reference signs in the claims shall not limit the claims concerned. Therefore, if a person skilled in the art, taught by the present invention, devises a structure or embodiment similar to the above technical solution without inventive effort, it shall fall within the protection scope of this patent. Furthermore, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Several of the elements recited in the product claims may also be implemented by a single element in software or hardware. The terms first, second, etc. are used to denote names but do not denote any particular order.
Claims (9)
1. A method for processing ultrasound images of breast cancer based on semantic interpretability, comprising:
the first step, extracting interpretable features from the ultrasonic image of the breast, includes: extracting a gray-level co-occurrence matrix and texture features of the ultrasonic image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and represent the shape, orientation, boundary and internal texture of the tumor;
secondly, extracting abstract features of the ultrasonic image, which includes: for the deep neural network model used, learning the features to be extracted by each convolution kernel in the deep neural network model with a regression algorithm to form feature maps, representing each feature map by its global mean value, and taking the feature-map mean vector formed by the last convolutional layer as the abstract feature vector;
and thirdly, calculating a weight coefficient matrix W for the different features in the mapping process by a semantic low-rank regression algorithm according to the interpretable features and the abstract features, and selecting the interpretable features with weights higher than a preset value to explain the information attended to in the deep learning discrimination model, wherein the information is used for the auxiliary discrimination of the breast cancer ultrasonic image, and the selected interpretable features improve the credibility of the feature test model.
2. The semantic-interpretability-based breast cancer ultrasonic image processing method according to claim 1, wherein the first step specifically comprises:
s11, extracting a gray level co-occurrence matrix from the ultrasound image, specifically: describing textures by analyzing the characteristic that the gray distribution repeatedly and alternately changes on a spatial position, wherein the characteristic comprises the steps of determining the frequency of occurrence of a pixel pair, of which the value of a position (i, j) in a gray co-occurrence matrix is along the direction theta and one pixel value which is d away from the direction theta is i and the other pixel value is j, on the basis of the direction theta and the step length d so as to calculate the comprehensive information of the ultrasonic image on the direction, the interval, the change amplitude and the speed, wherein the theta is 0 DEG, 45 DEG, 90 DEG or 135 DEG;
s12, extracting a gray gradient co-occurrence matrix of pixel gray and gradient based on the ultrasonic image, wherein the gray gradient co-occurrence matrix of the image is the joint distribution of image gray pixels and image gradient size, and the image gradient is obtained through a differential operator and is used for detecting a gray jumping part in the ultrasonic image;
fusing the pixel gray levels and the gradients in the gray-gradient co-occurrence matrix to acquire the texture arrangement and pixel-variation information of the ultrasonic image;
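S12 can be sketched as follows, assuming forward differences as the differential operator and uniform quantization of the gradient magnitude (both are assumptions; the patent does not fix them, and the function name is hypothetical):

```python
import numpy as np

def gray_gradient_comatrix(image, gray_levels=8, grad_levels=8):
    """Joint distribution of quantized gray level and gradient magnitude.

    The gradient is taken with simple forward differences, which
    highlights gray-level jumps such as lesion boundaries; the matrix
    entry (g, k) counts pixels with gray level g and gradient bin k.
    """
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal forward difference
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical forward difference
    grad = np.hypot(gx, gy)
    if grad.max() > 0:
        grad = grad / grad.max() * (grad_levels - 1)  # quantize to grad_levels bins
    g = grad.astype(int)
    m = np.zeros((gray_levels, grad_levels))
    for gray, gr in zip(image.ravel(), g.ravel()):
        m[gray, gr] += 1
    return m
```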
s13, extracting texture features from the ultrasonic image to quantitatively describe information in the ultrasonic image, wherein the texture features comprise features of extracting gray standard deviation, energy, gray entropy, gray mean and correlation from the ultrasonic image;
s14, extracting morphological features from the ultrasound image to describe the shape, orientation, boundary and internal texture features of the tumor, including describing at least one of:
tumor circularity, aspect ratio, and angle of skin growth, leaf formation, edge roughness, edge needling, number of internal calcifications, border ambiguity.
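Two of the S14 morphological features, circularity and aspect ratio, can be sketched from a binary tumor mask. The boundary-pixel perimeter estimate and the function name are simplifications of ours, not the patent's definitions.

```python
import numpy as np

def morphological_features(mask):
    """Circularity and aspect ratio of a binary tumor mask.

    Circularity = 4*pi*area / perimeter^2 (1.0 for an ideal circle);
    aspect ratio = bounding-box width / height. The perimeter is
    approximated by counting foreground pixels that have at least one
    background 4-neighbour, so it is only a rough estimate.
    """
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])   # all 4 neighbours set
    perimeter = int(((mask == 1) & (interior == 0)).sum())
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"circularity": float(circularity), "aspect_ratio": width / height}
```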
3. The semantic-interpretability-based breast cancer ultrasonic image processing method according to claim 1, wherein the second step specifically comprises:
extracting abstract features of the ultrasonic image by using a deep neural network model, including extracting abstract features of the ultrasonic image by a CAM (Class Activation Mapping) and/or Grad-CAM method.
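A minimal sketch of the CAM computation (the weighted sum of the last convolutional layer's feature maps, weighted by the classifier weights of the target class); Grad-CAM would replace the classifier weights with pooled gradients. The function name and array shapes are assumptions.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class Activation Mapping heat map.

    feature_maps: (channels, H, W) output of the last conv layer;
    fc_weights: (num_classes, channels) classifier weights following
    global average pooling. Returns an (H, W) map normalized to [0, 1].
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for overlay on the ultrasound image
    return cam
```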
4. The semantic-interpretability-based breast cancer ultrasonic image processing method according to claim 1, wherein the third step specifically comprises:
extracting the abstract features through a semantic low-rank regression algorithm to improve the credibility of the abstract features; wherein,
defining the abstract features $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{d \times N}$ as the input data of the semantic low-rank regression algorithm, where $x_t$ ($1 \le t \le N$) is the deep learning abstract feature of the $t$-th sample and $d$ is the abstract feature dimension; and letting the interpretable features $Y = [y_1, y_2, \ldots, y_N] \in \mathbb{R}^{c \times N}$ be the output of the semantic low-rank regression algorithm, where $c$ is the dimension of the interpretable features and $y_t$ is the interpretable feature of the $t$-th sample;
the regression model of the semantic low-rank regression algorithm is described mathematically as:

$$\min_{W} \; \|Y - WX\|_2^2 + \theta \, \mathrm{Rank}(W)$$

wherein $\|Y - WX\|_2^2$ is the $l_2$ (squared) norm representing the mapping loss; the $l_2$ norm represents the degree of difference between the interpretable features and the abstract features after the latter pass through the mapping matrix, the mapping loss increases with the degree of difference, and the performance of the mapping matrix decreases as the mapping loss increases; $\theta$ is a regularization penalty coefficient for the low-rank penalty term $\mathrm{Rank}(W)$;
the deep learning abstract features of different samples are correlated, so the matrix $W$ is low-rank; the rank of the matrix is approximated by its trace norm $\mathrm{Tr}(W)$, which is defined as:

$$\mathrm{Tr}(W) = \sum_{i} \sigma_i = \mathrm{Tr}\!\left(\sqrt{W^{T} W}\right)$$

where $\sigma_i$ is the $i$-th singular value of $W$ and $W^{T}$ is the transpose of $W$.
The weight coefficient matrix $W$ holds the weights of the mapping process; specifically, a total weight $W'$ is obtained by aggregating the mapping weights of the interpretable features over the $N$ sample mapping operations, and $W'$ measures the degree of influence of each interpretable feature on the discriminator over the sample set $X$.
and selecting the interpretable features with weights higher than the preset value to explain the information attended to in the deep learning discrimination model, wherein the information acts on the pattern classification discriminator to improve the credibility of the feature test model.
5. The method according to claim 1 or 3, wherein the deep neural network model is a CNN deep learning network model.
6. The semantic-interpretability-based breast cancer ultrasonic image processing method according to claim 4, further comprising: selecting the interpretable features according to their weights; and adjusting the trade-off between ultrasonic image processing effect and computational resources based on the number of selected features.
7. An electronic device comprising a memory having an executable program stored therein and a processor executing the executable program to perform the steps of:
firstly, extracting interpretable features from an ultrasonic image of a breast;
secondly, extracting abstract features of the ultrasonic image;
and thirdly, calculating a weight coefficient matrix W for the different features in the mapping process by a semantic low-rank regression algorithm according to the interpretable features and the abstract features, and selecting the interpretable features with weights higher than a preset value to explain the information attended to in the deep learning discrimination model, wherein the information is used for the auxiliary discrimination of the breast cancer ultrasonic image, and the selected interpretable features improve the credibility of the feature test model.
8. The electronic device of claim 7,
the first step includes: extracting a gray-level co-occurrence matrix and texture features of the ultrasonic image, and extracting a plurality of morphological features, wherein the morphological features are the same as or similar to the features described by BI-RADS and represent the shape, orientation, boundary and internal texture of the tumor;
the second step includes: for the deep neural network used, learning the features to be extracted by each convolution kernel in the deep neural network model with a regression algorithm to form feature maps, representing each feature map by its global mean value, and taking the feature-map mean vector formed by the last convolutional layer as the abstract feature vector.
9. A computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the semantic-interpretability-based breast cancer ultrasonic image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110590945.3A CN113450309A (en) | 2021-05-28 | 2021-05-28 | Breast cancer ultrasonic image processing method, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113450309A true CN113450309A (en) | 2021-09-28 |
Family
ID=77810393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110590945.3A Pending CN113450309A (en) | 2021-05-28 | 2021-05-28 | Breast cancer ultrasonic image processing method, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113450309A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114403925A (en) * | 2022-01-21 | 2022-04-29 | 山东黄金职业病防治院 | Breast cancer ultrasonic detection system |
CN118299032A (en) * | 2024-03-29 | 2024-07-05 | 江南大学 | Construction method of breast cancer early screening model and screening auxiliary system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598709A (en) * | 2018-11-29 | 2019-04-09 | 东北大学 | Mammary gland assistant diagnosis system and method based on fusion depth characteristic |
CN111767952A (en) * | 2020-06-30 | 2020-10-13 | 重庆大学 | Interpretable classification method for benign and malignant pulmonary nodules |
Non-Patent Citations (1)
Title |
---|
Fu Guishan (付贵山): "Research on Deep-Learning Breast Ultrasound Image Classifiers and Their Interpretability" (深度学习乳腺超声图像分类器及其可解释性研究), Wanfang Data (《万方数据》) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114403925A (en) * | 2022-01-21 | 2022-04-29 | 山东黄金职业病防治院 | Breast cancer ultrasonic detection system |
CN118299032A (en) * | 2024-03-29 | 2024-07-05 | 江南大学 | Construction method of breast cancer early screening model and screening auxiliary system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kasinathan et al. | Automated 3-D lung tumor detection and classification by an active contour model and CNN classifier | |
Al-Antari et al. | Deep learning computer-aided diagnosis for breast lesion in digital mammogram | |
Ibrahim et al. | Breast cancer segmentation from thermal images based on chaotic salp swarm algorithm | |
Patil et al. | Automated mammogram breast cancer detection using the optimized combination of convolutional and recurrent neural network | |
Yang et al. | Multifeature-based surround inhibition improves contour detection in natural images | |
Al-Faris et al. | Breast MRI tumour segmentation using modified automatic seeded region growing based on particle swarm optimization image clustering | |
Kumar et al. | An improved Gabor wavelet transform and rough K-means clustering algorithm for MRI brain tumor image segmentation | |
Bozkurt | Skin lesion classification on dermatoscopic images using effective data augmentation and pre-trained deep learning approach | |
Teramoto et al. | Computer-aided classification of hepatocellular ballooning in liver biopsies from patients with NASH using persistent homology | |
Song et al. | Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm | |
CN113450309A (en) | Breast cancer ultrasonic image processing method, electronic device and storage medium | |
Zuo et al. | An embedded multi-branch 3D convolution neural network for false positive reduction in lung nodule detection | |
Lee et al. | Unsupervised segmentation of lung fields in chest radiographs using multiresolution fractal feature vector and deformable models | |
Rehman et al. | Automatic melanoma detection and segmentation in dermoscopy images using deep RetinaNet and conditional random fields | |
Nigudgi et al. | RETRACTED ARTICLE: Lung cancer CT image classification using hybrid-SVM transfer learning approach | |
Shah et al. | Non-invasive multi-channel deep learning convolutional neural networks for localization and classification of common hepatic lesions | |
CN116884623A (en) | Medical rehabilitation prediction system based on laser scanning imaging | |
Zhang et al. | Fully multi-target segmentation for breast ultrasound image based on fully convolutional network | |
Rani et al. | Radon transform-based improved single seeded region growing segmentation for lung cancer detection using AMPWSVM classification approach | |
Wang et al. | Automatic vessel segmentation on fundus images using vessel filtering and fuzzy entropy | |
Rachapudi et al. | Diabetic retinopathy detection by optimized deep learning model | |
Krishna et al. | Optimization empowered hierarchical residual VGGNet19 network for multi-class brain tumour classification | |
Li et al. | A dual attention-guided 3D convolution network for automatic segmentation of prostate and tumor | |
Pang et al. | Image segmentation based on the hybrid bias field correction | |
Chhabra et al. | Comparison of different edge detection techniques to improve quality of medical images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210928 |