CN111260639A - Multi-view information-collaborative breast benign and malignant tumor classification method - Google Patents

Multi-view information-collaborative breast benign and malignant tumor classification method

Info

Publication number
CN111260639A
CN111260639A (application CN202010061740.1A)
Authority
CN
China
Prior art keywords
layer
model
benign
relu
breast
Prior art date
Legal status
Withdrawn
Application number
CN202010061740.1A
Other languages
Chinese (zh)
Inventor
张聚
俞伦端
周海林
吴崇坚
吕金城
陈坚
Current Assignee
Zhijiang College of ZJUT
Original Assignee
Zhijiang College of ZJUT
Priority date
Filing date
Publication date
Application filed by Zhijiang College of ZJUT filed Critical Zhijiang College of ZJUT
Priority to CN202010061740.1A priority Critical patent/CN111260639A/en
Publication of CN111260639A publication Critical patent/CN111260639A/en
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A method for classifying benign and malignant breast tumors based on multi-view information cooperation comprises the following steps: step 1) medical image preprocessing: data enhancement is applied to the images of the four views of a breast molybdenum-target X-ray (mammography) examination; step 2) a multi-view convolutional network sub-model is constructed for the image of each view; step 3) a multi-view information-cooperation convolutional neural network model is constructed: the outputs of the four view sub-models are unified into a shared neuron classification layer and finally fed to a Sigmoid function to obtain the classification result, and a penalty function targeting false negatives is added, giving them a greater penalty to reduce their occurrence; step 4) classification of benign and malignant breast tumors: the mammography image to be examined is input into the multi-view information-cooperation convolutional neural network model constructed in step 3), and the network output gives the benign or malignant result for the tumor. The method improves the accuracy of benign/malignant tumor classification while avoiding the poor model generalization that results from insufficient neural-network training data due to scarce case image data.

Description

Multi-view information-collaborative breast benign and malignant tumor classification method
Technical Field
The invention relates to a classification method of benign and malignant breast tumors.
Background Art
The incidence and mortality of breast cancer have risen over the last half century. In China, about 210,000 new cases of female breast cancer are diagnosed each year, and the average annual growth rate has reached 3.5%, roughly twice the world average and the highest in the world. Early breast cancer often shows no symptoms: a lump may be found without any discomfort, so cancer can be ruled out only through regular physical examination. In the diagnosis and treatment of breast cancer, medical means such as ultrasound, molybdenum-target X-ray (mammography) examination, magnetic resonance imaging (MRI), CT examination, pathology and genetic testing are mainly used to assist diagnosis and treatment. Among them, ultrasound and mammography are mainly used for primary screening of breast cancer, MRI for evaluating therapeutic effect, and pathology for the definitive diagnosis of cancer and the evaluation of treatment regimens.
Molybdenum-target X-ray examination of the breast is currently the first-choice and the simplest, most reliable non-invasive means of diagnosing breast disease. It causes relatively little pain, is simple to perform, offers high resolution and good repeatability, and the retained images can be compared before and after treatment without restriction by age or body shape. It has high diagnostic reference value and has greatly improved the early detection rate of breast cancer in European and American countries. With the development of computer technology, more and more computer-aided diagnosis methods are being applied to early cancer diagnosis. Medical personnel can extract lesion information from examination images through computer image-segmentation techniques, improving diagnostic accuracy.
Deep learning methods applied to medical image segmentation help doctors diagnose diseases more accurately while saving time and labor, and have become a new form of computer-aided diagnosis. Classifying benign and malignant breast tumors requires accurately segmenting the target lesion information and then classifying the tumor according to that information by deep learning. However, the field lacks large training datasets, so identifying breast tumors remains very difficult. The invention therefore provides a breast benign/malignant tumor classification method based on multi-view information cooperation, which uses limited mammography image data to classify breast tumors by deep learning. By learning from the four views of the mammography image, the method effectively reduces dependence on huge data volumes and improves the accuracy of benign/malignant classification.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides a breast-tumor classification method based on a multi-view information-cooperation convolutional neural network.
The method constructs an information-cooperation sub-model for each of the four views of a breast molybdenum-target X-ray image. Each sub-model contains a fine-tuned, pre-trained DenseNet that segments lesion information from the mammography image of its view. The four information-cooperation sub-models then use an adaptive weighting scheme during error back-propagation to segment the breast tumor and classify it according to the lesion information. In addition, the invention introduces a penalty function to reduce the false-positive and false-negative rates. Tests show that the invention can effectively classify benign and malignant breast tumors and can be applied to routine clinical screening work in hospitals.
To make the objects, technical solutions and advantages of the invention clearer, the technical solution is described further below. The multi-view information-cooperation method for classifying benign and malignant breast tumors comprises the following specific steps:
step 1) preprocessing a medical image;
Each mammography case consists of images from four views: left craniocaudal (L-CC), right craniocaudal (R-CC), left mediolateral oblique (L-MLO) and right mediolateral oblique (R-MLO). Data enhancement is applied to the acquired breast molybdenum-target X-ray images to improve the generalization and anti-interference capability of the model. 80% of the processed data is used as the training set of the neural network, 10% as the validation set, and the remaining 10% as the test set.
Step 2), constructing a multi-view convolutional network sub-model;
To allow the DenseNet pre-trained on the ImageNet natural-image dataset to accurately segment breast tumors, the method removes the last fully connected layer of the network and appends three fully connected layers with 2048, 1024 and 2 neurons, respectively. The weights of these three fully connected layers are randomly initialized with the Xavier algorithm, and the activation function of the last layer is set to the Sigmoid function.
Step 3), constructing a multi-view information cooperation convolution neural network model;
The multi-view information-cooperation convolutional neural network model of the invention is composed of the four sub-models provided in step 2). The two neurons of each sub-model's output layer are connected to a shared neuron classification layer of the multi-view information-cooperation model, followed by a Sigmoid function. The output of this neuron classification layer is the prediction made by the entire model.
In the event of misclassification, misdiagnosing a malignant tumor as benign (a false negative) is more costly than misdiagnosing a benign tumor as malignant (a false positive), because clinicians may wrongly conclude that the tumor is benign and the opportunity for effective early treatment of breast cancer is lost. To address this, the invention proposes a penalized cross-entropy loss algorithm, which distinguishes false-negative from false-positive errors by penalizing each type of error differently.
Step 4), classifying benign and malignant breast tumors;
The breast molybdenum-target X-ray image to be examined is input into the multi-view information-cooperation convolutional neural network model constructed in step 3), and the network output gives the benign or malignant result for the tumor, showing that the invention can assist in classifying benign and malignant breast tumors.
The invention has the following advantages:
1. Multi-view image information of the same tumor is combined and fully utilized, which improves the accuracy of benign/malignant classification.
2. It alleviates the poor model generalization caused by insufficient neural-network training data due to scarce case image data.
Drawings
FIG. 1 is a schematic diagram of the overall network framework architecture of a software system implementing the method of the present invention;
FIG. 2 is a schematic diagram of a DenseBlock;
FIG. 3 is a schematic diagram of the DenseNet network structure;
FIG. 4 is a schematic input/output diagram of a DenseBlock feature map;
FIG. 5 is a schematic diagram of the structure of DenseNet-B;
Detailed description of the embodiments:
The invention is explained in detail below with reference to the drawings.
The network structure of the multi-view information-cooperation method for classifying benign and malignant breast tumors is shown in FIG. 1. The method comprises the following specific steps:
step 1) preprocessing a medical image;
Each mammography case consists of images from four views: left craniocaudal (L-CC), right craniocaudal (R-CC), left mediolateral oblique (L-MLO) and right mediolateral oblique (R-MLO). One or more data-enhancement operations such as flipping, rotation, scaling, cropping and translation are applied to the acquired breast molybdenum-target X-ray images of any of the four views to improve the anti-interference and generalization capability of the model. The enhanced images are uniformly cropped to 224 × 224 pixels so that the model can learn better. Finally, 80% of the processed data is used as the training set of the neural network, 10% as the validation set, and the remaining 10% as the test set.
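As an illustration of this step, a minimal preprocessing sketch in PyTorch/torchvision follows (the patent names no framework; the transform parameters and the split helper are assumptions, not the patent's implementation):

```python
import torch
from torchvision import transforms

# One or more of flipping / rotation / scaling / cropping / translation, then a
# uniform crop to 224 x 224 pixels, as the description specifies; the exact
# parameter values below are illustrative assumptions.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=15, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

def split_dataset(dataset):
    """80% training / 10% validation / 10% test, per the text."""
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return torch.utils.data.random_split(dataset, [n_train, n_val, n - n_train - n_val])
```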
Step 2), constructing a multi-view convolutional network sub-model;
2.1 constructing a DenseNet network;
A sub-model of the multi-view convolutional network is constructed for each of the L-CC, R-CC, L-MLO and R-MLO views of the breast molybdenum-target X-ray image. Each sub-model consists of a DenseNet, through which the tumors in the different views are classified. The DenseBlock in DenseNet directly connects all layers while ensuring maximum information transfer between the layers of the network; the specific network structure is shown in FIG. 2. This alleviates gradient vanishing, strengthens feature propagation and makes effective use of features, while also reducing the number of parameters to some extent and improving the efficiency of neural-network training.
In DenseNet, each layer is connected to all previous layers in the channel dimension and serves as input to the next layer. For an L-layer network, DenseNet therefore contains L(L+1)/2 connections. All previous layers are concatenated to form the input of layer l:

x_l = H_l([x_1, x_2, ..., x_{l-1}])    (1)

where H_l(·) denotes a non-linear transformation function, a composite operation that may include a series of BN (Batch Normalization), ReLU, Pooling and Conv operations. Note that layers l and l-1 may in fact each contain multiple convolutional layers between them.
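The concatenation in equation (1) can be sketched as a PyTorch module (a hypothetical rendering for illustration; the class and its parameters are not the patent's code):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One H_l(.) of equation (1): BN -> ReLU -> 3x3 Conv applied to the
    channel-wise concatenation of all preceding feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, prev_features):                   # [x_1, ..., x_{l-1}]
        return self.h(torch.cat(prev_features, dim=1))  # x_l = H_l([x_1, ..., x_{l-1}])
```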
CNN networks generally reduce the feature-map size through Pooling or convolutions with stride > 1, whereas the dense connections in DenseNet require feature maps of consistent size. To resolve this, the DenseNet network uses a DenseBlock + Transition structure. A DenseBlock is a module containing several layers whose feature maps all have the same size and which are connected densely; a Transition module joins two adjacent DenseBlocks and reduces the feature-map size by Pooling. FIG. 3 shows a DenseNet structure containing 3 DenseBlocks in total, with adjacent DenseBlocks connected by Transitions.
Within a DenseBlock, the feature maps of all layers have the same size. The non-linear composite function H_l(·) in a DenseBlock adopts the structure BN + ReLU + 3x3 Conv, as shown in FIG. 4. After its convolution, each layer in the DenseBlock outputs k feature maps, i.e., the number of channels it produces is k. In DenseNet, k is called the growth rate and is a hyperparameter; good performance can generally be obtained with a relatively small k (e.g., 12). If the feature map of the input layer has k_0 channels, then layer l receives k_0 + k(l - 1) channels as input; for example, with k_0 = 64 and k = 32, layer 6 receives 64 + 32 × 5 = 224 channels. Thus, as the number of layers increases, the input to a DenseBlock layer becomes very large even when k is set small; however, because features are reused, only the k feature maps each layer produces are its own.
Because the input of the later layers is very large, a bottleneck layer can be used inside the DenseBlock to reduce the amount of computation: a 1x1 Conv is added to the original structure, as shown in FIG. 5, giving BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv, which is called the DenseNet-B structure. The 1x1 Conv produces 4k feature maps, which reduces the number of features and thereby improves computational efficiency.
The Transition layer mainly connects two adjacent DenseBlocks and reduces the feature-map size. It comprises a 1x1 convolution and a 2x2 AvgPooling, with the structure BN + ReLU + 1x1 Conv + 2x2 AvgPooling. The Transition layer can also compress the model: if the DenseBlock preceding a Transition produces m feature-map channels, the Transition layer generates ⌊θm⌋ feature maps (via its convolution layer), where θ ∈ (0, 1] is the compression factor. When θ = 1, the number of features passes through the Transition layer unchanged (no compression); when the compression factor is less than 1, the structure is called DenseNet-C. The combination of DenseBlocks with bottleneck layers and Transitions with compression factor less than 1 is called DenseNet-BC.
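A hedged sketch of the DenseNet-B layer and the compressing Transition layer just described, again assuming PyTorch (the class names and the default θ = 0.5 are illustrative):

```python
import torch.nn as nn

class DenseLayerB(nn.Module):
    """DenseNet-B layer: BN + ReLU + 1x1 Conv (to 4k channels) followed by
    BN + ReLU + 3x3 Conv (down to k output feature maps)."""
    def __init__(self, in_channels, k):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 4 * k, kernel_size=1, bias=False),
            nn.BatchNorm2d(4 * k), nn.ReLU(inplace=True),
            nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):          # x: concatenation of all preceding feature maps
        return self.h(x)

class Transition(nn.Module):
    """Transition layer: BN + ReLU + 1x1 Conv + 2x2 AvgPooling; theta < 1
    compresses the m input channels down to floor(theta * m)."""
    def __init__(self, in_channels, theta=0.5):
        super().__init__()
        self.t = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, int(theta * in_channels), kernel_size=1, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.t(x)
```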
The DenseNet constructed by the invention consists of eleven layers in total (ten sublayers after the input layer), including four DenseBlock-BC blocks and three Transition layers:
First layer: the input layer, which feeds the preprocessed 224 × 224-pixel breast molybdenum-target X-ray image into the network.
Second layer: a 7 × 7 convolution layer (2k convolution kernels) with stride = 2; the output feature-map size is 112 × 112.
Third layer: a 3 × 3 max-pooling layer with stride = 2; the output size is 56 × 56.
Fourth layer: DenseBlock; the first DenseBlock block contains 6 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with output size 56 × 56.
Fifth layer: a Transition layer comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2, intended to reduce the feature-map size; after this layer, the output size is reduced to 28 × 28.
Sixth layer: DenseBlock; this block contains 12 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with output size 28 × 28.
Seventh layer: a Transition layer, again comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2; after compression, the output size is reduced to 14 × 14.
Eighth layer: DenseBlock; this block contains 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with output size 14 × 14.
Ninth layer: a Transition layer with the same structure as above; the compressed output size is reduced to 7 × 7.
Tenth layer: DenseBlock; this block contains 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with output size 7 × 7.
Eleventh layer: the output layer. The features first pass through a 7 × 7 GlobalAvgPooling layer. To enable the DenseNet trained on the ImageNet natural-image dataset to identify breast-tumor information and segment breast tumors, the method removes the last fully connected layer of the network and appends three fully connected layers with 2048, 1024 and 2 neurons, respectively. The weights of these three fully connected layers are randomly initialized with the Xavier algorithm, and the activation function of the last layer is set to the Sigmoid function.
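As a sketch of this sub-model, one could start from torchvision's ImageNet-pretrained densenet121 (an assumption; the patent's own DenseNet-BC described above differs in block sizes) and replace its classifier as described. The ReLU activations between the added fully connected layers are likewise an assumption, since the text fixes only the final Sigmoid:

```python
import torch.nn as nn
from torchvision import models

def build_view_submodel():
    # ImageNet-pretrained backbone (densenet121 is an illustrative assumption).
    backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    in_features = backbone.classifier.in_features  # 1024 for densenet121
    # Replace the final fully connected layer with 2048 / 1024 / 2 neurons.
    head = nn.Sequential(
        nn.Linear(in_features, 2048), nn.ReLU(inplace=True),
        nn.Linear(2048, 1024), nn.ReLU(inplace=True),
        nn.Linear(1024, 2), nn.Sigmoid(),
    )
    for m in head:
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)  # Xavier random initialization
            nn.init.zeros_(m.bias)
    backbone.classifier = head
    return backbone
```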
Step 3), constructing a multi-view information cooperation convolution neural network model;
The multi-view information-cooperation convolutional neural network model of the invention consists of the four sub-models provided in step 2). The two neurons of each sub-model's output layer are connected to a shared neuron classification layer of the multi-view information-cooperation model and then input to a Sigmoid function

f(x) = 1 / (1 + e^(-x))    (2)

The output of this classification layer is the prediction result of the multi-view information-cooperation model, which can be expressed as

P = f( Σ_{k=1}^{4} Σ_{j=1}^{2} W_{kj} x_{kj} )    (3)

where W_{kj} (k = 1, 2, 3, 4) is the set of weights between the output layer of each information-cooperation sub-model and the classification layer, Σ_{k=1}^{4} Σ_{j=1}^{2} W_{kj} x_{kj} is the weighted sum of the outputs of the four sub-models, Σ_{j=1}^{2} W_{kj} x_{kj} is the weighted sum of the outputs of a single sub-model, and f(·) is the Sigmoid function.
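Equations (2) and (3) amount to a single linear classification neuron over the eight sub-model outputs followed by a Sigmoid. A minimal PyTorch sketch (the module and attribute names are hypothetical):

```python
import torch
import torch.nn as nn

class MultiViewModel(nn.Module):
    """Multi-view information-cooperation model: the two output neurons of each
    of the four view sub-models feed one shared classification layer, followed
    by a Sigmoid, i.e. P = f(sum_k sum_j W_kj * x_kj) as in equation (3)."""
    def __init__(self, submodels):
        super().__init__()
        self.submodels = nn.ModuleList(submodels)  # four per-view sub-models
        self.classifier = nn.Linear(4 * 2, 1)      # weights W_kj, k = 1..4, j = 1..2

    def forward(self, views):                      # views: [L-CC, R-CC, L-MLO, R-MLO]
        outs = [m(v) for m, v in zip(self.submodels, views)]
        fused = torch.cat(outs, dim=1)             # shape (batch, 8)
        return torch.sigmoid(self.classifier(fused))  # P_n in (0, 1)
```

Training this fused classifier end-to-end lets the weights W_kj adapt the contribution of each view, consistent with the adaptive weighting described in the disclosure.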
In the event of misclassification, misdiagnosing a malignant tumor as benign (a false negative) can be more costly than misdiagnosing a benign tumor as malignant (a false positive), because clinicians may wrongly conclude that the tumor is benign and the opportunity to effectively treat early-stage breast cancer is lost. To address this, the invention proposes a penalized cross-entropy loss algorithm, which distinguishes false-negative from false-positive errors by penalizing each type of error differently:
l(y_n, P_n) = -δ_n [y_n log(P_n) + (1 - y_n) log(1 - P_n)]    (4)
The penalty factor is as follows:

δ_n = C if sample n is a false negative (y_n = 1 but the prediction is benign), and δ_n = 1 otherwise    (5)
in the present invention, setting C to 2 gives a greater penalty to false negative cases.
Step 4), classifying benign and malignant breast tumors;
The breast molybdenum-target X-ray image to be examined is input into the multi-view information-cooperation convolutional neural network model constructed in step 3), and the network output gives the benign or malignant result for the tumor, showing that the invention can assist in classifying benign and malignant breast tumors.
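A hypothetical usage sketch for this step, assuming the model and preprocessing objects from the earlier sketches:

```python
import torch

@torch.no_grad()
def classify_case(model, views, threshold=0.5):
    """Run the four preprocessed 224x224 views of one case through the trained
    multi-view model and report benign vs. malignant."""
    model.eval()
    p = model([v.unsqueeze(0) for v in views])     # views: [L-CC, R-CC, L-MLO, R-MLO]
    return "malignant" if p.item() >= threshold else "benign"
```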
The invention combines and fully utilizes multi-view image information of breast tumors, which improves the accuracy of benign/malignant classification while avoiding the poor model generalization caused by insufficient neural-network training data due to scarce breast-tumor image data.
The embodiments described in this specification merely illustrate implementations of the inventive concept. The scope of the invention should not be regarded as limited to the specific forms set forth in the embodiments; it also covers equivalents that those skilled in the art may conceive based on the inventive concept.

Claims (1)

1. A method for classifying benign and malignant breast tumors based on multi-view information cooperation, comprising the following specific steps:
step 1) preprocessing a medical image;
each mammography case consists of images from four views, namely left craniocaudal (L-CC), right craniocaudal (R-CC), left mediolateral oblique (L-MLO) and right mediolateral oblique (R-MLO); one or more data-enhancement operations such as flipping, rotation, scaling, cropping and translation are applied to the acquired breast molybdenum-target X-ray images of any of the four views, and the enhanced images are uniformly cropped to 224 × 224 pixels; finally, 80% of the processed data is used as the training set of the neural network, 10% as the validation set, and the remaining 10% as the test set;
step 2), constructing a multi-view convolutional network sub-model;
2.1 constructing a DenseNet network;
a sub-model of the multi-view convolutional network is constructed for each of the left craniocaudal (L-CC), right craniocaudal (R-CC), left mediolateral oblique (L-MLO) and right mediolateral oblique (R-MLO) views of the breast molybdenum-target X-ray image; each sub-model consists of a DenseNet, through which the tumors in the different views are classified; in DenseNet, each layer is connected to all previous layers in the channel dimension, the feature maps of each layer have the same size, and the concatenation serves as input to the next layer; for an L-layer network, DenseNet contains L(L+1)/2 connections, all previous layers being concatenated as the input:

x_l = H_l([x_1, x_2, ..., x_{l-1}])    (1)

where H_l(·) denotes a non-linear transformation function, a composite operation that may include a series of Batch Normalization, ReLU, Pooling and Conv operations; layers l and l-1 may in fact each contain multiple convolutional layers between them;
the DenseNet consists of eleven layers in total (ten sublayers after the input layer), including four DenseBlock-BC blocks and three Transition layers:
a first layer: an input layer for inputting the processed 224X 224 pixel size mammary molybdenum target X-ray image into the network;
a second layer: a convolution layer with a parameter size of 112 × 112 after 7 × 7 convolution layers of stride ═ 2;
and a third layer: 3 × 3 maximum pooling layers of stride ═ 2, and output parameters of 56 × 56;
a fourth layer: the first DenseBlock block comprises 6 structures of BN + ReLU +1x1Conv + BN + ReLU +3x3Conv, and output parameters are 56 × 56;
and a fifth layer: a Transition layer comprising a convolution of 1x1 and a 2x2AvgPooling of stride 2; after passing through this layer, the output parameters were reduced to 28 × 28;
a sixth layer: DenseBlock, this block contains 12 BN + ReLU +1x1Conv + BN + ReLU +3x3Conv structures, the output parameter is 28 x 28;
a seventh layer: a Transition layer, which also comprises a convolution of 1x1 and 2x2AvgPooling of stride 2, and the number of output parameters is reduced to 14 by 14 after compression;
an eighth layer: DenseBlock, this block contains 24 BN + ReLU +1x1Conv + BN + ReLU +3x3Conv structures, the output parameter is 14 x 14;
a ninth layer: the Transition layer has the same structure as the Transition layer, and the compressed parameters are reduced to 7 × 7;
a tenth layer: DenseBlock, this block contains 24 BN + ReLU +1x1Conv + BN + ReLU +3x3Conv structures, the output parameter is 7 x 7;
the eleventh layer: an output layer, which firstly passes through a 7 × 7 globalavgPooling layer, removes the last full connection layer of the network structure, and then respectively adds 2048, 1024 and 2 neurons, so that the DenseNet can identify breast tumor information; randomly initializing the weights of the three full-connection layers by using an Xaiver algorithm, and setting the activation function of the last layer as a Sigmoid function;
step 3), constructing a multi-view information cooperation convolution neural network model;
the multi-view information-cooperation convolutional neural network model consists of the four sub-models provided in step 2); the two neurons of each sub-model's output layer are connected to a shared neuron classification layer of the multi-view information-cooperation model and then input to a Sigmoid function

f(x) = 1 / (1 + e^(-x))    (2)

the output of this classification layer is the prediction result of the multi-view information-cooperation model, which can be expressed as

P = f( Σ_{k=1}^{4} Σ_{j=1}^{2} W_{kj} x_{kj} )    (3)

where W_{kj} (k = 1, 2, 3, 4) is the set of weights between the output layer of each information-cooperation sub-model and the classification layer, Σ_{k=1}^{4} Σ_{j=1}^{2} W_{kj} x_{kj} is the weighted sum of the outputs of the four sub-models, Σ_{j=1}^{2} W_{kj} x_{kj} is the weighted sum of the outputs of a single sub-model, and f(·) is the Sigmoid function;
in the case of misclassification, the penalized cross-entropy loss algorithm distinguishes false-negative from false-positive tumors by penalizing each type of error differently:

l(y_n, P_n) = -δ_n [y_n log(P_n) + (1 - y_n) log(1 - P_n)]    (4)

with the penalty factor

δ_n = C if sample n is a false negative (y_n = 1 but the prediction is benign), and δ_n = 1 otherwise    (5)

setting C to 2 gives false-negative cases a greater penalty;
step 4), classifying benign and malignant breast tumors;
inputting the breast molybdenum-target X-ray image to be examined into the multi-view information-cooperation convolutional neural network model constructed in step 3); the network output gives the benign or malignant result for the tumor.
CN202010061740.1A 2020-01-19 2020-01-19 Multi-view information-collaborative breast benign and malignant tumor classification method Withdrawn CN111260639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010061740.1A CN111260639A (en) 2020-01-19 2020-01-19 Multi-view information-collaborative breast benign and malignant tumor classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010061740.1A CN111260639A (en) 2020-01-19 2020-01-19 Multi-view information-collaborative breast benign and malignant tumor classification method

Publications (1)

Publication Number Publication Date
CN111260639A 2020-06-09

Family

ID=70944145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010061740.1A Withdrawn CN111260639A (en) 2020-01-19 2020-01-19 Multi-view information-collaborative breast benign and malignant tumor classification method

Country Status (1)

Country Link
CN (1) CN111260639A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709950A (en) * 2020-08-20 2020-09-25 成都金盘电子科大多媒体技术有限公司 Mammary gland molybdenum target AI auxiliary screening method
CN112287970A (en) * 2020-09-27 2021-01-29 山东师范大学 Mammary gland energy spectrum image classification system, equipment and medium based on multi-view multi-mode
CN112883992A (en) * 2020-12-11 2021-06-01 太原理工大学 Breast cancer lump classification method based on attention ResNet model
CN113139931A (en) * 2021-03-17 2021-07-20 杭州迪英加科技有限公司 Thyroid slice image classification model training method and device
CN113658151A (en) * 2021-08-24 2021-11-16 泰安市中心医院 Mammary gland lesion magnetic resonance image classification method and device and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210528A (en) * 2019-05-15 2019-09-06 太原理工大学 The good pernicious classification method of breast X-ray image based on DenseNet-II neural network model

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210528A (en) * 2019-05-15 2019-09-06 太原理工大学 The good pernicious classification method of breast X-ray image based on DenseNet-II neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gao Huang et al., "Densely Connected Convolutional Networks", IEEE *
Yutong Xie et al., "Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT", IEEE *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709950A (en) * 2020-08-20 2020-09-25 成都金盘电子科大多媒体技术有限公司 Mammary gland molybdenum target AI auxiliary screening method
CN111709950B (en) * 2020-08-20 2020-11-06 成都金盘电子科大多媒体技术有限公司 Mammary gland molybdenum target AI auxiliary screening method
CN112287970A (en) * 2020-09-27 2021-01-29 山东师范大学 Mammary gland energy spectrum image classification system, equipment and medium based on multi-view multi-mode
CN112883992A (en) * 2020-12-11 2021-06-01 太原理工大学 Breast cancer lump classification method based on attention ResNet model
CN113139931A (en) * 2021-03-17 2021-07-20 杭州迪英加科技有限公司 Thyroid slice image classification model training method and device
CN113139931B (en) * 2021-03-17 2022-06-03 杭州迪英加科技有限公司 Thyroid section image classification model training method and device
CN113658151A (en) * 2021-08-24 2021-11-16 泰安市中心医院 Mammary gland lesion magnetic resonance image classification method and device and readable storage medium
CN113658151B (en) * 2021-08-24 2023-11-24 泰安市中心医院 Mammary gland lesion magnetic resonance image classification method, device and readable storage medium

Similar Documents

Publication Publication Date Title
Mei et al. SANet: A slice-aware network for pulmonary nodule detection
CN112489061B (en) Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
CN111260639A (en) Multi-view information-collaborative breast benign and malignant tumor classification method
WO2018120942A1 (en) System and method for automatically detecting lesions in medical image by means of multi-model fusion
Omonigho et al. Breast cancer: tumor detection in mammogram images using modified alexnet deep convolution neural network
Bharati et al. Deep learning for medical image registration: A comprehensive review
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112348800A (en) Dense neural network lung tumor image identification method fusing multi-scale features
CN111709446B (en) X-ray chest radiography classification device based on improved dense connection network
CN111275103A (en) Multi-view information cooperation type kidney benign and malignant tumor classification method
KR102407248B1 (en) Deep Learning based Gastric Classification System using Data Augmentation and Image Segmentation
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
Pradhan et al. Lung cancer detection using 3D convolutional neural networks
Akkar et al. Diagnosis of lung cancer disease based on back-propagation artificial neural network algorithm
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Qiu et al. A deep learning approach for segmentation, classification, and visualization of 3-D high-frequency ultrasound images of mouse embryos
Li et al. Multi-view unet for automated gi tract segmentation
Khalifa et al. Deep learning for image segmentation: a focus on medical imaging
Yang et al. Lesion classification of wireless capsule endoscopy images
Qiao et al. A fusion of multi-view 2D and 3D convolution neural network based MRI for Alzheimer’s disease diagnosis
CN113889235A (en) Unsupervised feature extraction system for three-dimensional medical image
CN112766332A (en) Medical image detection model training method, medical image detection method and device
Abed et al. Detection and Segmentation of Breast Cancer Using Auto Encoder Deep Neural Networks
Wu et al. DFUNET: A Residual Network for Retinal Vessel
CN117351489B (en) Head and neck tumor target area delineating system for whole-body PET/CT scanning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200609