CN111275103A - Multi-view information cooperation type kidney benign and malignant tumor classification method


Publication number
CN111275103A
CN111275103A (application CN202010061728.0A)
Authority
CN
China
Prior art keywords
layer
kidney
benign
view
model
Prior art date
Legal status
Withdrawn
Application number
CN202010061728.0A
Other languages
Chinese (zh)
Inventor
张聚
俞伦端
周海林
吴崇坚
吕金城
陈坚
Current Assignee
Zhijiang College of ZJUT
Original Assignee
Zhijiang College of ZJUT
Priority date
Filing date
Publication date
Application filed by Zhijiang College of ZJUT filed Critical Zhijiang College of ZJUT
Priority: CN202010061728.0A
Publication: CN111275103A
Legal status: Withdrawn

Links

Images

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 - Pattern recognition
            • G06F18/20 - Analysing
              • G06F18/24 - Classification techniques
                • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 - Computing arrangements based on biological models
            • G06N3/02 - Neural networks
              • G06N3/04 - Architecture, e.g. interconnection topology
                • G06N3/045 - Combinations of networks
              • G06N3/08 - Learning methods
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 - Image analysis
            • G06T7/10 - Segmentation; Edge detection
              • G06T7/11 - Region-based segmentation
          • G06T2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T2207/10 - Image acquisition modality
              • G06T2207/10072 - Tomographic images
                • G06T2207/10081 - Computed x-ray tomography [CT]
            • G06T2207/30 - Subject of image; Context of image processing
              • G06T2207/30004 - Biomedical image processing
                • G06T2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A multi-view information cooperation method for classifying benign and malignant kidney tumors comprises the following steps. Step 1) medical image preprocessing: data enhancement is applied to the kidney CT images of three views. Step 2) a multi-view convolutional network sub-model is constructed for the image of each view. Step 3) a multi-view information cooperation convolutional neural network model is constructed: the outputs of the three view sub-models are unified into the same neuron classification layer, which finally feeds a Sigmoid function to obtain the classification result; a penalty function is added for false positive cases, imposing a greater penalty to reduce their occurrence. Step 4) classification of benign and malignant kidney tumors: the kidney CT image to be examined is input into the multi-view information cooperation convolutional neural network model constructed in step 3), and the network outputs whether the tumor is benign or malignant. The invention combines and fully utilizes multi-view image information of different kidney tumors, which can improve the accuracy of benign/malignant classification while avoiding the poor model generalization caused by insufficient neural network training data when patient image data are scarce.

Description

Multi-view information cooperation type kidney benign and malignant tumor classification method
Technical Field
The invention relates to a method for classifying benign and malignant kidney tumors.
Background Art
Kidney tumors are among the ten most common human malignancies, accounting for about 3%-5% of tumor incidence. In recent years the incidence of renal tumors has been rising worldwide, especially renal cell carcinoma, which is increasing at a rate of 2%-3% per decade. According to recent American Cancer Society statistics, the United States recorded 63,990, 65,340 and 73,820 new kidney tumor patients in 2017, 2018 and 2019 respectively, with 14,400, 14,970 and 14,770 deaths in those years. According to the latest records of the Chinese tumor registration center, China recorded 66,800 new renal tumor patients and 23,400 deaths in 2015, and multiple studies report that the incidence and mortality of renal cancer in China have risen in recent years, seriously endangering people's health.
According to the 2016 World Health Organization classification standard, kidney tumors can be classified as malignant or benign according to histological type and invasiveness. Different renal tumor types have different clinical prognoses, gene expression patterns and therapeutic approaches. For benign renal tumors, nephron-sparing surgery or active surveillance is generally recommended, while malignant renal tumors are mostly treated with radical nephrectomy or radiofrequency ablation. Improving the accuracy of early diagnosis, and thereby preventing a patient with a benign kidney tumor from being misdiagnosed as malignant and losing the chance of kidney preservation, therefore has very important clinical value.
At present, imaging examination is one of the main means of early detection and early diagnosis of kidney tumors. Ultrasound examination is simple to perform, but its diagnostic accuracy depends strongly on the experience and technique of the clinician, and it lacks specificity for diagnosing the pathological type of kidney cancer. MRI offers high soft-tissue resolution but, owing to its high cost and long scanning and imaging time, is not the first choice for renal cancer patients; the present invention is therefore based on preoperative conventional CT examination.
Deep learning methods applied to medical image segmentation help doctors diagnose diseases more accurately and with less time and labor, and have become a novel form of computer-aided diagnosis. Classifying benign and malignant kidney tumors requires accurately segmenting the target lesion with a deep learning method and classifying the tumor according to the lesion information. However, large training data sets are lacking in this field, and identifying kidney tumors remains very difficult. The invention therefore provides a multi-view information cooperation method that classifies benign and malignant kidney tumors by deep learning using limited renal CT image data. By learning from the images of the three views of each kidney CT case, the dependence on huge data volumes is effectively reduced and the accuracy of benign/malignant classification is improved.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a method for classifying benign and malignant tumors of kidney by multi-view information cooperation.
The method of the invention constructs an information cooperation sub-model for each of the three views of every kidney CT case. Each sub-model has a fine-tuned pre-trained DenseNet that segments the lesion information from the kidney CT image of its view. The three information cooperation sub-models then jointly use an adaptive weighting scheme during error back-propagation to segment the kidney tumor, and classify it according to the lesion information. In addition, the invention introduces a penalty function to reduce the false positive and false negative rates. Tests show that the invention can effectively classify benign and malignant kidney tumors and can be applied to routine clinical examination in hospitals.
In order to make the objects, technical solutions and advantages of the present invention clearer, the following further describes the technical solutions of the present invention, and the method for classifying benign and malignant tumors of kidney with multi-view information cooperation includes the following specific steps:
step 1) preprocessing a medical image;
each kidney CT case consists of images from three views: sagittal, coronal and transverse. The acquired kidney CT images are subjected to data enhancement, which improves the generalization and anti-interference capability of the model. Of the processed data, 80% is used as the training set of the neural network of the present invention, 10% as the validation set and the remaining 10% as the test set.
Step 2), constructing a multi-view convolutional network sub-model;
To allow a DenseNet pre-trained on the ImageNet natural image dataset to accurately segment kidney tumors, the method removes the last fully connected layer of the network and appends three fully connected layers of 2048, 1024 and 2 neurons. The weights of these three fully connected layers are randomly initialized with the Xavier algorithm, and the activation function of the last layer is set to the Sigmoid function.
Step 3), constructing a multi-view information cooperation convolution neural network model;
the proposed multi-view information cooperation convolutional neural network model is composed of the three sub-models of step 2): the two neurons of each sub-model's output layer are connected to a single shared neuron classification layer of the cooperation model, followed by a Sigmoid function. The output of this neuron classification layer is the prediction of the entire model.
When classifying renal tumors, misidentifying a benign tumor as malignant (a false positive) is more costly than misdiagnosing a malignant tumor as benign (a false negative), because clinicians may then wrongly believe the tumor is malignant and over-treat it, so that the patient misses the opportunity to retain the kidney. To address this, the invention proposes a penalized cross-entropy loss algorithm that distinguishes false negatives from false positives by penalizing the two types of error differently.
Step 4), classifying the benign and malignant kidney tumors;
The kidney CT image to be examined is input into the multi-view information cooperation convolutional neural network model constructed in step 3), and the network outputs whether the tumor is benign or malignant, showing that the method can assist the benign/malignant classification of kidney tumors.
The invention has the following advantages:
1. Multi-view image information of different tumors is combined and fully utilized, improving the accuracy of benign/malignant classification.
2. The poor model generalization that results from training a neural network on scarce disease image data is avoided.
Drawings
FIG. 1 is a kidney multi-angle CT view;
FIG. 2 is an overall block diagram of the software system of the method of the present invention;
FIG. 3 is a schematic diagram of a DenseBlock;
FIG. 4 is a schematic diagram of a DenseBlock network structure;
FIG. 5 is a schematic input/output diagram of a DenseBlock profile;
FIG. 6 is a schematic diagram of the structure of DenseNet-B;
Detailed description of the embodiments:
The invention is explained in detail below with reference to the drawings.
The network structure of the multi-view information-collaborative kidney benign and malignant tumor classification method is shown in fig. 1, and the method comprises the following specific steps:
step 1) preprocessing a medical image;
each kidney CT case consists of images from three views: sagittal, coronal and transverse. One or more data enhancement operations such as flipping, rotation, scaling, cropping and translation are applied to the collected kidney CT images of all three views, improving the anti-interference and generalization capability of the model. The enhanced images are uniformly cropped to 224 × 224 pixels so that the model can learn better. Finally, 80% of the processed data is used as the training set of the neural network of the present invention, 10% as the validation set and the last 10% as the test set.
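The 80/10/10 split above can be sketched in plain Python. Splitting by case keeps a patient's three views in the same subset; the function name, seed and case IDs are illustrative, not from the patent:

```python
import random

def split_cases(case_ids, seed=0):
    """Shuffle kidney-CT cases and split them 80/10/10 into
    train/validation/test sets, as described in step 1)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_cases(range(100))
```

Data enhancement itself (flip, rotate, scale, crop, translate) would be applied per image before this split is consumed by training.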
Step 2), constructing a multi-view convolutional network sub-model;
2.1 construction of DenseNet network
A multi-view convolutional network sub-model is constructed for each of the sagittal, coronal and transverse images of the kidney CT case. Each sub-model consists of a DenseNet, through which tumors seen from the different views are classified. The DenseBlock in DenseNet directly connects all layers while preserving maximum information flow between layers in the network; the specific network structure is shown in fig. 2. This design mitigates vanishing gradients, strengthens feature propagation and makes effective use of features, while also reducing the number of parameters to some extent and improving the efficiency of neural network training.
In DenseNet, each layer is connected along the channel dimension to all preceding layers, whose outputs serve as its input. For an L-layer network, DenseNet therefore contains L(L+1)/2 direct connections.
In DenseNet, all previous layers are concatenated as input:

x_l = H_l([x_1, x_2, ..., x_{l-1}])  (1)

where H_l(·) is a nonlinear transformation function, a composite operation that may include batch normalization (BN), ReLU, Pooling and Conv operations. Layers l and l − 1 may in fact have several convolutional layers between them.
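Equation (1) can be illustrated with NumPy: the input of layer l is the channel-wise concatenation of all earlier outputs. The toy transform `H` below is a stand-in for the real H_l (a BN + ReLU + Conv composite), used only to keep the sketch self-contained; the channel counts are illustrative:

```python
import numpy as np

def H(x, k=32):
    """Toy stand-in for H_l: maps any number of input channels to k
    output channels (the real H_l is BN + ReLU + Conv)."""
    return np.repeat(x.mean(axis=0, keepdims=True), k, axis=0)

k0, k = 64, 32                     # initial channels and growth rate
outputs = [np.ones((k0, 8, 8))]    # x_1: first feature map
for l in range(2, 6):              # layers x_2 ... x_5
    x_in = np.concatenate(outputs, axis=0)  # [x_1, ..., x_{l-1}], eq. (1)
    outputs.append(H(x_in, k))              # each new x_l has k channels

total_channels = np.concatenate(outputs, axis=0).shape[0]  # k0 + 4k
```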
CNN networks usually reduce the feature map size through Pooling or convolutions with stride > 1, whereas dense connectivity requires feature maps of consistent size. To resolve this, DenseNet uses a DenseBlock + Transition structure: a DenseBlock is a module of several layers whose feature maps all have the same size and which are densely connected, while a Transition module joins two adjacent DenseBlocks and reduces the feature map size by Pooling. Fig. 3 shows a DenseNet structure containing 3 DenseBlocks connected by Transitions.
Within a DenseBlock the feature maps of all layers have the same size. The nonlinear composite function H_l(·) in the DenseBlock adopts the structure BN + ReLU + 3x3 Conv, as shown in fig. 4. Each layer of the DenseBlock outputs k feature maps, i.e. the number of channels of its output is k. In DenseNet k is called the growth rate, a hyperparameter; good performance can generally be obtained with a small k. If the number of channels of the input layer's feature map is k_0, the number of channels input to layer l is k_0 + k(l − 1). Thus as the number of layers grows, the input of a DenseBlock layer becomes very large even when k is set small; however, because of feature reuse, each layer contributes only k feature maps of its own.
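The channel growth just described, and the L(L+1)/2 connection count from earlier, are simple arithmetic and can be checked directly (the default k_0 and k values are illustrative):

```python
def input_channels(l, k0=64, k=32):
    """Channels entering layer l of a dense block: the k0 initial
    channels plus k new channels from each of the l - 1 earlier layers."""
    return k0 + k * (l - 1)

def dense_connections(L):
    """Direct connections in an L-layer densely connected network:
    layer l receives l inputs, so the total is 1 + 2 + ... + L."""
    return L * (L + 1) // 2
```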
Since the input of later layers becomes very large, a bottleneck layer can be used inside the DenseBlock to reduce computation, mainly by adding a 1x1 Conv to the original structure, as shown in fig. 5: BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv, called the DenseNet-B structure. The 1x1 Conv produces 4k feature maps, which reduces the number of features and thereby increases computational efficiency.
The Transition layer mainly joins two adjacent DenseBlocks and reduces the feature map size. It comprises a 1x1 convolution and a 2x2 AvgPooling, with the structure BN + ReLU + 1x1 Conv + 2x2 AvgPooling. In addition, the Transition layer can compress the model: if the DenseBlock preceding a Transition yields m feature map channels, the Transition layer generates ⌊θm⌋ feature maps (through its convolution layer), where θ ∈ (0, 1] is the compression factor. When θ = 1 the number of features is unchanged through the Transition layer, i.e. there is no compression; when the compression factor is less than 1 the structure is called DenseNet-C. A DenseNet that uses both the bottleneck layer in its DenseBlocks and Transitions with a compression factor less than 1 is called DenseNet-BC.
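The DenseNet-B and DenseNet-C channel arithmetic can be sketched as small helpers; the growth rate and channel counts below are illustrative defaults, not values fixed by the patent:

```python
import math

def bottleneck_channels(k=32):
    """DenseNet-B: the 1x1 conv outputs 4k feature maps
    before the 3x3 conv."""
    return 4 * k

def transition_channels(m, theta=0.5):
    """DenseNet-C: a Transition compresses m input channels to
    floor(theta * m), with theta in (0, 1]."""
    assert 0 < theta <= 1
    return math.floor(theta * m)

def block_output_channels(c_in, num_layers, k=32):
    """Channels after a dense block: each of its layers adds k."""
    return c_in + num_layers * k
```

For example, a 6-layer block entering with 64 channels leaves with 64 + 6·32 = 256 channels, which a θ = 0.5 Transition would compress to 128.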
The DenseNet constructed by the invention consists of eleven layers, including four DenseBlock blocks and three Transition layers:
First layer: input layer, which feeds the preprocessed 224 × 224 pixel kidney CT image into the network.
Second layer: convolution layer, a 7 × 7 convolution (2k kernels) with stride = 2; the output feature map is 112 × 112.
Third layer: pooling layer, a 3 × 3 max pooling with stride = 2; the output is 56 × 56.
Fourth layer: DenseBlock; the first DenseBlock contains 6 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with 56 × 56 output.
Fifth layer: Transition layer, comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2, intended to reduce the feature map size; after this layer the output is reduced to 28 × 28.
Sixth layer: DenseBlock, containing 12 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with 28 × 28 output.
Seventh layer: Transition layer, again comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2; after compression the output is reduced to 14 × 14.
Eighth layer: DenseBlock, containing 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with 14 × 14 output.
Ninth layer: Transition layer, structured as above; the compressed output is reduced to 7 × 7.
Tenth layer: DenseBlock, containing 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, with 7 × 7 output.
Eleventh layer: output layer, which first passes through a 7 × 7 GlobalAvgPooling layer. So that the DenseNet pre-trained on the ImageNet natural image dataset can identify kidney tumor information and segment kidney tumors, the method removes the last fully connected layer of the network and appends three fully connected layers of 2048, 1024 and 2 neurons. Their weights are randomly initialized with the Xavier algorithm, and the activation function of the last layer is set to the Sigmoid function.
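The output-layer modification (replace the pre-trained classifier with Xavier-initialized fully connected layers of 2048, 1024 and 2 neurons, ending in a Sigmoid) can be sketched in NumPy. The feature dimension after global average pooling and the ReLU activations on the hidden layers are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier(fan_in, fan_out):
    """Xavier/Glorot uniform init: U(-a, a), a = sqrt(6/(fan_in+fan_out))."""
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

feat_dim = 1024  # assumed feature size after the 7x7 GlobalAvgPooling
W1, W2, W3 = xavier(feat_dim, 2048), xavier(2048, 1024), xavier(1024, 2)

def head(features):
    """Replacement head: FC(2048) -> FC(1024) -> FC(2) -> Sigmoid.
    ReLU on the two hidden layers is an assumption."""
    h = np.maximum(features @ W1, 0)
    h = np.maximum(h @ W2, 0)
    return sigmoid(h @ W3)

probs = head(rng.standard_normal((4, feat_dim)))  # 4 pooled feature vectors
```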
Step 3), constructing a multi-view information cooperation convolution neural network model;
the proposed multi-view information cooperation convolutional neural network model is composed of the three sub-models of step 2). The two neurons of each sub-model's output layer are connected to a single shared neuron classification layer of the cooperation model, which computes the weighted sum

z = Σ_{k=1}^{3} Σ_{j=1}^{2} W_{kj} x_{kj}  (2)

and feeds it into a Sigmoid function. The output of this classification layer is the prediction of the multi-view information cooperation model:

P = f(z)  (3)

where W_{kj} (k = 1, 2, 3) is the set of weights between the output layer of each sub-model and the classification layer, x_{kj} is the j-th output neuron of sub-model k, z is the weighted sum of the outputs of the three sub-models, and f(·) is the Sigmoid function.
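A minimal NumPy sketch of the fusion step: the 2-neuron outputs of the three view sub-models are combined in a weighted sum and passed through a Sigmoid. Treating the classification layer as a single scalar unit is an assumption, since the patent does not fully specify its shape; the weights and sub-model outputs below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse(submodel_outputs, W):
    """Classification layer of the cooperation model: weighted sum of
    the 2-neuron outputs of the 3 sub-models, then Sigmoid.

    submodel_outputs: array (3, 2), x_kj = output neuron j of sub-model k
    W:                array (3, 2), weights W_kj to the classification layer
    """
    z = np.sum(W * submodel_outputs)  # weighted sum over k and j
    return sigmoid(z)

x = np.array([[0.9, 0.1],   # sagittal sub-model
              [0.8, 0.2],   # coronal sub-model
              [0.7, 0.3]])  # transverse sub-model
W = np.full((3, 2), 0.5)
p = fuse(x, W)              # fused benign/malignant probability
```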
When classifying renal tumors, misidentifying a benign tumor as malignant (a false positive) is more costly than misdiagnosing a malignant tumor as benign (a false negative), because clinicians may then wrongly believe the tumor is malignant and over-treat it, so that the patient misses the opportunity to retain the kidney. To address this, the invention proposes a penalized cross-entropy loss algorithm that distinguishes false negatives from false positives by penalizing the two types of error differently.
l(y_n, P_n) = −δ_n [y_n log(P_n) + (1 − y_n) log(1 − P_n)]  (4)

with the penalty factor

δ_n = C if case n is a false negative, and δ_n = 1 otherwise  (5)

In the present invention, C is set to 2, giving a greater penalty to false negative cases.
Step 4), classifying the benign and malignant kidney tumors;
The kidney CT image to be examined is input into the multi-view information cooperation convolutional neural network model constructed in step 3), and the network outputs whether the tumor is benign or malignant, showing that the method can assist the benign/malignant classification of kidney tumors.
The invention combines and fully utilizes multi-view image information of different kidney tumors, which can improve the accuracy of benign/malignant classification while avoiding the poor model generalization caused by insufficient neural network training data when patient image data are scarce.
The embodiments described in this specification merely illustrate the inventive concept; the scope of the invention should not be considered limited to the specific forms set forth in the embodiments, but also covers equivalents conceivable by those skilled in the art on the basis of the inventive concept.

Claims (1)

1. A method for classifying benign and malignant kidney tumors through multi-view information cooperation, comprising the following specific steps:
step 1) preprocessing a medical image;
each kidney CT case consists of images of three views: sagittal, coronal and transverse; one or more data enhancement operations such as flipping, rotation, scaling, cropping and translation are applied to the collected kidney CT images of the three views, and the enhanced images are uniformly cropped to 224 × 224 pixels; finally, 80% of the processed data is used as the training set of the neural network, 10% as the validation set and the remaining 10% as the test set;
step 2), constructing a multi-view convolutional network sub-model;
2.1 constructing a DenseNet network;
images of the three views of the kidney CT case, namely sagittal, coronal and transverse, are each used to construct a sub-model of the multi-view convolutional network; each sub-model consists of a DenseNet, through which tumors of the different views are classified; in DenseNet, each layer is connected along the channel dimension to all preceding layers, whose feature maps all have the same size and serve as its input; for an L-layer network, DenseNet therefore contains L(L+1)/2 direct connections;
in DenseNet, all previous layers are concatenated as input:
x_l = H_l([x_1, x_2, ..., x_{l-1}])  (1)
where H_l(·) represents a nonlinear transformation function, a composite operation that may include Batch Normalization, ReLU, Pooling and Conv operations; layers l and l − 1 may in fact have several convolutional layers between them;
the DenseNet consists of eleven layers, including four DenseBlock blocks and three Transition layers:
a first layer: input layer, which feeds the preprocessed 224 × 224 pixel kidney CT image into the network;
a second layer: convolution layer, a 7 × 7 convolution with stride = 2, whose output feature map is 112 × 112;
a third layer: a 3 × 3 max pooling with stride = 2, output 56 × 56;
a fourth layer: the first DenseBlock, containing 6 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, output 56 × 56;
a fifth layer: a Transition layer comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2; after this layer the output is reduced to 28 × 28;
a sixth layer: a DenseBlock containing 12 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, output 28 × 28;
a seventh layer: a Transition layer, again comprising a 1x1 convolution and a 2x2 AvgPooling with stride = 2; after compression the output is reduced to 14 × 14;
an eighth layer: a DenseBlock containing 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, output 14 × 14;
a ninth layer: a Transition layer, structured as above; the compressed output is reduced to 7 × 7;
a tenth layer: a DenseBlock containing 24 BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv structures, output 7 × 7;
an eleventh layer: output layer, which first passes through a 7 × 7 GlobalAvgPooling layer; the last fully connected layer of the original network is removed and three fully connected layers of 2048, 1024 and 2 neurons are appended, so that the DenseNet can identify kidney tumor information; the weights of the three fully connected layers are randomly initialized with the Xavier algorithm, and the activation function of the last layer is set to the Sigmoid function;
step 3), constructing a multi-view information cooperation convolution neural network model;
the multi-view information cooperation convolutional neural network model is composed of the three sub-models provided in step 2); the two neurons of each sub-model's output layer are connected to a single shared neuron classification layer of the cooperation model, which computes the weighted sum
z = Σ_{k=1}^{3} Σ_{j=1}^{2} W_{kj} x_{kj}  (2)
and feeds it into a Sigmoid function; the output of this classification layer is the prediction of the multi-view information cooperation model:
P = f(z)  (3)
where W_{kj} (k = 1, 2, 3) is the set of weights between the output layer of each sub-model and the classification layer, x_{kj} is the j-th output neuron of sub-model k, z is the weighted sum of the outputs of the three sub-models, and f(·) is the Sigmoid function;
in the case of misclassification, the penalized cross-entropy loss algorithm distinguishes false negative from false positive tumors by penalizing each type of error differently:
l(y_n, P_n) = −δ_n [y_n log(P_n) + (1 − y_n) log(1 − P_n)]  (4)
with the penalty factor
δ_n = C if case n is a false negative, and δ_n = 1 otherwise  (5)
setting C to 2 gives a greater penalty to false negative cases;
step 4), classifying the benign and malignant kidney tumors;
the kidney CT image to be examined is input into the multi-view information cooperation convolutional neural network model constructed in step 3), and the network outputs whether the kidney tumor is benign or malignant.
CN202010061728.0A 2020-01-19 2020-01-19 Multi-view information cooperation type kidney benign and malignant tumor classification method Withdrawn CN111275103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010061728.0A CN111275103A (en) 2020-01-19 2020-01-19 Multi-view information cooperation type kidney benign and malignant tumor classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010061728.0A CN111275103A (en) 2020-01-19 2020-01-19 Multi-view information cooperation type kidney benign and malignant tumor classification method

Publications (1)

Publication Number Publication Date
CN111275103A 2020-06-12

Family

ID=71003390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010061728.0A Withdrawn CN111275103A (en) 2020-01-19 2020-01-19 Multi-view information cooperation type kidney benign and malignant tumor classification method

Country Status (1)

Country Link
CN (1) CN111275103A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210528A (en) * 2019-05-15 2019-09-06 太原理工大学 The good pernicious classification method of breast X-ray image based on DenseNet-II neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gao Huang et al.: "Densely Connected Convolutional Networks", IEEE *
Yutong Xie et al.: "Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT", IEEE *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364920A (en) * 2020-11-12 2021-02-12 西安电子科技大学 Thyroid cancer pathological image classification method based on deep learning
CN112364920B (en) * 2020-11-12 2023-05-23 西安电子科技大学 Thyroid cancer pathological image classification method based on deep learning
CN113139930A (en) * 2021-03-17 2021-07-20 杭州迪英加科技有限公司 Thyroid slice image classification method and device, computer equipment and storage medium
CN113139930B (en) * 2021-03-17 2022-07-15 杭州迪英加科技有限公司 Thyroid slice image classification method and device, computer equipment and storage medium
CN114792315A (en) * 2022-06-22 2022-07-26 浙江太美医疗科技股份有限公司 Medical image visual model training method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111260639A (en) Multi-view information-collaborative breast benign and malignant tumor classification method
CN111275103A (en) Multi-view information cooperation type kidney benign and malignant tumor classification method
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN116681958B (en) Fetal lung ultrasonic image maturity prediction method based on machine learning
CN112990344B (en) Multi-view classification method for pulmonary nodules
JP7244974B1 (en) Pathological image feature extractor training method, training device, electronic device, storage medium, and pathological image classification system based on feature separation
CN112348800A (en) Dense neural network lung tumor image identification method fusing multi-scale features
AR A deep learning-based lung cancer classification of CT images using augmented convolutional neural networks
Huang et al. Automatic retinal vessel segmentation based on an improved U-Net approach
CN115035127A (en) Retinal vessel segmentation method based on generative confrontation network
Shi et al. Automatic detection of pulmonary nodules in CT images based on 3D Res-I network
Sengun et al. Automatic liver segmentation from CT images using deep learning algorithms: a comparative study
KR102407248B1 (en) Deep Learning based Gastric Classification System using Data Augmentation and Image Segmentation
Xu et al. Application of artificial intelligence technology in medical imaging
CN112767374A (en) Alzheimer disease focus region semantic segmentation algorithm based on MRI
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Chen et al. A new classification method in ultrasound images of benign and malignant thyroid nodules based on transfer learning and deep convolutional neural network
Tyagi et al. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation
CN112562851B (en) Construction method and system of oral cancer cervical lymph metastasis diagnosis algorithm
Baloni et al. Detection of hydrocephalus using deep convolutional neural network in medical science
Mathina Kani et al. Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques
Liu et al. Recognition of cervical precancerous lesions based on probability distribution feature guidance
Essaf et al. Review on deep learning methods used for computer-aided lung cancer detection and diagnosis
Han et al. Brain Tumor Recognition Based on Data Augmentation and Convolutional Neural Network
CN113658151B (en) Mammary gland lesion magnetic resonance image classification method, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200612