CN113065646A - Method capable of realizing generalization performance of KI67 pathological image neural network model

Method capable of realizing generalization performance of KI67 pathological image neural network model

Info

Publication number
CN113065646A
Authority
CN
China
Prior art keywords
model
data
domain
pruning
neural network
Prior art date
Legal status
Pending
Application number
CN202110528905.6A
Other languages
Chinese (zh)
Inventor
蔡佳桐
杨林
祝骋路
吴同
Current Assignee
Hangzhou Diyingjia Technology Co ltd
Original Assignee
Hangzhou Diyingjia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Diyingjia Technology Co ltd filed Critical Hangzhou Diyingjia Technology Co ltd
Priority to CN202110528905.6A priority Critical patent/CN113065646A/en
Publication of CN113065646A publication Critical patent/CN113065646A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for realizing the generalization performance of a KI67 pathological image neural network model. The method comprises the following steps: first, organizing the existing labeled data, taking data from a single source as the source domain and the remaining data as the intrusion domain; second, training a model on the source-domain data set until convergence; third, merging the source-domain data and the intrusion-domain data into a mixed domain; fourth, replacing the source-domain data in the training data with the mixed domain and continuing to train the converged model; fifth, observing the weight changes of the model and pruning the model accordingly; sixth, fine-tuning the pruned model with mixed-domain data; and seventh, evaluating, testing, and visualizing the results. The method divides data of different sources into a source domain and an intrusion domain, measures the sensitivity of the model weights to the domain change that occurs when intrusion-domain data is introduced in place of the source-domain training data, discards the weights corresponding to domain-specific features, and thereby improves the generalization performance of the model on data of different sources.

Description

Method capable of realizing generalization performance of KI67 pathological image neural network model
Technical Field
The invention relates to the technical field of image analysis, and in particular to a method for realizing the generalization performance of a KI67 pathological image neural network model.
Background
In recent years, with the spread of digital pathology in hospitals, artificial intelligence has been applied ever more widely to the interpretation of pathological sections. Artificial-intelligence methods can learn from labeled data how to classify cells and how to distinguish negative from positive cells; this not only helps doctors read slides more efficiently, but can also be applied to pathology teaching and similar settings. KI67 immunohistochemistry is used to assess tumor malignancy across a variety of disease types. The KI67 index, the average proportion of positive tumor cells among all tumor cells over no fewer than ten pathological images, is commonly used as a measure of tumor malignancy. At present, deep learning combined with traditional image algorithms as post-processing can fairly accurately identify, localize, and count all cells in a pathological image of a microscope field of view, from which a computer can calculate a reasonable KI67 value. Such methods are accurate and fast and can be widely applied to clinical interpretation of pathological sections.
In practice, however, the applicability of deep-learning algorithms is very limited, mainly because multi-source data differ greatly. Clinically collected pathological images show obvious differences between hospitals and between disease types, arising from staining patterns, cell arrangement, cell morphology, cell size, and so on. For example, in some sections the positive tumor cells are stained fairly uniformly, whereas in others only irregular regions in the centers of the cells are stained. Cell arrangement and cell size also differ between disease types. If a deep-learning model is trained only on single-source, single-disease data, it performs well on that source but poorly on data from other sources. If data from all sources are pooled to train one model, the model does not sufficiently extract the features common to the different sources, so its accuracy is low on unknown sources (hospitals or disease types not represented in the training data). Improving a model's ability to extract common features from different source data is therefore one way to improve its generalization ability. In fact, cells of the same type do share common characteristics across pathological data from different sources. For example, positive tumor cells appear reddish overall, although on different sources they may appear red, reddish brown, purplish red, dark red, light red, and so on; likewise, negative tumor cells tend toward bluish hues, which may appear blue, bluish green, and so on across different sources.
Disclosure of Invention
The invention aims to provide a method for realizing the generalization performance of a KI67 pathological image neural network model. The method takes single-source data as the source domain and the other source data as the intrusion domain, trains a model on the source domain until convergence, then replaces the training data with the mixed domain and prunes the model; this helps the model learn general features better and improves its generalization performance, so that higher precision is achieved on data from unknown sources.
In order to achieve the above purpose, the invention provides the following technical scheme: a method for realizing the generalization performance of a KI67 pathological image neural network model, comprising the following steps:
first, organizing the existing labeled data: KI67 pathological sections of different disease types are collected from different sources, the single-source data with the largest amount of data is taken as the source domain, and data from the other sources or disease types are taken as the intrusion domain;
second, training a model on the source-domain data set until convergence;
third, combining the source-domain data and the intrusion-domain data and then randomly shuffling them, the shuffled data being called the mixed domain;
fourth, replacing the source-domain data in the training data with the mixed domain and continuing to train the converged model;
fifth, observing the weight changes of the model and pruning the model according to them;
sixth, fine-tuning the pruned model with mixed-domain data;
and seventh, evaluation, testing, and result visualization.
Preferably, in the first step, the single-source data consists of sections of the same disease type collected at the same magnification by equipment of the same model in the same hospital; the sections may come from different patients.
Preferably, in the second step, the model refers to a deep neural network model with strong learning ability, which is used to learn a feature set that performs well on the source domain, wherein the feature set includes KI67 common features and features specific to the source domain.
Preferably, in the fifth step, the model pruning is carried out as follows: in each model iteration, the p% of weights with the largest gradient change are found among all parameters of the current model, including the parameters already covered in previous iterations, and are covered with a binary mask of zeros and ones: the covered weights are set to zero, and the uncovered weights are kept as the model's initial weights before the next forward propagation of the model; this is iterated n times, and the pruning operation stops once the sparsity of the model reaches the target sparsity.
Preferably, p% = (1 - sparsity) × 100%/n; this pruning is linear pruning, i.e., the same number of weights is covered each time.
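As a worked example of this schedule (using the convention, stated in the detailed description, that sparsity here denotes the proportion of weights retained): with a target sparsity of 0.95 and n = 5 pruning iterations, p% = (1 - 0.95) × 100%/5 = 1%, so 1% of the weights are covered at each iteration and about 5% of the weights are covered in total after the n iterations.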
Preferably, because the model loses part of its feature-extraction capability through the pruning in the fifth step, the model is fine-tuned with mixed-domain data in the sixth step in order to restore its accuracy. Each batch of data used for fine-tuning needs to contain at least one source-domain sample and one intrusion-domain sample to ensure that the fine-tuning process is unbiased.
Preferably, in the fifth step, the pruning manner may also be non-linear pruning, i.e. each time covering a different number of weights.
Preferably, in the seventh step, the model performance is evaluated: the Hungarian algorithm is used to match the cells predicted by the model with the labeled cells, the precision, recall, and F1-score are measured, the model output is analyzed, and the results are visualized.
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a method for realizing generalization performance of a neural network model by model pruning aiming at KI67 pathological images, which divides data of different sources into a source domain and an intrusion domain, measures the sensitivity of model weight to domain transformation in the process of replacing the source domain with the intrusion domain, discards the weight corresponding to the specific characteristic of the domain, and improves the generalization performance of the model on the data of different sources, thereby achieving higher precision on unknown source data.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a graph of the results of KI67 cell identification over the unknown domain for three methods;
FIG. 3 is a schematic view of the model prediction result visualization of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 3, the present invention provides a technical solution: a method for realizing the generalization performance of a KI67 pathological image neural network model.
The method comprises the following steps. First, the existing labeled data are organized: data from a single source (sections of the same disease type collected at the same magnification by equipment of the same model in the same hospital, possibly from different patients) are taken as the source domain, and the other data as the intrusion domain. In general, the single-source data with the largest amount of data is selected as the source domain; in the KI67 multi-class segmentation problem, the single-source data covering the most classes is selected.
Second, a model is trained on the source-domain data set until convergence. The model here generally refers to a deep neural network model with strong learning ability, and it performs better if it has been pre-trained on a public data set (such as ImageNet). In this way the model learns a feature set that performs well on the source domain, comprising both KI67 common features and features specific to the source domain; the source-domain-specific features are those that can serve as criteria for class judgment on the source domain but are not suitable for other domains.
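A minimal training sketch for this step is given below, written in Python with PyTorch. The backbone, loss, optimizer, batch size, and convergence test are illustrative assumptions rather than choices fixed by the invention, and a hypothetical SourceDomainDataset-style classification dataset stands in for the actual KI67 cell-analysis data and network:

    # Sketch of step 2: train an ImageNet-pretrained network on the source
    # domain until convergence (all hyper-parameters are assumptions).
    import torch
    import torch.nn as nn
    import torchvision
    from torch.utils.data import DataLoader

    def train_on_source(source_dataset, num_classes, max_epochs=100, tol=1e-4):
        model = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # public pre-training
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        loader = DataLoader(source_dataset, batch_size=32, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        prev_loss = float("inf")
        for epoch in range(max_epochs):
            epoch_loss = 0.0
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
                epoch_loss += loss.item()
            epoch_loss /= len(loader)
            if abs(prev_loss - epoch_loss) < tol:  # crude convergence test
                break
            prev_loss = epoch_loss
        return model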
Third, the source-domain data and the intrusion-domain data are combined and then randomly shuffled; the shuffled data are called the mixed domain.
Fourth, the source-domain data in the training data are replaced with the mixed domain, and training of the converged model continues. The significance of this step is to apply a perturbation to the neural network, namely a change of domain. Before this step the model has converged on the source domain, which means that its weights are close to stable; in the subsequent training, the change of domain breaks this stability and forces the neural network to change in order to adapt to the new domain.
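The third and fourth steps can be sketched as follows; ConcatDataset plus a shuffled DataLoader stands in for the combine-and-shuffle operation, and train_one_epoch is a hypothetical helper that reuses the loss and optimizer from the source-domain training:

    # Sketch of steps 3-4: fuse source and intrusion data into a mixed domain
    # and continue training the already-converged model on it.
    from torch.utils.data import ConcatDataset, DataLoader

    def continue_on_mixed_domain(model, source_dataset, intrusion_dataset,
                                 train_one_epoch, epochs=1):
        mixed_domain = ConcatDataset([source_dataset, intrusion_dataset])
        mixed_loader = DataLoader(mixed_domain, batch_size=32, shuffle=True)  # random shuffling
        for _ in range(epochs):
            train_one_epoch(model, mixed_loader)  # assumed helper: same loss/optimizer as before
        return model, mixed_loader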
Fifth, the weight changes of the model are observed and the model is pruned according to them. The specific operation is as follows: during each iteration of the model, the top p% of weights with the largest gradient change (that is, whose corresponding gradients are largest) are found and covered (pruned) with a binary mask, and the uncovered weights are kept as the model's initial weights before the next forward propagation. This is iterated n times (typically 3-4 pruning iterations are sufficient, i.e., only a small amount of mixed-domain data is needed for model pruning), and the pruning operation stops once the sparsity of the model reaches the target sparsity. In general the sparsity is greater than 0.95, i.e., the clipped weights account for only a small portion of all the weights of the model. Here p% = (1 - sparsity) × 100%/n; this pruning is linear pruning, i.e., the same number of weights is covered each time.
In the fifth step, the pruning manner may also be non-linear pruning, i.e. each time covering a different number of weights.
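A minimal sketch of this pruning step, assuming PyTorch, is given below. Global (rather than per-layer) ranking by gradient magnitude, one mixed-domain batch per pruning iteration, cumulative zero/one masks, and the assumption that every parameter receives a gradient are one reading of the description, not details fixed by the invention:

    # Sketch of step 5: cover the p% of weights whose gradients change most
    # after the switch to the mixed domain; masked weights are zeroed before
    # the next forward pass, and n iterations reach the target retained
    # fraction ("sparsity" in the terminology used above).
    import torch

    def prune_domain_sensitive_weights(model, mixed_loader, criterion,
                                       target_sparsity=0.95, n_iters=4):
        p = (1.0 - target_sparsity) / n_iters        # fraction covered per iteration
        masks = {name: torch.ones_like(w) for name, w in model.named_parameters()}
        data_iter = iter(mixed_loader)

        for _ in range(n_iters):
            images, labels = next(data_iter)         # a small amount of mixed-domain data
            model.zero_grad()
            criterion(model(images), labels).backward()

            # Rank all parameters (including previously covered ones) by |gradient|.
            grads = torch.cat([w.grad.abs().flatten() for _, w in model.named_parameters()])
            k = max(1, int(p * grads.numel()))
            threshold = torch.topk(grads, k).values.min()   # smallest of the top-k magnitudes

            with torch.no_grad():
                for name, w in model.named_parameters():
                    newly_covered = w.grad.abs() >= threshold
                    masks[name][newly_covered] = 0.0        # extend the cumulative mask
                    w.mul_(masks[name])                     # covered weights -> zero,
                                                            # uncovered weights kept as-is
        return model, masks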
Sixth, the pruned model is fine-tuned (finetune) with the mixed-domain data. Through the pruning in step five the model loses part of its feature-extraction capability, and although the performance of the pruned model drops, the model gains the potential to learn a domain-invariant representation. Finally, the mixed-domain data are used to further fine-tune the parameters of the model that were not pruned, so as to restore its accuracy. Each batch of fine-tuning data needs to contain at least one source-domain sample and one intrusion-domain sample; this ensures that the fine-tuning process is unbiased.
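One way to build such batches is sketched below; drawing half of every batch from each domain is an illustrative choice that satisfies the at-least-one-sample-per-domain requirement, and the learning rate and step count are assumptions:

    # Sketch of step 6: fine-tune the unpruned weights with mixed batches that
    # always contain source-domain and intrusion-domain samples.
    import itertools
    import torch
    from torch.utils.data import DataLoader

    def finetune_pruned_model(model, masks, source_dataset, intrusion_dataset,
                              criterion, lr=1e-5, steps=1000, batch_size=16):
        src_loader = DataLoader(source_dataset, batch_size=batch_size // 2, shuffle=True)
        int_loader = DataLoader(intrusion_dataset, batch_size=batch_size // 2, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)

        for step, ((xs, ys), (xi, yi)) in enumerate(
                zip(itertools.cycle(src_loader), itertools.cycle(int_loader))):
            if step >= steps:
                break
            x = torch.cat([xs, xi])
            y = torch.cat([ys, yi])
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
            with torch.no_grad():                    # keep the pruned weights at zero
                for name, w in model.named_parameters():
                    w.mul_(masks[name])
        return model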
Seventh, evaluation, testing, and result visualization. The model performance is evaluated: the Hungarian algorithm is used to match the cells predicted by the model with the labeled cells, and indices such as precision, recall, and F1-score are measured. The model output is then analyzed and the results are visualized.
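The matching and metric computation can be sketched as follows. Representing predicted and annotated cells by centroid coordinates and accepting a match only within a pixel distance threshold are assumptions made for illustration; the invention does not fix these conventions:

    # Sketch of step 7: Hungarian matching of predicted and labeled cells,
    # followed by precision, recall and F1-score.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def evaluate_cells(pred_xy, gt_xy, max_dist=15.0):
        """pred_xy, gt_xy: (N, 2) and (M, 2) arrays of cell centroids in pixels."""
        if len(pred_xy) == 0 or len(gt_xy) == 0:
            tp = 0
        else:
            cost = np.linalg.norm(pred_xy[:, None, :] - gt_xy[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)        # Hungarian matching
            tp = int(np.sum(cost[rows, cols] <= max_dist))  # matches within the threshold
        fp = len(pred_xy) - tp
        fn = len(gt_xy) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1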
Theoretical derivation:
The pruning method adopted by the invention aims to filter out a part of the model's weights such that the precision of the pruned model changes little on the training data while the pruned model performs well on domains of unknown origin. To achieve this goal, the domains of known origin are divided into a single source domain and an intrusion domain. The model is first trained on the single source domain. The source domain and the intrusion domain, which did not participate in this training, are then fused and shuffled, and the training data are replaced with the mixed domain while training continues. In this case, the objective corresponds to the following optimization:

W^{*} = \arg\min_{W' \subseteq W} \big| C(D_s \mid W') - C(D_s \mid W) \big| + C(D_i \mid W')

where D_s and D_i are the samples from the source domain and the intrusion domain, respectively; W^{*} is the expected optimal W', representing the parameters remaining after the training above; and the total loss C(D \mid W) is obtained by summing the losses of the samples in D under the parameters W.
Typically, C(D_s \mid W') of the trimmed model is greater than C(D_s \mid W), so the equation can be simplified as follows:

W^{*} = \arg\min_{W' \subseteq W} C(D_m \mid W') - C(D_s \mid W)

where D_m denotes the pooled sample containing D_s and D_i. Since C(D_s \mid W) does not depend on W' and therefore does not contribute to its selection, it can be eliminated, and the optimization problem simplifies to:

W^{*} = \arg\min_{W' \subseteq W} C(D_m \mid W')

This objective can be addressed by measuring the contribution of each parameter w to C(D_m \mid W). Specifically, when the model is trained on D_m, the model parameters W are updated by back-propagation according to the gradient-descent rule. In this process the gradient

g = \frac{\partial C(D_m \mid W)}{\partial W}

is the derivative of C(D_m \mid W) with respect to W. Accordingly, the magnitude |g| of the gradient of a single parameter w \in W can be viewed as its sensitivity to C(D_m \mid W). Intuitively, model convergence depends on the updating of the parameters: parameters that have already adapted to the source domain need to change significantly to accommodate the mixed domain and can therefore be regarded as highly domain-dependent. We can therefore keep the weights with smaller |g|, denoted W_r, to approximate the expected remaining parameters W^{*}:

W_r = \{\, w \in W : |g_w| \le c_p \,\}

where c_p is a scalar threshold related to the remaining percentage p of the parameters.
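In practice, the threshold c_p can be taken as a quantile of the gradient magnitudes so that the desired fraction of weights is retained. A tiny NumPy illustration with made-up numbers:

    # Keep the weights with the smallest |g|: c_p is the p-quantile of the
    # gradient magnitudes, where p is the fraction of parameters to retain
    # (the values below are made up for illustration).
    import numpy as np

    g = np.array([0.02, 0.50, 0.01, 0.30, 0.03, 0.04])  # |gradient| per weight (toy values)
    p_retain = 0.5                                       # keep 50% of the weights
    c_p = np.quantile(np.abs(g), p_retain)               # scalar threshold c_p
    keep_mask = np.abs(g) <= c_p                         # W_r: weights with small |g|
    print(keep_mask)                                     # [ True False  True False  True False]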
Comparison method and experimental results:
three methods were compared experimentally: empirical risk minimization method, empirical risk minimization-fine tuning method, and model pruning method. The empirical risk minimization method is taken as a baseline (baseline), and no operation for improving the generalization of the model is applied. The empirical risk minimization-fine tuning method is used as a comparison method for model pruning, and compared with a pruning method, except that the pruning percentage is set to be 0%, the rest has no difference. FIG. 1 shows the results of KI67 cell recognition over the unknown domain by three methods. Wherein an unknown domain refers to a domain that has not participated in the model training and validation process. In this experiment, 16 KI67 sections from different hospitals and disease species were unknown for 41 cases of 40 times lower size 1920 x 1080. The model prediction result visualization graph is shown in fig. 2, wherein the dashed circle region is a region with inconsistent results of the three methods, and it can be seen that the performance of the model pruning generalization method is significantly better than that of the other two methods.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A method for realizing the generalization performance of a neural network model for KI67 pathological images, characterized by comprising the following steps:
first, organizing the existing labeled data: KI67 pathological sections of different disease types are collected from different sources, the single-source data with the largest amount of data is taken as the source domain, and data from the other sources or disease types are taken as the intrusion domain;
second, training a model on the source-domain data set until convergence;
third, combining the source-domain data and the intrusion-domain data and then randomly shuffling them, the shuffled data being called the mixed domain;
fourth, replacing the source-domain data in the training data with the mixed domain and continuing to train the converged model;
fifth, observing the weight changes of the model and pruning the model according to them;
sixth, fine-tuning the pruned model with mixed-domain data;
and seventh, evaluation, testing, and result visualization.
2. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 1, wherein: in the first step, the single-source data consists of sections of the same disease type collected at the same magnification by equipment of the same model in the same hospital, and the sections may come from different patients.
3. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 1, wherein: in the second step, the model refers to a deep neural network model with strong learning ability, which is used to learn a feature set that performs well on the source domain, wherein the feature set includes KI67 common features and features specific to the source domain.
4. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 1, wherein in the fifth step the model pruning is carried out as follows: in each model iteration, the p% of weights with the largest gradient change are found among all parameters of the model, including the parameters already covered in previous iterations, and are covered with a binary mask of zeros and ones: the covered weights are set to zero, and the uncovered weights are kept as the model's initial weights before the next forward propagation of the model; this is iterated n times, and the pruning operation stops once the sparsity of the model reaches the target sparsity.
5. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 4, wherein: p% = (1 - sparsity) × 100%/n, and this pruning is linear pruning, i.e., the same number of weights is covered each time.
6. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 5, wherein: through the pruning in the fifth step the model loses part of its feature-extraction capability; in order to restore the precision of the model, the model is fine-tuned with mixed-domain data in the sixth step, the remaining parameters being further fine-tuned on the mixed domain to restore accuracy; and each batch of fine-tuning data needs to contain at least one source-domain sample and one intrusion-domain sample to ensure that the fine-tuning process is unbiased.
7. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 1, wherein: in the fifth step, the pruning manner may also be non-linear pruning, i.e. each time covering a different number of weights.
8. The method for realizing the generalization performance of the neural network model for KI67 pathological images according to claim 1, wherein: in the seventh step the model performance is evaluated, the Hungarian algorithm is used to match the cells predicted by the model with the labeled cells, the precision, recall, and F1-score are measured, the model output is analyzed, and the results are visualized.
CN202110528905.6A 2021-05-14 2021-05-14 Method capable of realizing generalization performance of KI67 pathological image neural network model Pending CN113065646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110528905.6A CN113065646A (en) 2021-05-14 2021-05-14 Method capable of realizing generalization performance of KI67 pathological image neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110528905.6A CN113065646A (en) 2021-05-14 2021-05-14 Method capable of realizing generalization performance of KI67 pathological image neural network model

Publications (1)

Publication Number Publication Date
CN113065646A true CN113065646A (en) 2021-07-02

Family

ID=76568690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110528905.6A Pending CN113065646A (en) 2021-05-14 2021-05-14 Method capable of realizing generalization performance of KI67 pathological image neural network model

Country Status (1)

Country Link
CN (1) CN113065646A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004059473A1 (en) * 2004-12-10 2006-06-22 Roche Diagnostics Gmbh Method and device for the examination of medically relevant fluids and tissue samples
US20100128988A1 (en) * 2008-11-26 2010-05-27 Agilent Technologies, Inc. Cellular- or Sub-Cellular-Based Visualization Information Using Virtual Stains
CN106874663A (en) * 2017-01-26 2017-06-20 中电科软件信息服务有限公司 Cardiovascular and cerebrovascular disease Risk Forecast Method and system
CN108986889A (en) * 2018-06-21 2018-12-11 四川希氏异构医疗科技有限公司 A kind of lesion identification model training method, device and storage equipment
CN109544529A (en) * 2018-11-19 2019-03-29 南京信息工程大学 Pathological image data enhancement methods towards deep learning model training and study
CN111353545A (en) * 2020-03-09 2020-06-30 大连理工大学 Plant disease and insect pest identification method based on sparse network migration
CN111931931A (en) * 2020-09-29 2020-11-13 杭州迪英加科技有限公司 Deep neural network training method and device for pathology full-field image
CN112270666A (en) * 2020-11-03 2021-01-26 辽宁工程技术大学 Non-small cell lung cancer pathological section identification method based on deep convolutional neural network
CN112541923A (en) * 2020-12-03 2021-03-23 南开大学 Cup optic disk segmentation method based on fundus image data set migration learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOBIN HU et al.: "Coarse-to-Fine Adversarial Networks and Zone-Based Uncertainty Analysis for NK/T-Cell Lymphoma Segmentation in CT/PET Images", IEEE Journal of Biomedical and Health Informatics *
ZHOU MAO: "Research on Medical CT Image Segmentation Technology Based on Deep Learning", China Master's Theses Full-text Database (Medicine and Health Sciences) *

Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination
  • RJ01: Rejection of invention patent application after publication (application publication date: 20210702)