CN113902724A - Method, device, equipment and storage medium for classifying tumor cell images - Google Patents


Info

Publication number: CN113902724A
Application number: CN202111212018.4A
Authority: CN (China)
Prior art keywords: target, characteristic data, classification, matrix, sample
Legal status: Granted; Active (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113902724B
Inventors: 王琳婧, 甄鑫, 杨蕊梦, 梁芳蓉, 张书旭, 廖煜良
Current assignee: Cancer Center of Guangzhou Medical University
Original assignee: Cancer Center of Guangzhou Medical University
Application filed by Cancer Center of Guangzhou Medical University
Priority to CN202111212018.4A; publication of CN113902724A; application granted; publication of CN113902724B

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06F 18/2411: Pattern recognition; classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N 20/00: Machine learning
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30024: Cell structures in vitro; tissue sections in vitro
    • G06T 2207/30096: Tumor; lesion


Abstract

The invention belongs to the technical field of machine learning and discloses a method, a device, equipment and a storage medium for classifying tumor cell images. L fusion sequences are obtained by combining all the different MRI sequences. According to the MRI sequences included in each fusion sequence, the first characteristic data samples of the sample objects are fused into second characteristic data samples, which are then used to train a classifier so as to construct L classification systems capable of distinguishing at least two tumor types. A target classification system whose classification performance reaches a preset condition is selected from the L classification systems, thereby determining a target fusion sequence with a better fusion effect. The K target first characteristic data of a target object to be detected are fused into one target second characteristic datum through the conversion matrix of the target fusion sequence, and the target second characteristic datum is input into the target classification system for classification. Because imaging information from a plurality of MRI sequences is fused for classification, the characteristics of the tumor are better described and the classification accuracy is improved.

Description

Method, device, equipment and storage medium for classifying tumor cell images
Technical Field
The invention belongs to the technical field of machine learning, and particularly relates to a method, a device, equipment and a storage medium for classifying tumor cell images.
Background
Glioblastoma multiforme and single-shot brain metastases are two of the more common aggressive malignant brain tumors, and their treatment strategies differ widely: the standard of care for newly diagnosed glioblastoma is maximal tumor resection followed by radiation therapy and temozolomide chemotherapy, while the primary treatment modality for single-shot brain metastases is stereotactic radiation therapy. Yet the two have a similar appearance on conventional Magnetic Resonance Imaging (MRI): lesions may have a necrotic center and heterogeneous enhancing components within the tumor, surrounded by a peritumoral edematous region. Precise preoperative differential diagnosis of glioblastoma and single-shot brain metastasis is therefore critical to individualized treatment decisions.
Pathological examination can definitively distinguish the two tumors, but such invasive surgical identification is undesirable when the tumor is near or involves an eloquent brain area, or when the patient is too weak to undergo surgery. There is therefore a need for a fast, non-invasive and robust method of identifying the primary tumor, so that the correct treatment can be determined.
In recent years, radiomics has been widely used in clinical diagnosis, prognosis and prediction of tumor treatment response. Many studies distinguish glioblastomas from single-shot brain metastases based on conventional MRI sequences or on more advanced MRI techniques such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI) or amide proton transfer (APT) imaging. These studies show that radiomics can extract image texture information that cannot be captured by the naked eye, improving the ability to discriminate glioblastoma from single-shot brain metastasis.
However, many previous studies performed radiomics analysis on a single MRI sequence, such as the contrast-enhanced T1-weighted image (T1WI), the apparent diffusion coefficient (ADC) map or diffusion tensor imaging (DTI), or made only simple multi-sequence comparisons. The choice of MRI sequence is usually arbitrary and heuristic, and the relative merits of the individual MRI sequences are unclear. In fact, different MRI sequences can be seen as multimodal images depicting the tumor from different angles. To date, no one has attempted to exploit the potential of integrating the radiomics features obtained from multiple MRI sequences to distinguish glioblastomas from single-shot brain metastases. The classification accuracy for glioblastoma and single-shot brain metastasis in the prior art therefore needs to be improved.
Disclosure of Invention
The invention aims to provide a method, a device, equipment and a storage medium for classifying tumor cell images, which can fuse imaging information from a plurality of MRI sequences, can better describe the characteristics of tumors and improve the classification accuracy.
The first aspect of the embodiments of the present invention discloses a classification method for tumor cell images, including:
combining L fusion sequences according to the M magnetic resonance scanning sequences, wherein each fusion sequence comprises at least two magnetic resonance scanning sequences, and the value of L is determined according to M;
for each magnetic resonance scanning sequence, respectively acquiring N first characteristic data samples in one-to-one correspondence with N sample objects, together with the label data of each sample object, wherein the label data characterizes the tumor type to which the sample object belongs, and there are at least two tumor types;
for each fusion sequence, calculating a transformation matrix according to the first characteristic data samples of all sample objects; according to the conversion matrix, fusing first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to the N sample objects one by one;
inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to each second characteristic data sample into a classifier for training so as to construct L classification systems;
taking, as the target classification system, any first classification system among the L classification systems whose classification performance reaches a preset condition;
fusing K target first characteristic data of the target object to be detected into target second characteristic data according to the conversion matrix of the target fusion sequence corresponding to the target classification system; the K target first characteristic data correspond to K magnetic resonance scanning sequences contained in the target fusion sequence one by one;
and inputting the target second characteristic data into the target classification system, and determining target label data of the target object to be detected according to an output result of the target classification system.
Further, the multi-classification model system comprises a plurality of classification models, and the number of the classification models is determined according to the number of the classifiers and the number of the feature selection algorithms; the M magnetic resonance scan sequences include the MRI T1WI, CE_T1WI, T2WI, and T2_FLAIR sequences; the tumor types include the glioblastoma type and the single-shot brain metastasis type.
The second aspect of the embodiments of the present invention discloses a classification device for tumor cell images, including:
a combination unit, configured to combine L fusion sequences according to M magnetic resonance scanning sequences, where each fusion sequence includes at least two magnetic resonance scanning sequences, and a value of L is determined according to M;
an obtaining unit, configured to obtain, for each magnetic resonance scanning sequence, N first characteristic data samples corresponding to N sample objects one to one and tag data of each sample object, where the tag data is used to characterize a tumor type to which the sample object belongs, and the tumor types include at least two;
a first fusion unit, configured to calculate, for each fusion sequence, a transformation matrix from first characteristic data samples of all sample objects; according to the conversion matrix, fusing first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to the N sample objects one by one;
the training unit is used for inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to each second characteristic data sample into a classifier for training so as to construct L classification systems;
a determining unit, configured to use any one of the first classification systems, of the L classification systems, whose classification performance meets a preset condition as a target classification system;
the second fusion unit is used for fusing the K target first characteristic data of the target object to be detected into target second characteristic data according to the conversion matrix of the target fusion sequence corresponding to the target classification system; the K target first characteristic data correspond to K magnetic resonance scanning sequences contained in the target fusion sequence one by one;
and the classification unit is used for inputting the target second characteristic data into the target classification system and determining target label data of the target object to be detected according to an output result of the target classification system.
A third aspect of an embodiment of the present invention discloses an electronic device, including a memory storing executable program codes and a processor coupled to the memory; the processor calls the executable program code stored in the memory for performing the method of classifying a tumor cell image disclosed in the first aspect.
A fourth aspect of the present embodiments discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the classification method for a tumor cell image disclosed in the first aspect.
In the method, the device, the equipment and the storage medium for classifying tumor cell images, L fusion sequences are obtained by combining all the different MRI sequences. According to the MRI sequences included in each fusion sequence, the first characteristic data samples of the same sample object are fused into second characteristic data samples, which are then used to train a classifier so as to construct L classification systems capable of distinguishing at least two tumor types. A target classification system whose classification performance reaches a preset condition is selected from the L classification systems, thereby determining a target fusion sequence with a better fusion effect. The K target first characteristic data of a target object to be detected are fused into one target second characteristic datum through the conversion matrix of the target fusion sequence, and the target second characteristic datum is input into the target classification system for classification. The method can realize individualized disease diagnosis and assist clinical decision-making; because imaging information from a plurality of MRI sequences is fused for classification, the fused information better describes the characteristics of the tumor, so the classification accuracy can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles and effects of the invention.
Unless otherwise specified or defined, the same reference numerals in different figures refer to the same or similar features, and different reference numerals may be used for the same or similar features.
FIG. 1 is a flowchart of a method for classifying tumor cell images according to an embodiment of the present invention;
FIG. 2 is a flow chart of a training and testing process of a multi-classification model system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a classification apparatus for tumor cell images according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Description of reference numerals:
301. a combination unit; 302. an acquisition unit; 303. a first fusion unit; 304. a training unit; 305. a determination unit; 306. a second fusion unit; 307. a classification unit; 401. a memory; 402. a processor.
Detailed Description
Unless specifically stated or otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Where the technical solutions of the present invention are applied in a practical scenario, the terms used herein may also take on meanings consistent with achieving those technical solutions. As used herein, "first" and "second" are used merely to distinguish names and do not denote any particular quantity or order. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As used herein, unless otherwise specified or defined, the terms "comprises," "comprising" and "includes" are open-ended: they specify the presence of the stated features but do not exclude additional ones.
Naturally, technical content or technical features that are contrary to the object of the present invention, or clearly contradict it, are excluded. To facilitate an understanding of the invention, specific embodiments thereof are described in more detail below with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, an embodiment of the present invention discloses a method for classifying tumor cell images, including:
(I) training procedure
And S10, combining L fusion sequences according to the M magnetic resonance scanning sequences, wherein each fusion sequence comprises at least two magnetic resonance scanning sequences.
M is a positive integer greater than or equal to two, L is a positive integer greater than or equal to one, and the value of L is determined by M. The method applies to any set of at least two different magnetic resonance imaging (MRI) sequences, which can be fused and classified according to the classification method for tumor cell images disclosed by the invention. The M magnetic resonance scanning sequences are combined as unordered combinations (not permutations) of at least two sequences, giving at most L = 2^M - M - 1 fusion sequences. For example, in the present embodiment the M MRI sequences may include the four clinical sequences T1WI, CE_T1WI, T2WI and T2_FLAIR; there are then at most 11 combinations of 2 or more of the 4 different MRI sequences, giving 11 fusion sequences, as shown in table 3 below.
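As a sketch of this combination step (the sequence names are those of the embodiment; the helper function name is illustrative, not from the patent), the fusion sequences can be enumerated with the standard library:

```python
from itertools import combinations

def fusion_sequences(mri_sequences):
    """Enumerate all unordered combinations of at least two MRI sequences."""
    m = len(mri_sequences)
    seqs = []
    for k in range(2, m + 1):
        seqs.extend(combinations(mri_sequences, k))
    return seqs

# The four clinical sequences of the embodiment
sequences = ["T1WI", "CE_T1WI", "T2WI", "T2_FLAIR"]
fused = fusion_sequences(sequences)
print(len(fused))  # 2**4 - 4 - 1 = 11 fusion sequences
```

For M = 4 this yields the 11 fusion sequences of the embodiment (6 pairs, 4 triples and 1 quadruple).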
S20, for each magnetic resonance scanning sequence, respectively acquiring N first characteristic data samples corresponding to N sample objects one to one, and tag data of each sample object, where the tag data is used to characterize a tumor type to which the sample object belongs, and the number of the tumor types includes at least two.
In step S20, the N sample objects may first be determined and each sample object labeled with its tumor type to obtain the label data of each sample object. Then, for each MRI sequence, the N multi-modal sample images obtained by magnetic resonance scanning of the N sample objects (one image per sample object) are acquired, and radiomics feature extraction is performed on each multi-modal sample image to obtain the corresponding N first characteristic data samples. It will be appreciated that the N first characteristic data samples of each MRI sequence correspond respectively to the label data of the sample objects to which they belong.
In this embodiment, the tumor types may include the two types glioblastoma and single-shot brain metastasis, and the label data may accordingly take two values, one for glioblastoma and the other for single-shot brain metastasis. For example, if the label data is set to 0 or 1, then 0 denotes glioblastoma and 1 denotes single-shot brain metastasis, or vice versa.
In the present embodiment, assuming N is 121, 121 patients with pathologically confirmed glioblastoma or single-shot brain metastasis may be taken as the sample objects, and the images obtained by scanning the 121 sample objects before surgery with the four different MRI sequences T1WI, CE_T1WI, T2WI and T2_FLAIR are acquired as the multi-modal sample images. Of the 121 pathologically confirmed patients, 61 had glioblastoma and 60 had single-shot brain metastases. Referring also to table 1 below, table 1 details the clinical information of the sample subjects, wherein GBM stands for glioblastoma and META stands for single-shot brain metastasis.
TABLE 1 clinical information of GBM and META sample subjects
The radiomics feature extraction that yields the corresponding N first characteristic data samples may specifically be implemented as follows: target delineation is performed on each slice of each multi-modal sample image according to operating parameters input by a user (e.g., at least two experienced radiodiagnostic experts), giving a two-dimensional region of interest (ROI) for each slice; the two-dimensional regions of interest of all slices are then stored as three-dimensional mask image data (i.e., a mask image); finally, radiomics feature extraction is performed on each piece of mask image data to obtain the N first characteristic data samples.
For example, the open-source Python radiomics package (PyRadiomics) can be used to perform radiomics feature extraction on each region of interest, obtaining 109 radiomics features for each multi-modal sample image as the first characteristic data sample. Because the sample classes are seriously imbalanced, the first characteristic data samples can first be standardized; the classes are then balanced with the Synthetic Minority Oversampling Technique (SMOTE), which introduces synthesized feature samples to oversample the minority class (single-shot brain metastasis) before subsequent processing, thereby overcoming the negative influence of the class imbalance.
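A minimal SMOTE-style oversampling sketch (a simplified stand-in for the full SMOTE algorithm, not the implementation used in the embodiment; the function name and parameters are illustrative):

```python
import numpy as np

def smote_oversample(X, y, minority_label, k=5, rng=None):
    """Synthesize minority samples by interpolating between each minority
    sample and one of its k nearest minority neighbours until the classes
    are balanced (a simplified SMOTE sketch)."""
    rng = np.random.default_rng(rng)
    X_min = X[y == minority_label]
    n_new = (y != minority_label).sum() - len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neighbours[i, rng.integers(min(k, len(X_min) - 1))]
        gap = rng.random()  # interpolation coefficient in [0, 1)
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    X_out = np.vstack([X, np.array(synth)])
    y_out = np.concatenate([y, np.full(n_new, minority_label)])
    return X_out, y_out

# Toy demonstration: 20 majority vs. 10 minority samples
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(20, 4)), rng.normal(3, 1, size=(10, 4))])
y = np.array([0] * 20 + [1] * 10)
X_bal, y_bal = smote_oversample(X, y, minority_label=1, rng=0)
```

Production work would normally use a tested implementation such as `imblearn.over_sampling.SMOTE` rather than this sketch.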
As shown in table 2 below, the 109 radiomics features can be divided into three categories: shape features (n = 15), first-order statistical features (n = 19), and second-order statistical features, which describe the image gray-level distribution and are commonly referred to as texture features.
Table 2: The 109 radiomics features
S30, calculating a conversion matrix according to the first characteristic data samples of all the sample objects aiming at each fusion sequence; and according to the conversion matrix, fusing the first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to the N sample objects one to one.
It should be noted that, for the at least two MRI sequences included in each fusion sequence, the first characteristic data sample of each sample object under each MRI sequence contains the 109 radiomics features described above. For each radiomics feature, a multi-sequence feature matrix is constructed from the first characteristic data samples of all sample objects covered by the fusion sequence; this matrix gathers the feature values of all the MRI sequences to be fused by the fusion sequence. A transformation matrix for the feature is then calculated from the multi-sequence feature matrix by a feature fusion method that weakens the between-class correlation, and the multi-sequence feature matrix of the feature is fused with the transformation matrix to obtain a first fusion vector for the feature.
Specifically, for each fusion sequence, the following calculation may be performed for each of the 109 radiomics features, as in steps S301 to S307. For a fusion sequence combining p MRI sequences, the values of one radiomics feature form a multi-sequence feature matrix $X \in \mathbb{R}^{p \times N}$, whose columns hold the feature's value in each of the p sequences for each sample object.

S301, constructing the multi-sequence feature matrix X of the radiomics feature, and calculating the mean feature vector of each class of sample objects and the mean of the feature vectors of all sample objects through the following formulas (1) and (2):

$\bar{x}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} x_{ij}$   (1)

$\bar{x} = \frac{1}{N} \sum_{i=1}^{c} \sum_{j=1}^{n_i} x_{ij}$   (2)

wherein c (c = 2) represents the number of tumor classes, $n_i$ represents the number of samples of class i (i = 1, ..., c), and $x_{ij}$ represents the j-th sample of class i (j = 1, 2, ..., $n_i$).

S302, inputting the class mean vectors $\bar{x}_i$ and the overall mean $\bar{x}$ into the following formula (3), and calculating the between-class scatter matrix $S_{bx}$ of the multi-sequence feature matrix X together with the matrix $\Phi_{bx}$:

$S_{bx} = \sum_{i=1}^{c} n_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T = \Phi_{bx} \Phi_{bx}^T$   (3)

wherein T is the transposition symbol and $\Phi_{bx} = \left[\sqrt{n_1}(\bar{x}_1 - \bar{x}), \ldots, \sqrt{n_c}(\bar{x}_c - \bar{x})\right]$.

S303, instead of diagonalizing the p × p matrix $S_{bx} = \Phi_{bx}\Phi_{bx}^T$ directly, forming the c × c matrix $\Phi_{bx}^T \Phi_{bx}$ and diagonalizing it using formula (4) below, outputting the eigenvector matrix P of $\Phi_{bx}^T \Phi_{bx}$:

$P^T \left(\Phi_{bx}^T \Phi_{bx}\right) P = \hat{\Lambda}$   (4)

wherein $\hat{\Lambda}$ represents the eigenvalue matrix.

S304, taking from P the eigenvectors corresponding to the first r largest eigenvalues $\Lambda_{r \times r}$, forming a new eigenvector matrix Q, and outputting Q:

$Q^T \left(\Phi_{bx}^T \Phi_{bx}\right) Q = \Lambda_{r \times r}$   (5)

where r represents the dimension after fusion; in this embodiment r = 1.

S305, inputting $\Phi_{bx}$, Q and $\Lambda_{r \times r}$ into formula (6), calculating the unit eigenvectors $V_r$ of $S_{bx}$ corresponding to its first r most significant eigenvalues $\Lambda_{r \times r}$, and outputting $\Lambda_{r \times r}$ and $V_r$:

$V_r = \Phi_{bx}\, Q\, \Lambda_{r \times r}^{-1/2}$   (6)

S306, inputting the first r most significant eigenvalues $\Lambda_{r \times r}$ of $S_{bx}$ and their corresponding eigenvectors $V_r$, and calculating the transformation matrix w of the radiomics feature according to formula (7):

$w = V_r\, \Lambda_{r \times r}^{-1/2}$   (7)

so that $w^T S_{bx} w = I$, i.e. the between-class scatter is unitized.

S307, inputting the multi-sequence feature matrix X of the radiomics feature and the corresponding transformation matrix w, and fusing X according to formula (8) to obtain the fused first fusion vector f:

$f = w^T X_{p \times N}$   (8)

wherein p (p = 2, 3 or 4) represents the number of MRI sequences to be fused by the fusion sequence, N represents the number of samples, and $f \in \mathbb{R}^{r \times N}$.
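The per-feature fusion of steps S301 to S307 can be sketched in NumPy as follows (an interpretation of the between-class-scatter fusion described above, not the patent's own code; function and variable names are illustrative):

```python
import numpy as np

def fuse_feature(X, y, r=1):
    """Fuse one radiomics feature across p MRI sequences (steps S301-S307).

    X : (p, N) multi-sequence feature matrix -- the feature's value in each
        of the p sequences for each of the N sample objects.
    y : (N,) class labels (c tumor classes).
    Returns the transformation matrix w (p, r) and the first fusion vector f (r, N).
    """
    classes = np.unique(y)
    xbar = X.mean(axis=1)                                   # overall mean, eq. (2)
    # Phi_bx has columns sqrt(n_i) * (xbar_i - xbar), so S_bx = Phi @ Phi.T, eq. (3)
    Phi = np.column_stack([
        np.sqrt((y == ci).sum()) * (X[:, y == ci].mean(axis=1) - xbar)
        for ci in classes
    ])
    # Diagonalize the small c x c matrix Phi.T @ Phi instead of S_bx, eqs. (4)-(5)
    lam, P = np.linalg.eigh(Phi.T @ Phi)
    order = np.argsort(lam)[::-1][:r]                       # r largest eigenvalues
    Q, lam_r = P[:, order], lam[order]
    V = Phi @ Q / np.sqrt(lam_r)        # unit eigenvectors of S_bx, eq. (6)
    w = V / np.sqrt(lam_r)              # transformation matrix, eq. (7)
    f = w.T @ X                         # first fusion vector, eq. (8)
    return w, f
```

With r = 1 and two classes, w reduces the p per-sequence values of the feature to a single fused value per sample, and the between-class scatter is unitized ($w^T S_{bx} w = I$) by construction.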
for each fusion sequence, the above steps S301 to S307 are repeatedly performed until after traversing 109 radiologic features, a first fusion vector F for each radiologic feature can be obtained, and then the first fusion vectors F for all radiologic features are spliced to obtain a second fusion vector F, where the second fusion vector F corresponds to all sample objects, so that the second fusion vector F can be divided according to different sample objects to obtain N second characteristic data samples corresponding to N sample objects one to one. It is to be understood that the N second characteristic data samples also correspond to the tag data of the sample object to which they belong.
Further, for each fusion sequence, the transformation matrices w of the individual radiomics features obtained in step S306 can be concatenated to generate a final transformation matrix W.
Steps S301 to S307 are repeated until all the fusion sequences have been traversed, yielding L final transformation matrices W and L second fusion vectors F. Each second fusion vector F contains N second characteristic data samples, obtained by fusing all the MRI sequences of the fusion sequence to which it belongs.
And S40, inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to the second characteristic data samples into a classifier for training to construct L classification systems.
In step S40, there may be one or more classifiers. When there are a plurality of classifiers, the classification system is specifically a multi-classification model system. It should be considered that the classification capability of a classification model is tied to the original classifier it uses, and different classifiers may produce inconsistent results even on the same task. Therefore, in the present embodiment the training data (i.e., the N second characteristic data samples) are preferably input into a plurality of different classifiers for training, so as to construct a multi-classification model system comprising a plurality of classification models. Compared with a single classifier, a plurality of different classifiers can improve the robustness of the classification system and provide more reliable and repeatable classification performance.
Specifically, step S40 may include: for each fusion sequence, performing feature selection on each second characteristic data sample with n feature selection algorithms to obtain n corresponding feature representative sets, the feature representative sets corresponding one-to-one to the feature selection algorithms; each feature representative set contains a specified number (e.g., 50) of significant feature representative samples, and its label data is the same as that of the second characteristic data sample from which it was derived. The feature representative sets and their corresponding label data are then input in turn into m different classifiers for training, so as to construct the L multi-classification model systems. Each multi-classification model system comprises a plurality of classification models, the number of which is determined by the number of feature selection algorithms and the number of classifiers, namely m × n classification models, where n and m are positive integers.
In this embodiment, as shown in fig. 2, n may be 5, and the 5 feature selection algorithms may be the DISR, JMI, SPEC, l1_l21, and f_score operators; m may be 3, and the 3 classifiers may be a logistic regression classifier, a support vector machine classifier, and a gradient tree boosting classifier.
Cross-combining the above 5 feature selection algorithms with the 3 different classifiers yields 15 classification models, which together form a multi-classification model system. Specifically, the scikit-learn machine learning package in a Python programming environment can be used to train the logistic regression classifier, the support vector machine classifier, and the gradient tree boosting classifier; combined with the above 5 feature selection algorithms, this yields the 15 classification models. The second characteristic data samples of each fusion sequence and the corresponding label data can then be input into each of the 15 classification models with 10-fold cross validation: within each classification model, the designated feature selection algorithm first produces a feature representative set, which is then passed to the classifier's fit function for training and prediction, and the classification result is stored and output.
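As a hedged sketch of the 5 × 3 grid of feature selection algorithms and classifiers described above: the data, the selector set, and all hyperparameters below are illustrative assumptions, not the patented implementation. In particular, DISR, JMI, SPEC, and l1_l21 are not part of scikit-learn, so a single f_score-style selector (`f_classif`) stands in for the selection step here.

```python
# Illustrative sketch of the feature-selection x classifier grid with
# 10-fold cross validation. Data and parameters are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))     # 60 fused second-characteristic samples
y = np.repeat([0, 1], 30)          # 0 = glioblastoma, 1 = brain metastasis

# Stand-in for the 5 selectors in the text (only f_score is shown)
selectors = {"f_score": SelectKBest(f_classif, k=50)}
classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True),
    "gbt": GradientBoostingClassifier(),
}

aucs = {}
for s_name, sel in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = Pipeline([("select", sel), ("clf", clf)])
        # 10-fold CV: selection is re-fit inside each fold, avoiding leakage
        proba = cross_val_predict(pipe, X, y, cv=10, method="predict_proba")
        aucs[(s_name, c_name)] = roc_auc_score(y, proba[:, 1])
```

Wrapping selector and classifier in one `Pipeline` matters: it re-runs feature selection inside every fold, so the held-out fold never influences which 50 features are kept.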
S50, taking any first classification system, among the L classification systems, whose classification performance reaches a preset condition as the target classification system.
Further, the AUC (Area Under the Curve) of each classification model can be calculated from the classification results of the 15 classification models included in the multi-classification model system, and the average AUC of these models used as the performance index of the whole multi-classification model system, so as to evaluate the classification performance obtained after fusing different fusion sequences. The AUC ranges from 0.5 to 1; the closer the AUC is to 1.0, the better the classification performance.
When the average AUC of the 15 classification models included in a multi-classification model system reaches a specified threshold, the classification performance of that system is deemed to reach the preset condition. The specified threshold may be set by a developer according to actual requirements, or determined from the AUC of the single-sequence classification systems. For example, the specified threshold may be the maximum AUC over all single-sequence classification systems, e.g., 0.936 for the T2_FLAIR sequence in Table 3 below. From Table 3 it can be determined that the multi-classification model systems corresponding to the T1WI + T2_FLAIR fusion sequence and the CE_T1WI + T2WI + T2_FLAIR fusion sequence, both with AUC greater than 0.936, are first classification systems, and any one of them may be taken as the target classification system.
TABLE 3 List of Performance (AUC) of Single sequences and fusion sequences of the invention
[Table 3 is provided as an image in the original publication and is not reproduced here; it lists the AUC of each single sequence and each fusion sequence.]
As can be seen from Table 3 above, when multiple MRI sequences are fused according to the embodiments of the present invention, the AUC of a fusion sequence is generally higher than that of a single sequence. The best performance is achieved by the T1WI + T2_FLAIR fusion sequence, with an AUC of 0.946, higher than both the single T1WI sequence and the single T2_FLAIR sequence.
Preferably, in some other embodiments, after at least one first classification system whose classification performance reaches the preset condition is determined from the L classification systems, the second classification system with the best classification performance among them, such as the multi-classification model system corresponding to the above T1WI + T2_FLAIR fusion sequence, may be selected as the target classification system; accordingly, the T1WI + T2_FLAIR fusion sequence is the target fusion sequence.
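The selection of the target classification system described above can be sketched as follows. The two AUC values quoted in the text (0.936 for T2_FLAIR, 0.946 for T1WI + T2_FLAIR) are used; all other values are made-up placeholders.

```python
# Hypothetical per-system mean AUCs; only the two quoted values are from
# the text, the rest are placeholders for illustration.
system_auc = {
    "T1WI": 0.90,                    # placeholder
    "T2_FLAIR": 0.936,               # quoted in the text
    "T1WI+T2_FLAIR": 0.946,          # quoted in the text
    "CE_T1WI+T2WI+T2_FLAIR": 0.94,   # placeholder (> threshold)
}

# Threshold = best single-sequence AUC; "first" systems are the fusion
# systems that exceed it; the target is the best of those.
threshold = max(auc for name, auc in system_auc.items() if "+" not in name)
first_systems = {n: a for n, a in system_auc.items()
                 if "+" in n and a > threshold}
target_system = max(first_systems, key=first_systems.get)
```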
Alternatively, in the embodiment of the present invention, samples of single-shot brain metastasis are set as positive cases and samples of glioblastoma as negative cases, so the AUC of each classification model is calculated according to the following formula (9):
AUC = ( Σ_{i ∈ positive} rank_i − N1(N1 + 1)/2 ) / (N0 × N1)    (9)

where the summation runs over all samples confirmed to be single-shot brain metastases; rank_i is the rank of the i-th confirmed single-shot brain metastasis sample when all samples are sorted in ascending order of the probability scores output by the classification model; N0 is the number of confirmed glioblastomas; and N1 is the number of confirmed single-shot brain metastases.
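Formula (9) is the standard rank-based (Mann-Whitney) estimate of the AUC. A minimal implementation, assuming unique probability scores (no tie handling), is:

```python
# Rank-based AUC of formula (9): positives are single-shot brain metastases.
def rank_auc(scores, labels):
    """labels: 1 = single-shot brain metastasis (positive), 0 = glioblastoma."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):  # ascending score -> rank 1..N
        ranks[idx] = rank
    n1 = sum(labels)                  # N1: number of positives
    n0 = len(labels) - n1             # N0: number of negatives
    pos_rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (pos_rank_sum - n1 * (n1 + 1) / 2) / (n0 * n1)

# Perfectly separated scores give AUC = 1.0
print(rank_auc([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # -> 1.0
```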
(II) Test process
Preferably, after the T1WI + T2_FLAIR fusion sequence is determined as the target fusion sequence, 62 patients with pathologically confirmed glioblastoma or single-shot brain metastasis can be selected as test subjects, and the images obtained by scanning the 62 test subjects before surgery with the two MRI sequences included in the target fusion sequence, i.e., T1WI and T2_FLAIR, are acquired as multi-modal test images, so as to test the constructed target classification system. The 62 pathologically confirmed patients comprise 33 glioblastoma patients and 29 single-shot brain metastasis patients.
Then, each test subject is marked with its tumor type to obtain its label data. Next, for each MRI sequence, the 62 multi-modal test images obtained by magnetic resonance scanning of the 62 test subjects are acquired, and radiologic feature extraction is performed on each multi-modal test image to obtain 62 pieces of corresponding first characteristic test data; the label data of each piece of first characteristic test data corresponds to that of the test subject it belongs to. The radiologic feature extraction on the multi-modal test images follows the same process as that performed on the multi-modal sample images during training, which is not repeated here.
Further, according to the transformation matrix of the T1WI + T2_FLAIR fusion sequence, the 62 pieces of first characteristic test data under the T1WI sequence and the 62 pieces under the T2_FLAIR sequence are fused into 62 pieces of second characteristic test data. The 62 pieces of second characteristic test data are then input one by one into the target classification system, so that the 15 classification models of the target classification system classify each piece of second characteristic test data and output a probability score that it belongs to a given brain tumor type; the brain tumor type classification result of each test subject is determined from the probability scores output by the 15 classification models. Based on the classification results for the 62 pathologically confirmed test subjects, the classification performance (accuracy, AUC, sensitivity, and specificity) of the 3 classification models with the highest AUC in the target classification system was calculated, as shown in Table 4 below.
TABLE 4 Classification performance of the 3 classification models with the highest AUC in the target classification system

          Accuracy  AUC    Sensitivity  Specificity
Model 1   0.934     0.871  0.893        0.853
Model 2   0.918     0.839  0.857        0.824
Model 3   0.895     0.847  0.839        0.853
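The accuracy, sensitivity, and specificity reported in Table 4 follow the usual confusion-matrix definitions. The sketch below uses illustrative counts, chosen only so that they sum to the 62 test subjects (29 metastases, 33 glioblastomas) mentioned above; they are not the patent's actual results.

```python
# Confusion-matrix metrics; counts are illustrative assumptions.
def performance(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate on metastases
    specificity = tn / (tn + fp)   # true-negative rate on glioblastomas
    return accuracy, sensitivity, specificity

# 29 positives (tp + fn) and 33 negatives (tn + fp), as in the test cohort
acc, sens, spec = performance(tp=25, tn=28, fp=5, fn=4)
```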
The test results show that the tumor cell image classification method disclosed in the embodiment of the present invention can construct a multi-classification model system that makes full use of multi-sequence MRI image data of glioblastoma and single-shot brain metastasis (e.g., images of different MRI sequences such as T1WI, CE_T1WI, T2WI, and T2_FLAIR) for the final brain tumor type prediction. The characteristic data to be tested from different MRI sequences are fused by a fusion algorithm that weakens inter-class correlation to generate new characteristic data to be tested, and by passing the fused characteristic data through different classifiers and different feature selection algorithms, multiple classification models can be constructed for performance comparison. Fusing different MRI sequences, classifiers of different types, and different feature selection algorithms achieves more reliable classification results and improves the robustness of the classification system.
(III) Classification process
S60, fusing the K target first characteristic data of the target object to be detected into target second characteristic data according to the conversion matrix of the target fusion sequence corresponding to the target classification system, where the K target first characteristic data correspond one-to-one to the K magnetic resonance scanning sequences included in the target fusion sequence.
The target object to be detected is a patient for whom the brain tumor type is to be predicted. According to the K MRI sequences included in the target fusion sequence, K tumor cell images of the target object, obtained by scanning it with those K MRI sequences, are acquired. Radiologic feature extraction is then performed on each of the K tumor cell images to obtain K pieces of target first characteristic data (i.e., characteristic data to be detected). Finally, the K pieces of target first characteristic data are mapped into a new feature space according to the final transformation matrix W, obtained by splicing the transformation matrices w of each radiologic feature of the target fusion sequence, yielding one piece of target second characteristic data (i.e., new characteristic data to be detected).
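The per-feature fusion step described above can be sketched as follows; the shapes, the random data, and the exact form of the per-feature transform w are assumptions for illustration only.

```python
# Sketch: each radiologic feature f has its own transform w (one row of W)
# that maps the K per-sequence values of that feature to one fused value;
# the fused values are concatenated into the second characteristic data.
import numpy as np

K, F = 2, 100                        # 2 MRI sequences, 100 radiologic features
rng = np.random.default_rng(1)
W = rng.normal(size=(F, K))          # row f = assumed transform w for feature f
x = rng.normal(size=(K, F))          # x[k, f]: feature f under sequence k

# Fused second characteristic data: one value per radiologic feature
fused = np.einsum("fk,kf->f", W, x)  # fused[f] = W[f] . x[:, f]
assert fused.shape == (F,)
```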
For example, after the T1WI + T2_FLAIR fusion sequence is determined as the target fusion sequence, K equals 2, and the images obtained by scanning the target object to be detected with the two MRI sequences T1WI and T2_FLAIR are acquired as the tumor cell images to be detected. The subsequent processing of the tumor cell images to be detected follows the training and test processes described above and is not repeated here.
S70, inputting the target second characteristic data into the target classification system, and determining the target label data of the target object to be detected according to the output result of the target classification system.
In step S70, referring to the above embodiments, the target second characteristic data is input into the target classification system, so that the 15 classification models of the target classification system each process it and output a probability score that it belongs to a given brain tumor type. The target label data of the target object to be detected is then determined from the probability scores output by the 15 classification models; the target label data may be 0 or 1, denoting glioblastoma or single-shot brain metastasis respectively. In this way, the characteristic data to be detected of the target object is fused through the target fusion sequence and then classified by a high-performing target classification system, achieving accurate classification of the target object with high classification efficiency.
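A minimal sketch of turning the 15 models' probability scores into a final label. Simple score averaging with a 0.5 threshold is assumed here, since the text does not pin down the aggregation rule.

```python
# Assumed aggregation: average the per-model metastasis probabilities
# and threshold the mean (the patent may use a different rule).
def predict_label(probas, threshold=0.5):
    """probas: per-model probabilities that the sample is a metastasis."""
    mean_score = sum(probas) / len(probas)
    return 1 if mean_score >= threshold else 0   # 1 = metastasis, 0 = GBM

label = predict_label([0.8, 0.7, 0.9, 0.6, 0.75])  # -> 1
```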
As shown in fig. 3, the embodiment of the present invention discloses a classification apparatus for tumor cell images, which includes a combination unit 301, an acquisition unit 302, a first fusion unit 303, a training unit 304, a determination unit 305, a second fusion unit 306, and a classification unit 307, wherein,
a combining unit 301, configured to combine L fusion sequences according to the M magnetic resonance scanning sequences, where each fusion sequence includes at least two magnetic resonance scanning sequences, and a value of L is determined according to M;
an obtaining unit 302, configured to obtain, for each magnetic resonance scanning sequence, N first characteristic data samples corresponding to N sample objects one to one and tag data of each sample object, where the tag data is used to characterize a tumor type to which the sample object belongs, and the tumor types include at least two;
a first fusion unit 303, configured to calculate, for each fusion sequence, a transformation matrix from the first characteristic data samples of all sample objects; according to the conversion matrix, fusing first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to N sample objects one by one;
a training unit 304, configured to input the N second characteristic data samples of each fusion sequence and the tag data corresponding to each second characteristic data sample into a classifier for training, so as to construct L classification systems;
a determining unit 305, configured to use any first classification system, of the L classification systems, whose classification performance meets a preset condition as a target classification system;
a second fusion unit 306, configured to fuse K target first characteristic data of the target object to be detected into target second characteristic data according to a transformation matrix of a target fusion sequence corresponding to the target classification system; the K target first characteristic data correspond to K magnetic resonance scanning sequences contained in the target fusion sequence one by one;
the classifying unit 307 is configured to input the target second characteristic data into the target classification system, and determine target label data of the target object according to an output result of the target classification system.
In this embodiment, the acquiring unit 302 may include the following sub-units, not shown:
the marking subunit is used for marking the tumor type of each sample object to obtain the label data of each sample object;
the acquisition subunit is used for respectively acquiring N multi-mode sample images obtained by scanning N sample objects for each magnetic resonance scanning sequence;
and the extraction subunit is used for performing radiologic feature extraction on each multi-modal sample image to obtain N first characteristic data samples.
Further, the extraction subunit may be specifically configured to perform target area delineation on each layer of each multi-modal sample image according to an operation parameter input by a user, obtain a two-dimensional region of interest of each layer, and store the two-dimensional regions of interest of all layers as three-dimensional mask image data; and finally, performing radiology feature extraction on each mask image data to obtain N first characteristic data samples.
In this embodiment, each first characteristic data sample comprises a plurality of radiologic features; the first fusion unit 303 may include the following subunits, not shown:
a constructing subunit, configured to construct a multi-sequence feature matrix for each radiologic feature according to the first characteristic data samples of all sample objects, the multi-sequence feature matrix including feature matrices based on various magnetic resonance scanning sequences;
the calculation subunit is used for calculating a conversion matrix for each radiologic feature according to the multi-sequence feature matrix;
the fusion subunit is used for fusing the multi-sequence feature matrix of each radiologic feature according to the conversion matrix to obtain a first fusion vector of each radiologic feature;
the splicing subunit is used for splicing the first fusion vectors of all the radiologic features to obtain a second fusion vector; the second fusion vector includes N second characteristic data samples corresponding one-to-one to the N sample objects.
Further optionally, the computing subunit is specifically configured to: compute the inter-class correlation matrix and the covariance matrix of the multi-sequence feature matrix from the feature vectors of each class of sample objects in the multi-sequence feature matrix and the mean feature vector of all sample objects; compute a transposed matrix from the inter-class correlation matrix and the covariance matrix, and diagonalize it to obtain the eigenvector matrix of the diagonalization; extract the eigenvectors corresponding to the first r largest eigenvalues from the eigenvector matrix to form a new eigenvector matrix; compute the first r most significant eigenvalues of the inter-class correlation matrix and their corresponding eigenvectors from the inter-class correlation matrix, the covariance matrix, and the new eigenvector matrix; and finally compute the conversion matrix for each radiologic feature from the first r most significant eigenvalues of the inter-class correlation matrix and their corresponding eigenvectors.
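The eigen-decomposition route described above resembles a discriminant-correlation-analysis style construction: build a between-class matrix, diagonalize a smaller intermediate matrix, and recover the top-r eigenvectors from it. The sketch below is a hedged interpretation under that assumption; the patent's exact matrix definitions may differ.

```python
# Hedged sketch of a DCA-style transform computation (an interpretation,
# not the patented formulas).
import numpy as np

def transform_matrix(X, y, r=1):
    """X: (n_samples, d) per-feature multi-sequence matrix; y: class labels."""
    mu = X.mean(axis=0)
    classes = np.unique(y)
    # Between-class matrix Phi built from class means vs the overall mean
    Phi = np.stack([np.sqrt((y == c).sum()) * (X[y == c].mean(axis=0) - mu)
                    for c in classes])              # (n_classes, d)
    # Diagonalize the smaller (n_classes x n_classes) matrix Phi Phi^T
    vals, vecs = np.linalg.eigh(Phi @ Phi.T)
    idx = np.argsort(vals)[::-1][:r]                # top-r eigenvalues
    # Recover the corresponding eigenvectors of Phi^T Phi and normalize
    P = Phi.T @ vecs[:, idx]
    P = P / np.sqrt(vals[idx])
    return P                                        # (d, r) transform

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 4))
y = np.repeat([0, 1], 20)
W = transform_matrix(X, y, r=1)
```

The trick of diagonalizing Phi Phi^T instead of the d × d matrix Phi^T Phi mirrors the text's step of working through a smaller intermediate eigenvector matrix before recovering the top-r eigenvectors.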
Further, the classification system is a multi-classification model system; the training unit 304 described above may include the following, not shown, sub-units:
the feature selection subunit is used for performing, for each fusion sequence, feature selection on each second characteristic data sample through a plurality of feature selection algorithms to obtain a plurality of corresponding feature representative sets; the feature representative sets correspond one-to-one to the feature selection algorithms, and each feature representative set includes a specified number of feature representative samples;
and the training subunit is used for sequentially inputting the plurality of feature representative sets and the label data corresponding to each feature representative set into a plurality of classifiers for training, so as to construct the L multi-classification model systems.
In some other embodiments, the determining unit 305 is specifically configured to determine at least one first classification system, of the L classification systems, whose classification performance meets a preset condition; and determining a second classification system with the best classification performance from the at least one first classification system as a target classification system.
As shown in fig. 4, an embodiment of the present invention discloses an electronic device, which includes a memory 401 storing executable program codes and a processor 402 coupled to the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute the classification method of the tumor cell image described in the above embodiments.
The embodiment of the invention also discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the classification method of the tumor cell image described in each embodiment.
The above embodiments are provided to illustrate, reproduce and deduce the technical solutions of the present invention, and to fully describe the technical solutions, the objects and the effects of the present invention, so as to make the public more thoroughly and comprehensively understand the disclosure of the present invention, and not to limit the protection scope of the present invention.
The above examples are not intended to be exhaustive of the invention and there may be many other embodiments not listed. Any alterations and modifications without departing from the spirit of the invention are within the scope of the invention.

Claims (10)

1. A method for classifying a tumor cell image, comprising:
combining L fusion sequences according to the M magnetic resonance scanning sequences, wherein each fusion sequence comprises at least two magnetic resonance scanning sequences, and the value of L is determined according to M;
for each magnetic resonance scanning sequence, respectively acquiring N first characteristic data samples corresponding to N sample objects one to one and label data of each sample object, wherein the label data is used for representing tumor types to which the sample objects belong, and the tumor types at least comprise two;
for each fusion sequence, calculating a transformation matrix according to the first characteristic data samples of all sample objects; according to the conversion matrix, fusing first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to the N sample objects one by one;
inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to each second characteristic data sample into a classifier for training so as to construct L classification systems;
taking any one first classification system with the classification performance reaching a preset condition in the L classification systems as a target classification system;
fusing K target first characteristic data of the target object to be detected into target second characteristic data according to the conversion matrix of the target fusion sequence corresponding to the target classification system; the K target first characteristic data correspond to K magnetic resonance scanning sequences contained in the target fusion sequence one by one;
and inputting the target second characteristic data into the target classification system, and determining target label data of the target object to be detected according to an output result of the target classification system.
2. The method for classifying a tumor cell image according to claim 1, wherein the acquiring N first characteristic data samples corresponding to N sample subjects one-to-one for each of the magnetic resonance scan sequences and the label data of each of the sample subjects comprises:
marking the tumor type of each sample object to obtain label data of each sample object;
for each magnetic resonance scanning sequence, respectively acquiring N multi-mode sample images obtained by scanning N sample objects;
performing radiologic feature extraction on each multi-modal sample image to obtain N first characteristic data samples.
3. The method for classifying tumor cell images according to claim 2, wherein the performing a radiologic feature extraction on each of the multi-modal sample images to obtain N first characteristic data samples comprises:
according to the operation parameters input by a user, performing target region delineation on each layer of each multi-modal sample image to obtain a two-dimensional region of interest of each layer;
storing the two-dimensional interest areas of all layers of each multi-modal sample image as three-dimensional mask image data;
and performing radiologic feature extraction on each mask image data to obtain N first characteristic data samples.
4. A method of classifying an image of a tumor cell according to any one of claims 1 to 3, wherein each of said first characteristic data samples comprises a plurality of radiologic features; the calculating a transformation matrix according to the first characteristic data samples of all the sample objects, and fusing the first characteristic data samples belonging to the same sample object among different magnetic resonance scanning sequences according to the transformation matrix to obtain N second characteristic data samples corresponding to the N sample objects one to one, includes:
constructing a multi-sequence feature matrix for each of the radiologic features from the first characteristic data samples of all sample objects, the multi-sequence feature matrix comprising feature matrices based on various magnetic resonance scan sequences;
calculating a conversion matrix of each radiology characteristic according to the multi-sequence characteristic matrix;
fusing the multi-sequence feature matrix of each radiology feature according to the conversion matrix to obtain a first fusion vector of each radiology feature;
splicing the first fusion vectors of all the radiology features to obtain a second fusion vector; wherein the second fusion vector comprises N second characteristic data samples in one-to-one correspondence with the N sample objects.
5. The method for classifying a tumor cell image according to claim 4, wherein said computing a transformation matrix for each of said radiologic features based on said multi-sequence feature matrix comprises:
calculating to obtain an inter-class correlation matrix and a covariance matrix of the multi-sequence feature matrix according to the feature vectors of various sample objects in the multi-sequence feature matrix and the feature vector mean of all the sample objects;
calculating a transposed matrix of the correlation matrix between classes according to the correlation matrix between classes and the covariance matrix, and diagonalizing the transposed matrix to obtain a feature vector matrix during diagonalization;
extracting eigenvectors corresponding to the first r maximum eigenvalues from the eigenvector matrix to form a new eigenvector matrix;
according to the inter-class correlation matrix, the covariance matrix and the new eigenvector matrix, calculating the first r most important eigenvalues of the inter-class correlation matrix and corresponding eigenvectors thereof;
and calculating to obtain a conversion matrix of each radiology characteristic according to the first r most important characteristic values of the inter-class correlation matrix and the corresponding characteristic vectors.
6. The method of classifying a tumor cell image according to any one of claims 1 to 3, wherein the classification system is a multi-classification model system; inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to each second characteristic data sample into a classifier for training to construct L classification systems, including:
for each fusion sequence, respectively carrying out feature selection on each second characteristic data sample through a plurality of feature selection algorithms to obtain a plurality of corresponding feature representative sets; the feature representative sets correspond to the feature selection algorithm one by one, and each feature representative set comprises a specified number of feature representative samples;
and respectively and sequentially inputting the plurality of feature representative sets and the label data corresponding to the feature representative sets into a plurality of classifiers for training, so as to construct L multi-classification model systems.
7. The method for classifying a tumor cell image according to claim 1, wherein the step of using any one of the first classification systems having a classification performance meeting a predetermined condition as a target classification system comprises:
determining at least one first classification system with classification performance reaching a preset condition from the L classification systems; and determining a second classification system with the best classification performance from the at least one first classification system as a target classification system.
8. A classification device for tumor cell images, comprising:
a combination unit, configured to combine L fusion sequences according to M magnetic resonance scanning sequences, where each fusion sequence includes at least two magnetic resonance scanning sequences, and a value of L is determined according to M;
an obtaining unit, configured to obtain, for each magnetic resonance scanning sequence, N first characteristic data samples corresponding to N sample objects one to one and tag data of each sample object, where the tag data is used to characterize a tumor type to which the sample object belongs, and the tumor types include at least two;
a first fusion unit, configured to calculate, for each fusion sequence, a transformation matrix from first characteristic data samples of all sample objects; according to the conversion matrix, fusing first characteristic data samples belonging to the same sample object among various magnetic resonance scanning sequences to obtain N second characteristic data samples corresponding to the N sample objects one by one;
the training unit is used for inputting the N second characteristic data samples of each fusion sequence and the label data corresponding to each second characteristic data sample into a classifier for training so as to construct L classification systems;
a determining unit, configured to use any one of the first classification systems, of the L classification systems, whose classification performance meets a preset condition as a target classification system;
the second fusion unit is used for fusing the K target first characteristic data of the target object to be detected into target second characteristic data according to the conversion matrix of the target fusion sequence corresponding to the target classification system; the K target first characteristic data correspond to K magnetic resonance scanning sequences contained in the target fusion sequence one by one;
and the classification unit is used for inputting the target second characteristic data into the target classification system and determining target label data of the target object to be detected according to an output result of the target classification system.
9. An electronic device comprising a memory storing executable program code and a processor coupled to the memory; the processor calls the executable program code stored in the memory for performing the method of classifying a tumor cell image according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the method of classifying a tumor cell image according to any one of claims 1 to 7.
CN202111212018.4A 2021-10-18 2021-10-18 Method, device, equipment and storage medium for classifying tumor cell images Active CN113902724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111212018.4A CN113902724B (en) 2021-10-18 2021-10-18 Method, device, equipment and storage medium for classifying tumor cell images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111212018.4A CN113902724B (en) 2021-10-18 2021-10-18 Method, device, equipment and storage medium for classifying tumor cell images

Publications (2)

Publication Number Publication Date
CN113902724A true CN113902724A (en) 2022-01-07
CN113902724B CN113902724B (en) 2022-07-01

Family

ID=79192628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111212018.4A Active CN113902724B (en) 2021-10-18 2021-10-18 Method, device, equipment and storage medium for classifying tumor cell images

Country Status (1)

Country Link
CN (1) CN113902724B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220966A (en) * 2017-05-05 2017-09-29 郑州大学 Radiomics-based method for predicting the histopathological grade of cerebral gliomas
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Intelligent and automated delineation method for tumor radiotherapy target volumes and organs at risk
CN108537137A (en) * 2018-03-19 2018-09-14 安徽大学 Multimodal biometric fusion recognition method based on label discriminant correlation analysis
CN109886944A (en) * 2019-02-02 2019-06-14 浙江大学 White matter hyperintensity detection and localization method based on multi-channel maps
US20200000362A1 (en) * 2018-06-29 2020-01-02 Mayo Foundation For Medical Education And Research Systems, methods, and media for automatically diagnosing intraductal papillary mucinous neoplasms using multi-modal magnetic resonance imaging data
US20200104984A1 (en) * 2018-09-29 2020-04-02 Shanghai United Imaging Intelligence Co., Ltd. Methods and devices for reducing dimension of eigenvectors
CN111340135A (en) * 2020-03-12 2020-06-26 广州领拓医疗科技有限公司 Renal mass classification method based on random projection
CN111598864A (en) * 2020-05-14 2020-08-28 北京工业大学 Hepatocellular carcinoma differentiation assessment method based on multi-modal image contribution fusion
CN111599464A (en) * 2020-05-13 2020-08-28 吉林大学第一医院 Novel multimodal fusion aided-diagnosis method based on radiomics research of rectal cancer
CN111862079A (en) * 2020-07-31 2020-10-30 复旦大学附属肿瘤医院 Radiomics-based recurrence risk prediction system for high-grade serous ovarian cancer
CN112419247A (en) * 2020-11-12 2021-02-26 复旦大学 MR image brain tumor detection method and system based on machine learning
AU2021101379A4 (en) * 2021-03-17 2021-05-13 Bhatele, Kirti Raj Mr A system and method for classifying glioma using fused MRI sequences
CN113011462A (en) * 2021-02-22 2021-06-22 广州领拓医疗科技有限公司 Classification method and device for tumor cell images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIALIANG WU et al.: "A Multiparametric MR-Based RadioFusionOmics Model with Robust Capabilities of Differentiating Glioblastoma Multiforme from Solitary Brain Metastasis", Cancers *
M. HAGHIGHAT et al.: "Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition", IEEE Transactions on Information Forensics and Security *
TATEISHI M. et al.: "An initial experience of machine learning based on multi-sequence texture parameters in magnetic resonance imaging to differentiate glioblastoma from brain metastases", Journal of the Neurological Sciences *
HE Qiang et al.: "A prediction model for rectal complications in prostate cancer radiotherapy based on multimodal features and multi-classifier fusion", Journal of Southern Medical University *
HE Qiang: "Research on multimodal hierarchical fusion strategies and their application in radiomics", China Master's Theses Full-text Database, Medicine & Health Sciences *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332547A (en) * 2022-03-17 2022-04-12 浙江太美医疗科技股份有限公司 Medical object classification method and apparatus, electronic device, and storage medium
CN114332547B (en) * 2022-03-17 2022-07-08 浙江太美医疗科技股份有限公司 Medical object classification method and apparatus, electronic device, and storage medium
CN114722925A (en) * 2022-03-22 2022-07-08 北京安德医智科技有限公司 Lesion classification device and nonvolatile computer readable storage medium
CN116485282A (en) * 2023-06-19 2023-07-25 浪潮通用软件有限公司 Data grouping method, equipment and medium based on multidimensional index dynamic competition
CN116485282B (en) * 2023-06-19 2023-09-29 浪潮通用软件有限公司 Data grouping method, equipment and medium based on multidimensional index dynamic competition
CN116883995A (en) * 2023-07-07 2023-10-13 广东食品药品职业学院 Identification system of breast cancer molecular subtype

Also Published As

Publication number Publication date
CN113902724B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN113902724B (en) Method, device, equipment and storage medium for classifying tumor cell images
Zhuge et al. Brain tumor segmentation using holistically nested neural networks in MRI images
Bandi et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge
Wolterink et al. Automatic segmentation and disease classification using cardiac cine MR images
Oliver et al. A statistical approach for breast density segmentation
Xu et al. Texture-specific bag of visual words model and spatial cone matching-based method for the retrieval of focal liver lesions using multiphase contrast-enhanced CT images
Bouix et al. On evaluating brain tissue classifiers without a ground truth
US9383347B2 (en) Pathological diagnosis results assessment system, pathological diagnosis results assessment method, and pathological diagnosis results assessment device
CN110837572B (en) Image retrieval method and device, readable storage medium and electronic equipment
Dov et al. Thyroid cancer malignancy prediction from whole slide cytopathology images
García Seco de Herrera et al. Bag–of–colors for biomedical document image classification
Wang et al. Application of neuroanatomical features to tractography clustering
US20090052768A1 (en) Identifying a set of image characteristics for assessing similarity of images
Tiwari et al. Multi-modal data fusion schemes for integrated classification of imaging and non-imaging biomedical data
US11373422B2 (en) Evaluation assistance method, evaluation assistance system, and computer-readable medium
CN110916666B (en) Imaging omics feature processing method for predicting recurrence of hepatocellular carcinoma after surgical resection
CN109919912A (en) A kind of quality evaluating method and device of medical image
Paknezhad et al. Automatic basal slice detection for cardiac analysis
Jin et al. One map does not fit all: Evaluating saliency map explanation on multi-modal medical images
Dhinagar et al. Efficiently Training Vision Transformers on Structural MRI Scans for Alzheimer’s Disease Detection
Gangadharan et al. Comparative analysis of deep learning-based brain tumor prediction models using MRI scan
Su et al. A GAN-based data augmentation method for imbalanced multi-class skin lesion classification
Tyagi et al. [Retracted] Identification and Classification of Prostate Cancer Identification and Classification Based on Improved Convolution Neural Network
CN113011462B (en) Classification method and device for tumor cell images
Lin et al. Prostate lesion delineation from multiparametric magnetic resonance imaging based on locality alignment discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant