WO2024060842A1 - Classification model acquisition method, expression category determination method, apparatus, device and medium - Google Patents

Classification model acquisition method, expression category determination method, apparatus, device and medium

Info

Publication number
WO2024060842A1
WO2024060842A1 (application PCT/CN2023/110354)
Authority
WO
WIPO (PCT)
Prior art keywords
features
radiomics
feature
voxel
target
Prior art date
Application number
PCT/CN2023/110354
Other languages
English (en)
French (fr)
Inventor
胡玉兰
张翠芳
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Publication of WO2024060842A1

Classifications

    • G06V 10/811: fusion of classification results from classifiers operating on different input data (multi-modal recognition)
    • G06V 10/22: image preprocessing by selection of a specific region containing or referencing a pattern
    • G06V 10/40: extraction of image or video features
    • G06V 10/54: extraction of image or video features relating to texture
    • G06V 10/765: classification using rules for partitioning the feature space
    • G06V 10/771: feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G06V 10/806: fusion of extracted features
    • G16B 25/00: ICT specially adapted for hybridisation; ICT specially adapted for gene or protein expression
    • G16B 40/00: ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining
    • G06V 2201/032: recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Definitions

  • the present disclosure relates to the field of data processing technology, and in particular to a classification model acquisition method, expression category determination method, device, equipment and medium.
  • isocitrate dehydrogenase (IDH) mutation, O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, chromosome 1p/19q deletion, epidermal growth factor receptor (EGFR) amplification, telomerase reverse transcriptase (TERT) gene promoter (TERTp) mutation, H3F3A mutation, Notch pathways, miRNAs, etc.
  • IDH: isocitrate dehydrogenase
  • EGFR: epidermal growth factor receptor
  • TERT: telomerase reverse transcriptase
  • H3F3A mutation, Notch pathways, miRNAs, etc.
  • the expression pattern of the above genes can be used as a physiological parameter for tumor detection and prognosis.
  • This disclosure provides a method for obtaining a classification model.
  • the method includes:
  • the preset model is trained to obtain a classification model, which is used to predict the expression category of the target gene.
  • multiple radiomic features of the tumor area are obtained, including:
  • Feature extraction is performed on the first sub-region image, the second sub-region image and the third sub-region image respectively to obtain multiple radiomic features.
  • multiple radiomic features of the tumor area are obtained, including:
  • T1-weighted type, T2-weighted type,
  • contrast-enhanced T1-weighted type, and T2 fluid-attenuated inversion recovery (T2-FLAIR) type
  • the radiomics features corresponding to the extracted image samples of each type are combined to obtain multiple radiomics features.
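For illustration only, the per-modality combination described above can be sketched in Python; the dictionary layout and the modality-prefixed feature naming are assumptions of this sketch, not specified by the disclosure:

```python
def combine_modality_features(per_modality):
    """Concatenate radiomics features extracted from each MRI modality
    (e.g. T1-weighted, T2-weighted, contrast-enhanced T1, T2-FLAIR)
    into one feature set, prefixing names with the modality so that
    same-named features from different modalities stay distinct."""
    combined = {}
    for modality, feats in per_modality.items():
        for name, value in feats.items():
            combined[f"{modality}_{name}"] = value
    return combined
```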
  • the tumor area is a glioma area in the brain, and the method further includes:
  • the location information includes the brain area to which the glioma area belongs, and/or the location coordinates of the glioma area in the brain;
  • training samples are constructed, including:
  • the first screening factor includes an expression category label and a tumor grade label of the tumor region; multiple radiomics features are screened based on the first screening factor to obtain multiple radiomics feature samples, including:
  • radiomics features are screened to obtain multiple first radiomics features; where the first relationship value is used to characterize the degree of association between the radiomics feature and mutations in the target gene;
  • radiomics features are screened to obtain multiple second radiomics features; wherein the second relationship value is used to characterize the degree of association between the radiomics feature and the tumor grade;
  • Multiple first radiomics features and multiple second radiomics features are deduplicated to obtain multiple radiomics feature samples.
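The two screenings and the final deduplication might be sketched as follows; using the absolute Pearson correlation as the "relationship value" and a 0.3 cutoff are illustrative assumptions, since the disclosure does not fix the measure or the threshold:

```python
import numpy as np

def screen_by_correlation(features, labels, threshold=0.3):
    """Keep the feature columns whose absolute Pearson correlation with
    the label exceeds the threshold; returns the selected column indices."""
    f = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    l = (labels - labels.mean()) / (labels.std() + 1e-8)
    r = (f * l[:, None]).mean(axis=0)  # correlation of each column with the label
    return set(np.flatnonzero(np.abs(r) > threshold))

# Deduplication is then a set union over the two screenings, e.g. with
# hypothetical label arrays:
# selected = screen_by_correlation(X, mutation_labels) | screen_by_correlation(X, grade_labels)
```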
  • the method also includes:
  • all the radiomics features included in all the sample objects are screened based on a third screening factor to obtain supplementary radiomics feature samples; wherein the third screening factor includes clinical data corresponding to each of the multiple sample objects;
  • training samples are constructed, including:
  • a training sample is constructed based on multiple radiomics feature samples, multiple voxel feature samples, and multiple supplementary radiomics feature samples.
  • all radiomics features included in all sample objects are screened based on the third filtering factor to obtain supplementary radiomics feature samples, including:
  • the radiomics feature matrix includes multiple radiomics features corresponding to multiple sample objects
  • the clinical data matrix includes clinical data corresponding to multiple sample objects
  • the mutual information coefficient matrix includes the mutual information coefficient between each radiomics feature and the clinical data.
  • the mutual information coefficient is used to characterize the degree of association between the radiomics feature and the clinical data.
  • radiomics features included in the radiomics feature matrix are screened to obtain multiple supplementary radiomics feature samples.
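A simple histogram estimate of the mutual information coefficient between one radiomics feature and one clinical variable could look like the sketch below; the bin count is an illustrative choice, and finer estimators exist:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate (in nats) of the mutual information between a
    radiomics feature x and a clinical variable y; larger values mean a
    stronger association."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Computing this value for every (radiomics feature, clinical variable) pair yields the mutual information coefficient matrix described above.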
  • the second filtering factor includes an expression category label, and multiple voxel features are filtered based on the second filtering factor to obtain multiple voxel feature samples, including:
  • a linear regression model is used to screen out multiple voxel feature samples from multiple candidate voxel features.
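One plausible form of such a linear-regression screen is sketched below; ranking candidates by the magnitude of their univariate regression slope against the label, and the parameter k, are assumptions of this sketch:

```python
import numpy as np

def select_voxel_features(features, labels, k=10):
    """Standardize each candidate voxel feature, regress the label on it,
    and keep the k candidates with the largest absolute slopes."""
    z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    slopes = z.T @ (labels - labels.mean()) / len(labels)  # per-feature slope
    return np.argsort(-np.abs(slopes))[:k]
```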
  • before filtering multiple radiomics features based on the first screening factor to obtain multiple radiomics feature samples, the method further includes:
  • radiomics features are screened to obtain multiple radiomics feature samples, including:
  • multiple radiomic features of the tumor area are obtained, including:
  • the first-order statistical features, texture features and morphological features of the tumor area are combined to obtain multiple radiomic features.
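As a small illustration of the first-order part of this combination, the statistics below are computed from the intensities inside the tumor mask; the chosen subset of statistics is illustrative, not the full radiomics first-order family:

```python
import numpy as np

def first_order_features(volume, mask):
    """First-order intensity statistics of the voxels inside the tumor mask."""
    v = volume[mask > 0]
    return {"mean": float(v.mean()),
            "std": float(v.std()),
            "min": float(v.min()),
            "max": float(v.max()),
            "energy": float((v ** 2).sum())}
```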
  • a preset model is trained using training samples as input to obtain a classification model, including:
  • the preset model that meets the training end condition is used as the classification model.
  • the training end condition is that the classification model converges or reaches the preset number of updates.
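The training-end condition can be sketched with a stand-in model; the logistic-regression classifier, learning rate, and tolerance below are placeholders for the preset model and its hyperparameters, which the disclosure does not fix:

```python
import numpy as np

def train_classifier(X, y, lr=0.1, max_updates=500, tol=1e-6):
    """Gradient-descent training that stops when the loss converges
    (change below tol) or when the update budget is exhausted."""
    w = np.zeros(X.shape[1])
    prev = np.inf
    for step in range(max_updates):
        p = 1.0 / (1.0 + np.exp(-X @ w))                     # predictions
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if abs(prev - loss) < tol:                           # converged
            break
        prev = loss
        w -= lr * X.T @ (p - y) / len(y)                     # gradient step
    return w, step + 1
```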
  • the present disclosure also provides a method for determining the expression category of a target gene, the method comprising:
  • the expression category of the target gene of the subject to be tested is determined.
  • the method further includes:
  • the method further includes:
  • the fourth screening factor includes clinical data and/or tumor grading data of the subject to be tested;
  • the fourth screening factor includes clinical data and tumor grade data; based on the fourth screening factor, multiple target radiomic features are screened, including:
  • the target radiomics features screened out based on the third relationship value and the target radiomics features screened out based on the mutual information coefficient are deduplicated to obtain the screened out target radiomics features.
  • the present disclosure also provides a classification model acquisition device, which includes:
  • the feature acquisition module is used to acquire multiple radiomic features and multiple voxel features of the tumor area for the tumor area of the sample object;
  • the feature selection module is used to screen multiple radiomic features based on the first screening factor to obtain multiple radiomic feature samples; and to screen multiple voxel features based on the second screening factor to obtain multiple voxel feature samples.
  • the first filtering factor and the second filtering factor both include the expression category label of the target gene of the sample object;
  • the sample construction module is used to construct training samples based on multiple radiomics feature samples and multiple voxel feature samples;
  • the model training module is used to train the preset model using training samples as input to obtain a classification model.
  • the classification model is used to predict the expression category of the target gene.
  • the present disclosure also provides a device for determining the expression category of a target gene.
  • the device includes:
  • the feature acquisition module is used to acquire multiple target radiomic features and multiple target voxel features of the tumor area of the subject to be tested;
  • a feature input module is used to input multiple target radiomic features and multiple target voxel features into the classification model; wherein the classification model is obtained according to the acquisition method of the classification model;
  • the category determination module is used to determine the expression category of the target gene of the object to be tested based on the output of the classification model.
  • the present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the program is executed by the processor, the classification model acquisition method or the expression category determination method of the target gene is implemented.
  • the present disclosure also provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the classification model acquisition method or the expression category determination method of the target gene.
  • multiple radiomics features and multiple voxel features of the tumor area can be obtained for the tumor area of the sample object; the multiple radiomics features are screened based on the first screening factor to obtain multiple radiomics feature samples, and the multiple voxel features are screened based on the second screening factor to obtain multiple voxel feature samples; training samples are constructed based on the radiomics feature samples and voxel feature samples; the preset model is then trained with the training samples as input to obtain a classification model used to predict the expression category of the target gene.
  • both the first screening factor and the second screening factor include the expression category label of the target gene of the sample object
  • the expression category label can characterize the expression category of the target gene, such as mutation category, deletion status, etc., so that radiomics feature samples and voxel feature samples closely related to the expression category of the target gene can be screened out, with the expression category serving as a physiological parameter.
  • these selected radiomics feature samples and voxel feature samples are then used as training samples to train the preset model, so that the classification model can learn the correlation between the morphological characteristics of the tumor area and the expression category of the target gene; this improves the interpretability of the classification model and thus the accuracy of predicting the expression category of the target gene from images of the tumor area.
  • the training samples not only include the radiomics features of the tumor area, but also include the voxel features of the tumor area.
  • the radiomics features can reflect two-dimensional features such as the slice texture and shape of the tumor area
  • the voxel features can reflect three-dimensional features such as the spatial shape of the tumor area, which can increase the richness of the training samples and thereby improve the accuracy of the classification model.
  • Figure 1 schematically shows the overall flow diagram of the classification model acquisition process
  • Figure 2 schematically shows the step flow chart of the classification model acquisition method
  • Figure 3 schematically illustrates the complete process of extracting radiomics features of the present disclosure
  • Figure 4 schematically shows a flow chart of the steps for screening radiomics features based on clinical data
  • Figure 5 schematically shows a flow chart of the steps of the method for determining the expression category of a target gene
  • Figure 6 schematically shows a schematic structural framework diagram of the classification model acquisition device
  • Figure 7 schematically shows a schematic structural framework diagram of an expression category determination device for a target gene
  • Figure 8 schematically shows a structural block diagram of the electronic device of the present disclosure.
  • gliomas: brain gliomas
  • IDH genotype classification: mutant/wild type
  • 1p/19q chromosome deletion status: deletion/non-deletion
  • MGMT methylation status: methylated/unmethylated
  • in brain gliomas there are markers and signal transduction pathways that are involved in the occurrence and development of gliomas and have a significant impact on their proliferation, metastasis, and invasion.
  • TERT: telomerase reverse transcriptase
  • the TERT gene has no transcriptional activity in the vast majority of non-tumor cells, but TERT gene alterations, such as promoter mutations, gene translocations, and DNA amplification, are present in 73% of tumors. In other words, the expression categories of the above genes have a certain correlation with tumors.
  • this disclosure proposes a method for determining the expression category of target genes based on radiomics and neural networks; this method can determine the expression category non-invasively.
  • the core concept of this non-invasive gene detection is to use the radiomics features of MR segmentation images, combined with multiple feature screening methods, to obtain the screened radiomics features.
  • VBM: voxel-based morphometry
  • the tumors referred to in this disclosure may be common tumors such as brain glioma, liver tumor, breast tumor, thyroid tumor, lung tumor, melanoma, etc.
  • This disclosure mainly takes brain glioma as an example for explanation.
  • this disclosure proposes a method for determining the expression category of target genes based on radiomics and neural networks; it uses machine learning to build a classification model for predicting the expression categories of target genes. To give the classification model high interpretability, technical means are proposed to screen the radiomics features (combining multiple screening factors) and the voxel features (by correlation analysis between voxels and target genes), improving the effectiveness of the training samples.
  • Figure 1 shows a schematic diagram of the overall process of obtaining a classification model.
  • a three-dimensional image containing a tumor region can be preprocessed and then segmented to obtain an image of the tumor region.
  • feature extraction is performed on the image of the tumor region to obtain radiomics features
  • voxel feature calculation is performed on the preprocessed three-dimensional image to obtain voxel features.
  • feature screening is then performed: multiple screenings are applied to the radiomics features to obtain the screened radiomics features, and target gene correlation analysis is performed on the voxel features to obtain the screened voxel features.
  • the screened voxel features and radiomics features are then fused and fed into the classifier for training to obtain a classification model.
  • Figure 2 shows a flow chart of the steps of the classification model acquisition method of the present disclosure; as shown in Figure 2, the method may specifically include the following steps:
  • Step S201 For the tumor area of the sample object, obtain multiple radiomic features and multiple voxel features of the tumor area.
  • the sample object may refer to a tumor patient; multiple radiomics features and multiple voxel features of the tumor area may be extracted from the MRI image of the tumor area, the multiple radiomics features being obtained by feature extraction from the three-dimensional image of the tumor area.
  • the three-dimensional image is an MRI image. Since the tissue density of the human body is uneven, the imaged medium can be divided during scanning into many small cubes of relatively uniform density; such small cubes are called voxels, and a voxel is the basic unit of a three-dimensional image. The smaller the voxel, the clearer the image.
  • radiomics features can reflect two-dimensional features such as the slice texture and shape of the tumor area, while voxel features can reflect three-dimensional features such as its spatial shape. In this way, characteristics of the tumor in multiple dimensions can be obtained, fully reflecting the morphological characteristics of the tumor area and providing rich feature information.
  • the process of obtaining multiple radiomic features can be the following process:
  • the image frames generated by different MRI machines, with different slice thicknesses (range: 1 to 10) and pixel spacings at the tumor site, are resampled to a uniform slice thickness of 1.0 and a pixel spacing of [1, 1, 1] to obtain a three-dimensional image; the three-dimensional image is then normalized using its mean and standard deviation to obtain the final image.
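A minimal sketch of this resampling and normalization step, assuming nearest-neighbour interpolation (the disclosure does not specify the interpolation method) and spacings given in consistent units:

```python
import numpy as np

def resample_volume(vol, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Nearest-neighbour resampling of a 3-D volume to a new voxel spacing."""
    spacing = np.asarray(spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    new_shape = np.maximum(
        1, np.round(np.array(vol.shape) * spacing / new_spacing)).astype(int)
    # Map every output voxel back to the nearest source voxel.
    idx = [np.minimum((np.arange(n) * new_spacing[d] / spacing[d]).astype(int),
                      vol.shape[d] - 1) for d, n in enumerate(new_shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

def zscore_normalize(vol):
    """Normalize the volume with its own mean and standard deviation."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)
```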
  • the image segmentation module uses the UNet segmentation network model to achieve tumor area segmentation.
  • the training input of the UNet segmentation network model includes three-dimensional image data of four modalities, with segmentation mask images as labels.
  • the process of obtaining multiple voxel features can be the following process:
  • the image frames generated by different MRI machines, with different slice thicknesses (range: 1 to 10) and pixel spacings at the tumor site, are resampled to a uniform slice thickness of 1.0 and a pixel spacing of [1, 1, 1]
  • this example uses brain glioma; other types of tumors can be processed by analogy.
  • Step S202 Screen multiple radiomics features based on the first screening factor to obtain multiple radiomics feature samples; and screen multiple voxel features based on the second screening factor to obtain multiple voxel feature samples.
  • the first filtering factor and the second filtering factor both include the expression category label of the target gene of the sample object.
  • multiple radiomic features and multiple voxel features can be screened separately based on the expression category label of the target gene.
  • the expression category label of the target gene represents the expression category of the target gene of the sample object; for example, the expression categories of the target gene include mutant type and wild type.
  • the mutant type has a label of 1 and the wild type has a label of 0.
  • screening multiple radiomics features based on the expression category of the target gene may refer to screening out the radiomics features that are highly correlated with the expression category of the target gene among the multiple radiomics features.
  • Screening of multiple voxel features by the expression category of the target gene may refer to filtering out the voxel features that are highly correlated with the expression category of the target gene among the multiple voxel features.
  • for example, if the expression category of the target gene of sample object A is mutant type, the radiomics features that are highly correlated with the mutant type can be screened out from the multiple radiomics features, and the voxel features that are highly correlated with the mutant type can be screened out from the multiple voxel features. If the expression category of the target gene of sample object B is wild type, the radiomics features and voxel features that are highly correlated with the wild type can likewise be screened out. That is to say, for different sample objects, the radiomics features and voxel features that are highly correlated with the expression category can be screened out according to the expression category of the target gene of the sample object itself.
  • the correlation between the radiomics features and the expression category labels can be obtained by calculating the relationship value between the radiomics features and the expression category labels; likewise, the correlation between the voxel features and the expression category labels is obtained by calculating the relationship value between the voxel features and the expression category labels.
  • the first screening factor may also include other screening factors besides the expression category label of the target gene, such as clinical data, tumor grade, etc. That is to say, for the multiple radiomics features, one screening can be performed based on each screening factor to obtain the radiomics features filtered by that factor; the radiomics features screened by the multiple screening factors are then combined and deduplicated to obtain multiple screened radiomics feature samples.
  • Step S203: Construct training samples based on multiple radiomics feature samples and multiple voxel feature samples.
  • for each sample object, multiple corresponding radiomics feature samples and voxel feature samples can be screened out; the multiple radiomics feature samples and voxel feature samples selected for each sample object are then used as a sample group, and the expression category label of the sample object is used as the label of the sample group, which is used to construct the loss function in subsequent model training.
  • multiple sample objects thus yield multiple sample groups, and the multiple sample groups constitute the training samples. Each sample group includes the multiple radiomics feature samples and voxel feature samples corresponding to one sample object, together with the expression category label corresponding to that sample object.
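The sample-group construction can be sketched as below; the dictionary layout is an assumption of this sketch:

```python
def build_sample_group(radiomic_samples, voxel_samples, label):
    """Fuse one subject's screened feature samples into a single feature
    set and attach that subject's expression category label."""
    return {"features": list(radiomic_samples) + list(voxel_samples),
            "label": label}
```

The training samples are then the collection of such groups over all sample objects.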
  • Step S204 Using the training samples as input, train the preset model to obtain a classification model.
  • the classification model is used to predict the expression category of the target gene.
  • the multiple radiomics feature samples and voxel feature samples in each sample group can be fused and then input into the preset model for training, where fusion can refer to merging the multiple radiomics feature samples and voxel feature samples into one feature set; each feature sample in the feature set is input to the preset model.
  • the preset model can be a classifier; for example, the DenseNet network is used as a classifier. Each layer of DenseNet establishes a connection to every layer before it, so that the error signal can easily propagate to earlier layers; earlier layers can thus receive direct supervision from the final classification layer, which alleviates the vanishing-gradient phenomenon and helps avoid model overfitting.
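The dense connectivity pattern referred to here can be illustrated with a toy numpy block in which every layer receives the concatenation of the block input and all earlier layer outputs; the linear-plus-ReLU layers below stand in for DenseNet's convolutional layers and are an assumption of this sketch:

```python
import numpy as np

def dense_block(x, weights):
    """Each layer sees the concatenation of the block input and every
    earlier layer's output, which is the DenseNet connectivity pattern."""
    outputs = [x]
    for w in weights:
        inp = np.concatenate(outputs, axis=-1)    # connect to all previous layers
        outputs.append(np.maximum(0.0, inp @ w))  # linear + ReLU stand-in
    return np.concatenate(outputs, axis=-1)
```

With input width 4 and a growth of 2 per layer, three layers produce an output of width 4 + 3 * 2 = 10.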
  • after multiple training iterations, when the preset model converges or the loss value approaches its minimum, a classification model is obtained that can be used to predict the expression category of target genes in tumor patients.
  • radiomic features and voxel features in this disclosure can be expressed in the form of feature vectors.
  • the target genes in the present disclosure may include any one of: the TERT gene promoter, IDH genotyping (mutant/wild type), 1p/19q chromosome deletion status (deletion/non-deletion), and MGMT methylation status (methylated/unmethylated). It is only necessary to pre-label the expression category label of the target gene.
  • for the TERT gene promoter, the expression categories include a mutation category and a wild category
  • for IDH genotyping, the expression categories include a mutation category and a wild category
  • for the 1p/19q chromosome, the expression categories include a deletion category and a non-deletion category
  • for MGMT, the expression categories include a methylated category and an unmethylated category.
  • the target gene used for classification model training may be one or more.
  • the classification model can predict the expression category of one target gene.
  • the classification model can predict the expression categories of multiple genes at the same time.
  • the classification model can simultaneously output the mutation category of the TERT gene promoter, the deletion status of chromosome 1p/19q, and the methylation status of MGMT.
  • expression category labels can be prepared for each gene so that the classification model can simultaneously learn the associations between the voxel features and radiomics features and the multiple target genes.
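Per-gene labels for such simultaneous learning might be encoded as one binary entry per target gene; the gene ordering and the dictionary input below are assumptions of this sketch:

```python
def encode_gene_labels(status):
    """One binary label per target gene so a single classifier can learn
    several expression categories at once."""
    genes = ["TERTp", "IDH", "1p/19q", "MGMT"]  # hypothetical ordering
    return [int(status[g]) for g in genes]
```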
  • the expression category label can characterize the expression category of the target gene
  • the expression category of the target gene can be used as a physiological parameter to screen out the radiomics feature samples and voxel feature samples that are closely related to the expression category of the target gene; these selected samples are then used as training samples to train the preset model, so that the classification model can learn the correlation between the morphological characteristics of the tumor area and the expression category of the target gene. This improves the interpretability of the classification model and thereby the accuracy of predicting the expression category of the target gene from images of the tumor region.
  • the training samples include not only the radiomics features of the tumor area but also its voxel features; the radiomics features can reflect two-dimensional features such as slice texture and shape, and the voxel features can reflect three-dimensional features such as the spatial shape of the tumor area, improving the richness of the training samples and thereby the accuracy of the classification model.
  • two measures are proposed so that the extracted radiomics features can describe the morphological characteristics of the tumor area in more detail.
  • Measure A is to extract radiomics features from images of three sub-regions of the tumor area, which can reflect the morphological characteristics of each sub-region of the tumor area and describe the morphological characteristics of different sub-regions of the tumor.
  • Measure B is to perform fine-grained radiomics feature extraction on images of the tumor area; specifically, features that describe different morphological characteristics of the tumor can be extracted, such as features that describe the fineness of the tumor's surface on MRI slices and features that describe the appearance and shape of the tumor.
  • Measure A and Measure B can be combined, that is, multiple fine-grained radiomics feature extraction can be performed on the images of each sub-region.
  • the first sub-region image belonging to the tumor non-enhancing area, the second sub-region image belonging to the tumor enhancing area, and the third sub-region image belonging to the peritumoral edema area can be extracted from the image sample of the tumor area; feature extraction is then performed on the first sub-region image, the second sub-region image and the third sub-region image respectively to obtain multiple radiomics features.
  • the tumor area can be image segmented three times. Each image segmentation obtains an image of a sub-region. Then, feature extraction is performed on the image of each sub-region.
  • the sub-region includes the tumor non-enhancing area, the tumor enhancing area and the edema area around the tumor;
  • the tumor non-enhancing area refers to the non-enhancing tumor area within the tumor area, that is, the tumor core;
  • the tumor enhancing area refers to the enhancing area around the tumor core, which is composed of enhancing tumor voxels;
  • the edema area around the tumor refers to the edema area of the tumor.
  • feature extraction can be performed on the first sub-region image to obtain multiple first radiomics features belonging to the first sub-region image
  • feature extraction can be performed on the second sub-region image to obtain multiple second radiomics features belonging to the second sub-region image, and on the third sub-region image to obtain multiple third radiomics features belonging to the third sub-region image.
  • Multiple first radiomics features, multiple second radiomics features, and multiple third radiomics features are combined to obtain multiple radiomics features of the sample object.
  • the number of radiomics features extracted from different sub-region images may be the same; for example, if N radiomics features are extracted from each sub-region image, a total of 3N radiomics features are extracted from the three sub-region images.
  • the tumor core area, the enhanced tumor core area and the entire tumor area can be divided, so that when extracting radiomics features, feature extraction is also performed separately on the areas corresponding to different tumor cell expression levels, thereby achieving fine-grained feature extraction of the tumor area.
  • the extracted radiomics features can fully reflect the morphological characteristics of the tumor area, as well as the morphological characteristics of tumor cells under different expression levels, thus enhancing the richness of the training samples.
  • Measure B: obtain the wavelet image and LoG image of the image sample of the tumor area; perform multi-scale feature extraction on the image sample, wavelet image and LoG image of the tumor area respectively to obtain first-order statistical features, texture features and morphological features of the tumor area; and combine the first-order statistical features, texture features and morphological features of the tumor area to obtain multiple radiomics features.
  • the wavelet image may refer to the image obtained by performing wavelet transformation on the image sample of the tumor area
  • the LoG image may refer to the edge image of the tumor area obtained by applying the Laplacian of Gaussian (LoG) operator, that is, taking derivatives of the image sample of the tumor area to finally obtain an edge image of the tumor area.
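As an illustration only (not the implementation disclosed herein), the two derived images described above can be sketched with a LoG filter from SciPy and a single-level 2-D Haar decomposition in plain NumPy standing in for the wavelet transform:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_image(img, sigma=1.0):
    """Laplacian-of-Gaussian filtering; emphasizes edges of the tumor region."""
    return gaussian_laplace(img.astype(float), sigma=sigma)

def haar_wavelet_2d(img):
    """Single-level 2-D Haar decomposition: returns (LL, LH, HL, HH) sub-bands."""
    a = img.astype(float)
    # pairwise averages/differences along columns, then rows
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh

slice_img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for one MRI slice
edge = log_image(slice_img, sigma=1.0)
ll, lh, hl, hh = haar_wavelet_2d(slice_img)
```

In a radiomics pipeline, first-order and texture features would then be extracted from `edge` and from the wavelet sub-bands, in addition to the original slice.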
  • each scale corresponds to a dimension, which can specifically include: first-order statistical dimension, texture dimension and morphological dimension.
  • first-order statistical features, texture features and morphological features can be extracted from image samples in the tumor area
  • first-order statistical features and texture features can be extracted from the wavelet image
  • first-order statistical features and texture features can be extracted from the LoG image.
  • the first-order statistical characteristics of the tumor area can be extracted from the first-order statistical dimension.
  • the first-order statistical features can be feature values calculated from the pixel gray-level distribution of the image sample, including morphological and histogram image features, and can reflect the overall morphological characteristics of the tumor area.
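A minimal sketch of such first-order statistics, computed from the gray-level distribution with NumPy; the disclosure does not enumerate the exact feature set, so the features below are common illustrative examples:

```python
import numpy as np

def first_order_features(img):
    """First-order statistics computed from the gray-level distribution."""
    x = np.asarray(img, dtype=float).ravel()
    mean = x.mean()
    std = x.std()
    # skewness and kurtosis from central moments (population definitions)
    m3 = ((x - mean) ** 3).mean()
    m4 = ((x - mean) ** 4).mean()
    skew = m3 / std ** 3 if std > 0 else 0.0
    kurt = m4 / std ** 4 if std > 0 else 0.0
    # histogram (Shannon) entropy, a common first-order feature
    counts, _ = np.histogram(x, bins=16)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log(p)).sum()
    return {"mean": mean, "std": std, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy, "energy": (x ** 2).sum()}

feats = first_order_features(np.arange(16).reshape(4, 4))
```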
  • the morphological features of the tumor area can be extracted from the morphological dimension.
  • the morphological features can be feature values calculated based on the contour lines of the tumor area in the image sample, which can reflect the shape and structure of the tumor in the tumor area.
  • the texture features of the tumor area can be extracted from the texture dimension.
  • the texture features in the image samples of the tumor area can be extracted using statistical methods, geometric methods and model methods.
  • the statistical methods can include the GLCM method (spatial gray-level co-occurrence matrix), the semivariogram method, the texture spectrum method, etc.
  • the model method can include the random field model method.
  • texture features can be used to describe the surface properties of tumors, such as the thickness and density of the surface.
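For illustration, a tiny single-offset GLCM and two common texture features derived from it (contrast and homogeneity) can be computed by hand; this is a simplified stand-in for the GLCM method named above, not the disclosed implementation:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, normalized to probabilities.
    Single direction, not symmetrized, for simplicity."""
    a = np.asarray(img)
    h, w = a.shape
    m = np.zeros((levels, levels), dtype=float)
    for i in range(h - dy):
        for j in range(w - dx):
            m[a[i, j], a[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast and homogeneity derived from a normalized GLCM."""
    idx = np.arange(p.shape[0])
    di = idx[:, None] - idx[None, :]              # gray-level differences
    contrast = (p * di ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(di))).sum()
    return contrast, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4, dx=1, dy=0)               # horizontal neighbor offset
contrast, homogeneity = glcm_features(P)
```

Low contrast and high homogeneity indicate a smooth surface texture; coarse textures push the mass of the GLCM away from its diagonal.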
  • the first-order statistical features and texture features of the wavelet image can be extracted.
  • the wavelet image is obtained after denoising the image sample.
  • the extracted texture features and first-order statistical features contain fewer noise points; in this way, they can be combined with the texture features and first-order statistical features extracted from the original image samples to obtain multi-dimensional features of the image samples at different scales.
  • the LoG image can outline the morphological structure of the tumor area, so that different forms of feature extraction can be performed on the edge image to obtain the first-order statistical features and texture features of the LoG image. In this way, the morphological features and surface properties of the edge lines of the tumor area can be extracted.
  • with Measure B, feature extraction can be performed on the original image samples, denoised image samples and edge images of the tumor area according to different feature methods, which can be understood as extracting first-order statistical features and texture features under different points of concern; these can be used to describe the morphological characteristics of the tumor area from different observation angles, and can thus reflect the morphological characteristics of the tumor region in an all-round way, thereby enhancing the richness of the training samples.
  • Measure A and Measure B can be used in combination; that is, for each sub-region image, feature extraction in multiple dimensions can be performed to obtain multiple radiomics features of each sub-region image in each dimension. In this way, not only are features extracted from the fine-grained areas of the tumor area, but radiomics features are also extracted from different observation angles for each fine-grained area.
  • the image samples of the tumor area of the sample object may include a T1-weighted image (T1w), a T2-weighted image (T2w), a contrast-enhanced T1-weighted image (T1WCE), and a T2 fluid-attenuated inversion recovery image (T2-FLAIR); when performing feature extraction, feature extraction can be performed on all four types of images.
  • T1w T1-weighted image
  • T2w T2-weighted image
  • T1WCE contrast-enhanced T1-weighted image
  • T2-FLAIR T2 fluid-attenuated inversion recovery image
  • multiple types of image samples of the tumor area of the target object can be obtained, including the T1-weighted type, T2-weighted type, contrast-enhanced T1-weighted type and T2 fluid-attenuated inversion recovery type; feature extraction is performed on the image samples of each type respectively, and the radiomics features extracted from the image samples of each type are combined to obtain multiple radiomics features.
  • T1-weighted T1
  • T1c contrast-enhanced T1-weighted
  • T2-weighted T2
  • FLAIR fluid attenuated inversion recovery
  • images of different patterns are each called a modality, and the modalities can provide complementary information for analyzing different glioma partitions.
  • T2 and FLAIR highlight peritumoral edema, specifying the entire tumor.
  • T1 and T1c highlight tumors without peritumoral edema, designated as the tumor core.
  • an area of high-intensity enhancement in the tumor core, called the enhancing tumor core, can also be observed in T1c. Therefore, applying multimodal images can reduce information uncertainty and improve the accuracy of clinical diagnosis and segmentation.
  • radiomic features can be extracted for image samples of each modality.
  • images of three sub-regions can be extracted from the image samples of each modality (Measure A above), and multi-dimensional feature extraction (Measure B above) can then be performed on the image samples of each sub-region to obtain multiple radiomics features of the image samples in each modality; the multiple radiomics features extracted from the image samples of the above four modalities are then combined to obtain the multiple radiomics features of the sample object.
  • FIG. 3 shows a complete schematic diagram of the radiomics feature extraction process disclosed in the present invention. As shown in Figure 3, the inputs include image samples of the T1-weighted type (T1W image), T2-weighted type (T2W image), contrast-enhanced T1-weighted type (T1WCE image) and T2 fluid-attenuated inversion recovery type (FLAIR image). It should be noted that the image samples of these four modalities are all three-dimensional images.
  • image segmentation is performed on the image samples of each modality to obtain the first sub-region image, the second sub-region image and the third sub-region image. Since the image samples of different modalities highlight the characteristics of different sub-regions of the tumor area, feature extraction is performed on the segmented image samples of each sub-region.
  • measure B can be used to obtain the first-order statistical features, morphological features, and texture features of each sub-region image, the first-order statistical features and texture features of the wavelet image, and the first-order statistical features of the LoG image. and texture features.
  • GLCM gray level co-occurrence matrix
  • GLRLM gray level run length matrix
  • GLSZM gray level size zone matrix
  • NGTDM neighboring gray tone difference matrix
  • 1318 radiomic features can be obtained for the image sample of each sub-region in each modality.
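The per-sub-region count above (1318 features), taken across the three sub-regions and four modalities, is consistent with the 15816-feature total mentioned later in the disclosure; the bookkeeping can be checked directly:

```python
# Feature-count bookkeeping implied by the text: 1318 radiomics features per
# sub-region image, 3 sub-regions per modality, 4 modalities per sample object.
features_per_subregion = 1318
subregions_per_modality = 3
modalities = 4

features_per_modality = features_per_subregion * subregions_per_modality
total_features = features_per_modality * modalities
```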
  • screening multiple radiomics features in the present disclosure may refer to screening the multiple radiomics features multiple times, where the screening factor used for each screening may be different; in this way, the radiomics features obtained by the multiple screenings can be combined and deduplicated to obtain the screened radiomics feature samples. Next, how to screen the radiomics features and the voxel features is introduced respectively.
  • the screening of radiomics features may include screening the multiple radiomics features of a single sample object, or screening all radiomics features of all sample objects. When screening the multiple radiomics features of a single sample object, screening can be based on the expression category label and the tumor grade label of the tumor area; when screening all radiomics features of all sample objects, screening can be based on clinical data.
  • the first screening factor may include an expression category label and a tumor grade label of the tumor area, where the tumor grade label is used to identify the tumor grade of the sample object; the tumor grade refers to the histological grade of the tumor and is used to indicate the malignancy of the tumor.
  • the expression category labels and tumor grade labels of the present disclosure are obtained after the sample subject is diagnosed with a tumor disease, that is, they can be the expression category and tumor grade of the target gene of the diagnosed patient.
  • the radiomics features can be screened based on the expression category labels and the tumor grade labels respectively, and the screened radiomics features are deduplicated to obtain the radiomics feature samples.
  • multiple radiomics features can be screened based on the first relationship value between each radiomics feature and the expression category label to obtain multiple first radiomics features; multiple radiomics features are screened based on the second relationship value between each radiomics feature and the tumor grade label to obtain multiple second radiomics features; then, the multiple first radiomics features and the multiple second radiomics features are combined and deduplicated to obtain multiple radiomics feature samples.
  • the first relationship value is used to characterize the degree of association between radiomics features and mutations of the target gene; the second relationship value is used to characterize the degree of association between radiomics features and tumor grade.
  • the Mann-Whitney U test method can be used to select features that are significantly related to the TERT status label.
  • the Mann-Whitney U test method is used to evaluate whether two sampled groups are likely to come from the same population.
  • the sample objects are divided into two groups x1 and x2 according to the TERT status category labels 0 and 1.
  • the TERT expression category label of the sample object in x1 is 0, and the TERT expression category label of the sample object in x2 is 1.
  • the Mann-Whitney U test is then calculated between groups x1 and x2 to obtain the p-value of each radiomics feature of each sample object, that is, the first relationship value; this first relationship value can reflect the degree of correlation between the radiomics features of the sample object and the status label, and the radiomics features with p-value < 0.05 are retained, thereby completing the first screening of the radiomics features.
  • the Mann-Whitney U test method can also be used to select features that are significantly related to the tumor grade (high-grade glioma / low-grade glioma) label. The tumor grade label can include 0 and 1, where 0 represents a high-grade tumor and 1 represents a low-grade tumor, and the tumor grade labels 0 and 1 divide the sample objects into two groups x3 and x4: the tumor grade label of the sample objects in x3 is 0, and that of the sample objects in x4 is 1. Then, the Mann-Whitney U test is calculated between groups x3 and x4 to obtain the p-value of each radiomics feature of each sample object, that is, the second relationship value, which can reflect the degree of association between the radiomics features of the sample object and the tumor grade label. The radiomics features with p-value < 0.05 are then retained to complete the second screening of the radiomics features.
  • the first screening (screening based on the expression category label of the target gene) and the second screening (screening based on the tumor grade label) are independent of each other.
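A hedged sketch of the two independent Mann-Whitney U screenings and their deduplicated union, using `scipy.stats.mannwhitneyu`; the group sizes, labels and feature values below are toy stand-ins:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def screen_features(X, labels, p_threshold=0.05):
    """Keep column indices of features whose Mann-Whitney U p-value between
    the label-0 and label-1 groups is below p_threshold."""
    labels = np.asarray(labels)
    g0, g1 = X[labels == 0], X[labels == 1]
    kept = []
    for j in range(X.shape[1]):
        _, p = mannwhitneyu(g0[:, j], g1[:, j], alternative="two-sided")
        if p < p_threshold:
            kept.append(j)
    return kept

# toy data: feature 0 separates the groups, feature 1 is identical in both
labels = np.array([0] * 10 + [1] * 10)
f0 = np.concatenate([np.arange(1.0, 11.0), np.arange(11.0, 21.0)])
f1 = np.tile(np.arange(1.0, 11.0), 2)
X = np.column_stack([f0, f1])

kept_by_status = screen_features(X, labels)   # first screening (e.g. TERT status label)
kept_by_grade = screen_features(X, labels)    # second, independent screening (e.g. tumor grade label)
selected = sorted(set(kept_by_status) | set(kept_by_grade))  # combine and deduplicate
```

In practice the two screenings use different label vectors (expression category vs. tumor grade); the same label is reused here only to keep the toy example short.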
  • the variance selection method can also be used to select radiomics features with better discriminating ability; then, based on the first screening factor, multiple radiomics feature samples are selected from the radiomics features with better discriminating ability.
  • the variance corresponding to each radiomics feature can be determined, and the radiomics features whose variance is greater than the second variance threshold are retained to obtain multiple candidate radiomics features. Afterwards, multiple candidate radiomic features are screened based on the first screening factor to obtain multiple radiomic feature samples.
  • the variance selection method can select features useful for distinguishing samples, that is to say, it can select radiomics features with strong feature expression. Specifically, if the variance of a radiomics feature is close to 0, it means that the sample subjects have basically no difference in this radiomics feature, and this radiomics feature is not useful for distinguishing between sample subjects.
  • a threshold can be set to retain the radiomics features whose variance is greater than the threshold, and obtain the radiomics features selected by the variance selection method.
  • z-score standardization can be applied to the radiomics features selected by the variance selection method, and the data-standardized radiomics features are then screened based on the first screening factor to obtain the radiomics feature samples.
  • the extracted radiomics features that do not distinguish well between samples can be eliminated first, so that the retained radiomics features all have strong feature expression; this improves the feature expression intensity of the subsequently screened radiomics features, reduces the computation required for subsequent feature screening, and improves the efficiency of feature screening.
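The variance pre-screening and z-score standardization described above might look like this in NumPy; the threshold and data are illustrative:

```python
import numpy as np

def variance_filter(X, threshold):
    """Keep features (columns) whose variance exceeds the threshold."""
    keep = X.var(axis=0) > threshold
    return X[:, keep], np.flatnonzero(keep)

def zscore(X):
    """Column-wise z-score standardization (zero mean, unit variance)."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)  # guard against constant columns
    return (X - mu) / sd

# columns 1 and 2 are (near-)constant, so they carry no discriminating power
X = np.array([[1.0, 5.0, 0.0],
              [2.0, 5.0, 0.0],
              [3.0, 5.0, 0.0],
              [4.0, 5.0, 0.0]])
X_kept, kept_idx = variance_filter(X, threshold=1e-8)
X_std = zscore(X_kept)
```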
  • all radiomics features are screened based on the third screening factor to obtain supplementary radiomics feature samples; where the third screening factor includes clinical data corresponding to multiple sample subjects.
  • the training sample can be constructed based on multiple radiomics feature samples, multiple voxel feature samples, and multiple supplementary radiomics feature samples.
  • the training samples include multiple sample groups, and each sample group includes the multiple radiomics feature samples, the multiple voxel feature samples and the multiple supplementary radiomics feature samples corresponding to one sample object.
  • the radiomics feature samples selected from multiple sample objects constitute the radiomics feature subset 1
  • the multiple supplementary radiomics feature samples constitute the radiomics feature subset 2
  • the voxel feature samples constitute the voxel feature subset
  • radiomics feature subset 1, radiomics feature subset 2 and the voxel feature subset are used as training samples for training the preset model.
  • Figure 4 shows a schematic flow chart of screening radiomics features based on clinical data. As shown in Figure 4, the specific steps may include the following:
  • Step S401 Obtain a radiomics feature matrix and a clinical data matrix; wherein, the radiomics feature matrix includes multiple radiomics features corresponding to multiple sample objects, and the clinical data matrix includes clinical data corresponding to multiple sample objects.
  • Step S402 Obtain the mutual information coefficient matrix based on the radiomics feature matrix and the clinical data feature matrix.
  • the mutual information coefficient matrix includes the mutual information coefficient between each radiomic feature and clinical data.
  • the mutual information coefficient is used to characterize the degree of correlation between the radiomic feature and clinical data.
  • Step S403 Based on the mutual information coefficient matrix, screen all radiomics features included in the radiomics feature matrix to obtain multiple supplementary radiomics feature samples.
  • the radiomics features of multiple sample objects can be screened based on their respective corresponding clinical data, and a supplementary radiomics feature sample selected for each sample object is obtained.
  • mutual information can be used to measure the correlation between radiomics features and clinical data features. Specifically, assuming N sample objects, the radiomics feature matrix is A N*M and the clinical data feature matrix is B N*S .
  • M is the number of radiomics features extracted for each sample object; for example, in the example shown above, 15816 radiomics features are extracted, so M is 15816. Of course, after the variance selection method is used to select some radiomics features, M is the number of radiomics features selected by the variance selection method. S is the number of clinical data features for each sample object.
  • clinical data includes age, gender, systolic blood pressure, diastolic blood pressure, disease history, malignant tumor history, medication information, surgical conditions, survival time, etc.
  • each type of data in the clinical data can be converted into clinical data features, and S represents the number of clinical data features.
  • the process of converting clinical data into clinical data features can be as follows:
  • one-hot encoding can also be performed on each type of clinical data to obtain the clinical data characteristics corresponding to each type of clinical data.
  • the mutual information coefficient matrix can be obtained based on the radiomics feature matrix and the clinical data feature matrix. Specifically, the mutual information coefficient between each radiomics feature and each clinical data feature can be calculated to obtain the mutual information coefficient matrix C S*M; in this way, each row of the coefficient matrix C S*M represents the mutual information coefficients between one clinical data feature and the M radiomics features. Then, the K best radiomics features can be selected from each row, and the features selected from the S rows (one set selected per clinical data feature) are combined and deduplicated to obtain multiple supplementary radiomics feature samples.
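A sketch of the mutual-information screening under stated assumptions: mutual information is estimated here with a simple 2-D histogram (the disclosure does not specify an estimator), rows of C correspond to clinical data features, and the top-K radiomics features per row are unioned and deduplicated:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram-based mutual information estimate between two 1-D variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def supplementary_features(radiomics, clinical, k):
    """C[s, m] = MI(clinical feature s, radiomics feature m); take the top-k
    radiomics features per clinical feature, then union and deduplicate."""
    S, M = clinical.shape[1], radiomics.shape[1]
    C = np.zeros((S, M))
    for s in range(S):
        for m in range(M):
            C[s, m] = mutual_info(clinical[:, s], radiomics[:, m])
    selected = set()
    for s in range(S):
        selected.update(np.argsort(C[s])[::-1][:k].tolist())
    return sorted(selected)

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)        # toy clinical feature
informative = age * 2.0 + 1.0         # radiomics feature tied to the clinical data
noise = rng.uniform(0, 1, 200)        # radiomics feature unrelated to it
R = np.column_stack([noise, informative])
picked = supplementary_features(R, age[:, None], k=1)
```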
  • the multiple supplementary radiomics feature samples can be grouped according to the sample objects they belong to, obtaining the supplementary radiomics feature samples corresponding to each sample object; the supplementary radiomics feature samples corresponding to each sample object can then be placed into the sample group of that sample object as training samples.
  • the radiomics features related to the clinical data features of each sample object can be screened out; that is to say, radiomics features closely related to the patient's condition can be screened out based on the clinical data and used for model training, improving the interpretability of the model.
  • the corresponding supplementary radiomics feature samples can be screened out through the mutual information matrix; therefore, compared with individually calculating the correlation between the radiomics features and clinical data features of each sample object, the supplementary radiomics feature samples corresponding to multiple sample objects can be screened out at one time, improving screening efficiency.
  • the radiomics features and clinical data features of different sample objects are included in a unified matrix space for calculation; therefore, when selecting the supplementary radiomics feature samples of one sample object, the correlations between the clinical data features of other sample objects and the radiomics features can also be used, thereby constructing a medical association between clinical data features and radiomics features based on multiple sample objects. This improves the accuracy of the selected supplementary radiomics feature samples, that is, it screens out radiomics feature samples that can truly reflect the clinical data.
  • the filtering factors used to filter voxel features can include expression category labels.
  • a primary screening of the voxel features can be performed first to filter out voxel features with strong feature expression; then, among the voxel features with strong feature expression, screening is performed based on the expression category label, reducing both the computation of the classification model on the voxel features and the computation required when screening the voxel features.
  • the variance of each voxel feature can be obtained, and the voxel features with variance greater than the first variance threshold are retained to obtain multiple candidate voxel features; with the expression category label as the prediction label and the multiple candidate voxel features as input, a linear regression model is used to screen multiple voxel feature samples out of the multiple candidate voxel features.
  • the linear regression model can be a LASSO regression model.
  • the LASSO regression L1 regularization algorithm can be used for voxel feature selection.
  • the multiple candidate voxel features are used as the LASSO input features, and the predicted label of the LASSO is the expression category label of the target gene, thus obtaining a set of voxel feature samples selected by LASSO.
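An illustrative stand-in for the LASSO selection step, implemented as plain coordinate descent with soft-thresholding in NumPy rather than any particular library; features whose weights remain nonzero under the L1 penalty are the selected voxel features:

```python
import numpy as np

def soft_threshold(r, t):
    return np.sign(r) * max(abs(r) - t, 0.0)

def lasso_select(X, y, alpha=0.1, n_iter=100):
    """Coordinate-descent LASSO; returns weights and the indices of nonzero
    (i.e. selected) voxel features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, alpha * n) / (X[:, j] @ X[:, j])
    return w, np.flatnonzero(np.abs(w) > 1e-10)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))     # 5 candidate voxel features
y = 3.0 * X[:, 0]                    # label driven by feature 0 only (toy setup)
w, selected = lasso_select(X, y, alpha=0.1)
```

The L1 penalty drives the weights of uninformative features exactly to zero, which is what makes LASSO usable as a feature-selection step.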
  • the location of the tumor area on the corresponding human body part can also be determined, so that the location information of this location is used as a supplementary feature of the training samples to train the classification model.
  • the position information corresponding to the glioma area can be determined; and the position characteristics corresponding to the position information can be obtained.
  • the location information includes the brain area to which the glioma area belongs, and/or the location coordinates of the glioma area in the brain.
  • training samples may be constructed based on position features, multiple radiomics feature samples, and multiple voxel feature samples.
  • the location of the glioma region in the brain can be obtained. , that is, obtaining the position information corresponding to the glioma area.
  • the location of the glioma area can be determined based on the MRI image of the brain, and then the location information can be determined based on the brain area to which the location belongs.
  • the location information may include the brain area to which the glioma area belongs or the position coordinates of the glioma area in the brain, or it may include both the brain area to which the glioma area belongs and the position coordinates of the glioma area in the brain.
  • the position coordinates may refer to the center coordinates of the glioma in the brain, that is, the spatial location of the glioma in the brain.
  • the brain region may include the cerebrum, cerebellum, and brainstem; in another example, the brain is subdivided into 116 ROIs (regions of interest) according to the AAL (Anatomical Automatic Labeling) atlas.
  • AAL Anatomical Automatic Labeling
  • the brain area can include 116 ROI areas.
  • the position coordinates can be expressed in numerical form.
  • the brain area to which the glioma area belongs can be expressed by labels indicating whether the glioma area belongs to each of the above brain areas.
  • for example, if the brain regions include the cerebrum, cerebellum and brainstem, a tumor belonging to a region is expressed as 1 and one not belonging to it as 0; assuming the glioma is distributed in the cerebellum and brainstem, the location feature is expressed as [0, 1, 1].
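The [0, 1, 1] encoding above can be sketched as a small hypothetical helper; the area list and the optional `center` coordinates are assumptions for illustration:

```python
# Hypothetical helper: encode which brain areas a glioma occupies as a 0/1
# vector, optionally appending its center coordinates as numeric features.
BRAIN_AREAS = ["cerebrum", "cerebellum", "brainstem"]

def location_features(occupied_areas, center=None):
    feats = [1 if area in occupied_areas else 0 for area in BRAIN_AREAS]
    if center is not None:
        feats.extend(center)  # e.g. (x, y, z) center coordinates in the brain
    return feats

loc = location_features({"cerebellum", "brainstem"})
loc_with_coords = location_features({"cerebellum"}, center=(12.5, -3.0, 40.0))
```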
  • the location features of the region to which the tumor belongs can be fused in, providing a reference to the tumor's location for predicting the expression category of the target gene; based on the correlation between the tumor's location and the expression category of the target gene, the expression category of the target gene can be predicted more accurately.
  • the process of using training samples to train the preset model to obtain the classification model can be as follows:
  • the preset model that meets the training end condition is used as the classification model.
  • the training end condition is that the classification model converges or reaches the preset number of updates.
  • the sample group of each sample object, that is, the radiomics feature samples, voxel feature samples and supplementary radiomics feature samples screened out for each sample object, is fed to the preset model.
  • the preset model performs processing at different scales based on radiomics feature samples, voxel feature samples and supplementary radiomics feature samples, thereby predicting the expression category of the target gene of the sample object, that is, the predicted expression category.
  • a loss function is constructed based on the expression category label and predicted expression category of the sample object, and the loss value of the preset model is calculated. Based on the loss value, the parameters of the preset model are continuously updated, and the classification model is finally obtained.
  • the convergence of the classification model may mean that the loss value is less than or equal to the preset loss value, or it may mean that the loss value no longer becomes smaller.
  • the preset update times can be set according to actual needs.
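A toy stand-in for the training loop described above: a logistic-regression "preset model" whose parameters are updated from a loss value until convergence (loss at or below a preset value) or until the preset number of updates is reached. The model, learning rate and data are illustrative, not the disclosed architecture:

```python
import numpy as np

def train(X, y, lr=0.1, max_updates=500, loss_eps=1e-3):
    """Update parameters from the loss until convergence or the preset
    number of updates is reached; returns weights, bias, loss history."""
    w = np.zeros(X.shape[1])
    b = 0.0
    losses = []
    for _ in range(max_updates):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted category probability
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        losses.append(loss)
        if loss <= loss_eps:                     # convergence condition
            break
        g = p - y                                # gradient of the cross-entropy loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b, losses

X = np.array([[0.0], [1.0], [2.0], [3.0]])       # toy 1-D feature
y = np.array([0.0, 0.0, 1.0, 1.0])               # toy expression category labels
w, b, losses = train(X, y)
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```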
  • with the classification model acquisition method of the present disclosure, since the training samples are radiomics feature samples and voxel feature samples that are highly correlated with the expression category of the target gene and are screened based on the expression category label of the target gene, the classification model learns the correlation between the radiomics features and voxel features and the expression categories of the target genes, improving the interpretability of the classification model.
  • after the preset model is trained into the above classification model, since the training samples fed to the preset model are effective radiomics feature samples and voxel feature samples selected based on the clinical data and the expression categories of the target genes, the preset model can learn, during training, the association between these mutation-related features and the expression category of the target gene; this improves the medical interpretability of the classification model, and the classification model can predict the expression category of the target gene based on radiomics features and voxel features.
  • the present disclosure provides a method for determining the expression category of a target gene.
  • a flow chart of the steps of the method for determining the expression category of a target gene is shown, as shown in FIG. 5 , including the following steps:
  • Step S501 Obtain multiple target radiomic features and multiple target voxel features of the tumor area of the subject to be tested.
  • Step S502 Input multiple target radiomic features and multiple target voxel features into a classification model; wherein the classification model is obtained according to the classification model acquisition method described in the above embodiment;
  • Step S503 Based on the output of the classification model, determine the expression category of the target gene of the object to be tested.
  • the subject to be tested may refer to a patient whose expression category of the target gene is to be determined, wherein an MRI image of the tumor area of the subject to be tested may be obtained.
  • the MRI images may include the images of the four modalities described in the above embodiment, namely the T1-weighted image, the T2-weighted image, the contrast-enhanced T1-weighted image and the T2 fluid-attenuated inversion recovery image.
  • each modality can be obtained from The image is segmented into three sub-areas, and features of each sub-area are extracted at multiple scales. Through feature extraction at multiple scales, the first-order statistical features and morphology described in the above embodiment can be obtained.
  • Features and texture features to obtain multiple target radiomic features corresponding to the object to be tested.
  • the process of obtaining the target voxel features of the subject to be tested may refer to the process of obtaining the voxel features of the sample object in the above embodiment, and will not be repeated here.
  • the multiple target radiomic features and multiple target voxel features can then be input into the classification model. Since the classification model has learned, through the acquisition process of the above embodiment, the correlation between radiomic and voxel features and the expression categories of the target gene, it has the ability to predict the expression category of the target gene from radiomic features and voxel features.
  • the output of the classification model is the probability that the target gene belongs to each expression category, that is, the probability of belonging to the mutant type and the probability of belonging to the wild type.
  • the category with a probability greater than the preset probability value can be used as the expression category of the target gene.
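The decision rule described above can be sketched as follows; the probability values, the category names, and the 0.5 preset probability value are illustrative assumptions, not values taken from the disclosure:

```python
import numpy as np

# Hypothetical output of the classification model for one subject:
# probabilities of the two expression categories
# (index 0 = wild type, index 1 = mutant).
probs = np.array([0.23, 0.77])
preset_probability = 0.5  # the "preset probability value" from the text

categories = ["wild type", "mutant"]
# the category whose probability exceeds the preset value is reported
if probs.max() > preset_probability:
    predicted = categories[int(np.argmax(probs))]
else:
    predicted = None  # no category exceeds the threshold
```

With the illustrative probabilities above, the mutant category exceeds the preset value and is returned as the expression category.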
  • With the method for determining the expression category of the target gene of this embodiment, since the preset model learns during training the relationship between the mutation-related features of the target gene and the expression category of the target gene, the medical interpretability of the classification model is improved, and the classification model can predict the expression category of the target gene from imaging features and voxel features. In practical applications, the target imaging features and target voxel features of the subject to be tested can therefore be input directly into the classification model to obtain an accurate expression category of the target gene, without invasive detection methods such as pyrosequencing or PCR to determine the mutation status, which can greatly relieve the patient's suffering.
  • When using the classification model to predict the expression category of the target gene of the subject to be tested, the radiomic features can also be screened. On the one hand, target radiomic features and target voxel features with strong feature expression can be screened out; on the other hand, target radiomic features that are strongly correlated with the tumor grade and clinical data of the subject to be tested can be screened out.
  • the process of screening out target radiomics features with strong feature expression is as follows: the variance of each target radiomics feature is determined, and the target radiomics features whose variance is greater than the second variance threshold are retained; the variance of each target voxel feature is determined, and the target voxel features whose variance is greater than the first variance threshold are retained.
  • the retained target radiomic features and the retained target voxel features can be input to the classification model.
  • the first variance threshold may be the same as the first variance threshold in the above embodiment, and the second variance threshold may be the same as the second variance threshold in the above embodiment.
  • the variance selection method can also be used to select target voxel features and target radiomic features with strong expressive capabilities.
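As a minimal sketch of this variance-based screening step (the feature matrix and the threshold value are invented for illustration, and scikit-learn's `VarianceThreshold` stands in for whatever implementation is actually used):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Toy matrix: rows = subjects, columns = target radiomic features.
# Column 0 is constant, column 1 is nearly constant, column 2 varies.
X = np.array([[1.0, 0.50, 3.0],
              [1.0, 0.51, 1.0],
              [1.0, 0.49, 5.0]])

selector = VarianceThreshold(threshold=0.01)  # drop low-variance features
X_kept = selector.fit_transform(X)            # features that are retained
kept_idx = selector.get_support(indices=True) # indices of retained features
```

Only the feature whose variance exceeds the (illustrative) threshold survives and would be fed to the classification model.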
  • the process of screening out target radiomic features that are strongly associated with the tumor grade and clinical data of the subject to be tested can be as follows:
  • a fourth screening factor corresponding to the subject to be tested is obtained, and multiple target radiomic features are screened based on the fourth screening factor; wherein the fourth screening factor includes clinical data and/or tumor grading data of the subject to be tested.
  • the fourth screening factor may include clinical data, tumor grading data, or both clinical data and tumor grading data.
  • When the fourth screening factor includes only clinical data, the mutual information coefficient between each target radiomics feature and the clinical data can be calculated, and the target radiomics features fed to the classification model are then selected based on the mutual information coefficient; when it includes only the tumor grade, the third relationship value between each target radiomics feature and the tumor grade can be calculated, and the target radiomics features fed to the classification model are then selected based on the third relationship value.
  • When the fourth screening factor includes both clinical data and the tumor grade, a third relationship value between each target radiomics feature and the tumor grade label, and a mutual information coefficient between each target radiomics feature and the clinical data, can be determined;
  • the multiple target radiomic features are screened based on the third relationship value, and screened based on the mutual information coefficient; then the target radiomic features selected based on the third relationship value and those selected based on the mutual information coefficient are deduplicated to obtain the screened target radiomic features.
  • In this way, the target radiomics features closely related to the clinical data of the subject to be tested and those closely related to the tumor grade of the subject to be tested are selected from the multiple target radiomics features. Since the classification model already has the ability to determine the correlation between radiomic and voxel features and the expression category of the target gene, what is fed into the classification model are target radiomic features closely related to the clinical data and tumor grade of the subject to be tested, which can further improve the accuracy of predicting the expression category of the target gene.
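The two-pronged screening and deduplication described above might be sketched as follows on synthetic data; mutual information stands in for the mutual information coefficient and the absolute Pearson correlation stands in for the third relationship value, both of which are assumptions about formulas the disclosure does not specify:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 200
# Toy data: 4 candidate target radiomic features per subject.
X = rng.normal(size=(n, 4))
clinical = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=n)  # tied to feature 0
tumor_grade = (X[:, 1] > 0).astype(float)                 # tied to feature 1

# Screen by mutual information with the clinical variable (keep top 1)...
mi = mutual_info_regression(X, clinical, random_state=0)
by_clinical = set(np.argsort(mi)[-1:])

# ...and by a simple relationship value (|Pearson r|) with tumor grade.
r = np.array([abs(np.corrcoef(X[:, j], tumor_grade)[0, 1]) for j in range(4)])
by_grade = set(np.argsort(r)[-1:])

# Deduplicate the two selections, as the text describes.
selected = sorted(int(i) for i in by_clinical | by_grade)
```

The union of the two screenings, with duplicates removed, is what would then be fed into the classification model.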
  • the present disclosure also provides a classification model acquisition device.
  • Figure 6 shows a schematic structural framework diagram of the classification model acquisition device of the present disclosure.
  • the device may specifically include the following modules:
  • the feature acquisition module 601 is used to acquire multiple radiomic features and multiple voxel features of the tumor area of the sample object;
  • the feature selection module 602 is used to screen multiple radiomic features based on the first screening factor to obtain multiple radiomic feature samples, and to screen multiple voxel features based on the second screening factor to obtain multiple voxel feature samples; wherein the first screening factor and the second screening factor both include the expression category label of the TERT gene of the sample object;
  • the sample construction module 603 is used to construct training samples based on multiple radiomics feature samples and multiple voxel feature samples;
  • the model training module 604 is used to train a preset model with the training samples as input to obtain a classification model, where the classification model is used to predict the expression category of the target gene.
  • the feature acquisition module 601 includes:
  • an image segmentation unit, configured to extract, from the image sample of the tumor area, a first sub-region image belonging to the non-enhanced tumor area, a second sub-region image belonging to the enhanced tumor area, and a third sub-region image belonging to the peritumoral edema area;
  • the feature extraction unit is used to extract features from the first sub-region image, the second sub-region image and the third sub-region image respectively to obtain multiple radiomic features.
  • the feature acquisition module 601 includes:
  • a multi-type image acquisition unit, used to acquire multiple types of image samples of the tumor area, the multiple types including the T1-weighted type, T2-weighted type, contrast-enhanced T1-weighted type and T2 fluid-attenuated inversion recovery type;
  • a feature extraction unit, used to extract features from each type of image sample respectively;
  • a feature combination unit, used to combine the radiomics features corresponding to each type of image sample to obtain multiple radiomics features.
  • when the tumor area is a glioma area in the brain, the device further includes:
  • a position information acquisition module used to determine the position information corresponding to the glioma region based on the image sample of the glioma region;
  • a location feature acquisition module is used to obtain location features corresponding to the location information; where the location information includes the brain area to which the glioma area belongs, and/or the location coordinates of the glioma area in the brain;
  • the sample construction module 603 is specifically used to construct training samples based on location features, multiple radiomics feature samples, and multiple voxel feature samples.
  • the first screening factor includes an expression category label and a tumor grade label of the tumor region;
  • the feature selection module 602 includes a radiomics feature screening unit, and the radiomics feature screening unit includes:
  • a first screening subunit, used to screen multiple radiomics features based on the first relationship value between each radiomics feature and the expression category label to obtain multiple first radiomics features; wherein the first relationship value is used to characterize the degree of association between a radiomic feature and the mutation of the target gene;
  • a second screening subunit, used to screen multiple radiomics features based on the second relationship value between each radiomics feature and the tumor grade label to obtain multiple second radiomics features; wherein the second relationship value is used to characterize the degree of association between a radiomic feature and the tumor grade;
  • the deduplication unit is used to deduplicate a plurality of first radiomics features and a plurality of second radiomics features to obtain multiple radiomics feature samples.
  • the apparatus further includes:
  • a radiomics feature re-screening module, used to screen all radiomics features included in all sample objects based on the third screening factor to obtain supplementary radiomics feature samples; wherein the third screening factor includes the clinical data corresponding to each of the multiple sample objects.
  • the sample construction module 603 is specifically used to construct training samples based on multiple radiomics feature samples, multiple voxel feature samples, and multiple supplementary radiomics feature samples.
  • the radiomics feature re-screening module includes:
  • a matrix creation unit is used to obtain a radiomics feature matrix and a clinical data matrix; wherein, the radiomics feature matrix includes multiple radiomics features corresponding to multiple sample objects, and the clinical data matrix includes multiple corresponding radiomics features of multiple sample objects. clinical data;
  • a mutual information coefficient determination unit, used to obtain a mutual information coefficient matrix based on the radiomics feature matrix and the clinical data matrix; the mutual information coefficient matrix includes the mutual information coefficient between each radiomics feature and the clinical data, and the mutual information coefficient is used to characterize the degree of association between a radiomic feature and the clinical data;
  • the supplementary screening unit is used to screen all radiomics features included in the radiomics feature matrix based on the mutual information coefficient matrix to obtain multiple supplementary radiomics feature samples.
  • when the second screening factor includes the expression category label, the feature selection module 602 includes a voxel feature screening unit, and the voxel feature screening unit includes:
  • the first variance determination unit is used to obtain the variance of each voxel feature, retain the voxel features whose variance is greater than the first variance threshold, and obtain multiple candidate voxel features;
  • the voxel screening unit is used to use the expression category label as the prediction label, multiple candidate voxel features as input, and use a linear regression model to screen out multiple voxel feature samples from the multiple candidate voxel features.
  • the device also includes:
  • the second variance determination unit is used to determine the variance corresponding to each radiomics feature, and retain the radiomics features whose variance is greater than the second variance threshold to obtain multiple candidate radiomics features;
  • the radiomics feature screening unit is used to screen multiple candidate radiomics features based on the first screening factor to obtain multiple radiomics feature samples.
  • the feature acquisition module includes an imaging omics feature extraction unit
  • the imaging omics feature extraction unit specifically includes:
  • a multi-dimensional feature extraction subunit, used to obtain the wavelet image and LoG image of the image sample of the tumor area, and to perform multi-scale feature extraction on the image sample, the wavelet image and the LoG image respectively, to obtain the first-order statistical features, texture features and morphological features of the tumor area;
  • a multi-dimensional feature combination subunit, used to combine the first-order statistical features, texture features and morphological features of the tumor area to obtain multiple radiomic features.
  • model training module includes:
  • An input unit is used to input training samples into the classification model to obtain the predicted expression category of the target gene output by the classification model;
  • a loss determination unit used to determine the loss value of the classification model based on the predicted expression category and the expression category label
  • Parameter update unit used to update the parameters of the classification model based on the loss value
  • a classification model acquisition unit, used to take the model that satisfies the training end condition as the classification model, where the training end condition is that the model converges or reaches a preset number of updates.
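A minimal sketch of the training loop these units implement (predict, compute a loss from the predicted vs. labelled expression category, update parameters, stop on convergence or a preset number of updates); logistic regression on synthetic data stands in for the actual classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                 # fused feature samples
w_true = np.array([1.5, -2.0, 0.0, 0.5, 1.0])
y = (X @ w_true > 0).astype(float)            # expression-category labels

w = np.zeros(5)
lr, max_updates = 0.5, 300
for step in range(max_updates):               # preset number of updates
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted category probability
    grad = X.T @ (p - y) / len(y)             # gradient of cross-entropy loss
    w -= lr * grad                            # parameter update
    if np.linalg.norm(grad) < 1e-3:           # convergence condition
        break

accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

The loop mirrors the input unit, loss determination unit, parameter update unit and end-condition check described above.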
  • the present disclosure also provides a device for determining the expression category of a target gene.
  • a schematic framework diagram of the device for determining the expression category of a target gene of the present disclosure is shown, as shown in Figure 7.
  • the device may specifically include the following modules:
  • the feature acquisition module 701 is used to acquire multiple target radiomic features and multiple target voxel features of the tumor area of the subject to be tested;
  • the feature input module 702 is used to input multiple target radiomic features and multiple target voxel features into a classification model; wherein the classification model is obtained according to the acquisition method of the classification model in the above embodiment;
  • the category determination module 703 is used to determine the expression category of the target gene of the object to be tested based on the output of the classification model.
  • the device also includes:
  • the first radiomics feature screening module is used to determine the variance corresponding to each target radiomics feature, and retain the target radiomics features whose variance is greater than the second variance threshold;
  • the voxel feature screening module is used to determine the variance of each voxel feature and retain the target voxel features whose variance is greater than the first variance threshold;
  • the feature input module 702 is specifically used to input the retained target radiomic features and target voxel features into the classification model.
  • the device also includes:
  • a screening factor acquisition module is used to obtain a fourth screening factor corresponding to the subject to be tested.
  • the fourth screening factor includes clinical data and/or tumor grading data of the subject to be tested;
  • a second radiomics feature screening module, used to screen the multiple target radiomic features based on the fourth screening factor;
  • the feature input module 702 is specifically configured to input multiple filtered target radiomic features and multiple target voxel features into the classification model.
  • the fourth screening factor includes clinical data and tumor grade data;
  • the second radiomics feature screening module includes:
  • a numerical determination unit for determining a third relationship value between each target radiomics feature and the tumor grade label, and a mutual information coefficient between each target radiomics feature and clinical data
  • a screening unit configured to screen multiple target radiomic features based on a third relationship value, and to screen multiple target radiomic features based on mutual information coefficients;
  • a deduplication unit, used to deduplicate the target radiomic features screened out based on the third relationship value and the target radiomic features screened out based on the mutual information coefficient, to obtain the screened target radiomic features.
  • the present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the classification model acquisition method or the method for determining the expression category of the target gene.
  • FIG. 8 a structural block diagram of an electronic device 800 according to an embodiment of the present disclosure is shown.
  • the electronic device provided by an embodiment of the present disclosure can be used to execute the classification model acquisition method or the method for determining the expression category of the target gene.
  • the electronic device 800 may include a memory 801, a processor 802, and a computer program stored on the memory and executable on the processor.
  • the processor 802 is configured to execute the above-described methods.
  • the electronic device 800 may further include an input device 803, an output device 804, and an image acquisition device 805.
  • the image acquisition device 805 can acquire an image of the tumor area (including an image sample and an image of the tumor area of the subject to be tested); the input device 803 can then obtain the image acquired by the image acquisition device 805, and the image can be processed by the processor 802. The processing may specifically include extracting radiomic features and voxel features, screening the radiomic features and voxel features, constructing training samples from the screened features, and training a preset model.
  • the output device 804 can output the classification model, or can output the expression category results output by the classification model.
  • the memory 801 may include volatile memory and non-volatile memory, where the volatile memory can be understood as a random access memory used to store and save data.
  • Non-volatile memory refers to a computer memory in which the stored data will not disappear when the current is turned off.
  • the computer program for the classification model acquisition method or the method for determining the expression category of the target gene of the present disclosure can be stored in both the volatile memory and the non-volatile memory, or in either of the two.
  • the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the classification model acquisition method or the method for determining the expression category of the target gene.
  • the present disclosure also provides a computer program product, including a computer program/instructions, which, when executed by a processor, implements the classification model acquisition method or the method for determining the expression category of the target gene.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • the present disclosure may be implemented by means of hardware comprising several different elements and by means of a suitably programmed computer. In the element claim enumerating several means, several of these means may be embodied by the same item of hardware.
  • the use of the words first, second, third, etc. does not indicate any order. These words can be interpreted as names.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biotechnology (AREA)
  • Molecular Biology (AREA)
  • Genetics & Genomics (AREA)
  • Bioethics (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Image Processing (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A classification model acquisition method, an expression category determination method, an apparatus, a device and a medium. The classification model acquisition method includes: for a tumor area of a sample object, acquiring multiple radiomic features and multiple voxel features of the tumor area; screening the multiple radiomic features based on a first screening factor to obtain multiple radiomic feature samples, and screening the multiple voxel features based on a second screening factor to obtain multiple voxel feature samples, where the first screening factor and the second screening factor both include an expression category label of a target gene of the sample object; constructing training samples based on the multiple radiomic feature samples and the multiple voxel feature samples; and training a preset model with the training samples as input to obtain a classification model, the classification model being used to predict the expression category of the target gene. Detection of the expression category of the target gene is realized based on the classification model, providing reliable physiological parameters for accurate tumor detection.

Description

Classification model acquisition method, expression category determination method, apparatus, device and medium
This application claims priority to Chinese patent application No. 202211140564.6, filed with the China National Intellectual Property Administration on September 19, 2022 and entitled "Classification model acquisition method, expression category determination method, apparatus, device and medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of data processing, and in particular to a classification model acquisition method, an expression category determination method, an apparatus, a device and a medium.
Background
Common tumors, especially gliomas, involve many molecular markers and signaling pathways, such as isocitrate dehydrogenase (IDH) mutation, O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, chromosome 1p/19q deletion, epidermal growth factor receptor (EGFR) amplification, telomerase reverse transcriptase (TERT) gene promoter (TERTp) mutation, H3F3A mutation, the Notch pathway, miRNAs, and so on.
The expression type of the above genes can serve as a physiological parameter for tumor detection and prognosis.
Summary
The present disclosure provides a classification model acquisition method, the method including:
for a tumor area of a sample object, acquiring multiple radiomic features and multiple voxel features of the tumor area;
screening the multiple radiomic features based on a first screening factor to obtain multiple radiomic feature samples; and screening the multiple voxel features based on a second screening factor to obtain multiple voxel feature samples; where the first screening factor and the second screening factor both include an expression category label of a target gene of the sample object;
constructing training samples based on the multiple radiomic feature samples and the multiple voxel feature samples;
training a preset model with the training samples as input to obtain a classification model, the classification model being used to predict the expression category of the target gene.
In an optional example, acquiring multiple radiomic features of the tumor area includes:
extracting, from an image sample of the tumor area, a first sub-region image belonging to the non-enhanced tumor area, a second sub-region image belonging to the enhanced tumor area, and a third sub-region image belonging to the peritumoral edema area;
performing feature extraction on the first sub-region image, the second sub-region image and the third sub-region image respectively, to obtain multiple radiomic features.
In an optional example, acquiring multiple radiomic features of the tumor area includes:
acquiring multiple types of image samples of the tumor area, the multiple types including a T1-weighted type, a T2-weighted type, a contrast-enhanced T1-weighted type and a T2 fluid-attenuated inversion recovery type;
performing feature extraction on each type of image sample respectively;
combining the radiomic features corresponding to each type of image sample to obtain multiple radiomic features.
In an optional example, the tumor area is a glioma area in the brain, and the method further includes:
determining, based on an image sample of the glioma area, position information corresponding to the glioma area;
acquiring position features corresponding to the position information; where the position information includes the brain region to which the glioma area belongs, and/or the position coordinates of the glioma area in the brain;
constructing training samples based on the multiple radiomic feature samples and the multiple voxel feature samples includes:
constructing the training samples based on the position features, the multiple radiomic feature samples and the multiple voxel feature samples.
In an optional example, the first screening factor includes the expression category label and a tumor grade label of the tumor area; screening the multiple radiomic features based on the first screening factor to obtain multiple radiomic feature samples includes:
screening the multiple radiomic features based on a first relationship value between each radiomic feature and the expression category label to obtain multiple first radiomic features; where the first relationship value is used to characterize the degree of association between a radiomic feature and the mutation of the target gene;
screening the multiple radiomic features based on a second relationship value between each radiomic feature and the tumor grade label to obtain multiple second radiomic features; where the second relationship value is used to characterize the degree of association between a radiomic feature and the tumor grade;
deduplicating the multiple first radiomic features and the multiple second radiomic features to obtain multiple radiomic feature samples.
In an optional example, multiple sample objects are included, and the method further includes:
for all radiomic features included in all sample objects, screening all the radiomic features based on a third screening factor to obtain supplementary radiomic feature samples; where the third screening factor includes the clinical data corresponding to each of the multiple sample objects;
constructing training samples based on the multiple radiomic feature samples and the multiple voxel feature samples includes:
constructing the training samples based on the multiple radiomic feature samples, the multiple voxel feature samples and the multiple supplementary radiomic feature samples.
In an optional example, for all radiomic features included in all sample objects, screening all the radiomic features based on the third screening factor to obtain supplementary radiomic feature samples includes:
acquiring a radiomic feature matrix and a clinical data matrix; where the radiomic feature matrix includes the multiple radiomic features corresponding to each of the multiple sample objects, and the clinical data matrix includes the clinical data corresponding to each of the multiple sample objects;
obtaining a mutual information coefficient matrix based on the radiomic feature matrix and the clinical data matrix, the mutual information coefficient matrix including the mutual information coefficient between each radiomic feature and the clinical data, the mutual information coefficient being used to characterize the degree of association between a radiomic feature and the clinical data;
screening all radiomic features included in the radiomic feature matrix based on the mutual information coefficient matrix, to obtain multiple supplementary radiomic feature samples.
In an optional example, the second screening factor includes the expression category label, and screening the multiple voxel features based on the second screening factor to obtain multiple voxel feature samples includes:
acquiring the variance of each voxel feature, and retaining the voxel features whose variance is greater than a first variance threshold to obtain multiple candidate voxel features;
with the expression category label as the prediction label and the multiple candidate voxel features as input, screening out multiple voxel feature samples from the multiple candidate voxel features using a linear regression model.
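A sketch of this voxel-screening step under stated assumptions: an L1-regularised linear regression (scikit-learn `Lasso`) stands in for the unspecified linear regression model, and the data, label construction and `alpha` value are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Candidate voxel features (rows = subjects) and a label tied to two of them.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 10))
label = X[:, 3] - X[:, 7] + rng.normal(scale=0.05, size=120)

# Fit with the expression-category label as the prediction target;
# voxel features with non-zero coefficients survive the screening.
model = Lasso(alpha=0.1).fit(X, label)
kept = np.flatnonzero(np.abs(model.coef_) > 1e-6)
```

The L1 penalty zeroes out the coefficients of uninformative voxel features, leaving the voxel feature samples to be used in training.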
In an optional example, before screening the multiple radiomic features based on the first screening factor to obtain multiple radiomic feature samples, the method further includes:
determining the variance corresponding to each radiomic feature, and retaining the radiomic features whose variance is greater than a second variance threshold to obtain multiple candidate radiomic features;
screening the multiple radiomic features based on the first screening factor to obtain multiple radiomic feature samples includes:
screening the multiple candidate radiomic features based on the first screening factor to obtain multiple radiomic feature samples.
In an optional example, acquiring multiple radiomic features of the tumor area includes:
acquiring a wavelet image and a LoG image of the image sample of the tumor area;
performing multi-scale feature extraction on the image sample of the tumor area, the wavelet image and the LoG image respectively, to obtain first-order statistical features, texture features and morphological features of the tumor area;
combining the first-order statistical features, texture features and morphological features of the tumor area to obtain the multiple radiomic features.
In an optional example, training a preset model with the training samples as input to obtain a classification model includes:
inputting the training samples into the classification model to obtain the predicted expression category of the target gene output by the classification model;
determining a loss value of the classification model based on the predicted expression category and the expression category label;
updating the parameters of the classification model based on the loss value;
taking the classification model when a training end condition is met as the classification model, the training end condition being that the classification model converges or reaches a preset number of updates.
The present disclosure further provides a method for determining the expression category of a target gene, the method including:
acquiring multiple target radiomic features and multiple target voxel features of the tumor area of a subject to be tested;
inputting the multiple target radiomic features and the multiple target voxel features into a classification model; where the classification model is obtained according to the above classification model acquisition method;
determining the expression category of the target gene of the subject to be tested based on the output of the classification model.
In an optional example, after acquiring the multiple target radiomic features and multiple target voxel features of the tumor area of the subject to be tested, the method further includes:
determining the variance corresponding to each target voxel feature, and retaining the target voxel features whose variance is greater than the first variance threshold;
determining the variance corresponding to each target radiomic feature, and retaining the target radiomic features whose variance is greater than the second variance threshold;
inputting the multiple target radiomic features and the multiple target voxel features into the classification model includes:
inputting the retained target radiomic features and target voxel features into the classification model.
In an optional example, after acquiring the multiple target radiomic features and multiple target voxel features of the tumor area of the subject to be tested, the method further includes:
acquiring a fourth screening factor corresponding to the subject to be tested, the fourth screening factor including clinical data and/or tumor grading data of the subject to be tested;
screening the multiple target radiomic features based on the fourth screening factor;
inputting the multiple target radiomic features and the multiple target voxel features into the classification model includes:
inputting the multiple screened-out target radiomic features and the multiple target voxel features into the classification model.
In an optional example, the fourth screening factor includes clinical data and tumor grading data; screening the multiple target radiomic features based on the fourth screening factor includes:
determining a third relationship value between each target radiomic feature and the tumor grade label, and a mutual information coefficient between each target radiomic feature and the clinical data;
screening the multiple target radiomic features based on the third relationship value, and screening the multiple target radiomic features based on the mutual information coefficient;
deduplicating the target radiomic features screened out based on the third relationship value and those screened out based on the mutual information coefficient, to obtain the screened target radiomic features.
The present disclosure further provides a classification model acquisition apparatus, the apparatus including:
a feature acquisition module, used to acquire, for a tumor area of a sample object, multiple radiomic features and multiple voxel features of the tumor area;
a feature selection module, used to screen the multiple radiomic features based on a first screening factor to obtain multiple radiomic feature samples, and to screen the multiple voxel features based on a second screening factor to obtain multiple voxel feature samples; where the first screening factor and the second screening factor both include an expression category label of the target gene of the sample object;
a sample construction module, used to construct training samples based on the multiple radiomic feature samples and the multiple voxel feature samples;
a model training module, used to train a preset model with the training samples as input to obtain a classification model, the classification model being used to predict the expression category of the target gene.
The present disclosure further provides an apparatus for determining the expression category of a target gene, the apparatus including:
a feature acquisition module, used to acquire multiple target radiomic features and multiple target voxel features of the tumor area of a subject to be tested;
a feature input module, used to input the multiple target radiomic features and the multiple target voxel features into a classification model; where the classification model is obtained according to the above classification model acquisition method;
a category determination module, used to determine the expression category of the target gene of the subject to be tested based on the output of the classification model.
The present disclosure further provides an electronic device, including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the classification model acquisition method or the method for determining the expression category of the target gene.
The present disclosure further provides a computer-readable storage medium storing a computer program which causes a processor to execute the classification model acquisition method or, when executed, implements the method for determining the expression category of the target gene.
With the classification model acquisition method provided by the present disclosure, for the tumor area of a sample object, multiple radiomic features and multiple voxel features of the tumor area can be acquired; the multiple radiomic features are screened based on the first screening factor to obtain multiple radiomic feature samples, and the multiple voxel features are screened based on the second screening factor to obtain multiple voxel feature samples; training samples are constructed based on the multiple radiomic feature samples and the multiple voxel feature samples; a preset model is then trained with the training samples as input to obtain a classification model, which is used to predict the expression category of the target gene.
On the one hand, since the first screening factor and the second screening factor both include the expression category label of the target gene of the sample object, and this label can characterize the expression category of the target gene, such as the mutation category or the deletion status, the expression category of the target gene can be used as a physiological parameter to screen out radiomic feature samples and voxel feature samples closely related to the expression category of the target gene. The preset model is then trained with these screened radiomic feature samples and voxel feature samples as training samples, so that the classification model can learn the association between the morphological characteristics of the tumor area and the expression category of the target gene. This improves the interpretability of the classification model and thus the accuracy of predicting the expression category of the target gene from images of the tumor area.
On the other hand, since the training samples include not only the radiomic features of the tumor area but also its voxel features, where the radiomic features can reflect characteristics such as the texture and shape of the tumor area and the voxel features can reflect three-dimensional characteristics such as its spatial form, the richness of the training samples is increased, which in turn improves the accuracy of the classification model.
The above description is only an overview of the technical solution of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that it can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present disclosure more obvious and understandable, specific embodiments of the present disclosure are given below.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure or the related art more clearly, the drawings required in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort. It should be noted that the proportions in the drawings are only illustrative and do not represent actual proportions.
FIG. 1 schematically shows an overall flow diagram of the classification model acquisition process;
FIG. 2 schematically shows a flow chart of the steps of the classification model acquisition method;
FIG. 3 schematically shows the complete radiomic feature extraction process of the present disclosure;
FIG. 4 schematically shows a flow chart of the steps of screening radiomic features based on clinical data;
FIG. 5 schematically shows a flow chart of the steps of the method for determining the expression category of a target gene;
FIG. 6 schematically shows a structural framework diagram of the classification model acquisition apparatus;
FIG. 7 schematically shows a structural framework diagram of the apparatus for determining the expression category of a target gene;
FIG. 8 schematically shows a structural block diagram of the electronic device of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
For gliomas (brain gliomas), markers such as the IDH genotype (mutant/wild type), the 1p/19q chromosome deletion status (deleted/not deleted) and the MGMT methylation status (methylated/unmethylated), together with their signal transduction pathways, participate in the occurrence and development of gliomas and have a significant influence on glioma proliferation, metastasis and invasion.
Taking the TERT (telomerase reverse transcriptase) gene as an example, it is one of the important genes encoding the telomerase complex. The TERT gene has no transcriptional activity in the vast majority of non-tumor cells, but TERT gene mutations, such as promoter mutations, gene translocations and DNA amplification, are present in 73% of tumors. That is to say, the expression category of the above genes has a certain correlation with tumors.
At present, chemotherapy is still one of the important means of treating gliomas; however, the side effects of chemotherapy drugs are increasingly prominent and the results are not ideal. Determining the expression type of a gene generally requires invasive testing, which causes considerable pain to the patient. Therefore, predicting the status of the biological indicators of the above genes can help provide reference parameters for the treatment of brain gliomas, so that a treatment plan can be formulated.
To realize the detection of the expression category of a target gene and thereby provide reliable physiological parameters for accurate tumor detection, the present disclosure proposes a method for determining the expression category of a target gene based on radiomics and neural networks, which can detect the target gene non-invasively. The core idea is to use the radiomic features of MR segmentation images, combined with multiple feature screening methods, to obtain screened radiomic features; then to use the voxel-based morphometry (VBM) method to perform voxel calculation on MRI brain images and to perform association analysis between the obtained voxel features and the target gene, obtaining screened voxel features; and then to train a classifier by fusing the radiomic features and the voxel features, obtaining a classification model with which the expression category of the target gene to be predicted is predicted.
The tumors referred to in the present disclosure may be common tumors such as brain gliomas, liver tumors, breast tumors, thyroid tumors, lung tumors and melanomas; the present disclosure mainly takes brain glioma as an example.
It should be noted that, since the present disclosure proposes a method for determining the expression category of a target gene based on radiomics and neural networks, it aims to use machine learning to construct a classification model that can predict the expression category of the target gene. In order to give the classification model high interpretability, technical means are proposed for screening the radiomic features (screening the radiomic features with multiple combined methods) and the voxel features (performing association analysis between voxels and the target gene), thereby improving the effectiveness of the training samples.
Referring to FIG. 1, an overall flow diagram of the classification model acquisition process is shown. As shown in FIG. 1, a three-dimensional image containing the tumor area can be preprocessed and then segmented to obtain an image of the tumor area; next, feature extraction is performed on the image of the tumor area to obtain radiomic features, and voxel feature calculation is performed on the preprocessed three-dimensional image to obtain voxel features.
Afterwards, feature screening is performed: multiple screenings are applied to the radiomic features to obtain the screened radiomic features, and target-gene association analysis is performed on the voxel features to obtain the screened voxel features. The screened voxel features and radiomic features are then fused and fed into a classifier for training, so that the classification model is obtained.
Referring to FIG. 2, a flow chart of the steps of the classification model acquisition method of the present disclosure is shown. As shown in FIG. 2, the method may specifically include the following steps:
Step S201: For a tumor area of a sample object, acquire multiple radiomic features and multiple voxel features of the tumor area.
In this embodiment, the sample object may be a tumor patient, where the multiple radiomic features and multiple voxel features of the tumor area may be extracted from magnetic resonance images of the tumor area, and the multiple radiomic features may be obtained by performing feature extraction on the three-dimensional image of the tumor area.
In practice, the three-dimensional image is a magnetic resonance image. Since the tissue density of the human body is not uniform, the medium can be divided during scanning into many small cubes of relatively uniform density; such a small cube is called a voxel. Voxels are the basic units that make up a three-dimensional image, and the smaller the voxel, the clearer the image.
The radiomic features can reflect two-dimensional characteristics of the tumor area such as slice texture and shape, and the voxel features can reflect three-dimensional characteristics such as the spatial form of the tumor area. In this way, features of the tumor in each dimension can be obtained, so as to fully reflect the morphological characteristics of the tumor area and enrich the feature information.
The process of obtaining the multiple radiomic features may be as follows:
Image frames of the tumor site generated by different magnetic resonance machines with different slice thicknesses (range: 1 to 10) and pixel spacings are resampled to a uniform slice thickness of 1.0 and a pixel spacing of [1, 1, 1] to obtain a three-dimensional image; the three-dimensional image is then normalized using the mean and standard deviation to obtain a processed image. Image segmentation is performed on the processed image to segment out the image in which the tumor area is located, and feature extraction is then performed on that image to obtain the multiple radiomic features. In a specific implementation, the image segmentation module uses a UNet segmentation network model to segment the tumor area; the training input of the UNet segmentation network model includes three-dimensional image data of four modalities, and the label is the segmentation mask image.
The process of obtaining the multiple voxel features may be as follows:
Taking brain glioma as an example, image frames of the tumor site generated by different MRI machines with different slice thicknesses (range: 1 to 10) and pixel spacings are resampled to a uniform slice thickness of 1.0 and a pixel spacing of [1, 1, 1] to obtain a three-dimensional image. The three-dimensional image is aligned with a T1-weighted template image of the tumor site and then standardized to the Montreal Neurological Institute space, and extraction with an 8 mm FWHM kernel is used to generate a gray-matter density image. The gray-matter density image includes 240*240*155 = 8,928,000 voxels, with a voxel size of 1*1*1 mm3. This example takes brain glioma as an illustration; other types of tumors can be handled by analogy.
Step S202: Screen the multiple radiomic features based on a first screening factor to obtain multiple radiomic feature samples; and screen the multiple voxel features based on a second screening factor to obtain multiple voxel feature samples.
The first screening factor and the second screening factor both include the expression category label of the target gene of the sample object.
In this embodiment, the multiple radiomic features and the multiple voxel features can be screened separately based on the expression category label of the target gene. The expression category label of the target gene indicates the expression category of the target gene of the sample object, where the expression categories of the target gene include the mutant type and the wild type; the label of the mutant type is 1 and the label of the wild type is 0.
Screening the multiple radiomic features based on the expression category of the target gene may mean screening out, from the multiple radiomic features, those radiomic features highly correlated with the expression category of the target gene. Similarly, screening the multiple voxel features based on the expression category of the target gene may mean screening out, from the multiple voxel features, those voxel features highly correlated with the expression category of the target gene.
For example, if the expression category of the target gene of sample object A is the mutant type, the radiomic features highly correlated with the mutant type can be screened out from the multiple radiomic features, and the voxel features highly correlated with the mutant type can be screened out from the multiple voxel features. If the expression category of the target gene of sample object B is the wild type, the radiomic features highly correlated with the wild type can be screened out from the multiple radiomic features, and the voxel features highly correlated with the wild type can be screened out from the multiple voxel features. That is to say, for each sample object, the radiomic features and voxel features highly correlated with the expression category of that sample object's own target gene can be screened out.
In this embodiment, the correlation between a radiomic feature and the expression category label can be obtained by calculating the relationship value between the radiomic feature and the expression category label, and the correlation between a voxel feature and the expression category label can be obtained by calculating the relationship value between the voxel feature and the expression category label.
In practice, the first screening factor may also include other screening factors in addition to the expression category label of the target gene, such as clinical data and the tumor grade. That is, the multiple radiomic features can be screened once based on each screening factor to obtain the radiomic features screened out by each factor; the radiomic features screened out by the various screening factors are then combined and deduplicated to obtain the multiple screened radiomic feature samples.
Step S203: Construct training samples based on the multiple radiomic feature samples and the multiple voxel feature samples.
In this embodiment, for each sample object, the corresponding multiple radiomic feature samples and voxel feature samples can be screened out; the multiple radiomic feature samples and voxel feature samples screened out for each sample object are then taken as one sample group, and the expression category label of that sample object is taken as the label of the sample group, for use in constructing the loss function in subsequent model training.
In this way, multiple sample objects form multiple sample groups, and the multiple sample groups form the training samples, where each sample group includes the multiple radiomic feature samples and voxel feature samples of one corresponding sample object and corresponds to one expression category label.
Step S204: Train a preset model with the training samples as input to obtain a classification model, the classification model being used to predict the expression category of the target gene.
In this embodiment, the multiple radiomic feature samples and voxel feature samples in each sample group can be fused and then input into the preset model for training, where fusion may mean combining the multiple radiomic feature samples and voxel feature samples into one feature set, and each feature sample in the feature set is input into the preset model.
The preset model may be a classifier, for example a DenseNet network. In DenseNet, each layer is connected to every layer before it, so the error signal can easily propagate to earlier layers, and earlier layers can obtain direct supervision from the final classification layer. This mitigates the vanishing-gradient phenomenon and prevents the model from overfitting.
After multiple rounds of training, when the preset model converges, or the loss value approaches its minimum, the classification model can be obtained, and the classification model can be used to predict the expression category of the target gene of a tumor patient.
The radiomic features and voxel features in the present disclosure can both be expressed as feature vectors.
The target gene in the present disclosure may include any one of: the TERT gene promoter, the IDH genotype (mutant/wild type), the 1p/19q chromosome deletion status (deleted/not deleted), and the MGMT methylation status (methylated/unmethylated); it is only necessary to annotate the expression category label of the target gene in advance. For example, for the TERT gene promoter, the expression categories include the mutant category and the wild-type category; for the IDH genotype, the expression categories include the mutant category and the wild-type category; for the 1p/19q chromosomes, the expression categories include the deleted category and the not-deleted category; and for MGMT, the expression categories include the methylated category and the unmethylated category.
在一种可选的示例中,用于进行分类模型训练的目标基因可以是一种或多种。一种目标基因的情况下,分类模型可以对一种目标基因的表达类别进行预测;在目标基因是多种的情况下,分类模型可以同时实现对多种基因的表达类别的预测,如分类模型可以同时输出TERT基因启动子的突变类别、1p/19q染色体的缺失状态、MGMT甲基化状态,此种情况下,可以为每种基因准备表达类别标签,以使分类模型同时学习到体素特征和影像组学特征与多种目标基因之间的关联。
采用本公开实施例的技术方案,一方面,由于表达类别标签可以表征目标基因的表达类别,如此,便可以以目标基因的表达类别为生理参数,筛选出与目标基因的表达类别紧密相关的影像组学特征样本和体素特征样本,再以这些筛选出的影像组学特征样本和体素特征样本为训练样本对预设模型进行训练,从而可以使得分类模型可以学习到肿瘤区域的形态特征与目标基因的表达类别之间的关联性,提高了分类模型的可解释性,从而提高了基于肿瘤区域的影像预测目标基因的表达类别的准确性。
另一方面,由于训练样本不仅包括肿瘤区域的影像组学特征,还包括了肿瘤区域的体素特征,其中,影像组学特征可以反映肿瘤区域的切片纹理、形状等二维特征,体素特征可以反映肿瘤区域的空间立体形态等三维特征,从而可以提高训练样本的丰富程度,进而提高分类模型的准确度。
<影像组学特征的提取>
在一种可选的示例中,为提高提取到的影像组学特征的丰富性和细腻性,提出了两种措施,使得提取出的影像组学特征可以更加细腻地描述肿瘤区域的形态学特征。其中一种措施A是对肿瘤区域的图像的三种亚区进行影像组学特征提取,从而可以反映肿瘤不同亚区的形态学特征;另一种措施B是对肿瘤区域的图像进行细粒度的影像组学特征提取,具体而言可以是提取描述肿瘤不同形态学特点的特征,如描述肿瘤的表面(MRI的切片)的细腻程度的特征,描述肿瘤的外观形状的特征等。
其中,措施A和措施B可以结合,也就是可以对每种亚区的图像进行多种细粒度的影像组学特征提取。
措施A:可以从肿瘤区域的图像样本中提取属于肿瘤非增强区的第一亚区图像、属于肿瘤增强区的第二亚区图像,以及属于肿瘤周围水肿区的第三亚区图像;分别对第一亚区图像、第二亚区图像以及第三亚区图像进行特征提取,得到多个影像组学特征。
本措施A中,可以对肿瘤区域分别进行三次图像分割,每次图像分割得到一个亚区的图像,接着,对每个亚区的图像进行特征提取。其中,亚区包括肿瘤非增强区、肿瘤增强区和肿瘤周围水肿区;肿瘤非增强区是指肿瘤区域中的非增强肿瘤区域,即肿瘤核;肿瘤增强区是指肿瘤核周围的增强区域,由增强肿瘤体素组成;肿瘤周围水肿区是指肿瘤周围的水肿区域。
本实施例中,可以分别对第一亚区图像进行特征提取,得到属于第一亚区图像的多个第一影像组学特征,对第二亚区图像进行特征提取,得到属于第二亚区图像的多个第二影像组学特征,以及对第三亚区图像进行特征提取,得到属于第三亚区图像的多个第三影像组学特征。并将多个第一影像组学特征、多个第二影像组学特征以及多个第三影像组学特征合并后得到样本对象的多个影像组学特征。
其中,针对不同亚区图像所提取的影像组学特征的数量可以是相同的,例如从不同亚区图像中均提取N个影像组学特征。这样,三种亚区图像共提取得到3N个影像组学特征。
采用该措施A时,基于肿瘤区域中肿瘤细胞表达程度的不同,分为肿瘤核心区、增强肿瘤核心区和整个肿瘤区域,进而在提取影像组学特征时,可以分区域对不同肿瘤细胞表达程度的区域进行特征提取,实现对肿瘤区域的细粒度的特征提取,提取到的影像组学特征可以充分反映肿瘤区域的形态学特征,以及肿瘤细胞在不同表达程度下的形态学特征,从而增强训练样本的丰富性。
措施B:获取肿瘤区域的图像样本的小波图像和LoG图像;分别对肿瘤区域的图像样本、小波图像和LoG图像进行多尺度特征提取,得到肿瘤区域的一阶统计量特征、纹理特征和形态特征;并将肿瘤区域的一阶统计量特征、纹理特征和形态特征进行组合,得到多个影像组学特征。
本措施B中,小波图像可以是指:针对肿瘤区域的图像样本进行小波变换后得到的图像;LoG图像可以是指:对肿瘤区域的图像样本进行高斯拉普拉斯(Laplacian of Gaussian)滤波,即先以高斯核平滑、再求二阶导数后得到的图像,从而突出肿瘤区域的边缘,最终得到肿瘤区域的边缘图像。
其中,可以分别对肿瘤区域的图像样本、小波图像和LoG图像进行多种尺度的特征提取,每种尺度对应一种维度,具体可以包括:一阶统计量维度、纹理维度和形态维度。
其中,可以从肿瘤区域的图像样本中提取一阶统计量特征、纹理特征和形态特征,可以从小波图像中提取一阶统计量特征和纹理特征,从LoG图像中提取一阶统计量特征和纹理特征。
其中,从一阶统计量维度可以提取肿瘤区域的一阶统计量特征,具体地,一阶统计量特征可以是基于图像样本的像素灰度分布而计算出来的特征值,如直方图统计特征,可以反映肿瘤区域的整体灰度分布特性。
其中,从形态维度可以提取肿瘤区域的形态特征,具体地,该形态特征可以是基于图像样本中肿瘤区域的轮廓线条而计算出来的特征值,可以反映肿瘤区域的肿瘤的形状和结构。
其中,从纹理维度可以提取肿瘤区域的纹理特征,具体地,可以采用统计方法、几何法和模型法提取肿瘤区域的图像样本中的纹理特征,其中,统计方法可以包括GLCM(灰度共生矩阵)方法、半方差图、纹理谱方法等,模型法可以包括随机场模型方法。其中,纹理特征可以用于描述肿瘤的表面性质,例如表面的粗糙程度、疏密程度等特征。
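以统计方法中的GLCM为例,其通过统计相邻像素灰度对的共现频率来刻画纹理,可用如下简化实现示意(仅统计水平方向、距离为1的共生关系,灰度级数与图像均为假设的示例;实际中常用现成的影像组学特征提取库):

```python
import numpy as np

def glcm_horizontal(img, levels=4):
    """计算水平方向(距离1)的灰度共生矩阵,并归一化为频率。"""
    m = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            m[a, b] += 1
    return m / m.sum()

def glcm_contrast(p):
    """GLCM的对比度特征:sum_{i,j} (i-j)^2 * p(i,j)。"""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm_horizontal(img)
contrast = glcm_contrast(p)
```

基于归一化后的共生矩阵,还可以按同样方式计算能量、同质性等其他纹理特征。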
具体实施时,可以提取小波图像的一阶统计量特征和纹理特征,小波图像是对图像样本进行去噪后得到的,提取出的纹理特征和一阶统计量特征所包含的噪点较少,这样,便可以与从原始的图像样本中提取的纹理特征和一阶统计量特征形成对比,以得到不同尺度下的图像样本的多维度特征。
其中,LoG图像可以勾勒出肿瘤区域的形态结构,从而可以对该边缘图像进行不同形式的特征提取,得到LoG图像的一阶统计特征和纹理特征。这样,便可以提取出肿瘤区域的边缘线条的形态学特征和表面性质特征。
采用此措施B的实施方案,可以按照不同特征方式,分别对肿瘤区域的原始的图像样本、去噪后的图像样本以及边缘图像进行特征提取,可以理解为是按照不同的关注点,提取不同关注点下的一阶统计量特征和纹理特征。这样,不同关注点下的一阶统计量特征和纹理特征,可以用于描述肿瘤区域在不同观测角度下的形态学特征,进而可以全方位反映肿瘤区域的形态学特征,从而增强训练样本的丰富性。
实际中,措施A和措施B可以结合使用,也就是说,对每个亚区图像,可以对该亚区图像进行多种维度的特征提取,得到每个亚区图像在每种维度下的多个影像组学特征。这样,不仅对肿瘤区域的细粒度区域进行特征提取,还针对每个细粒度区域,从不同观测角度提取了影像组学特征。
在另外一种可选示例中,样本对象的肿瘤区域的图像样本可以包括T1加权图像(T1w)、T2加权图像(T2w)、对比度增强的T1加权图像(T1WCE)和T2流体衰减翻转恢复图像(T2-FLAIR),在进行特征提取时,可以对四种类型的图像均进行特征提取。
具体实施时,可以获取样本对象的肿瘤区域的多种类型的图像样本,多种类型包括T1加权类型、T2加权类型、对比度增强的T1加权类型和T2流体衰减翻转恢复类型;分别对每种类型的图像样本进行特征提取;将提取到的每种类型的图像样本各自对应的影像组学特征进行组合,得到多个影像组学特征。
本可选示例中,以脑胶质瘤为例,核磁共振成像常用的序列有T1加权(T1)、对比增强T1加权(T1c)、T2加权(T2)和流体衰减反演恢复(FLAIR)图像。不同的模式的图像称为一种模态,其可以提供互补的信息来分析不同的胶质瘤分区。例如,T2和FLAIR突出肿瘤周围水肿,指定整个肿瘤。T1和T1c突出显示没有瘤周水肿的肿瘤,指定为肿瘤核心。在T1c中也可以观察到肿瘤核心的高强度增强区域,称为增强肿瘤核心。因此,应用多模态图像可以减少信息的不确定性,提高临床诊断和分割的准确性。
这样,对每种模态的图像样本,都可以提取出相应的影像组学特征。具体实施时,可以针对每种模态的图像样本,从该模态的图像样本中提取三个亚区的图像(上述措施A),接着又对每个亚区的图像样本分别进行多种维度的特征提取(上述措施B),从而得到每种模态下的图像样本的多个影像组学特征,接着对上述四种模态的图像样本提取到的多个影像组学特征进行组合,得到样本对象的多个影像组学特征。
参照图3所示,示出了本公开的完整的对影像组学特征进行提取的过程示意图,如图3所示,包括T1加权类型(T1W图像)、T2加权类型(T2W图像)、对比度增强的T1加权类型(T1WCE图像)和T2流体衰减翻转恢复类型的图像样本(FLAIR图像),需要说明的是,这四种模态的图像样本都为三维图像。
其中,对每一种模态的图像样本进行图像分割,得到第一亚区图像、第二亚区图像和第三亚区图像,由于不同模态的图像样本用于突出不同肿瘤区域的不同亚区的特征,因此,对分割出的每种亚区的图像样本均进行特征提取。
在进行特征提取时,可以利用措施B,得到每个亚区图像的一阶统计量特征、形态特征、纹理特征,小波图像的一阶统计量特征和纹理特征,以及LoG图像的一阶统计量特征和纹理特征。
在实际进行特征提取时,可以参考以下示例进行:
其中,对每个亚区的图像样本,提取一阶统计量特征18个、形态特征16个;对纹理维度而言,可以采用不同纹理特征提取方式得到不同方式下提取到的纹理特征,具体包括灰度共生矩阵(GLCM)24个特征、灰度游程矩阵(GLRLM)16个特征、灰度尺寸区域矩阵(GLSZM)16个特征、灰度依赖矩阵(GLDM)14个特征、相邻灰度差分矩阵(NGTDM)5个特征;
LoG滤波图像(sigma:[1.0,2.0,3.0,4.0,5.0])的一阶统计量特征90个、纹理特征375个,小波滤波图像(LLH、LHL、LHH、HLL、HLH、HHL、HHH、LLL)的一阶统计量特征144个、纹理特征600个。
也就是说,针对每种模态下的每个亚区的图像样本,可以得到1318个影像组学特征,这样,针对肿瘤增强区、肿瘤非增强区、肿瘤周围水肿区等三个亚区,得到每个模态的图像样本的影像组学特征为1318*3=3954个,四种模态的图像样本的影像组学特征总共为4*3954=15816个。
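上述各类特征数量之间的合计关系,可以用如下代码核对:

```python
# 纹理特征:GLCM + GLRLM + GLSZM + GLDM + NGTDM
texture = 24 + 16 + 16 + 14 + 5            # 共75个
original = 18 + 16 + texture               # 原始图像:一阶18 + 形态16 + 纹理75
log_filtered = 18 * 5 + texture * 5        # LoG滤波图像,5个sigma:90 + 375
wavelet = 18 * 8 + texture * 8             # 小波滤波图像,8个子带:144 + 600
per_subregion = original + log_filtered + wavelet   # 每个亚区的特征总数
per_modality = per_subregion * 3           # 3个亚区
total = per_modality * 4                   # 4种模态
print(per_subregion, per_modality, total)
```

可以验证每个亚区1318个特征、每种模态3954个特征、四种模态共15816个特征与文中数量一致。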
当然,以上仅为示例性说明,实际中,提取的影像组学特征的数量可以根据实际需求进行确定即可。
如上所述,本公开的对多个影像组学特征进行筛选可以是指,对多个影像组学特征进行多次筛选,每次筛选所依据的筛选因子可以不同,这样,可以将多次筛选出的影像组学特征进行合并并去重,从而得到筛选出的影像组学特征样本。下面,分别对如何进行影像组学特征的筛选和体素特征的筛选进行介绍。
<影像组学特征的筛选过程>
对影像组学特征的筛选可以包括针对单个样本对象的多个影像组学特征的筛选,也可以包括针对全部样本对象的全部影像组学特征的筛选。其中,在对单个样本对象的多个影像组学特征进行筛选时,可以依据表达类别标签和肿瘤区域的肿瘤分级标签进行筛选;在对全部样本对象的全部影像组学特征进行筛选时,可以依据临床数据进行筛选。
具体实施时,针对单个样本对象的多个影像组学特征的筛选的过程如下:
在一种可选的示例中,第一筛选因子可以包括表达类别标签和肿瘤区域的肿瘤分级标签;其中,肿瘤分级标签用于标识该样本对象的肿瘤分级,肿瘤分级是指肿瘤的组织学分级,是表示肿瘤恶性程度的指标。
需要说明的是,本公开的表达类别标签和肿瘤分级标签都是样本对象被确诊肿瘤疾病后得到的,即可以是确诊患者的目标基因的表达类别和肿瘤分级。
其中,可以分别基于表达类别标签和肿瘤分级标签对影像组学特征进行筛选,将二者筛选后的影像组学特征去重后,得到影像组学特征样本。
具体实施时,可以基于每个影像组学特征与表达类别标签之间的第一关系值,对多个影像组学特征进行筛选,得到多个第一影像组学特征;基于每个影像组学特征与肿瘤分级标签之间的第二关系值,对多个影像组学特征进行筛选,得到多个第二影像组学特征;接着对多个第一影像组学特征和多个第二影像组学特征进行去重,得到多个影像组学特征样本。
其中,第一关系值用于表征影像组学特征与目标基因的突变之间的关联程度;第二关系值用于表征影像组学特征与肿瘤分级之间的关联程度。
其中,可以利用Mann-Whitney U检验(曼-惠特尼U检验)方法选择与TERT状态标签显著相关的特征,该检验方法用于评估两个抽样群体是否可能来自同一总体。
具体地,按照TERT状态类别标签0和1将样本对象分成两组x1和x2,其中,x1中的样本对象的TERT的表达类别标签是0,x2中的样本对象的TERT的表达类别标签是1,接着,对两组样本x1和x2进行Mann-Whitney U检验,得到每个影像组学特征的p-value,即第一关系值,该第一关系值可以反映影像组学特征与状态标签之间的关联程度,进而保留p-value<0.05的影像组学特征,从而完成对影像组学特征的第一次筛选。
其中,也可以利用Mann-Whitney U检验方法选择与肿瘤分级(高级别的胶质瘤/低级别的胶质瘤)标签显著相关的特征,其中,肿瘤分级标签可以包括0和1,0代表高级别的肿瘤,1代表低级别的肿瘤,按照肿瘤分级标签0和1将样本对象分成两组x3和x4,其中,x3中的样本对象的肿瘤分级标签是0,x4中的样本对象的肿瘤分级标签是1,接着,对两组样本x3和x4进行Mann-Whitney U检验,得到每个影像组学特征的p-value,即第二关系值,该第二关系值可以反映影像组学特征与肿瘤分级标签之间的关联程度,进而保留p-value<0.05的影像组学特征,从而完成对影像组学特征的第二次筛选。
接着,对多个第一影像组学特征和多个第二影像组学特征进行组合,并去除重复的影像组学特征,得到多个影像组学特征样本。
需要说明的是,第一次筛选(基于目标基因的表达类别标签的筛选)和第二次筛选(基于肿瘤分级标签的筛选)是相互独立的。
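上述基于Mann-Whitney U检验的筛选与合并去重的过程,可以用如下基于scipy的示意代码表示(其中的分组数据以及第二次筛选的结果均为假设的示例):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def select_by_utest(features_g0, features_g1, alpha=0.05):
    """对每个特征列做Mann-Whitney U检验,保留p-value<alpha的特征索引。"""
    kept = []
    for j in range(features_g0.shape[1]):
        _, p = mannwhitneyu(features_g0[:, j], features_g1[:, j])
        if p < alpha:
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
g0 = rng.normal(0.0, 1.0, size=(30, 3))   # 表达类别标签为0的一组样本的特征
g1 = rng.normal(0.0, 1.0, size=(30, 3))   # 表达类别标签为1的一组样本的特征
g1[:, 0] += 5.0                           # 构造一个与分组显著相关的特征0
first = select_by_utest(g0, g1)           # 第一次筛选(基于表达类别标签)
second = [0, 2]                           # 假设第二次筛选(基于肿瘤分级标签)的结果
samples = sorted(set(first) | set(second))  # 组合并去重,得到影像组学特征样本索引
```

两次筛选相互独立,最终取并集并去重即可。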
在又一种可选的示例中,还可以先采用方差选择法选择区分能力较好的影像组学特征,接着,基于第一筛选因子从区分能力较好的影像组学特征中筛选出多个影像组学特征样本。
具体实施时,可以确定每个影像组学特征对应的方差,并将方差大于第二方差阈值的影像组学特征保留,得到多个候选影像组学特征。之后,基于第一筛选因子对多个候选影像组学特征进行筛选,得到多个影像组学特征样本。
其中,方差选择法可以选择出对样本的区分有用的特征,也就是说可以选择出特征表达较强的影像组学特征。具体地,若一个影像组学特征的方差接近于0,则表明各样本对象在这个影像组学特征上基本没有差异,这个影像组学特征对于样本对象之间的区分并没有什么用。
具体地,可以设定阈值,将方差大于阈值的影像组学特征保留,得到方差选择法选择的影像组学特征。实际中,可以对方差选择法选择的影像组学特征使用z-score进行数据标准化处理,接着再基于第一筛选因子,对数据标准化处理后的影像组学特征进行筛选,得到影像组学特征样本。
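方差选择与z-score标准化的过程可以示意如下(阈值与数据均为假设的示例):

```python
import numpy as np

def variance_select(X, threshold=1e-3):
    """保留方差大于阈值的特征列,返回保留的列索引与筛选后的矩阵。"""
    var = X.var(axis=0)
    keep = np.where(var > threshold)[0]
    return keep, X[:, keep]

def zscore(X):
    """按列做z-score标准化:减均值、除标准差。"""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

X = np.array([[1.0, 5.0, 0.3],
              [2.0, 5.0, 0.1],
              [3.0, 5.0, 0.2]])   # 第2列为常数特征,方差为0,将被剔除
keep, Xs = variance_select(X)
Xn = zscore(Xs)                   # 标准化后再送入后续基于筛选因子的筛选
```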
采用此种实施方式时,可以先对提取出的影像组学特征中对样本区分性不大的影像组学特征进行剔除,从而保留的影像组学特征都是特征表达强的特征,进而提高了后续筛选出的影像组学特征的特征表达强度,也减小了后续进行特征筛选的计算量,提高特征筛选效率。
针对全部样本对象的多个影像组学特征的筛选的过程如下:
具体实施时,基于第三筛选因子对全部影像组学特征进行筛选,得到补充性影像组学特征样本;其中,第三筛选因子包括多个样本对象各自对应的临床数据。
相应地,可以基于多个影像组学特征样本、多个体素特征样本和多个补充性影像组学特征样本,构建所述训练样本。
具体地,如上所述,训练样本包括多个样本组,每个样本组包括一个样本对象对应的多个影像组学特征样本、多个体素特征样本以及该样本对象的多个补充性影像组学特征样本。
这样,多个样本对象被筛选出的影像组学特征样本构成影像组学特征子集1,多个补充性影像组学特征样本构成影像组学特征子集2,体素特征样本构成体素特征子集,影像组学特征子集1、影像组学特征子集2和体素特征子集便作为训练预设模型的训练样本。
具体实施时,参照图4所示,示出了基于临床数据对影像组学特征进行筛选的步骤流程示意图,如图4所示,具体可以包括如下步骤:
步骤S401:获取影像组学特征矩阵以及临床数据矩阵;其中,影像组学特征矩阵包括多个样本对象各自对应的多个影像组学特征,临床数据矩阵包括多个样本对象各自对应的临床数据。
步骤S402:基于影像组学特征矩阵和临床数据特征矩阵,获取互信息系数矩阵。其中,互信息系数矩阵包括每个影像组学特征与临床数据之间的互信息系数,互信息系数用于表征影像组学特征与临床数据之间的关联程度。
步骤S403:基于互信息系数矩阵,对影像组学特征矩阵所包括的全部影像组学特征进行筛选,得到多个补充性影像组学特征样本。
本实施例中,可以将多个样本对象的影像组学特征,基于各自对应的临床数据进行筛选,得到每个样本对象筛选出的补充性影像组学特征样本。具体实施时,可以使用互信息度量影像组学特征与临床数据特征之间的相关性。具体地,设有N个样本对象,则设影像组学特征矩阵为A(N×M),临床数据特征矩阵为B(N×S)。
其中,M为每个样本对象提取到的影像组学特征的数量,例如,如上示例提取了15816个影像组学特征,则M为15816,当然,在采用方差选择法选择了部分影像组学特征后,M为方差选择法选择出的影像组学特征的数量。S为每个样本对象的临床数据特征的数量。
本公开中,临床数据包括年龄、性别、收缩压、舒张压、疾病史、恶性肿瘤史、用药信息、手术情况、生存时间等。示例如下表所示:
其中,可以将临床数据中的每种数据转换为临床数据特征,S即表示临床数据特征的数量。其中,将临床数据转换为临床数据特征的过程可以如下所示:
对于数值型的临床数据执行归一化处理,如对年龄、收缩压、舒张压执行归一化处理。对字符串类型的临床数据,先将其转换为数值信息,例如,对性别、疾病史、恶性肿瘤史、用药信息、手术情况等数据进行数值化处理,例如,性别男用1表示,女用0表示;疾病史中,糖尿病用1表示,高血压用2表示,脑血管疾病用3表示,接着再将其转换为向量表示。其中,对生存期的临床数据进行特征离散化处理,按照0~3年用1表示、3~5年用2表示、5年以上用3表示的标准划分为三个类别。
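上述临床数据的数值化、归一化与离散化,可以用如下示意代码表示(映射规则沿用文中的约定,函数名为示意性假设):

```python
def encode_gender(g):
    """性别数值化:男用1表示,女用0表示。"""
    return 1 if g == "男" else 0

def encode_survival_years(y):
    """生存期离散化:0~3年→1,3~5年→2,5年以上→3。"""
    if y <= 3:
        return 1
    if y <= 5:
        return 2
    return 3

def normalize(value, min_v, max_v):
    """数值型临床数据(如年龄、收缩压、舒张压)的归一化。"""
    return (value - min_v) / (max_v - min_v)

record = {"gender": encode_gender("女"),
          "survival": encode_survival_years(4),
          "age": normalize(60, 0, 100)}
```

边界年份(如恰好3年)归入哪个区间,文中未明确,此处按左闭右闭处理,属实现上的假设。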
当然,实际中,也可以对每种临床数据进行独热编码,得到每种临床数据对应的临床数据特征。
其中,可以基于影像组学特征矩阵和临床数据特征矩阵,获取互信息系数矩阵,具体地,可以计算每个影像组学特征与不同临床数据特征之间的互信息系数,得到互信息系数矩阵C(S×M);这样,互信息系数矩阵C(S×M)的每行代表一个临床数据特征与M个影像组学特征各自之间的互信息系数,接着,可以从每行选择K个最好的影像组学特征,并将S行(S个临床数据特征)选择的特征合并去重,得到多个补充性影像组学特征样本。
其中,多个补充性影像组学特征样本中可以按照各自所属的样本对象进行分组,得到每个样本对象对应的补充性影像组学特征样本,进而每个样本对象对应的补充性影像组学特征样本可以划分到该样本对象的样本组中作为训练样本。
由于互信息系数可以度量影像组学特征与临床数据特征之间的相关性,因此,可以筛选出每个样本对象的与临床数据特征相关的影像组学特征,也就是说可以基于临床数据筛选出与患者的病情密切相关的影像组学特征用于模型训练,以提高模型的可解释性。
再一方面,由于基于影像组学特征矩阵和临床数据特征矩阵,获取互信息系数矩阵,这样通过互信息系数矩阵即可筛选出对应的补充性影像组学特征样本,由此,相比于单独计算每个样本对象的影像组学特征与临床数据特征之间的相关性,可以一次性筛选出多个样本对象各自对应的补充性影像组学特征样本,提高了筛选效率。且将不同样本对象的影像组学特征和临床数据特征纳入到统一矩阵空间中进行计算,由此,在筛选出一个样本对象的补充性影像组学特征样本时,可以借助其他样本对象的临床数据特征与影像组学特征之间的相关性,从而基于多个样本对象构建了临床数据特征与影像组学特征之间的医学关联,提高了筛选出的补充性影像组学特征样本的准确性,即筛选出可以真实反映临床数据的影像组学特征样本。
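上述基于互信息系数矩阵筛选补充性影像组学特征样本的过程,可以用如下基于直方图的互信息估计示意(分箱数、K值以及数据均为假设的示例,实际中也可使用现成的互信息计算库):

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """基于二维直方图估计两个变量之间的互信息(单位:nat)。"""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

def select_topk_per_clinical(A, B, k=2):
    """A为N*M影像组学特征矩阵,B为N*S临床数据特征矩阵;
    计算S*M互信息系数矩阵C,每行取互信息最大的k个特征索引,再合并去重。"""
    C = np.array([[mutual_info(A[:, m], B[:, s]) for m in range(A.shape[1])]
                  for s in range(B.shape[1])])
    chosen = set()
    for row in C:
        chosen.update(np.argsort(row)[-k:].tolist())
    return C, sorted(chosen)

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))                                  # 影像组学特征
B = np.stack([A[:, 0] + 0.1 * rng.normal(size=100)], axis=1)   # 与特征0强相关的临床特征
C, selected = select_topk_per_clinical(A, B, k=1)
```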
<体素特征的筛选过程>
在一种可选的示例中,用于筛选体素特征的筛选因子可以包括表达类别标签,当然在筛选过程中,可以先对体素特征进行初级筛选,以筛选出特征表达强的体素特征,之后,在特征表达强的体素特征中,基于表达类别标签进行筛选,以减小分类模型对体素特征的计算量,以及对体素特征进行筛选时的计算量。
具体实施时,可以获取每个体素特征的方差,将方差大于第一方差阈值的体素特征保留,得到多个候选体素特征;以表达类别标签为预测标签,以多个候选体素特征为输入,利用线性回归模型从多个候选体素特征中筛选出多个体素特征样本。
本示例中,仍然可以计算每个体素特征的方差,接着,将方差大于第一方差阈值的体素特征保留,其中,第一方差阈值可以不同于上述的第二方差阈值。其中,线性回归模型可以是LASSO回归模型,具体地,针对保留的候选体素特征,可以采用LASSO回归L1正则化算法进行体素特征选择,具体而言,以多个候选体素特征为LASSO的输入特征,该LASSO的预测标签为目标基因的表达类别标签,从而得到LASSO选择的一组体素特征样本。
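LASSO的L1正则化特征选择可以用如下坐标下降的简化实现示意(正则系数与数据均为假设的示例,此处以连续标签示意回归形式;实际中可直接使用现成的LASSO回归实现):

```python
import numpy as np

def soft_threshold(rho, lam):
    """软阈值算子:L1正则化使小系数被收缩为0。"""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_select(X, y, lam=0.1, n_iter=100):
    """坐标下降求解LASSO,返回系数以及系数非零的特征索引(即被选中的特征)。"""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # 标准化特征
    y = y - y.mean()
    n, m = X.shape
    b = np.zeros(m)
    for _ in range(n_iter):
        for j in range(m):
            resid = y - X @ b + X[:, j] * b[j]          # 去掉第j项后的残差
            rho = X[:, j] @ resid / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b, np.where(np.abs(b) > 1e-8)[0]

x0 = np.arange(1.0, 9.0)                        # 与标签强相关的候选体素特征
x1 = np.array([1, -1, -1, 1, 1, -1, -1, 1.0])   # 与标签无关的候选体素特征
X = np.stack([x0, x1], axis=1)
y = 2.0 * x0                                    # 以预测标签为回归目标(示意)
b, selected = lasso_select(X, y)
```

L1正则化会把与标签无关的特征的系数压缩为0,保留下来的非零系数对应的特征即为体素特征样本。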
实际中,由于肿瘤发生位置与目标基因的表达类别有一定关联,因此,如图1所示,在一种可选的示例中,还可以确定肿瘤区域在所属人体部位上的位置,这样,可以将该位置的位置信息作为训练样本的补充特征,用于训练分类模型。
具体地,可以基于胶质瘤区域的图像样本,确定胶质瘤区域对应的位置信息;并获取位置信息对应的位置特征。其中,位置信息包括胶质瘤区域所属的大脑区域,和/或胶质瘤区域在大脑中的位置坐标。
相应地,可以基于位置特征、多个影像组学特征样本和多个体素特征样本,构建训练样本。
本实施方式中,对于脑胶质瘤而言,其胶质瘤的发生区域与目标基因的表达类别有一定的关联,因此,为了刻画此种关联,可以获取胶质瘤区域在大脑中的位置,即获取胶质瘤区域对应的位置信息。
其中,可以根据脑部的核磁共振图像,确定出胶质瘤区域所在的位置,进而基于该位置所属的脑部区域,确定位置信息。该位置信息可以包括胶质瘤区域所属的大脑区域或胶质瘤区域在大脑中的位置坐标,或者,既包括胶质瘤区域所属的大脑区域,也包括胶质瘤区域在大脑中的位置坐标。具体地,位置坐标可以是指胶质瘤在大脑中的中心坐标,即胶质瘤在大脑中的空间位置。
在一种示例中,大脑区域可以包括大脑、小脑和脑干;在另一种示例中,根据解剖标记(AAL)图谱将大脑细分为116个ROI(region of interest,感兴趣区),AAL图谱全称是Anatomical Automatic Labeling,是一种数字化的大脑结构图谱,一般用于功能性神经影像研究中定位大脑的活动区域,因此,大脑区域可以包括116个ROI区域。
其中,在将位置信息转换为位置特征时,对于位置坐标可以用数值型表示,对于胶质瘤区域所属的大脑区域,可以用胶质瘤区域是否属于上述每个大脑区域的标签表示,以大脑区域包括大脑、小脑和脑干为例,肿瘤属于该区域,则表示为1,不属于该区域,则表示为0,假设胶质瘤分布在小脑和脑干,则位置特征表示为[0,1,1]。
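上述位置特征的编码方式可以示意如下(以大脑、小脑、脑干三个区域为例,函数名为示意性假设):

```python
REGIONS = ["大脑", "小脑", "脑干"]

def encode_location(tumor_regions, center=None):
    """将胶质瘤所属区域编码为0/1向量,可选拼接胶质瘤在大脑中的中心坐标。"""
    feat = [1 if r in tumor_regions else 0 for r in REGIONS]
    if center is not None:
        feat.extend(center)
    return feat

loc = encode_location({"小脑", "脑干"})   # 胶质瘤分布在小脑和脑干
```

若采用AAL图谱的116个ROI,则将REGIONS替换为116个区域名称即可,编码方式不变。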
采用本实施方式的技术方案,可以融合肿瘤所属的区域的位置特征,从而可以为预测目标基因的表达类别提供肿瘤位置的参考,基于肿瘤发生位置与目标基因的表达类别之间的关联,可以较为准确预测目标基因的表达类别。
在利用训练样本对预设模型进行训练得到分类模型的过程可以如下:
将训练样本输入至预设模型,得到预设模型输出的目标基因的预测表达类别;并基于预测表达类别和表达类别标签,确定预设模型的损失值;基于损失值,更新预设模型的参数;接着将满足训练结束条件时的预设模型作为分类模型,训练结束条件为预设模型收敛或达到预设更新次数。
本实施方式中,如上所述,可以将每个样本对象的样本组,即每个样本对象被筛选出的影像组学特征样本、体素特征样本以及补充性影像组学特征样本输入预设模型中,由预设模型基于影像组学特征样本、体素特征样本以及补充性影像组学特征样本,进行不同尺度的处理,从而预测出该样本对象的目标基因的表达类别,即预测表达类别,之后,基于该样本对象的表达类别标签和预测表达类别构建损失函数,计算预设模型的损失值,基于该损失值,不断更新预设模型的参数,最终得到分类模型。
其中,在训练结束条件中,分类模型收敛可以是指损失值小于或等于预设损失值,或者,可以是损失值不再变小。其中,预设更新次数可以根据实际需求进行设置。
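上述预测、计算损失、更新参数直至收敛的训练流程,可以用如下简化代码示意(以逻辑回归代替DenseNet分类器,数据为随机构造的示例,仅用于展示损失随训练下降与收敛判断):

```python
import numpy as np

def bce_loss(p, y):
    """二分类交叉熵损失:-mean(y*log p + (1-y)*log(1-p))。"""
    eps = 1e-8
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))            # 每行为一个样本组融合后的特征集(示例)
y = (X[:, 0] > 0).astype(float)          # 表达类别标签:1为突变型,0为野生型(示例)
w, b, lr = np.zeros(10), 0.0, 0.5
losses = []
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # 预测属于突变型的概率
    losses.append(bce_loss(p, y))            # 基于预测表达类别和标签构建损失
    grad = p - y                              # 交叉熵对logit的梯度
    w -= lr * X.T @ grad / len(y)             # 基于损失值更新模型参数
    b -= lr * grad.mean()
```

当损失值不再变小或小于预设损失值时,即可停止训练并将当前模型作为分类模型。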
采用本公开的分类模型获取方法,由于训练样本是基于目标基因的表达类别标签筛选出的与目标基因的表达类别相关性较强的影像组学特征样本和体素样本,因此,分类模型学习到了影像组学特征和体素特征与目标基因的表达类别之间的相关性,从而提高了分类模型的可解释性。
在训练得到上述的分类模型后,由于送入预设模型的训练样本是基于临床数据、目标基因的表达类别筛选出的有效影像组学特征样本和体素特征样本,这样,预设模型在训练过程中可以学习到这些与目标基因的突变有关的特征与目标基因的表达类别之间的关联,也就是说可以提高分类模型的医学可解释性,进而该分类模型具有依据影像组学特征和体素特征预测目标基因的表达类别的能力。
这样,在又一种实施例中,本公开提供了一种目标基因的表达类别确定方法,参照图5所示,示出了目标基因的表达类别确定方法的步骤流程图,如图5所示,包括以下步骤:
步骤S501:获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征。
步骤S502:将多个目标影像组学特征和多个目标体素特征,输入至分类模型;其中,分类模型是按照上述实施例所述的分类模型获取方法得到的;
步骤S503:基于分类模型的输出,确定待测对象的目标基因的表达类别。
本实施例中,待测对象可以是指待测定目标基因的表达类别的患者,其中,可以获取待测对象的肿瘤区域的核磁共振图像,具体地,该核磁共振图像可以包括上述实施例所述的四种模态的图像,即包括T1加权类型的图像、T2加权类型的图像、对比度增强的T1加权类型的图像和T2流体衰减翻转恢复类型的图像,接着,可以从每种模态的图像中分割出三个亚区的图像,分别对每个亚区的图像进行多种尺度的特征提取,通过多种尺度的特征提取,可以获取上述实施例所述的一阶统计量特征、形态特征、纹理特征,从而得到待测对象对应的多个目标影像组学特征。
其中,获取待测对象的目标体素特征的过程可以参照上述实施中获取样本对象的体素特征的过程所述,在此不再赘述。
接着,可以将多个目标影像组学特征和多个目标体素特征,输入至分类模型,由于分类模型经过上述实施例的获取过程,已经学习到影像组学特征和体素特征与目标基因的表达类别之间的相关性,因此,具有基于影像组学特征和体素特征预测目标基因的表达类别的能力。
其中,分类模型的输出是目标基因属于每种表达类别的概率,即属于突变型的概率和属于野生型的概率,实际中,可以将概率大于预设概率值的类别作为目标基因的表达类别。
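基于输出概率确定表达类别的过程可以示意如下(预设概率值为假设的示例):

```python
import numpy as np

def decide_class(logits, threshold=0.5):
    """对模型输出的logits做softmax,得到属于[野生型, 突变型]的概率,
    将概率大于预设概率值的类别作为目标基因的表达类别。"""
    e = np.exp(logits - np.max(logits))      # 数值稳定的softmax
    probs = e / e.sum()
    labels = ["野生型", "突变型"]
    idx = int(np.argmax(probs))
    return labels[idx] if probs[idx] > threshold else "不确定"

result = decide_class(np.array([0.2, 1.4]))
```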
采用本实施方案的目标基因的表达类别确定方法,由于预设模型在训练过程中学习到这些与目标基因的突变有关的特征与目标基因的表达类别之间的关联,也就是说可以提高分类模型的医学可解释性,进而该分类模型具有依据影像组学特征和体素特征预测目标基因的表达类别的能力,从而在实际应用中,可以直接将待测对象的目标影像组学特征和目标体素特征输入到分类模型,即可得到准确的目标基因的表达类别,而无需通过焦磷酸测序或PCR等有创检测方法确定待测对象的突变状态,可以大大减轻患者的痛苦。
在一种可选示例中,在利用分类模型对待测对象的目标基因的表达类别进行预测时,也可以对影像组学特征进行筛选,一方面,可以筛选出特征表达较强的目标影像组学特征和目标体素特征,另一方面,可以筛选出与待测对象的肿瘤分级和临床数据具有较强关联的目标影像组学特征。
具体实施时,筛选出特征表达较强的目标影像组学特征的过程如下:可以确定每个目标影像组学特征对应的方差,并将方差大于第二方差阈值的目标影像组学特征保留;确定每个体素特征的方差,将方差大于所述第一方差阈值的目标体素特征保留。相应地,可以将保留的目标影像组学特征和保留的目标体素特征,输入至分类模型。
其中,第一方差阈值可以与上述实施例的第一方差阈值相同,第二方差阈值可以与上述实施例的第二方差阈值相同。也就是说对于待测对象,也可以采用方差选择法选择出表达能力强的目标体素特征和目标影像组学特征。
具体实施时,筛选出与待测对象的肿瘤分级和临床数据具有较强关联的目标影像组学特征的过程可以如下:
获取待测对象对应的第四筛选因子,基于第四筛选因子,对多个目标影像组学特征进行筛选;其中,第四筛选因子包括待测对象的临床数据和/或肿瘤分级数据。
其中,第四筛选因子可以包括临床数据或肿瘤分级数据,或者临床数据和肿瘤分级数据都包括。
在仅包括临床数据的情况下,可以计算目标影像组学特征与临床数据之间的互信息系数,进而基于互信息系数筛选出送入分类模型的目标影像组学特征;在仅包括肿瘤分级的情况下,可以计算目标影像组学特征与肿瘤分级之间的第三关系值,进而基于第三关系值筛选出送入分类模型的目标影像组学特征。
在包括临床数据和肿瘤分级的情况下,可以确定每个目标影像组学特征 与所述肿瘤分级标签之间的第三关系值,以及每个目标影像组学特征与所述临床数据之间的互信息系数;
接着,分别基于第三关系值对多个目标影像组学特征进行筛选,以及,基于互信息系数对多个目标影像组学特征进行筛选;之后,将基于第三关系值筛选出的目标影像组学特征和基于互信息系数筛选出的目标影像组学特征进行去重,得到筛选出的目标影像组学特征。
其中,计算第三关系值和互信息系数的过程可以参照上述实施例的描述,在此不再赘述。
采用此种实施方式,由于从多个目标影像组学特征中筛选出了与待测对象的临床数据密切相关的目标影像组学特征,以及与待测对象的肿瘤分级密切相关的目标影像组学特征,因此,在分类模型已经具备确定影像组学特征和体素特征与目标基因的表达类别之间的关联的能力的情况下,送入到分类模型的也是与待测对象的临床数据和肿瘤分级密切相关的目标影像组学特征,因此,可以进一步提高预测目标基因的表达类别的准确度。
基于相同的发明构思,本公开还提供了一种分类模型获取装置,参照图6所示,示出了本公开的分类模型获取装置的结构框架示意图,如图6所示,装置具体可以包括以下模块:
特征获取模块601,用于针对样本对象的肿瘤区域,获取肿瘤区域的多个影像组学特征和多个体素特征;
特征选择模块602,用于基于第一筛选因子对多个影像组学特征进行筛选,得到多个影像组学特征样本;以及基于第二筛选因子对多个体素特征进行筛选,得到多个体素特征样本;其中,第一筛选因子和第二筛选因子均包括样本对象的目标基因的表达类别标签;
样本构建模块603,用于基于多个影像组学特征样本和多个体素特征样本,构建训练样本;
模型训练模块604,用于以训练样本为输入,对预设模型进行训练,得到分类模型,分类模型用于预测目标基因的表达类别。
可选地,特征获取模块601,包括:
图像分割单元,用于从肿瘤区域的图像样本中提取属于肿瘤非增强区的第一亚区图像、属于肿瘤增强区的第二亚区图像,以及属于肿瘤周围水肿区的第三亚区图像;
特征提取单元,用于分别对第一亚区图像、第二亚区图像以及第三亚区图像进行特征提取,得到多个影像组学特征。
可选地,特征获取模块601,包括:
多类型图像获取单元,用于获取肿瘤区域的多种类型的图像样本,多种类型包括T1加权类型、T2加权类型、对比度增强的T1加权类型和T2流体衰减翻转恢复类型;
特征提取单元,用于分别对每种类型的图像样本进行特征提取;
特征组合单元,用于将提取到的每种类型的图像样本各自对应的影像组学特征进行组合,得到多个影像组学特征。
可选地,肿瘤区域为大脑中的胶质瘤区域,装置还包括:
位置信息获取模块,用于基于胶质瘤区域的图像样本,确定胶质瘤区域对应的位置信息;
位置特征获取模块,用于获取位置信息对应的位置特征;其中,位置信息包括胶质瘤区域所属的大脑区域,和/或胶质瘤区域在大脑中的位置坐标;
样本构建模块603,具体用于基于位置特征、多个影像组学特征样本和多个体素特征样本,构建训练样本。
可选地,第一筛选因子包括表达类别标签和肿瘤区域的肿瘤分级标签;特征选择模块602包括影像组学特征筛选单元,影像组学特征筛选单元包括:
第一筛选子单元,用于基于每个影像组学特征与表达类别标签之间的第一关系值,对多个影像组学特征进行筛选,得到多个第一影像组学特征;其中,第一关系值用于表征影像组学特征与目标基因的突变之间的关联程度;
第二筛选子单元,用于基于每个影像组学特征与肿瘤分级标签之间的第二关系值,对多个影像组学特征进行筛选,得到多个第二影像组学特征;其中,第二关系值用于表征影像组学特征与肿瘤分级之间的关联程度;
去重单元,用于对多个第一影像组学特征和多个第二影像组学特征进行去重,得到多个影像组学特征样本。
可选地,包括多个样本对象,装置还包括:
影像组学特征再筛选模块,用于针对全部样本对象所包括的全部影像组学特征,基于第三筛选因子对全部影像组学特征进行筛选,得到补充性影像 组学特征样本;其中,第三筛选因子包括多个样本对象各自对应的临床数据;
样本构建模块603,具体用于基于多个影像组学特征样本、多个体素特征样本和多个补充性影像组学特征样本,构建训练样本。
可选地,影像组学特征再筛选模块,包括:
矩阵创建单元,用于获取影像组学特征矩阵以及临床数据矩阵;其中,影像组学特征矩阵包括多个样本对象各自对应的多个影像组学特征,临床数据矩阵包括多个样本对象各自对应的临床数据;
互信息系数确定单元,用于基于影像组学特征矩阵和临床数据特征矩阵,获取互信息系数矩阵,互信息系数矩阵包括每个影像组学特征与临床数据之间的互信息系数,互信息系数用于表征影像组学特征与临床数据之间的关联程度;
补充筛选单元,用于基于互信息系数矩阵,对影像组学特征矩阵所包括的全部影像组学特征进行筛选,得到多个补充性影像组学特征样本。
可选地,第二筛选因子包括表达类别标签,特征选择模块602包括体素特征筛选单元,体素特征筛选单元包括:
第一方差确定单元,用于获取每个体素特征的方差,将方差大于第一方差阈值的体素特征保留,得到多个候选体素特征;
体素筛选单元,用于以表达类别标签为预测标签,以多个候选体素特征为输入,利用线性回归模型从多个候选体素特征中筛选出多个体素特征样本。
可选地,装置还包括:
第二方差确定单元,用于确定每个影像组学特征对应的方差,并将方差大于第二方差阈值的影像组学特征保留,得到多个候选影像组学特征;
影像组学特征筛选单元,用于基于第一筛选因子对多个候选影像组学特征进行筛选,得到多个影像组学特征样本。
可选地,特征获取模块包括影像组学特征提取单元,影像组学特征提取单元具体包括:
多维度特征提取子单元,获取肿瘤区域的图像样本的小波图像和LoG图像;分别对肿瘤区域的图像样本、小波图像和LoG图像进行多尺度特征提取,得到肿瘤区域的一阶统计量特征、纹理特征和形态特征;
多维度特征组合子单元,用于将肿瘤区域的一阶统计量特征、纹理特征 和形态特征进行组合,得到多个影像组学特征。
可选地,模型训练模块,包括:
输入单元,用于将训练样本输入至预设模型,得到预设模型输出的目标基因的预测表达类别;
损失确定单元,用于基于预测表达类别和表达类别标签,确定预设模型的损失值;
参数更新单元,用于基于损失值,更新预设模型的参数;
分类模型获取单元,用于将满足训练结束条件时的预设模型作为分类模型,训练结束条件为预设模型收敛或达到预设更新次数。
基于相同的发明构思,本公开还提供了一种目标基因的表达类别确定装置,参照图7所示,示出了本公开的目标基因的表达类别确定装置的框架示意图,如图7所示,装置具体可以包括以下模块:
特征获取模块701,用于获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征;
特征输入模块702,用于将多个目标影像组学特征和多个目标体素特征,输入至分类模型;其中,分类模型是按照上述实施例的分类模型获取方法得到的;
类别确定模块703,用于基于分类模型的输出,确定待测对象的目标基因的表达类别。
可选地,装置还包括:
第一影像组学特征筛选模块,用于确定每个目标影像组学特征对应的方差,并将方差大于第二方差阈值的目标影像组学特征保留;
体素特征筛选模块,用于确定每个体素特征的方差,将方差大于第一方差阈值的目标体素特征保留;
特征输入模块702,具体用于将保留的目标影像组学特征和目标体素特征,输入至分类模型。
可选地,装置还包括:
筛选因子获取模块,用于获取待测对象对应的第四筛选因子,第四筛选因子包括待测对象的临床数据和/或肿瘤分级数据;
第二影像组学特征筛选模块,用于基于第四筛选因子,对多个目标影像 组学特征进行筛选;
特征输入模块702,具体用于将多个筛选出的目标影像组学特征和多个目标体素特征,输入至分类模型。
可选地,第四筛选因子包括临床数据和肿瘤分级数据;第二影像组学特征筛选模块,包括:
数值确定单元,用于确定每个目标影像组学特征与肿瘤分级标签之间的第三关系值,以及每个目标影像组学特征与临床数据之间的互信息系数;
筛选单元,用于基于第三关系值对多个目标影像组学特征进行筛选,以及,基于互信息系数对多个目标影像组学特征进行筛选;
去重单元,用于将基于第三关系值筛选出的目标影像组学特征和基于互信息系数筛选出的目标影像组学特征进行去重,得到筛选出的目标影像组学特征。
基于相同的发明构思,本公开还提供一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现所述的分类模型获取方法,或实现所述的目标基因的表达类别确定方法。
参照图8所示,示出了本公开实施例的一种电子设备800的结构框图,如图8所示,本发明实施例提供的一种电子设备,该电子设备800可以用于执行分类模型获取方法或者目标基因的表达类别确定方法。
电子设备800可以包括存储器801、处理器802及存储在存储器上并可在处理器上运行的计算机程序,所述处理器802被配置为执行所述的分类模型获取方法或目标基因的表达类别确定方法。
如图8所示,在一实施例中,该电子设备800还可以包括输入装置803、输出装置804以及图像采集装置805,其中,在执行本公开实施例的方法时,图像采集装置805可以获取肿瘤区域的图像(包括图像样本和待测对象的肿瘤区域的图像),接着输入装置803可以获得图像采集装置805获取的图像,该图像可以由处理器802进行处理,该处理具体可以包括提取影像组学特征和体素特征,以及对影像组学特征和体素特征进行筛选,并对筛选后的特征构建训练样本训练预设模型,输出装置804可以输出分类模型,或者可以输出分类模型输出的表达类别结果。
当然,在一实施例中,存储器801可以包括易失性存储器和非易失性存储器,其中,易失性存储器可以理解为是随机存取存储器,用来存储和保存数据。非易失性存储器是指当电流关掉后,所存储的数据不会消失的电脑存储器。当然,本公开的分类模型获取方法或者目标基因的表达类别确定方法的计算机程序可以存储在易失性存储器和非易失性存储器中,或者存储在二者中的任意一个中。
基于相同的发明构思,本公开还提供一种计算机可读存储介质,其存储的计算机程序使得处理器执行所述的分类模型获取方法,或执行时实现所述的目标基因的表达类别确定方法。
基于相同的发明构思,本公开还提供一种计算机程序产品,包括计算机程序/指令,该计算机程序/指令被处理器执行时实现所述的分类模型的获取方法,或执行时实现所述的确定目标基因的表达类别的方法。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上对本公开所提供的一种分类模型获取方法、表达类别确定方法、装置、设备及介质进行了详细介绍,本文中应用了具体个例对本公开的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本公开的方法及其核心思想;同时,对于本领域的一般技术人员,依据本公开的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本公开的限制。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其它实施方案。本公开旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围的情况下进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。
本文中所称的“一个实施例”、“实施例”或者“一个或者多个实施例”意味着,结合实施例描述的特定特征、结构或者特性包括在本公开的至少一个实施例中。此外,请注意,这里“在一个实施例中”的词语例子不一定全指同一个实施例。
在此处所提供的说明书中,说明了大量具体细节。然而,能够理解,本公开的实施例可以在没有这些具体细节的情况下被实践。在一些实例中,并未详细示出公知的方法、结构和技术,以便不模糊对本说明书的理解。
在权利要求中,不应将位于括号之间的任何参考符号构造成对权利要求的限制。单词“包含”不排除存在未列在权利要求中的元件或步骤。位于元件之前的单词“一”或“一个”不排除存在多个这样的元件。本公开可以借助于包括有若干不同元件的硬件以及借助于适当编程的计算机来实现。在列举了若干装置的单元权利要求中,这些装置中的若干个可以是通过同一个硬件项来具体体现。单词第一、第二、以及第三等的使用不表示任何顺序。可将这些单词解释为名称。
最后应说明的是:以上实施例仅用以说明本公开的技术方案,而非对其限制;尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本公开各实施例技术方案的精神和范围。

Claims (19)

  1. 一种分类模型获取方法,其特征在于,所述方法包括:
    针对样本对象的肿瘤区域,获取所述肿瘤区域的多个影像组学特征和多个体素特征;
    基于第一筛选因子对多个所述影像组学特征进行筛选,得到多个影像组学特征样本;以及基于第二筛选因子对多个所述体素特征进行筛选,得到多个体素特征样本;其中,所述第一筛选因子和所述第二筛选因子均包括所述样本对象的目标基因的表达类别标签;
    基于多个所述影像组学特征样本和多个所述体素特征样本,构建训练样本;
    以所述训练样本为输入,对预设模型进行训练,得到所述分类模型,所述分类模型用于预测目标基因的表达类别。
  2. 根据权利要求1所述的方法,其特征在于,所述获取所述肿瘤区域的多个影像组学特征,包括:
    从所述肿瘤区域的图像样本中提取属于肿瘤非增强区的第一亚区图像、属于肿瘤增强区的第二亚区图像,以及属于肿瘤周围水肿区的第三亚区图像;
    分别对所述第一亚区图像、所述第二亚区图像以及所述第三亚区图像进行特征提取,得到多个所述影像组学特征。
  3. 根据权利要求1或2所述的方法,其特征在于,所述获取所述肿瘤区域的多个影像组学特征,包括:
    获取所述肿瘤区域的多种类型的图像样本,所述多种类型包括T1加权类型、T2加权类型、对比度增强的T1加权类型和T2流体衰减翻转恢复类型;
    分别对每种类型的图像样本进行特征提取;
    将提取到的所述每种类型的图像样本各自对应的影像组学特征进行组合,得到多个所述影像组学特征。
  4. 根据权利要求1所述的方法,其特征在于,所述肿瘤区域为大脑中的胶质瘤区域,所述方法还包括:
    基于所述胶质瘤区域的图像样本,确定所述胶质瘤区域对应的位置信息;
    获取所述位置信息对应的位置特征;其中,所述位置信息包括所述胶质瘤区域所属的大脑区域,和/或所述胶质瘤区域在所述大脑中的位置坐标;
    基于多个所述影像组学特征样本和多个所述体素特征样本,构建训练样本,包括:
    基于所述位置特征、多个所述影像组学特征样本和多个所述体素特征样本,构建所述训练样本。
  5. 根据权利要求1所述的方法,其特征在于,所述第一筛选因子包括所述表达类别标签和所述肿瘤区域的肿瘤分级标签;所述基于第一筛选因子对多个所述影像组学特征进行筛选,得到多个影像组学特征样本,包括:
    基于每个所述影像组学特征与所述表达类别标签之间的第一关系值,对多个所述影像组学特征进行筛选,得到多个第一影像组学特征;其中,所述第一关系值用于表征所述影像组学特征与所述目标基因的突变之间的关联程度;
    基于每个所述影像组学特征与所述肿瘤分级标签之间的第二关系值,对多个所述影像组学特征进行筛选,得到多个第二影像组学特征;其中,所述第二关系值用于表征所述影像组学特征与所述肿瘤分级之间的关联程度;
    对多个所述第一影像组学特征和多个第二影像组学特征进行去重,得到多个所述影像组学特征样本。
  6. 根据权利要求1-5任一所述的方法,其特征在于,包括多个所述样本对象,所述方法还包括:
    针对全部所述样本对象所包括的全部影像组学特征,基于第三筛选因子对所述全部影像组学特征进行筛选,得到补充性影像组学特征样本;其中,所述第三筛选因子包括多个所述样本对象各自对应的临床数据;
    基于多个所述影像组学特征样本和多个所述体素特征样本,构建训练样本,包括:
    基于多个所述影像组学特征样本、多个所述体素特征样本和多个所述补充性影像组学特征样本,构建所述训练样本。
  7. 根据权利要求6所述的方法,其特征在于,所述针对全部所述样本对象所包括的全部影像组学特征,基于第三筛选因子对所述全部影像组学特征进行筛选,得到补充性影像组学特征样本,包括:
    获取影像组学特征矩阵以及临床数据矩阵;其中,所述影像组学特征矩阵包括多个样本对象各自对应的多个影像组学特征,所述临床数据矩阵包括多个样本对象各自对应的临床数据;
    基于所述影像组学特征矩阵和所述临床数据特征矩阵,获取互信息系数矩阵,所述互信息系数矩阵包括每个影像组学特征与所述临床数据之间的互信息系数,所述互信息系数用于表征所述影像组学特征与所述临床数据之间的关联程度;
    基于所述互信息系数矩阵,对所述影像组学特征矩阵所包括的全部影像组学特征进行筛选,得到多个所述补充性影像组学特征样本。
  8. 根据权利要求1所述的方法,其特征在于,所述第二筛选因子包括所述表达类别标签,所述基于第二筛选因子对多个所述体素特征进行筛选,得到多个体素特征样本,包括:
    获取每个所述体素特征的方差,将方差大于第一方差阈值的体素特征保留,得到多个候选体素特征;
    以所述表达类别标签为预测标签,以多个所述候选体素特征为输入,利用线性回归模型从多个所述候选体素特征中筛选出多个所述体素特征样本。
  9. 根据权利要求1所述的方法,其特征在于,所述基于第一筛选因子对多个所述影像组学特征进行筛选,得到多个影像组学特征样本之前,所述方法还包括:
    确定每个所述影像组学特征对应的方差,并将方差大于第二方差阈值的影像组学特征保留,得到多个候选影像组学特征;
    所述基于第一筛选因子对多个所述影像组学特征进行筛选,得到多个影像组学特征样本,包括:
    基于第一筛选因子对多个所述候选影像组学特征进行筛选,得到多个影像组学特征样本。
  10. 根据权利要求1所述的方法,其特征在于,所述获取所述肿瘤区 域的多个影像组学特征,包括:
    获取所述肿瘤区域的图像样本的小波图像和LoG图像;
    分别对所述肿瘤区域的图像样本、所述小波图像和所述LoG图像进行多尺度特征提取,得到所述肿瘤区域的一阶统计量特征、纹理特征和形态特征;
    将所述肿瘤区域的一阶统计量特征、纹理特征和形态特征进行组合,得到多个所述影像组学特征。
  11. 根据权利要求1所述的方法,其特征在于,以所述训练样本为输入,对预设模型进行训练,得到所述分类模型,包括:
    将所述训练样本输入至所述分类模型,得到所述分类模型输出的目标基因的预测表达类别;
    基于所述预测表达类别和所述表达类别标签,确定所述分类模型的损失值;
    基于所述损失值,更新所述分类模型的参数;
    将满足训练结束条件时的分类模型作为所述分类模型,所述训练结束条件为所述分类模型收敛或达到预设更新次数。
  12. 一种目标基因的表达类别确定方法,其特征在于,所述方法包括:
    获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征;
    将所述多个目标影像组学特征和多个所述目标体素特征,输入至分类模型;其中,所述分类模型是按照权利要求1-11任一所述的分类模型的获取方法得到的;
    基于所述分类模型的输出,确定所述待测对象的目标基因的表达类别。
  13. 根据权利要求12所述的方法,其特征在于,所述获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征之后,所述方法还包括:
    确定每个所述体素特征对应的方差,将方差大于第一方差阈值的目标体素特征保留;
    确定每个所述目标影像组学特征对应的方差,并将方差大于第二方差阈值的目标影像组学特征保留;
    将多个所述目标影像组学特征和多个所述目标体素特征,输入至分类模 型,包括:
    将保留的目标影像组学特征和目标体素特征,输入至所述分类模型。
  14. 根据权利要求12或13所述的方法,其特征在于,所述获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征之后,所述方法还包括:
    获取所述待测对象对应的第四筛选因子,所述第四筛选因子包括所述待测对象的临床数据和/或肿瘤分级数据;
    基于所述第四筛选因子,对多个目标影像组学特征进行筛选;
    将多个所述目标影像组学特征和多个所述目标体素特征,输入至分类模型,包括:
    将多个筛选出的目标影像组学特征和多个所述目标体素特征,输入至所述分类模型。
  15. 根据权利要求14所述的方法,其特征在于,所述第四筛选因子包括所述临床数据和所述肿瘤分级数据;基于所述第四筛选因子,对多个目标影像组学特征进行筛选,包括:
    确定每个所述目标影像组学特征与所述肿瘤分级标签之间的第三关系值,以及每个所述目标影像组学特征与所述临床数据之间的互信息系数;
    基于所述第三关系值对多个所述目标影像组学特征进行筛选,以及,基于所述互信息系数对多个所述目标影像组学特征进行筛选;
    将基于所述第三关系值筛选出的目标影像组学特征和基于所述互信息系数筛选出的目标影像组学特征进行去重,得到所述筛选出的目标影像组学特征。
  16. 一种分类模型获取装置,其特征在于,所述装置包括:
    特征获取模块,用于针对样本对象的肿瘤区域,获取所述肿瘤区域的多个影像组学特征和多个体素特征;
    特征选择模块,用于基于第一筛选因子对多个所述影像组学特征进行筛选,得到多个影像组学特征样本;以及基于第二筛选因子对多个所述体素特征进行筛选,得到多个体素特征样本;其中,所述第一筛选因子和所述第二筛选因子均包括所述样本对象的目标基因的表达类别标签;
    样本构建模块,用于基于多个所述影像组学特征样本和多个所述体素特征样本,构建训练样本;
    模型训练模块,用于以所述训练样本为输入,对预设模型进行训练,得到所述分类模型,所述分类模型用于预测目标基因的表达类别。
  17. 一种目标基因的表达类别确定装置,其特征在于,所述装置包括:
    特征获取模块,用于获取待测对象的肿瘤区域的多个目标影像组学特征和多个目标体素特征;
    特征输入模块,用于将所述多个目标影像组学特征和多个所述目标体素特征,输入至分类模型;其中,所述分类模型是按照权利要求1-11任一所述的分类模型的获取方法得到的;
    类别确定模块,用于基于所述分类模型的输出,确定所述待测对象的目标基因的表达类别。
  18. 一种电子设备,其特征在于,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行时实现如权利要求1-11任一所述的分类模型获取方法,或执行时实现如权利要求12-15任一项所述的目标基因的表达类别确定方法。
  19. 一种计算机可读存储介质,其特征在于,其存储的计算机程序使得处理器执行如权利要求1-11任一所述的分类模型获取方法,或执行时实现如权利要求12-15任一所述的目标基因的表达类别确定方法。
PCT/CN2023/110354 2022-09-19 2023-07-31 分类模型获取方法、表达类别确定方法、装置、设备及介质 WO2024060842A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211140564.6 2022-09-19
CN202211140564.6A CN115457361A (zh) 2022-09-19 2022-09-19 分类模型获取方法、表达类别确定方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2024060842A1 true WO2024060842A1 (zh) 2024-03-28

Family

ID=84305867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110354 WO2024060842A1 (zh) 2022-09-19 2023-07-31 分类模型获取方法、表达类别确定方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN115457361A (zh)
WO (1) WO2024060842A1 (zh)

Also Published As

Publication number Publication date
CN115457361A (zh) 2022-12-09

