WO2021135774A1 - Tumor prediction method and device, cloud platform, and computer-readable storage medium - Google Patents

Tumor prediction method and device, cloud platform, and computer-readable storage medium Download PDF

Info

Publication number
WO2021135774A1
Authority
WO
WIPO (PCT)
Prior art keywords
tumor
data
target
features
image
Prior art date
Application number
PCT/CN2020/132372
Other languages
French (fr)
Chinese (zh)
Inventor
邓胡川
赵安江
高杰临
丁瑞鹏
谢庆国
Original Assignee
苏州瑞派宁科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州瑞派宁科技有限公司 filed Critical 苏州瑞派宁科技有限公司
Publication of WO2021135774A1 publication Critical patent/WO2021135774A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Definitions

  • This application relates to the technical field of medical data processing, and in particular to a tumor prediction method, device, cloud platform, and computer-readable storage medium.
  • Tumor tissues differ from the normal tissues from which they originate to varying degrees; this difference is called atypia.
  • The size of the atypia can be expressed by the degree of differentiation and maturity of the tumor tissue.
  • Small atypia indicates a high degree of differentiation and a low degree of malignancy; conversely, large atypia indicates a low degree of differentiation and a high degree of malignancy.
  • Malignant tumors are divided into early, middle, and late stages. Most early-stage malignant tumors can be cured, and treatment of middle-stage malignant tumors can relieve pain and prolong life. Tumor classification prediction is therefore particularly important.
  • At present, radiomics methods are mainly used to predict tumor classification. Such methods are mainly applied to Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) images.
  • Models are constructed from the selected features, and the constructed models are used for tumor diagnosis and clinical phenotype prediction.
  • Existing radiomics methods basically rely on manual, layer-by-layer delineation of images for a single tumor type (for example, kidney cancer or lung cancer), extract high-dimensional features such as gray-level intensity features, three-dimensional shape features, texture features, and wavelet features from the delineated region, and then use these high-dimensional features for analysis and research.
  • Extracting the same features, however, cannot fully express the deep information and hidden information of different tumor regions, so existing radiomics methods are not universal across different tumors.
  • the purpose of the embodiments of the present application is to provide a tumor prediction method, device, cloud platform, and computer-readable storage medium to solve at least one problem in the prior art.
  • The tumor prediction method can be executed on a cloud platform and can include: calling an acquired target prediction model to segment an acquired target image of a target patient to obtain a segmented image containing the tumor region; extracting high-dimensional features and depth features from the segmented image; screening the extracted high-dimensional features and depth features according to preset conditions; and calling the target prediction model to fuse the screened depth features and high-dimensional features to obtain fusion features, and predicting the tumor classification of the target patient according to the fusion features.
  • the target prediction model is obtained in the following manner:
  • obtaining the target prediction model locally includes:
  • the machine learning model that has achieved the optimal training effect and passed the verification is determined as the target prediction model.
  • Before training the machine learning model, the tumor prediction method includes:
  • the sample image data is obtained by processing the received patient data.
  • obtaining the sample image data by processing the received patient data includes:
  • the sample image data is selected from the parsed patient data according to a preset standard.
  • the preset criteria include whether the patient data is complete, whether it has been clinically verified, and whether it meets clinical indicators.
  • filtering the depth features and the high-dimensional features extracted from the target image according to preset conditions includes:
  • the target image includes a CT image, an MRI image, a PET image, a US image, a SPECT image, and/or a PET/CT image.
  • the target prediction model includes an AlexNet model or a VGGNet model.
  • the embodiment of the present application also provides a tumor prediction device, which may be set on a cloud platform and may include:
  • a segmentation unit configured to call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
  • An extraction unit configured to extract high-dimensional features and depth features from the obtained segmented image
  • a screening unit configured to screen the high-dimensional features and the depth features according to preset conditions
  • a fusion unit configured to call the target prediction model to fuse the selected depth feature and the high-dimensional feature to obtain a fusion feature
  • a prediction unit configured to predict the tumor classification of the target patient according to the fusion feature.
  • the tumor prediction device further includes:
  • an acquiring unit configured to obtain the target prediction model by using acquired sample image data to train and verify a pre-built machine learning model, and to determine the machine learning model that achieves the best training effect and passes verification as the target prediction model, wherein the sample image data includes training data and verification data and matches the target image.
  • the embodiment of the present application also provides a cloud platform, which includes the above-mentioned tumor prediction device.
  • the cloud platform also includes:
  • a data management device configured to manage user permissions and received user data, the user data including patient data and user account information.
  • the cloud platform further includes one or more of the following devices:
  • a resource monitoring device which is configured to monitor resource usage and network performance parameters according to received monitoring instructions
  • a visualization processing device configured to display the received user data, the processing result output by the tumor prediction device, and the constructed nomogram and/or survival curve diagram;
  • a data storage device configured to store various data output by the data management device and the tumor prediction device
  • a control device configured to operate the tumor prediction device, the data management device, the resource monitoring device, the visualization processing device, and the data storage device.
  • An embodiment of the present application also provides a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed, implements the above-mentioned tumor prediction method.
  • The embodiments of the application predict the tumor classification of the target patient by calling the target prediction model on the cloud platform, rather than by executing on multiple systems or pieces of software, which simplifies the operating environment for tumor classification prediction and can improve the accuracy of tumor classification prediction.
  • The embodiments of the present application extract not only high-dimensional features but also depth features from the segmented images. This takes into account the differences in the features that need to be extracted from images of different tumors or different imaging devices and fully interprets tumor heterogeneity, so the method can fully express the deep information and hidden information of different tumor regions and is therefore universally applicable.
  • the tumor prediction method provided by the embodiments of the present application can realize automatic segmentation of images, thereby improving the speed and accuracy of image segmentation, and can also save labor and time costs.
  • Fig. 1 is an application environment diagram of a tumor prediction method in an embodiment of the present application
  • Fig. 2 is a schematic flowchart of a tumor prediction method provided in an embodiment of the present application
  • Fig. 3 is a schematic structural diagram of a tumor prediction device provided in an embodiment of the present application.
  • Fig. 4 is a schematic structural diagram of a cloud platform provided in an embodiment of the present application.
  • As used herein, the terms comprising/including indicate the presence or addition of the stated features, steps, or elements, but do not exclude the presence or addition of one or more other features, steps, or elements.
  • and/or as used herein includes any and all combinations of one or more of the associated listed items.
  • Fig. 1 is an application environment diagram of a tumor prediction method in an embodiment.
  • this method can be applied to a cloud platform.
  • the cloud platform includes a terminal 100 and a server 200 connected through a network.
  • This method can be executed in the terminal 100 or the server 200.
  • The terminal 100 can directly acquire patient data, including image data of a target patient, from a medical device and execute the above method on the terminal side; alternatively, the server 200 can obtain the patient data of the target patient and execute the above-mentioned method.
  • the terminal 100 may specifically be a desktop terminal (for example, a desktop computer) or a mobile terminal (for example, a notebook computer or a tablet computer) or the like.
  • the server 200 may be implemented as an independent server or a server cluster composed of multiple servers.
  • Figure 2 is a tumor prediction method provided in an embodiment of the application.
  • the method can be executed on a cloud platform and can include the following steps:
  • the target prediction model may be any neural network model used to predict tumor classification, for example, an AlexNet model or a VGGNet model.
  • The AlexNet model mainly includes an 8-layer structure, for example 5 convolutional layers and 3 fully connected layers.
  • The VGGNet model can include a 16-layer structure, for example 8 convolutional layers, 5 pooling layers, and 3 fully connected layers, or a 19-layer structure, but is not limited thereto.
  • the target prediction model can be obtained from an external device or locally.
  • The external device here can refer to a device external to the cloud platform, in which case the local device refers to the cloud platform; alternatively, the external device can refer to a device on the cloud platform other than the tumor prediction device, in which case the local device refers to the tumor prediction device.
  • Obtaining the target prediction model locally can include:
  • Sample image data can include at least one of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), US (ultrasound), and SPECT (Single-Photon Emission Computed Tomography) image data, as well as multimodal image data such as PET/CT (Positron Emission Tomography/Computed Tomography).
  • the sample image data can be divided into training data and verification data. Among them, the training data can be used to train the machine learning model, and the verification data can be used to verify the training result of the machine learning model. In the sample image data, the ratio of these two types of data is generally 7:3 or 8:2.
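The 7:3 (or 8:2) split described above can be sketched in a few lines of Python. The helper name `split_sample_data` and the record IDs are illustrative, not part of the application:

```python
import random

def split_sample_data(image_records, train_ratio=0.7, seed=42):
    """Randomly split sample image records into training and verification
    sets at a preset ratio (commonly 7:3 or 8:2, as described above)."""
    records = list(image_records)
    random.Random(seed).shuffle(records)  # reproducible shuffle
    cut = int(len(records) * train_ratio)
    return records[:cut], records[cut:]   # (training data, verification data)

# Example: 10 hypothetical image IDs split 7:3.
train, val = split_sample_data([f"img_{i:03d}" for i in range(10)], 0.7)
```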
  • the patient data may include image data of different types of patients and patient case information, such as gender, age, height, weight, and so on.
  • A large amount of corresponding image data can be selected from the local database as sample image data according to the received instruction, with most of the image data randomly assigned as training data and the remaining image data used as verification data according to the preset ratio.
  • the received patient data can be formatted according to the received instruction.
  • Different types of image data can be parsed into DICOM format; then, the corresponding image data can be selected from the parsed patient data as the sample image data according to a preset standard.
  • The preset standard may include whether the patient data is complete, whether it has been clinically verified, whether it meets clinical indicators, and so on. For example, if a patient's clinical data or case information is incomplete, that patient's image data is not selected as sample image data. It can also be judged whether the patient's image data has been confirmed by clinical means such as biopsy testing; if the patient is diagnosed with a malignant tumor, the image data can be selected as sample image data. Likewise, if the doctor judges that the lesion is too small or abnormal in shape and does not meet the requirements, the patient's image data is not selected. The patient's image data can also be selected according to the doctor's actual research needs in combination with clinical indicators.
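The completeness and clinical-verification checks described above can be sketched as a simple record filter. All field names and the 5 mm lesion threshold below are hypothetical placeholders, not values taken from the application:

```python
def select_sample_records(patients):
    """Filter parsed patient records by a preset standard: the record must be
    complete, clinically verified (e.g., confirmed by biopsy), and must meet a
    clinical indicator (here, a hypothetical minimum lesion size)."""
    selected = []
    for p in patients:
        complete = all(p.get(k) is not None
                       for k in ("image", "gender", "age", "height", "weight"))
        verified = p.get("biopsy_confirmed", False)
        meets_indicators = p.get("lesion_mm", 0) >= 5  # hypothetical threshold
        if complete and verified and meets_indicators:
            selected.append(p)
    return selected
```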
  • the obtained training data can be used to train the machine learning model.
  • The segmentation model in the machine learning model can be trained according to received user instructions to separate the tumor area and the background area in the sample image data; high-dimensional features and depth features can then be extracted from the tumor area and screened, and the screened high-dimensional features and depth features can be fused to obtain the fusion features.
  • Machine learning algorithms such as support vector machines, LASSO logistic regression, or random forests are used to process the fusion features together with the tumor classification labels (for example, a benign tumor is labeled 1 and a malignant tumor is labeled 0), and the features most relevant to whether the tumor is benign or malignant are selected from the fusion features.
  • The training effect can then be considered optimal, and the network parameters of the machine learning model are determined.
  • The verification data can be used to perform 5-fold or 10-fold cross-validation on the trained machine learning model to calculate the corresponding accuracy, precision, and recall.
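The three validation metrics named above have standard definitions; a minimal sketch of computing them for one cross-validation fold (the function name is illustrative) is:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for one fold of cross-validation.
    `positive` is the label treated as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true positives, how many were found
    return accuracy, precision, recall
```

In k-fold validation these values would be averaged over the k held-out folds.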
  • the trained machine learning model can be determined as the target prediction model.
  • S2 Call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region.
  • the target image may refer to an image obtained by scanning a target patient with a medical imaging device, which contains the area where the tumor site of the target patient is located (ie, the tumor area).
  • the target image may include a CT image, an MRI image, a PET image, a US image, a SPECT image, and/or a PET/CT image.
  • the sample image data is matched with the target image, including matching of type and/or content, etc., for example, they are all CT images of the patient, or all lung images.
  • After the target prediction model is acquired and the target image of the target patient is received, the acquired target prediction model can be called to segment the target image into the tumor region and the background region and to extract the tumor region in the target image, thereby obtaining a segmented image containing the tumor area.
  • High-dimensional features may refer to data with a higher dimensionality, which may include at least one of histogram features, three-dimensional shape features, texture features, and filtering features.
  • the histogram feature can indicate the gray level of the image, which can include features such as maximum, minimum, median, mean, span (maximum-minimum), variance, standard deviation, mean absolute deviation, and/or root mean square.
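The histogram features listed above are all simple statistics of the gray levels inside the region of interest; a sketch of computing them from a flat list of gray values (the function name is illustrative) is:

```python
import math
import statistics

def histogram_features(gray_values):
    """Gray-level histogram features for one region of interest: maximum,
    minimum, median, mean, span (max - min), variance, standard deviation,
    mean absolute deviation, and root mean square."""
    n = len(gray_values)
    mean = statistics.fmean(gray_values)
    return {
        "maximum": max(gray_values),
        "minimum": min(gray_values),
        "median": statistics.median(gray_values),
        "mean": mean,
        "span": max(gray_values) - min(gray_values),
        "variance": statistics.pvariance(gray_values),
        "std": statistics.pstdev(gray_values),
        "mean_abs_dev": sum(abs(v - mean) for v in gray_values) / n,
        "rms": math.sqrt(sum(v * v for v in gray_values) / n),
    }
```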
  • Three-dimensional shape features can include volume, surface area, compactness features, sphericity, spherical asymmetry, and/or surface area to volume ratio, etc.
  • Texture features can indicate information about the relative positions of the various gray levels of the image, and can include features such as the Gray Level Dependence Matrix (GLDM), Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Size Zone Matrix (GLSZM), and/or Neighborhood Gray Tone Difference Matrix (NGTDM).
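As a minimal illustration of one of these texture descriptors, a gray level co-occurrence matrix for a single pixel offset can be built as follows. This is a toy sketch, not the application's implementation; real radiomics pipelines quantize gray levels and aggregate several offsets/directions:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for a 2-D image and one offset.
    m[i][j] counts pixel pairs where a pixel of gray level i has a
    neighbour of gray level j at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m
```

Scalar texture features (contrast, homogeneity, etc.) are then computed as weighted sums over the normalized matrix.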
  • Filtering features can be obtained as follows: image texture information is decomposed by wavelet transform to obtain high-frequency and low-frequency sampled images, and the image is decomposed into multiple components; that is, high-pass and low-pass filtering can be performed along the X, Y, and Z directions respectively to obtain sub-bands in different directions. For each sub-band, the histogram features, three-dimensional shape features, and texture features are calculated separately to obtain the filtering features. Filtering features can be used to eliminate noise mixed into the image and improve its clarity.
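One level of a 1-D Haar wavelet decomposition illustrates the high-pass/low-pass split described above; this is a simplified sketch (the application does not specify the wavelet family), and applying it along X, Y, and Z in turn would yield the directional sub-bands:

```python
import math

def haar_step(signal):
    """One level of a 1-D Haar wavelet transform. The low-pass
    (approximation) sub-band keeps the coarse gray-level trend; the
    high-pass (detail) sub-band keeps edges and noise. Assumes an
    even-length signal."""
    low = [(signal[i] + signal[i + 1]) / math.sqrt(2)
           for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal), 2)]
    return low, high
```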
  • the depth feature may refer to the hidden information of the image that cannot be seen by the naked eye and cannot be represented by ordinary features. It is a feature that needs to be extracted by calling a deep neural network and can be used to predict tumor classification.
  • the target prediction model can be called to extract from the target image.
  • The first convolutional layer in the AlexNet model can be called to perform convolution processing such as local response normalization and pooling on the input target image and to output the extracted feature map; the second convolutional layer can then be called to perform local response normalization, max pooling, and other convolution processing on that feature map and to output the corresponding feature map; the third and fourth convolutional layers can then be called in turn to perform convolution processing and output the corresponding feature maps; the fifth convolutional layer can be called to perform max pooling directly on the feature map output by the fourth convolutional layer; and finally the three fully connected layers are called to perform classification processing, extracting the depth features and outputting them.
  • The convolutional layers in the VGGNet model can be called to perform convolution processing on the input target image, the pooling layers are then called for max pooling, and finally the fully connected layers are called to perform classification processing, extracting the depth features and outputting them.
  • each network layer processes the feature map extracted by the previous network layer connected to it and outputs the extracted feature map to the next network layer connected to it.
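The two basic operations these layers chain together can be sketched in plain Python. This is a toy illustration of convolution and max pooling on small integer matrices, not the AlexNet/VGGNet implementation itself:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (strictly, cross-correlation, as in most
    CNNs): each output value is the kernel-weighted sum over one patch."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)] for r in range(out_h)]

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2, as used between convolutional stages
    to downsample the feature map."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]
```

Each convolutional stage applies `conv2d_valid` with learned kernels (plus a nonlinearity) and then downsamples with pooling, exactly the feature-map hand-off described above.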
  • S4 Perform screening processing on the extracted high-dimensional features and depth features according to preset conditions.
  • The preset conditions can be set according to empirical data or actual needs, and indicate that the selected features have the greatest impact on the prediction target.
  • The greatest impact on the prediction target can be reflected in correlation, but also in other measurement indicators.
  • In the sparse-representation (LASSO-based) screening, the estimated coefficient is obtained as \hat{\alpha} = \arg\min_{\alpha} \left( \|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1 \right), where: y is the classification label set of the sparse representation (e.g., benign/malignant, or metastasis/no metastasis); D = [d_1, d_2, \ldots, d_i, \ldots, d_k] is the feature set composed of the high-dimensional features and depth features, with d_i denoting one high-dimensional or depth feature; \alpha is the sparse-representation coefficient, in matrix form, and \hat{\alpha} is its estimated value, which contains some non-zero elements; and \lambda is a regularization parameter greater than 0, used to balance fidelity against sparsity. After \hat{\alpha} is obtained, the features whose coefficients are non-zero are selected as the screened high-dimensional features and depth features.
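A minimal sketch of this selection rule, using cyclic coordinate descent with the soft-thresholding operator that produces the exact zeros LASSO is known for (function names and the toy data are illustrative, not from the application):

```python
def soft_threshold(z, lam):
    """Proximal operator of lam * |.|: the source of exact zeros in LASSO."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Minimise (1/n)||y - X a||_2^2-type LASSO objective by cyclic
    coordinate descent. Columns of X are candidate features (high-dimensional
    and depth); coefficients driven to zero mean the feature is dropped."""
    n, k = len(X), len(X[0])
    a = [0.0] * k
    for _ in range(n_iter):
        for j in range(k):
            # partial residual excluding feature j
            r = [y[i] - sum(a[m] * X[i][m] for m in range(k) if m != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            a[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return a

def select_features(a, names):
    """Keep only the features whose estimated coefficient is non-zero."""
    return [nm for nm, coef in zip(names, a) if coef != 0.0]
```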
  • The Fisher-based discriminant method is a qualitative classification method based mainly on the idea of projection. It obtains the eigenvectors corresponding to the largest eigenvalues of the high-dimensional features and depth features, projects the image data into the space spanned by these eigenvectors, and in that space selects the high-dimensional features and depth features for which the distance within the same category is smallest and the distance between different categories is largest.
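The "smallest within-class distance, largest between-class distance" idea is often reduced to a per-feature Fisher score; the sketch below uses the common two-class criterion (mu1 - mu0)^2 / (s1^2 + s0^2), which is an assumption about the exact criterion, not stated in the application:

```python
def fisher_score(values, labels):
    """Per-feature Fisher criterion for binary labels 0/1: large when
    same-class samples cluster tightly (small variances) and class means
    are far apart."""
    g1 = [v for v, l in zip(values, labels) if l == 1]
    g0 = [v for v, l in zip(values, labels) if l == 0]
    m1 = sum(g1) / len(g1)
    m0 = sum(g0) / len(g0)
    v1 = sum((v - m1) ** 2 for v in g1) / len(g1)
    v0 = sum((v - m0) ** 2 for v in g0) / len(g0)
    denom = v1 + v0
    return float("inf") if denom == 0 else (m1 - m0) ** 2 / denom
```

Features would then be ranked by score and the top-ranked ones retained.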
  • S5 Call the target prediction model to fuse the selected high-dimensional features and deep features to obtain fusion features, and predict the tumor classification of the target patient based on the fusion features.
  • After the high-dimensional features and depth features that meet the preset conditions have been screened, the target prediction model can be called to fuse the screened high-dimensional features and depth features to obtain the fusion features.
  • the obtained fusion feature can be matched with the preset tumor classification label, so as to predict the tumor classification of the target patient according to the matching result.
  • For example, if the fusion feature matches the benign tumor label, the patient's tumor can be predicted to be benign; if the fusion feature matches the malignant tumor label, the patient's tumor can be predicted to be malignant.
  • the matching relationship between the fusion feature and the tumor classification label may be determined when the machine learning model is trained.
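One simple way to realize such matching is nearest-reference lookup: compare the fusion feature vector against a per-label reference vector fixed at training time. The application does not specify the matching rule, so the Euclidean nearest-centroid scheme, label names, and vectors below are all illustrative:

```python
import math

def match_tumor_label(fusion_feature, label_references):
    """Return the tumor classification label whose reference fusion vector
    (hypothetically determined during training) is closest in Euclidean
    distance to the patient's fusion feature."""
    return min(label_references,
               key=lambda lbl: math.dist(fusion_feature, label_references[lbl]))
```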
  • The embodiments of the present application predict the tumor classification of the target patient by calling the target prediction model on the cloud platform rather than by executing on multiple systems or pieces of software, which simplifies the operating environment for tumor classification prediction, prevents data loss and incomplete data, and also enables data sharing and improves data processing efficiency.
  • Because the sample image data comes from multiple imaging devices or medical institutions, the established target prediction model is highly reproducible, versatile, and robust to interference, and can be widely applied.
  • The embodiments of the present application extract not only high-dimensional features but also depth features from the segmented images. This takes into account the differences in the features that need to be extracted from images of different tumors or different imaging devices and fully interprets tumor heterogeneity, so the method can fully express the deep information and hidden information of different tumor regions and is therefore universally applicable.
  • the tumor prediction method provided by the embodiments of the present application can realize automatic segmentation of images, thereby improving the speed and accuracy of image segmentation, and can also save labor and time costs.
  • an embodiment of the present application also provides a tumor prediction apparatus 300, which may be set on a cloud platform and may include:
  • An obtaining unit 310 which may be configured to obtain a target prediction model for tumor prediction
  • the segmentation unit 320 may be configured to call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
  • An extraction unit 330 which may be configured to extract high-dimensional features and depth features from the obtained segmented images
  • the screening unit 340 which can be configured to screen high-dimensional features and depth features according to preset conditions;
  • the fusion unit 350 may be configured to call the target prediction model to fuse the selected depth features and high-dimensional features to obtain fusion features;
  • the prediction unit 360 may be configured to predict the tumor classification of the target patient according to the fusion feature.
  • the acquiring unit 310 may also be specifically configured to use the acquired sample image data to train a pre-built machine learning model, and determine the machine learning model that has reached the optimal training effect and passed the verification as the target prediction model.
  • With the tumor prediction device provided in the embodiments of the present application, fully automatic segmentation of tumor regions can be realized, the accuracy of tumor classification prediction can be improved, and doctors can be effectively assisted in diagnosis.
  • The embodiment of the present application also provides another cloud platform, which may include the tumor prediction device 300 in Fig. 3 and may also include a data management device 100 configured to manage user permissions and user data, the user data including patient data and user account information.
  • The data management device 100 can manage user permissions based on account authorization information. For example, a user account may authorize N imaging centers or medical institutions to upload data but authorize only M of them to operate on data: all N authorized imaging centers or medical institutions can upload data to this account, but only M of them have the right to manipulate data, where N and M are both positive integers greater than 1 and N is greater than M.
  • The data management device 100 can also manage the account information registered by users, filter the patient data uploaded by users to retain the patient data that meets preset requirements, and send patient data that meets the preset size and format requirements to the data storage device for storage.
  • the cloud platform may also include one or more of a resource monitoring device 200, a visualization processing device 400, a data storage device 500, and a control device 600.
  • The resource monitoring device 200 can be configured to monitor resource usage and network performance parameters (including CPU, memory, GPU, concurrency, bandwidth, packet loss rate, etc.) according to received monitoring instructions, and to perform the corresponding scheduling according to resource usage.
  • the visual processing device 400 may display corresponding data according to the received instructions (including user instructions or preset script instructions).
  • The visualization processing device 400 can display the received user data; it can also display the processing results output by the tumor prediction device 300 (including image segmentation results and/or tumor classification prediction results); and it can combine the radiomics signature, obtained by linearly combining the screened high-dimensional features and depth features with their feature coefficients, with clinical indicators (for example, age, gender, and gene mutations) to construct personalized, intuitive nomograms and/or survival curves that effectively assist doctors in medical diagnosis.
  • the data storage device 500 may be used to store various data output by the data management device 100 and/or the tumor prediction device 300.
  • the data storage device 500 can be provided with a MySQL database, which can be used to store high-dimensional features, depth features, system dynamics information, DICOM file storage paths, system usage records, and the like.
  • the data storage device 500 supports cloud storage and real-time viewing of raw data and analysis results, as well as data sharing across regions and multiple centers.
  • the control device 600 can be used to control the operations of the data management device 100, the resource monitoring device 200, the tumor prediction device 300, the visualization processing device 400, and the data storage device 500.
  • the present application also provides a computer-readable storage medium in which a computer program is stored, and when the computer program is executed, the corresponding function described in the foregoing method embodiment can be realized.
  • the computer program can also be run on the terminal or server as shown in Figure 1.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

Disclosed in embodiments of the present application are a tumor prediction method and device, a cloud platform, and a computer-readable storage medium. The tumor prediction method is executed on the cloud platform, and comprises: calling an obtained target prediction model to segment an obtained target image of a target patient, so as to obtain a segmented image containing a tumor area; extracting a high-dimensional feature and a depth feature from the obtained segmented image; performing screening processing on the high-dimensional feature and the depth feature according to a preset condition; and calling the target prediction model to fuse the screened depth feature and high-dimensional feature to obtain a fusion feature, and predicting a tumor classification of the target patient according to the fusion feature. By means of the technical solution provided in the embodiments of the present application, the accuracy of a tumor classification prediction result can be improved, and a doctor can be efficiently assisted in diagnosis.

Description

肿瘤预测方法、装置、云平台及计算机可读存储介质 Tumor prediction method, device, cloud platform and computer-readable storage medium

技术领域 Technical Field
本申请涉及医学数据处理技术领域,特别涉及一种肿瘤预测方法、装置、云平台及计算机可读存储介质。This application relates to the technical field of medical data processing, and in particular to a tumor prediction method, device, cloud platform, and computer-readable storage medium.
背景技术 Background Art
无论在细胞形态上，还是在组织结构上，肿瘤组织都与其发源的正常组织有不同程度的差异，这种差异称为异型性。异型性的大小可以用肿瘤组织分化成熟的程度来表示。肿瘤组织异型性小，说明其分化程度高，则其恶性程度低；反之，说明肿瘤组织分化程度低，则其恶性程度高。In both cell morphology and tissue structure, tumor tissue differs to varying degrees from the normal tissue from which it originates; this difference is called atypia. The degree of atypia can be expressed by how well the tumor tissue is differentiated and matured. Small atypia of tumor tissue indicates a high degree of differentiation and thus low malignancy; conversely, large atypia indicates a low degree of differentiation and thus high malignancy.
恶性肿瘤分为早期、中期和晚期，早期的恶性肿瘤大多可以治愈，中期的恶性肿瘤可以减轻痛苦，延长生命，因此肿瘤分类预测显得尤为重要。目前，主要使用影像组学的方法来进行肿瘤分类预测，该方法主要通过在计算机断层成像(Computed Tomography,简称CT)、磁共振成像(Magnetic Resonance Imaging,简称MRI)、正电子发射断层成像(Positron Emission Tomography,简称PET)等医学影像中的感兴趣区域提取大量的定量影像特征，利用机器学习方法对这些定量影像特征进行筛选、分析，选择出与临床问题相关联最有价值的特征，利用选择出的特征构建模型，并且利用所构建的模型进行肿瘤的诊断和临床表型预测。Malignant tumors are divided into early, intermediate and late stages. Most early-stage malignant tumors can be cured, and treatment at the intermediate stage can relieve pain and prolong life, so tumor classification prediction is particularly important. At present, imaging omics methods are mainly used for tumor classification prediction. These methods extract a large number of quantitative image features from regions of interest in medical images such as computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET), use machine learning to screen and analyze these quantitative features and select the most valuable features associated with the clinical problem, build a model from the selected features, and use the constructed model for tumor diagnosis and clinical phenotype prediction.
在实现本申请过程中,发明人发现现有技术中至少存在如下问题:In the process of realizing this application, the inventor found at least the following problems in the prior art:
(1)现有的影像组学方法需要在多个系统或软件上实现不同的处理步骤，例如，在3D slicer、Mazda等图像处理软件上进行肿瘤区域的分割，然后将结果导入到MATLAB、Python等软件上训练模型，最后利用SPSS、R等软件绘制曲线下面积(Area Under Curve,简称AUC)、生存曲线等。这种方式不仅需要配置多个软件下所需的繁琐环境，多系统处理也使得数据的收集和整理工作变得繁重，易造成数据丢失、数据信息不全、数据无法共享等问题。(1) Existing imaging omics methods need to implement different processing steps on multiple systems or software packages: for example, the tumor region is segmented in image processing software such as 3D slicer or Mazda, the results are then imported into software such as MATLAB or Python to train a model, and finally software such as SPSS or R is used to plot the area under the curve (AUC), survival curves, and so on. This approach not only requires configuring the cumbersome environments of multiple software packages; the multi-system processing also makes data collection and organization burdensome, easily causing problems such as data loss, incomplete data information, and inability to share data.
(2)现有的影像组学方法一般采用单个数据中心中的影像数据进行试验，这使得所建立的肿瘤分类预测模型可重复性、通用性、抗干扰性较低并且难以广泛使用。(2) Existing imaging omics methods generally experiment with imaging data from a single data center, which makes the established tumor classification prediction model poor in repeatability, generality and robustness to interference, and difficult to apply widely.
(3)现有的影像组学方法基本只针对某一种肿瘤(例如，肾癌或肺癌)进行图像的逐层手动勾画分割，从勾画出的分割区域中提取出灰度强度特征、三维形状特征、纹理特征以及小波特征等高维特征，再利用这些高维特征进行分析研究。然而，由于不同的肿瘤之间存在着较大的差异性，提取相同的特征无法充分表达不同肿瘤区域的深层信息和隐藏信息。因此，现有的影像组学方法对于不同的肿瘤不具有普适性。(3) Existing imaging omics methods basically perform slice-by-slice manual delineation and segmentation of images for only one tumor type (for example, kidney cancer or lung cancer), extract high-dimensional features such as gray-level intensity features, three-dimensional shape features, texture features and wavelet features from the delineated segmentation region, and then use these high-dimensional features for analysis and research. However, because there are large differences between different tumors, extracting the same features cannot fully express the deep and hidden information of different tumor regions. Therefore, existing imaging omics methods are not universally applicable to different tumors.
(4)在现有的影像组学技术中,肿瘤的精确快速分割是一个极大的挑战。在精准性方面,仍然以医生的手动分割结果为金标准,这很大程度上依赖医生的专业和经验,可复现率低。传统的手动分割方法主要由专业的影像科医生进行手动分割,但他们无法大批量地处理病人的影像数据,无法避免耗时耗力的局限性。即便是采用半自动分割方法,也需要医生对每个病人的多张图像数据进行目标区域和背景区域的标注,虽然降低了医生的操作频率,但这仍然耗时费力。(4) In the existing imaging omics technology, the precise and rapid segmentation of tumors is a great challenge. In terms of accuracy, the doctor’s manual segmentation results are still the gold standard, which largely relies on the doctor’s expertise and experience, and the reproducibility rate is low. The traditional manual segmentation method is mainly performed by professional imaging doctors, but they cannot process the patient's image data in large quantities, and cannot avoid the limitation of time-consuming and labor-intensive. Even if the semi-automatic segmentation method is adopted, it is necessary for the doctor to mark the target area and the background area on multiple images of each patient. Although the operation frequency of the doctor is reduced, it is still time-consuming and labor-intensive.
发明内容 Summary of the Invention
本申请实施例的目的是提供一种肿瘤预测方法、装置、云平台及计算机可读存储介质,以解决现有技术中存在的至少一种问题。The purpose of the embodiments of the present application is to provide a tumor prediction method, device, cloud platform, and computer-readable storage medium to solve at least one problem in the prior art.
为了解决上述技术问题,本申请实施例提供了一种肿瘤预测方法,该肿瘤预测方法可以在云平台上执行,并且可以包括:In order to solve the above technical problems, embodiments of the present application provide a tumor prediction method. The tumor prediction method can be executed on a cloud platform and can include:
调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像;Calling the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor area;
从所得到的所述分割图像中提取高维特征和深度特征;Extracting high-dimensional features and depth features from the obtained segmented image;
按照预设条件对所述高维特征和所述深度特征进行筛选处理;Screening the high-dimensional features and the depth features according to preset conditions;
调用所述目标预测模型对筛选出的所述深度特征和所述高维特征进行融合以得到融合特征,并且根据所述融合特征预测所述目标患者的肿瘤分类。The target prediction model is called to fuse the selected depth features and the high-dimensional features to obtain fusion features, and the tumor classification of the target patient is predicted according to the fusion features.
可选地,所述目标预测模型通过以下方式获取:Optionally, the target prediction model is obtained in the following manner:
从外部装置或本地获取所述目标预测模型。Obtain the target prediction model from an external device or locally.
可选地,从本地获取所述目标预测模型包括:Optionally, obtaining the target prediction model locally includes:
利用所获取的样本影像数据对预先构建的机器学习模型进行训练和验证,所述样本影像数据包括训练数据和验证数据,并且与所述目标图像相匹配;Training and verifying a pre-built machine learning model by using the acquired sample image data, the sample image data including training data and verification data, and matching the target image;
将训练效果达到最优并且通过验证的所述机器学习模型确定为所述目标预测模型。The machine learning model that has achieved the optimal training effect and passed the verification is determined as the target prediction model.
可选地,在对所述机器学习模型进行训练之前,所述肿瘤预测方法包括:Optionally, before training the machine learning model, the tumor prediction method includes:
从本地数据库中选取预先存储的所述样本影像数据;或者Select the pre-stored sample image data from a local database; or
通过对所接收的患者数据进行处理来获得所述样本影像数据。The sample image data is obtained by processing the received patient data.
可选地,通过对所接收的患者数据进行处理来获得所述样本影像数据包括:Optionally, obtaining the sample image data by processing the received patient data includes:
对所接收的患者数据进行格式解析;Analyze the format of the received patient data;
按照预设标准从解析后的患者数据中选取所述样本影像数据。The sample image data is selected from the parsed patient data according to a preset standard.
可选地,所述预设标准包括所述患者数据是否完整、是否经过临床验证以及是否满足临床指标。Optionally, the preset criteria include whether the patient data is complete, whether it has been clinically verified, and whether it meets clinical indicators.
可选地,按照预设条件对所述深度特征以及从所述目标图像提取的高维特征进行筛选包括:Optionally, filtering the depth features and the high-dimensional features extracted from the target image according to preset conditions includes:
利用稀疏表示算法、套索算法、Fisher判别法、基于最大相关-最小冗余的特征选择算法或基于条件互信息的特征选择算法对所述高维特征和所述深度特征进行筛选，以筛选出满足所述预设条件的高维特征和深度特征。A sparse representation algorithm, a lasso algorithm, Fisher discriminant analysis, a maximum relevance-minimum redundancy feature selection algorithm, or a conditional mutual information based feature selection algorithm is used to screen the high-dimensional features and the depth features, so as to select the high-dimensional features and depth features that satisfy the preset conditions.
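As a hedged illustration of one of the screening options listed above (Fisher discriminant analysis), the following sketch ranks candidate features by the Fisher discriminant ratio and keeps those above a threshold. The feature names, threshold value, and data layout are illustrative assumptions, not part of the claimed method:

```python
from statistics import mean, variance

def fisher_score(benign, malignant):
    # (difference of class means)^2 / (sum of class sample variances)
    return (mean(benign) - mean(malignant)) ** 2 / (variance(benign) + variance(malignant))

def screen_features(features, labels, threshold=0.5):
    # features: {name: [one value per patient]}; labels: 1 = benign, 0 = malignant.
    selected = {}
    for name, values in features.items():
        benign = [v for v, y in zip(values, labels) if y == 1]
        malignant = [v for v, y in zip(values, labels) if y == 0]
        score = fisher_score(benign, malignant)
        if score >= threshold:
            selected[name] = score
    return selected
```

A feature whose class means are well separated relative to its within-class variance receives a high score and survives the screening; uninformative features are dropped.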
可选地,所述目标图像包括CT图像、MRI图像、PET图像、US图像、SPECT图像和/或PET/CT图像。Optionally, the target image includes a CT image, an MRI image, a PET image, a US image, a SPECT image, and/or a PET/CT image.
可选地,所述目标预测模型包括AlexNet模型或VGGNet模型。Optionally, the target prediction model includes an AlexNet model or a VGGNet model.
本申请实施例还提供了一种肿瘤预测装置,该肿瘤预测装置可以设置在云平台上,并且可以包括:The embodiment of the present application also provides a tumor prediction device, which may be set on a cloud platform and may include:
分割单元,其被配置为调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像;A segmentation unit configured to call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
提取单元,其被配置为从所得到的所述分割图像中提取高维特征和深度特征;An extraction unit configured to extract high-dimensional features and depth features from the obtained segmented image;
筛选单元,其被配置为按照预设条件对所述高维特征和所述深度特征进行筛选;A screening unit configured to screen the high-dimensional features and the depth features according to preset conditions;
融合单元,其被配置为调用所述目标预测模型对筛选出的所述深度特征和所述高维特征进行融合以得到融合特征;A fusion unit configured to call the target prediction model to fuse the selected depth feature and the high-dimensional feature to obtain a fusion feature;
预测单元,其被配置为根据所述融合特征预测所述目标患者的肿瘤分类。A prediction unit configured to predict the tumor classification of the target patient according to the fusion feature.
可选地,该肿瘤预测装置还包括:Optionally, the tumor prediction device further includes:
获取单元,其被配置为通过以下方式来获取所述目标预测模型:利用所获取的样本影像数据对预先构建的机器学习模型进行训练和验证,并且将训练效果达到最优并且通过验证的所述机器学习模型确定为所述目标预测模型,其中,所述样本影像数据包括训练数据和验证数据,并且与所述目标图像相匹配。The acquiring unit is configured to acquire the target prediction model by using the acquired sample image data to train and verify the pre-built machine learning model, and to optimize the training effect and pass the verification. A machine learning model is determined as the target prediction model, wherein the sample image data includes training data and verification data, and matches the target image.
本申请实施例还提供了一种云平台,该云平台包括上述肿瘤预测装置。The embodiment of the present application also provides a cloud platform, which includes the above-mentioned tumor prediction device.
可选地,该云平台还包括:Optionally, the cloud platform also includes:
数据管理装置,其被配置为管理用户权限以及所接收的用户数据,所述用户数据包括患者数据和用户账号信息。A data management device configured to manage user permissions and received user data, the user data including patient data and user account information.
可选地,该云平台还包括以下装置中的一种或多种:Optionally, the cloud platform further includes one or more of the following devices:
资源监测装置,其被配置为根据所接收的监测指令监测资源的使用情况以及网络的性能参数;A resource monitoring device, which is configured to monitor resource usage and network performance parameters according to received monitoring instructions;
可视化处理装置,其被配置为显示所接收的用户数据、所述肿瘤预测装置输出的处理结果、以及构造出的诺模图和/或生存曲线图;A visualization processing device configured to display the received user data, the processing result output by the tumor prediction device, and the constructed nomogram and/or survival curve diagram;
数据存储装置,其被配置为存储所述数据管理装置以及所述肿瘤预测装置输出的各种数据;A data storage device configured to store various data output by the data management device and the tumor prediction device;
控制装置，其被配置为控制所述肿瘤预测装置、所述数据管理装置、所述资源监测装置、所述可视化处理装置以及所述数据存储装置的操作。A control device configured to control the operations of the tumor prediction device, the data management device, the resource monitoring device, the visualization processing device, and the data storage device.
本申请实施例还提供了一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被执行时能够实现上述肿瘤预测方法。An embodiment of the present application also provides a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and the computer program can implement the above-mentioned tumor prediction method when the computer program is executed.
由以上本申请实施例提供的技术方案可见，本申请实施例通过在云平台上调用目标预测模型预测目标患者的肿瘤分类，而不是在多个系统或软件上执行，这简化了实现肿瘤分类预测的运行环境，并且可以提高肿瘤分类预测的准确性。另外，本申请实施例不仅提取了分割图像中的高维特征，还提取了分割图像中的深层特征，这考虑到了不同肿瘤或不同影像设备的图像需要提取的特征有所差异，充分诠释了肿瘤的异质性，可以充分地表达不同肿瘤区域的深层信息和隐藏信息，因此，该方法具有普适性。此外，利用本申请实施例提供的肿瘤预测方法可以实现图像的自动分割，从而可以提高图像分割速度以及准确性，并且还可以节省人力及时间成本。It can be seen from the technical solutions provided by the above embodiments that the embodiments of the present application predict the tumor classification of the target patient by calling the target prediction model on a cloud platform, rather than executing on multiple systems or software, which simplifies the operating environment for implementing tumor classification prediction and can improve the accuracy of the prediction results. In addition, the embodiments extract not only the high-dimensional features but also the deep features of the segmented image; this takes into account that images of different tumors or from different imaging devices require different features to be extracted, fully interprets tumor heterogeneity, and can fully express the deep and hidden information of different tumor regions, so the method is universally applicable. Furthermore, the tumor prediction method provided by the embodiments can realize automatic image segmentation, thereby improving segmentation speed and accuracy while also saving labor and time costs.
附图说明 Description of the Drawings
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请中记载的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。In order to more clearly describe the technical solutions in the embodiments of the present application or the prior art, the following will briefly introduce the drawings that need to be used in the description of the embodiments or the prior art. Obviously, the drawings in the following description are only These are some embodiments described in this application. For those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative labor.
图1是本申请的一个实施例中的肿瘤预测方法的应用环境图;Fig. 1 is an application environment diagram of a tumor prediction method in an embodiment of the present application;
图2是本申请的一个实施例中提供的肿瘤预测方法的流程示意图;Fig. 2 is a schematic flowchart of a tumor prediction method provided in an embodiment of the present application;
图3是本申请的一个实施例中提供的肿瘤预测装置的结构示意图;Fig. 3 is a schematic structural diagram of a tumor prediction device provided in an embodiment of the present application;
图4是本申请的一个实施例中提供的云平台的结构示意图。Fig. 4 is a schematic structural diagram of a cloud platform provided in an embodiment of the present application.
具体实施方式 Detailed Description
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是用于解释说明本申请的一部分实施例,而不是全部的实施例,并不希望限制本申请的范围或权利要求书。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动的前提下所获得的所有其它实施例,都应当属于本申请保护的范围。The following will clearly and completely describe the technical solutions in the embodiments of the present application in conjunction with the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only used to explain some of the embodiments of the present application, not all of them. The embodiments are not intended to limit the scope of the application or the claims. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work should fall within the protection scope of this application.
需要说明的是,当元件被称为“设置在”另一个元件上,它可以直接设置在另一个元件上或者也可以存在居中的元件。当元件被称为“连接/联接”至另一个元件,它可以是直接连接/联接至另一个元件或者可能同时存在 居中元件。本文所使用的术语“连接/联接”可以包括电气和/或机械物理连接/联接。本文所使用的术语“包括/包含”指特征、步骤或元件的存在,但并不排除一个或更多个其它特征、步骤或元件的存在或添加。本文所使用的术语“和/或”包括一个或多个相关所列项目的任意的和所有的组合。It should be noted that when an element is referred to as being "disposed on" another element, it can be directly disposed on the other element or there may be a centered element. When an element is referred to as being "connected/coupled" to another element, it can be directly connected/coupled to the other element or a central element may be present at the same time. The term "connection/connection" as used herein may include electrical and/or mechanical physical connection/connection. The term "comprising/comprising" as used herein refers to the presence or addition of features, steps or elements, but does not exclude the presence or addition of one or more other features, steps or elements. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述具体实施例的目的,而并不是旨在限制本申请。Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of this application. The terminology used herein is only for the purpose of describing specific embodiments, and is not intended to limit the application.
另外，在本申请的描述中，术语"第一"、"第二"、"第三"等仅用于描述目的和区别类似的对象，两者之间并不存在先后顺序，也不能理解为指示或暗示相对重要性。此外，在本申请的描述中，除非另有说明，"多个"的含义是两个或两个以上。In addition, in the description of this application, the terms "first", "second", "third", etc. are used only for the purpose of description and to distinguish similar objects; there is no sequence between them, nor can they be understood as indicating or implying relative importance. Furthermore, in the description of this application, unless otherwise specified, "plurality" means two or more.
图1为一个实施例中的肿瘤预测方法的应用环境图。参照图1，该方法可以应用于云平台。该云平台包括通过网络连接的终端100和服务器200。该方法可以在终端100或服务器200中执行，例如，终端100可直接从医疗设备获取目标患者的包括影像数据的患者数据，并在终端侧执行上述方法；或者，终端100也可在获取目标患者的患者数据后将患者数据发送至服务器200，使得服务器200获取目标患者的患者数据并执行上述方法。终端100具体可以是台式终端(例如，台式电脑)或移动终端(例如，笔记本电脑或平板电脑)等。服务器200可以用独立的服务器或者是多个服务器组成的服务器集群来实现。Fig. 1 is an application environment diagram of a tumor prediction method in an embodiment. Referring to Fig. 1, the method can be applied to a cloud platform. The cloud platform includes a terminal 100 and a server 200 connected through a network. The method can be executed on the terminal 100 or the server 200. For example, the terminal 100 may directly acquire patient data, including image data, of a target patient from a medical device and execute the above method on the terminal side; alternatively, the terminal 100 may send the patient data to the server 200 after acquiring it, so that the server 200 obtains the patient data of the target patient and executes the above method. The terminal 100 may specifically be a desktop terminal (for example, a desktop computer) or a mobile terminal (for example, a notebook or tablet computer). The server 200 may be implemented as an independent server or as a server cluster composed of multiple servers.
图2为本申请的一个实施例中提供的肿瘤预测方法,该方法可以在云平台上执行,并且可以包括如下步骤:Figure 2 is a tumor prediction method provided in an embodiment of the application. The method can be executed on a cloud platform and can include the following steps:
S1:获取目标预测模型。S1: Obtain the target prediction model.
目标预测模型可以是用于预测肿瘤分类的任意一种神经网络模型,例如,AlexNet模型或VGGNet模型。其中,AlexNet模型主要包括5层卷积层和3层全连接层等8层结构;VGGNet模型可以包括8层卷积层、5层池化层和3层全连接层等16层结构,也可以是19层结构,但不限于此。The target prediction model may be any neural network model used to predict tumor classification, for example, an AlexNet model or a VGGNet model. Among them, the AlexNet model mainly includes an 8-layer structure such as 5-layer convolutional layer and 3-layer fully connected layer; VGGNet model can include 16-layer structure such as 8-layer convolutional layer, 5-layer pooling layer and 3-layer fully connected layer, or It is a 19-layer structure, but it is not limited to this.
在接收到指示即将进行肿瘤预测的指令后,可以从外部装置或本地获取目标预测模型。这里的外部装置可以是指云平台外部的装置,相应地,本 地可以是指云平台;或者,外部装置也可以是指云平台上除了肿瘤预测装置以外的装置,相应地,本地可以是指肿瘤预测装置。After receiving the instruction indicating that tumor prediction is about to be performed, the target prediction model can be obtained from an external device or locally. The external device here can refer to a device external to the cloud platform, correspondingly, the local device can refer to the cloud platform; or, the external device can also refer to the device on the cloud platform other than the tumor prediction device, and correspondingly, the local device can refer to the tumor Forecasting device.
从本地获取目标预测模型可以包括:Obtaining the target prediction model locally can include:
(1)从本地数据库中选取预先存储的样本影像数据或者通过对所接收的患者数据进行处理来获得样本影像数据。(1) Select pre-stored sample image data from a local database or obtain sample image data by processing the received patient data.
样本影像数据可以包括来自于多个医疗机构的CT(Computed Tomography,计算机断层扫描)、MRI(Magnetic Resonance Imaging,磁共振成像)、PET(Positron Emission Tomography,正电子发射型计算机断层显像)、US(ultra sound,超声)、SPECT(Single-Photon Emission Computed Tomography,单光子发射计算机断层成像术)等影像数据以及PET/CT(Positron Emission Tomography/Computed Tomography,简称PET/CT)等多模影像数据中的至少一种。样本影像数据可以分为训练数据和验证数据，其中，训练数据可以用于对机器学习模型进行训练，验证数据可以用于对机器学习模型的训练结果进行验证。在样本影像数据中，这两种数据的比例一般为7:3或8:2等。The sample image data may include at least one of image data from multiple medical institutions, such as CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), US (ultrasound) and SPECT (Single-Photon Emission Computed Tomography) data, and multimodal image data such as PET/CT. The sample image data can be divided into training data and verification data, where the training data is used to train the machine learning model and the verification data is used to verify its training results. Within the sample image data, the ratio of these two types of data is generally 7:3 or 8:2.
患者数据可以包括不同类型的患者的影像数据及患者的病例信息,例如,性别、年龄、身高、体重等。The patient data may include image data of different types of patients and patient case information, such as gender, age, height, weight, and so on.
在一个实施例中，可以根据所接收到的指令从本地数据库中选取对应的大量影像数据作为样本影像数据，并且按照预设比例随机地将大部分影像数据作为训练数据并且将剩余的影像数据作为验证数据。In one embodiment, a large amount of corresponding image data can be selected from the local database as sample image data according to the received instruction, and, according to a preset ratio, most of the image data can be randomly assigned as training data and the remaining image data as verification data.
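The random 8:2 (or 7:3) partition described above can be sketched as follows; the record format and the fixed seed are illustrative assumptions:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    # Shuffle a copy reproducibly, then cut it into training and verification sets
    # according to the preset ratio (0.8 -> 8:2, 0.7 -> 7:3).
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Every sample lands in exactly one of the two sets, and rerunning with the same seed reproduces the same split.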
在另一个实施例中，可以根据所接收到的指令对所接收的患者数据进行格式解析，例如，可以将不同类型的影像数据解析成DICOM格式；然后，可以按照预设标准从解析后的患者数据中选取对应的影像数据作为样本影像数据。该预设标准可以包括患者数据是否完整、是否经过临床验证以及是否满足临床指标等。例如，可以判断患者的临床数据及其病例信息是否完整，如果不完整，则不选取该患者的影像数据作为样本影像数据；也可以判断患者的影像数据是否经临床手段证实，例如，经过活检化验而确诊为恶性肿瘤的，则可以将该患者的影像数据选取为样本影像数据；还可以判断该患者的影像数据是否被医生判断为病灶过小或形态异常，如果不满足要求，则不选取该患者的影像数据；也还可以根据医生的实际研究需求并结合临床指标来选取患者的影像数据。In another embodiment, the format of the received patient data can be parsed according to the received instruction; for example, different types of image data can be parsed into the DICOM format. Then, the corresponding image data can be selected from the parsed patient data as sample image data according to a preset standard. The preset standard may include whether the patient data is complete, whether it has been clinically verified, whether it meets clinical indicators, and so on. For example, it can be judged whether the patient's clinical data and case information are complete; if they are incomplete, the patient's image data is not selected as sample image data. It can also be judged whether the patient's image data has been confirmed by clinical means; for example, if a malignant tumor has been confirmed by biopsy, the patient's image data can be selected as sample image data. It can further be judged whether a doctor has determined that the lesion is too small or abnormal in shape; if the requirements are not met, the patient's image data is not selected. The patient's image data can also be selected according to the doctor's actual research needs in combination with clinical indicators.
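A minimal sketch of selecting sample data by the preset standard above (completeness, clinical verification, and clinical indicators); the field names are hypothetical placeholders, not the actual record schema:

```python
REQUIRED_FIELDS = ("image", "sex", "age")  # hypothetical completeness check

def meets_preset_standard(record):
    # Keep a record only if it is complete, clinically verified
    # (e.g. biopsy-confirmed), and satisfies the clinical indicators.
    complete = all(record.get(k) is not None for k in REQUIRED_FIELDS)
    return (complete
            and record.get("clinically_verified", False)
            and record.get("meets_indicators", False))

def select_samples(records):
    return [r for r in records if meets_preset_standard(r)]
```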
(2)利用所获取的样本影像数据对预先构建的机器学习模型进行训练和验证,并且将训练效果达到最优并且通过验证的机器学习模型确定为目标预测模型。(2) Use the acquired sample image data to train and verify the pre-built machine learning model, and determine the machine learning model that has achieved the best training effect and passed the verification as the target prediction model.
在获取样本影像数据之后，可以利用所获取的训练数据对机器学习模型进行训练。具体地，可以根据所接收的用户指令对机器学习模型中分割模型进行训练，分离出样本影像数据中的肿瘤区域和背景区域，然后从肿瘤区域中提取出高维特征和深度特征，接着对高维特征和深度特征进行筛选，随后对筛选后的高维特征和深度特征进行融合处理以得到融合特征，最后，利用支持向量机、套索(LASSO)逻辑回归或随机森林等机器学习算法对融合特征以及肿瘤的分类标签(例如，良性肿瘤的标签为1，恶性肿瘤的标签为0)进行处理，从而从融合特征中选择出跟肿瘤良恶性高度相关的特征，此时可以认为训练效果达到最优，并且确定出机器学习模型中的各个网络参数。After the sample image data are acquired, the training data can be used to train the machine learning model. Specifically, the segmentation model within the machine learning model can be trained according to the received user instructions to separate the tumor region and the background region in the sample image data; high-dimensional features and depth features are then extracted from the tumor region and screened; the screened high-dimensional and depth features are then fused to obtain fusion features; and finally, machine learning algorithms such as support vector machines, LASSO logistic regression or random forests process the fusion features together with the tumor classification labels (for example, a benign tumor is labeled 1 and a malignant tumor is labeled 0), so as to select from the fusion features those that are highly correlated with tumor benignity or malignancy. At this point the training effect can be considered optimal, and the network parameters of the machine learning model are determined.
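The fusion step and the selection of features highly correlated with the benign/malignant label can be illustrated as follows. Concatenation and Pearson correlation are used here as one simple realization for illustration, not necessarily the embodiment's exact algorithm:

```python
def fuse_features(high_dim, deep):
    # Concatenate one patient's screened high-dimensional and depth features
    # into a single fusion feature vector.
    return list(high_dim) + list(deep)

def correlation_with_label(values, labels):
    # Pearson correlation between one fused feature (over all patients)
    # and the benign(1)/malignant(0) label.
    n = len(values)
    mv, ml = sum(values) / n, sum(labels) / n
    cov = sum((v - mv) * (l - ml) for v, l in zip(values, labels))
    sv = sum((v - mv) ** 2 for v in values) ** 0.5
    sl = sum((l - ml) ** 2 for l in labels) ** 0.5
    return cov / (sv * sl) if sv and sl else 0.0
```

Features whose absolute correlation with the label is high would be retained for the final classifier; a constant feature correlates with nothing and is discarded.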
在确定出机器学习模型的训练效果达到最优后，可以利用验证数据对训练后的机器学习模型进行5折或10折等交叉验证，计算相应的准确率、精确率和召回率。当所得到的准确率、精确率和召回率达到对应的预设阈值时，可以将训练后的机器学习模型确定为目标预测模型。After it is determined that the training effect of the machine learning model is optimal, the verification data can be used to perform 5-fold or 10-fold cross-validation on the trained model and to calculate the corresponding accuracy, precision and recall. When the obtained accuracy, precision and recall reach the corresponding preset thresholds, the trained machine learning model can be determined as the target prediction model.
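A hedged sketch of the k-fold bookkeeping and the accuracy/precision/recall computation described above (the model itself is left out; only the evaluation arithmetic is shown):

```python
def kfold_indices(n, k=5):
    # Yield (train_indices, validation_indices) pairs for k-fold cross-validation;
    # every sample appears in exactly one validation fold.
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

def classification_metrics(y_true, y_pred):
    # Confusion counts with 1 = positive (e.g. benign) and 0 = negative.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

In practice the metrics would be averaged over the k validation folds and compared against the preset thresholds.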
S2:调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像。S2: Call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region.
目标图像可以是指利用医疗影像设备对目标患者进行扫描而得到的图像,其中含有目标患者的肿瘤部位所在区域(即,肿瘤区域)。目标图像可以包括CT图像、MRI图像、PET图像、US图像、SPECT图像和/或PET/CT图像。样本影像数据与目标图像相匹配,包括类型和/或内容等相匹配,例如,它们都是患者的CT图像,或者都是肺部图像。The target image may refer to an image obtained by scanning a target patient with a medical imaging device, which contains the area where the tumor site of the target patient is located (ie, the tumor area). The target image may include a CT image, an MRI image, a PET image, a US image, a SPECT image, and/or a PET/CT image. The sample image data is matched with the target image, including matching of type and/or content, etc., for example, they are all CT images of the patient, or all lung images.
在获取目标预测模型以及接收到目标患者的目标图像之后，可以调用所获取的目标预测模型对所获取的目标患者的目标图像进行肿瘤区域和背景区域分割，以提取出目标图像中的肿瘤区域，从而得到含有肿瘤区域的分割图像。After the target prediction model is acquired and the target image of the target patient is received, the acquired target prediction model can be called to segment the target image into the tumor region and the background region, so as to extract the tumor region from the target image and thereby obtain a segmented image containing the tumor region.
S3:从所得到的分割图像中提取高维特征和深度特征。S3: Extract high-dimensional features and depth features from the obtained segmented image.
High-dimensional features refer to data of relatively high dimensionality and may include at least one of histogram features, three-dimensional shape features, texture features, and filter features. Histogram features describe the gray-level distribution of the image and may include the maximum, minimum, median, mean, span (maximum minus minimum), variance, standard deviation, mean absolute deviation, and/or root mean square. Three-dimensional shape features may include volume, surface area, compactness, sphericity, spherical asymmetry, and/or the surface-area-to-volume ratio. Texture features describe the relative positions of the image's various gray levels and may include features of different kinds such as the Gray Level Dependence Matrix (GLDM), Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Size Zone Matrix (GLSZM), and/or Neighborhood Gray Tone Difference Matrix (NGTDM); these features can be constructed from the spatial distribution of the lesion's tumor voxel intensities. Filter features can be obtained as follows: the image texture is decomposed by a wavelet transform into high-frequency and low-frequency sampled images, decomposing the image into multiple components; that is, high-pass and low-pass filtering is applied along the X, Y, and Z directions respectively to obtain sub-bands in different directions, and for each sub-band the histogram, three-dimensional shape, and texture features are computed, yielding the filtered features. Filter features can be used to eliminate noise mixed into the image and to improve its clarity.
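The first-order histogram features listed above can be computed directly from the gray values inside the segmented region; a minimal NumPy sketch with toy values (the feature names follow the list in the text):

```python
import numpy as np

def histogram_features(voxels):
    """First-order (histogram) features of the tumor-region intensities."""
    v = np.asarray(voxels, dtype=float)
    return {
        "max": v.max(),
        "min": v.min(),
        "median": float(np.median(v)),
        "mean": v.mean(),
        "span": v.max() - v.min(),                      # maximum minus minimum
        "variance": v.var(),
        "std": v.std(),
        "mad": float(np.mean(np.abs(v - v.mean()))),    # mean absolute deviation
        "rms": float(np.sqrt(np.mean(v ** 2))),         # root mean square
    }

feats = histogram_features([2.0, 4.0, 4.0, 6.0])
print(feats["mean"], feats["span"])   # 4.0 4.0
```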
Depth features refer to hidden image information that cannot be seen by the naked eye and cannot be characterized by ordinary features; they are features that must be extracted by calling a deep neural network and that can be used to predict tumor classification.
在得到目标患者的分割图像之后,可以从该分割图像中提取出高维特征和深度特征。After the segmented image of the target patient is obtained, high-dimensional features and depth features can be extracted from the segmented image.
关于如何从图像中提取高维特征的方法,可以参照现有技术中的描述,在此不再赘叙。Regarding the method for extracting high-dimensional features from the image, reference can be made to the description in the prior art, which will not be repeated here.
As for the depth features, the target prediction model can be called to extract them from the target image.
In one embodiment, when the target prediction model is an AlexNet model, the first convolutional layer of the AlexNet model may first be called to perform convolution processing, such as local response normalization and pooling, on the input target image and to output the extracted feature map. The second convolutional layer may then be called to perform convolution processing, such as local response normalization and max pooling, on the feature map output by the first convolutional layer and to output the corresponding feature map. Next, the third and fourth convolutional layers may be called in turn to perform convolution processing and output the corresponding feature maps; the fifth convolutional layer may be called to perform max pooling directly on the feature map output by the fourth convolutional layer; and finally the three fully connected layers may be called to perform classification processing so as to extract the depth features and output them.
In one embodiment, when the target prediction model is a VGGNet model, the convolutional layers of the VGGNet model may be called to perform convolution processing on the input target image, the pooling layers may then be called to perform max pooling, and finally the fully connected layers may be called to perform classification processing so as to extract the depth features and output them.
It should be noted that settings such as the number, size, and stride of the convolution kernels in multiple network layers of the same type may be the same or different; no limitation is imposed here. Moreover, each network layer processes the feature map extracted by the preceding layer connected to it and outputs its own extracted feature map to the next layer connected to it.
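The chained layer-by-layer processing described above can be illustrated with a single simplified stage in NumPy. This is a toy sketch, not the actual AlexNet or VGGNet implementation; the input and kernel are random stand-ins:

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (strictly, cross-correlation)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2 (odd edges truncated)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Each "layer" consumes the feature map produced by the previous layer,
# mirroring the chained conv -> ReLU -> max-pool structure described above.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8))                       # toy input image
kernel = rng.normal(size=(3, 3))                     # toy learned kernel
fmap = np.maximum(conv2d_valid(fmap, kernel), 0.0)   # conv + ReLU -> 6x6
fmap = max_pool2(fmap)                               # max pool    -> 3x3
depth_features = fmap.ravel()                        # flattened "deep" feature
print(depth_features.shape)   # (9,)
```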
S4:按照预设条件对提取出的高维特征和深度特征进行筛选处理。S4: Perform screening processing on the extracted high-dimensional features and depth features according to preset conditions.
The preset condition may be set according to empirical data or actual needs, and it may indicate that the screened-out features have the greatest influence on the prediction target. That influence may be measured in terms of relevance or by other metrics.
After the high-dimensional features and depth features have been extracted from the target image, an algorithm such as a sparse representation algorithm, a lasso algorithm, the Fisher discriminant method, a feature selection algorithm based on maximum relevance and minimum redundancy, or a feature selection algorithm based on conditional mutual information may be used to screen the high-dimensional features and depth features, so as to select from them the features satisfying the preset condition, that is, the features having the greatest influence on the prediction target.
The main idea of the sparse representation algorithm is that natural signals can be represented sparsely over a dictionary. In general, the model of this algorithm can be expressed as follows:

    α̂ = argmin_α ( ‖y − Dα‖₂² + μ‖α‖₁ )

where y is the classification label of the sparse representation set (e.g., benign/malignant, metastatic/non-metastatic); D = [d₁, d₂, …, dᵢ, …, d_k] is the sparse representation set (dictionary) composed of the high-dimensional features and depth features, with dᵢ denoting one such feature; α is the sparse representation coefficient, in matrix form, whose estimate α̂ contains only some non-zero elements; and μ is a regularization parameter greater than 0 that trades off fidelity against sparsity.

Solving the above formula yields α, and the features whose coefficients are not 0 are taken as the screened-out high-dimensional features and depth features.
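A hedged sketch of solving this sparse model with iterative soft thresholding (ISTA), a standard solver for the ℓ1-regularized least-squares form; D, y, and μ below are toy stand-ins, not values from the embodiment:

```python
import numpy as np

def ista(D, y, mu, n_iter=500):
    """Iterative soft thresholding for min_a ||y - D a||_2^2 + mu * ||a||_1.

    Features whose coefficient remains non-zero at the solution are the
    ones the sparse-representation screening keeps.
    """
    a = np.zeros(D.shape[1])
    L = 2 * np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ a - y)         # gradient of the fidelity term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 6))                 # 6 candidate features, 20 samples
true = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0])
y = D @ true                                 # labels explained by features 0 and 3
alpha = ista(D, y, mu=0.5)
selected = np.flatnonzero(np.abs(alpha) > 1e-3)
print(selected)   # indices of the selected (non-zero) features
```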
The Fisher discriminant method is a qualitative classification method based mainly on the idea of projection: it computes the eigenvectors corresponding to the largest eigenvalues over the high-dimensional features and depth features and projects the image data into the high-dimensional space spanned by those eigenvectors, and in that space it selects the high-dimensional features and depth features for which the within-class distance is smallest and the between-class distance is largest.
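The within-class/between-class criterion can be illustrated with the per-feature Fisher score, a simplified scalar variant of the projection-based method described above (not the exact formulation used here): higher scores mean the same class is tighter and different classes are farther apart along that feature.

```python
import numpy as np

def fisher_scores(X, labels):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(labels):
        Xc = X[labels == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

# Feature 0 separates the two classes; feature 1 is pure noise.
X = np.array([[0.0, 0.5], [0.1, -0.4], [1.0, 0.3], [1.1, -0.6]])
y = np.array([0, 0, 1, 1])
scores = fisher_scores(X, y)
print(scores.argmax())   # 0
```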
关于其它算法的相关描述,可以参照现有技术,在此不再赘叙。For related descriptions of other algorithms, reference can be made to the prior art, which will not be repeated here.
通过对高维特征和深度特征进行筛选,可以有效筛除这两种特征中的冗余特征和相关性较低的特征,因而可以提高预测结果的准确性。By screening high-dimensional features and depth features, redundant features and low-relevance features in these two features can be effectively screened out, and thus the accuracy of the prediction results can be improved.
S5:调用目标预测模型对筛选出的高维特征和深度特征进行融合以得到融合特征,并且根据融合特征预测目标患者的肿瘤分类。S5: Call the target prediction model to fuse the selected high-dimensional features and deep features to obtain fusion features, and predict the tumor classification of the target patient based on the fusion features.
在筛选出满足预设条件的高维特征和深度特征之后,可以调用目标预测模型将筛选出的高维特征和深度特征进行融合处理以得到融合特征。After screening the high-dimensional features and depth features that meet the preset conditions, the target prediction model can be called to fuse the screened high-dimensional features and depth features to obtain the fused features.
关于如何利用机器学习模型对图像数据进行融合处理,可以参照现有技术中的相关描述。Regarding how to use the machine learning model to perform fusion processing on image data, reference may be made to related descriptions in the prior art.
在得到融合特征之后,可以将所得到的融合特征与预设的肿瘤分类标签进行匹配,从而根据匹配结果来预测目标患者的肿瘤分类。例如,当融合特征与良性肿瘤标签匹配时,可以预测该患者的肿瘤为良性;当融合特征与恶性肿瘤标签匹配时,可以预测该患者的肿瘤为恶性。After the fusion feature is obtained, the obtained fusion feature can be matched with the preset tumor classification label, so as to predict the tumor classification of the target patient according to the matching result. For example, when the fusion feature matches a benign tumor signature, the patient's tumor can be predicted to be benign; when the fusion feature matches the malignant tumor signature, the patient's tumor can be predicted to be malignant.
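As an illustrative sketch (the matching mechanism is not specified beyond "matching"), the fused feature vector can be compared against a per-label prototype vector learned during training, with the most similar label returned; the labels, prototypes, and cosine-similarity rule here are assumptions:

```python
import numpy as np

def match_label(fused, prototypes):
    """Match a fused feature vector to the closest tumor-class prototype.

    `prototypes` maps a classification label (e.g. 'benign'/'malignant')
    to a reference vector; cosine similarity picks the match.
    """
    fused = np.asarray(fused, dtype=float)
    best, best_sim = None, -np.inf
    for label, proto in prototypes.items():
        p = np.asarray(proto, dtype=float)
        sim = fused @ p / (np.linalg.norm(fused) * np.linalg.norm(p))
        if sim > best_sim:
            best, best_sim = label, sim
    return best

protos = {"benign": [1.0, 0.1], "malignant": [0.1, 1.0]}
print(match_label([0.2, 0.9], protos))   # malignant
```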
需要说明的是,融合特征与肿瘤分类标签之间的匹配关系可以是在对机器学习模型进行训练时确定的。It should be noted that the matching relationship between the fusion feature and the tumor classification label may be determined when the machine learning model is trained.
As can be seen from the above description, the embodiments of the present application predict the tumor classification of the target patient by calling the target prediction model on a cloud platform rather than executing across multiple systems or software packages. This simplifies the operating environment for tumor classification prediction, prevents data loss and incomplete data, enables data sharing, and improves data processing efficiency. Moreover, the sample image data come from multiple imaging devices or medical institutions, which makes the established target prediction model more reproducible, more general, and more robust to interference, so that it can be used widely. In addition, the embodiments of the present application extract not only the high-dimensional features of the segmented image but also its deep features. This accounts for the fact that images of different tumors, or from different imaging devices, require different features to be extracted; it fully captures tumor heterogeneity and adequately expresses the deep and hidden information of different tumor regions, so the method is broadly applicable. Furthermore, the tumor prediction method provided by the embodiments of the present application enables automatic image segmentation, which improves segmentation speed and accuracy while saving labor and time.
如图3所示,本申请实施例还提供了一种肿瘤预测装置300,其可以设置在云平台上,并且可以包括:As shown in FIG. 3, an embodiment of the present application also provides a tumor prediction apparatus 300, which may be set on a cloud platform and may include:
获取单元310,其可以被配置为获取用于肿瘤预测的目标预测模型;An obtaining unit 310, which may be configured to obtain a target prediction model for tumor prediction;
分割单元320,其可以被配置为调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像;The segmentation unit 320 may be configured to call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
提取单元330,其可以被配置为从所得到的分割图像中提取高维特征和深度特征;An extraction unit 330, which may be configured to extract high-dimensional features and depth features from the obtained segmented images;
筛选单元340,其可以被配置为按照预设条件对高维特征和深度特征进行筛选;The screening unit 340, which can be configured to screen high-dimensional features and depth features according to preset conditions;
融合单元350,其可以被配置为调用目标预测模型对筛选出的深度特征和高维特征进行融合以得到融合特征;The fusion unit 350 may be configured to call the target prediction model to fuse the selected depth features and high-dimensional features to obtain fusion features;
预测单元360,其可以被配置为根据融合特征预测目标患者的肿瘤分类。The prediction unit 360 may be configured to predict the tumor classification of the target patient according to the fusion feature.
在一实施例中,获取单元310还可以具体被配置为利用所获取的样本影像数据对预先构建的机器学习模型进行训练,并且将训练效果达到最优并且通过验证的机器学习模型确定为目标预测模型。In an embodiment, the acquiring unit 310 may also be specifically configured to use the acquired sample image data to train a pre-built machine learning model, and determine the machine learning model that has reached the optimal training effect and passed the verification as the target prediction model.
关于上述单元的详细描述,可以参照上面方法实施例中的相关描述,在此不再赘叙。For the detailed description of the above-mentioned units, reference may be made to the relevant description in the above method embodiments, which will not be repeated here.
通过利用本申请实施例提供的肿瘤预测装置,可以实现对肿瘤区域的全自动分割,并且可以提高肿瘤分类预测的准确性,可以有效地辅助医生进行诊断。By using the tumor prediction device provided in the embodiments of the present application, fully automatic segmentation of tumor regions can be realized, and the accuracy of tumor classification and prediction can be improved, and the diagnosis can be effectively assisted by doctors.
The embodiment of the present application further provides another cloud platform, which may include the tumor prediction device 300 of FIG. 3 and may further include a data management device 100 configured to manage user permissions and user data, the user data including patient data and user account information. Specifically, the data management device 100 may manage user permissions according to account authorization information. For example, if a user account has authorized N imaging centers or medical institutions to upload data but has authorized only M of them to operate on the data, then all N authorized imaging centers or medical institutions may upload data to this account, but only those M imaging centers or medical institutions have the right to operate on the data, where N and M are both positive integers greater than 1 and N is greater than M. The data management device 100 may also manage the account information registered by users, screen the patient data uploaded by users so as to select the patient data meeting preset requirements, and send the patient data that meet preset requirements such as a preset size and a preset format to the data storage device for storage.
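The N-upload/M-operate authorization split can be sketched as follows; the account structure, institution names, and the toy values of N and M are illustrative assumptions, not part of the embodiment:

```python
# Hypothetical per-account authorization record: the "operate" set is a
# subset of the "upload" set, mirroring M <= N in the text.
account = {
    "upload": {"center_a", "center_b", "center_c"},   # the N centers (N = 3)
    "operate": {"center_a", "center_b"},              # M of them (M = 2)
}

def can(institution, action):
    """Check whether an institution holds the given right on this account."""
    return institution in account.get(action, set())

print(can("center_c", "upload"), can("center_c", "operate"))   # True False
```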
另外,该云平台还可以包括资源监测装置200、可视化处理装置400、数据存储装置500以及控制装置600中的一种或多种。In addition, the cloud platform may also include one or more of a resource monitoring device 200, a visualization processing device 400, a data storage device 500, and a control device 600.
The resource monitoring device 200 may be configured to monitor, according to received monitoring instructions, resource usage and network performance parameters, including CPU, memory, GPU, concurrency, bandwidth, packet loss rate, and so on, and to perform corresponding scheduling according to the resource usage.
The visualization processing device 400 may display corresponding data according to received instructions (including user instructions or preset script instructions). For example, the visualization processing device 400 may display the received user data or the processing results output by the tumor prediction device 300 (including image segmentation results and/or tumor classification prediction results); it may also combine the radiomics signature, obtained as a linear combination of the screened high-dimensional features and depth features with their feature coefficients, with clinical indicators (for example, age, sex, gene mutations) to construct a personalized and intuitive nomogram and/or survival curve that effectively assists doctors in medical diagnosis.
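The radiomics signature mentioned here, a linear combination of the screened features and their coefficients, can be sketched as follows; the feature names, weights, and intercept are invented purely for illustration:

```python
# Hypothetical fitted signature: weight per screened feature plus an intercept.
coeffs = {"glcm_contrast": 0.8, "sphericity": -1.2, "deep_f17": 0.5}
intercept = 0.1

def radiomics_score(features):
    """Linear combination of screened feature values and their coefficients."""
    return intercept + sum(w * features[name] for name, w in coeffs.items())

score = radiomics_score({"glcm_contrast": 1.0, "sphericity": 0.5, "deep_f17": 2.0})
print(round(score, 2))   # 0.1 + 0.8 - 0.6 + 1.0 = 1.3
```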
关于诺模图和生存曲线图的具体形式,可以参照现有技术,在此不再赘叙。Regarding the specific forms of the nomogram and the survival curve diagram, reference can be made to the prior art, which will not be repeated here.
数据存储装置500可以用于存储数据管理装置100和/或肿瘤预测装置300输出的各种数据。数据存储装置500中可以设置有MySQL数据库,该数据库可以用于存储高维特征、深度特征、系统动态化信息、DICOM文件存储路径以及系统使用记录等。数据存储装置500支持原始数据及分析结果的云端保存和实时查看,也支持跨地区和多中心之间的数据共享。The data storage device 500 may be used to store various data output by the data management device 100 and/or the tumor prediction device 300. The data storage device 500 can be provided with a MySQL database, which can be used to store high-dimensional features, depth features, system dynamics information, DICOM file storage paths, system usage records, and the like. The data storage device 500 supports cloud storage and real-time viewing of raw data and analysis results, as well as data sharing across regions and multiple centers.
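A sketch of the feature and DICOM-path store described above, using Python's standard-library sqlite3 in place of the MySQL database for a self-contained example; the schema, table name, and values are illustrative assumptions:

```python
import sqlite3

# In-memory stand-in for the cloud platform's feature store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE features (
    patient_id    TEXT,
    feature_name  TEXT,
    feature_value REAL,
    dicom_path    TEXT
)""")
conn.execute(
    "INSERT INTO features VALUES (?, ?, ?, ?)",
    ("p001", "glcm_contrast", 0.83, "/data/p001/series1.dcm"),
)
rows = conn.execute(
    "SELECT feature_name, feature_value FROM features WHERE patient_id = ?",
    ("p001",),
).fetchall()
print(rows)   # [('glcm_contrast', 0.83)]
conn.close()
```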
控制装置600可以用于控制数据管理装置100、资源监测装置200、肿瘤预测装置300、可视化处理装置400、数据存储装置500的操作。The control device 600 can be used to control the operations of the data management device 100, the resource monitoring device 200, the tumor prediction device 300, the visualization processing device 400, and the data storage device 500.
通过利用上述云平台,可以实现对影像数据的高效管理、对肿瘤分类的准确预测以及数据的实时共享。By using the above cloud platform, it is possible to achieve efficient management of image data, accurate prediction of tumor classification, and real-time data sharing.
In one embodiment, the present application further provides a computer-readable storage medium storing a computer program which, when executed, can implement the corresponding functions described in the above method embodiments. The computer program may also run on the terminal or server shown in FIG. 1.
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage media, databases, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
上述实施例阐明的系统、设备、装置、单元等,具体可以由半导体芯片、计算机芯片和/或实体实现,或者由具有某种功能的产品来实现。为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个芯片中实现。The systems, equipment, devices, units, etc. described in the foregoing embodiments may be specifically implemented by semiconductor chips, computer chips, and/or entities, or implemented by products with certain functions. For the convenience of description, when describing the above device, the functions are divided into various units and described separately. Of course, when implementing this application, the functions of each unit can be implemented in the same or multiple chips.
虽然本申请提供了如上述实施例或流程图所述的方法操作步骤,但基于常规或者无需创造性的劳动在所述方法中可以包括更多或者更少的操作步骤。在逻辑性上不存在必要因果关系的步骤中,这些步骤的执行顺序不限于本申请实施例提供的执行顺序。Although the present application provides the method operation steps described in the above-mentioned embodiments or flowcharts, more or fewer operation steps may be included in the method based on conventional or no creative labor. In steps where there is no necessary causal relationship logically, the execution order of these steps is not limited to the execution order provided in the embodiments of the present application.
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其它实施例的不 同之处。另外,以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。The various embodiments in this specification are described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other. Each embodiment focuses on the differences from other embodiments. In addition, the technical features of the above embodiments can be combined arbitrarily. To make the description concise, all possible combinations of the technical features in the above embodiments are not described. However, as long as there is no contradiction in the combination of these technical features, All should be considered as the scope of this specification.
上述实施例是为便于该技术领域的普通技术人员能够理解和使用本申请而描述的。熟悉本领域技术的人员显然可以容易地对这些实施例做出各种修改,并把在此说明的一般原理应用到其它实施例中而不必经过创造性的劳动。因此,本申请不限于上述实施例,本领域技术人员根据本申请的揭示,不脱离本申请范畴所做出的改进和修改都应该在本申请的保护范围之内。The above-mentioned embodiments are described to facilitate those skilled in the art to understand and use this application. It is obvious that those skilled in the art can easily make various modifications to these embodiments, and apply the general principles described here to other embodiments without creative work. Therefore, this application is not limited to the above-mentioned embodiments. Improvements and modifications made by those skilled in the art based on the disclosure of this application without departing from the scope of this application should fall within the protection scope of this application.

Claims (15)

  1. 一种肿瘤预测方法,其特征在于,所述肿瘤预测方法在云平台上执行,并且包括:A tumor prediction method, characterized in that the tumor prediction method is executed on a cloud platform, and includes:
    调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像;Calling the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor area;
    从所得到的所述分割图像中提取高维特征和深度特征;Extracting high-dimensional features and depth features from the obtained segmented image;
    按照预设条件对所述高维特征和所述深度特征进行筛选处理;Screening the high-dimensional features and the depth features according to preset conditions;
    调用所述目标预测模型对筛选出的所述深度特征和所述高维特征进行融合以得到融合特征,并且根据所述融合特征预测所述目标患者的肿瘤分类。The target prediction model is called to fuse the selected depth features and the high-dimensional features to obtain fusion features, and the tumor classification of the target patient is predicted according to the fusion features.
  2. 根据权利要求1所述的肿瘤预测方法,其特征在于,所述目标预测模型通过以下方式获取:The tumor prediction method according to claim 1, wherein the target prediction model is obtained in the following manner:
    从外部装置或本地获取所述目标预测模型。Obtain the target prediction model from an external device or locally.
  3. 根据权利要求2所述的肿瘤预测方法,其特征在于,从本地获取所述目标预测模型包括:The tumor prediction method according to claim 2, wherein obtaining the target prediction model locally comprises:
    利用所获取的样本影像数据对预先构建的机器学习模型进行训练和验证,所述样本影像数据包括训练数据和验证数据,并且与所述目标图像相匹配;Training and verifying a pre-built machine learning model by using the acquired sample image data, the sample image data including training data and verification data, and matching the target image;
    将训练效果达到最优并且通过验证的所述机器学习模型确定为所述目标预测模型。The machine learning model that has achieved the optimal training effect and passed the verification is determined as the target prediction model.
  4. 根据权利要求3所述的肿瘤预测方法,其特征在于,在对所述机器学习模型进行训练之前,所述肿瘤预测方法包括:The tumor prediction method according to claim 3, characterized in that, before training the machine learning model, the tumor prediction method comprises:
    从本地数据库中选取预先存储的所述样本影像数据;或者Select the pre-stored sample image data from a local database; or
    通过对所接收的患者数据进行处理来获得所述样本影像数据。The sample image data is obtained by processing the received patient data.
  5. 根据权利要求4所述的肿瘤预测方法,其特征在于,通过对所接收的患者数据进行处理来获得所述样本影像数据包括:The tumor prediction method according to claim 4, wherein obtaining the sample image data by processing the received patient data comprises:
    对所接收的患者数据进行格式解析;Analyze the format of the received patient data;
    按照预设标准从解析后的患者数据中选取所述样本影像数据。The sample image data is selected from the parsed patient data according to a preset standard.
  6. 根据权利要求5所述的肿瘤预测方法,其特征在于,所述预设标准包括所述患者数据是否完整、是否经过临床验证以及是否满足临床指标。The tumor prediction method according to claim 5, wherein the preset criteria include whether the patient data is complete, whether it has been clinically verified, and whether it meets clinical indicators.
  7. 根据权利要求1所述的肿瘤预测方法,其特征在于,按照预设条件对所述深度特征以及从所述目标图像提取的高维特征进行筛选包括:The tumor prediction method according to claim 1, wherein the screening of the depth features and the high-dimensional features extracted from the target image according to preset conditions comprises:
    using a sparse representation algorithm, a lasso algorithm, the Fisher discriminant method, a feature selection algorithm based on maximum relevance and minimum redundancy, or a feature selection algorithm based on conditional mutual information to screen the high-dimensional features and the depth features, so as to screen out the high-dimensional features and depth features that satisfy the preset condition.
  8. 根据权利要求1所述的肿瘤预测方法,其特征在于,所述目标图像包括CT图像、MRI图像、PET图像、US图像、SPECT图像和/或PET/CT图像。The tumor prediction method according to claim 1, wherein the target image includes a CT image, an MRI image, a PET image, a US image, a SPECT image, and/or a PET/CT image.
  9. 根据权利要求1所述的肿瘤预测方法,其特征在于,所述目标预测模型包括AlexNet模型或VGGNet模型。The tumor prediction method according to claim 1, wherein the target prediction model comprises an AlexNet model or a VGGNet model.
  10. 一种肿瘤预测装置,其特征在于,所述肿瘤预测装置设置在云平台上,并且包括:A tumor prediction device, characterized in that the tumor prediction device is set on a cloud platform and includes:
    分割单元,其被配置为调用所获取的目标预测模型对所获取的目标患者的目标图像进行分割以得到含有肿瘤区域的分割图像;A segmentation unit configured to call the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
    提取单元,其被配置为从所得到的所述分割图像中提取高维特征和深度特征;An extraction unit configured to extract high-dimensional features and depth features from the obtained segmented image;
    筛选单元,其被配置为按照预设条件对所述高维特征和所述深度特征进行筛选;A screening unit configured to screen the high-dimensional features and the depth features according to preset conditions;
    融合单元,其被配置为调用所述目标预测模型对筛选出的所述深度特征和所述高维特征进行融合以得到融合特征;A fusion unit configured to call the target prediction model to fuse the selected depth feature and the high-dimensional feature to obtain a fusion feature;
    预测单元,其被配置为根据所述融合特征预测所述目标患者的肿瘤分类。A prediction unit configured to predict the tumor classification of the target patient according to the fusion feature.
  11. 根据权利要求10所述的肿瘤预测装置,其特征在于,所述肿瘤预测装置还包括:The tumor prediction device according to claim 10, wherein the tumor prediction device further comprises:
    an acquiring unit configured to acquire the target prediction model by: training and verifying a pre-built machine learning model using acquired sample image data, and determining, as the target prediction model, the machine learning model whose training effect is optimal and which passes the verification, wherein the sample image data comprise training data and verification data and match the target image.
  12. 一种云平台,其特征在于,所述云平台包括权利要求10-11中任一项所述的肿瘤预测装置。A cloud platform, wherein the cloud platform comprises the tumor prediction device according to any one of claims 10-11.
  13. 根据权利要求12所述的云平台,其特征在于,所述云平台还包括:The cloud platform according to claim 12, wherein the cloud platform further comprises:
    数据管理装置,其被配置为管理用户权限以及所接收的用户数据,所述用户数据包括患者数据和用户账号信息。A data management device configured to manage user permissions and received user data, the user data including patient data and user account information.
  14. 根据权利要求13所述的云平台,其特征在于,所述云平台还包括以下装置中的一种或多种:The cloud platform according to claim 13, wherein the cloud platform further comprises one or more of the following devices:
    资源监测装置,其被配置为根据所接收的监测指令监测资源的使用情况以及网络的性能参数;A resource monitoring device, which is configured to monitor resource usage and network performance parameters according to received monitoring instructions;
    可视化处理装置,其被配置为显示所接收的用户数据、所述肿瘤预测装置输出的处理结果、以及构造出的诺模图和/或生存曲线图;A visualization processing device configured to display the received user data, the processing result output by the tumor prediction device, and the constructed nomogram and/or survival curve diagram;
    数据存储装置,其被配置为存储所述数据管理装置以及所述肿瘤预测装置输出的各种数据;A data storage device configured to store various data output by the data management device and the tumor prediction device;
    a control device configured to control the operations of the tumor prediction device, the data management device, the resource monitoring device, the visualization processing device, and the data storage device.
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机程序,所述计算机程序被执行时能够实现权利要求1至9中任一项所述的肿瘤预测方法。A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, which can realize the tumor prediction method according to any one of claims 1 to 9 when the computer program is executed.
PCT/CN2020/132372 2020-01-02 2020-11-27 Tumor prediction method and device, cloud platform, and computer-readable storage medium WO2021135774A1 (en)

Applications Claiming Priority (2)

- CN202010001251.7A (priority date 2020-01-02, filing date 2020-01-02): Tumor prediction method and device, cloud platform and computer-readable storage medium
- CN202010001251.7 (priority date 2020-01-02)

Publications (1)

- WO2021135774A1, published 2021-07-08


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210441A (en) * 2020-01-02 2020-05-29 苏州瑞派宁科技有限公司 Tumor prediction method and device, cloud platform and computer-readable storage medium
CN112309576A (en) * 2020-09-22 2021-02-02 江南大学 Colorectal cancer survival period prediction method based on deep learning CT (computed tomography) image omics
CN112837324A (en) * 2021-01-21 2021-05-25 山东中医药大学附属医院 Automatic tumor image region segmentation system and method based on improved level set
CN113744801B (en) * 2021-09-09 2023-05-26 首都医科大学附属北京天坛医院 Tumor category determining method, device and system, electronic equipment and storage medium
CN115100130A (en) * 2022-06-16 2022-09-23 慧影医疗科技(北京)股份有限公司 MRI radiomics-based image processing method, apparatus, device, and storage medium
CN115631370A (en) * 2022-10-09 2023-01-20 北京医准智能科技有限公司 Method and device for identifying MRI sequence category based on a convolutional neural network
CN115761360A (en) * 2022-11-24 2023-03-07 深圳先进技术研究院 Tumor gene mutation classification method and device, electronic equipment and storage medium
CN117253584B (en) * 2023-02-14 2024-07-19 南雄市民望医疗有限公司 Hemodialysis component detection-based dialysis time prediction system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040184646A1 (en) * 2003-03-19 2004-09-23 Fuji Photo Film Co., Ltd. Method, apparatus, and program for judging images
US20080267499A1 (en) * 2007-04-30 2008-10-30 General Electric Company Method and system for automatic detection of objects in an image
CN106780448A (en) * 2016-12-05 2017-05-31 清华大学 Malignancy classification method for ultrasound thyroid nodules based on transfer learning and feature fusion
CN108596247A (en) * 2018-04-23 2018-09-28 南方医科大学 Image classification method fusing radiomics and deep convolutional features
CN109146848A (en) * 2018-07-23 2019-01-04 东北大学 Computer-aided reference system and method fusing multi-modal breast images
CN110264454A (en) * 2019-06-19 2019-09-20 四川智动木牛智能科技有限公司 Cervical cancer histopathological image diagnosis method based on multi-hidden-layer conditional random fields
CN110399902A (en) * 2019-06-27 2019-11-01 华南师范大学 Melanoma texture feature extraction method
CN110533683A (en) * 2019-08-30 2019-12-03 东南大学 Radiomics analysis method fusing traditional features and deep features
CN111210441A (en) * 2020-01-02 2020-05-29 苏州瑞派宁科技有限公司 Tumor prediction method and device, cloud platform and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11138731B2 (en) * 2018-05-30 2021-10-05 Siemens Healthcare Gmbh Methods for generating synthetic training data and for training deep learning algorithms for tumor lesion characterization, method and system for tumor lesion characterization, computer program and electronically readable storage medium
CN109934832A (en) * 2019-03-25 2019-06-25 北京理工大学 Liver tumor segmentation method and device based on deep learning


Also Published As

Publication number Publication date
CN111210441A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2021135774A1 (en) Tumor prediction method and device, cloud platform, and computer-readable storage medium
Thawani et al. Radiomics and radiogenomics in lung cancer: a review for the clinician
US10235755B2 (en) High-throughput adaptive sampling for whole-slide histopathology image analysis
US10339653B2 (en) Systems, methods and devices for analyzing quantitative information obtained from radiological images
US9430829B2 (en) Automatic detection of mitosis using handcrafted and convolutional neural network features
CN112768072B (en) Cancer clinical index evaluation system constructed based on imaging omics qualitative algorithm
EP3796210A1 (en) Spatial distribution of pathological image patterns in 3d image data
US11257210B2 (en) Method and system of performing medical treatment outcome assessment or medical condition diagnostic
WO2023020366A1 (en) Medical image information computing method and apparatus, edge computing device, and storage medium
KR20230051197A (en) Systems and methods for processing electronic images for continuous biomarker prediction
CN113017674B (en) EGFR gene mutation detection method and system based on chest CT image
Zhang et al. Deep learning for intelligent recognition and prediction of endometrial cancer
CN114332132A (en) Image segmentation method, device, and computer equipment
CN115274119A (en) Construction method of immunotherapy prediction model fusing multi-image mathematical characteristics
CN112884759A (en) Method and related device for detecting metastasis state of axillary lymph nodes of breast cancer
CN116740386A (en) Image processing method, apparatus, device and computer readable storage medium
CN115631387B (en) Method and device for predicting lung cancer pathology high-risk factor based on graph convolution neural network
Tomassini et al. On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans
US20220375077A1 (en) Method for generating models to automatically classify medical or veterinary images derived from original images into at least one class of interest
Miao et al. Application of deep learning and XGBoost in predicting pathological staging of breast cancer MR images
Ramkumar Identification and Classification of Breast Cancer using Multilayer Perceptron Techniques for Histopathological Image
CN112329876A (en) Radiomics-based colorectal cancer prognosis prediction method and device
CN112750530A (en) Model training method, terminal device and storage medium
US20230162361A1 (en) Assessment of skin toxicity in an in vitro tissue samples using deep learning
Beig Peri-tumoral radiogenomic approaches to capture tumor environment for disease diagnosis and predicting patient survival

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910489

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20910489

Country of ref document: EP

Kind code of ref document: A1