CN111354442A - Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method - Google Patents


Info

Publication number
CN111354442A
CN111354442A (application number CN201811561566.6A)
Authority
CN
China
Prior art keywords
training
image
image data
tumor
layer
Prior art date
Legal status
Pending
Application number
CN201811561566.6A
Other languages
Chinese (zh)
Inventor
高嘉鸿
陈尚文
沈伟志
吴国祯
Current Assignee
China Medical University Hospital
Original Assignee
China Medical University Hospital
Priority date
Filing date
Publication date
Application filed by China Medical University Hospital filed Critical China Medical University Hospital
Priority to CN201811561566.6A
Publication of CN111354442A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A tumor image deep learning assisted prognosis prediction system for cervical cancer patients analyzes image data of a patient's cervical tumor. The system comprises: a small sample data expansion module, which performs data expansion processing on the image data to generate a plurality of slice images of the image data; and a deep convolutional neural network model, which performs feature analysis on the slice images to predict the patient's prognosis after receiving chemotherapy and radiotherapy.

Description

Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method
Technical Field
The invention belongs to the technical field of image-assisted prediction, and particularly relates to a deep-learning-based cervical tumor image-assisted prediction system, method and computer program product.
Background
Concurrent chemotherapy and radiotherapy (chemoradiotherapy) is one of the current standard treatments for locally advanced cervical cancer, but post-treatment prognosis (e.g., local tumor control, distant metastasis, survival probability) remains unsatisfactory. Because the treatment can cause many side effects, the quality of care could be improved substantially if the post-treatment prognosis could be predicted. Fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) images are already widely used for pre-treatment staging of cervical cancer, and some studies have used features derived from these images to predict treatment response, local recurrence, or distant metastasis.
However, prediction techniques that combine PET/CT radiomics with conventional machine learning and statistical algorithms remain limited in accuracy, so the probability of prediction error is high.
Accordingly, the present invention provides an improved image deep learning aided prediction system, method and computer program product, which can effectively solve the above problems.
Disclosure of Invention
The invention provides a tumor image deep learning assisted prognosis prediction system for cervical cancer patients, which is built on a deep learning algorithm trained with images of cervical tumors. Through deep learning, a deep convolutional neural network model can be established that analyzes images of a patient's cervical tumor to predict the prognosis after combined chemotherapy and radiotherapy. Through the operation of the deep convolutional neural network, the invention improves prediction accuracy.
According to one aspect of the present invention, a tumor image deep learning assisted prognosis prediction system for cervical cancer patients is provided, for analyzing pre-treatment image data of a patient's cervical tumor. The system comprises a small sample data expansion module and an analysis module. The small sample data expansion module performs data expansion processing on the image data to generate a plurality of slice images. The analysis module performs feature analysis on the slice images through a deep convolutional neural network model to predict the patient's prognosis after receiving the treatment.
According to another aspect of the present invention, a tumor image deep learning assisted method for predicting the prognosis of cervical cancer patients is provided, for analyzing pre-treatment image data of a patient's cervical tumor. The method is performed by a tumor image deep learning assisted cervical cancer patient prognosis prediction system and comprises the following steps: performing data expansion processing on the image data through a small sample data expansion module to generate a plurality of slice images; and performing feature analysis on the slice images through a deep convolutional neural network model by means of an analysis module, so as to predict the patient's prognosis after receiving the treatment.
According to another aspect of the present invention, a computer program product stored on a non-transitory computer readable medium is provided for enabling the operation of a tumor image deep learning assisted cervical cancer patient prognosis prediction system, wherein the computer program product comprises: instructions causing the system to perform data expansion processing on the image data to generate a plurality of slice images; and instructions causing the system to perform feature analysis on the slice images to predict the patient's prognosis after receiving the treatment.
Drawings
FIG. 1(A) is a system architecture diagram of a prognosis prediction system for cervical cancer patients with deep learning of tumor images according to an embodiment of the present invention;
FIG. 1(B) is an architectural diagram of a deep convolutional neural network model according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the basic steps of a method for predicting prognosis of a patient with cervical cancer by deep learning of tumor images according to an embodiment of the present invention;
FIG. 3(A) is a flow chart of a data expansion process according to an embodiment of the invention;
FIG. 3(B) is a schematic view of a volume of interest region in accordance with an embodiment of the present invention;
FIG. 3(C) is a schematic diagram of a basic slice image set according to an embodiment of the present invention;
FIG. 3(D) is a schematic view of an extended slice image set according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a process for building a deep convolutional neural network model according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an example of convolution operations for a multi-layer perceptual convolution layer.
[ Description of reference numerals ]
1 tumor image deep learning assisted cervical cancer patient prognosis prediction system
12 small sample data expansion module
14 analysis module
15 deep convolution neural network model
16 data input terminal
17 training module
20 computer program product
152, 152-1 to 152-3 multi-layer perceptual convolution layers
154 global average pooling layer
156 loss function layer
22 normalization unit
24 activation function
26 maximum pooling layer
S21 to S23, S31 to S35, S41 to S47 steps
50 slice image
51, 51-1, 51-2 pixel locations
52 feature detector
53 feature map
VOI volume of interest region
Detailed Description
The following description will provide various embodiments of the present invention. It is to be understood that these examples are not intended to be limiting. Features of embodiments of the invention may be modified, replaced, combined, separated, and designed to be applied to other embodiments.
Fig. 1(A) is a system architecture diagram of a tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 according to an embodiment of the present invention. As shown in FIG. 1(A), the system 1 includes a small sample data expansion module 12, an analysis module 14 and a deep convolutional neural network model 15. In one embodiment, the system 1 may further include a data input terminal 16 for obtaining image data from the outside, i.e., a user may input image data into the system 1 through the data input terminal 16. The "image" may be, for example, a PET image or CT image of a patient's cervical tumor before treatment, where the treatment may be, for example but not limited to, combined chemotherapy and radiotherapy, and the "image data" may be, for example, the metabolic tumor volume (MTV) range of the PET or CT image, but is not limited thereto. For clarity, the following paragraphs use PET images as the example. When the system 1 obtains the image data, the small sample data expansion module 12 may perform data expansion processing on it to expand it into more detailed image data. The analysis module 14 may then perform feature analysis on the detailed image data through the deep convolutional neural network model 15 (for example, the model 15 analyzes the image data and evaluates its features along an established prediction path), so as to predict the patient's prognosis after receiving the treatment; for example, if image data of a patient's cervical tumor taken before chemotherapy and radiotherapy is input into the system 1, the system 1 can predict the patient's prognosis after that treatment (e.g., the probability of recurrence or metastasis of the cervical tumor). In addition, in an embodiment, the system 1 may further include a training module 17 for controlling a training model to perform deep learning; when training of the training model is completed, the deep convolutional neural network model 15 is formed. Details of each element are described below.
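As a concrete illustration of the flow just described, the following minimal Python sketch strings the modules together (Python being the language the embodiment itself uses). The `expansion_module` and `dcnn_model` objects and the averaging of per-slice probabilities are hypothetical illustrations, not details taken from the patent:

```python
# Minimal sketch of the prediction flow of system 1. The module objects and
# the per-slice aggregation strategy are hypothetical, not patent details.
import numpy as np

def predict_prognosis(image_data, expansion_module, dcnn_model):
    """Step S22: expand one image into slice sets; step S23: run the deep
    convolutional neural network model and report the probability of a
    poor prognosis (tumor recurrence/metastasis)."""
    slice_sets = expansion_module.expand(image_data)      # e.g. 11 sets of 3 slices
    slices = np.stack([s for group in slice_sets for s in group])
    probs = dcnn_model.predict(slices[..., np.newaxis])   # one (good, poor) pair per slice
    return float(probs[:, 1].mean())                      # mean "poor prognosis" probability
```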
The tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 can be an image processing device, which can be implemented by any device having a microprocessor, such as a desktop computer, a notebook computer, an intelligent mobile device, a server, or a cloud host. In an embodiment, the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 may have a network communication function to transmit data through a network, wherein the network communication may be a wired network or a wireless network, so that the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 may also obtain image data through the network. In one embodiment, the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 can be implemented by a computer program product 20 executed by a microprocessor, for example, the computer program product 20 can have a plurality of instructions that can cause the processor to perform special operations, thereby causing the processor to perform the functions of the small sample data expansion module 12, the analysis module 14, the deep convolutional neural network model 15 and the training module 17. In one embodiment, the computer program product 20 may be stored in a non-transitory (non-transitory) computer readable medium, such as but not limited to a recording medium.
In one embodiment, the data input 16 is a physical connection interface for obtaining external data, for example, when the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1 is implemented by a computer, the data input 16 can be a USB interface, various transmission line connectors, and the like, but is not limited thereto. In addition, the data input terminal 16 may also be integrated with a wireless communication chip, so that data can be received in a wireless transmission manner.
The small sample data expansion module 12 may be a functional module, which may be implemented by a program code, for example, when the program code is executed by a microprocessor in the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1, the program code may make the microprocessor execute the functions of the small sample data expansion module 12. The analysis module 14 can also be a functional module, which can be implemented by a program code, for example, when the program code is executed by a microprocessor in the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1, the program code can make the microprocessor execute the functions of the analysis module 14. The training module 17 can also be a functional module and can be implemented by a program code, for example, when the program code is executed by a microprocessor in the tumor image deep learning assisted cervical cancer patient prognosis prediction system 1, the program code can make the microprocessor execute the functions of the training module 17. In one embodiment, the training module 17 may be integrated with the deep convolutional neural network model 15 (the model for training). The modules may be implemented by different programs respectively independent from each other, or by different subroutines in the same program, or by integrating the programs into the computer program product 20, which is not limited in the present invention.
The details of the deep convolutional neural network model 15 are explained next. The deep convolutional neural network model 15 of the present invention is an artificial intelligence model that analyzes image characteristics using a deep convolutional neural network, and is formed by training a training model with a deep learning technique. When training of the training model is completed, a feature path is generated, which can be regarded as a neuron conduction path in the artificial intelligence model, where each neuron can represent an image feature and each image feature may have a different weight value. In one embodiment, the deep convolutional neural network model 15 is composed of a plurality of algorithms (e.g., program code). In addition, to distinguish the deep convolutional neural network model 15 before and after training, the pre-training model is referred to herein as the "training model". In one embodiment, the training model must go through at least one "training phase" to establish a feature path, and at least one "testing phase" to test the accuracy of that feature path; when the accuracy meets the requirement, the trained model can serve as the deep convolutional neural network model 15 for subsequent use. In the present invention, the training model undergoes multiple rounds of training, a different feature path being generated after each round, and the feature path with the highest accuracy is set as the feature path of the deep convolutional neural network model 15.
Fig. 1(B) is an architecture diagram of the deep convolutional neural network model 15 according to an embodiment of the present invention. In order to predict the probability of recurrence or metastasis of cervical tumors, the deep convolutional neural network model 15 (or the training model) of the present invention is configured to include a plurality of multi-layer perceptual convolution layers (mlpconv layers) 152, at least one global average pooling layer 154, and at least one loss function layer 156. The number of multi-layer perceptual convolution layers 152 is determined by the amount of training image data input to the training model in the training stage. In the present embodiment, to balance image resolution, training data volume and computation time, and to avoid the accuracy degradation caused by over-training, the number of multi-layer perceptual convolution layers 152 is set to 3 and the training image data is set to 142 PET images, which generate 1562 slice image sets after the data expansion processing. It is noted that the deep convolutional neural network model 15 described here is merely an example.
In this embodiment, the input of the first multi-layer perceptual convolution layer 152-1 receives the image data (e.g., the slice images expanded by the small sample data expansion module 12), the output of the first multi-layer perceptual convolution layer 152-1 is connected to the input of the second multi-layer perceptual convolution layer 152-2, the output of the second multi-layer perceptual convolution layer 152-2 is connected to the input of the third multi-layer perceptual convolution layer 152-3, the output of the third multi-layer perceptual convolution layer 152-3 is connected to the input of the global average pooling layer 154, and the output of the global average pooling layer 154 is connected to the loss function layer 156. The global average pooling layer 154 may have two outputs, representing a good prognosis (low likelihood of tumor recurrence/metastasis) and a poor prognosis (high likelihood of tumor recurrence/metastasis), respectively.
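For concreteness, the following Keras sketch shows one possible realization of the architecture of FIG. 1(B): three mlpconv blocks (each with the normalization unit 22, activation function 24 and maximum pooling layer 26 described below), a global average pooling layer with two outputs, and a softmax/cross-entropy output stage standing in for the loss function layer 156. The filter counts, kernel sizes and input size are assumptions, and the probability-balancing behavior of the loss function layer 156 is not reproduced:

```python
# Minimal Keras sketch of one possible realization of FIG. 1(B). Filter
# counts, kernel sizes and the 28x28 input are assumptions, not patent values.
from keras import layers, models

def mlpconv(x, filters, last=False):
    # mlpconv block: a spatial convolution followed by two 1x1 "micro-MLP"
    # convolutions, with batch normalization (22), ReLU (24) and pooling (26)
    x = layers.Conv2D(filters, (3, 3), padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, (1, 1), activation="relu")(x)
    x = layers.Conv2D(2 if last else filters, (1, 1), activation="relu")(x)
    if not last:
        x = layers.MaxPooling2D((2, 2))(x)
    return x

inputs = layers.Input(shape=(28, 28, 1))          # one slice image (size assumed)
x = mlpconv(inputs, 32)                           # 152-1
x = mlpconv(x, 64)                                # 152-2
x = mlpconv(x, 64, last=True)                     # 152-3: ends with 2 feature maps
x = layers.GlobalAveragePooling2D()(x)            # 154: one value per output class
outputs = layers.Activation("softmax")(x)         # non-ReLU activation of 154
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")  # stand-in for 156
```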
It is noted that each multi-layer perceptual convolution layer 152 is configured to include at least one feature detector. In one embodiment, the feature detector may be represented in matrix form and obtains features in the image data by performing convolution operations with the pixel value (e.g., the SUV value of each volume pixel) at each pixel location of the image data. The initial content of each feature detector is randomly generated, and the content of the feature detectors can be adjusted during training according to the accuracy of the training results, thereby improving the training effect. In one embodiment, the feature detector of the first multi-layer perceptual convolution layer 152-1 performs a first convolution operation with the original image data input into the training model, thereby generating a first feature map, whose size is smaller than that of the input image; the feature detector of the second multi-layer perceptual convolution layer 152-2 performs a second convolution operation with the convolution result of the first multi-layer perceptual convolution layer 152-1 (the first feature map) to generate a second feature map, whose size is smaller than that of the first feature map; and the feature detector of the third multi-layer perceptual convolution layer 152-3 performs a third convolution operation with the convolution result of the second multi-layer perceptual convolution layer 152-2 (the second feature map) to generate a third feature map. Finally, the global average pooling layer 154 extracts the important features from the third feature map. Thereby, important features related to tumor recurrence or metastasis can be identified in the image of the cervical tumor.
In one embodiment, each multi-layer perceptual convolution layer 152 may include a normalization unit 22 for performing a normalization operation, for example batch normalization. The normalization unit 22 normalizes the convolution result of each multi-layer perceptual convolution layer 152, thereby speeding up the convergence of subsequent data processing and making the training process more stable. Furthermore, in one embodiment, each multi-layer perceptual convolution layer 152 may include a maximum pooling layer 26 for performing a pooling operation, for example max pooling. The role of the maximum pooling layer 26 is to reduce the size of the feature map obtained by the multi-layer perceptual convolution layer 152 while concentrating its features into the reduced feature map; broadly speaking, the maximum pooling layer 26 extracts the important features from the feature map and thereby emphasizes them. In some embodiments, the maximum pooling layer 26 may be replaced by an average pooling layer architecture.
In one embodiment, each of the multi-layer perceptual convolution layers 152 and the global average pooling layer 154 may include an activation function 24. The activation function 24 adjusts the output of the multi-layer perceptual convolution layer 152 or the global average pooling layer 154 so that the output exhibits a nonlinear effect, thereby improving the prediction capability of the training model. The activation function 24 may be a saturated activation function or a non-saturated activation function; when saturated, architectures such as tanh or sigmoid may be employed, and when non-saturated, a rectified linear unit (ReLU) or one of its variants (e.g., ELU, Leaky ReLU, PReLU, RReLU or other variant architectures) may be employed. In a preferred embodiment, the activation function 24 of each multi-layer perceptual convolution layer 152 is a ReLU, and the activation function 24 of the global average pooling layer 154 is an architecture other than ReLU.
In one embodiment, the global average pooling layer 154 reduces the dimensionality of the feature maps obtained by the multi-layer perceptual convolution layers 152, representing each feature map as a single average value; broadly speaking, the global average pooling layer 154 extracts the important features from the feature maps, which reduces the computational complexity while emphasizing the important features.
In one embodiment, the function of the loss function layer 156 is to balance the prediction probabilities, i.e., to make the predicted probabilities of the two output results of a feature path similar, so as to prevent the training model or the deep convolutional neural network model 15 from producing a large number of predictions for only a single output result.
In one embodiment, the deep convolutional neural network model 15 or the training model can be implemented in the Python 3.6.4 programming language on a server equipped with an NVIDIA Quadro M4000 graphics card, using the TensorFlow deep learning framework with the Keras 2.1.3 deep learning module, but is not limited thereto.
The basic operation of the present invention is described next. Fig. 2 is a flowchart illustrating the basic steps of a tumor image deep learning assisted cervical cancer patient prognosis prediction method according to an embodiment of the present invention, which is performed by the system 1 shown in FIGS. 1(A) and 1(B) with the deep convolutional neural network model 15 already trained. As shown in FIG. 2, step S21 is first executed: the data input terminal 16 obtains image data of a cervical tumor taken before chemotherapy and radiotherapy. Step S22 is then executed: the small sample data expansion module 12 performs data expansion processing on the image data to generate a plurality of slice images of the image data. Step S23 is then executed: the deep convolutional neural network model 15 performs feature analysis on the slice images to predict the patient's prognosis after receiving chemotherapy and radiotherapy (i.e., the probability of recurrence or metastasis of the patient's cervical tumor). The details of each step are described next.
In step S21, a user of the system (e.g., a physician) can input the patient's image data into the system 1 through the data input terminal 16, where the image data may be, for example, the metabolic tumor volume (MTV) range of the cervical tumor in a PET image. The MTV range can be obtained by various known methods. In one embodiment, the image data shows the patient's cervical tumor exhibiting an abnormal metabolic response after uptake of a tracer (e.g., 18F-FDG). In one embodiment, the image data may have a plurality of volume pixels (voxels), the pixel value of each voxel being the standardized uptake value (SUV) of glucose.
In step S22, after the patient's image data is input into the system 1, the microprocessor in the system 1 operates according to the instructions in the computer program product 20; that is, the system 1 performs data expansion processing on the image data through the small sample data expansion module 12. The purpose of step S22 is to compensate for the limited amount of available image data, which would otherwise make the training result fall short of expectations; the data volume must therefore be expanded before training. The data expansion processing is detailed below with reference to FIG. 3(A), which is a schematic flowchart of the data expansion processing according to an embodiment of the present invention; the processing is executed by the small sample data expansion module 12, i.e., the whole flow can be realized through execution by the processor in the system 1.
[ data expansion processing ]
First, step S31 is executed: the small sample data expansion module 12 interpolates the image data input into the system 1 (hereinafter, the original image data). This step improves the resolution of the image data. In one embodiment, each volume pixel of the original image data measures 5.47 mm × 5.47 mm × 3.27 mm, and after interpolation each volume pixel measures 2.5 mm × 2.5 mm × 2.5 mm, so the image resolution is improved.
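A minimal sketch of this interpolation step, under the voxel-size assumptions above; scipy's spline-based resampling is used here for illustration and is not necessarily the method of the patent:

```python
# Minimal sketch of step S31: resample a PET volume of SUV values to
# isotropic 2.5 mm voxels. The interpolation order is an assumption.
import numpy as np
from scipy.ndimage import zoom

def interpolate_to_isotropic(volume: np.ndarray,
                             spacing=(5.47, 5.47, 3.27),
                             target=2.5) -> np.ndarray:
    factors = [s / target for s in spacing]   # per-axis resampling factor
    return zoom(volume, factors, order=3)     # cubic spline interpolation
```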
Thereafter, step S32 is executed: since the interpolation changes the image resolution, the small sample data expansion module 12 converts the pre-interpolation coordinate of the volume pixel with the maximum SUV value (SUVmax) in the original image data into the corresponding post-interpolation coordinate.
Thereafter, step S33 is executed: the small sample data expansion module 12 extracts a volume of interest (VOI) centered on the SUVmax voxel from the interpolated image data. In one embodiment, the size of the VOI is set to 70 mm × 70 mm × 70 mm, which covers the MTV of most cervical tumors, although the invention is not limited thereto; an example is shown in FIG. 3(B), which is a schematic view of a VOI according to an embodiment of the present invention.
Thereafter, step S34 is executed: the small sample data expansion module 12 sets the XY plane, XZ plane and YZ plane passing through the VOI center (the SUVmax voxel) as a basic slice image set, as shown in FIG. 3(C), which is a schematic diagram of a basic slice image set (without coordinate rotation) according to an embodiment of the present invention.
Then, step S35 is executed: the small sample data expansion module 12 rotates one of the planes of the basic slice image set counterclockwise about a specific direction to obtain a plurality of expanded slice image sets. In one embodiment, the small sample data expansion module 12 rotates the XY plane of the basic slice image set, but is not limited thereto. In one embodiment, the rotation is performed counterclockwise about the Y-axis direction, but is not limited thereto. In one embodiment, the plane is rotated by 15°, 30°, 45°, 60°, 75°, 105°, 120°, 135°, 150° and 165° respectively, yielding 10 rotated slice sets (i.e., expanded slice image sets), as shown in FIG. 3(D), which is a schematic diagram of an expanded slice image set according to an embodiment of the present invention. In one embodiment, after the 10 rotations, a single image data item yields 11 slice image sets (1 basic set and 10 rotated sets, each comprising 3 slice images), which greatly increases the amount of data and makes the local features of the tumor more evident. In the present embodiment, each slice image is a two-dimensional image, so the subsequent convolution operations are performed on two-dimensional images; in another embodiment, the slice images may be three-dimensional, with the convolution operations performed on three-dimensional images.
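The following sketch illustrates steps S33 to S35 under the assumptions above (2.5 mm isotropic voxels, so the 70 mm VOI spans 28 voxels per side). Rotating the whole VOI about the Y axis and re-extracting the three central planes is one possible realization of the plane rotation described here, not necessarily the patent's exact implementation:

```python
# Minimal sketch of steps S33-S35: crop the VOI around SUVmax, then build
# the basic slice image set plus 10 rotated sets (11 sets of 3 slices).
import numpy as np
from scipy.ndimage import rotate

def extract_voi(volume, suvmax_index, half=14):
    """Crop a 28x28x28-voxel VOI centered on the SUVmax voxel (step S33)."""
    z, y, x = suvmax_index
    return volume[z - half:z + half, y - half:y + half, x - half:x + half]

def slice_sets(voi, angles=(0, 15, 30, 45, 60, 75, 105, 120, 135, 150, 165)):
    """Return 11 slice sets, each holding the central XY/XZ/YZ planes (S34-S35)."""
    c = voi.shape[0] // 2
    sets = []
    for a in angles:
        # rotate in the X-Z plane, i.e. about the Y axis; 0 deg is the basic set
        v = rotate(voi, a, axes=(0, 2), reshape=False, order=1) if a else voi
        sets.append([v[c, :, :], v[:, c, :], v[:, :, c]])  # XY, XZ, YZ planes
    return sets
```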
Through the above steps, the data expansion processing is completed, and the small sample data expansion module 12 can generate a plurality of slice images from a single image data item.
Please refer to FIG. 2 again. In step S23, after acquiring the plurality of slice images, the deep convolutional neural network model 15 analyzes them. Since each slice image contains local features of the tumor, the deep convolutional neural network model 15 can automatically analyze these local features and determine the output result of each slice image through the feature path, thereby predicting the patient's prognosis after receiving chemotherapy and radiotherapy (the likelihood of recurrence/metastasis of the cervical tumor).
Thus, the system 1 can predict the patient's prognosis after receiving chemotherapy and radiotherapy, and assist the user (e.g., a physician) in deciding whether to adjust the treatment.
In order for the deep convolutional neural network model 15 to achieve the effect of step S23, the deep convolutional neural network model 15 must be trained through deep learning. The process and method for building the deep convolutional neural network model 15 will be described in detail below.
[ Method of building the deep convolutional neural network model 15 ]
In order to build the deep convolutional neural network model 15, the training model must be trained with a large amount of image data so that it can find features in the image data and establish feature paths. For the training result (feature path) to approach the effect of a human-like neural network, the training model must be trained many times; in each round of training, the training model performs convolution operations between feature detectors and the image data, where the initial feature detectors are randomly generated, so each round may produce a different feature path. The system 1 then verifies the accuracy of each feature path and sets the one with the highest accuracy as the feature path of the deep convolutional neural network model 15. The following are key points in the construction of the deep convolutional neural network model 15 of the present invention.
[ Key points in building the deep convolutional neural network model 15 ]
1. Architecture of the training model
In the present invention, the training model uses the same basic architecture for each round of training (the difference being that the feature detectors are randomly regenerated each round). Before training, the basic architecture of the training model must be set. In this embodiment, the training model comprises three multi-layer perceptual convolution layers, one global average pooling layer and one loss function layer, wherein each multi-layer perceptual convolution layer comprises at least one randomly generated feature detector. The above training model architecture is a preferred example and not a limitation of the present invention.
2. Purpose of training
In the present invention, "training" refers to the process of inputting a large amount of image data into the training model, which then automatically analyzes and summarizes the data using a pre-written algorithm, thereby establishing a feature path. The objective of training is to find important features in a large amount of data, establish feature paths, and identify the feature path with the best accuracy; the training model establishes one feature path per round of training, and each feature path corresponds to two output results (one for good prognosis, one for poor prognosis).
3. Number of training rounds
In the present invention, the total number of training rounds of the training model is set to 500, so that after all training is completed, 500 different feature paths have been established. It should be noted that this number of training rounds is only a preferred example and not a limitation of the present invention.
4. Amount of data
In this embodiment, PET images of the cervical tumors of 142 patients are used as the raw image data; among these 142 PET images, 121 show no local recurrence after treatment and 21 show local recurrence after treatment. It should be noted that this amount of image data is only a preferred example and not a limitation of the present invention. In addition, the content of the image data may vary according to which prognosis is to be predicted: to predict tumor recurrence, the PET image data is paired with recurrence-related prognosis data; to predict metastasis, with metastasis-related prognosis data; and so on.
5. Expansion of data
In the present embodiment, each PET image can be expanded into 11 slice image sets by the small sample data expansion module 12, so that 1562 slice image sets (each comprising 3 slice images) are finally generated. The data expansion method for each PET image follows the description of FIG. 3(A) and is not repeated here.
6. Distribution of data
For each training round, the 1562 slice image sets are randomly assigned to a test set and a training set. In one embodiment, the random assignment distributes the 142 PET images into 10 groups, each containing a similar number of good-prognosis and poor-prognosis PET images; one group is set as the test set and the remaining groups together form the training set. In one embodiment, the small sample data expansion module 12 starts data expansion only after the test set and training set have been allocated, but the invention is not limited thereto. The "test set" and "training set" are detailed next. The test set is the image data used, once training of the training model is completed (e.g., after 500 rounds), to test the accuracy of the trained model. In one embodiment, each group is set as the test set once, with the remaining groups combined as the training set; the accuracy of the model established after each training is determined from the test results on the test set, and the 10 test results are then summarized. This accuracy testing method is only exemplary and not limiting. The training set is the image data used to train the training model in the training phase. In one embodiment, the training-set data of each round may be divided into a sub-training set and a sub-validation set: the sub-training set trains the training model to generate a preliminary feature path, the sub-validation set is used by the training model to decide whether to adjust the preliminary feature path, and the adjusted feature path is the feature path established by that round of training. In one embodiment, the size ratio of the sub-training set to the sub-validation set is 8:2. The foregoing is by way of example only and is not a limitation of the present invention. In one embodiment, the training-set data may also be left undivided.
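As an illustration of this allocation, the sketch below builds the 10 stratified groups and the 8:2 sub-training/sub-validation split with scikit-learn; the library choice and the random seeds are assumptions, not part of the patent:

```python
# Minimal sketch of the data distribution: 142 PET images split into 10
# stratified folds; each fold serves once as the test set, and the rest is
# further split 8:2 into sub-training and sub-validation data.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

labels = np.array([0] * 121 + [1] * 21)   # 0: no local recurrence, 1: recurrence
patients = np.arange(len(labels))

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(patients, labels):
    sub_train, sub_val = train_test_split(
        train_idx, test_size=0.2, stratify=labels[train_idx], random_state=0)
    # ...expand the sub_train/sub_val/test_idx images into slice sets, train,
    # and evaluate, as described in the text...
```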
7. Training and testing
Referring again to FIG. 1(A), in one embodiment, when the training model is being trained, the prognosis corresponding to each image data item in the training set is known to the system 1 in advance, so that the training model can analyze and summarize the features that may correspond to each prognosis. When the accuracy of a trained model is tested, the prognosis corresponding to the image data in the test set is not known to the model in advance; after the model produces its results, they are compared against the actual prognoses of the test-set image data to determine the model's accuracy. In one embodiment, the model with the highest prediction accuracy is set by the system 1 as the deep convolutional neural network model 15; the training module thereby forms the deep convolutional neural network model 15. The foregoing is by way of example only and is not a limitation of the present invention.
[ Details of the process of building the deep convolutional neural network model 15 ]
Fig. 4 is a schematic diagram of the process of building the deep convolutional neural network model 15 according to an embodiment of the present invention, wherein steps S41 to S46 correspond to the "training phase" and step S47 corresponds to the "testing phase". Please refer to FIGS. 1(A), 1(B) and 4. First, step S41 is executed: the basic architecture of the training model is set up, i.e., the numbers of multi-layer perceptual convolution layers, global average pooling layers and loss function layers are configured, with the initial feature detectors in the multi-layer perceptual convolution layers generated randomly. Step S42 is then executed to acquire a plurality of image data (the training set) for the training model. In step S43, the feature detectors of the multi-layer perceptual convolution layers perform convolution operations on the image data (the training set) to find image features. In step S44, the global average pooling layer reinforces the image features. In step S45, the training model establishes a feature path, which includes two output results: one corresponding to a good prognosis and the other to a poor prognosis. In step S46, steps S42 to S45 are executed again until the preset number of training rounds (e.g., 500) is completed. Finally, in step S47, the system 1 tests the accuracy of each feature path using the image data of the test set, and sets the feature path with the highest accuracy as the feature path of the deep convolutional neural network model 15. Details of each step are explained below.
Since the contents of steps S41, S42, S46 and S47 have already been covered in the preceding paragraphs, they are not detailed again here.
Regarding step S43, the convolution operation is illustrated below by an example; note that this example is simplified and the actual operation is more complex. FIG. 5 is a diagram illustrating an example of the convolution operation of a multi-layer perceptual convolution layer. As shown in FIG. 5, a slice image 50 may include a plurality of pixel locations 51, each having a pixel value (e.g., an SUV value). The pixel locations of the slice image 50 are convolved with the feature detector 52 of the multi-layer perceptual convolution layer 152. When all pixel positions have been convolved with the feature detector 52, a feature map 53 is generated.
Taking pixel position 51-1 as an example: the positions where the feature detector 52 has the value "1" correspond to surrounding pixel values of "0", so after the convolution operation the value of pixel position 51-1 on the feature map 53 is "0". Taking pixel position 51-2 as an example: the positions where the feature detector 52 has the value "1" correspond to surrounding pixel values of "1", so after the convolution operation the value of pixel position 51-2 on the feature map 53 is "3". Since the feature detector 52 in this example is 3 × 3, some pixel positions in the slice image 50 (e.g., the peripheral ones) are eliminated because the convolution operation cannot be centered on them, so the feature map 53 has fewer pixel positions than the slice image 50. In this manner, when the convolution operations of the three multi-layer perceptual convolution layers are completed, the features in the image data (the PET image of the cervical tumor) can be found.
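The worked example of FIG. 5 can be reproduced with a few lines of numpy. This sketch implements exactly the "valid" sliding-window sum of products described above (as in most CNN frameworks, the detector is applied without flipping); the pixel values are illustrative only:

```python
# Minimal sketch of the "valid" convolution of FIG. 5: a 3x3 feature
# detector slides over the slice image, so the feature map loses a
# one-pixel border relative to the input.
import numpy as np

def conv2d_valid(image: np.ndarray, detector: np.ndarray) -> np.ndarray:
    kh, kw = detector.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # sum of the element-wise products of detector and image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * detector)
    return out
```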
In addition, regarding step S43, in one embodiment, after the convolution operation of each multi-layer perceptual convolution layer is completed, the activation function activates that layer's convolution result so that it exhibits a nonlinear effect. In one embodiment, the activation function of each multi-layer perceptual convolution layer is ReLU, but is not so limited.
In step S44, after the convolution operation is completed, the activation function retains part of the features in the feature map and removes other data, and pooling reduces the feature map, thereby reinforcing the features. In one embodiment, the activation function of the global average pooling layer can be represented by the following equation (the softmax function):

f(p_i) = e^(p_i) / Σ_j e^(p_j)

where p_i is the i-th pooling result (one per output class), f(p_i) is its activation result, and e is the exponential function. In addition, in one embodiment, the pooling results can be multiplied by different scaling factors as required to adjust the range of the values.
Regarding step S45, in one embodiment, the loss function is set so that the probability of any image data being predicted as good or poor is similar, ensuring that the feature path does not favor one of the output results.
Through the execution of steps S41 to S47, the deep convolutional neural network model 15 acquires the most accurate feature path and can be used to predict the likelihood of recurrence/metastasis of other patients' cervical tumors. After the deep convolutional neural network model 15 is established, a patient's PET image can be input into the system 1 to analyze the expected treatment outcome. In this embodiment, of the 142 cervical cancer patients who received chemotherapy and radiotherapy, 21 had local recurrence (actual data), and under cross validation the constructed prediction system achieved 89% accuracy, 93% specificity and a 95% negative predictive value for this prediction. In addition, among the same patients, 26 had distant metastasis (actual data), and under cross validation the constructed prediction system achieved 87% accuracy, 90% specificity and a 95% negative predictive value.
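For reference, the reported figures relate to a standard confusion matrix as in the sketch below, where "positive" means local recurrence; the prediction array is a placeholder, not the study's actual model output:

```python
# Minimal sketch of the reported metrics: specificity = TN / (TN + FP),
# negative predictive value = TN / (TN + FN). y_pred is a placeholder.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0] * 121 + [1] * 21)   # 21 of 142 patients relapsed locally
y_pred = np.zeros_like(y_true)            # placeholder predictions
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
```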
In one embodiment, the cervical tumor image assisted prediction system 1, method and computer program product of the present invention can be realized by at least the contents described in the document "Deep Learning for 18F-FDG PET Can Become an Excellent Prognostic Tool in Patients with Advanced Cervical Cancer", but are not limited thereto.
Therefore, the cervical tumor image assisted prediction system of the invention can expand a small amount of image data through the small sample data expansion module, without requiring huge amounts of image data to be input at the outset. In addition, through the deep convolutional neural network model trained by deep learning, the invention can accurately predict the likelihood of recurrence/metastasis of a cervical tumor after chemoradiotherapy, helping patients seek the best treatment.
Although the present invention has been described by the above embodiments, it is understood that many modifications and variations are possible in light of the spirit of the invention and the scope of the claims appended hereto.

Claims (11)

1. A tumor image deep learning assisted cervical cancer patient prognosis prediction system for analyzing image data of a cervical tumor of a patient before a treatment, comprising:
a small sample data expansion module for performing data expansion processing on the image data to generate a plurality of slice images of the image data; and
an analysis module for performing a feature analysis on the slice images through a deep convolutional neural network model to predict a prognosis of the patient after receiving the treatment.
2. The tumor image deep learning assisted cervical cancer patient prognosis prediction system of claim 1, wherein the small sample data expansion module performs an image interpolation process on the image data, forms a basic slice image set centered on the SUVmax voxel of the interpolated image data, and then rotates the basic slice image set in a specific direction to form a plurality of expanded slice image sets.
3. The system of claim 1, wherein the deep convolutional neural network model is formed by a training model undergoing multiple training and testing, wherein the training and testing is performed by a training module, the training model is trained multiple times by using multiple training image data, so as to generate multiple different feature paths, the accuracy of each feature path is tested by using multiple testing image data, and the feature path with the highest accuracy is set as the feature path of the deep convolutional neural network model.
4. The system of claim 3, wherein the training model comprises a plurality of multi-layer perceptual convolution layers (mlpconv layers), at least one global average pooling layer and at least one loss function layer arranged in sequence, and each multi-layer perceptual convolution layer comprises a randomly generated feature detector.
5. The system of claim 4, wherein during each training, the feature detector of a first multi-layered perceptual convolution layer performs a first convolution operation with the training image data, and the feature detectors of subsequent multi-layered perceptual convolution layers each perform a convolution operation with the convolution operation result of the previous multi-layered perceptual convolution layer to find out features of the training image data.
6. A tumor image deep learning assisted cervical cancer patient prognosis prediction method for analyzing image data of a cervical tumor of a patient before a treatment, performed by a tumor image deep learning assisted cervical cancer patient prognosis prediction system, the method comprising the following steps:
performing data expansion processing on the image data through a small sample data expansion module to generate a plurality of slice images of the image data; and
performing a feature analysis on the slice images through a deep convolutional neural network model by an analysis module to predict a prognosis of the patient after receiving the treatment.
7. The method of claim 6, wherein the small sample data expansion module performs an image interpolation process on the image data, forms a basic slice image set centered on the SUVmax voxel of the interpolated image data, and then rotates the basic slice image set in a specific direction to form a plurality of expanded slice image sets.
8. The method of claim 6, wherein the deep convolutional neural network model is formed by a training model undergoing multiple training and testing, wherein the training and testing comprises:
training the training model multiple times with a plurality of training image data through a training module, thereby generating a plurality of different feature paths;
testing the accuracy of each feature path with a plurality of test image data through the training module; and
setting the feature path with the highest accuracy as the feature path of the deep convolutional neural network model through the training module.
9. The method of claim 8, wherein the training model comprises a plurality of multi-layered perceptual convolution layers, at least one global average pooling layer and at least one loss function layer arranged in sequence, and each multi-layered perceptual convolution layer comprises a randomly generated feature detector.
10. The method of claim 9, wherein during each training, the feature detector of a first multi-layered perceptual convolution layer performs a first convolution operation with the image data for training, and the feature detectors of subsequent multi-layered perceptual convolution layers each perform a convolution operation with the convolution operation result of the previous multi-layered perceptual convolution layer, thereby finding out a plurality of features of the image data for training.
11. A computer program product stored on a non-transitory computer readable medium for enabling a tumor image deep learning assisted cervical cancer patient prognosis prediction system to operate, wherein the system is configured to analyze image data of a cervical tumor of a patient before a treatment, and wherein the computer program product comprises:
instructions for performing data expansion processing on the image data to generate a plurality of slice images of the image data; and
instructions for performing a feature analysis on the slice images through a deep convolutional neural network model to predict a prognosis of the patient after the treatment.
CN201811561566.6A 2018-12-20 2018-12-20 Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method Pending CN111354442A (en)

Priority Applications (1)

Application Number: CN201811561566.6A (CN111354442A) · Priority Date: 2018-12-20 · Filing Date: 2018-12-20 · Title: Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method

Applications Claiming Priority (1)

Application Number: CN201811561566.6A (CN111354442A) · Priority Date: 2018-12-20 · Filing Date: 2018-12-20 · Title: Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method

Publications (1)

Publication Number: CN111354442A · Publication Date: 2020-06-30

Family

ID=71196825

Family Applications (1)

Application Number: CN201811561566.6A (CN111354442A) · Priority Date: 2018-12-20 · Filing Date: 2018-12-20 · Title: Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method

Country Status (1)

CN: CN111354442A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005714A (en) * 2015-06-18 2015-10-28 中国科学院自动化研究所 Non-small cell lung cancer prognosis method based on tumor phenotypic characteristics
CN105718952A (en) * 2016-01-22 2016-06-29 武汉科恩斯医疗科技有限公司 Method for focus classification of sectional medical images by employing deep learning network
US20190223789A1 (en) * 2016-08-30 2019-07-25 Washington University Quantitative differentiation of tumor heterogeneity using diffusion mr imaging data
US20190209116A1 (en) * 2018-01-08 2019-07-11 Progenics Pharmaceuticals, Inc. Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
CN108647775A (en) * 2018-04-25 2018-10-12 陕西师范大学 Super-resolution image reconstruction method based on full convolutional neural networks single image
CN108694718A (en) * 2018-05-28 2018-10-23 中山大学附属第六医院 The same period new chemoradiation therapy curative effect evaluation system and method before rectal cancer
CN108921195A (en) * 2018-05-31 2018-11-30 沈阳东软医疗系统有限公司 A kind of Lung neoplasm image-recognizing method neural network based and device
CN109003672A (en) * 2018-07-16 2018-12-14 北京睿客邦科技有限公司 A kind of early stage of lung cancer detection classification integration apparatus and system based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘影, 许璐, 周静, 刘倩, 李文波, 庞华: "18F-FDG PET/CT代谢体积参数对Ⅱ~Ⅲ期非小细胞肺癌的预后分析", 中国医学影像技术 *
王胜军, 陆宏军, 杨卫东, 石梅, 魏丽春, 马晓伟, 赵小虎, 汪静: "18F-FDG PET/CT代谢体积与病理体积对比确定宫颈癌最大标准摄取值的最佳百分阈值", 解放军医学杂志, no. 08 *
蔡鸿宁等: "人工神经网络在宫颈癌预后预测中的应用", 《肿瘤防治研究》, no. 09, 25 September 2012 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022063200A1 (en) * 2020-09-24 2022-03-31 上海健康医学院 Non-small cell lung cancer prognosis survival prediction method, medium and electronic device
CN111932541A (en) * 2020-10-14 2020-11-13 北京信诺卫康科技有限公司 CT image processing method for predicting prognosis of new coronary pneumonia
CN111932541B (en) * 2020-10-14 2021-02-02 北京信诺卫康科技有限公司 CT image processing method for predicting prognosis of new coronary pneumonia
CN113390913A (en) * 2021-06-10 2021-09-14 中国科学院高能物理研究所 Positron annihilation angle correlation measurement method and device based on deep learning
CN113390913B (en) * 2021-06-10 2022-04-12 中国科学院高能物理研究所 Positron annihilation angle correlation measurement method and device based on deep learning

Similar Documents

Publication Publication Date Title
TWI681406B (en) Deep learning of tumor image-aided prediction of prognosis of patients with uterine cervical cancer system, method and computer program product thereof
Fan et al. Adversarial learning for mono-or multi-modal registration
AU2020261370A1 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
KR20220069106A (en) Systems and Methods Using Self-Aware Deep Learning for Image Enhancement
CN111354442A (en) Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method
Tao et al. SeqSeg: A sequential method to achieve nasopharyngeal carcinoma segmentation free from background dominance
CN111488872A (en) Image detection method, image detection device, computer equipment and storage medium
Peiris et al. Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation
CN116091490A (en) Lung nodule detection method based on YOLOv4-CA-CBAM-K-means++ -SIOU
Diao et al. A unified uncertainty network for tumor segmentation using uncertainty cross entropy loss and prototype similarity
Zhu et al. CRCNet: Global-local context and multi-modality cross attention for polyp segmentation
CN114332563A (en) Image processing model training method, related device, equipment and storage medium
Yousefirizi et al. TMTV-Net: fully automated total metabolic tumor volume segmentation in lymphoma PET/CT images—a multi-center generalizability analysis
Jiang et al. PCF-Net: Position and context information fusion attention convolutional neural network for skin lesion segmentation
US20240005507A1 (en) Image processing method, apparatus and non-transitory computer readable medium for performing image processing
CN112037886B (en) Radiotherapy plan making device, method and storage medium
Dabass et al. Lung segmentation in CT scans with residual convolutional and attention learning-based U-Net
CN111192252B (en) Image segmentation result optimization method and device, intelligent terminal and storage medium
TWI726459B (en) Transfer learning aided prediction system, method and computer program product thereof
Atiya et al. Enhancing non-small cell lung cancer radiotherapy planning: A deep learning-based multi-modal fusion approach for accurate GTV segmentation
CN112712875B (en) Auxiliary prediction system and method for transfer learning
Zhang et al. XTransCT: ultra-fast volumetric CT reconstruction using two orthogonal x-ray projections for image-guided radiation therapy via a transformer network
Huang et al. HST-MRF: heterogeneous Swin transformer with multi-receptive field for medical image segmentation
Ji et al. Affine medical image registration with fusion feature mapping in local and global
CN114581463B (en) Multi-phase 4D CT image segmentation method and system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2020-06-30)