CN117727441A - Method for predicting lung cancer immune curative effect based on clinical-fusion image computer model - Google Patents

Method for predicting lung cancer immune curative effect based on clinical-fusion image computer model

Info

Publication number
CN117727441A
CN117727441A (application CN202311695579.3A)
Authority
CN
China
Prior art keywords: image, PET, clinical, lung cancer, patient
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis.): Pending
Application number
CN202311695579.3A
Other languages
Chinese (zh)
Inventor
刘建井
隋春晓
边海曼
徐文贵
Current Assignee
Tianjin Medical University Cancer Institute and Hospital
Original Assignee
Tianjin Medical University Cancer Institute and Hospital
Priority date
Filing date
Publication date
Application filed by Tianjin Medical University Cancer Institute and Hospital
Priority application: CN202311695579.3A
Publication: CN117727441A

Abstract

The invention discloses a method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model, in the field of lung cancer immunotherapy. The method comprises the following steps: S1, collecting clinical data of the patient; S2, extracting CT morphological features of the patient; S3, measuring PET metabolic parameters; S4, segmenting the PET and CT images and extracting and screening radiomics parameters; S5, screening stable radiomics features; S6, establishing a mapping relation and several combined models. In this method, parameterized modeling of the PET/CT images gives the mapping model from image features to treatment response higher accuracy; combining machine segmentation with manual delineation in the PET/CT tumor-region segmentation module ensures the accuracy and validity of the segmentation to the greatest extent; and the computer-built mathematical model is used to predict the efficacy of neoadjuvant immunotherapy for lung cancer.

Description

Method for predicting lung cancer immune curative effect based on clinical-fusion image computer model
Technical Field
The invention relates to the technical field of lung cancer immunotherapy, and in particular to a method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model.
Background
Immunotherapy is increasingly used in lung cancer and has brought broad clinical benefit. At present, treatment response is evaluated internationally with the Response Evaluation Criteria in Solid Tumors, version 1.1 (RECIST 1.1).
According to RECIST 1.1, tumor response is classified into four categories: complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD). Disease sites must be assessed in the same way as at baseline, including consistent contrast enhancement and scan timing. CR: all target lesions disappear, and the short axis of every pathological lymph node (target or non-target) must shrink to < 10 mm. PR: the sum of target-lesion diameters decreases by at least 30% from baseline. PD: the sum of diameters increases by at least 20% relative to the smallest sum of target-lesion diameters recorded during the study (the baseline value if that is the minimum); in addition, the sum must increase by an absolute value of at least 5 mm (the appearance of one or more new lesions also counts as progression). SD: the target lesions neither shrink enough for PR nor grow enough for PD, taking the minimum sum of diameters as the reference. Not evaluable: progression is not documented and one or more measurable target lesions were not assessed, or the assessment method was inconsistent with baseline, or one or more target lesions could not be measured accurately, or one or more target lesions were resected or irradiated and did not relapse or enlarge.
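The RECIST 1.1 decision rules above can be sketched as a small classification function. This is an illustrative simplification, not the patent's code: it handles only the sums of target-lesion diameters, and the nodal short-axis and non-target-lesion rules are reduced to two boolean flags.

```python
def recist_category(baseline_sum, nadir_sum, current_sum,
                    new_lesions=False, all_target_lesions_resolved=False):
    """Classify tumor response per RECIST 1.1 from sums of target-lesion
    diameters (mm). Simplified sketch: nodal short-axis and non-target
    lesion rules are not modeled.

    baseline_sum -- sum of diameters at baseline
    nadir_sum    -- smallest sum recorded during the study
                    (equals baseline_sum if baseline is the minimum)
    current_sum  -- sum of diameters at the current assessment
    """
    if new_lesions:
        return "PD"  # one or more new lesions always means progression
    if all_target_lesions_resolved:
        return "CR"  # all target lesions disappeared
    # PD: >= 20% increase over the nadir AND an absolute increase >= 5 mm
    if current_sum - nadir_sum >= 5 and current_sum >= 1.2 * nadir_sum:
        return "PD"
    # PR: >= 30% decrease from the baseline sum
    if current_sum <= 0.7 * baseline_sum:
        return "PR"
    return "SD"  # neither enough shrinkage for PR nor enough growth for PD
```

For example, a drop from a 100 mm baseline sum to 65 mm is a PR, while a rise from a 50 mm nadir to 65 mm (a 30% and 15 mm increase) is PD.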
However, no effective system currently exists for predicting the efficacy of neoadjuvant immunotherapy for lung cancer.
Disclosure of Invention
The invention aims to provide a method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model, in order to solve the following problems of the prior art: the existing evaluation system can only assess the actual effect objectively after neoadjuvant immunotherapy for lung cancer has finished, and therefore lags behind treatment; many patients show pseudo-progression or hyper-progression after receiving immunotherapy, which interferes with accurate response evaluation; and failure to evaluate the efficacy of neoadjuvant immunotherapy accurately and in time affects the formulation of the patient's clinical treatment plan, aggravates the patient's economic burden, reduces quality of life, and is unfavorable for survival benefit.
To achieve the above object, the invention provides the following technical solution: a method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model, comprising the following steps:
S1, collecting clinical data of the patients: first, the patients' medical records, laboratory results, imaging results, and special examination results are collected, and the clinical data are compiled and recorded;
S2, extracting CT morphological features of the patient: a CT image of the patient is then acquired, and macroscopic signs of the primary lung cancer lesion are measured from it: location, number, size, marginal spiculation, associated calcification, lobulation, and pleural traction;
S3, measuring PET metabolic parameters: a radioactive tracer is injected into the patient, a three-dimensional PET image of the patient is generated by positron emission tomography, and the patient's PET metabolic parameters are measured with the PET measurement method;
S4, segmenting the PET and CT images and extracting and screening radiomics parameters: the PET and CT images of the enrolled patients are exported in DICOM format; two nuclear medicine physicians, each with more than 5 years of PET/CT reading experience, use the machine-plus-manual segmentation module to delineate the ROI of the primary lung cancer lesion on the PET and CT images in the axial, coronal, and sagittal planes, and cross-check each other's work; the delineated PET and CT images and the corresponding ROI files are saved in '.nii' (NIfTI) format;
the Python3.7.1 Pyradiomics module is used for respectively carrying out corresponding filtering pretreatment and feature extraction on PET and CT images;
S5, screening stable radiomics features: first, the radiomics features of the patients' PET and CT images that show significant differences are identified with the Wilcoxon test, with the significance level threshold set to p = 0.05;
then the correlation R between every pair of radiomics features is computed to remove high-dimensional redundancy, a redundant feature being defined as the feature with the smaller AUC value in any pair whose correlation R exceeds 0.8;
next, LASSO regression is used to select the radiomics feature combination with the highest predictive power; by determining the value of the constant λ, LASSO yields the dimension-reduced radiomics features and the weights of the corresponding features;
finally, each patient's radiomics score is calculated as the linear combination of the retained features weighted by their corresponding coefficients;
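The redundancy-removal step of this screening pipeline (for each highly correlated pair, discard the feature with the smaller AUC) can be sketched in a few lines. This is a minimal NumPy illustration with hypothetical helper names, not the patent's code:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U statistic divided by n_pos * n_neg)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def drop_redundant(X, labels, names, r_thresh=0.8):
    """For every feature pair with |Pearson R| > r_thresh, discard the one
    with the smaller AUC; return the names of the surviving features."""
    aucs = {n: auc(X[:, i], labels) for i, n in enumerate(names)}
    R = np.corrcoef(X, rowvar=False)        # feature-by-feature correlations
    keep = set(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(R[i, j]) > r_thresh and names[i] in keep and names[j] in keep:
                keep.discard(min((names[i], names[j]), key=lambda n: aucs[n]))
    return [n for n in names if n in keep]
```

On a matrix whose columns are feature values and a 0/1 outcome vector, `drop_redundant` returns the de-correlated feature subset that would then be passed to LASSO.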
S6, establishing the mapping relation and several combined models: a mapping relation is established between the clinical-fusion image features from step S5 and the patient's response to neoadjuvant immunotherapy; several combined models are built, the performance of the clinical, CT, PET, PET/CT, and PET/CT-plus-clinical models is compared, and the model with the best predictive performance is selected;
the efficacy of neoadjuvant immunotherapy for lung cancer is then predicted with the selected computer model.
Further, the clinical data collected in S1 include patient sex, age, smoking history, pathological type, stage, PD-L1 expression status, and tumor markers.
Further, the PET measurement method in S3 is as follows: two or more nuclear medicine physicians interpret the images on a Xeleris workstation, and all images are processed with the PET VCAR software of the AW4.6 post-processing workstation. An iterative adaptive algorithm detects the threshold level, using 42% of the SUVmax of the primary lung cancer lesion as the threshold; the cursor is positioned on the target lesion, and the region of interest is delineated automatically with the Insert key so that the whole lesion can be enclosed. If activity outside the region of interest is unavoidable, it is excluded before analysis. Once the region of interest is confirmed to be appropriate, the PET VCAR software automatically calculates the following indices within it: SUVmax, SUVmean, SUVpeak, MTV, and TLG.
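The indices computed inside the 42%-of-SUVmax isocontour can be illustrated with a short NumPy sketch. `pet_metabolic_params` is a hypothetical helper operating on an SUV-valued voxel array, not the PET VCAR implementation; SUVpeak is omitted because it requires averaging over a 1 cm³ spherical neighborhood.

```python
import numpy as np

def pet_metabolic_params(suv_volume, voxel_volume_ml, threshold_frac=0.42):
    """Compute SUVmax, SUVmean, MTV, and TLG inside a 42%-of-SUVmax
    isocontour, mirroring the thresholding step described above.
    (SUVpeak is omitted: it needs a 1 cm^3 neighborhood average.)"""
    suv_max = suv_volume.max()
    roi = suv_volume >= threshold_frac * suv_max   # isocontour mask
    suv_mean = suv_volume[roi].mean()
    mtv = roi.sum() * voxel_volume_ml              # metabolic tumor volume (ml)
    tlg = suv_mean * mtv                           # total lesion glycolysis
    return {"SUVmax": suv_max, "SUVmean": suv_mean, "MTV": mtv, "TLG": tlg}
```

In practice the array would come from a decoded DICOM PET series already converted to SUV units, with the voxel volume taken from the pixel spacing and slice thickness.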
Further, the PET metabolic parameters in S3 are: maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), peak standardized uptake value (SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG).
Further, the two nuclear medicine physicians in S4 review each other's work; if opinions differ, the two resolve the disagreement by negotiation; if no consensus can be reached, a third nuclear medicine physician with more than 10 years of PET/CT reading experience reviews the disputed part of the ROI delineation and makes the final judgment.
Further, the preprocessing filters in S4 include: exponential, gradient, Laplacian of Gaussian, logarithm, square root, and wavelet filtering;
wherein the wavelet filtering applies a high-pass (H) or low-pass (L) filter along each of the 3 dimensions of the PET/CT image, yielding the combinations LLL, LLH, LHL, LHH, HLL, HLH, HHL, and HHH.
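The eight sub-band labels follow directly from choosing H or L independently on each of the three axes, which can be enumerated in one line:

```python
from itertools import product

# A 3-D wavelet decomposition applies a high-pass (H) or low-pass (L) filter
# along each of the three image axes, so there are 2**3 = 8 sub-band images.
subbands = ["".join(c) for c in product("LH", repeat=3)]
print(subbands)  # ['LLL', 'LLH', 'LHL', 'LHH', 'HLL', 'HLH', 'HHL', 'HHH']
```

These labels match the sub-band naming used by Pyradiomics for wavelet-filtered feature names.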
Further, the feature extraction in S4 includes: extracting three-dimensional and two-dimensional shape features from the original PET/CT images;
and extracting shape, first-order, gray level co-occurrence matrix, gray level run length matrix, gray level size zone matrix, and gray level dependence matrix features from the preprocessed and original PET/CT images.
Further, the radiomics features in S5 include: morphological features, density features, texture features, and hemodynamic features.
Compared with the prior art, the method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model predicts the response of non-small cell lung cancer patients to neoadjuvant immunotherapy before treatment, so that the candidate population can be screened accurately, unnecessary economic burden on patients is reduced, and survival benefit is improved;
at the same time, parameterized modeling of the PET/CT images gives the mapping model from image features to treatment response higher accuracy;
combining machine segmentation with manual delineation in the PET/CT tumor-region segmentation module ensures the accuracy and validity of the segmentation to the greatest extent;
and the mathematical model built by computer, together with machine learning, is used to predict the efficacy of neoadjuvant immunotherapy for lung cancer;
in conclusion, the invention not only predicts treatment response before neoadjuvant immunotherapy but also accurately identifies the lung cancer patients who can benefit from it; it avoids repeated examinations of the patient; it can guide patients who respond poorly to neoadjuvant immunotherapy to switch treatment plans as early as possible, reducing unnecessary economic burden; and prediction with a computer model effectively avoids subjective error.
Drawings
To illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in the embodiments are briefly described below; obviously, the drawings described below show only some embodiments of the invention, and a person of ordinary skill in the art can obtain other drawings from them.
FIG. 1 is a flow chart of a method for predicting immune efficacy of lung cancer according to an embodiment of the invention;
FIG. 2 is a block diagram of a workflow for image acquisition, segmentation, feature extraction and selection, model and graph construction provided by an embodiment of the present invention;
FIG. 3 compares the prediction performance of the Rad_CT (A), Rad_PET (B), and Rad_PETCT (C) models on the training cohort and of the Rad_CT (D), Rad_PET (E), and Rad_PETCT (F) models on the test cohort;
FIG. 4 compares the prediction performance of the Cli_Pat (A) and Cli_Pat_Rad_PETCT (B) models on the training cohort and of the Cli_Pat (D) and Cli_Pat_Rad_PETCT (E) models on the test cohort, together with the decision curve analysis (C) and calibration curve (F) of the Cli_Pat_Rad_PETCT model;
FIG. 5 shows the clinical application of the nomogram provided by the embodiment of the invention in distinguishing pCR from non-pCR in non-small cell lung cancer patients.
Description of the embodiments
In order to make the technical scheme of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings.
Examples
Referring to FIGS. 1-5, the method for predicting the efficacy of lung cancer immunotherapy based on a clinical-fusion image computer model specifically comprises the following steps:
S1, collecting clinical data of the patients: first, the patients' medical records, laboratory results, imaging results, and special examination results are collected, and the clinical data are compiled and recorded; the collected clinical data include the patient's sex, age, smoking history, pathological type, stage, PD-L1 expression status, tumor markers, and so on;
S2, extracting CT morphological features of the patient: a CT image of the patient is then acquired, and macroscopic signs of the primary lung cancer lesion are measured from it, such as location (to the lung segment), number (single/multiple), size (maximum diameter), marginal spiculation, associated calcification, lobulation, and pleural traction;
S3, measuring PET metabolic parameters: a radioactive tracer is then injected into the patient; the tracer interacts with the detectors of the PET scanner to generate a signal. The PET scan is usually performed after tracer injection: the tracer is taken up by cells in the body and emits positrons, from which a three-dimensional PET image of the patient is generated. The PET image can be converted to digital data by a computer program so that the PET metabolic parameters can be calculated and analyzed, and the patient's PET metabolic parameters are then measured with the following PET measurement method: two or more experienced nuclear medicine physicians interpret the images on a Xeleris workstation (GE Healthcare, Milwaukee, WI, US), which allows free switching among transverse, coronal, and sagittal views. All images are processed with the PET VCAR software of the AW4.6 post-processing workstation. PET VCAR is an automatic software system: an iterative adaptive algorithm detects the threshold level, using 42% of the SUVmax of the primary lung cancer lesion as the threshold; the cursor is positioned on the target lesion, and the region of interest is delineated automatically with the Insert key so that the whole lesion can be enclosed. If activity outside the region of interest is unavoidable, it is excluded before analysis. Once the region of interest is confirmed to be appropriate, the PET VCAR software automatically calculates the following indices within it: SUVmax, SUVmean, SUVpeak, MTV, and TLG;
meanwhile, the PET metabolic parameters are: maximum standardized uptake value, mean standardized uptake value, peak standardized uptake value, metabolic tumor volume, and total lesion glycolysis;
S4, segmenting the PET and CT images and extracting and screening radiomics parameters: the PET and CT images of the enrolled patients are exported in DICOM format; two nuclear medicine physicians, each with more than 5 years of PET/CT reading experience, use the machine-plus-manual segmentation module to delineate the ROI of the primary lung cancer lesion on the PET and CT images (soft tissue window) in the axial, coronal, and sagittal planes, and cross-check each other's work; if opinions differ, the two resolve the disagreement by negotiation; if no consensus can be reached, a third nuclear medicine physician with more than 10 years of PET/CT reading experience reviews the disputed part of the ROI delineation and makes the final judgment; the delineated PET and CT images and the corresponding ROI files are saved in '.nii' (NIfTI) format;
the Python3.7.1 Pyradiomics module is used for respectively carrying out corresponding filtering pretreatment and feature extraction on PET and CT images;
the preprocessing filters comprise: exponential, gradient, Laplacian of Gaussian, logarithm, square root, and wavelet filtering;
the wavelet filtering applies a high-pass (H) or low-pass (L) filter along each of the 3 dimensions of the PET/CT image, giving the combinations LLL, LLH, LHL, LHH, HLL, HLH, HHL, and HHH;
the feature extraction includes: extracting three-dimensional and two-dimensional shape features, which are computed only from the original (unfiltered) PET/CT images.
In the present invention, shape features (Shape Features) are parameters describing the geometry and structure of the region of interest and volume of interest, independent of the gray-scale intensity distribution within them. They are typically derived from two-dimensional and three-dimensional image data. Shape features generally include, but are not limited to:
area (Area): the area of an object or region refers to the number of pixels it covers.
Perimeter (Perimeter): the perimeter is the length of the boundary of an object or region, generally used to describe shape complexity; more complex shapes have longer perimeters.
Compactness (Compactness): compactness measures how close an object's shape is to a compact circle or ellipse, typically computed from the object's area and perimeter.
Contour (Contour): a contour is the curve or boundary describing the outline of an object; analyzing contours can be used to identify different shape features.
Eccentricity (Eccentricity): eccentricity is a feature used to describe the shape of an ellipse. It indicates the degree of stretching, with an eccentricity of 0 representing a perfect circle and an eccentricity approaching 1 a highly stretched ellipse.
Elongation (Elongation): elongation measures the degree to which an object is stretched in one direction, typically computed as the ratio of the major-axis to the minor-axis length.
Convexity (Convexity): convexity is a measure of whether an object is convex; it helps distinguish convex objects from objects with concave parts.
Roundness (Roundness): roundness indicates how close an object is to a circle, typically computed by similarity to the closest fitting circle.
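The simplest of these descriptors can be sketched in pure NumPy on a binary mask; `shape_features` is an illustrative helper (a crude 4-neighbor perimeter estimate, not the Pyradiomics shape implementation):

```python
import numpy as np

def shape_features(mask):
    """2-D shape descriptors of a binary mask (pure-NumPy sketch).

    area        -- pixel count
    perimeter   -- exposed 4-neighbour sides (crude boundary length)
    compactness -- 4*pi*area / perimeter**2 (1.0 for a perfect disc)
    """
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # each mask pixel contributes one unit of perimeter per exposed side
    perimeter = sum(
        (mask & ~np.roll(padded, shift, axis)[1:-1, 1:-1]).sum()
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1))
    )
    compactness = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": int(perimeter), "compactness": compactness}
```

A single pixel has area 1 and perimeter 4, giving the maximum compactness a 4-neighbor square grid allows (pi/4).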
Shape, first-order, gray level co-occurrence matrix, gray level run length matrix, gray level size zone matrix, and gray level dependence matrix features are extracted from the preprocessed and original PET/CT images;
in the present invention, first order features (First Order features) are basic statistical properties used to describe the region of interest and pixels within the volume of interest. These features typically cover some basic information of the pixel intensities in the image, regardless of the spatial relationship between pixels. First order features generally include, but are not limited to:
mean (Mean): the average is the average of the pixel intensities in the image. It may provide information about the overall brightness level of the image.
Standard deviation (Standard Deviation): the standard deviation is a measure of the dispersion of pixel intensity values and is used to represent the degree of dispersion of pixel values. A higher standard deviation generally indicates a larger variation in intensity in the image.
Minimum and maximum values (Minimum and Maximum): these features represent the minimum and maximum values of pixel intensities in the image, respectively. They can provide information about the brightest and darkest areas in the image.
Median (Median): the median is the median of the pixel intensity values and can be used to describe a typical level of pixel values in an image.
Distribution percentiles (Percentiles): a percentile is the value below which a given percentage of the pixel values fall. For example, the 25th percentile means that 25% of the pixel values are less than this value.
Variance (Variance): the variance is the mean squared deviation of the pixel intensity values from their mean, representing the degree of dispersion of the pixel values.
Energy (Energy): the energy is the sum of the squares of the pixel intensity values and is used to indicate the presence of a high intensity region in the image.
Entropy (Entropy): entropy is a measure of uncertainty in pixel intensity values, used to represent the complexity of an image. High entropy values generally indicate that there are many different intensity levels in the image.
Skewness (Skewness): skewness measures the asymmetry of the pixel intensity distribution. Positive skewness indicates that the distribution is skewed to the right, while negative skewness indicates that it is skewed to the left.
Kurtosis (Kurtosis): kurtosis describes the sharpness of the pixel intensity distribution, i.e., it measures how peaked or flat the distribution is.
Uniformity (Uniformity): uniformity represents a measure of similarity of pixel intensity values in an image, with higher uniformity meaning that the pixels in the image are more similar.
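Most of these first-order statistics can be computed directly from the ROI intensities; the sketch below is an illustrative NumPy helper (the 16-bin histogram for entropy and uniformity is an assumption, and constant-intensity input is not handled):

```python
import numpy as np

def first_order_features(intensities, bins=16):
    """First-order statistics of ROI intensities (illustrative sketch;
    assumes a non-constant input so std > 0)."""
    x = np.asarray(intensities, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return {
        "mean": x.mean(),
        "std": x.std(),
        "min": x.min(), "max": x.max(),
        "median": np.median(x),
        "p25": np.percentile(x, 25),
        "variance": x.var(),
        "energy": (x ** 2).sum(),
        "entropy": -(p * np.log2(p)).sum(),
        "skewness": ((x - x.mean()) ** 3).mean() / x.std() ** 3,
        "kurtosis": ((x - x.mean()) ** 4).mean() / x.std() ** 4,
        "uniformity": (p ** 2).sum(),
    }
```

Note that energy, entropy, and uniformity depend on the chosen gray-level discretization, which is why radiomics pipelines fix a bin width or bin count.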
In a digital image, each pixel has a Gray Level (Gray Level) that represents the brightness or density of the pixel. Different tissues or structures typically have different gray levels in medical images.
The co-occurrence matrix (Co-occurrence Matrix) is a two-dimensional matrix in which rows and columns represent gray levels; each element records how often a pair of pixels with those gray levels co-occurs at a given offset (direction and distance) in the image. The co-occurrence matrix therefore records the spatial distribution relationship between pixel pairs of different gray levels.
In the present invention, gray level co-occurrence matrix features (Gray Level Co-occurrence Matrix, GLCM) are statistical features used to describe the texture of medical images. By analyzing the gray-level relationships between pixels, these features provide information about the texture and structure of the image. The gray level co-occurrence matrix represents the frequency and spatial relationship of pixel pairs of different gray levels in an image. GLCM features generally include, but are not limited to:
informativity (Information Measure of Correlation 1 and 2): these features measure the degree of mutual information between pairs of pixels of different gray levels and are used to describe the correlation between pairs of pixels.
Free energy (Homogeneity): the free energy feature measures the similarity or proximity between different gray level pixel pairs in the image. A higher free energy value indicates a smaller difference between pixel pairs.
Maximum probability (Maximum Probability): the most probable feature represents the different gray-level pixel pairs that occur most frequently in the image.
Autocorrelation (autocorrection): the autocorrelation feature measures the correlation of each pixel in the image with itself, describing the periodicity and repeatability of the image.
Aggregation (Cluster Shade): the concentration feature measures the concentration of pixel pairs and can be used to describe brightness or texture changes in an image.
Maximum Entropy (Maximum Entropy): the maximum entropy feature represents the maximum entropy of the different gray-level pixel pairs in the image, and is used to describe the information content of the image.
Gray scale difference (Gray Level Difference Statistics): this includes statistical features such as mean, standard deviation, and entropy for different gray level pixel pairs to describe gray level differences between pixel pairs.
Inverse moment (Inverse Difference Moment): the inverse difference moment measures the inverse difference of the difference between different gray level pixel pairs in the image. A higher inverse difference moment indicates that the difference in pixel pairs is smaller.
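The matrix itself and one representative feature (the inverse difference moment, i.e. homogeneity) can be sketched as follows; this is a minimal single-offset illustration, not the Pyradiomics GLCM implementation:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized symmetric gray level co-occurrence matrix for one offset.

    image  -- 2-D array of integer gray levels in [0, levels)
    offset -- (row, col) displacement between the two pixels of a pair
    """
    di, dj = offset
    m = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                m[image[i, j], image[ni, nj]] += 1
    m += m.T                 # count each pair in both directions (symmetric)
    return m / m.sum()       # normalize to joint probabilities

def inverse_difference_moment(p):
    """GLCM homogeneity: large when co-occurring gray levels are similar."""
    i, j = np.indices(p.shape)
    return (p / (1.0 + (i - j) ** 2)).sum()
```

Radiomics pipelines average such single-offset matrices over several directions before computing features.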
In a digital image, run (Run) refers to a group of pixels having the same gray level appearing consecutively in a certain direction. This direction may be a horizontal, vertical or diagonal direction. The length of a run refers to the number of pixels of this sequence of consecutive pixels.
The gray level run length matrix (GLRLM) is a two-dimensional matrix in which rows correspond to gray levels and columns to run lengths; each element records how many runs of a given gray level have a given length in a certain direction in the image. In other words, the GLRLM captures the distribution of runs of different gray levels in different directions, together with their lengths.
By analyzing the GLRLM, a series of features can be extracted to describe the texture characteristics of the image, including the prominence, distribution, and nature of the runs at different gray levels, which helps quantify image texture. In the present invention, GLRLM features include, but are not limited to:
short run Cheng Jiangdu (Short Run Emphasis, SRE): the SRE feature measures the intensity of short runs (sequences of pixels with consecutively identical pixel intensity values) in the image, emphasizing the presence of short textures in the image.
Long run Cheng Jiangdu (Long Run Emphasis, LRE): LRE features measure the intensity of long runs in an image, which are used to describe long-lived texture.
Run Percentage Cheng Bai (RP): RP features represent the percentage of the stream Cheng Xiangsu in the entire image and can be used to describe the continuous texture in the image.
Low intensity run (Low Gray-level Run Emphasis, LGRE): LGRE features emphasize runs at low intensity levels for describing low contrast textures in the image.
High intensity run (High Gray-level Run Emphasis, HGRE): HGRE features measure runs at high intensity levels for describing high contrast texture in an image.
Short run high intensity (Short Run High Gray-level samples, SRHGE): the SRHGE feature allows for texture at short runs and high intensity levels.
Run Length Non-uniformity, RLNU): RLNU features represent run-length non-uniformities that are used to describe texture variations in an image.
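Building the run length matrix and one of the features above (SRE) is short enough to sketch directly; this is an illustrative horizontal-direction-only helper, not the Pyradiomics GLRLM implementation:

```python
import numpy as np
from itertools import groupby

def glrlm(image, levels):
    """Horizontal gray level run length matrix.

    glrlm[g, r-1] = number of runs of gray level g with length r."""
    max_run = image.shape[1]
    m = np.zeros((levels, max_run), dtype=int)
    for row in image:
        for gray, run in groupby(row):          # consecutive equal values
            m[gray, sum(1 for _ in run) - 1] += 1
    return m

def short_run_emphasis(m):
    """SRE: weights each run by 1/length^2, so short runs dominate."""
    runs = np.arange(1, m.shape[1] + 1)
    return (m / runs ** 2).sum() / m.sum()
```

A full GLRLM implementation would accumulate runs over all four 2-D directions (or thirteen 3-D directions) rather than rows only.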
In digital medical images, each pixel has a Gray Level (Gray Level) representing its brightness or density. Different tissues and structures often have different gray levels in medical images.
The Zone (Zone) refers to a group of adjacent pixels in the image, which have the same gray level. Thus, a plurality of regions may be included in an image, each region being composed of pixels having the same gray level.
The gray level size zone matrix (GLSZM) is a two-dimensional matrix in which rows correspond to gray levels and columns to zone sizes; each element records how many zones of a given gray level and size appear in the image. GLSZM thus records the size and number of the zones at each gray level. GLSZM features typically include, but are not limited to, the following:
zone Size (Zone Size): the region size feature represents the average size of a region having a particular gray level.
Total Number of zones (Zone Number): the total number of regions feature the total number of regions with different gray levels.
Zone Percentage (Zone Percentage): the zone percentage feature represents the percentage of zones of each gray level in the image.
Zone Non-uniformity (Zone Non-uniformity): the region non-uniformity feature represents non-uniformity in the size of regions of different gray levels in the image.
Average area (Average Zone Area): the average area feature represents the average area size of all areas having a specific gray level.
Zone Entropy (Zone Entropy): the region entropy features measure the uncertainty of the region distribution of different gray levels, i.e. the diversity of textures.
Zone Contrast (Zone Contrast): the region contrast feature represents the difference in gray level between regions and is used to describe the contrast of textures in an image.
Zone Correlation: the region association feature measures the association between different gray level regions in an image, i.e. how they are associated with each other.
Area contrast variation (Zone Contrast Variation): the region contrast variation feature represents the variation of the region contrast between different gray levels.
Region entropy change (Zone Entropy Variation): the region entropy change feature describes the change in entropy values of different gray level regions in an image.
Zone Uniformity (Zone Uniformity): the region uniformity feature measures the uniformity of the different gray level regions in the image, i.e., whether their area distribution is uniform.
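The GLSZM construction described above can be sketched in a few lines. The following is a minimal 2-D illustration, not part of the invention: zones are taken as 8-connected groups of equal-valued pixels (3-D radiomics toolkits typically use the analogous 26-connectivity), and the `glszm` helper and toy image are illustrative assumptions.

```python
# Sketch: building a gray-level size-zone matrix (GLSZM) for a small 2-D
# image. Rows index gray levels, columns index zone sizes; entry (g, s)
# counts the connected zones of gray level g containing s + 1 pixels.
import numpy as np
from scipy import ndimage

def glszm(image: np.ndarray) -> np.ndarray:
    levels = np.unique(image)
    m = np.zeros((len(levels), image.size), dtype=int)
    structure = np.ones((3, 3), dtype=int)  # 8-connected zones
    for gi, g in enumerate(levels):
        labeled, n_zones = ndimage.label(image == g, structure=structure)
        for z in range(1, n_zones + 1):
            size = int((labeled == z).sum())
            m[gi, size - 1] += 1  # column s-1 holds zones of size s
    return m

img = np.array([[1, 1, 2],
                [1, 2, 2],
                [3, 3, 3]])
m = glszm(img)
# each of the three gray levels forms a single 3-pixel zone here
print(m[:, :4])
```

Zone-level statistics such as zone percentage or zone-size non-uniformity are then simple row/column aggregations of this matrix.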
The dependence matrix (Dependence Matrix) is a two-dimensional matrix in which the rows represent gray levels and the columns represent dependence counts; each element records how many pixels of a given gray level have a given number of dependent (gray-level-similar) neighbouring pixels. This means that the dependence matrix records the spatial distribution relationships among pixels of different gray levels.
In the invention, the gray-level dependence matrix (Gray Level Dependence Matrix, GLDM) feature is a statistical feature used to describe image texture in medical radiomics. GLDM features provide information about texture in an image by analyzing the dependence between pixels of different gray levels. These features quantitatively analyze the interrelationships and dependencies between pixels of different gray levels, helping to identify characteristics of different tissues, structures or textures for medical image analysis and classification. GLDM features include, but are not limited to:
gray level dependence (Gray Level Dependence): the gray scale dependency features represent the dependency between pairs of pixels of different gray scale levels in an image, i.e. the co-occurrence frequency between pairs of pixels.
High-dependence low gray-level emphasis (High Dependence Low Gray Level Emphasis, HDLGLE): the HDLGLE feature highlights the pixel pairs in the image that have high dependence and a low gray level.
Low-dependence low gray-level emphasis (Low Dependence Low Gray Level Emphasis, LDLGLE): the LDLGLE feature emphasizes the pixel pairs in the image that have low dependence and a low gray level.
Dependency entropy (Dependence Entropy): the dependency entropy feature measures the dependency entropy of different gray level pixel pairs, i.e. the dependency uncertainty between pixel pairs.
High-dependency high gray level emphasis (High Dependence High Gray Level Emphasis, HDHGLE): the HDHGLE feature highlights the total number of pixel pairs in the image that are highly dependent and have high gray levels.
Low-dependency high gray level emphasis (Low Dependence High Gray Level Emphasis, LDHGLE): the LDHGLE feature emphasizes the total number of pixel pairs in the image that have low dependency and high gray levels.
High-dependence high gray-level number (High Dependence High Gray Level Number, HDHGLN): the HDHGLN feature represents the total number of pixel pairs in the image that have high dependence and high gray levels.
Low-dependency high gray level (Low Dependence High Gray Level Number, LDHGLN): the LDHGLN feature represents the total number of pixel pairs in the image that have low dependency and high gray levels.
Low-dependency low gray level (Low Dependence Low Gray Level Number, LDLGLN): the LDLGLN feature represents the total number of pixel pairs in the image that have low dependency and low gray levels.
Gray-level dependence non-uniformity (Gray Level Dependence Non-Uniformity, GLDNU): the GLDNU feature measures the non-uniformity of the dependence of pixel pairs at different gray levels in the image;
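A minimal sketch of the GLDM construction underlying the features above, not part of the invention: for each pixel, the dependence is taken as the number of neighbours (within Chebyshev distance 1) whose gray level differs by at most a tolerance alpha; alpha = 0 is assumed here, matching the common radiomics default. The `gldm` helper and toy image are illustrative.

```python
# Sketch: a gray-level dependence matrix (GLDM) for a small 2-D image.
# Entry (g, d) counts the pixels of gray level g that have exactly d
# dependent neighbours (|gray difference| <= alpha) among the 8 neighbours.
import numpy as np

def gldm(image: np.ndarray, alpha: int = 0) -> np.ndarray:
    levels = np.unique(image)
    rows, cols = image.shape
    m = np.zeros((len(levels), 9), dtype=int)  # 0..8 possible neighbours
    index = {g: i for i, g in enumerate(levels)}
    for r in range(rows):
        for c in range(cols):
            dep = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols and
                            abs(int(image[rr, cc]) - int(image[r, c])) <= alpha):
                        dep += 1
            m[index[image[r, c]], dep] += 1
    return m

img = np.array([[1, 1, 2],
                [1, 2, 2],
                [3, 3, 3]])
m = gldm(img)
print(m)
```

Emphasis and non-uniformity features (HDLGLE, GLDNU, etc.) are then weighted sums over the rows and columns of this matrix.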
S5, screening stable radiomics features: first, the radiomics features that differ significantly are calculated for the PET and CT images of the patients by using the Wilcoxon test, with the significance level threshold set to p=0.05; the radiomics features include morphological features, such as size, shape and contour; density features, such as gray values and density distribution; texture features, such as texture entropy, contrast and correlation; and hemodynamic features, such as perfusion, blood flow velocity and volume;
then the correlation (R) between every pair of radiomics features is calculated to remove high-dimensional feature redundancy, wherein a redundant feature is defined as the feature with the smaller AUC value of the two radiomics features whose correlation R > 0.8;
next, LASSO regression is used to screen out the radiomics feature combination with higher predictive efficiency; the LASSO obtains the dimension-reduced radiomics features and the weights of the corresponding features by determining the value of a constant λ;
and finally, the retained features and their corresponding feature weights are linearly combined to calculate the radiomics score (Rad-score) of each patient.
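The S5 screening pipeline can be sketched end to end as follows. This is an illustrative sketch under stated assumptions, not the invention's implementation: the feature table and labels are synthetic, the Wilcoxon rank-sum test stands in for the between-group comparison, and the LASSO penalty value is arbitrary.

```python
# Sketch of S5: significance test -> correlation-based redundancy
# removal (|R| > 0.8, keep the larger-AUC feature) -> LASSO -> Rad-score.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))        # 60 patients, 10 radiomics features
y = rng.integers(0, 2, size=60)      # 1 = pCR, 0 = non-pCR (synthetic)
X[y == 1, 0] += 1.5                  # make feature 0 informative

# 1) Wilcoxon rank-sum test between groups, p < 0.05
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < 0.05]

# 2) drop the lower-AUC member of any pair with |R| > 0.8
auc = {j: roc_auc_score(y, X[:, j]) for j in keep}
for a in list(keep):
    for b in list(keep):
        if a < b and a in keep and b in keep:
            if abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) > 0.8:
                keep.remove(a if auc[a] < auc[b] else b)

# 3) LASSO with a fixed lambda; Rad-score is the linear combination
lasso = Lasso(alpha=0.05).fit(X[:, keep], y)
rad_score = lasso.intercept_ + X[:, keep] @ lasso.coef_
print(len(keep), rad_score.shape)
```

In practice the λ of step 3 would be chosen by cross-validation rather than fixed, but the structure of the pipeline is the same.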
S6, establishing the mapping relation and multiple combined models: a mapping relation is established between the clinical-fusion image features obtained in step S5 and the effect of the patients' neoadjuvant immunotherapy (divided into a pathological complete response (pCR) group and a non-pCR group), multiple combined models are built, the performance of the CT, PET, PET/CT and PET/CT+clinical models is compared, and the model with the best predictive efficiency is finally screened out; the evaluation indexes of the Cli_Pat_Rad_PETCT model under the five machine learning algorithms are shown in Table 1;
and predicting the curative effect of the lung cancer neoadjuvant immunotherapy through the screened computer model.
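The model-comparison step of S6 can be sketched as below. All data are synthetic stand-ins and logistic regression is only one example of the machine-learning algorithms mentioned; the block illustrates the structure (assemble feature blocks per modality combination, score each model, keep the best), not the invention's actual models.

```python
# Sketch of S6: fit one classifier per feature-block combination and
# select the combination with the best cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 80
blocks = {                      # synthetic stand-ins for each modality
    "Cli": rng.normal(size=(n, 3)),
    "CT":  rng.normal(size=(n, 5)),
    "PET": rng.normal(size=(n, 5)),
}
y = rng.integers(0, 2, size=n)  # 1 = pCR, 0 = non-pCR (synthetic)

combos = [("CT",), ("PET",), ("PET", "CT"), ("Cli", "PET", "CT")]
scores = {}
for combo in combos:
    X = np.hstack([blocks[b] for b in combo])
    clf = LogisticRegression(max_iter=1000)
    scores["+".join(combo)] = cross_val_score(
        clf, X, y, cv=5, scoring="roc_auc").mean()

best = max(scores, key=scores.get)  # model screened out for prediction
print(best, round(scores[best], 3))
```

The screened model would then be applied to new patients' clinical-fusion image features to predict neoadjuvant immunotherapy efficacy.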
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the invention, which is defined by the appended claims.

Claims (8)

1. The method for predicting the immune curative effect of the lung cancer based on the clinical-fusion image computer model is characterized by comprising the following steps of:
s1, collecting clinical data of patients: firstly, the patient's medical records, laboratory examination results, imaging examination results and special examination results are collected, and the patient's clinical data are compiled and recorded;
s2, extracting CT morphological characteristics of a patient: then, a CT image of the patient is acquired, and macroscopic signs of the primary lung cancer lesion, including its location, number, size, margin, associated calcification, lobulation and pleural traction, are measured from the CT image;
s3, measuring PET metabolic parameters: injecting a radioactive tracer into a patient, generating a PET three-dimensional image of the patient by using a positron emission tomography imaging technology, and measuring PET metabolic parameters of the patient by matching with a PET measurement method;
s4, segmentation of the PET and CT images, and extraction and screening of radiomics parameters: the PET and CT images of the enrolled patients are exported in DICOM format, and two nuclear medicine physicians with more than 5 years of PET/CT reading experience use the machine-assisted manual segmentation module to manually delineate the ROI of the primary lung cancer lesion on the PET and CT images in the axial, coronal and sagittal planes, rechecking each other's work; the delineated PET and CT images and the corresponding ROI files are saved in ".nii" format;
the Pyradiomics module under Python 3.7.1 is then used to perform the corresponding filtering preprocessing and feature extraction on the PET and CT images, respectively;
s5, screening stable radiomics features: firstly, the radiomics features that differ significantly are calculated for the PET and CT images of the patients by using the Wilcoxon test, with the significance level threshold set to p=0.05;
then the correlation R between every pair of radiomics features is calculated to remove high-dimensional feature redundancy, wherein a redundant feature is defined as the feature with the smaller AUC value of the two radiomics features whose correlation R is greater than 0.8;
next, LASSO regression is used to screen out the radiomics feature combination with higher predictive efficiency; the LASSO obtains the dimension-reduced radiomics features and the weights of the corresponding features by determining the value of a constant λ;
finally, the retained features and their corresponding feature weights are linearly combined to calculate the radiomics score of each patient;
s6, establishing the mapping relation and multiple combined models: a mapping relation is established between the clinical-fusion image features obtained in step S5 and the effect of the patients' neoadjuvant immunotherapy, multiple combined models are built, the performance of the CT, PET, PET/CT and PET/CT+clinical models is compared, and the model with the best predictive performance is finally screened out;
and predicting the curative effect of the lung cancer neoadjuvant immunotherapy through the screened computer model.
2. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the clinical data collected in S1 comprise the patient's gender, age, smoking history, pathology type, stage, PD-L1 expression status, and tumor markers.
3. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the PET measurement method in S3 is as follows: two or more nuclear medicine physicians read the images on a Xeleris workstation, and all images are processed with the PET VCAR software of an AW4.6 post-processing workstation; an iterative adaptive algorithm is used to detect the threshold level, with 42% of the SUVmax of the primary lung cancer lesion taken as the threshold; the cursor is positioned on the target lesion and the region of interest is automatically delineated via the Insert key so that the whole lesion is enclosed; if activity outside the lesion cannot be excluded from the region of interest, it is removed before analysis; after the region of interest is confirmed to be appropriate, the software automatically calculates the following indexes within it: SUVmax, SUVmean, SUVpeak, MTV and TLG.
4. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the PET metabolic parameters in S3 are: the maximum standardized uptake value, mean standardized uptake value, peak standardized uptake value, metabolic tumor volume, and total lesion glycolysis.
5. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the two nuclear medicine physicians in S4 check each other's work, and any disagreement is first resolved between them through discussion; if no consensus can be reached, another nuclear medicine physician with more than 10 years of PET/CT reading experience reviews the disputed part of the ROI delineation and makes the final judgment.
6. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the preprocessing filtering in S4 comprises: exponential, gradient, laplacian of gaussian, logarithmic, square root, and wavelet filtering;
wherein the wavelet filtering consists of combinations of a high-pass filter H and a low-pass filter L applied along the three dimensions of the PET/CT image, comprising LLH, LHL, HHL, LLL, HHH, LHH, HLL and HLH.
7. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the feature extraction in S4 comprises: three-dimensional and two-dimensional shape features are extracted from the PET/CT original image;
extracting shape, first-order, gray-level co-occurrence matrix, gray-level run-length matrix, gray-level size-zone matrix and gray-level dependence matrix features from the preprocessed and original PET/CT images.
8. The method for predicting immune efficacy of lung cancer based on clinical-fusion image computer model according to claim 1, wherein the radiomics features in S5 comprise: morphological features, density features, texture features, and hemodynamic features.
CN202311695579.3A 2024-01-22 2024-01-22 Method for predicting lung cancer immune curative effect based on clinical-fusion image computer model Pending CN117727441A (en)


Publications (1)

Publication Number Publication Date
CN117727441A true CN117727441A (en) 2024-03-19

Family

ID=90202765



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination