CN117936105A - Multimode melanoma immunotherapy prediction method based on deep learning network - Google Patents

Multimode melanoma immunotherapy prediction method based on deep learning network

Info

Publication number
CN117936105A
CN117936105A (application CN202410338028.XA)
Authority
CN
China
Prior art keywords
image
data
patient
pathological section
melanoma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410338028.XA
Other languages
Chinese (zh)
Other versions
CN117936105B (en)
Inventor
王逸伦
谢慕寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Anhong Technology Co ltd
Fudan University
Original Assignee
Hangzhou Anhong Technology Co ltd
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Anhong Technology Co ltd, Fudan University filed Critical Hangzhou Anhong Technology Co ltd
Priority to CN202410338028.XA priority Critical patent/CN117936105B/en
Publication of CN117936105A publication Critical patent/CN117936105A/en
Application granted granted Critical
Publication of CN117936105B publication Critical patent/CN117936105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-modal melanoma immunotherapy prediction method based on a deep learning network, relating to the field of multi-modal melanoma immunotherapy prediction. The method first performs data acquisition, preliminary screening and image standardization, then carries out image semantic segmentation through an image segmentation module comprising a convolutional neural network and a deep self-attention network. Image information missing from the pathological section maps is computer-generated by a diffusion model guided by natural-language prompts, after which the images are re-screened and reduced in dimension. Image features and non-image information are then extracted, and a graph neural network evaluates each patient's PD-1 prognosis. The method aims to solve the problem of obtaining a prognosis evaluation model with high accuracy for melanoma PD-1 immunotherapy from a limited number of training samples.

Description

Multimode melanoma immunotherapy prediction method based on deep learning network
Technical Field
The invention relates to the field of multimode melanoma immunotherapy prediction, and in particular relates to a multimode melanoma immunotherapy prediction method based on a deep learning network.
Background
Melanoma, also known as "malignant melanoma," is a common and fatal malignancy of the skin. According to the latest global cancer statistics, skin cancer has been identified as one of the most dangerous cancers, and melanoma is the deadliest form of skin cancer, accounting for about 75% of skin-cancer mortality. It is characterized by high malignancy, frequent local recurrence and distant metastasis, and poor prognosis.
In recent years, with the development of medical technology, immunotherapy has become one of the most important therapeutic means in melanoma treatment. Depending on the pathological stage, immunotherapy for some high-stage melanoma patients can effectively prolong relapse-free survival; PD-1 immunotherapy in particular is widely applied clinically and has achieved good results. However, owing to the strong individual heterogeneity, complex pathological classification and unclear pathogenesis of melanoma, PD-1 immunotherapy does not benefit all patients, so early diagnosis and accurate assessment are critical for improving prognosis in melanoma patients. In this context, evaluating the prognosis of PD-1 immunotherapy in melanoma patients is of great importance.
With the wide application of computer technology, deep learning neural networks have become a popular technique in multi-modal melanoma immunotherapy prediction and medical image analysis. Although the performance of deep learning models for melanoma recognition and segmentation keeps improving, prognosis evaluation for melanoma immunotherapy is still rarely addressed. Most existing deep learning prognosis models are based on single-modality data such as histopathology images or gene expression features, use a single type of neural network, and do not consider the treatment administered during the patient's diagnosis and care. Because complete patient samples treated with PD-1 immunotherapy are scarce and the data are unbalanced, while training deep learning models usually requires massive data, insufficient training samples lead to low recognition accuracy, and no current model achieves good prognostic performance for melanoma PD-1 immunotherapy. Research into obtaining better model performance with the latest deep learning methods from a small number of complete samples can therefore both advance melanoma prognosis research and greatly promote the clinical application of PD-1 immunotherapy.
Therefore, building on the currently popular deep self-attention network and graph convolution network in deep learning, and combining the analysis of multi-modal data such as pathological image data, patient metadata, patient clinical data and gene expression features, the invention provides a multi-modal melanoma immunotherapy prediction method based on a deep learning network.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses a multi-modal melanoma immunotherapy prediction method based on a deep learning network, which builds on the currently popular deep self-attention network and graph convolution network in deep learning and combines the analysis of multi-modal data such as pathological image data, patient metadata, patient clinical data and gene expression features. Before the formal model is trained, the images are stain-normalized and their colors corrected to the same color space, enhancing the robustness of the trained model. A convolutional neural network and a deep self-attention network extract features from the input image and reduce them to 2048 dimensions each; the feature matrices are then concatenated and classified by a fully connected layer, combining the computational advantages of both networks to capture image information better. Missing image information is computer-generated by a diffusion model with natural-language prompts, alleviating the shortage of data and avoiding inaccurate training caused by an insufficient number of samples. Patient clinical data are encoded by a multi-head self-attention model, and importance parameters are assigned according to each item's influence on prognosis, realizing the extraction of non-image information.
This allows the patient's condition to be considered more comprehensively, avoiding the drawback of traditional methods that consider only image information and ignore other factors. Features are extracted from the pathological section maps with an Encoder-Decoder neural network and belief-generation analysis, extracting features better and avoiding the manual feature selection required by traditional methods, with a high degree of automation and intelligence.
The invention adopts the following technical scheme:
A multimode melanoma immunotherapy prediction method based on a deep learning network comprises the following steps:
Step one, data acquisition, preliminary screening and image standardization: melanoma patient data in The Cancer Genome Atlas are obtained through a medical image management system, comprising three kinds of data per case, namely pathological section maps, patient clinical data and RNA sequences, forming a melanoma patient data set; the melanoma patient data are preliminarily screened and retained on a graphics processor (GPU), and the pathological section maps are standardized to a specific target image using the Reinhard color standardization method;
Step two, image segmentation: image semantic segmentation is performed by an image segmentation module comprising a convolutional neural network and a deep self-attention network; the deep self-attention network captures high-frequency information in the pathological section map, i.e. pixels whose image intensity varies sharply, through its global self-attention calculation, while the convolutional neural network extracts image features and learns from data through local calculation, extracting the low-frequency information of the pathological section map, i.e. pixels whose image intensity varies gently;
Step three, image supplementation: missing image information is generated by an image data supplementation module, which uses a diffusion model and natural-language prompts to computer-generate the image information missing from the pathological section map;
Step four, image re-screening and dimension reduction: the GPU re-screens the small-block pathological section maps produced by image segmentation and data supplementation, and the small-block pathological section maps are reduced in dimension using the t-SNE method;
Step five, image feature extraction: the image features of the small-block pathological section maps are extracted by an image feature extraction module, which uses an Encoder-Decoder neural network and belief-generation analysis;
Step six, non-image information extraction: non-image features are extracted by a non-image auxiliary information context extraction module, which encodes the patient clinical data with a multi-head self-attention model and assigns importance parameters according to each item's influence on prognosis; the patient clinical data comprise melanoma Breslow thickness, ulceration, melanoma TNM stage, mitotic rate, lymph node metastasis, distant metastasis, satellite lesions, primary tumor site, patient age and patient sex;
Step seven, multi-modal melanoma immunotherapy prediction: melanoma immunotherapy prediction and assessment are performed by a prediction module, which evaluates the patient's PD-1 prognosis through a graph neural network; the graph neural network combines the outputs of the image feature extraction module and the non-image auxiliary information context extraction module for its evaluation, and prunes the relation edge between two patients based on a preset similarity threshold.
As a further technical scheme of the invention, the working method of the image segmentation module comprises the following steps:
Step 1, input the pathological section map, wherein the input of the image segmentation module is the pathological section map cut into small blocks of size 256×256×3;
Step 2, extract features from the input small-block pathological section map using the convolutional neural network and the deep self-attention network;
Step 3, compress the features through a convolution layer, and reduce the dimensions of the feature matrices extracted by the convolutional neural network and the deep self-attention network through a fully connected layer to form low-dimensional feature matrices of dimension 2048;
Step 4, concatenate the low-dimensional feature matrices to form a high-dimensional feature matrix of dimension 4096;
Step 5, perform classification prediction on the high-dimensional feature matrix through the fully connected layer;
Step 6, measure the difference between the predicted result and the true label with a binary cross-entropy loss function, and update the parameters of the convolutional neural network and the deep self-attention network with the Adam optimization algorithm;
Step 7, output the pathological area of the pathological section map, wherein the output is the pathological area corresponding to each small block, comprising a cancer area, a paracancerous area, a junction area and a background area.
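Steps 3–5 above can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the two 2048-dimensional branch outputs, the weight matrix `W` and the bias `b` are all hypothetical stand-ins, and the real module trains these parameters with binary cross-entropy and Adam as described in step 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(cnn_feat, transformer_feat, W, b):
    """Concatenate the two 2048-d branch features (step 4) and apply a
    fully connected layer with softmax to score the four pathological
    areas: cancer, paracancerous, junction, background (step 5)."""
    fused = np.concatenate([cnn_feat, transformer_feat])  # 4096-d vector
    logits = W @ fused + b                                # FC layer
    exp = np.exp(logits - logits.max())                   # stable softmax
    return exp / exp.sum()                                # class probabilities

# Hypothetical branch outputs for one 256x256x3 patch
cnn_feat = rng.standard_normal(2048)          # low-frequency (CNN) branch
transformer_feat = rng.standard_normal(2048)  # high-frequency (attention) branch
W = rng.standard_normal((4, 4096)) * 0.01     # untrained toy weights
b = np.zeros(4)

probs = fuse_and_classify(cnn_feat, transformer_feat, W, b)
print(probs.shape)
```

The output is a probability vector over the four area classes; in training, its divergence from the true patch label would drive the parameter updates of step 6.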
According to the invention, the diffusion model machine-generates missing paracancerous or junction region data based on real cancer region data: the real cancer region data undergo multiple rounds of noise addition to obtain a noise image; natural-language prompts interpolate the missing data of the real paracancerous or junction region to obtain encoded feature data, simulating the data-distribution generation process within the network in a generation-based, step-by-step growth manner; the encoded feature data are spliced with the noise image and denoised to obtain a simulated paracancerous or junction region image, which is compared with the real paracancerous or junction region image to calculate the cross-entropy loss.
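The "multiple rounds of noise addition" above is the standard forward process of a diffusion model. A minimal NumPy sketch, assuming the common linear beta schedule (the patent does not specify one) and using a small random array as a stand-in for a cancer-region patch:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_noising(x0, T=1000, beta_start=1e-4, beta_end=0.02):
    """Forward diffusion: a linear beta schedule gradually turns a real
    patch x0 into (near-)pure Gaussian noise x_T. Closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention
    eps = rng.standard_normal(x0.shape)
    xT = np.sqrt(alpha_bar[-1]) * x0 + np.sqrt(1.0 - alpha_bar[-1]) * eps
    return xT, alpha_bar

x0 = rng.uniform(-1, 1, size=(8, 8))   # toy stand-in for a cancer-region patch
xT, alpha_bar = forward_noising(x0)
print(alpha_bar[-1])                   # near 0: almost no signal remains at t = T
```

The reverse (denoising) pass, conditioned on the prompt-encoded features, would then reconstruct a simulated paracancerous or junction region image from `xT`; that learned denoiser is the part the patent trains against the cross-entropy loss.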
As a further technical scheme of the invention, the t-SNE dimension reduction method comprises the following steps:
(1) Calculate the similarity matrix between data points in the high-dimensional space using Euclidean distance, Manhattan distance or cosine similarity, and weight the high-dimensional data with a Gaussian kernel function to obtain a symmetric probability distribution;
(2) Map the relative distances in the low-dimensional space to those in the high-dimensional space by constructing a probability distribution in the low-dimensional space, where the probability that data points are selected as neighbors in the low-dimensional space is:

$$q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}} \tag{1}$$

In formula (1), $q_{ij}$ is the probability that data points $i$ and $j$ are selected as neighbors in the low-dimensional space, $y_i$ and $y_j$ are the positions of data points $i$ and $j$ in the low-dimensional space, and the denominator normalizes over all distinct pairs $(k, l)$ in the low-dimensional space;

(3) Minimize, by iterative calculation, the KL divergence between the probability distribution $P$ in the high-dimensional space and the probability distribution $Q$ in the low-dimensional space:

$$\mathrm{KL}(P \parallel Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)} \tag{2}$$

In formula (2), $P$ and $Q$ are probability distributions, $x$ is a random variable, and $P(x)$ and $Q(x)$ are the probabilities of $x$ occurring under $P$ and $Q$ respectively; minimizing this divergence aligns the low-dimensional distribution with the high-dimensional one.
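Formulas (1) and (2) can be computed directly. The sketch below is a minimal NumPy version of the t-SNE objective only (not the gradient-descent embedding itself): `P` uses a Gaussian kernel in the high-dimensional space as in step (1) with an assumed fixed bandwidth `sigma`, `Q` uses the Student-t kernel of formula (1), and the return value is the KL divergence of formula (2).

```python
import numpy as np

rng = np.random.default_rng(2)

def pairwise_sq_dists(X):
    """Squared Euclidean distances between all rows of X."""
    sq = np.sum(X**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def tsne_kl(X_high, Y_low, sigma=1.0):
    """KL(P||Q): P from a Gaussian kernel over the high-dimensional data,
    Q from the Student-t kernel over the low-dimensional embedding,
    both normalized over all pairs (diagonal excluded)."""
    P = np.exp(-pairwise_sq_dists(X_high) / (2 * sigma**2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()
    Q = 1.0 / (1.0 + pairwise_sq_dists(Y_low))   # formula (1), unnormalized
    np.fill_diagonal(Q, 0.0)
    Q /= Q.sum()
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))  # formula (2)

X = rng.standard_normal((20, 50))   # high-dimensional points
Y = rng.standard_normal((20, 2))    # a (random) low-dimensional embedding
kl = tsne_kl(X, Y)
print(kl >= 0.0)                    # KL divergence is always non-negative
```

A full t-SNE run would adjust `Y` by gradient descent to drive this value down; practical implementations also pick a per-point `sigma` from a target perplexity rather than a fixed one.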
As a further technical scheme of the invention, the working method of the encoding part of the image feature extraction in step five is as follows: the image feature extraction module adopts the convolutional neural network and the deep self-attention network as encoders; first, 10 small-block pathological section maps are taken from each of the cancer area, the paracancerous area and the junction area, features are extracted by the convolutional neural network and reduced to 8192 dimensions using the t-SNE method, and position information is then added to the reduced feature vectors, which are passed into the deep self-attention network for feature fusion. The working method of the decoding part of the image feature extraction in step five is as follows: the image feature extraction module encodes the gene sequence of the patient's intratumoral environment with a GPT language model, calculates the loss distance between the fused features of the pathological section map and the encoded features of the patient's gene sequence with a cross-entropy loss, and adjusts the network parameters through backpropagation to minimize the cross-entropy loss.
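The cross-entropy "loss distance" between fused image features and gene-sequence encodings can be sketched as a contrastive alignment loss over a batch of patients. This is an assumed CLIP-style formulation, not the patent's exact loss; the feature dimensions, batch size and `temperature` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def alignment_loss(img_feat, gene_feat, temperature=0.07):
    """Cross-entropy between fused slice features and gene-sequence
    encodings: each image embedding should best match its own patient's
    gene embedding among all patients in the batch."""
    img = img_feat / np.linalg.norm(img_feat, axis=1, keepdims=True)
    gene = gene_feat / np.linalg.norm(gene_feat, axis=1, keepdims=True)
    logits = img @ gene.T / temperature          # pairwise cosine similarities
    n = logits.shape[0]
    probs = softmax(logits, axis=1)
    # true pairing is the diagonal: patient i's image <-> patient i's genes
    return float(-np.mean(np.log(probs[np.arange(n), np.arange(n)])))

img = rng.standard_normal((4, 128))   # hypothetical fused image features
gene = rng.standard_normal((4, 128))  # hypothetical gene-sequence encodings
loss_rand = alignment_loss(img, gene)     # unaligned features: large loss
loss_matched = alignment_loss(img, img)   # perfectly aligned: small loss
print(loss_matched < loss_rand)
```

Backpropagating this loss through both encoders, as the description states, would pull each patient's image and gene representations together.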
As a further technical scheme of the invention, the graph neural network aggregates the neighbor information of nodes through a message passing mechanism and updates the node feature representations; the message passing mechanism captures long-range information of the small-block pathological section maps through multiple rounds of iteration. The working method of the prediction module comprises the following steps:
(S1) Establish an undirected graph whose nodes are patient cases, whose node feature matrix comprises the pathological image features, the patient clinical data features and the patient's overall survival time after PD-1 treatment, and whose edges are patient similarities;
(S2) Initial feature representation: a multi-head self-attention model extracts similarities between the features of the small-block pathological section maps to obtain an initial adjacency matrix; the attention calculation of the multi-head self-attention model is:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V \tag{3}$$

In formula (3), $\mathrm{Attention}$ denotes the attention mechanism, $\mathrm{softmax}$ is the multi-class activation function, $Q = XW^{q}$, $K = XW^{k}$ and $V = XW^{v}$ are the queries, keys and values obtained through the learnable projection matrices $W^{q}$, $W^{k}$ and $W^{v}$, and $d_k$ is the dimension of each head;
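Formula (3) is scaled dot-product attention; a minimal NumPy sketch of one head (toy shapes, random projections standing in for the learned matrices $W^{q}$, $W^{k}$, $W^{v}$):

```python
import numpy as np

rng = np.random.default_rng(4)

def scaled_dot_product_attention(Q, K, V):
    """Formula (3): softmax(Q K^T / sqrt(d_k)) V, the core of each head
    used to extract pairwise similarities for the initial adjacency."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V, weights

X = rng.standard_normal((5, 16))      # 5 node features, model dim 16
Wq, Wk, Wv = (rng.standard_normal((16, 8)) for _ in range(3))  # per-head projections
out, attn = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape, attn.shape)
```

The attention weight matrix `attn` is row-stochastic; a symmetrized version of such pairwise weights could serve as the initial adjacency matrix described in (S2).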
(S3) Aggregate the neighbor information of the nodes through graph convolution, using a mean-squared-error loss as the loss function until the model converges; the node features of layer $l+1$ are:

$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}}\, \tilde{A}\, \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right) \tag{4}$$

In formula (4), $H^{(l+1)}$ and $H^{(l)}$ are the node feature matrices at layers $l+1$ and $l$, $W^{(l)}$ is the weight matrix of layer $l$, $\sigma$ is the activation function, $\tilde{A}$ is the new adjacency matrix obtained by adding self-loops to the adjacency matrix, and $\tilde{D}$ is the corresponding degree matrix;
(S4) Perform PD-1 prognosis evaluation for a new patient: insert a new node into the graph neural network according to the small-block pathological section map features and the clinical data features, recalculate the adjacency matrix until the model converges, and compute and output the patient's overall survival time after PD-1 treatment through the adjacency matrix, completing the PD-1 prognosis evaluation. The updated adjacency matrix is:

$$\tilde{A}_{t+1} = \begin{pmatrix} \tilde{A}_{t} & s_{t} \\ s_{t}^{\top} & 1 \end{pmatrix} \tag{5}$$

In formula (5), $s_{t}$ is the similarity vector between the new patient and the nodes other than the inserted new node, and $t$ denotes the time at which the node is inserted.
As a further technical scheme of the invention, the Reinhard color standardization method eliminates the interference of color differences on the result by correcting the pathological section maps to the same color space, and comprises the following steps:
S1, select a specific target image as the color standardization target, and preprocess and standardize it;
S2, convert the small-block pathological section map into the Lab color space, and normalize the three channels of the Lab color space;
S3, calculate the mean and standard deviation of each channel of the small-block pathological section map, and adjust them to match those of the target image;
S4, calculate the distribution density function in the color space, and map it to the distribution density function of the specific target image;
S5, convert the mapped color values back to the RGB color space to obtain the standardized image.
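The core of step S3 is per-channel mean/standard-deviation transfer. A minimal NumPy sketch, with the simplifying assumption that both arrays are already in the Lab color space (the RGB↔Lab conversions of steps S2 and S5 are omitted, and the toy value ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

def reinhard_transfer(src, tgt):
    """Shift each (assumed Lab) channel of the source patch to the target
    image's per-channel mean and standard deviation (step S3)."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, t = src[..., c], tgt[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return out

src = rng.uniform(0, 100, (32, 32, 3))   # source slice patch (toy values)
tgt = rng.uniform(0, 100, (32, 32, 3))   # color-standardization target
norm = reinhard_transfer(src, tgt)
print(np.allclose(norm[..., 0].mean(), tgt[..., 0].mean()))
```

After this transfer every patch shares the target image's channel statistics, which is what removes staining variation before the segmentation and feature-extraction modules see the data.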
The beneficial effects of the invention are as follows:
The invention discloses a multi-modal melanoma immunotherapy prediction method based on a deep learning network, which builds on the currently popular deep self-attention network and graph convolution network in deep learning and combines the analysis of multi-modal data such as pathological image data, patient metadata, patient clinical data and gene expression features. Before the formal model is trained, the images are stain-normalized and their colors corrected to the same color space, enhancing the robustness of the trained model. A convolutional neural network and a deep self-attention network extract features from the input image and reduce them to 2048 dimensions each; the feature matrices are then concatenated and classified by a fully connected layer, combining the computational advantages of both networks to capture image information better. Missing image information is computer-generated by a diffusion model with natural-language prompts, alleviating the shortage of data and avoiding inaccurate training caused by an insufficient number of samples. Patient clinical data are encoded by a multi-head self-attention model, and importance parameters are assigned according to each item's influence on prognosis, realizing the extraction of non-image information.
This allows the patient's condition to be considered more comprehensively, avoiding the drawback of traditional methods that consider only image information and ignore other factors. Features are extracted from the pathological section maps with an Encoder-Decoder neural network and belief-generation analysis, extracting features better and avoiding the manual feature selection required by traditional methods, with a high degree of automation and intelligence.
Drawings
FIG. 1 is a schematic flow chart of a multi-modal melanoma immunotherapy prediction method based on a deep learning network according to the present invention;
FIG. 2 is a block diagram of a multi-modal melanoma immunotherapy prediction method based on a deep learning network according to the present invention;
FIG. 3 is a schematic flow chart of an image segmentation module in a multi-modal melanoma immunotherapy prediction method based on a deep learning network;
FIG. 4 is a schematic flow chart of a prediction module in a multi-modal melanoma immunotherapy prediction method based on a deep learning network according to the present invention;
Fig. 5 is a schematic flow chart of a color standardization Reinhard method in a multi-modal melanoma immunotherapy prediction method based on a deep learning network.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, a multi-modal melanoma immunotherapy prediction method based on a deep learning network includes the following steps:
Step one, data acquisition, preliminary screening and image standardization: melanoma patient data in The Cancer Genome Atlas are obtained through a medical image management system, comprising three kinds of data per case, namely pathological section maps, patient clinical data and RNA sequences, forming a melanoma patient data set; the melanoma patient data are preliminarily screened and retained on a graphics processor (GPU), and the pathological section maps are standardized to a specific target image using the Reinhard color standardization method;
Step two, image segmentation: image semantic segmentation is performed by an image segmentation module comprising a convolutional neural network and a deep self-attention network; the deep self-attention network captures high-frequency information in the pathological section map, i.e. pixels whose image intensity varies sharply, through its global self-attention calculation, while the convolutional neural network extracts image features and learns from data through local calculation, extracting the low-frequency information of the pathological section map, i.e. pixels whose image intensity varies gently;
Step three, image supplementation: missing image information is generated by an image data supplementation module, which uses a diffusion model and natural-language prompts to computer-generate the image information missing from the pathological section map;
Step four, image re-screening and dimension reduction: the GPU re-screens the small-block pathological section maps produced by image segmentation and data supplementation, and the small-block pathological section maps are reduced in dimension using the t-SNE method;
Step five, image feature extraction: the image features of the small-block pathological section maps are extracted by an image feature extraction module, which uses an Encoder-Decoder neural network and belief-generation analysis;
Step six, non-image information extraction: non-image features are extracted by a non-image auxiliary information context extraction module, which encodes the patient clinical data with a multi-head self-attention model and assigns importance parameters according to each item's influence on prognosis; the patient clinical data comprise melanoma Breslow thickness, ulceration, melanoma TNM stage, mitotic rate, lymph node metastasis, distant metastasis, satellite lesions, primary tumor site, patient age and patient sex;
In specific embodiments, the influencing factors, degrees of prognostic influence and importance parameters of the patient clinical data are shown in Table 1.
TABLE 1 Influencing factors, prognostic influence and importance parameters of patient clinical data
Step seven, multi-modal melanoma immunotherapy prediction: melanoma immunotherapy prediction and assessment are performed by a prediction module, which evaluates the patient's PD-1 prognosis through a graph neural network; the graph neural network combines the outputs of the image feature extraction module and the non-image auxiliary information context extraction module for its evaluation, and prunes the relation edge between two patients based on a preset similarity threshold.
As shown in fig. 2, in a specific embodiment, the invention comprises five modules in total: an image segmentation module, an image data supplementation module, an image feature extraction module, a non-image auxiliary information context extraction module and a prediction module. Under the condition of limited sample data, the existing data are used to the greatest extent, missing data are supplemented, and end-to-end training is performed: the original patient pathology pictures and patient clinical data are input, and the evaluation result is output. As the number of patients grows, once the required computing power reaches the upper limit the hardware can support, a node similarity threshold can be set appropriately, i.e., the relation edge between two patients whose similarity is below the threshold is pruned from the graph neural network, reducing the computing power required.
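The pruning operation above amounts to thresholding the patient-similarity adjacency matrix. A minimal NumPy sketch with a toy symmetric similarity matrix (the threshold value is illustrative, not the patent's):

```python
import numpy as np

rng = np.random.default_rng(7)

def prune_edges(A, threshold):
    """Drop every patient-patient edge whose similarity falls below the
    preset threshold, so graph message passing touches fewer neighbors
    and the computing-power requirement drops."""
    A_pruned = np.where(A >= threshold, A, 0.0)
    np.fill_diagonal(A_pruned, 0.0)   # no self-edges in the patient graph
    return A_pruned

A = rng.uniform(0, 1, (6, 6)); A = (A + A.T) / 2   # symmetric similarities
A_pruned = prune_edges(A, threshold=0.5)
kept = np.count_nonzero(A_pruned) // 2
print(kept <= np.count_nonzero(np.triu(A, 1)))     # pruning never adds edges
```

Raising the threshold sparsifies the graph further, trading some neighbor information for lower per-iteration cost as the patient count grows.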
The invention relates to the fields of artificial intelligence, multi-modal melanoma immunotherapy prediction and oncology, and in particular to a method for prognosis evaluation of PD-1 immunotherapy in melanoma patients based on a deep neural network. The disclosed method comprises an image segmentation module, an image data supplementation module, an image feature extraction module, a non-image auxiliary information context extraction module and a prediction module. The image segmentation module divides a patient's pathological section image into a cancer region, a paracancerous region and a junction region using a convolutional neural network with an attention mechanism; the image data supplementation module automatically generates data for missing regions; the image feature extraction module combines a deep self-attention network to extract pathological image features; the non-image auxiliary information context extraction module extracts features from non-image data such as the patient's clinical data; and finally the prediction module integrates the multi-modal melanoma patient data containing image and non-image features with a graph convolution network to evaluate the patient's PD-1 immunotherapy prognosis.
As shown in fig. 3, in the above embodiment, the working method of the image segmentation module includes the following steps:
Step 1, input the pathological section map: the input to the image segmentation module is the pathological section map cut into patches of size 256 × 256 × 3;
Step 2, perform feature extraction on the input patches with the convolutional neural network and the deep self-attention network;
Step 3, perform feature compression through a convolution layer, and reduce the dimension of the feature matrices extracted by the convolutional neural network and the deep self-attention network through a fully connected layer, forming low-dimensional feature matrices of dimension 2048;
Step 4, concatenate the low-dimensional feature matrices to form a high-dimensional feature matrix of dimension 4096;
Step 5, perform classification prediction on the high-dimensional feature matrix through the fully connected layer;
Step 6, measure the difference between the predicted result and the true label with a binary cross-entropy loss function, and update the parameters of the convolutional neural network and the deep self-attention network with the Adam optimizer;
Step 7, output the pathological regions of the pathological section map: the output is the pathological region corresponding to each patch, where the pathological regions comprise a cancer region, a paracancerous region, a junction region and a background region.
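Steps 3-5 (compression to 2048 dimensions per branch, concatenation to 4096 dimensions, and classification) can be sketched shape-for-shape in NumPy. The random vectors below merely stand in for the outputs of the trained CNN and self-attention branches, and the softmax head is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for the two backbone outputs on one 256x256x3 patch:
cnn_feat = rng.standard_normal(2048)    # low-frequency branch (CNN)
attn_feat = rng.standard_normal(2048)   # high-frequency branch (self-attention)

fused = np.concatenate([cnn_feat, attn_feat])  # 4096-dim joint feature

# Fully connected head over the 4 regions:
# cancer, paracancerous, junction, background
W = rng.standard_normal((4096, 4)) * 0.01
b = np.zeros(4)
probs = softmax(linear(fused, W, b))
print(probs.shape)
```

In the actual module the head is trained with a binary cross-entropy loss and Adam, as stated in step 6; the sketch only demonstrates the tensor flow from the two 2048-dimensional branch features to the 4-way region prediction.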
In the embodiment shown in figs. 4 and 5, the image segmentation module is formed by combining a convolutional neural network and a deep self-attention network, a deep neural network adapted from networks for image semantic segmentation. A pathological slice image contains low-frequency information and high-frequency information: the high-frequency information comprises pixels whose image intensity (brightness/gray level) varies drastically, such as local edges and textures, while the low-frequency information comprises pixels whose image intensity (brightness/gray level) changes smoothly, such as the global shape and structure of objects. The deep self-attention network, thanks to its global self-attention computation, captures more high-frequency information; the convolutional neural network, owing to its local computation, implicitly extracts image features and learns from data, and so effectively extracts the low-frequency information of the image. Therefore, the convolutional neural network and the deep self-attention network each extract features from the input image and reduce them to 2048 dimensions; the feature matrices are then concatenated, and a fully connected layer performs classification prediction on the combined feature matrix, uniting the computational advantages of both networks to capture image information better. This module uses a binary cross-entropy loss and the Adam optimizer. The module's input is the patient pathology map cut into patches of size 256 × 256 × 3, and its output is the pathological region (cancer region, paracancerous region, junction region or background region) corresponding to each patch.
In the above embodiment, the diffusion model machine-generates missing paracancerous-region or junction-region data from real cancer-region data. The real cancer-region data is turned into a noise map through multiple rounds of noise addition; a natural-language encoding prompt is used to interpolate the missing data from the real paracancerous- or junction-region data, yielding encoded feature data. The natural-language encoding prompt simulates the generation process of the data distribution in a step-by-step growth manner, in the spirit of a generative adversarial network. The encoded feature data is concatenated with the noise map and denoised to obtain a simulated paracancerous- or junction-region image, which is compared with the real paracancerous- or junction-region image to compute a cross-entropy loss.
In a specific embodiment, the image data supplementing module adds a prompt on top of a diffusion model to computer-generate the missing image information. According to incomplete statistics, 30%-40% of pathological image slices are missing, or have insufficient, data for the paracancerous region or junction region; for such data, before image feature extraction, the missing paracancerous- or junction-region data must be machine-generated from the real cancer-region data by the image data interpolation module, to avoid adverse effects on the subsequent feature extraction. According to manual-annotation experience with pathological sections, the junction region consists of the pixels extending 200-500 μm inward from the boundary between the cancer region and the paracancerous region, so it is strongly correlated with both regions, and the logic of the data-generation process requires particular attention when the computer generates interpolated data. Therefore, to meet practical application requirements, a diffusion model is adopted as the base model and a prompt is added for missing-data interpolation, so that the generation process of the data distribution is simulated and more realistic samples are produced. In this module, the input is real cancer-region data; a noise map is obtained after multiple rounds of noise addition; feature data obtained by prompt-encoding the real junction- or paracancerous-region data are then input and combined with the noise map; denoising is performed; and finally the generated map is output.
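The multi-round noise-adding operation has a well-known closed form in standard (DDPM-style) diffusion models. The sketch below shows only this forward noising on a stand-in cancer-region patch; the prompt-conditioned denoiser is left out, and the schedule values are illustrative assumptions rather than the patent's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, alpha_bar):
    """Closed-form forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumption)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal-retention factor

cancer_patch = rng.random((256, 256, 3))  # stand-in for real cancer-region data
noisy = forward_diffuse(cancer_patch, T - 1, alpha_bar)

# At t = T-1 the signal coefficient sqrt(alpha_bar) is tiny, so the patch is
# close to pure noise; a denoiser conditioned on the prompt-encoded features
# would then reconstruct a paracancerous/junction patch (not implemented here).
print(noisy.shape)
```

The reverse (denoising) pass, conditioned on the concatenated prompt features, is where the module's generative capacity lives; the forward pass above only supplies the training targets.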
In the above embodiment, the t-SNE dimension reduction method includes the following steps:
(1) Calculate a similarity matrix between data points in the high-dimensional space: the high-dimensional similarities are computed via Euclidean distance, Manhattan distance or cosine similarity, and the high-dimensional data are weighted with a Gaussian kernel to obtain a symmetric probability distribution P;
(2) Construct a probability distribution in the low-dimensional space so that relative distances there mirror those in the high-dimensional space; the probability that data points in the low-dimensional space are selected as neighbors is given by a Student-t kernel:

q_{ij} = \dfrac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}} \qquad (1)

In formula (1), q_{ij} represents the probability that data points i and j are selected as neighbors in the low-dimensional space, y_i is the position of data point i in the low-dimensional space, y_j is the position of data point j in the low-dimensional space, and the denominator sums the same kernel over all distinct pairs (k, l) so that the q_{ij} form a probability distribution;
(3) Minimize, by iterative computation, the KL divergence between the probability distribution P in the high-dimensional space and the probability distribution Q in the low-dimensional space, where the KL divergence is computed as:

\mathrm{KL}(P \parallel Q) = \sum_{x} P(x) \log \dfrac{P(x)}{Q(x)} \qquad (2)

In formula (2), P and Q are the probability distributions, x is a random variable, and P(x) and Q(x) respectively denote the probability of x occurring under the distributions P and Q; the optimization objective is to minimize the KL divergence between the probability distribution P in the high-dimensional space and the probability distribution Q in the low-dimensional space.
In a specific embodiment, the patient pathology patches must be screened after image segmentation and data augmentation, before entering the image feature extraction module. Patches whose predicted class probability in the image segmentation module exceeds 80% and whose non-blank area exceeds 80% are retained; the patches are then reduced in dimension with the t-SNE method, and, according to the data distribution after dimension reduction, 10 patches near the aggregation center are selected from each of the three categories — cancer region, paracancerous region and junction region — to enter the image feature extraction module for the next step of feature extraction.
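The selection of the 10 patches nearest the aggregation center can be sketched as a centroid-distance ranking in the reduced space. The random 2-D array below merely stands in for a t-SNE embedding of one region's patch features (which could be produced with e.g. scikit-learn's TSNE); the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_to_center(embedding: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the indices of the k points closest to the embedding's centroid."""
    center = embedding.mean(axis=0)
    dists = np.linalg.norm(embedding - center, axis=1)
    return np.argsort(dists)[:k]

# Stand-in for a t-SNE 2-D embedding of 120 patch feature vectors
patches_2d = rng.standard_normal((120, 2))
chosen = nearest_to_center(patches_2d, k=10)
print(len(chosen))
```

Run once per category (cancer, paracancerous, junction), this yields the 3 × 10 representative patches that proceed to feature extraction.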
In the above embodiment, the working method of the encoding part for image feature extraction in the fifth step is as follows: the image feature extraction module adopts the convolutional neural network and the deep self-attention network as encoders; first, 10 patches of the pathological section map are taken from each of the cancer region, the paracancerous region and the junction region, features are extracted with the convolutional neural network and reduced to 8192 dimensions with the t-SNE method, and positional information is then added to the reduced feature vectors before they are passed into the deep self-attention network for feature fusion. The working method of the decoding part for image feature extraction in the fifth step is as follows: the image feature extraction module encodes gene sequences of the patient's intratumoral environment with a GPT language model, computes the loss distance between the fused features of the pathological section map and the encoded features of the patient's gene sequences via a cross-entropy loss, and adjusts the network parameters by back-propagation to minimize the cross-entropy loss.
In a specific embodiment, the image feature extraction module performs feature extraction on the input image using an Encoder-Decoder structure combined with belief analysis. In the encoding part of the module, a convolutional neural network is used together with a deep self-attention network as the encoder. First, the convolutional neural network extracts features from 10 pathology patches in each of the cancer region, the paracancerous region and the junction region, and the features are reduced to 8192 dimensions. Positional information is then added to the 30 reduced feature vectors before they are passed into the self-attention network for feature fusion. In the decoding part of the module, 29 gene sequences related to the patient's intratumoral environment are encoded with a GPT model, and the cross-entropy loss between the fused features of the patient's pathology patches and the encoded features of the patient's gene sequences is computed and minimized. After the image feature extraction module has been trained, only its encoding part is used at inference time, outputting the image features of the patient's pathology image.
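A minimal sketch of the encoding path — sinusoidal positional information added to the 30 patch vectors, followed by one round of self-attention fusion — is shown below. The dimension is shrunk from 8192 to 64 for readability, and the identity Q/K/V projections are a simplification of the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # reduced feature dimension for the sketch (8192 in the text)

def positional_encoding(n, d):
    """Standard sinusoidal positional encoding, shape (n, d)."""
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(x):
    """Single-head scaled dot-product self-attention with identity projections."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # row-wise softmax
    return w @ x

# 10 patch vectors per region x 3 regions = 30 tokens
tokens = rng.standard_normal((30, d)) + positional_encoding(30, d)
fused = self_attention(tokens)
print(fused.shape)
```

In the full module, the fused output would be aligned against the GPT-encoded gene-sequence features via the cross-entropy loss described above.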
In the above embodiment, the graph neural network aggregates the neighbor information of each node and updates the node's feature representation through a message-passing mechanism; over multiple rounds of iteration, the message-passing mechanism captures long-range information across the patch-level pathological section map. The working method of the prediction module comprises the following steps:
(S1) Establish an undirected graph whose nodes are patient cases, whose node feature matrix consists of the pathological image features, the patient clinical-data features and the overall survival time of the patient after PD-1 treatment, and whose edges encode patient similarity;
(S2) Initial feature representation: a multi-head self-attention model is adopted to extract similarities among the features of the patch-level pathological section maps to obtain an initial adjacency matrix; the attention computation of the multi-head self-attention model is:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \dfrac{Q K^{\top}}{\sqrt{d_k}} \right) V \qquad (3)

In formula (3), Attention represents the attention mechanism, softmax represents the multi-class activation function, Q = XW^q, K = XW^k and V = XW^v are the query, key and value matrices obtained by projecting the input X with the learnable weight matrices W^q, W^k and W^v, and d_k represents the dimension of each attention head;
(S3) Aggregate the neighbor information of the nodes through a graph convolution operation, adopting the mean-squared-error loss as the loss function until the model converges; the node features at layer l+1 are expressed as:

H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right) \qquad (4)

In formula (4), H^{(l+1)} denotes the node feature matrix at layer l+1, H^{(l)} denotes the node feature matrix at layer l, W^{(l)} denotes the weight matrix of layer l, \sigma denotes an activation function, \tilde{A} = A + I denotes the new adjacency matrix obtained by adding self-loops to the adjacency matrix A, and \tilde{D} denotes the degree matrix of \tilde{A};
(S4) Perform PD-1 prognosis evaluation on a new patient: insert a new node into the graph neural network according to the patient's patch-level pathological section map features and clinical data features, recompute the adjacency matrix until the model converges, and then compute and output the patient's overall survival time after PD-1 treatment through the adjacency matrix, completing the PD-1 prognosis evaluation; the updated adjacency matrix is computed as:

A_{t+1} = \begin{bmatrix} A_t & a_t \\ a_t^{\top} & 1 \end{bmatrix} \qquad (5)

In formula (5), a_t is the vector of similarities between the new patient and the nodes other than the inserted new node, t denotes the step at which the node is inserted, and A_t is the adjacency matrix before insertion.
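The layer update of the graph convolution can be sketched directly in NumPy. The tanh activation, the toy 3-patient adjacency matrix and the weight values below are illustrative assumptions:

```python
import numpy as np

def gcn_layer(A, H, W, act=np.tanh):
    """One graph-convolution layer: H' = act(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric degree normalization
    return act(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                # 3 patients, 2 similarity edges
H = np.eye(3)                               # one-hot node features
W = np.full((3, 2), 0.5)                    # toy layer weights
out = gcn_layer(A, H, W)
print(out.shape)
```

Each row of the output mixes a patient's own features with those of similar patients, which is exactly the neighbor aggregation that lets the survival-time prediction of a new node borrow strength from comparable cases.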
In particular embodiments, the prediction module uses a graph neural network, combining the outputs of the image feature extraction module and the non-image auxiliary information context extraction module, to evaluate the patient's PD-1 prognosis. The core idea of the graph neural network is to aggregate the neighbor information of each node through a message-passing mechanism and update the node's feature representation. This process typically runs for multiple iterations to capture information from more distant parts of the graph, so that the final feature representation of each node contains information from its neighbors and from farther nodes. In this module, an undirected graph is established: the graph nodes are patient cases, the node feature matrix consists of pathological image features, patient clinical-data features and the overall survival time of patients after PD-1 treatment, and the edges encode patient similarity. A multi-head self-attention model extracts similarities among all patients' image features to obtain the initial adjacency matrix of the graph. Neighbor information is aggregated through a graph convolution operation, with the mean-squared-error loss as the loss function, until the model converges. When performing PD-1 prognosis evaluation on a new patient, a new node is inserted into the graph neural network according to the patient's pathological image features and clinical-data features, the adjacency matrix is recomputed until the model converges, and the patient's overall survival time after PD-1 treatment is then computed and output through the adjacency matrix, achieving the goal of PD-1 prognosis evaluation. The hardware operating environment of the graph neural network includes the following:
CPU: the central processing unit, one of the core components of a computer, executes program code and controls the computer's operations.
GPU: graphics processors are widely used in deep learning thanks to their powerful parallel-computing capability; a GPU accelerates neural-network training and inference and improves the performance of deep learning models.
FPGA: a field-programmable gate array is a programmable logic device with high flexibility and reconfigurability, and performs well at accelerating deep learning tasks.
ASIC: an application-specific integrated circuit is a chip designed for a particular application; ASICs are widely used in deep learning because they provide efficient, low-power computing.
TPU: a Tensor Processing Unit is a chip developed by Google dedicated to accelerating artificial-intelligence workloads; TPUs perform well in deep learning and are widely used in Google's own AI projects.
In the above embodiment, the color-normalization Reinhard method eliminates the interference of color differences with the result by correcting every pathological section map into the same color space. The working method of the Reinhard method comprises:
S1, select a specific target image as the color-standardization target, and preprocess and standardize this target image;
S2, convert the patches of the pathological section map into the Lab color space, and normalize the three Lab channels;
S3, compute the per-channel mean and standard deviation of each patch and adjust them to match those of the target image;
S4, compute the distribution density function in the color space and map it onto the distribution density function of the target image;
S5, convert the mapped color values back to the RGB color space to obtain the standardized image.
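The core of steps S3-S4 — matching each patch's per-channel statistics to the target image — can be sketched as follows. For brevity the sketch operates on generic 3-channel arrays in [0, 1] rather than performing the full RGB↔Lab conversion that the complete Reinhard method requires:

```python
import numpy as np

def reinhard_normalize(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Per-channel mean/std transfer, the core of Reinhard normalization.

    In the full method both images are first converted to the Lab color
    space; here the transfer itself is shown on generic 3-channel arrays.
    """
    src_mu, src_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mu, tgt_sd = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - src_mu) / (src_sd + 1e-8) * tgt_sd + tgt_mu
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
src = rng.random((64, 64, 3)) * 0.5          # stand-in for a dim slide patch
tgt = rng.random((64, 64, 3)) * 0.5 + 0.5    # stand-in for the bright target map
norm = reinhard_normalize(src, tgt)
print(norm.shape)
```

After the transfer, the patch's per-channel mean and spread match the target image, so slides scanned under different staining conditions land in the same color space.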
In a specific embodiment, before the image segmentation module is used, the Reinhard method is applied to stain-normalize all pathology maps to a specific target map, so that all patient pathology maps are corrected into the same color space; performing this staining standardization before formal model training enhances the robustness of the trained model.
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.

Claims (9)

1. A multimode melanoma immunotherapy prediction method based on a deep learning network, characterized by comprising the following steps:
Firstly, data acquisition, preliminary screening and image standardization: melanoma patient data are obtained from The Cancer Genome Atlas (TCGA) through a medical image management system; the data comprise three types of cases — pathological section maps, patient clinical data and RNA sequences — forming a melanoma patient dataset; the melanoma patient data are preliminarily screened and retained by a graphics processor (GPU), and the pathological section maps are standardized to a specific target map using the color-normalization Reinhard method;
Secondly, image segmentation processing: image semantic segmentation is performed through an image segmentation module;
Thirdly, image supplementing processing: missing image information is generated through an image data supplementing module, where the image data supplementing module uses a diffusion model and a natural-language encoding prompt to computer-generate the image information missing from the pathological section map;
Fourthly, image rescreening and dimension reduction: the graphics processor (GPU) rescreens the patches of the pathological section map after image segmentation and data supplementation, and the patches are reduced in dimension with the t-SNE method;
Fifthly, image feature extraction: the image features of the patches of the pathological section map are extracted through an image feature extraction module, which uses an Encoder-Decoder neural network structure and belief analysis to extract the patch features;
Sixthly, non-image information extraction: non-image information features are extracted through a non-image auxiliary information context extraction module;
Seventhly, multimode melanoma immunotherapy prediction: melanoma immunotherapy prediction assessment is performed through a prediction module, where the prediction module evaluates the patient's PD-1 prognosis through a graph neural network; the graph neural network performs the evaluation computation by combining the outputs of the image feature extraction module and the non-image auxiliary information context extraction module, and prunes the relationship edge between two patients based on a preset similarity threshold.
2. The method for predicting multimode melanoma immunotherapy based on a deep learning network according to claim 1, characterized in that: the image segmentation module comprises a convolutional neural network and a deep self-attention network; the deep self-attention network captures high-frequency information in the pathological section map through a global self-attention computation mode, the high-frequency information comprising pixels whose image intensity changes drastically; the convolutional neural network performs image feature extraction and data learning through a local computation mode to extract the low-frequency information of the pathological section map, the low-frequency information comprising pixels whose image intensity changes smoothly.
3. The method for predicting multimode melanoma immunotherapy based on deep learning network according to claim 2, wherein the method comprises the following steps: the working method of the image segmentation module comprises the following steps:
Step 1, input the pathological section map: the input to the image segmentation module is the pathological section map cut into patches of size 256 × 256 × 3;
Step 2, perform feature extraction on the input patches with the convolutional neural network and the deep self-attention network;
Step 3, perform feature compression through a convolution layer, and reduce the dimension of the feature matrices extracted by the convolutional neural network and the deep self-attention network through a fully connected layer, forming low-dimensional feature matrices of dimension 2048;
Step 4, concatenate the low-dimensional feature matrices to form a high-dimensional feature matrix of dimension 4096;
Step 5, perform classification prediction on the high-dimensional feature matrix through the fully connected layer;
Step 6, measure the difference between the predicted result and the true label with a binary cross-entropy loss function, and update the parameters of the convolutional neural network and the deep self-attention network with the Adam optimizer;
Step 7, output the pathological regions of the pathological section map: the output is the pathological region corresponding to each patch, where the pathological regions comprise a cancer region, a paracancerous region, a junction region and a background region.
4. The method for predicting multimode melanoma immunotherapy based on a deep learning network according to claim 1, characterized in that: the diffusion model machine-generates missing paracancerous-region or junction-region data from real cancer-region data; the real cancer-region data is turned into a noise map through multiple rounds of noise addition; a natural-language encoding prompt is used to interpolate the missing data from the real paracancerous- or junction-region data, yielding encoded feature data; the natural-language encoding prompt simulates the generation process of the data distribution in a step-by-step growth manner, in the spirit of a generative adversarial network; the encoded feature data is concatenated with the noise map and denoised to obtain a simulated paracancerous- or junction-region image, which is compared with the real paracancerous- or junction-region image to compute a cross-entropy loss.
5. The method for predicting multimode melanoma immunotherapy based on deep learning network according to claim 1, wherein the method comprises the following steps: the t-SNE dimension reduction method comprises the following steps:
(1) Calculate a similarity matrix between data points in the high-dimensional space: the high-dimensional similarities are computed via Euclidean distance, Manhattan distance or cosine similarity, and the high-dimensional data are weighted with a Gaussian kernel to obtain a symmetric probability distribution P;
(2) Construct a probability distribution in the low-dimensional space so that relative distances there mirror those in the high-dimensional space; the probability that data points in the low-dimensional space are selected as neighbors is given by a Student-t kernel:

q_{ij} = \dfrac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}} \qquad (1)

In formula (1), q_{ij} represents the probability that data points i and j are selected as neighbors in the low-dimensional space, y_i is the position of data point i in the low-dimensional space, y_j is the position of data point j in the low-dimensional space, and the denominator sums the same kernel over all distinct pairs (k, l) so that the q_{ij} form a probability distribution;
(3) Minimize, by iterative computation, the KL divergence between the probability distribution P in the high-dimensional space and the probability distribution Q in the low-dimensional space, where the KL divergence is computed as:

\mathrm{KL}(P \parallel Q) = \sum_{x} P(x) \log \dfrac{P(x)}{Q(x)} \qquad (2)

In formula (2), P and Q are the probability distributions, x is a random variable, and P(x) and Q(x) respectively denote the probability of x occurring under the distributions P and Q; the optimization objective is to minimize the KL divergence between the probability distribution P in the high-dimensional space and the probability distribution Q in the low-dimensional space.
6. The method for predicting multimode melanoma immunotherapy based on a deep learning network according to claim 2, characterized in that: the working method of the encoding part for image feature extraction in the fifth step is as follows: the image feature extraction module adopts the convolutional neural network and the deep self-attention network as encoders; first, 10 patches of the pathological section map are taken from each of the cancer region, the paracancerous region and the junction region, features are extracted with the convolutional neural network and reduced to 8192 dimensions with the t-SNE method, and positional information is then added to the reduced feature vectors before they are passed into the deep self-attention network for feature fusion; the working method of the decoding part for image feature extraction in the fifth step is as follows: the image feature extraction module encodes gene sequences of the patient's intratumoral environment with a GPT language model, computes the loss distance between the fused features of the pathological section map and the encoded features of the patient's gene sequences via a cross-entropy loss, and adjusts the network parameters by back-propagation to minimize the cross-entropy loss.
7. The method for predicting multimode melanoma immunotherapy based on a deep learning network according to claim 1, characterized in that: the graph neural network aggregates the neighbor information of each node and updates the node's feature representation through a message-passing mechanism; over multiple rounds of iteration, the message-passing mechanism captures long-range information across the patch-level pathological section map; the working method of the prediction module comprises the following steps:
(S1) Establish an undirected graph whose nodes are patient cases, whose node feature matrix consists of the pathological image features, the patient clinical-data features and the overall survival time of the patient after PD-1 treatment, and whose edges encode patient similarity;
(S2) Initial feature representation: similarity extraction is performed on the features of the patch-level pathological section maps with a multi-head self-attention model to obtain an initial adjacency matrix; the calculation formula of the multi-head self-attention model is:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \dfrac{Q K^{\top}}{\sqrt{d_k}} \right) V \qquad (3)

In formula (3), Attention represents the attention mechanism, softmax represents the multi-class activation function, Q = XW^q, K = XW^k and V = XW^v are the query, key and value matrices obtained by projecting the input X with the learnable weight matrices W^q, W^k and W^v, and d_k represents the dimension of each attention head;
(S3) Aggregate the neighbor information of the nodes through a graph convolution operation, adopting the mean-squared-error loss as the loss function until the model converges; the node features at layer l+1 are expressed as:

H^{(l+1)} = \sigma\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right) \qquad (4)

In formula (4), H^{(l+1)} denotes the node feature matrix at layer l+1, H^{(l)} denotes the node feature matrix at layer l, W^{(l)} denotes the weight matrix of layer l, \sigma denotes an activation function, \tilde{A} = A + I denotes the new adjacency matrix obtained by adding self-loops to the adjacency matrix A, and \tilde{D} denotes the degree matrix of \tilde{A};
(S4) Perform PD-1 prognosis evaluation on a new patient: insert a new node into the graph neural network according to the patient's patch-level pathological section map features and clinical data features, recompute the adjacency matrix until the model converges, and then compute and output the patient's overall survival time after PD-1 treatment through the adjacency matrix, completing the PD-1 prognosis evaluation; the updated adjacency matrix is computed as:

A_{t+1} = \begin{bmatrix} A_t & a_t \\ a_t^{\top} & 1 \end{bmatrix} \qquad (5)

In formula (5), a_t is the vector of similarities between the new patient and the nodes other than the inserted new node, t denotes the step at which the node is inserted, and A_t is the adjacency matrix before insertion.
8. The method for predicting multimode melanoma immunotherapy based on deep learning network according to claim 1, wherein the method comprises the following steps: the color standardization Reinhard method eliminates the interference of color difference on the result by correcting the pathological section map to the same color space, and the working method of the color standardization Reinhard method comprises the following steps:
S1, selecting a specific target image as the reference for color standardization, and preprocessing and normalizing it;
S2, converting the pathological section image patch into the Lab color space, and normalizing its three channels;
S3, calculating the mean and standard deviation of the pathological section image patch and adjusting them;
S4, computing the distribution density function of the patch in the color space, and mapping it to the distribution density function of the target image;
S5, converting the mapped color values back to the RGB color space to obtain the standardized image.
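The channel-wise mean/standard-deviation adjustment of steps S3–S4 can be sketched as below. This is a minimal NumPy illustration assuming the inputs are already float Lab images of shape (H, W, 3); the RGB↔Lab conversion of steps S2 and S5 (e.g. via `skimage.color.rgb2lab`/`lab2rgb`) is assumed to happen outside this function:

```python
import numpy as np

def reinhard_transfer(src_lab, tgt_lab):
    """Shift each Lab channel of the source patch to the target image's
    per-channel mean and standard deviation (the core of Reinhard transfer)."""
    out = np.empty_like(src_lab, dtype=np.float64)
    for c in range(3):                                # L, a, b channels
        s = src_lab[..., c].astype(np.float64)
        t = tgt_lab[..., c].astype(np.float64)
        # standardize the source channel, then rescale to the target statistics
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return out
```

After this transfer, every patch shares the target image's color statistics, so staining differences between slides no longer dominate the downstream features.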
9. The multimode melanoma immunotherapy prediction method based on a deep learning network according to claim 1, wherein the non-image auxiliary-information context-extraction module encodes the patient's clinical data with a multi-head self-attention model and assigns importance parameters according to each variable's degree of influence on prognosis, the clinical data comprising the patient's melanoma Breslow thickness, ulceration status, melanoma TNM stage, mitotic rate, lymph node metastasis, distant metastasis, satellite lesions, primary tumor site, age and sex.
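The clinical-data encoding of claim 9 can be sketched as follows: each clinical variable (Breslow thickness, ulceration status, etc.) is assumed to be embedded as one token, and the attention maps act as the importance parameters. This is a minimal NumPy sketch of multi-head self-attention; the function names and token-per-variable layout are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def clinical_self_attention(X, Wq, Wk, Wv, n_heads):
    """Encode clinical variables with multi-head self-attention.
    X: (n_vars, d_model), one embedded token per clinical variable.
    Returns the attended features and the per-head attention maps,
    whose rows can be read as importance weights over the variables."""
    n, d = X.shape
    d_h = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads, attn = [], []
    for h in range(n_heads):
        cols = slice(h * d_h, (h + 1) * d_h)
        A = softmax(Q[:, cols] @ K[:, cols].T / np.sqrt(d_h))  # (n_vars, n_vars)
        attn.append(A)
        heads.append(A @ V[:, cols])
    return np.concatenate(heads, axis=1), np.stack(attn)
```

With the ten clinical variables listed in the claim, each row of an attention map sums to one and distributes that mass over the variables, which is one natural reading of the "importance degree parameters" above.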
CN202410338028.XA 2024-03-25 2024-03-25 Multimode melanoma immunotherapy prediction method based on deep learning network Active CN117936105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410338028.XA CN117936105B (en) 2024-03-25 2024-03-25 Multimode melanoma immunotherapy prediction method based on deep learning network

Publications (2)

Publication Number Publication Date
CN117936105A true CN117936105A (en) 2024-04-26
CN117936105B CN117936105B (en) 2024-06-18

Family

ID=90759614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410338028.XA Active CN117936105B (en) 2024-03-25 2024-03-25 Multimode melanoma immunotherapy prediction method based on deep learning network

Country Status (1)

Country Link
CN (1) CN117936105B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037172A1 (en) * 2011-01-13 2014-02-06 Rutgers, The State University Of New Jersey Enhanced multi-protocol analysis via intelligent supervised embedding (empravise) for multimodal data fusion
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
KR20220144671A (en) * 2021-04-20 2022-10-27 김송환 Method and apparatus for sharing cancer screening data based on permissioned blockchains
KR20230029004A (en) * 2021-08-23 2023-03-03 전남대학교산학협력단 System and method for prediction of lung cancer final stage using chest automatic segmentation image
CN116664605A (en) * 2023-08-01 2023-08-29 昆明理工大学 Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN116934698A (en) * 2023-07-14 2023-10-24 广东工业大学 Semantic editing-based skin lesion image segmentation method and system
CN117132849A (en) * 2023-09-06 2023-11-28 重庆市人民医院 Cerebral apoplexy hemorrhage transformation prediction method based on CT flat-scan image and graph neural network
CN117671626A (en) * 2023-12-06 2024-03-08 安徽信息工程学院 Method for realizing multi-dimensional image processing by utilizing robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhou Tianqi; Zhu Chaoting; Shi Feng: "Application of radiomics in predicting the benign or malignant classification of lung tumors", Chinese Journal of Medical Instrumentation, no. 02, 30 March 2020 (2020-03-30) *
Li Hang; Yu Zhen; Ni Dong; Lei Baiying; Wang Tianfu: "Melanoma recognition in dermoscopy images based on deep residual networks", Chinese Journal of Biomedical Engineering, no. 03, 20 June 2018 (2018-06-20) *
Hu Yishan; Qin Pinle; Zeng Jianchao; Chai Rui; Wang Lifang: "Ultrasound thyroid segmentation combining segmented frequency domain and local attention", Journal of Image and Graphics, no. 10, 16 October 2020 (2020-10-16) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant