CN115641957A - Neoadjuvant chemotherapy efficacy prediction method and system based on imaging genomics - Google Patents


Info

Publication number
CN115641957A
Authority
CN
China
Prior art keywords: image, genomics, preset, inputting, curative effect
Prior art date
Legal status (assumed; Google has not performed a legal analysis): Pending
Application number
CN202211417630.XA
Other languages
Chinese (zh)
Inventor
尹晓霞
张彦春
王伟彤
高明勇
叶国麟
殷丽华
Current Assignee (the listed assignees may be inaccurate): Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a neoadjuvant chemotherapy efficacy prediction method and system based on imaging genomics, wherein the method comprises the following steps: acquiring a preoperative exosome genomics sample and a corresponding dynamic magnetic resonance breast image from a patient undergoing neoadjuvant chemotherapy; inputting the dynamic magnetic resonance breast image into a preset radiomics feature extraction model and a preset convolutional neural network model, respectively, to obtain corresponding radiomics features and imaging molecular-subtype features; performing transcriptome sequencing on the exosome genomics sample to obtain genomics features; and inputting the radiomics features, the imaging molecular-subtype features and the genomics features into a preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result. The method of the invention achieves effective fusion of high-dimensional unstructured image data with highly specific structured genomics data, mines comprehensive and useful feature information efficiently and accurately, and effectively improves the accuracy of automatic neoadjuvant efficacy prediction.

Description

Neoadjuvant chemotherapy efficacy prediction method and system based on imaging genomics
Technical Field
The invention relates to the technical field of medical image data processing, and in particular to a neoadjuvant chemotherapy efficacy prediction method and system based on imaging genomics.
Background
Neoadjuvant chemotherapy (NAC) refers to systemic cytotoxic drug therapy administered before surgical treatment. It is mainly used in patients with clinical stage II and III breast cancer and inflammatory breast cancer to downstage the disease, improve the breast-conservation rate, and select sensitive chemotherapy regimens. However, not all breast cancer patients respond to NAC: some patients are NAC-insensitive and may even show tumor progression during NAC, delaying surgical treatment. Judging early whether neoadjuvant chemotherapy is effective is therefore very important, so that patients who can benefit from chemotherapy can be identified and individualized treatment plans can be formulated, which is of great significance for the precision treatment of breast cancer.
Existing NAC efficacy prediction methods mainly comprise genomics-only prediction, radiomics-only prediction, and combined imaging-gene prediction, but their predictive performance is not ideal: 1) genomics methods have poor accuracy, the predictive value of many biological factors remains controversial, studies predicting NAC efficacy from multi-gene expression profiles are based only on small sample sizes, and gene expression profiles differ greatly across populations; 2) in radiomics prediction, breast MRI is used to predict NAC efficacy and to detect residual lesions after NAC, and although it is more sensitive than conventional breast ultrasound and mammography, it suffers from a higher false-positive rate; 3) combined imaging-gene studies can only suggest that combining imaging data with gene data reflects tumor characteristics more fully and provides a basis for predicting the NAC efficacy of breast cancer; how to extract useful structured feature data from the high-dimensional unstructured imaging data, fuse it with the highly specific structured genomics data, and extract feature data accurately and effectively remains an important challenge for imaging genomics prediction.
Disclosure of Invention
The invention aims to provide a neoadjuvant chemotherapy efficacy prediction method based on imaging genomics. By using deep learning to achieve effective fusion of high-dimensional unstructured magnetic resonance image data with highly specific structured genomics data, the method mines data information efficiently and accurately, overcomes the shortcomings of existing neoadjuvant chemotherapy efficacy prediction methods, and ensures the accuracy of automatic neoadjuvant chemotherapy efficacy prediction. It thereby effectively reduces overtreatment of breast cancer, reduces side effects in chemotherapy-insensitive patients during chemotherapy, and provides reliable guidance for formulating individualized precision treatment plans.
To achieve the above objects, it is necessary to provide a neoadjuvant chemotherapy efficacy prediction method, system, computer device and storage medium based on imaging genomics.
In a first aspect, an embodiment of the present invention provides a neoadjuvant chemotherapy efficacy prediction method based on imaging genomics, the method comprising the following steps:
acquiring a preoperative exosome genomics sample and a corresponding dynamic magnetic resonance breast image from a patient undergoing neoadjuvant chemotherapy; the exosome genomics sample comprises a needle-biopsy tumor tissue sample and a peripheral blood sample from each round of neoadjuvant chemotherapy;
inputting the dynamic magnetic resonance breast image into a preset radiomics feature extraction model and a preset convolutional neural network model, respectively, to obtain corresponding radiomics features and imaging molecular-subtype features;
performing transcriptome sequencing on the exosome genomics sample to obtain genomics features;
and inputting the radiomics features, the imaging molecular-subtype features and the genomics features into a preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result.
Further, the preset radiomics feature extraction model comprises a tumor position extraction module, a spatial structure extraction module, a spatial signal extraction module and a tower-type multi-scale dense residual network connected in sequence.
Further, the tower-type multi-scale dense residual network comprises a plurality of multi-scale dense residual blocks; each multi-scale dense residual block consists of tower-type convolution kernels of three different sizes; the output of the multi-scale dense residual block is expressed as:
L_n = L_(n-1) + F_3
where
F_1 = C_1^(5×5)(L_(n-1))
F_2 = C_2^(3×3)([L_(n-1), F_1])
F_3 = C_3^(1×1)([L_(n-1), F_1, F_2])
in which L_n and L_(n-1) denote the outputs of the nth and (n-1)th multi-scale dense residual blocks, respectively; C_i^(5×5), C_i^(3×3) and C_i^(1×1) denote the 5 × 5, 3 × 3 and 1 × 1 convolutional layers of the ith layer; and [...] denotes the concat join operation.
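As a concrete illustration, a minimal PyTorch sketch of a block with this structure follows. The dense wiring (each tower level receiving the concatenation of the block input and all earlier levels), the channel widths and the ReLU activation are assumptions: the patent only fixes the three kernel sizes, the concat operation and the residual shortcut.

```python
import torch
import torch.nn as nn

class MultiScaleDenseResidualBlock(nn.Module):
    """Sketch of a tower-type multi-scale dense residual block: three
    convolutional levels with 5x5, 3x3 and 1x1 kernels, dense concat
    connections, and an identity (residual) shortcut. Channel widths
    and the activation are assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv3 = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv5(x))                      # 5x5 tower level
        f2 = self.act(self.conv3(torch.cat([x, f1], 1)))  # dense concat, 3x3
        f3 = self.conv1(torch.cat([x, f1, f2], 1))        # dense concat, 1x1
        return x + f3                                     # residual shortcut

block = MultiScaleDenseResidualBlock(16)
y = block(torch.randn(2, 16, 32, 32))  # channels and spatial size preserved
```

Because each level outputs the same channel count as the block input, blocks of this form can be stacked directly, matching the "plurality of multi-scale dense residual blocks" described above.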
Further, the process of obtaining the preset radiomics feature extraction model includes:
acquiring dynamic magnetic resonance breast images of neoadjuvant chemotherapy patients who have completed a preset number of treatment courses before surgery, and constructing a neoadjuvant chemotherapy efficacy prediction image dataset; these patients include patients who achieved pathological complete response and patients who did not; the dynamic magnetic resonance breast images comprise contrast-enhanced magnetic resonance images of both breasts and both axillae;
performing spatial geometric analysis on each dynamic contrast-enhanced breast magnetic resonance image to obtain geometric objects of different dimensions; the geometric objects comprise three-dimensional space vectors, two-dimensional space-time vectors and one-dimensional time signal vectors;
inputting the geometric objects of different dimensions into the tumor position extraction module for high-dimensional image scaling and edge-pixel detection to obtain tumor position features;
inputting the geometric objects of different dimensions into the spatial structure extraction module for weakly supervised clustering of same-dimension spatial images to obtain multi-dimensional spatial image superpixel features and supervoxel features;
inputting the geometric objects of different dimensions into the spatial signal extraction module for cross-dimension weakly supervised clustering of spatial signals to obtain image signal superpixel features and image signal supervoxel features;
determining the number of hidden layers and hidden-layer neurons of the tower-type multi-scale dense residual network according to the tumor position features, the multi-dimensional spatial image superpixel and supervoxel features, and the image signal superpixel and supervoxel features, and extracting correlation features through the tower-type multi-scale dense residual network to obtain image correlation features;
and calculating a learning error according to a preset loss function and the image correlation features, and performing back-propagation training according to the learning error to obtain the preset radiomics feature extraction model.
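The final training step above can be sketched as follows. The tiny network, the cross-entropy loss standing in for the "preset loss function", and the Adam optimizer are all illustrative assumptions; only the loss/back-propagation/update loop itself is taken from the description.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the real model is the radiomics feature extractor described
# above; the small MLP, loss and optimizer choices here are assumptions.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()              # stands in for the preset loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(8, 1, 8, 8)             # toy image correlation features
labels = torch.randint(0, 2, (8,))             # pCR vs. non-pCR ground truth

for _ in range(5):                             # a few training iterations
    optimizer.zero_grad()
    loss = criterion(model(features), labels)  # learning error
    loss.backward()                            # back-propagation
    optimizer.step()                           # parameter update
```

In practice the loop would run over mini-batches of the efficacy prediction image dataset until the learning error converges.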
Further, the step of performing transcriptome sequencing on the exosome genomics sample to obtain the genomics features comprises:
performing transcriptome sequencing on the needle-biopsy tumor tissue sample and the peripheral blood sample simultaneously to obtain the corresponding genomics features; the transcriptome sequencing comprises whole-genome sequencing, whole-transcriptome sequencing and exosome transcriptome sequencing; the genomics features include gene mutation differences, gene expression differences, and circulating exosome gene expression characteristics.
Further, the preset bidirectional gated recurrent neural network model comprises an input layer, a bidirectional activation-gated recurrent unit, an attention-based RNN, a fusion module and a softmax layer connected in sequence; the bidirectional activation-gated recurrent unit is a bidirectional gated recurrent unit into which an activation valve is introduced.
Further, the step of inputting the radiomics features, the imaging molecular-subtype features and the genomics features into the preset bidirectional gated recurrent neural network model for efficacy prediction to obtain the efficacy prediction result comprises:
inputting the imaging molecular-subtype features and a preset convolution weight vector received by the input layer into the bidirectional activation-gated recurrent unit for bidirectional hidden-state computation to obtain a left hidden output and a right hidden output, and computing a hidden-layer vector from the left and right hidden outputs;
inputting the hidden-layer vector and the radiomics features received by the input layer into the attention-based RNN for feature extraction to obtain a corresponding first mapped feature vector and second mapped feature vector, generating an attention weight vector from the first and second mapped feature vectors, and performing a similarity computation between the attention weight vector and the genomics features received by the input layer to obtain a fusion weight vector;
inputting the fusion weight vector and the hidden-layer vector into the fusion module to obtain imaging genomics features;
and inputting the imaging genomics features into the softmax layer to obtain the efficacy prediction result.
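A hedged PyTorch sketch of a prediction head following these steps is given below. All dimensions, the dot-product similarity, and the exact wiring of the two mappings are assumptions, since the description fixes only the high-level data flow (bidirectional GRU, two mapped feature vectors, similarity against the genomics features, fusion, softmax).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionPredictor(nn.Module):
    """Sketch of the described head: a bidirectional GRU over the imaging
    molecular-subtype feature sequence, attention weights built from two
    learned mappings, a similarity step against the genomics features,
    and a softmax output. Sizes and the similarity are assumptions."""
    def __init__(self, d_sub, d_rad, d_gen, hidden, n_classes=2):
        super().__init__()
        self.bigru = nn.GRU(d_sub, hidden, bidirectional=True, batch_first=True)
        self.map_h = nn.Linear(2 * hidden, d_gen)  # first mapped feature vector
        self.map_r = nn.Linear(d_rad, d_gen)       # second mapped feature vector
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, subtype_seq, radiomics, genomics):
        h, _ = self.bigru(subtype_seq)             # left + right hidden outputs
        attn = torch.tanh(self.map_h(h) + self.map_r(radiomics).unsqueeze(1))
        score = torch.einsum('btd,bd->bt', attn, genomics)  # similarity step
        w = F.softmax(score, dim=1)                # fusion weight vector
        fused = torch.einsum('bt,btd->bd', w, h)   # fusion module
        return F.softmax(self.out(fused), dim=-1)  # efficacy prediction

model = FusionPredictor(d_sub=8, d_rad=16, d_gen=12, hidden=10)
p = model(torch.randn(4, 6, 8), torch.randn(4, 16), torch.randn(4, 12))
```

The output rows are probability distributions over the efficacy classes (e.g. pCR vs. non-pCR).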
In a second aspect, an embodiment of the present invention provides a neoadjuvant chemotherapy efficacy prediction system based on imaging genomics, the system comprising:
a data acquisition module for acquiring a preoperative exosome genomics sample and a corresponding dynamic magnetic resonance breast image from a patient undergoing neoadjuvant chemotherapy; the exosome genomics sample comprises a needle-biopsy tumor tissue sample and a peripheral blood sample from each round of neoadjuvant chemotherapy;
an image feature extraction module for inputting the dynamic magnetic resonance breast image into a preset radiomics feature extraction model and a preset convolutional neural network model, respectively, to obtain corresponding radiomics features and imaging molecular-subtype features;
a gene feature extraction module for performing transcriptome sequencing on the exosome genomics sample to obtain genomics features;
and an efficacy prediction module for inputting the radiomics features, the imaging molecular-subtype features and the genomics features into a preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the foregoing method when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the above method.
The present application provides a neoadjuvant chemotherapy efficacy prediction method, system, computer device and storage medium based on imaging genomics. Under this technical scheme, a preoperative exosome genomics sample and a corresponding dynamic magnetic resonance breast image are acquired from a neoadjuvant chemotherapy patient; the dynamic magnetic resonance breast image is input into a preset radiomics feature extraction model and a preset convolutional neural network model, respectively, to obtain corresponding radiomics features and imaging molecular-subtype features; and after the exosome genomics sample is subjected to transcriptome sequencing to obtain the genomics features, the radiomics features, the imaging molecular-subtype features and the genomics features are input into a preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result. Compared with the prior art, this imaging genomics-based neoadjuvant chemotherapy efficacy prediction method uses deep learning to achieve effective fusion of high-dimensional unstructured magnetic resonance image data with highly specific structured genomics data, realizes efficient and accurate mining of data information, effectively improves the accuracy of automatic neoadjuvant chemotherapy efficacy prediction, effectively reduces overtreatment of breast cancer, reduces chemotherapy side effects in chemotherapy-insensitive patients, and provides reliable guidance for formulating individualized precision treatment plans.
Drawings
FIG. 1 is a schematic diagram of an application scenario of the imaging genomics-based neoadjuvant chemotherapy efficacy prediction method in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the imaging genomics-based neoadjuvant chemotherapy efficacy prediction method in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the preset radiomics feature extraction model in an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the tower-type multi-scale dense residual network in FIG. 3;
FIG. 5 is a schematic framework diagram of multi-level information association and fusion between the preset radiomics feature extraction model and the extracted genomics features in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of the preset bidirectional gated recurrent neural network model in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of the imaging genomics-based neoadjuvant chemotherapy efficacy prediction system in an embodiment of the present invention;
FIG. 8 is an internal structural diagram of a computer device in an embodiment of the present invention.
Detailed Description
In order to make the purpose, technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be apparent that the embodiments described below are only some of the embodiments of the present invention and are used to illustrate, not to limit, the scope of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The imaging genomics-based neoadjuvant chemotherapy efficacy prediction method of the present invention can be applied to the terminal and server shown in FIG. 1. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device, and the server may be implemented as an independent server or a server cluster composed of multiple servers. Using this method, the server can accurately predict the neoadjuvant treatment efficacy from the acquired dynamic magnetic resonance breast images and genomic data of neoadjuvant chemotherapy patients who have completed a preset number of treatment courses before surgery; the resulting efficacy prediction is used for subsequent research on the server or sent to the terminal for its user to view and analyze. The following embodiments describe the imaging genomics-based neoadjuvant chemotherapy efficacy prediction method of the present invention in detail.
In one embodiment, as shown in FIG. 2, a neoadjuvant chemotherapy efficacy prediction method based on imaging genomics is provided, comprising the following steps:
s11, acquiring exosome genomics samples and corresponding dynamic magnetic resonance breast images of patients who adopt new auxiliary chemotherapy before an operation; the exosome genomics samples comprise punctured tumor tissue samples and tumor peripheral blood samples of each round of new auxiliary chemotherapy, the number of the punctured tumor tissue samples and the tumor peripheral blood samples which are obtained specifically is equal to the number of the new auxiliary chemotherapy courses participated by a patient, the amount of each punctured tumor tissue sample and the amount of each tumor peripheral blood sample can be determined according to actual requirements, 100mg of punctured tumor tissue samples and 2ml of tumor peripheral blood samples are preferably obtained in the embodiment, and the samples are stored in a sample library at minus 80 ℃;
the dynamic magnetic resonance breast image used in this embodiment can be understood as scanning all mammary glands and bilateral underarm ranges according to the prone position by using a 3.0T superconducting magnetic resonance apparatus (Discovery MR750w; GE Healthcare, USA) of GE and a double-breast special 8-channel phased array coil MRI machine, and focusing on the image sequence analysis of the DCE-MRI of the mammary glands, dynamically enhancing and collecting 5-stage images, wherein each time phase sequence is about 52s, T2WI sequence and EPI-DWI image sequence are used for image pre-analysis, eliminating fat tissues, reducing the operation complexity, and effectively improving the operation speed and the sensitivity and accuracy of detection of cancerous tissues; in addition, in order to ensure the quality and the use effect of the image data, the spatial interpolation, the image resampling and the bias field correction can be considered, and the spatial interpolation, the image resampling and the bias field correction are correspondingly preprocessed to ensure that the MRI signal intensity distribution is consistent and then is used for machine learning analysis.
S12, inputting the dynamic magnetic resonance breast image into a preset radiomics feature extraction model and a preset convolutional neural network model, respectively, to obtain corresponding radiomics features and imaging molecular-subtype features; the preset radiomics feature extraction model and the preset convolutional neural network model are understood to be models with stable parameters trained in advance on related datasets; the preset radiomics feature extraction model, shown in FIG. 3, comprises a tumor position extraction module, a spatial structure extraction module, a spatial signal extraction module and a tower-type multi-scale dense residual network connected in sequence;
specifically, the tumor position extraction module can understand that the influence of false positive brought by image measurement noise is effectively inhibited through a high-dimensional image low-dimensional reconstruction technology, detection of a cross-dimensional tumor associated region is achieved, and the identification capability and the fitting capability of tumor image boundary points are enhanced through detection of edge pixel points, so that missing pixel points are supplemented;
the spatial structure extraction module can be understood as clustering analysis is carried out on the same-dimensional spatial image data in the high-dimensional image data through a multi-example weak supervision clustering method so as to mine potential multi-level semantic information in the image data;
the spatial signal extraction module can be understood as clustering analysis is carried out on cross-dimensional spatial signal data in high-dimensional image data through a multi-example weak supervision clustering method so as to mine potential time sequence characteristic information in the image data;
the tower-type multi-scale dense residual network can be understood as an improved U-Net network, and a multi-dimensional multi-scale dense residual neural network obtained by combining the residual neural network with the linkage of U-Net strengthening layers can automatically extract abundant multi-dimensional and multi-scale image characteristics, so that a tumor image classification task can be completed more stably and efficiently; it should be noted that, in the tower-type multi-scale dense residual error network adopted in this embodiment, it is considered that the dynamic magnetic resonance image is composed of multi-dimensional geometric objects with different structures, so that the width of the characterization model can be increased, if multi-dimensional consistency deep learning is obtained, the convolutional layer needs to be subjected to multi-scale dimension reduction through multiple kernels (kernel) according to data structures with different dimensions, so as to increase the depth of the characterization model and improve the characterization capability of the model, and the tower-type multi-scale dense residual error network similar to U-Net, which is proposed by combining the residual error network concept and the dense convolutional network, is used as a network of a main structure, and the tower-type convolutional kernel is fused into the multi-scale dense residual error block, so that the depth of the convolutional kernel is gradually reduced along with the improvement of the convolutional kernel; each group of input is all scale feature input, each group of input outputs features of different scales, the special structural design of the system can automatically extract the features of the image on each scale, and 1 × 1 convolution is utilized to effectively reduce the dimension, so that the requirement of analyzing multi-dimensional geometric images with different breast tumor image scales is better met. Specifically, as shown in fig. 
4, the tower-type multi-scale dense residual network includes a plurality of multi-scale dense residual blocks; the multi-scale dense residual block consists of tower type convolution kernels with three different sizes; the output of the multi-scale dense residual block is represented as:
L_n = L_(n-1) + F_3
where
F_1 = C_1^(5×5)(L_(n-1))
F_2 = C_2^(3×3)([L_(n-1), F_1])
F_3 = C_3^(1×1)([L_(n-1), F_1, F_2])
in which L_n and L_(n-1) denote the outputs of the nth and (n-1)th multi-scale dense residual blocks, respectively; C_i^(5×5), C_i^(3×3) and C_i^(1×1) denote the 5 × 5, 3 × 3 and 1 × 1 convolutional layers of the ith layer; and [...] denotes the concat join operation;
the preset image omics feature extraction model can be understood as a model which is specially used for extracting image omics features and is obtained by gradient descent and back propagation training for a network meeting the structural requirements based on the obtained dynamic magnetic resonance mammary gland image; specifically, the process for obtaining the preset imagery omics feature extraction model comprises the following steps:
acquiring dynamic magnetic resonance breast images of new auxiliary chemotherapy patients completing a preset number of treatment courses before surgery, and constructing a new auxiliary chemotherapy effect prediction image data set; the new adjuvant chemotherapy patients completing a preset number of courses of treatment before the operation comprise patients achieving complete pathological relief and patients not achieving complete pathological relief; the dynamic magnetic resonance breast image comprises contrast enhanced magnetic resonance images of all breasts and bilateral underarms; the preset number of treatment courses can be determined according to actual application requirements, the embodiment is preferably set to 8, that is, the dynamic magnetic resonance breast images for constructing the new auxiliary curative effect prediction image data set are all derived from the patients who receive 8 preoperative courses of new auxiliary chemotherapy, correspondingly, the proportion of patients who have achieved complete pathological remission in treatment to patients who have not achieved complete pathological remission in treatment can be set according to actual conditions, and the embodiment is preferably selected according to the principle of 1;
performing spatial geometric analysis on each dynamic contrast-enhanced breast magnetic resonance image to obtain geometric objects of different dimensions; the acquisition of geometric objects can be understood as obtaining, through the spatial geometric analysis of algebraic hypersurface neurons, four different levels of the multi-dimensional image (points, lines, planes and volumes) and vectors along the sagittal, coronal and horizontal directions, including three-dimensional space vectors, two-dimensional space-time vectors and one-dimensional time signal vectors. Specifically, the step of obtaining geometric objects of different dimensions through the spatial geometric analysis of algebraic hypersurface neurons comprises:
clustering the dynamic contrast-enhanced breast magnetic resonance images stored in tensor form to generate three-dimensional space vectors associated with a number of block-shaped region blocks; the region blocks in the three-dimensional spatial image data can be regarded as stacks of two-dimensional spatial image data arranged along three different directions: sagittal, coronal and horizontal;
extracting the two-dimensional spatial image data of each region block in each direction to obtain two-dimensional space vectors;
adding a time-frame sequence to each two-dimensional space vector to obtain three-dimensional space-time vectors;
adding the time-frame sequence to each row or column of the two-dimensional space vectors within the three-dimensional space vector to obtain two-dimensional space-time vectors;
adding the time-frame sequence to each pixel of the two-dimensional spatial image data to obtain a number of corresponding one-dimensional time signal vectors;
In this embodiment, the high-dimensional image is decomposed into geometric objects, and the associated sub-regions of interest in the multi-dimensional image are embedded into nodes for representation according to geometric algebra theory, forming a vector-node representation. This geometric-algebra-based vector-node embedding not only realizes the semantic association and correspondence of images of different dimensions and their signals at the input nodes, avoiding the dimensional loss that may occur when the original data features are instantiated, but also, on this basis, fully mines multi-scale semantic feature representations such as tumor morphology in cross-dimension images and the signal-intensity variation of different tissues, forming more comprehensive input node clusters with different representation forms and capabilities, and providing a reliable basis for constructing deeper and wider classification models suitable for high-dimensional medical images.
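The decomposition of a DCE-MRI series into the listed geometric objects can be illustrated with plain NumPy indexing; the axis order (time, sagittal, coronal, axial), the toy shapes and the chosen indices are all assumptions.

```python
import numpy as np

# Toy DCE-MRI series stored as a tensor; axis order
# (time, sagittal, coronal, axial) is an illustrative assumption.
dce = np.random.rand(5, 8, 9, 10)        # 5 dynamic phases

volume_t0 = dce[0]                       # 3-D space vector (one phase volume)
sagittal_slice = volume_t0[3]            # 2-D spatial data in one direction
slice_over_time = dce[:, 3]              # 3-D space-time vector (slice + time)
line_over_time = dce[:, 3, 4]            # 2-D space-time vector (line + time)
voxel_signal = dce[:, 3, 4, 5]           # 1-D time signal of a single voxel
```

Each slicing pattern corresponds to one of the vector types above: dropping the time axis yields a space vector, while keeping it and reducing the spatial extent yields space-time and pure time-signal vectors.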
Inputting the geometric objects with different dimensions into the tumor position extraction module for high-dimensional image scaling and edge pixel detection to obtain tumor position characteristics; the extraction of the tumor position features can be understood as a process of analyzing and reconstructing multi-dimensional consistency time, space and semantic association regions through tensor analysis and mining potential association among multi-dimensional medical image space, time sequences and semantic features.
Tumor growth usually affects the surrounding normal tissue, and the adhesion between the tumor region and its neighbourhood is usually expressed within the same-dimension image; that is, the adjacent and related regions at the image edge are mostly considered as related regions within that dimension. For a cross-dimension medical dynamic enhanced image, the semantic related regions of each dimension and time sequence need to be searched, and the correlations of the abnormal regions within each dimension mined, to provide a basis for establishing the representation model. Aiming at the characteristics of medical images, in order to better search the semantic association areas among dimensions, this embodiment preferably adopts a high-dimensional image low-dimensional reconstruction method to effectively suppress false positives caused by image measurement noise, thereby detecting cross-dimension tumor association regions. In addition, in order to reconstruct the tumor image accurately, several different tensor decomposition methods, such as HOSVD, HOOI, 2DPCA and 2DSVD, may be comparatively analysed to determine the tensor decomposition method with the highest retention of image-data structure information as the method used for tensor analysis in this embodiment.
According to the tensor decomposition definition, there is

$$\mathcal{X}_\tau = \mathcal{C}_\tau \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}$$

wherein $\mathcal{X}_\tau$ is the decomposition of the high-dimensional image data at moment τ before reconstruction and $\mathcal{C}_\tau$ is the core tensor. In order to find the image area with the maximum enhancement, a principal component analysis strategy is adopted for the low-dimensionally embedded dynamic two-dimensional image in each direction, and a new low-dimensional matrix is regenerated by calculating the covariance matrix Δ and the corresponding eigenvector matrix E:

$$Y_\tau = X_\tau E, \qquad \Delta = \tfrac{1}{n-1} X_\tau^{\mathsf T} X_\tau$$
The final tensor-reconstructed silhouette image may then be redefined according to the generated low-dimensional matrix as:

$$\hat{\mathcal{X}}_\tau = \bar{\mathcal{C}} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)}, \qquad \bar{\mathcal{C}} = \frac{1}{\gamma} \sum_{\tau=1}^{\gamma} \mathcal{C}_\tau$$

wherein $\bar{\mathcal{C}}$ is the average core tensor and γ is the number of channels corresponding to time frame τ. The high-dimensional image scaling technique reduces the difference between the time and space dimensions, while the detection of edge pixel points strengthens the recognition and fitting of tumor-image boundary points, so that missing pixel points are supplemented and effective tumor position features are obtained.
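The tensor-analysis step can be sketched with a plain truncated HOSVD in NumPy. This is an illustrative stand-in for the HOSVD/HOOI-style variants the embodiment says it compares, not the patent's exact procedure; all function names here are assumptions:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the chosen axis to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given target shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode factor matrices U^(n) from the
    left singular vectors of each unfolding, plus the core tensor C."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    C = T
    for mode, U in enumerate(Us):
        shape = C.shape[:mode] + (U.shape[1],) + C.shape[mode + 1:]
        C = fold(U.T @ unfold(C, mode), mode, shape)
    return C, Us

def reconstruct(C, Us):
    """X_hat = C x_1 U^(1) x_2 U^(2) x_3 U^(3)."""
    X = C
    for mode, U in enumerate(Us):
        shape = X.shape[:mode] + (U.shape[0],) + X.shape[mode + 1:]
        X = fold(U @ unfold(X, mode), mode, shape)
    return X

X = np.random.default_rng(0).random((4, 5, 6))
C, Us = hosvd(X, ranks=(4, 5, 6))      # full ranks -> exact reconstruction
X_hat = reconstruct(C, Us)
```

Truncating the ranks below the full dimensions gives the low-dimensional reconstruction used to suppress measurement noise.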
Inputting the geometric objects with different dimensions into the spatial structure extraction module to perform same-dimension spatial image weakly supervised clustering to obtain a multi-dimensional spatial image superpixel feature and a multi-dimensional spatial image supervoxel feature; the same-dimension spatial image weakly supervised clustering can be understood as multi-example weakly supervised clustering performed on the two-dimensional space vectors and three-dimensional space vectors according to the semantic labels and directions of the image data; the specifically adopted clustering method can be obtained by combining a fuzzy C-means clustering method with a superpixel segmentation method and then integrating machine learning algorithms such as a support vector machine, adjusted as required, and is not repeated here.
Inputting the geometric objects with different dimensions into the space signal extraction module to perform cross-dimension space signal weakly supervised clustering to obtain image signal superpixel features and image signal supervoxel features; the cross-dimension space signal weakly supervised clustering can be understood as multi-example weakly supervised clustering on the two-dimensional space-time vectors, three-dimensional space-time vectors and one-dimensional time signal vectors; the specifically adopted clustering method can likewise be obtained by combining a fuzzy C-means clustering method with a superpixel segmentation method and then integrating machine learning algorithms such as a support vector machine, adjusted as required, and is not repeated here.
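A bare-bones fuzzy C-means implementation, one assumed ingredient of the clustering combination described above (the superpixel segmentation and SVM integration are omitted), might look like:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means: returns cluster centres and a soft membership
    matrix U of shape (n_samples, c) whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))             # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

rng = np.random.default_rng(1)
blob_a = rng.normal(0.0, 0.1, size=(20, 2))          # two well-separated blobs
blob_b = rng.normal(10.0, 0.1, size=(20, 2))
X = np.vstack([blob_a, blob_b])
centres, U = fuzzy_cmeans(X, c=2)
```

The soft membership matrix is what makes the clustering "weakly supervised friendly": semantic bag labels can be attached to memberships rather than hard assignments.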
Determining the number of hidden layers and the number of hidden-layer neurons of the tower-type multi-scale dense residual network according to the tumor position feature, the multi-dimensional spatial image superpixel and supervoxel features and the image signal superpixel and supervoxel features, and extracting correlation features through the tower-type multi-scale dense residual network to obtain image correlation features;
the tumor position feature, the multi-dimensional spatial image superpixel and supervoxel features and the image signal superpixel and supervoxel features can be understood as cross-scale multi-dimensional image features based on geometric algebra, realizing an initial characterization of the same-dimension and cross-dimension association relations. Rich and close connections exist among the local regions in a mammary gland image, and when a multilayer perceptual convolutional network is used to extract features, mining the connections between the nodes and the several hidden layers of the network is the key to feature extraction; selecting an appropriate number of hidden layers and hidden-layer nodes affects the accuracy of feature extraction to a great extent. This embodiment therefore preferably adopts a principal component analysis method to convert the original correlated variables into a smaller set of new uncorrelated variables, determines the number of hidden layers in the neural network based on the cumulative variance, and then determines the number of neurons in the hidden layers by a clustering method. The core of aggregation with the tower-type multi-scale dense residual network is to aggregate and mine the neighbour-node features of each node given by the adjacency matrix, and to dynamically sample the adjacency matrix to obtain neurons of different sizes, thereby representing data blocks of different scales, providing more potential information and a reliable guarantee for the accurate extraction of the image omics features;
specifically, the step of determining the number of hidden layer layers and the number of hidden layer neurons of the tower-type multi-scale dense residual error network includes:
performing principal component analysis on the tumor position feature, the multi-dimensional spatial image superpixel and supervoxel features and the image signal superpixel and supervoxel features to obtain the principal component features, and determining the number of hidden layers of the tower-type multi-scale dense residual network according to the number of the principal component features;
determining the number of hidden layer neurons of the tower-type multi-scale dense residual error network by performing cluster analysis or principal component singular value calculation on the principal component features;
specifically, the principal component analysis can be understood as a process of determining how many principal components to select from the original feature variables according to the percentage of the cumulative variance of the part to be preserved in the total variance (i.e. the cumulative contribution rate), that is, converting the input feature variables into several principal components whose values are kept within a certain range (maximum - minimum); by clustering the input nodes, for example with the K-Means method, the node information in the convolutional neural network and the association relationships between the nodes can be aggregated and mined. In the multilayer perception feedforward neural network, each node of the upper layer is connected with each node of the lower layer; if each layer of the neural network contains the neurons $p_1,\dots,p_n$ and outputs the (column) vectors $h_1,\dots,h_k$, then each column vector composed of $p_1,\dots,p_n$ can be analysed by a clustering method, or by calculating the singular values of the principal components, to obtain the nodes of the hidden layers $h_1,\dots,h_k$. Deepening the hidden layers corresponds to more complex deep abstract features, and the complexity of the hidden-layer features corresponds to the principal components with higher variance; that is, the number of hidden layers in the neural network is determined by the number of important principal components formed through the PCA conversion, satisfying the formula $\sum_i \mathrm{Variance}(PC_i) = \sum_i \mathrm{Complexity}(h_i)$.
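The cumulative-variance rule can be sketched in a few lines of NumPy; the 0.95 threshold and the synthetic data are illustrative assumptions, and the follow-up clustering step for the neuron counts is omitted:

```python
import numpy as np

def n_hidden_layers(X, keep=0.95):
    """Number of principal components needed to reach the cumulative
    contribution rate `keep`; used here as the hidden-layer count."""
    Xc = X - X.mean(axis=0)                           # centre the features
    var = np.linalg.svd(Xc, compute_uv=False) ** 2    # per-component variance
    ratio = np.cumsum(var) / var.sum()                # cumulative contribution
    return int(np.searchsorted(ratio, keep) + 1)

rng = np.random.default_rng(0)
# two strong latent directions plus tiny noise in eight other features
X = np.hstack([10 * rng.standard_normal((200, 2)),
               1e-3 * rng.standard_normal((200, 8))])
```

With this data, two components carry essentially all the variance, so the rule selects two hidden layers.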
In the embodiment, the characteristic that the number of hidden layers is increased by dimensional data so as to reduce the performance of a neural network is considered, a multi-layer perception neural network capable of automatically determining the number of the hidden layers and the number of corresponding nodes is designed and established, and a multi-dimensional consistent space-time representation model of multi-dimensional image latent semantic association in the imaging group is obtained.
Calculating a learning error according to a preset loss function and the image correlation characteristics, and performing back propagation training according to the learning error to obtain a preset image omics characteristic extraction model; the preset loss function can be determined according to the actual requirement of image feature extraction, such as formulating the loss function according to spatial content, formulating the loss function according to texture characteristics, formulating the loss function according to all variables, and the like, and the method is not particularly limited herein.
In addition, the preset convolutional neural network model can be understood as a network model for molecular typing feature extraction, obtained by training a 3D convolutional neural network, formed by alternately connecting a preset number of convolutional layers and pooling layers, on the constructed neoadjuvant efficacy prediction image data set. The specific construction process can be understood as follows: firstly, the sequence images with the most prominent differences for neoadjuvant efficacy prediction are extracted from the image data set, and the regions of interest (including the tumor lesion area and the peritumoral stroma) are cropped from the extracted sequence images according to the size of the lesion area; secondly, space-time consistency feature analysis and extraction are performed on the different tissue-characteristic sub-regions of the cropped regions of interest (for example, a diffusion-weighted image can decompose a tumor into sub-regions reflecting three degrees of diffusion: restricted, intermediate and free), combining the geometric algebra, subspace division and overall heterogeneity analysis, and a mapping relation from image features to molecular pathological information is established; the extracted features, combined with the corresponding magnetic resonance parameters, are input into the 3D convolutional neural network with its subsequent fully connected (Dropout) layer and Softmax classifier, iterative training is performed according to a preset loss function, and the model parameters meeting the requirements are saved to obtain a preset convolutional neural network model usable for extracting molecular typing features. It should be noted that both the learning and training process of the model and the setting of the loss function can be adjusted according to the actual application requirements, and are not specifically limited here.
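For the alternating convolution/pooling pattern, a trivial helper shows how the spatial size of a cropped 3D region of interest shrinks per block; the kernel and pooling sizes are assumptions, as the patent leaves them unspecified:

```python
def conv_pool_output_shape(in_shape, n_blocks, k=3, pool=2):
    """Spatial size after n_blocks of ('valid' k x k x k convolution
    followed by non-overlapping pool x pool x pool max-pooling)."""
    dims = list(in_shape)
    for _ in range(n_blocks):
        dims = [d - (k - 1) for d in dims]   # valid convolution
        dims = [d // pool for d in dims]     # max-pooling
    return tuple(dims)
```

Such a check is useful when fixing the "preset number" of conv/pool pairs, since too many blocks collapse a small lesion crop to zero size.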
S13, carrying out transcriptome sequencing on the exosome genomics sample to obtain genomics characteristics; wherein, the genomics features comprise genetic mutation differences, gene expression differences and circulating exosome gene expression characteristics; specifically, the step of performing transcriptome sequencing on the exosome genomics sample to obtain the genomics characteristics comprises:
performing transcriptome sequencing on the punctured tumor tissue sample and the tumor peripheral blood sample simultaneously to obtain corresponding genomics characteristics; the transcriptome sequencing comprises whole genome sequencing, whole transcriptome sequencing and exosome transcriptome sequencing, and each sequencing method can be realized by adopting the existing corresponding method, so that the details of various sequencing processes are not repeated;
As shown in figure 5, information fusion is carried out on the feature vectors of four different layers of the multi-dimensional image, namely point, line, plane and body in different directions (the sagittal, coronal and horizontal planes), through image knowledge migration and time-series data interpolation, to obtain the multi-dimensional vector space-time information and the multi-dimensional spatial associated feature information respectively. Combining the semantic, spatial and temporal correlations between different dimensions, a reasonable tower-type multi-scale dense residual network structure is determined through principal component analysis, realizing a light preset image omics feature extraction model usable for multi-dimensional consistent analysis. Meanwhile, the cross-dimension and same-dimension spatial semantic information is used to control the state of the AuGRU activation gate according to

$$\tilde{u}_t = a_t \cdot u_t, \qquad h_t = (1-\tilde{u}_t)\circ h_{t-1} + \tilde{u}_t \circ \tilde{h}_t$$

and, combined with the attention vector $a_t$ of the molecular typing and the genomics feature vector $u$, the image genomics features $u'$ of different tumor patients are obtained for accurate tumor diagnosis and prediction of neoadjuvant chemotherapy efficacy.
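One common reading of such an attention-controlled activation gate is a GRU whose update gate is rescaled by an attention score. The sketch below is an assumed, simplified NumPy version of that idea, not the patent's exact formulation; all weight names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AUGRUCell:
    """GRU cell whose update gate is rescaled by an attention score a_t:
    frames with a_t = 0 leave the hidden state untouched."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        def w(*s):
            return 0.1 * rng.standard_normal(s)
        self.Wz, self.Uz = w(n_hidden, n_in), w(n_hidden, n_hidden)
        self.Wr, self.Ur = w(n_hidden, n_in), w(n_hidden, n_hidden)
        self.Wh, self.Uh = w(n_hidden, n_in), w(n_hidden, n_hidden)

    def step(self, x, h, a_t):
        z = a_t * sigmoid(self.Wz @ x + self.Uz @ h)   # attention-scaled update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)         # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_cand

cell = AUGRUCell(n_in=4, n_hidden=3)
x = np.ones(4)
h = np.zeros(3)
h_ignored = cell.step(x, h, a_t=0.0)   # zero attention: state unchanged
h_updated = cell.step(x, h, a_t=1.0)
```

The gate thus lets high-attention time frames dominate the accumulated lesion-area trajectory while low-attention frames pass through unchanged.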
S14, inputting the image omics features, the image molecular typing features and the genomics features into a preset bidirectional threshold recurrent neural network model for efficacy prediction to obtain an efficacy prediction result; the preset bidirectional threshold recurrent neural network model can be understood as follows: using the memory property of the bidirectional threshold recurrent unit network, a bidirectional gated recurrent neural network (BiGRU-RNN) is adopted, in which the image omics features and the image molecular typing features generate association vectors with attention through an attention mechanism (Attention Mechanisms); meanwhile, an AuGRU is adopted to calculate a similarity matrix of the genomics features for individualized cases, and on the basis of the effective association of the dynamic image spatiotemporal features, a joint characterization of the image data and the genomics data is established, so that the image omics features, the molecular typing features and the genomics features are effectively fused and classified to obtain a more accurate prediction of the neoadjuvant chemotherapy efficacy. Specifically, the preset bidirectional threshold recurrent neural network model comprises an input layer, a bidirectional activation-gated recurrent unit, an attention-based RNN, a fusion module and a softmax layer which are connected in sequence. In order to deal with genomics data with individual differences and to facilitate effective fusion with the molecular typing features and image omics features, this embodiment preferably sets the bidirectional activation-gated recurrent unit as a bidirectional gated recurrent unit with an activation valve introduced, so as to obtain the forward and backward time trajectory of the lesion-area features and to effectively capture and highlight the key time-sequence information of the lesion area;
as shown in fig. 6, the step of inputting the image omics characteristics, the image molecular classification characteristics and the genomics characteristics into a preset bidirectional threshold recurrent neural network model for efficacy prediction to obtain an efficacy prediction result includes:
inputting the image molecule typing characteristics and preset convolution weight vectors received by the input layer into the bidirectional activation gating circulation unit to perform bidirectional hidden state calculation to obtain a left hidden output and a right hidden output, and calculating to obtain a hidden layer vector according to the left hidden output and the right hidden output;
inputting the implicit layer vector and the image omics features received by the input layer into the attention-based RNN for feature extraction, respectively obtaining a corresponding first mapping feature vector and a corresponding second mapping feature vector, generating an attention weight vector according to the first mapping feature vector and the second mapping feature vector, and performing similarity calculation on the attention weight vector and the genomics features received by the input layer to obtain a fusion weight vector;
inputting the fusion weight vector and the hidden layer vector into the fusion module to obtain image genomics characteristics;
and inputting the image genomics characteristics into the softmax layer to obtain the curative effect prediction result.
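The fusion steps above can be sketched end to end. The exact gating and similarity functions are not fully specified in the patent, so cosine similarity and softmax normalization are assumed here purely for illustration, and all names are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_and_predict(H, v_omics, u_gen, W_cls):
    """H: (n, d) hidden-layer vectors from the bidirectional unit;
    v_omics: (d,) image-omics context vector; u_gen: (d,) genomics vector;
    W_cls: (n_classes, d) classifier weights. Returns class probabilities."""
    a = softmax(H @ v_omics)                 # attention weights over time steps
    sim = (H @ u_gen) / (np.linalg.norm(H, axis=1)
                         * np.linalg.norm(u_gen) + 1e-12)   # genomics similarity
    delta = softmax(a * sim)                 # fusion weights (attention x similarity)
    u_fused = delta @ H                      # fused image-genomics feature
    return softmax(W_cls @ u_fused)          # efficacy prediction

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
probs = fuse_and_predict(H, rng.standard_normal(8), rng.standard_normal(8),
                         rng.standard_normal((2, 8)))
```

The two-class output here stands for "pathological complete response" versus "no complete response"; a trained model would of course learn all the weight matrices.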
The context interaction mechanism of the BiGRU-RNN model provided in this embodiment, based on the memory and attention mechanisms together with the tissue-of-interest-level image omics feature input, the molecular typing feature vector and the encoded tumor-appearance genomics feature vector, is the key to effectively associating the dynamic image spatiotemporal features and thereby establishing a joint characterization model of the image data and the genomics data. The specific application process is as follows:
1) The image omics features, the image molecular typing features and the preset convolution weight vector $(\omega_1,\dots,\omega_n)$, output respectively by the preset image omics feature extraction model and the preset convolutional neural network model, are taken as the input of the attention BiGRU-RNN network model;
2) The rightward hidden state $fh_t$ and the leftward hidden state $bh_t$ are calculated from the input image molecular typing features, and the hidden layer vectors $h_1,\dots,h_n$ are computed;
3) According to the attention mechanism, the hidden layer vectors $h_1,\dots,h_n$ and the image omics feature vector $V_{at}$ are mapped by the multilayer perception RNN network to the output vectors $u_w$ (first mapping feature vector) and $u_{at}$ (second mapping feature vector), from which the attention weight vector is generated:

$$a_t = \frac{\exp(u_{at}^{\mathsf T} u_w)}{\sum_{t=1}^{n} \exp(u_{at}^{\mathsf T} u_w)}$$
4) The genomics feature $u_g$ and the output attention weight vector $a_t$ yield the similarity matrix δ as a function:

$$\delta_t = \frac{u_g^{\mathsf T} a_t}{\lVert u_g \rVert \, \lVert a_t \rVert}$$
5) From the hidden layer vectors $h_1,\dots,h_n$ and the AuGRU fusion weight vector δ, the image genomics features are output, expressed as:

$$u' = \sum_{t=1}^{n} \delta_t h_t$$
6) The image genomics features are calculated and analysed through the softmax layer to obtain the efficacy prediction result of neoadjuvant chemotherapy for the breast tumor; on this basis, the efficacy prediction result is compared with, analysed against and verified by the pathological result, to assist in formulating an individualized, accurate treatment scheme.
According to the technical scheme, deep learning is used to achieve the effective fusion of the high-dimensional unstructured magnetic resonance image data and the highly specific structured genomics data, realizing efficient and accurate mining of the data information, effectively improving the efficiency and accuracy of automatically predicting the neoadjuvant efficacy, effectively reducing over-treatment of breast cancer and the side effects of the chemotherapy process for patients with chemotherapy-insensitive breast cancer, and providing reliable guidance for the formulation of individualized, accurate chemotherapy schemes.
In one embodiment, as shown in fig. 7, a system for predicting neoadjuvant chemotherapy efficacy based on image genomics is provided, the system comprising:
the data acquisition module 1 is used for acquiring exosome genomics samples of patients who adopt new auxiliary chemotherapy before an operation and corresponding dynamic magnetic resonance breast images; the exosome genomics samples comprise puncture tumor tissue samples and tumor peripheral blood samples of each round of neoadjuvant chemotherapy;
the image feature extraction module 2 is used for respectively inputting the dynamic magnetic resonance breast image into a preset image omics feature extraction model and a preset convolution neural network model to obtain corresponding image omics features and image molecular classification features;
the gene characteristic extraction module 3 is used for carrying out transcriptome sequencing on the exosome genomics sample to obtain the genomics characteristics;
and the curative effect prediction module 4 is used for inputting the image omics characteristics, the image molecular typing characteristics and the genomics characteristics into a preset bidirectional threshold cyclic neural network model for curative effect prediction to obtain a curative effect prediction result.
For specific limitations of the system for predicting the effect of neoadjuvant chemotherapy based on image genomics, reference may be made to the above limitations of the method for predicting the effect of neoadjuvant chemotherapy based on image genomics, which are not described herein again. The modules in the system for predicting the effect of neoadjuvant chemotherapy based on image genomics can be realized in whole or in part by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 8 shows an internal structure diagram of a computer device in one embodiment, and the computer device may be a terminal or a server. As shown in fig. 8, the computer apparatus includes a processor, a memory, a network interface, a display, and an input device, which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize the image genomics-based neoadjuvant chemotherapy curative effect prediction method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those of ordinary skill in the art that the architecture shown in fig. 8 is merely a block diagram of a portion of the architecture associated with aspects of the present application, and is not intended to limit the computing devices to which aspects of the present application may be applied; a particular computing device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the steps of the above method being performed when the computer program is executed by the processor.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method.
To sum up, the image genomics-based neoadjuvant chemotherapy efficacy prediction method, system, computer device and storage medium provided by the embodiments of the present invention realize a technical scheme of: obtaining an exosome genomics sample and a corresponding dynamic magnetic resonance breast image of a patient receiving neoadjuvant chemotherapy before surgery; inputting the dynamic magnetic resonance breast image into a preset image omics feature extraction model and a preset convolutional neural network model respectively to obtain the corresponding image omics features and image molecular typing features; performing transcriptome sequencing on the exosome genomics sample to obtain the genomics features; and inputting the image omics features, the image molecular typing features and the genomics features into a preset bidirectional threshold recurrent neural network model for efficacy prediction to obtain an efficacy prediction result. The method uses deep learning to effectively fuse the high-dimensional unstructured magnetic resonance image data and the highly specific structured genomics data, achieving efficient and accurate mining of the data information, effectively improving the efficiency and accuracy of automatically predicting the neoadjuvant chemotherapy efficacy, reducing over-treatment and the chemotherapy side effects suffered by chemotherapy-insensitive patients, and providing reliable guidance for individualized, accurate chemotherapy of breast cancer.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment is described with emphasis on its differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is brief, and reference may be made to the corresponding parts of the method embodiment. It should be noted that the technical features of the above embodiments can be combined arbitrarily; for the sake of brevity, not all possible combinations of the technical features are described, but as long as there is no contradiction between the combinations of technical features, they should be considered to be within the scope of this specification.
The above-mentioned embodiments only express some preferred embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, without departing from the technical principle of the present invention, several improvements and substitutions can be made, and these improvements and substitutions should also be regarded as the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the protection scope of the claims.

Claims (10)

1. A neoadjuvant chemotherapy curative effect prediction method based on image genomics is characterized by comprising the following steps:
acquiring an exosome genomics sample of a patient adopting new auxiliary chemotherapy before an operation and a corresponding dynamic magnetic resonance mammary gland image; the exosome genomics samples comprise puncture tumor tissue samples and tumor peripheral blood samples of each round of neoadjuvant chemotherapy;
respectively inputting the dynamic magnetic resonance breast image into a preset image omics feature extraction model and a preset convolution neural network model to obtain corresponding image omics features and image molecular classification features;
carrying out transcriptome sequencing on the exosome genomics sample to obtain genomics characteristics;
and inputting the image omics characteristics, the image molecular typing characteristics and the genomics characteristics into a preset bidirectional threshold cyclic neural network model for efficacy prediction to obtain an efficacy prediction result.
2. The method for predicting the efficacy of neoadjuvant chemotherapy based on image genomics according to claim 1, wherein the pre-defined image genomics feature extraction model comprises a tumor location extraction module, a spatial structure extraction module, a spatial signal extraction module, and a tower-type multi-scale dense residual error network connected in sequence.
3. The method of image genomics-based neoadjuvant chemotherapy efficacy prediction of claim 2, wherein the tower multi-scale dense residual network comprises a plurality of multi-scale dense residual blocks; the multi-scale dense residual block consists of tower type convolution kernels with three different sizes; the output of the multi-scale dense residual block is represented as:
$$L_n = L_{n-1} + \big[\, F^{(i)}_{5\times 5},\ F^{(i)}_{3\times 3},\ F^{(i)}_{1\times 1} \,\big]$$

in the formula,

$$F^{(i)}_{5\times 5} = C^{(i)}_{5\times 5}(L_{n-1}), \qquad F^{(i)}_{3\times 3} = C^{(i)}_{3\times 3}(L_{n-1}), \qquad F^{(i)}_{1\times 1} = C^{(i)}_{1\times 1}(L_{n-1})$$

wherein $L_n$ and $L_{n-1}$ respectively represent the outputs of the nth and the (n-1)th multi-scale dense residual blocks; $C^{(i)}_{5\times 5}$ represents a 5 × 5 convolutional layer of the ith layer; $C^{(i)}_{3\times 3}$ represents a 3 × 3 convolutional layer of the ith layer; $C^{(i)}_{1\times 1}$ represents a 1 × 1 convolutional layer of the ith layer; [ … ] indicates a concat join operation.
4. The method for predicting neoadjuvant chemotherapy efficacy based on image genomics according to claim 2, wherein the preset image omics feature extraction model is obtained by:

acquiring dynamic magnetic resonance breast images of neoadjuvant chemotherapy patients who have completed a preset number of treatment courses before surgery, and constructing a neoadjuvant chemotherapy efficacy prediction image data set; the neoadjuvant chemotherapy patients who have completed a preset number of treatment courses before surgery comprise patients who achieved pathological complete response and patients who did not; the dynamic magnetic resonance breast images comprise contrast-enhanced magnetic resonance images of the whole breast and both axillae;

performing spatial geometric analysis on each dynamic contrast-enhanced breast magnetic resonance image to obtain geometric objects of different dimensions; the geometric objects comprise three-dimensional space vectors, two-dimensional space-time vectors, and one-dimensional time signal vectors;

inputting the geometric objects of different dimensions into the tumor position extraction module for high-dimensional image scaling and edge pixel detection to obtain tumor position features;

inputting the geometric objects of different dimensions into the spatial structure extraction module for weakly supervised clustering of same-dimension spatial images to obtain multi-dimensional spatial image superpixel features and multi-dimensional spatial image supervoxel features;

inputting the geometric objects of different dimensions into the spatial signal extraction module for weakly supervised clustering of cross-dimension spatial signals to obtain image signal superpixel features and image signal supervoxel features;

determining the number of hidden layers and the number of hidden-layer neurons of the tower-type multi-scale dense residual network according to the tumor position features, the multi-dimensional spatial image superpixel and supervoxel features, and the image signal superpixel and supervoxel features, and extracting correlation features through the tower-type multi-scale dense residual network to obtain image correlation features;

and calculating a learning error according to a preset loss function and the image correlation features, and performing back-propagation training according to the learning error to obtain the preset image omics feature extraction model.
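The final training step above (compute a learning error from a preset loss function, then update by back-propagation) follows the generic gradient-descent loop. The patent does not disclose its actual network or loss, so this sketch uses a toy linear model with a mean-squared-error loss purely for illustration:

```python
import numpy as np

# Toy stand-in for "learning error + back-propagation training":
# a one-layer linear model fitted with 0.5 * MSE loss.
rng = np.random.default_rng(42)
X = rng.standard_normal((32, 6))   # 32 samples of 6 image features
w_true = rng.standard_normal(6)
y = X @ w_true                     # synthetic regression targets

w = np.zeros(6)
lr = 0.05
for _ in range(500):
    pred = X @ w
    error = pred - y               # the "learning error" under the loss
    grad = X.T @ error / len(X)    # gradient of 0.5 * MSE w.r.t. w
    w -= lr * grad                 # back-propagation (gradient) update

mse = float(np.mean((X @ w - y) ** 2))
print(mse)
```

With a small enough learning rate the loop converges toward the least-squares solution; the same error-then-update pattern is what any back-propagation training of the feature extraction model would follow.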
5. The method for predicting neoadjuvant chemotherapy efficacy based on image genomics according to claim 1, wherein the step of performing transcriptome sequencing on the exosome genomics samples to obtain genomics features comprises:

performing transcriptome sequencing on the puncture tumor tissue samples and the peripheral blood samples simultaneously to obtain the corresponding genomics features; the transcriptome sequencing comprises whole-genome sequencing, whole-transcriptome sequencing, and exosome transcriptome sequencing; the genomics features comprise gene mutation differences, gene expression differences, and circulating exosome gene expression characteristics.
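The patent does not specify how the "gene expression differences" are quantified; one common statistic for paired samples is a pseudocount-stabilized log2 fold change, sketched here with made-up read counts:

```python
import numpy as np

# Illustrative only: log2 fold change of expression between a tumor-tissue
# sample and a peripheral-blood sample for four hypothetical genes.
tumor = np.array([120.0, 30.0, 5.0, 200.0])   # made-up read counts
blood = np.array([60.0, 30.0, 20.0, 25.0])

# +1 pseudocount avoids division by zero for unexpressed genes.
log2fc = np.log2((tumor + 1) / (blood + 1))
print(np.round(log2fc, 2))
```

Positive values indicate genes more highly expressed in tumor tissue, negative values the reverse, and equal counts give exactly zero.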
6. The method for predicting neoadjuvant chemotherapy efficacy based on image genomics according to claim 1, wherein the preset bidirectional gated recurrent neural network model comprises an input layer, a bidirectional activation-gated recurrent unit, an attention-based recurrent neural network (RNN), a fusion module, and a softmax layer connected in sequence; the bidirectional activation-gated recurrent unit is a bidirectional gated recurrent unit into which an activation gate is introduced.
7. The method according to claim 6, wherein the step of inputting the image omics features, the image molecular typing features, and the genomics features into the preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result comprises:

inputting the image molecular typing features and a preset convolution weight vector received by the input layer into the bidirectional activation-gated recurrent unit for bidirectional hidden-state calculation to obtain a left hidden output and a right hidden output, and calculating a hidden-layer vector from the left hidden output and the right hidden output;

inputting the hidden-layer vector and the image omics features received by the input layer into the attention-based RNN for feature extraction to obtain a corresponding first mapping feature vector and second mapping feature vector respectively, generating an attention weight vector from the first and second mapping feature vectors, and performing a similarity calculation between the attention weight vector and the genomics features received by the input layer to obtain a fusion weight vector;

inputting the fusion weight vector and the hidden-layer vector into the fusion module to obtain image genomics features;

and inputting the image genomics features into the softmax layer to obtain the efficacy prediction result.
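The bidirectional hidden-state computation and the attention-style fusion can be sketched in plain NumPy. Everything below is an illustrative stand-in: the patent does not disclose the activation-gate mechanism, the exact attention form, or the fusion module, so a standard GRU cell, an element-wise similarity weighting, and a random softmax classifier head are used:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Wr, Wh):
    """One standard gated recurrent unit step (no activation gate)."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                          # update gate
    r = sigmoid(Wr @ hx)                          # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))
    return (1 - z) * h + z * h_tilde

def bidirectional_gru(seq, params):
    """Left and right passes over the sequence; concat of final states
    plays the role of the hidden-layer vector."""
    Wz, Wr, Wh = params
    d = Wz.shape[0]
    fwd, bwd = np.zeros(d), np.zeros(d)
    for x in seq:                                 # left hidden output
        fwd = gru_step(x, fwd, Wz, Wr, Wh)
    for x in reversed(seq):                       # right hidden output
        bwd = gru_step(x, bwd, Wz, Wr, Wh)
    return np.concatenate([fwd, bwd])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d_in, d_h = 4, 3
params = tuple(rng.standard_normal((d_h, d_h + d_in)) * 0.5 for _ in range(3))
seq = [rng.standard_normal(d_in) for _ in range(5)]  # typing-feature sequence
hidden = bidirectional_gru(seq, params)

# Attention-style fusion with a genomics feature vector, then softmax.
genomics = rng.standard_normal(2 * d_h)
attn = softmax(hidden * genomics)     # similarity-based fusion weights
fused = attn * hidden                 # fused image-genomics features
probs = softmax(rng.standard_normal((2, 2 * d_h)) @ fused)
print(probs)                          # two-class efficacy probabilities
```

The concatenation of the forward and backward final states mirrors the claim's "calculating a hidden-layer vector from the left hidden output and the right hidden output"; the softmax at the end guarantees the two class scores form a probability distribution.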
8. A system for predicting neoadjuvant chemotherapy efficacy based on image genomics, the system comprising:

a data acquisition module, configured to acquire exosome genomics samples and corresponding dynamic magnetic resonance breast images of patients receiving neoadjuvant chemotherapy before surgery; the exosome genomics samples comprise puncture tumor tissue samples and peripheral blood samples from each round of neoadjuvant chemotherapy;

an image feature extraction module, configured to input the dynamic magnetic resonance breast images into a preset image omics feature extraction model and a preset convolutional neural network model respectively to obtain corresponding image omics features and image molecular typing features;

a gene feature extraction module, configured to perform transcriptome sequencing on the exosome genomics samples to obtain genomics features;

and an efficacy prediction module, configured to input the image omics features, the image molecular typing features, and the genomics features into a preset bidirectional gated recurrent neural network model for efficacy prediction to obtain an efficacy prediction result.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202211417630.XA 2022-11-11 2022-11-11 New auxiliary chemotherapy curative effect prediction method and system based on image genomics Pending CN115641957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211417630.XA CN115641957A (en) 2022-11-11 2022-11-11 New auxiliary chemotherapy curative effect prediction method and system based on image genomics

Publications (1)

Publication Number Publication Date
CN115641957A true CN115641957A (en) 2023-01-24

Family

ID=84948633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211417630.XA Pending CN115641957A (en) 2022-11-11 2022-11-11 New auxiliary chemotherapy curative effect prediction method and system based on image genomics

Country Status (1)

Country Link
CN (1) CN115641957A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580841A (en) * 2023-07-12 2023-08-11 北京大学 Disease diagnosis device, device and storage medium based on multiple groups of study data
CN116580841B (en) * 2023-07-12 2023-11-10 北京大学 Disease diagnosis device, device and storage medium based on multiple groups of study data
CN117036762A (en) * 2023-08-03 2023-11-10 北京科技大学 Multi-mode data clustering method
CN117036762B (en) * 2023-08-03 2024-03-22 北京科技大学 Multi-mode data clustering method

Similar Documents

Publication Publication Date Title
US10366491B2 (en) Deep image-to-image recurrent network with shape basis for automatic vertebra labeling in large-scale 3D CT volumes
US10282588B2 (en) Image-based tumor phenotyping with machine learning from synthetic data
US10853449B1 (en) Report formatting for automated or assisted analysis of medical imaging data and medical diagnosis
CN115641957A (en) New auxiliary chemotherapy curative effect prediction method and system based on image genomics
Shi et al. Automatic segmentation of cardiac magnetic resonance images based on multi-input fusion network
Ma et al. ATFE-Net: Axial Transformer and Feature Enhancement-based CNN for ultrasound breast mass segmentation
Guo et al. Msanet: multiscale aggregation network integrating spatial and channel information for lung nodule detection
Li et al. Study on strategy of CT image sequence segmentation for liver and tumor based on U-Net and Bi-ConvLSTM
Nalepa et al. Deep learning automates bidimensional and volumetric tumor burden measurement from MRI in pre-and post-operative glioblastoma patients
Zhao et al. Effective Combination of 3D-DenseNet's Artificial Intelligence Technology and Gallbladder Cancer Diagnosis Model
CN115564756A (en) Medical image focus positioning display method and system
Khagi et al. 3D CNN based Alzheimer’ s diseases classification using segmented Grey matter extracted from whole-brain MRI
Cao et al. Multi-target segmentation of pancreas and pancreatic tumor based on fusion of attention mechanism
Wang et al. Multiscale feature fusion for skin lesion classification
Pacal et al. Enhancing Skin Cancer Diagnosis Using Swin Transformer with Hybrid Shifted Window-Based Multi-head Self-attention and SwiGLU-Based MLP
Zhou et al. MCFA-UNet: multiscale cascaded feature attention U-Net for liver segmentation
Jeya Sundari et al. Factorization‐based active contour segmentation and pelican optimization‐based modified bidirectional long short‐term memory for ovarian tumor detection
Crasta et al. A novel Deep Learning architecture for lung cancer detection and diagnosis from Computed Tomography image analysis
Goel et al. Improving YOLOv6 using advanced PSO optimizer for weight selection in lung cancer detection and classification
CN111783796A (en) PET/CT image recognition system based on depth feature fusion
Zhao et al. NFMPAtt-Unet: Neighborhood fuzzy c-means multi-scale pyramid hybrid attention unet for medical image segmentation
Rajeashwari et al. Enhancing pneumonia diagnosis with ensemble-modified classifier and transfer learning in deep-CNN based classification of chest radiographs
CN113889235A (en) Unsupervised feature extraction system for three-dimensional medical image
CN112037167A (en) Target area determination system based on image omics and genetic algorithm
Peng et al. Spider-Net: High-resolution multi-scale attention network with full-attention decoder for tumor segmentation in kidney, liver and pancreas

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination