CN115619647A - Cross-modal super-resolution reconstruction method based on variational inference - Google Patents
- Publication number: CN115619647A (application CN202211636769.3A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G16B30/00: ICT specially adapted for sequence analysis involving nucleotides or amino acids
- G16B40/30: Unsupervised data analysis (bioinformatics-related machine learning)
Abstract
The invention discloses a cross-modal super-resolution reconstruction method based on variational inference, which comprises the steps of: obtaining low-resolution spatial sequencing data and a high-resolution staining image; extracting sequencing features from the low-resolution spatial sequencing data and constructing a corresponding matrix A; extracting image features from the high-resolution staining image and constructing a corresponding matrix W; extracting an environmental factor with a spatial information extraction network from the sequencing features and the matrix A; and, from the environmental factor, the image features, and the matrix W, realizing super-resolution reconstruction of the low-resolution spatial sequencing data with a cross-modal super-resolution variational inference network. The disclosed algorithm can analyze the cross-modal fusion and complementarity of the high-resolution image signal and the low-resolution sequencing signal, breaks through the technical bottleneck of a single modality, and realizes higher-throughput data fusion, higher-precision super-resolution, and more reliable modal analysis.
Description
Technical Field
The invention relates to the technical field of super-resolution reconstruction, and in particular to a cross-modal super-resolution reconstruction method based on variational inference.
Background
The spatial position of the transcriptome has great academic value for the deep analysis of complex physiological functions and pathological mechanisms. At present, spatial transcriptomics provides new technical support for research in fields such as tumor heterogeneity, gastrulation, and the pathogenesis of Alzheimer's disease. Super-resolution reconstruction of spatial transcriptome information plays a crucial role in studying the fine expression patterns of high-throughput genes within tissue and in exploring more complex gene-gene joint expression relationships, thereby assisting a deeper understanding of life processes.
However, spatial transcriptomics still faces a contradiction between spatial resolution and sequencing throughput. Two families of methods have been proposed for this contradiction: cross-modal deconvolution through single-cell omics, and super-resolution reconstruction of spatial omics through spatial information.
The former is limited by the lack of spatial information in single-cell gene-expression sequencing data: it achieves super-resolution only at the cell level, not at the spatial level, can hardly reflect the spatial expression heterogeneity of single cells, cannot provide fine single-cell spatial expression patterns, and cannot be widely applied to other spatial omics technologies. The latter infers the spatial expression of genes more finely, but its data scalability across different platforms is poor, its resolution improvement is modest, and it does not exploit additional high-resolution cross-modal information.
Therefore, given that existing methods cannot fully exploit the spatial information of large amounts of unlabeled transcriptomes for super-resolution reconstruction, how to provide a cross-modal super-resolution reconstruction method that overcomes these defects is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a cross-modal super-resolution reconstruction method based on variational inference, which aims to perform super-resolution reconstruction of low-resolution spatial sequencing data using the spatial information of large amounts of unlabeled transcriptomes together with the additional information generated alongside spatial transcriptome sequencing.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cross-modal super-resolution reconstruction method based on variational inference comprises the following steps:
acquiring low-resolution spatial sequencing data and a high-resolution staining image;
extracting sequencing features according to the low-resolution spatial sequencing data, and constructing a corresponding matrix A; extracting image features according to the high-resolution staining image, and constructing a corresponding matrix W;
extracting an environmental factor by utilizing a spatial information extraction network according to the sequencing features and the matrix A;
and realizing super-resolution reconstruction of the low-resolution spatial sequencing data by utilizing a cross-modal super-resolution variational inference network according to the environmental factor, the image features, and the matrix W.
Preferably, the spatial information extraction network extracts the environmental factor according to the following formula,
where q is a probability density function; θ_t denotes the network parameters of the spatial information extraction network; Z denotes the environmental factor of the low-resolution sequencing data, which determines the average sequencing level of each sequencing point at low resolution; X denotes the low-resolution spatial sequencing features; N is the total number of sampling points; n is the index of the current sampling point; and φ denotes the parameters of the variational approximate distribution q.
Preferably, the cross-modal super-resolution variational inference network maps the environmental factor and the image features to the parameter space of a negative binomial distribution; the mapping relationship is as follows:
where the two mapped quantities are, respectively, the total count and the logarithmic success probability of the negative binomial from which the super-resolution sequencing data are generated; K is the super-resolution magnification; θ_r denotes the network parameters of the cross-modal super-resolution variational inference network; Y denotes the image features; and Z^(n) denotes the environmental factor at sampling point n.
Preferably, super-resolution spatial sequencing features are extracted by sampling from the negative binomial distribution constructed from the two mapped parameters, and super-resolution reconstruction of the low-resolution spatial sequencing data is performed according to the extracted super-resolution spatial sequencing features, as follows:
where the sampled quantity is the super-resolution spatial sequencing feature of the corresponding sampling point, and NB denotes a negative binomial distribution.
Preferably, the network parameters of the spatial information extraction network and the cross-modal super-resolution variational inference network are optimized by maximizing the evidence lower bound; the optimization formula is as follows:
where the optimized quantity involves the super-resolution spatial sequencing features, Z represents the environmental factor of the low-resolution spatial sequencing data, X represents the low-resolution spatial sequencing features, Y represents the high-resolution image features, θ_r represents the network parameters of the cross-modal super-resolution variational inference network, θ_t represents the network parameters of the spatial information extraction network, E_p denotes the expectation over the probability density function p, E_q denotes the expectation over the probability density function q, and KL denotes the divergence.
Further, N is the total number of sampling points, n is the index of the current sampling point, K is the super-resolution multiple, C is a constant irrelevant to the optimization, and the remaining term is a matrix built on the super-resolution spatial sequencing features.
In a preferred embodiment of the method of the invention, this matrix is obtained according to the following formula,
where Sigmoid(·) is the sigmoid function, i.e. Sigmoid(x) = 1/(1 + e^(-x)); e is the base of the natural logarithm; a bar denotes the average value; m and n denote the m-th and n-th sampling points; i denotes the i-th of the K super-resolution points contained in the n-th sampling point; and j denotes the j-th of the K super-resolution points contained in the m-th sampling point.
Preferably, the evidence lower bound is optimized using gradient descent until the network converges; the formula is as follows:
where lr is the learning rate and the gradient term is the derivative of the evidence lower bound with respect to the network parameters (θ_t, θ_r).
Preferably, the extracted image features are obtained by the following formula:
the sequencing data is extracted and obtained according to the following formula:
in the formula, N is the serial number of the current sampling point.
Preferably, the matrix A is obtained according to the following formula:
where N(n) is the one-hop (first-order) neighborhood of point s^(n), λ² serves as a size factor that regulates the kernel width, and m, n denote the m-th and n-th sampling points.
Preferably, the matrix W is obtained according to the following formula:
where i, j = 0, ..., K-1 and m, n = 0, ..., N-1.
According to the above technical scheme, compared with the prior art, the invention discloses and provides a cross-modal super-resolution reconstruction method based on variational inference.
Specifically, the method makes full use of the inexpensive high-resolution staining image generated alongside the sequencing process, designs a self-supervised super-resolution algorithm based on a graph neural network for intelligent sensing of cross-modal biological signals, analyzes the cross-modal fusion and complementarity of high-resolution image signals and low-resolution sequencing signals, and achieves higher-precision super-resolution and more reliable fusion of modal analysis. In addition, with the data support provided by the original low-resolution high-throughput sequencing technology and the support of parallel computing hardware, super-resolution reconstruction is performed sequentially on each sequencing channel, realizing the analysis of high-throughput modal data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flow chart of a cross-modal super-resolution reconstruction method provided by the present invention;
FIG. 2 is a schematic diagram of super-resolution sequencing data generation provided by the present invention;
FIG. 3 is a schematic diagram of high resolution image feature extraction provided by the present invention;
FIG. 4 is a schematic diagram of the generation of matrix A and matrix W provided by the present invention;
FIG. 5 is a schematic diagram of a cross-modal super-resolution variational inference network provided by the present invention;
FIG. 6 is a comparison of the effect of the algorithm provided by the present invention and the conventional method;
FIG. 7 is a graph of the mean square error of the algorithm provided by the present invention versus a conventional method;
FIG. 8 is a comparison chart of the Pearson correlation between the algorithm provided by the present invention and the conventional method.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, the embodiment of the invention discloses a cross-modal super-resolution reconstruction method based on variational inference, which comprises the following steps:
acquiring low-resolution space sequencing data and a high-resolution staining image;
extracting sequencing features according to the low-resolution spatial sequencing data, and constructing a corresponding matrix A; extracting image features according to the high-resolution staining image, and constructing a corresponding matrix W;
extracting an environment factor by utilizing a space information extraction network according to the sequencing characteristics and the matrix A;
and according to the environmental factor, the image characteristics and the matrix W, utilizing a cross-modal super-resolution variational inference network to realize super-resolution reconstruction of low-resolution spatial sequencing data.
The method models the form of gene expression of the sub-points within each sampling point, transfers information from other modalities to the spatial omics sequencing information, mines the structure of each modality in its spatial expression, and constructs image structure information based on the staining image, thereby realizing cross-modal signal transfer and information mining.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
First, low-resolution spatial sequencing data and a high-resolution staining image are acquired; the high-resolution staining image is generated during the low-resolution spatial sequencing process. For ease of understanding, the generation of the low-resolution spatial sequencing data and the high-resolution staining image is described below, as shown in FIG. 2.
Specifically, the low-resolution spatial sequencing data has N sampling points, and each sampling point is super-resolved by a factor of K to obtain K sub-points, where n is the index of the current sampling point. The sequencing data of the K sub-points form the super-resolution features, and the corresponding image is the high-resolution staining image, whose features are denoted Y. The low-resolution spatial sequencing features X are then generated from the sub-point features combined with the environmental factor of each sampling point; the spatial sequencing data of each sampling point is obtained through the following formula:
where agg is an integration function whose form is determined by the nature of the spatial sequencing features X: if the features are counts, it is the summation sum(·); if they are features obtained by Principal Component Analysis (PCA), it is the average mean(·).
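As a concrete illustration of the agg function just described, the sketch below aggregates the K sub-point features of one sampling point, summing for count data and averaging for PCA-derived features; the array shapes are illustrative assumptions.

```python
import numpy as np

def aggregate_subpoints(sub_features, mode="sum"):
    """Aggregate the K sub-point features of one sampling point into a
    single low-resolution feature: summation for raw counts, averaging
    for PCA-style features, as the agg function in the text describes."""
    sub_features = np.asarray(sub_features)  # shape (K, G) for G genes
    if mode == "sum":
        return sub_features.sum(axis=0)
    if mode == "mean":
        return sub_features.mean(axis=0)
    raise ValueError("mode must be 'sum' or 'mean'")

# Toy example: K = 4 sub-points, 3 genes
sub = np.array([[1., 0., 2.],
                [0., 1., 1.],
                [2., 0., 0.],
                [1., 1., 1.]])
counts_low = aggregate_subpoints(sub, "sum")   # count-like data: sum
pca_low = aggregate_subpoints(sub, "mean")     # PCA features: mean
```
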
After the low-resolution spatial sequencing data X and the high-resolution staining image Y are obtained, sequencing features are extracted from X and the corresponding matrix A is constructed; image features are extracted from Y and the corresponding matrix W is constructed.
for extracting image features, as shown in fig. 3, the acquired high-resolution staining image is input to a pre-trained multi-layer perceptron and obtained by the following formula:
for extracting sequencing data, it was obtained according to the following formula:
wherein N is the number of sampling points.
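A hedged sketch of the image feature extraction step described above: each sampling point's image patch is passed through a small multi-layer perceptron. The patent uses a pre-trained MLP whose architecture is not given in this text; the two-layer network and its weights (W1, b1, W2, b2) are illustrative assumptions.

```python
import numpy as np

def mlp_image_features(patches, W1, b1, W2, b2):
    """Flatten each high-resolution image patch and pass it through a
    two-layer perceptron to obtain per-sampling-point image features.
    The real network is pre-trained; these weights are placeholders."""
    X = patches.reshape(patches.shape[0], -1)   # (num_patches, pixels)
    H = np.maximum(X @ W1 + b1, 0.0)            # hidden layer with ReLU
    return H @ W2 + b2                          # (num_patches, feature_dim)
```
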
Further, a matrix A based on the sequencing features and a matrix W based on the image features are constructed, as shown in FIG. 4,
the matrix A corresponding to the sequencing features is obtained according to the following formula:
where N(n) is the one-hop (first-order) neighborhood of point s^(n), λ² serves as a size factor that regulates the kernel width, and m, n denote the m-th and n-th sampling points.
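The exact expression for matrix A appears as a formula image in the original patent and is not reproduced in this text; the sketch below assumes a Gaussian kernel over each point's first-order spatial neighborhood N(n), with the squared size factor regulating the kernel width, matching the quantities defined above.

```python
import numpy as np

def build_matrix_A(coords, lam=1.0, n_neighbors=6):
    """Assumed construction of the sequencing adjacency matrix A:
    for each sampling point n, its n_neighbors closest points play the
    role of the first-order neighborhood N(n), and edge weights follow
    a Gaussian kernel exp(-d^2 / lam^2) on squared spatial distance."""
    coords = np.asarray(coords, dtype=float)          # (N, 2) positions
    N = coords.shape[0]
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    A = np.zeros((N, N))
    for n in range(N):
        nbrs = np.argsort(d2[n])[1:n_neighbors + 1]   # skip the point itself
        A[n, nbrs] = np.exp(-d2[n, nbrs] / lam ** 2)
    return A
```
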
The matrix W corresponding to the image features is obtained according to the following formula:
where i, j = 0, ..., K-1 and m, n = 0, ..., N-1.
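The precise formula for matrix W is likewise not reproduced here; the following sketch assumes W compares the image features of super-resolution sub-points through a sigmoid of their mean-centered inner-product similarity, consistent with the Sigmoid and averaging operations the patent's description mentions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_matrix_W(img_feats):
    """Assumed construction of the image-feature matrix W.  img_feats
    has shape (N, K, D): N sampling points, each with K super-resolution
    sub-points carrying D-dimensional image features.  Entry
    (n*K + i, m*K + j) compares sub-point i of spot n with sub-point j
    of spot m via a sigmoid of mean-centered feature similarity."""
    N, K, D = img_feats.shape
    flat = img_feats.reshape(N * K, D)
    flat = flat - flat.mean(axis=0)   # subtract the average feature
    sim = flat @ flat.T               # inner-product similarity
    return sigmoid(sim)               # (N*K, N*K), entries in (0, 1)
```
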
Second, as shown in FIG. 5, the environmental factor is extracted through the spatial information extraction network from the extracted sequencing features X and the constructed matrix A, as follows:
where q is a probability density function; θ_t denotes the network parameters of the spatial information extraction network; Z denotes the environmental factor of the low-resolution sequencing data, which determines the average sequencing level of each sequencing point at low resolution; X denotes the low-resolution spatial sequencing features; N is the total number of sampling points; n is the index of the current sampling point; and φ denotes the parameters of the variational approximate distribution q.
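A minimal sketch of such a variational encoder q(Z | X) follows. The single graph-propagation step over matrix A and the diagonal-Gaussian posterior with reparameterization are assumptions, since the patent does not specify the network architecture in this text; W1 and W2 are hypothetical weight matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_info_encoder(X, A, W1, W2):
    """Sketch of the spatial information extraction network: propagate
    sequencing features X over the spatial graph A (row-normalized
    neighbor averaging plus ReLU), then map to the mean and log-variance
    of a Gaussian variational posterior over the environmental factor Z,
    and sample Z with the reparameterization trick."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    H = np.maximum((A / deg) @ X @ W1, 0.0)     # one graph-conv-like step
    params = H @ W2                              # (N, 2 * latent_dim)
    mu, logvar = np.split(params, 2, axis=1)
    eps = rng.standard_normal(mu.shape)
    Z = mu + np.exp(0.5 * logvar) * eps          # reparameterized sample
    return Z, mu, logvar
```
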
Then, according to the environmental factor, the image features, and the matrix W, the cross-modal super-resolution variational inference network is used to realize super-resolution reconstruction of the low-resolution spatial sequencing data at the spatial level.
The extracted environmental factor and the high-resolution features are fused and input to the cross-modal super-resolution variational inference network. Because the matrix W describes the differences between sampling points under each super-resolution condition, it is used to optimize the network, encouraging the super-resolution sequencing features to be consistent with the high-resolution image: similar image features should lead to similar super-resolution sequencing features, and image features with large differences should lead to sequencing features with large differences. The specific super-resolution reconstruction process is shown in FIG. 5:
The cross-modal super-resolution variational inference network maps the environmental factor and the image features to the parameter space of a Negative Binomial (NB) distribution; the mapping relationship is as follows:
where the two mapped quantities are, respectively, the total count and the logarithmic success probability of the negative binomial from which the super-resolution sequencing data are generated; K is the super-resolution magnification; θ_r denotes the network parameters of the cross-modal super-resolution variational inference network; Y denotes the image features; and Z^(n) denotes the environmental factor at sampling point n.
Through the cross-modal information extraction network, maximum likelihood is performed on the low-resolution sequencing features and the high-resolution matrix, so that the high-resolution sequencing features satisfy two conditions: the aggregation of the K high-resolution features within each sampling point n equals the low-resolution sequencing features, and spatially adjacent high-resolution spatial sequencing matrices are close to the image feature matrix. In this way, the distribution parameters of the high-resolution sequencing features, namely the total count and the success probability, are optimized.
Further, after the optimization is finished, sampling is performed under the NB distribution constructed from these parameters; super-resolution spatial sequencing features are extracted according to the following formula, and super-resolution reconstruction of the low-resolution spatial sequencing data is performed according to the extracted features:
where the sampled quantity is the super-resolution spatial sequencing feature of the corresponding sampling point, and NB denotes a negative binomial distribution.
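The decoding and sampling step can be sketched as follows. The linear weights Wr and Wp are hypothetical stand-ins for the cross-modal inference network, and NumPy's negative-binomial sampler plays the role of the NB distribution; the fusion by concatenation is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_and_sample(Z, Y, Wr, Wp, K=4):
    """Broadcast the environmental factor Z (one row per sampling point)
    to its K super-resolution sub-points, fuse it with the image
    features Y (N*K rows), map to NB parameters (total count r, success
    probability p), and sample super-resolution sequencing features."""
    Zrep = np.repeat(Z, K, axis=0)                 # (N*K, dz)
    H = np.concatenate([Zrep, Y], axis=1)          # fuse the two modalities
    r = np.exp(np.clip(H @ Wr, -20, 20)).squeeze(-1)   # positive total count
    logits = (H @ Wp).squeeze(-1)
    p = 1.0 / (1.0 + np.exp(-logits))              # success prob in (0, 1)
    return rng.negative_binomial(r, np.clip(p, 1e-6, 1 - 1e-6))
```
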
In one embodiment, according to the matrix W, the variational inference network is optimized by maximizing the evidence lower bound; that is, the network parameters of the spatial information extraction network and the cross-modal super-resolution variational inference network are optimized according to the following formula:
where the optimized quantity involves the super-resolution spatial sequencing features, E_p denotes the expectation over the probability density function p, E_q denotes the expectation over the probability density function q, and KL denotes the divergence.
For the first term, the expression is:
for the second term, the specific expression is:
where X denotes the low-resolution spatial sequencing features, Y denotes the high-resolution image features, Z denotes the environmental factor of the low-resolution sequencing data, N is the total number of sampling points, n is the index of the current sampling point, K is the super-resolution multiple, C is a constant irrelevant to the optimization, and the final term is a matrix built on the super-resolution spatial sequencing features.
where Sigmoid(·) is the sigmoid function, i.e. Sigmoid(x) = 1/(1 + e^(-x)); e is the base of the natural logarithm; a bar denotes the average value; m and n denote the m-th and n-th sampling points; i denotes the i-th of the K super-resolution points contained in the n-th sampling point; and j denotes the j-th of the K super-resolution points contained in the m-th sampling point.
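The divergence term of the evidence lower bound, for a diagonal Gaussian posterior measured against a standard-normal prior, has the familiar closed form sketched below; the standard-normal prior is an assumption, since the prior is not stated in this extracted text.

```python
import numpy as np

def gaussian_kl_standard(mu, logvar):
    """KL(q || p) between the diagonal Gaussian posterior
    q = N(mu, diag(exp(logvar))) and an assumed standard-normal prior p:
    0.5 * sum(exp(logvar) + mu^2 - 1 - logvar)."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
```
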
In one embodiment, the evidence lower bound is optimized using gradient descent until the network converges, as follows:
where lr is the learning rate and the gradient term is the derivative of the evidence lower bound with respect to the network parameters (θ_t, θ_r).
After the network converges, the low-resolution spatial omics sequencing data X, the spatial sequencing connection matrix A, the image features Y, and the image connection matrix W are input in sequence, and super-resolution reconstruction of the low-resolution sequencing data is realized.
Compared with existing methods, the cross-modal super-resolution reconstruction method based on variational inference disclosed by the invention has several advantages, specifically:
(1) Super-resolution expression of the high-throughput expression information is corrected with auxiliary information, without excessive dependence on high-resolution, low-throughput modal information; this avoids over-interpretation of precious data and ensures the authenticity of the super-resolution results;
(2) As a self-supervised algorithm, unlike traditional super-resolution algorithms, it can realize super-resolution reconstruction without high-resolution sequencing results;
(3) The method makes full use of the high-resolution, inexpensive image features generated during the sequencing process, uses super-resolution to analyze the low-resolution spatial sequencing data, designs corresponding loss functions for different modalities, and provides an integrated platform for analyzing spatial multi-omics.
On the basis of the super-resolution reconstruction algorithm provided by this application, super-resolution experiments with quantifiable reconstruction results can be further designed, giving more reliable quantitative analysis rather than simple qualitative description compared with other existing algorithms. In addition, the reconstruction method disclosed in this application enables precision medicine with super-resolution expression data: it can assist in distinguishing normal-lesion tissue boundaries, finely analyze gene expression gradients at different positions within lesion tissue, and distinguish tissue subclasses that cannot be resolved at low resolution.
In order to verify the effectiveness of the super-resolution reconstruction method, the super-resolution reconstruction method provided by the invention is compared with the traditional super-resolution reconstruction method through simulation experiments.
Traditional super-resolution algorithms fall into two main categories: interpolation methods dominated by spatial information, including BayesSpace, linear interpolation, and Gaussian process interpolation; and regression methods dominated by cross-modal image information, including linear regression, Gaussian process regression, and neural network regression, which fit a regression from image features to sequencing features at the low-resolution sampling points and then apply the trained model to the high-resolution sampling points to predict their sequencing features.
in the simulation experiment, the invention takes the spatial transcriptome sequencing data of human intestinal tissues as an example, the expressions of adjacent 7 sampling points are fused into a whole in an averaging mode to obtain the spatial transcriptome expression with low resolution, and the spatial transcriptome expression is restored to the original resolution by utilizing the invention and the traditional method,
as shown in FIG. 6, for the result display after super-resolution reconstruction by different algorithms, it can be seen from the visualized super-resolution result that the expression features reconstructed by super-resolution of the present invention have clearer boundaries, more accurate reconstruction accuracy, and can more accurately depict high-resolution sequencing information. Meanwhile, the invention also carries out quantitative comparison on the super-resolution result, the comparison result is shown in fig. 7 and fig. 8, and as can be seen from the figure, the super-resolution reconstruction method disclosed by the invention has higher pierce correlation and lower average square error, thereby further explaining that the invention has more accurate reconstruction precision and can more accurately depict high-resolution sequencing information.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A cross-modal super-resolution reconstruction method based on variational inference is characterized by comprising the following steps:
acquiring low-resolution spatial sequencing data and a high-resolution staining image;
extracting sequencing features from the low-resolution spatial sequencing data and constructing a corresponding matrix A; extracting image features from the high-resolution staining image and constructing a corresponding matrix W;
extracting an environmental factor from the sequencing features and the matrix A by using a spatial information extraction network;
and realizing super-resolution reconstruction of the low-resolution spatial sequencing data by using a cross-modal super-resolution variational inference network according to the environmental factor, the image features, and the matrix W.
2. The method for reconstructing cross-modal super-resolution based on variational inference as claimed in claim 1, wherein said spatial information extraction network extracts said environmental factor according to the following formula,
where q is the probability density function, θ_t denotes the network parameters of the spatial information extraction network, Z denotes the environmental factor of the low-resolution spatial sequencing data, X denotes the low-resolution spatial sequencing features, N is the total number of sampling points, n is the index of the current sampling point, and the remaining parameter is that of the variational approximation distribution q.
3. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 1, wherein the cross-modal super-resolution variational inference network maps the environmental factor and the image features to the parameter space of a negative binomial distribution; the mapping relationship is as follows:
where the two outputs denote, respectively, the total success probability of generating the super-resolution sequencing data and its logarithmic probability; K is the super-resolution magnification; θ_r denotes the network parameters of the cross-modal super-resolution variational inference network; the image features and Z^(n), the environmental factor at sampling point n, are its inputs.
4. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 3, wherein super-resolution spatial sequencing features are extracted according to the following formula, and super-resolution reconstruction of the low-resolution spatial sequencing data is performed according to the extracted super-resolution spatial sequencing features:
5. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 1, wherein the network parameters of the spatial information extraction network and the cross-modal super-resolution variational inference network are optimized according to an evidence lower bound, and the optimization formula is as follows:
where the first symbol denotes the super-resolution spatial sequencing features, Z denotes the environmental factor of the low-resolution spatial sequencing data, X denotes the low-resolution spatial sequencing features, Y denotes the high-resolution image features, θ_r denotes the network parameters of the cross-modal super-resolution variational inference network, θ_t denotes the network parameters of the spatial information extraction network, E_p and E_q denote the expectations over the probability density functions p and q, respectively, and the last term denotes the divergence between the two distributions.
6. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 5, wherein the corresponding expression is:
7. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 6, wherein the corresponding quantity is obtained according to the following formula:
where Sigmoid(·) is the sigmoid function, i.e. Sigmoid(x) = 1/(1 + e^(−x)), e is the base of the natural logarithm, the overbar denotes the mean value, m and n index the m-th and n-th sampling points, i indexes the i-th of the K super-resolution points contained in the n-th sampling point, and j indexes the j-th of the K super-resolution points contained in the m-th sampling point.
8. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 5, wherein the evidence lower bound is optimized using gradient descent until the network converges, as given by the following formula:
9. The cross-modal super-resolution reconstruction method based on variational inference as claimed in claim 1, wherein the matrix A is obtained according to the following formula:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211636769.3A CN115619647B (en) | 2022-12-20 | 2022-12-20 | Cross-modal super-resolution reconstruction method based on variational inference
Publications (2)
Publication Number | Publication Date |
---|---|
CN115619647A true CN115619647A (en) | 2023-01-17 |
CN115619647B CN115619647B (en) | 2023-05-09 |
Family
ID=84880038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211636769.3A Active CN115619647B (en) | 2022-12-20 | Cross-modal super-resolution reconstruction method based on variational inference | 2022-12-20
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115619647B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6208347B1 (en) * | 1997-06-23 | 2001-03-27 | Real-Time Geometry Corporation | System and method for computer modeling of 3D objects and 2D images by mesh constructions that incorporate non-spatial data such as color or texture |
JP2007240873A (en) * | 2006-03-08 | 2007-09-20 | Toshiba Corp | Image processor and image display method |
CN111353076A (en) * | 2020-02-21 | 2020-06-30 | 华为技术有限公司 | Method for training cross-modal retrieval model, cross-modal retrieval method and related device |
CN113362230A (en) * | 2021-07-12 | 2021-09-07 | 昆明理工大学 | Reversible flow model image super-resolution method based on wavelet transformation |
CN114331849A (en) * | 2022-03-15 | 2022-04-12 | 之江实验室 | Cross-mode nuclear magnetic resonance hyper-resolution network and image super-resolution method |
CN114708353A (en) * | 2022-06-06 | 2022-07-05 | 中国科学院自动化研究所 | Image reconstruction method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115619647B (en) | 2023-05-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||