CN112331275A - Memory network and attention-based drug relocation calculation method - Google Patents

Memory network and attention-based drug relocation calculation method

Info

Publication number
CN112331275A
CN112331275A
Authority
CN
China
Prior art keywords
drug
disease
neighborhood
drugs
implicit
Prior art date
Legal status
Pending
Application number
CN202011169358.9A
Other languages
Chinese (zh)
Inventor
何洁月
杨新星
龚倬
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202011169358.9A priority Critical patent/CN112331275A/en
Publication of CN112331275A publication Critical patent/CN112331275A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C20/00 Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C20/30 Prediction of properties of chemical compounds, compositions or mixtures
    • G16C20/70 Machine learning, data mining or chemometrics
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H70/00 ICT specially adapted for the handling or processing of medical references
    • G16H70/40 ICT specially adapted for the handling or processing of medical references relating to drugs, e.g. their side effects or intended usage

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Crystallography & Structural Chemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Medicinal Chemistry (AREA)
  • Pharmacology & Pharmacy (AREA)
  • Toxicology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention discloses a memory network and attention-based method for computing drug relocation, which comprises the following steps in order. Step 1: extract the implicit features of drugs and diseases from the drug-disease associations together with additional auxiliary information. Step 2: generate a drug preference vector from the implicit features computed in step 1. Step 3: generate a neighborhood contribution representation from the drug preference vector computed in step 2 in combination with a memory network. Step 4: integrate the drug implicit features, the disease implicit features, and the neighborhood contribution representation with a nonlinear function to produce a predicted value. By combining an attention mechanism with an external memory unit to generate the neighborhood contribution representation, the invention captures the neighborhood information contained in the small number of strong drug-disease associations; at the same time, a nonlinear function integrates the implicit features of drugs and diseases with the neighborhood contribution representation, so that the proposed model can infer predicted values from the overall perspective of the drug-disease associations.

Description

Memory network and attention-based drug relocation calculation method
Technical Field
The invention relates to a method for computing drug relocation, in particular to a method for computing drug relocation based on a memory network and attention, and belongs to the technical field of bioinformatics.
Background
Over the past few decades, although pharmaceutical technology has advanced continuously and human understanding of disease has deepened, the pace at which these advances are converted into finished new drugs has fallen far short of expectations. New drug development remains a long, expensive and high-risk process. According to statistics, developing a new drug costs on average 0.8 to 1.5 billion dollars and takes at least 13 to 15 years before the drug reaches the market. At the same time, the attrition rate of the process is very high: only about 10% of the drugs entering clinical trials are approved by regulatory agencies. For the remaining 90%, preclinical laboratory studies have limited predictive value, and these drugs fail to win regulatory approval because of ineffectiveness or high toxicity. To overcome these problems while improving new drug productivity, more and more companies are adopting computational drug relocation techniques to accelerate the new drug development process. Computational drug relocation aims at finding new uses for drugs already approved by drug regulatory authorities, and has attracted industry attention because of its short development cycle, low investment cost and strong controllability.
Although computational drug relocation techniques have met with some success in the pharmaceutical industry, they still face a number of significant challenges. For example, earlier drug relocation models were simply transplanted from related models in other fields without fully incorporating the domain knowledge of the pharmaceutical industry, so their performance in the relevant scenarios is low. In addition, traditional computational drug relocation models perform poorly on large-scale data sets; how to predict effective drug-disease associations from massive data has become another major problem in the field.
In summary, compared with the traditional drug research and development process, computational drug relocation can significantly accelerate drug development, save investment cost and enhance drug controllability, and thus has great practical significance and economic value for the pharmaceutical industry. At the same time, current computational drug relocation techniques still face a series of challenges and difficulties, so research on them carries great economic value and social significance and deserves close attention and further study by researchers.
Disclosure of Invention
The technical problem is as follows:
the invention aims to solve the defects of the existing drug relocation method and provides a memory network and attention-based drug relocation calculation method to improve the drug relocation performance.
The technical scheme is as follows:
the invention relates to a computing drug relocation method based on a memory network and attention, which sequentially comprises the following steps:
(1) an improved autoencoder is applied to the drug-disease associations, combined with the inter-drug and inter-disease similarities, to extract the respective implicit features of drugs and diseases; this process extracts effective implicit features and is less troubled by the cold start problem;
(2) a drug preference vector, which measures the similarity between the target drug and its neighbor drugs, is computed from the implicit features of drugs and diseases obtained in step (1);
(3) a neighborhood contribution representation is generated by combining the drug preference vector with an external memory unit; this representation captures the higher-order complex relations and the neighborhood information contained in the small number of strong drug-disease associations, the drug preference vector assigns larger weights to influential neighbors, and the external memory unit stores, over the long term, the feature information of related drugs in their neighbor role;
(4) the implicit features of drugs and diseases are integrated with the neighborhood contribution representation through a nonlinear function to obtain the final predicted value.
The calculated predicted value represents a probability that the target drug can treat the target disease.
Advantageous effects:
The invention estimates the treatment probability of existing drug-disease pairs. By combining an attention mechanism with an external memory unit to generate the neighborhood contribution representation, it captures the neighborhood information contained in the small number of strong drug-disease associations; at the same time, a nonlinear function integrates the implicit features of drugs and diseases with the neighborhood contribution representation, so that the proposed model can infer predicted values from the overall perspective of the drug-disease associations. The method has the following specific advantages:
(1) introducing auxiliary drug and disease information alleviates the data sparsity problem to a certain extent;
(2) an attention weight mechanism lets the model assign higher weights to similar drugs among the neighbors, ensuring that they contribute more in the decision phase;
(3) combining the attention mechanism with an external memory unit to generate the neighborhood contribution representation captures the neighborhood information contained in the small number of strong drug-disease associations;
(4) integrating the implicit features and the neighborhood contribution representation of drugs and diseases with a nonlinear function enables the model to infer predicted values from the overall perspective of the drug-disease associations.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a diagram illustrating the effect of dimension size of external memory cell vectors on model performance in an embodiment;
FIG. 3 is a diagram illustrating the influence of the magnitude of the balance parameter on the performance of the model in the embodiment.
Detailed Description
The technical solution of the present invention is described in detail below, but the scope of the present invention is not limited to the embodiments.
The symbols and parameters referred to hereinafter are defined in table 1:
TABLE 1 legends
[Table 1 image not reproduced in the text]
Definition 1, drug-disease association matrix: R represents the drug-disease association matrix, where R ∈ {0,1}^(m×n), m is the total number of drugs in the data set and n is the total number of diseases in the data set. If drug i is capable of treating disease j, then R[i][j] is set to 1, otherwise it is set to 0.
Definition 2, drug similarity matrix: DrugSim represents the similarity matrix between drugs, where DrugSim[i][j] takes values in [0,1] and represents the similarity between drug i and drug j. The more similar drug i is to drug j, the closer the value is to 1, and vice versa the closer to 0.
Definition 3, drug similarity vector: DrugSim_{i*} is the similarity vector of drug i, where DrugSim_{i*} = [DrugSim_{i1}, DrugSim_{i2}, ..., DrugSim_{im}] represents the similarity of drug i to all drugs in the data set.
Definition 4, disease similarity matrix: DiseaseSim represents the similarity matrix between diseases, where DiseaseSim[i][j] takes values in [0,1] and represents the similarity between disease i and disease j. The more similar disease i is to disease j, the closer the value is to 1, and vice versa the closer to 0.
Definition 5, disease similarity vector: DiseaseSim_{j*} is the similarity vector of disease j, where DiseaseSim_{j*} = [DiseaseSim_{j1}, DiseaseSim_{j2}, ..., DiseaseSim_{jn}] represents the similarity of disease j to all diseases in the data set.
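As a concrete illustration of the definitions above, the following toy sketch (illustrative values only, not the patent's data; all names are assumptions for this example) builds a small drug-disease association matrix R and reads off the similarity and association vectors of one drug:

```python
import numpy as np

# Toy sizes: m = 3 drugs, n = 2 diseases.
m, n = 3, 2
R = np.zeros((m, n))             # drug-disease association matrix (Definition 1)
R[0, 1] = 1                      # drug 0 is known to treat disease 1
R[2, 0] = 1

# DrugSim holds pairwise drug similarities in [0, 1] (Definition 2).
DrugSim = np.array([[1.0, 0.8, 0.1],
                    [0.8, 1.0, 0.2],
                    [0.1, 0.2, 1.0]])

DrugSim_i = DrugSim[0]           # similarity vector of drug 0 (Definition 3)
R_i = R[0]                       # association vector of drug 0 with all diseases
```

Disease-side structures (Definitions 4 and 5) would be built the same way, with an n×n DiseaseSim matrix.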
(1) Computing drug relocation method (HAMN) based on memory network and attention
The steps of the HAMN model are shown in fig. 1:
step 1: latent feature extraction
The HAMN model uses an improved autoencoder to extract the implicit features of drugs and diseases; at the same time, the inter-drug and inter-disease similarities are incorporated in this process to enrich the characterization of drugs and diseases, thereby alleviating the data sparsity problem.
The lower left part of FIG. 1 shows the process by which the HAMN model extracts the implicit features of drug i. R_{i*}, generated from the given drug-disease association matrix R, represents the association vector of drug i with all diseases in the data set. To enhance the robustness of the input data, Gaussian noise is added separately to the original input R_{i*} and the auxiliary input DrugSim_{i*}, producing the corrupted versions R̃_{i*} and DrugSim̃_{i*}; the encoding and decoding operations described below are then performed.
First, the HAMN model performs the encoding operation of equation (1), whose purpose is to generate the implicit feature drug_i ∈ R^k of drug i. The value of k is usually smaller than the dimensions of the original and auxiliary input vectors, so the operation both reduces dimensionality and extracts implicit features; g denotes any nonlinear activation function, such as ReLU or Sigmoid; W_1 and V_1 are the weight parameters that encode the original and auxiliary input information; b_d is a bias parameter vector; the tilde marks the noise-corrupted inputs.

drug_i = g(W_1 R̃_{i*} + V_1 DrugSim̃_{i*} + b_d)    (1)
Then the decoding operations of equations (2) and (3) are performed, whose purpose is to generate the reconstructed values R̂_{i*} and DrugSim̂_{i*} of the original input R_{i*} and the auxiliary input DrugSim_{i*}. Here f denotes any activation function, such as ReLU or Sigmoid; W_2 and V_2 are the weight parameters that decode the implicit feature of drug i so as to restore the initial values of the original and auxiliary inputs; b_s and b_D are bias parameters.

R̂_{i*} = f(W_2 drug_i + b_s)    (2)
DrugSim̂_{i*} = f(V_2 drug_i + b_D)    (3)
Thus the loss function produced by the above encoding and decoding operations is shown in equation (4), where ||R_{i*} − R̂_{i*}||^2 and ||DrugSim_{i*} − DrugSim̂_{i*}||^2 are the error losses between the input values and the reconstructed values; ||W_l||^2 + ||V_l||^2 is the loss of the L2 regularization parameters, which controls model complexity so that the model generalizes better; α is a balance parameter that adjusts the relative weight, within the loss function, of the error between the original input and its reconstruction versus the error between the auxiliary input and its reconstruction; λ is the regularization parameter, which adjusts the weight of the L2 regularization loss in the loss function.

L = α ||R_{i*} − R̂_{i*}||^2 + (1 − α) ||DrugSim_{i*} − DrugSim̂_{i*}||^2 + λ Σ_l (||W_l||^2 + ||V_l||^2)    (4)
By minimizing equation (4), the implicit feature drug_i of drug i can be obtained.
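The encoding-decoding procedure of equations (1)-(4) can be sketched numerically as follows. This is a minimal forward pass with toy dimensions and random weights, assuming Sigmoid activations and the (1 − α) weighting of the auxiliary reconstruction error suggested by the text; it is not the trained HAMN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dis, m_drug, k = 5, 4, 3       # toy sizes; k is smaller than the input dims, as in the text

R_i = rng.integers(0, 2, n_dis).astype(float)    # original input: association vector of drug i
S_i = rng.random(m_drug)                         # auxiliary input: similarity vector DrugSim_{i*}
R_noisy = R_i + rng.normal(0, 0.1, n_dis)        # Gaussian-corrupted inputs
S_noisy = S_i + rng.normal(0, 0.1, m_drug)

# Randomly initialized parameters (learned by minimizing Eq. (4) in practice).
W1, V1, bd = rng.normal(size=(k, n_dis)), rng.normal(size=(k, m_drug)), np.zeros(k)
W2, bs = rng.normal(size=(n_dis, k)), np.zeros(n_dis)
V2, bD = rng.normal(size=(m_drug, k)), np.zeros(m_drug)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

drug_i = sigmoid(W1 @ R_noisy + V1 @ S_noisy + bd)   # Eq. (1): encode both inputs
R_hat = sigmoid(W2 @ drug_i + bs)                    # Eq. (2): reconstruct original input
S_hat = sigmoid(V2 @ drug_i + bD)                    # Eq. (3): reconstruct auxiliary input

alpha, lam = 0.5, 1e-3
loss = (alpha * np.sum((R_i - R_hat) ** 2)           # Eq. (4): weighted reconstruction errors
        + (1 - alpha) * np.sum((S_i - S_hat) ** 2)
        + lam * (np.sum(W1**2) + np.sum(V1**2) + np.sum(W2**2) + np.sum(V2**2)))
```

In training, the loss would be minimized over all drugs (and, symmetrically, all diseases) by gradient descent.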
Meanwhile, the lower right part of FIG. 1 shows the process by which the HAMN model extracts the implicit feature of disease j. The process is theoretically the same as that for drug i; the only difference is that the original input and the auxiliary input are replaced by R_{*j} and DiseaseSim_{j*}, where R_{*j} represents the association vector between disease j and all drugs in the data set. By performing encoding and decoding operations analogous to equations (1)-(3) on these inputs, the implicit feature disease_j of disease j can be calculated.
Step 2: drug preference vector generation
Through the implicit feature extraction of step 1, the HAMN model obtains the implicit features of drugs and diseases. However, these implicit features store only the global information shared by most drug-disease associations and do not account for the neighborhood information contained in the small number of strong drug-disease associations. Inspired by neighborhood models, and since the neighborhood information contained in strong drug-disease associations is usually provided by the neighbors of the target drug, the HAMN model uses the relevant neighbors of the target drug to capture this information. The contribution weight of each neighbor, however, should not be fixed: more similar neighbors should have larger contribution weights for the target drug, and vice versa.
Thus the drug preference vector p_{ij}, which measures the similarity of the target drug to its neighbor drugs, is computed according to equation (5), where each dimension p_{ijn} represents the similarity of target drug i to its neighbor drug n for a given disease j.

p_{ijn} = drug_i^T drug_n,  n ∈ N(i)    (5)

Here N(i) denotes the set of neighbor drugs of target drug i, which consists of the drugs having a verified association with disease j. The right-hand side of equation (5) takes the inner product of the implicit feature vector of target drug i and that of neighbor drug n, thereby computing the compatibility of the two. Equation (5) works because the inner product gives neighbor drugs similar to target drug i a larger compatibility value and dissimilar ones a smaller value.
And step 3: neighborhood contribution representation generation
Through the drug preference vector p_{ij} computed in step 2, the HAMN model has obtained the similarity between the target drug and its neighbors. Under the assumption that similar drugs can treat similar diseases, it follows that when target drug i makes a decision, i.e., infers whether it can treat disease j, the more similar drugs should contribute more to that decision. The preference vector p_{ij} is therefore normalized by equation (6) to obtain the attention weight vector q_{ij} of target drug i, which is used to infer the contribution ratio of each neighbor drug to the decision of target drug i. Notably, the attention weight vector q_{ij} applies higher weights to the similar drugs in the neighbor set while reducing the importance of potentially less similar ones, so that target drug i focuses on the influential subset of its neighbors when making the decision.

q_{ijn} = exp(p_{ijn}) / Σ_{k∈N(i)} exp(p_{ijk})    (6)
Next, to learn the neighborhood information contained in the small number of strong drug-disease associations that target drug i needs for its decision, the HAMN model captures that information through the target drug's neighbor drugs, under the assumption that the neighborhood information contained in strong associations is usually provided by the neighbors of the target drug. Meanwhile, in the field of recommender systems, memory networks use an external memory unit to store, over the long term, the feature information of related users or items in their neighbor role, and can thereby effectively capture the neighborhood information they contain. Notably, the computational drug relocation problem is by nature a recommendation problem. Inspired by memory networks, the HAMN model therefore uses an external memory unit to store the feature information of each related drug in its neighbor role and to capture the neighborhood information contained in the neighbor drugs. Using the attention weight vector q_{ij}, each dimension of which stores the contribution weight of the corresponding neighbor drug, the neighborhood information contained in all neighbor drugs of target drug i is weighted and accumulated into the final neighborhood contribution representation o_{ij}, which represents all the neighborhood information needed for target drug i to make its decision. Its generation is shown in equation (7).

o_{ij} = Σ_{n∈N(i)} q_{ijn} c_n    (7)
Where N (i) represents the set of neighbor drugs for target drug i, which set consists of drugs that have a verified association with disease j. q. q.sijnIs an attention weight vector qijIts value represents the contribution weight of the neighbor drug n to the target drug i decision. c. CnAnd the external memory unit represents the neighbor drug n, stores the characteristic information of the drug n under the neighbor role and is used for capturing the neighborhood information contained in the neighbor drug. It is essentially a set of parameter vectors, available vector cn=[m1,m2,...ml]Indicating, notably, an external memory cell cnThe dimension of the drug is not required to be equal to the drug implicit feature vector drugnIs kept consistent by adjusting the external memory unit cnThe dimension size of the model enables the model to meet the requirement of computing drug relocation data sets of different scales, and the expandability of the model is enhanced to a certain extent.
And 4, step 4: predictive value generation
The HAMN model takes the neighborhood contribution representation o_{ij} as the output of a neighborhood model, which captures the neighborhood information contained in the small number of strong associations; at the same time, it takes the product of drug_i and disease_j as the output of a latent feature model, which captures the global information shared by most drug-disease associations. Finally, the two are integrated by a nonlinear function, so that the final predicted value considers both the global information shared by most drug-disease associations and the neighborhood information contained in the few strong ones. The predicted value generation function of the HAMN model is shown in equation (8).

r̂_{ij} = F_out(η h^T(drug_i ⊙ disease_j) + (1 − η) W^T o_{ij} + b)    (8)

Here drug_i and disease_j are the implicit feature vectors of drug i and disease j computed by the HAMN model, o_{ij} is the neighborhood contribution representation, and ⊙ denotes the element-wise (Hadamard) product. h and W are weight parameter vectors; η is a balance parameter that controls the relative weight of the latent feature model output and the neighborhood model output in the final result; b is a bias parameter that increases the translation capability of the network; F_out denotes any activation function, such as ReLU or Sigmoid. r̂_{ij} is the predicted value and represents the probability that drug i can treat disease j.
Furthermore, h^T(drug_i ⊙ disease_j) is the output based on the latent feature model and W^T o_{ij} the output based on the neighborhood model. Equation (8) smoothly and nonlinearly integrates the two to produce the predicted value r̂_{ij} of target drug i for disease j. This operation ensures that the HAMN model can learn both the global structural information of the drug-disease associations and the information contained in the partial strong associations.
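The fusion step can be sketched as follows, assuming a Sigmoid output activation and an η / (1 − η) split between the two model outputs as the text's description of the balance parameter suggests (the weight vectors here are placeholders for learned parameters):

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

drug_i = np.array([0.2, 0.7, 0.1])      # latent features from step 1 (toy values)
disease_j = np.array([0.5, 0.4, 0.9])
o_ij = np.array([0.3, 0.1, 0.6, 0.2])   # neighborhood contribution from step 3

h = np.ones(3)                          # weight vectors (learned in practice)
W = np.ones(4)
eta, b = 0.7, 0.0                       # balance parameter and bias

# Eq. (8): nonlinearly fuse the latent-feature and neighborhood outputs.
r_hat = sigmoid(eta * h @ (drug_i * disease_j) + (1 - eta) * W @ o_ij + b)
```

The result r_hat lies in (0, 1) and is read as the probability that drug i can treat disease j.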
Example 1:
the deep learning platform used in this embodiment is a pytorech, all algorithms are written in python language, and the basic configuration of the software is shown in table 2.
The basic configuration is as follows in table 2:
table 2 experimental environment configuration
[Table 2 image not reproduced in the text]
As shown in FIGS. 2 and 3, the experimental section evaluates the HAMN algorithm mainly from two aspects: the dimension of the external memory unit vector and the value of the balance parameter. The default parameter settings used in the experiments are shown in Table 3 below.
TABLE 3 Experimental Default parameter configuration
[Table 3 image not reproduced in the text]
The experiments use two mainstream real data sets, the Gottlieb data set and the Cdataset. The Gottlieb data set contains 593 FDA-approved drugs, 313 registered diseases and 1933 validated drug-disease associations. The Cdataset contains 663 drugs approved by the U.S. Food and Drug Administration (FDA), 409 registered diseases and 2532 validated drug-disease associations.
The abscissa of FIG. 2 is the dimension of the external memory unit vector c_n, and the ordinate is the AUC value. The experimental results show that model performance improves steadily as the dimension of the external memory vector increases, peaking at dimension 6 with AUC values of 0.946 and 0.958 on the two data sets respectively. Notably, overfitting can then occur: as the dimension increases further, the model's AUC values begin to decline. This shows that increasing the dimension of the external memory unit increases the complexity and fitting capacity of the neighborhood module, allowing it to learn the neighborhood information contained in the few strong drug-disease associations; but setting the dimension too large makes the model overly complex, prone to overfitting, and reduces its generalization ability.
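The AUC metric reported in FIGS. 2 and 3 can be computed rank-wise. The following self-contained sketch (toy labels and scores, not the experimental data) illustrates the definition used for such evaluations:

```python
import numpy as np

def auc(y_true, y_score):
    """Rank-based AUC: probability that a random positive outscores a random negative."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Toy example: every positive association outscores every negative one.
y = np.array([1, 0, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.1])
```

With perfect ranking as above, auc(y, scores) equals 1.0; a reversed ranking would give 0.0.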
FIG. 3 shows the performance of the HAMN model on the Gottlieb and Cdataset data sets as the hyperparameter η varies over {0.1, 0.3, 0.5, 0.7, 0.9}. Notably, performance improves steadily and almost linearly as η increases. This indicates that, for the final predicted value, the latent feature module matters more than the neighborhood module, and giving it higher weight improves the generalization of the whole model. Performance peaks at η = 0.7, where the AUC values on both data sets are maximal. However, as η increases further, performance begins to decline, most markedly at η = 0.9. This shows that the neighborhood module scores some test set samples more accurately, so it must retain a certain weight in order for the final predicted value to take its contribution into account.
From embodiment 1 above it can be seen that an appropriate external memory unit dimension enhances the fitting ability of the HAMN neighborhood module, letting it learn the neighborhood information contained in part of the strong drug-disease associations and thereby further improving the overall performance of the HAMN model. Meanwhile, the latent feature module is more important than the neighborhood module and should be given higher weight. However, since the neighborhood module accurately judges some test set samples that the latent feature module cannot predict, it should retain part of the weight so that the final predicted value reflects its contribution. Setting η to 0.7 therefore improves both the prediction performance and the generalization of the HAMN model to a certain extent.

Claims (1)

1. A memory network and attention-based method for computing drug relocation, characterized by comprising the following steps in order:
(1) using an improved autoencoder to combine the drug-disease associations, the inter-drug similarities and the inter-disease similarities, so as to extract the respective implicit features of the drugs and the diseases;
(2) computing, from the implicit features of the drugs and diseases obtained in step (1), a drug preference vector that measures the similarity between the target drug and its neighbor drugs;
(3) generating a neighborhood contribution representation by combining the drug preference vector with an external memory unit, the representation capturing the higher-order complex relations and the neighborhood information contained in the small number of strong drug-disease associations;
(4) integrating the implicit features of the drugs and diseases with the neighborhood contribution representation through a nonlinear function to obtain a final predicted value, the predicted value representing the probability that the target drug can treat the target disease.
CN202011169358.9A 2020-10-28 2020-10-28 Memory network and attention-based drug relocation calculation method Pending CN112331275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169358.9A CN112331275A (en) 2020-10-28 2020-10-28 Memory network and attention-based drug relocation calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011169358.9A CN112331275A (en) 2020-10-28 2020-10-28 Memory network and attention-based drug relocation calculation method

Publications (1)

Publication Number Publication Date
CN112331275A true CN112331275A (en) 2021-02-05

Family

ID=74296115

Country Status (1)

Country Link
CN (1) CN112331275A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114093527A (en) * 2021-12-01 2022-02-25 中国科学院新疆理化技术研究所 Drug relocation method and system based on spatial similarity constraint and non-negative matrix factorization
CN117038105A (en) * 2023-10-08 2023-11-10 武汉纺织大学 Drug repositioning method and system based on information enhancement graph neural network
CN117038105B (en) * 2023-10-08 2023-12-15 武汉纺织大学 Drug repositioning method and system based on information enhancement graph neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653846A (en) * 2015-12-25 2016-06-08 中南大学 Integrated similarity measurement and bi-directional random walk based pharmaceutical relocation method
CN107545151A (en) * 2017-09-01 2018-01-05 中南大学 A kind of medicine method for relocating based on low-rank matrix filling
CN110853714A (en) * 2019-10-21 2020-02-28 天津大学 Drug relocation model based on pathogenic contribution network analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIEYUE HE et al.: "Hybrid Attentional Memory Network for Computational drug repositioning", arXiv, 12 June 2020 (2020-06-12), pages 1 - 16 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination