Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
In short, the deep-learning-based method for predicting the water solubility of chemical molecules provided by the invention comprises a pre-training process and an actual prediction process of a deep learning model. The pre-training process comprises the following steps: constructing a deep learning model, wherein the deep learning model is built on a bidirectional time-series prediction model and an attention mechanism and is used for learning the correspondence between a chemical molecular structure sequence and a water-solubility attribute; and training the deep learning model with the goal of minimizing a set loss function, wherein the training process takes character-sequence encodings representing chemical molecular structures as input and water-solubility attribute information of the chemical molecules as output. The bidirectional time-series prediction model may be a bidirectional long short-term memory network (BILSTM), a bidirectional gated recurrent unit (BIGRU), or the like. The character sequence characterizing a chemical molecular structure may be in the SMILES format, a specification that describes molecular structure explicitly with ASCII strings, or in another format. For clarity, the BILSTM model and SMILES are used as examples below.
In the invention, a BCSA model architecture is constructed on the basis of BILSTM, channel attention, and spatial attention, using the SMILES molecular representation (Weininger, 1988). To address the non-uniqueness of SMILES representations, the data are augmented with a SMILES enumeration technique to obtain a larger effective labeled data set as model input, and the mean prediction over the augmented copies of each molecule is taken as the final result, giving the model stronger generalization ability. Several common graph neural network models are then trained on the same data set for comparison with the invention, to explore the performance advantages of the provided model under different molecular representations.
Hereinafter, the data preprocessing process, the model architecture, and the evaluation result will be described in detail.
First, representation and preprocessing of molecular data sets
In one embodiment, the data set is derived from the 2020 work of Cui et al. (Cui, 2020) and contains 9943 non-redundant compounds. The molecules are given in SMILES (Simplified Molecular-Input Line-Entry System) format. This symbolic format represents a molecule as a single line of text listing its atoms and covalent bonds. From the perspective of formal language theory, atoms and covalent bonds are both symbol labels, and a SMILES string is simply a sequence of such symbols. This representation has been used to predict biochemical properties. To encode SMILES, the present invention tokenizes the strings with the regular expression of Schwaller et al. (2018), separating the tokens with white space, e.g., "c 1 c ( C ) c c c c 1". A word2vec-like method is then employed to embed the input. Further, the data set is expanded by SMILES enumeration, and each token sequence is padded with a "pad" token to a fixed length of 150; sequences longer than this are truncated. Finally, the data set is randomly divided into a training set (80%), a validation set (10%), and a test set (10%).
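As an illustrative sketch of this preprocessing step (the tokenizer regular expression is the published one from Schwaller et al. (2018); the function names and the "<pad>" token string are assumptions for illustration only), the tokenization and padding can be written in Python as:

import re

# Atom-level SMILES tokenizer regular expression from Schwaller et al. (2018).
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize(smiles):
    # Split a SMILES string into atom/bond tokens.
    return SMILES_REGEX.findall(smiles)

def pad_tokens(tokens, max_len=150, pad="<pad>"):
    # Pad to a fixed length of 150 tokens; longer sequences are truncated.
    return (tokens + [pad] * max_len)[:max_len]

print(" ".join(tokenize("Cc1ccccc1")))  # C c 1 c c c c c 1 (toluene)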
Second, deep learning model architecture
Referring to Fig. 1, the deep learning model body comprises a BILSTM, a channel attention module, and a spatial attention module, which together learn the correspondence between the chemical molecular structure sequence and the water-solubility attribute.
BILSTM is mainly used to acquire the sequence information of SMILES. Exploiting the good ability of RNN (recurrent neural network) models to handle long-range relations in sequences in natural language processing, the invention uses BILSTM, a variant of the LSTM model, in batch mode to acquire the context information of the SMILES sequence. A BILSTM combines an LSTM that processes the sequence forward with an LSTM that processes it backward, which allows it to use features not only from the past but also from the future. The BILSTM takes the SMILES sequence encoding x = {x_1, x_2, ..., x_T} as input; each time step t outputs a forward hidden state h_t^f and a backward hidden state h_t^b, and the hidden-layer output of the BILSTM at time t is the concatenation of the two states, which can be expressed as:

h_t = [h_t^f ; h_t^b]    (1)
Further, the processing procedure of the BILSTM can be summarized as:

C = f(W_e x_i, h_{t-1})    (2)

where f denotes the multi-layer BILSTM and W_e is the learned weight matrix of the embedding vector. The output is expressed in simplified form as:

C = {h_1, h_2, ..., h_T}    (3)
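A minimal PyTorch sketch of this encoder follows; the embedding dimension, hidden dimension, and layer count are illustrative assumptions, not the tuned values of Table 1:

import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    # Embeds token indices (W_e x_i) and returns the concatenated
    # forward/backward hidden states C = {h_1, ..., h_T} of equation (3).
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                              batch_first=True, bidirectional=True)

    def forward(self, x):
        # x: (batch, T) token indices -> C: (batch, T, 2 * hidden_dim)
        C, _ = self.bilstm(self.embedding(x))
        return C

encoder = BiLSTMEncoder(vocab_size=64)
print(encoder(torch.zeros(4, 150, dtype=torch.long)).shape)  # (4, 150, 256)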
For the attention mechanism, the embodiment of the invention embeds a CBAM (Convolutional Block Attention Module) mechanism into the forward-propagation sequence neural network model. The CBAM mechanism comprises two sub-modules, one denoted the channel attention map M_c and the other the spatial attention map M_s, which acquire the salient information along the channel axis and the spatial (sequence) axis, respectively. The whole attention output process can be expressed as:

C'' = σ(M_c(C)) ⊗ C,  C' = σ(M_s(C'')) ⊗ C''    (4)

where ⊗ represents element-wise multiplication, σ denotes the sigmoid activation function, and C' is the final output.
Specifically, the channel attention module focuses mainly on what the SMILES character content is. For example, the spatial information of the BILSTM output matrix is first aggregated by average-pooling and max-pooling operations to obtain two different spatial context descriptors, C_avg and C_max, representing the average-pooled and max-pooled information respectively; the two descriptors are each fed through a 2-layer shared MLP network, and the channel attention output vector is finally obtained by summation. The whole process is formalized as:

M_c(C) = MLP(AvgPool1d(C)) + MLP(MaxPool1d(C))
       = W_1(σ(W_0(C_avg))) + W_1(σ(W_0(C_max)))    (5)

To reduce network overhead, σ here uses, for example, a relu activation function; W_0 and W_1 are the learning weights of the first and second layers of the shared MLP (multi-layer perceptron) model.
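A sketch of this channel attention sub-module in PyTorch (the reduction ratio of the shared MLP is an assumed hyper-parameter; the BILSTM output is taken channel-first, i.e. of shape (batch, channels, T)):

import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    # Channel attention map M_c of equation (5): average- and max-pooling
    # over the sequence axis, a shared 2-layer MLP, then summation.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(           # W_1(relu(W_0(.)))
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, c):
        # c: (batch, channels, T)
        c_avg = c.mean(dim=2)               # AvgPool1d descriptor C_avg
        c_max = c.amax(dim=2)               # MaxPool1d descriptor C_max
        m_c = self.mlp(c_avg) + self.mlp(c_max)
        return m_c.unsqueeze(-1)            # (batch, channels, 1)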
The spatial attention module focuses primarily on the sequence-position information of the SMILES characters. In one embodiment, it is implemented with a two-layer one-dimensional convolutional network with kernel size 7, specifically:

M_s(C) = Conv1d_{7,1}(σ(Conv1d_{7,16}(C)))    (6)

where σ denotes the relu activation function and Conv1d_{7,x} represents a one-dimensional convolutional layer with kernel size 7 and x filters. The final overall attention network module is represented as:
O = AvgPool1d(σ(M_s(C'')) ⊗ C'')    (7)

where ⊗ denotes element-wise multiplication and O represents the hidden-state mapping vector obtained by aggregating the attention-weighted hidden states with the average-pooling operation.
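The spatial sub-module and the overall composition of equations (4), (6), and (7) can be sketched as follows, reusing ChannelAttention1d from the sketch above (the convolution padding of 3, which preserves the sequence length, is an assumption):

import torch
import torch.nn as nn

class SpatialAttention1d(nn.Module):
    # Spatial attention map M_s of equation (6): two 1-D convolutions with
    # kernel size 7 and 16 / 1 filters respectively.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, 16, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(16, 1, kernel_size=7, padding=3)

    def forward(self, c):
        # c: (batch, channels, T) -> spatial map of shape (batch, 1, T)
        return self.conv2(torch.relu(self.conv1(c)))

class BCSAttention(nn.Module):
    # Composition of equations (4) and (7); ChannelAttention1d is the module
    # defined in the previous sketch.
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention1d(channels)
        self.spatial = SpatialAttention1d(channels)

    def forward(self, c):
        c2 = torch.sigmoid(self.channel(c)) * c    # C'' = sigma(M_c(C)) (x) C
        c1 = torch.sigmoid(self.spatial(c2)) * c2  # C'  = sigma(M_s(C'')) (x) C''
        return c1.mean(dim=2)                      # O = AvgPool1d(C'), eq. (7)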
In the present invention, the last part of the regression task delivers the trained vector O to a two-layer fully-connected layer to predict the final attribute value. For example, relu, commonly used in deep learning studies, can serve as the intermediate activation function, and dropout can be used to mitigate overfitting. During training, the MSE (mean squared error) is used as the loss function, expressed as:

MSE = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i)²    (8)

where N represents the size of the training data, ŷ_i denotes the predicted value, and y_i represents the experimental true value.
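A sketch of this output head and loss in PyTorch (the hidden width and dropout rate are assumed values):

import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    # Two-layer fully-connected head with relu and dropout, mapping the
    # attention-aggregated vector O to a single property value.
    def __init__(self, in_dim, hidden_dim=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, o):
        return self.net(o).squeeze(-1)

head = RegressionHead(in_dim=256)
criterion = nn.MSELoss()   # equation (8)
loss = criterion(head(torch.randn(4, 256)), torch.randn(4))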
Third, selection of hyper-parameters
In the model provided by the invention, a number of parameters affect training and architecture, and the performance of the model differs under different parameter settings. In one embodiment, Bayesian optimization (Bergstra, 2011) is employed to explore the best choice of hyper-parameters, with

Σ_{i=1}^{N} (ŷ_i − y_i)² / Σ_{i=1}^{N} (y_i − ȳ)²  (i.e., 1 − R²)

as the objective minimized by the acquisition function, where ŷ_i denotes the predicted value, y_i represents the experimental true value, and ȳ represents the mean of the experimental true values. During optimization, a TPE (Tree-structured Parzen Estimator) algorithm builds a probability model from past results. Training is carried out on the training set; a total of 100 models are generated, each trained for 60 epochs, and an early-stopping strategy (patience of 20) is added to accelerate training. Finally, the hyper-parameters giving the best prediction on the validation set are selected for training, as shown in Table 1. The model is then further trained for 30 epochs on the enumerated (augmented) training set in anticipation of improving the final accuracy.
Table 1: hyper-parameter selection space and optimal hyper-parameters
The model framework is implemented using PyTorch, and all computations and model training are run on a Linux server (openSUSE) with an Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz and an NVIDIA GeForce RTX 2080 Ti graphics card with 11 GB of memory.
Fourth, evaluation criteria
In one embodiment, the provided model is evaluated using four performance indicators commonly used in regression tasks: the coefficient of determination R-squared (R²), the Spearman coefficient, RMSE, and MAE. The R² and Spearman coefficients help assess how well the whole model fits the data: the closer the result is to 1, the better the fit, and vice versa. The RMSE and MAE error measures quantify the difference between predicted and true values: the closer the result is to 0, the better the prediction, and vice versa.
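These four indicators can be computed with scipy and scikit-learn, for example:

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_metrics(y_true, y_pred):
    # R^2 and Spearman near 1 mean a good fit; RMSE and MAE near 0 mean
    # small prediction errors.
    return {
        "R2": r2_score(y_true, y_pred),
        "Spearman": spearmanr(y_true, y_pred).correlation,
        "RMSE": mean_squared_error(y_true, y_pred) ** 0.5,
        "MAE": mean_absolute_error(y_true, y_pred),
    }

print(regression_metrics(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2])))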
Fifth, validation results for water solubility
The invention aims to develop a deep learning model based on self-encoding of the molecular SMILES sequence, to explore the effect of a deep neural network based on SMILES molecular sequence descriptors in predicting molecular solubility. For example, the original data set was split into 7955 training, 996 validation, and 995 test molecules. A BILSTM model and, on top of it, the BCSA model were built using the optimal hyper-parameters of Table 1. Fig. 2 shows the trend of the model-fit R² on the validation set and the test set during 400 epochs of training, with a curve smoothness of 0.8. As is apparent from the figure, the model of the invention achieves a stronger fit and stronger generalization than the BILSTM model on both the validation set and the test set.
In deep learning, more training samples generally yield better-trained models with stronger generalization ability. Data enhancement is both possible and necessary here, because the model of the invention is based on sequence encodings of SMILES molecules, and one molecule can be written as many different SMILES strings, i.e., many different sequence encodings. Preferably, the original split data set is further amplified using the SMILES enumeration technique, and BCSA models are trained with 20-fold (20 SMILES per molecule) and 40-fold (40 SMILES per molecule) molecular enhancement; structurally simple molecules may yield repeated SMILES, and to avoid affecting the training results such duplicates are removed. The finally obtained training, validation, and test sets contain (134454:19881:16834) and (239260:30042:39800) amplified samples, respectively. In the experiment, the model with the best validation-set R² during training is used, and the mean prediction over the amplified copies of each test-set molecule is taken as the final result to measure the model's ability to extract molecular sequence information; the results are shown in Table 2. The validation results show that the stability and generalization of the enhanced-data models improve markedly, with the best effect obtained on the SMILES40 data set, indicating that the enhanced model better attends to the different sequence encodings of a molecule. Molecular amplification thus further increases the accuracy of the model, reaching R² = 0.83–0.88 and RMSE = 0.79–0.95 on the test set. Compared with the deeper-net model originally developed by Cui et al. on this data set using molecular fingerprints (R² = 0.72–0.79, RMSE = 0.988–1.151), the invention shows better prediction performance.
Table 2: prediction statistics for training and test sets
To better show the competitiveness of the model of the invention, a series of baseline models based on graph neural networks, GCN (Kipf, 2016), MPNN (Gilmer, 2017), and AttentiveFP (Pérez Santín, 2021), were further built to study the influence of molecular-enhancement-based sequence descriptors versus molecular-graph descriptors on solubility prediction. These models were built with DGL-LifeSci, the life-science Python package released by the DGL team. Fig. 3 shows scatter plots of predicted versus experimental solubility values for the different models on the same test set. As can be seen from the figure, the molecular-enhancement-based BCSA model achieves the best prediction of molecular solubility and predicts well across the different value ranges. The model of the invention therefore has a clear competitive advantage.
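As an example of how such a baseline can be assembled with DGL-LifeSci (a minimal GCN sketch; the featurizer and hidden sizes are illustrative choices, not the tuned baseline settings):

import torch
from dgllife.model import GCNPredictor
from dgllife.utils import CanonicalAtomFeaturizer, smiles_to_bigraph

featurizer = CanonicalAtomFeaturizer()  # stores atom features in ndata["h"]
g = smiles_to_bigraph("Cc1ccccc1", node_featurizer=featurizer)

model = GCNPredictor(in_feats=featurizer.feat_size("h"),
                     hidden_feats=[64, 64], n_tasks=1)
pred = model(g, g.ndata["h"])           # predicted solubility, shape (1, 1)
print(pred.shape)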
Sixth, prediction for other related attributes
In the experiment, the BCSA (SMILES40) model was also used to predict the related octanol-water partition coefficient logP and distribution coefficient logD (pH 7.4). The logP data set is again based on that of Cui et al. (Cui, 2020). As seen in the left panel of Fig. 4, good results were obtained on the test set, with R² = 0.99 and RMSE = 0.29, and the scatter plot shows a good fit across all value ranges. The logD (pH 7.4) training data set is from Wang et al.; it was randomly split 8:1:1, and training data were generated with 40× SMILES enumeration, giving a 40× data set with a ratio of 31290:3858:4031 (training:validation:test). The mean prediction per molecule was taken as the final prediction. As seen in the right panel of Fig. 4, the test set achieved R² = 0.93 and RMSE = 0.36. Compared with the reported SVM model of Wang et al. (Wang, 2015), with R² = 0.89 and RMSE = 0.56 on its test set and R² = 0.92 and RMSE = 0.51 on its training set, the test-set prediction of the model provided by the invention even exceeds the training-set performance of Wang et al. The invention thus also exhibits better performance in octanol-water-related predictions and can provide reliable and robust results.
In conclusion, addressing the fact that accurate prediction of water solubility is a challenging task in drug discovery, the invention provides an end-to-end deep learning model framework based on molecular enhancement and a fused attention mechanism built on LSTM: channel attention and spatial attention modules are added to the sequence-processing strengths of the long short-term memory network to extract the parts of the SMILES sequence important for water-solubility prediction, and Bayesian optimization is used for hyper-parameter selection. The provided model is therefore simple, independent of additional auxiliary knowledge (such as the complex spatial structure of molecules), and can also be used to predict other physicochemical and ADMET properties (absorption, distribution, metabolism, excretion, and toxicity).
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or Python, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.