WO2021082753A1 - Method, apparatus, device, and storage medium for predicting structure information of a protein


Info

Publication number
WO2021082753A1
WO2021082753A1 (PCT/CN2020/114386)
Authority
WO
WIPO (PCT)
Prior art keywords
sequence
sequence feature
amplified
protein
database
Application number
PCT/CN2020/114386
Other languages
English (en)
French (fr)
Inventor
吴家祥
郭宇智
黄俊洲
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to EP20882879.8A (EP4009328A4)
Priority to JP2022514493A (JP7291853B2)
Publication of WO2021082753A1
Priority to US17/539,946 (US20220093213A1)

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B15/00 ICT specially adapted for analysing two-dimensional or three-dimensional molecular structures, e.g. structural or functional relations or structure alignment
    • G16B15/20 Protein or domain folding
    • G16B20/00 ICT specially adapted for functional genomics or proteomics, e.g. genotype-phenotype associations
    • G16B20/30 Detection of binding sites or motifs
    • G16B30/00 ICT specially adapted for sequence analysis involving nucleotides or amino acids
    • G16B30/10 Sequence alignment; Homology search
    • G16B40/00 ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
    • G16B40/20 Supervised data analysis
    • G16B5/00 ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks, protein interaction networks or metabolic networks
    • G16B50/00 ICT programming tools or database systems specially adapted for bioinformatics
    • G16B50/30 Data warehousing; Computing architectures

Definitions

  • This application relates to the field of biological information technology, in particular to a method, device, equipment and storage medium for predicting protein structure information.
  • In the related art, a protein's amino acid sequence can be used to determine its structural information.
  • Specifically, a multiple sequence alignment query is performed in an amino acid sequence database to extract the sequence features of the protein's amino acid sequence, and the structural information of the protein is then predicted from those sequence features. The accuracy of this feature extraction is directly related to the data scale of the database: the larger the amino acid sequence database, the higher the accuracy of the extracted sequence features.
  • The embodiments of the present application provide a method, device, equipment, and storage medium for predicting protein structure information, which can improve the prediction efficiency of protein structure information while ensuring its prediction accuracy.
  • the technical solutions are as follows:
  • In one aspect, a method for predicting the structure information of a protein comprises:
  • performing a sequence alignment query in a first database according to the amino acid sequence of the protein to obtain multiple sequence alignment data;
  • performing feature extraction on the multiple sequence alignment data to obtain an initial sequence feature;
  • processing the initial sequence feature through a sequence feature amplification model to obtain an amplified sequence feature of the protein; the sequence feature amplification model is a machine learning model obtained by training on initial sequence feature samples and amplified sequence feature samples; an initial sequence feature sample is obtained by performing a sequence alignment query in the first database based on an amino acid sequence sample, and the corresponding amplified sequence feature sample is obtained by performing a sequence alignment query in a second database based on the same amino acid sequence sample; the data size of the second database is greater than the data size of the first database; and
  • predicting the structure information of the protein based on the amplified sequence feature.
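The four claimed steps (alignment query, feature extraction, feature amplification, structure prediction) can be sketched as a pipeline. All function names and stub bodies below are hypothetical placeholders for illustration; they are not part of the application:

```python
# Minimal sketch of the claimed prediction pipeline. Every function body is a
# made-up stand-in: a real system would query an actual sequence database and
# run trained models at steps 3 and 4.

def query_first_database(amino_acid_seq):
    """Stand-in for a sequence alignment query in the (smaller) first database."""
    # Pretend every query returns the input sequence plus one 'homolog'.
    return [amino_acid_seq, amino_acid_seq[::-1]]

def extract_features(msa_data):
    """Stand-in for feature extraction over the multiple sequence alignment."""
    return [float(len(seq)) for seq in msa_data]

def amplify_features(initial_features):
    """Stand-in for the trained sequence feature amplification model."""
    return [f * 2.0 for f in initial_features]  # placeholder transform

def predict_structure(amplified_features):
    """Stand-in for the protein structure information prediction model."""
    return {"score": sum(amplified_features)}

def predict_protein_structure(amino_acid_seq):
    msa = query_first_database(amino_acid_seq)   # step 1: MSA query
    initial = extract_features(msa)              # step 2: initial features
    amplified = amplify_features(initial)        # step 3: amplification
    return predict_structure(amplified)          # step 4: prediction

print(predict_protein_structure("MKT")["score"])  # prints 12.0 for the stubs
```

The point of the sketch is the data flow: the database query happens only against the smaller first database, and the amplification model substitutes for a query against the larger second database.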
  • a protein structure information prediction device includes:
  • the data acquisition module is used to perform sequence alignment query in the first database according to the amino acid sequence of the protein to obtain multiple sequence alignment data;
  • an initial feature acquisition module, configured to perform feature extraction on the multiple sequence alignment data to obtain an initial sequence feature;
  • an amplification feature acquisition module, used to process the initial sequence feature through the sequence feature amplification model to obtain the amplified sequence feature of the protein; the sequence feature amplification model is a machine learning model obtained by training on initial sequence feature samples and amplified sequence feature samples; an initial sequence feature sample is obtained by performing a sequence alignment query in the first database based on an amino acid sequence sample, and the corresponding amplified sequence feature sample is obtained by performing a sequence alignment query in the second database based on the same sample; the data size of the second database is larger than the data size of the first database; and
  • the structure information prediction module is used to predict the structure information of the protein based on the characteristics of the amplified sequence.
  • the data distribution similarity between the first database and the second database is higher than a similarity threshold.
  • the first database is a database obtained after randomly removing a specified proportion of data on the basis of the second database.
  • Optionally, the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network composed of multiple layers of Long Short-Term Memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
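As an illustration of the fully convolutional option, a one-dimensional convolution over a per-position feature track can be sketched in plain Python. The kernel values and feature sizes below are made up for the example; a real amplification model would learn its kernels from the paired feature samples:

```python
def conv1d(features, kernel, bias=0.0):
    """1-D convolution with zero padding ('same' output length).

    features: list of per-position scalar features (length L).
    kernel:   odd-length list of weights (learned in a real model).
    """
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(features) + [0.0] * pad
    out = []
    for i in range(len(features)):
        window = padded[i:i + k]
        out.append(sum(w * x for w, x in zip(kernel, window)) + bias)
    return out

# A per-position feature track for a toy 5-residue sequence.
initial = [1.0, 2.0, 3.0, 4.0, 5.0]
# Hypothetical learned smoothing kernel.
amplified = conv1d(initial, [0.25, 0.5, 0.25])
print(amplified)  # [1.0, 2.0, 3.0, 4.0, 3.5]
```

A fully convolutional model stacks such layers (with nonlinearities and many channels), which is why it can map an initial feature matrix to an amplified one of the same length regardless of sequence length.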
  • Optionally, the initial sequence feature and the amplified sequence feature are each a position-specific scoring matrix.
  • the device further includes:
  • An amplified sample acquisition module configured to process the initial sequence feature sample through the sequence feature amplification model to obtain an amplified initial sequence feature sample
  • the model update module is used to update the sequence feature amplification model according to the amplified initial sequence feature sample and the amplified sequence feature sample.
  • the model update module includes:
  • a loss function acquisition sub-module configured to perform loss function calculation according to the amplified initial sequence feature sample and the amplified sequence feature sample to obtain a loss function value
  • the parameter update sub-module is used to update the model parameters in the sequence feature amplification model according to the loss function value.
  • the loss function acquisition sub-module includes:
  • An error calculation unit configured to calculate the reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample
  • the loss function acquiring unit is configured to acquire the reconstruction error as the loss function value.
  • the error calculation unit calculates a root mean square reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample.
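The root-mean-square reconstruction error between the model output and the amplified sample can be computed, for example, as follows (a plain-Python sketch over small example matrices; a real implementation would operate on the full PSSM tensors):

```python
import math

def rms_reconstruction_error(predicted, target):
    """Root-mean-square error between two equally sized feature matrices,
    each given as a list of per-position feature rows."""
    total, count = 0.0, 0
    for p_row, t_row in zip(predicted, target):
        for p, t in zip(p_row, t_row):
            total += (p - t) ** 2
            count += 1
    return math.sqrt(total / count)

pred = [[1.0, 2.0], [3.0, 4.0]]
tgt  = [[1.0, 2.0], [3.0, 2.0]]
print(rms_reconstruction_error(pred, tgt))  # sqrt(4/4) = 1.0
```

Used as the loss function value, this error is what the parameter update sub-module minimizes when updating the amplification model.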
  • Optionally, the model update module is used to update the model parameters in the sequence feature amplification model according to the loss function value.
  • the structure information prediction module includes:
  • the structure information acquisition sub-module is used to predict the characteristics of the amplified sequence through a protein structure information prediction model to obtain the structure information of the protein;
  • the protein structure information prediction model is a model obtained by training based on the sequence characteristics of the protein sample and the structure information of the protein sample.
  • In one aspect, a computer device includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the above protein structure information prediction method.
  • In one aspect, a computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the above protein structure information prediction method.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the protein structure information prediction method provided in the various optional implementations of the foregoing aspects.
  • In the solutions above, a sequence alignment query is performed on the amino acid sequence of the protein, feature extraction is performed on the resulting multiple sequence alignment data, the amplified sequence feature of the protein is obtained through the sequence feature amplification model, and the structure information of the protein is then predicted.
  • With the sequence feature amplification model, it is only necessary to perform the sequence alignment query in the first database, which has a smaller data size, while still obtaining high prediction accuracy. Because sequence alignment queries against the smaller first database consume less time, the above solution improves the prediction efficiency of protein structure information while ensuring its prediction accuracy.
  • Fig. 1 is a framework diagram of model training and protein structure information prediction provided by an exemplary embodiment of the present application;
  • Fig. 2 is a model architecture diagram of a machine learning model provided by an exemplary embodiment of the present application;
  • Fig. 3 is a schematic flowchart of a method for predicting structure information of a protein provided by an exemplary embodiment of the present application;
  • Fig. 4 is a schematic flowchart of a machine learning model training and protein structure information prediction method provided by an exemplary embodiment of the present application;
  • Fig. 5 is a schematic diagram of training the sequence feature amplification model involved in the embodiment shown in Fig. 4;
  • Fig. 6 is a schematic diagram of protein structure information prediction involved in the embodiment shown in Fig. 4;
  • Fig. 7 is a structural block diagram of an apparatus for predicting protein structure information according to an exemplary embodiment;
  • Fig. 8 is a schematic structural diagram of a computer device according to an exemplary embodiment;
  • Fig. 9 is a schematic structural diagram of a terminal according to an exemplary embodiment.
  • the present application provides a protein structure information prediction method, which can recognize the structure information of the protein through artificial intelligence (AI), thereby providing an efficient and high-accuracy protein structure information prediction scheme.
  • Amino acid is a compound in which the hydrogen atom on the carbon atom of a carboxylic acid is replaced by an amino group.
  • An amino acid molecule contains two functional groups: an amino group and a carboxyl group. Similar to hydroxy acids, amino acids can be divided into α-, β-, γ-, ..., ω-amino acids according to the position of the amino group on the carbon chain, but the amino acids obtained after protein hydrolysis are all α-amino acids, of which there are only about twenty types; they are the basic units of proteins.
  • The 20 amino acids refer to glycine, alanine, valine, leucine, isoleucine, phenylalanine, proline, tryptophan, serine, tyrosine, cysteine, methionine, asparagine, glutamine, threonine, aspartic acid, glutamic acid, lysine, arginine, and histidine, the 20 amino acids that make up human proteins.
  • Compounds containing multiple peptide bonds formed by dehydration and condensation of these 20 amino acid molecules are called polypeptides.
  • Polypeptides are usually chain-like structures called peptide chains. The peptide chain can be twisted and folded to form a protein molecule with a certain spatial structure.
  • Protein structure refers to the spatial structure of a protein molecule. Proteins are mainly composed of carbon, hydrogen, oxygen, nitrogen, and other chemical elements, and are important biological macromolecules. All proteins are polymers formed by linking 20 different amino acids; once incorporated into a protein, these amino acids are also called residues.
  • the molecular structure of a protein can be divided into four levels to describe its different aspects:
  • Primary structure: the linear amino acid sequence of the polypeptide chain of a protein.
  • Secondary structure: regularly repeating local conformations of the peptide backbone stabilized by hydrogen bonds, such as the α-helix and β-sheet.
  • Tertiary structure: the three-dimensional structure of a protein molecule formed by the arrangement of multiple secondary structure elements in three-dimensional space.
  • Quaternary structure: describes the interaction of different polypeptide chains (subunits) to form a functional protein complex molecule.
  • Artificial intelligence is a theory, method, technology and application system that uses digital computers or digital computer-controlled machines to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or realize human learning behaviors to acquire new knowledge or skills, and how they reorganize existing knowledge structures to continuously improve their own performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent. Its applications are in all fields of artificial intelligence.
  • Machine learning and deep learning usually include artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, teaching learning and other technologies.
  • Fig. 1 is a framework diagram showing a model training and protein structure information prediction according to an exemplary embodiment.
  • the model training device 110 trains a machine learning model by performing multiple sequence alignment data query operations and sequence feature extraction operations on the amino acid sequence corresponding to the same protein on databases of different sizes.
  • the prediction device 120 can predict the structural information of the protein corresponding to the amino acid sequence based on the trained machine learning model and the input amino acid sequence.
  • The aforementioned model training device 110 and prediction device 120 may be computer devices with machine learning capabilities. The computer device may be a stationary device such as a personal computer, a server, or stationary scientific research equipment, or a mobile device such as a tablet computer or an e-book reader.
  • the aforementioned model training device 110 and the prediction device 120 are the same device, or the model training device 110 and the prediction device 120 are different devices.
  • The model training device 110 and the prediction device 120 may be the same type of device; for example, both may be personal computers. Alternatively, they may be different types of devices; for example, the model training device 110 may be a server and the prediction device 120 may be stationary scientific research equipment. The embodiments of the present application do not limit the specific types of the model training device 110 and the prediction device 120.
  • Fig. 2 is a model architecture diagram of a machine learning model according to an exemplary embodiment.
  • The machine learning model 20 in the embodiments of the present application may include two models: a sequence feature amplification model 210 and a protein structure information prediction model 220. The sequence feature amplification model 210 automatically amplifies the input sequence features, outputs the amplified sequence features, and passes them to the protein structure information prediction model 220.
  • The protein structure information prediction model 220 performs protein structure information prediction according to the amplified sequence features input by the sequence feature amplification model 210, and outputs the prediction result.
  • That is, the prediction does not directly use the sequence features extracted from a single database through a multiple sequence alignment query as input to the protein structure information prediction model; instead, the amplified sequence features are used as the input for predicting protein structure information. Compared with sequence features obtained from a single database, the automatically amplified sequence features yield more accurate protein structure information predictions.
  • Proteins have important practical roles in organisms. For example, proteins can cause certain genetic diseases, or proteins can make organisms immune to specific diseases.
  • the role of a protein in an organism is largely determined by its three-dimensional structure, while the three-dimensional structure of a protein is essentially determined by its corresponding amino acid sequence information.
  • The three-dimensional structure of a protein can be determined experimentally, for example by X-ray crystallography, nuclear magnetic resonance, or cryo-electron microscopy. Because of the high time and economic cost of these experimental methods, directly predicting the three-dimensional structure of a protein from its amino acid sequence through computational rather than experimental methods is of high scientific significance and practical value.
  • The accuracy of predicted partial structure information of a protein, such as main-chain dihedral angles or secondary structure, determines the accuracy of the final predicted three-dimensional structure. Therefore, in view of the trade-off between prediction accuracy and computational efficiency in sequence-feature-based protein structure prediction algorithms, the protein structure information prediction method proposed in this application can reduce the data scale requirements on the amino acid sequence database, achieve prediction accuracy similar to that of traditional methods at lower database storage and query cost, improve the prediction accuracy and computational efficiency of protein structure information, and thereby improve the prediction accuracy of the three-dimensional structure.
  • FIG. 3 shows a schematic flow chart of a method for predicting structure information of a protein provided by an exemplary embodiment of the present application.
  • the protein structure information prediction method can be executed by a computer device, such as the prediction device 120 shown in FIG. 1 described above.
  • the protein structure information prediction method may include the following steps:
  • Step 310 Perform a sequence alignment query in the first database according to the amino acid sequence of the protein to obtain multiple sequence alignment data.
  • the computer device can obtain the multi-sequence alignment data through the sequence alignment operation.
  • sequence alignment refers to aligning multiple amino acid sequences and highlighting similar structural regions among them.
  • the first database is a database containing several amino acid sequences.
  • Step 320 Perform feature extraction on the multi-sequence alignment data to obtain initial sequence features.
  • In the first database, the prediction device can obtain the homologous sequences of each amino acid sequence through a multiple sequence alignment query using the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), and then derive a Position-Specific Scoring Matrix (PSSM) from the homology information of the aligned sequences; this matrix can serve as the aforementioned sequence feature.
  • The position-specific scoring matrix can be expressed as, for each position obtained after the multiple sequence alignment, the frequency with which an amino acid occurs at that position, or the frequency of each amino acid at each position, or the probability of each amino acid at each position.
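The frequency interpretation of the PSSM described above can be illustrated with a toy alignment. Real PSSMs come from PSI-BLAST profiles over all 20 amino acid types, usually with log-odds scoring; the example below only computes raw per-column frequencies:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def frequency_pssm(alignment):
    """Per-position amino acid frequencies from an already aligned MSA.

    alignment: list of equal-length amino acid strings.
    Returns one {amino_acid: frequency} dict per alignment column.
    """
    n = len(alignment)
    length = len(alignment[0])
    matrix = []
    for col in range(length):
        counts = Counter(seq[col] for seq in alignment)
        matrix.append({aa: counts.get(aa, 0) / n for aa in AMINO_ACIDS})
    return matrix

# Toy 3-sequence alignment of length 3.
msa = ["MKV", "MRV", "MKL"]
pssm = frequency_pssm(msa)
print(pssm[1]["K"])  # K appears in 2 of 3 sequences at column 1
```

Each column of such a matrix is one feature vector; the initial and amplified sequence features in the embodiments are matrices of this shape over the whole sequence.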
  • step 330 the initial sequence feature is processed by the sequence feature amplification model to obtain the amplified sequence feature of the protein.
  • The prediction device may input the above initial sequence feature into the sequence feature amplification model, which performs feature amplification on it, that is, adds new features to the initial sequence feature to obtain a more comprehensive amplified sequence feature.
  • The sequence feature amplification model is a machine learning model obtained by training on initial sequence feature samples and amplified sequence feature samples; an initial sequence feature sample is obtained by performing a sequence alignment query in the first database based on an amino acid sequence sample, and the corresponding amplified sequence feature sample is obtained by performing a sequence alignment query in the second database based on the same sample; the data size of the second database is larger than the data size of the first database.
  • During model training, the computer device can use the initial sequence feature sample as the input of the sequence feature amplification model and the corresponding amplified sequence feature sample as its annotation (label), and train the sequence feature amplification model accordingly.
  • the sequence feature amplification model may be a fully convolutional neural network model (Fully Convolutional Networks for Semantic Segmentation, FCN) for one-dimensional sequence data.
  • Alternatively, the sequence feature amplification model is a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network model composed of bidirectional LSTM units.
  • A recurrent neural network (RNN) is a type of neural network that takes sequence data as input and recurses along the evolution direction of the sequence, with all recurrent units connected in a chain.
  • Step 340 Predict the structural information of the protein based on the amplified sequence characteristics.
  • the prediction device predicts the structural information of the protein, which may include but is not limited to predicting the dihedral angle of the main chain of the protein and/or the secondary structure information of the protein.
  • The dihedral angles lie between two adjacent amide planes, which can rotate about the shared Cα as a pivot: the angle of rotation around the Cα–N bond is called the φ angle, and the angle of rotation around the C–Cα bond is called the ψ angle. Together, the φ angle and the ψ angle are called the dihedral angles.
  • The main chain of the peptide chain can be regarded as composed of many planes separated by Cα atoms.
  • the dihedral angle determines the relative position of the two peptide planes, that is, determines the position and conformation of the main chain of the peptide chain.
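Given backbone atom coordinates, a dihedral angle such as φ (defined by C of the previous residue, N, Cα, C) or ψ (N, Cα, C, N of the next residue) can be computed from four 3-D points with the standard atan2 formulation. The coordinates below are made-up values for illustration:

```python
import math

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (degrees) defined by four 3-D points, via atan2."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))

    b1, b2, b3 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b1, b2), cross(b2, b3)          # normals of the two planes
    b2_len = math.sqrt(dot(b2, b2))
    m1 = cross(n1, [c / b2_len for c in b2])       # frame vector orthogonal to n1
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

# Four coplanar points in a cis arrangement give a dihedral of 0 degrees.
print(dihedral([0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]))  # prints 0.0
```

Predicting the (φ, ψ) pair per residue, as the embodiments describe, is thus equivalent to predicting the relative orientation of consecutive peptide planes along the main chain.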
  • Protein secondary structure refers to the specific conformation formed when the backbone atoms of the polypeptide backbone spiral or fold along a certain axis, that is, the spatial position of the backbone atoms of the peptide chain, and does not involve the side chains of amino acid residues.
  • the main forms of protein secondary structure include ⁇ -helix, ⁇ -sheet, ⁇ -turn and random coil. Due to the large molecular weight of proteins, different peptides of a protein molecule can contain different forms of secondary structure. In proteins, the main force for maintaining secondary structure is hydrogen bonding.
  • the secondary structure of a protein is not a simple ⁇ -helix or ⁇ -sheet structure, but also includes a combination of these different types of conformations. In different proteins, the proportions of different types of conformations may vary.
  • In summary, in the solution shown in the embodiments of the present application, a sequence alignment query is performed on the amino acid sequence of the protein, feature extraction is performed on the multiple sequence alignment data, the amplified sequence feature of the protein is obtained through the sequence feature amplification model, and the structure information of the protein is then predicted.
  • With the sequence feature amplification model, it is only necessary to perform the sequence alignment query in the first database, which has a smaller data scale, while still obtaining high prediction accuracy. Because sequence alignment queries against the smaller first database consume less time, the above solution improves the prediction efficiency of protein structure information while ensuring its prediction accuracy.
  • FIG. 4 shows a schematic flow chart of a machine learning model training and protein structure information prediction method provided by an exemplary embodiment of the present application.
  • The scheme is divided into two parts: machine learning model training and protein structure information prediction.
  • The machine learning model training and protein structure information prediction methods can be executed by computer equipment, which may include the training device 110 and the prediction device 120 shown in Fig. 1 above.
  • the machine learning model training and protein structure information prediction method may include the following steps:
  • Step 401 The training device performs a sequence alignment query in the first database according to the amino acid sequence sample, and obtains an initial sequence feature sample according to the query result.
  • the training device may perform sequence alignment query in the first database according to the amino acid sequence samples to obtain multi-sequence alignment data, and then perform feature extraction on the multi-sequence alignment data to obtain the aforementioned initial sequence feature samples.
• the amino acid sequence of a certain protein can be composed of multiple amino acids selected from, for example, the 20 known basic amino acids.
  • the above-mentioned amino acid sequence sample may be a currently known amino acid sequence of a protein, or the above-mentioned amino acid sequence sample may also be an amino acid sequence generated randomly or according to a certain rule.
• the aforementioned amino acid sequence sample includes an amino acid sequence with known protein structure information, or an amino acid sequence with unknown protein structure information, or both amino acid sequences with known protein structure information and amino acid sequences with unknown protein structure information.
  • Step 402 The training device performs sequence alignment query in the second database according to the amino acid sequence samples, and obtains the amplified sequence feature samples according to the query results.
  • the training device may perform sequence alignment query in the second database according to the amino acid sequence samples to obtain multi-sequence alignment data, and then perform feature extraction on the multi-sequence alignment data to obtain the aforementioned amplified sequence feature samples.
  • the training device obtains the initial sequence feature sample and the amplified sequence feature sample from the first database and the second database through the same amino acid sequence sample, and the initial sequence feature sample and the amplified sequence feature sample have a one-to-one correspondence.
  • the aforementioned initial sequence feature samples and amplified sequence feature samples may be sequence features extracted according to the same feature extraction algorithm.
• the aforementioned initial sequence feature samples and amplified sequence feature samples may both be position-specific scoring matrices, and the elements in the two matrices are of the same type.
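• As a toy illustration only (not the patent's exact feature extraction algorithm), a position-specific scoring matrix can be built from multi-sequence alignment data by converting per-position amino acid frequencies into log-odds scores against a uniform background; the pseudocount value and function names below are illustrative assumptions:

```python
import math

# Toy sketch: build a position-specific scoring matrix (PSSM) from a
# multiple sequence alignment. Each row corresponds to a position in the
# query sequence, each column to one of the 20 basic amino acids; entries
# are log-odds of observed frequency versus a uniform background.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def build_pssm(alignment, pseudocount=1.0):
    """alignment: list of equal-length aligned sequences ('-' marks a gap)."""
    length = len(alignment[0])
    background = 1.0 / len(AMINO_ACIDS)
    pssm = []
    for pos in range(length):
        column = [seq[pos] for seq in alignment if seq[pos] != "-"]
        total = len(column) + pseudocount * len(AMINO_ACIDS)
        row = []
        for aa in AMINO_ACIDS:
            freq = (column.count(aa) + pseudocount) / total
            row.append(math.log(freq / background))  # log-odds score
        pssm.append(row)
    return pssm  # an L x 20 matrix of sequence features

pssm = build_pssm(["MKT", "MKS", "M-T"])
```

A conserved residue (here M at position 0) receives a positive score, while an unobserved residue receives a negative one, which is the general shape of the sequence features discussed above.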
  • the data scale of the aforementioned second database is larger than the data scale of the first database.
• the first database and the second database are respectively amino acid sequence databases, each containing several amino acid sequences, and the number of amino acid sequences contained in the second database is greater than the number of amino acid sequences contained in the first database.
  • the aforementioned similarity of data distribution between the first database and the second database is higher than the similarity threshold.
• the above-mentioned first database and second database may use databases with similar data distributions; that is, the similarity between the data distributions of the first database and the second database needs to be higher than a predetermined similarity threshold.
  • the aforementioned similarity threshold may be a value preset by the developer.
  • the first database and the second database are the same kind of database, but have different data sizes.
  • the foregoing database may be two existing databases with similar data distribution.
  • the foregoing first database and second database may be UniRef databases with different data sizes; or, the foregoing first database and second database may be Swiss-Prot database and TrEMBL database in UniProtKB database.
• the UniRef database can be divided into three levels according to sequence identity, namely 100%, 90% and 50%, corresponding to the UniRef100, UniRef90 and UniRef50 databases, respectively; relative to the complete database, the data volumes of these three databases are reduced by 10%, 40% and 70%, respectively.
  • the aforementioned first database may be the UniRef50 database
  • the second database may be the UniRef90 or UniRef100 database (the data size of the UniRef50 database is smaller than the data size of the UniRef90 or UniRef100 database).
  • the first database may be the UniRef90 database
  • the second database may be the UniRef100 database.
  • the above-mentioned first database is a database obtained after randomly removing a specified proportion of data on the basis of the second database.
  • the aforementioned specified ratio may be a ratio preset by the developer.
  • the training device may randomly remove a specified proportion (for example, 50%) of amino acid sequences on the basis of the second database to obtain the first database.
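• A minimal sketch of this construction, assuming the second database is simply a list of sequence records; the record names and fixed random seed are illustrative assumptions added for reproducibility:

```python
import random

# Sketch: build the first database by randomly removing a specified
# proportion (here 50%) of amino acid sequences from the second database.
def subsample_database(second_database, removal_ratio=0.5, seed=0):
    """Randomly keep (1 - removal_ratio) of the sequences."""
    rng = random.Random(seed)  # fixed seed so the subsample is reproducible
    keep = int(round(len(second_database) * (1.0 - removal_ratio)))
    return rng.sample(second_database, keep)

second_db = [f"SEQ{i:04d}" for i in range(1000)]  # placeholder records
first_db = subsample_database(second_db, removal_ratio=0.5)
```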
  • the aforementioned second database may be an existing database.
• the second database may be the aforementioned UniRef90 database (or another existing database), and the training device then randomly removes half of the amino acid sequences from the UniRef90 database to obtain the aforementioned first database.
  • Step 403 The training device processes the initial sequence feature sample through the sequence feature amplification model to obtain the amplified initial sequence feature sample.
• the computer device processes the initial sequence feature sample through the sequence feature amplification model to obtain the amplified initial sequence feature sample; this process is similar to the process of obtaining the amplified sequence feature in the embodiment shown in FIG. 3 above, and is not repeated here.
  • the sequence feature amplification model in this step may be a model that has not been trained yet.
  • step 404 the training device updates the sequence feature amplification model according to the amplified initial sequence feature sample and the amplified sequence feature sample.
  • the training device performs a loss function calculation according to the amplified initial sequence feature sample and the amplified sequence feature sample to obtain the loss function value. Then, the training device updates the model parameters in the sequence feature amplification model according to the loss function value.
  • the training device calculates the reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample, and obtains the reconstruction error as the loss function value.
• the above reconstruction error is the root-mean-square reconstruction error; that is, when obtaining the reconstruction error, the training device calculates the root-mean-square reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample, and obtains the root-mean-square reconstruction error as the loss function value.
• both x and z are matrices of size L × D.
• the reconstruction error between the automatically amplified initial sequence feature sample x and the reference sequence feature z can be obtained by the root-mean-square reconstruction error calculation method; the calculation formula is:

  E(x, z) = sqrt( (1 / (L × D)) × Σ_{i=1..L} Σ_{j=1..D} (x_ij − z_ij)² )

• x_ij and z_ij are the elements in the i-th row and j-th column of the matrix x and the matrix z, respectively.
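• The root-mean-square reconstruction error described above can be sketched directly in code, treating x and z as L × D matrices (nested lists here for simplicity):

```python
import math

# Root-mean-square reconstruction error between the amplified initial
# sequence feature sample x and the amplified (reference) sequence feature
# sample z, both L x D matrices; this value serves as the loss function.
def rms_reconstruction_error(x, z):
    L, D = len(x), len(x[0])
    total = sum((x[i][j] - z[i][j]) ** 2
                for i in range(L) for j in range(D))
    return math.sqrt(total / (L * D))

x = [[1.0, 2.0], [3.0, 4.0]]
z = [[1.0, 2.0], [3.0, 2.0]]
loss = rms_reconstruction_error(x, z)  # sqrt(4 / 4) = 1.0
```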
  • FIG. 5 shows a schematic diagram of training of a sequence feature amplification model involved in an embodiment of the present application.
  • the training process of the sequence feature amplification model is as follows:
  • the training device obtains an amino acid sequence sample, and performs a multi-sequence alignment data query operation of the amino acid sequence sample on the UniRef50 database to obtain a multi-sequence alignment data result.
  • the training device performs feature extraction on the result of the multi-sequence alignment data of S51 to obtain sequence features before automatic amplification, which may also be referred to as an initial sequence feature sample.
  • the training device performs the multi-sequence alignment data query operation of the amino acid sequence on the UniRef90 database to obtain the multi-sequence alignment data result.
  • the training device performs feature extraction on the result of the multi-sequence alignment data of S53 to obtain a reference sequence feature, which may also be referred to as an amplified sequence feature sample.
  • the training device inputs the initial sequence feature sample into the sequence feature amplification model.
  • the sequence feature amplification model outputs the amplified sequence features, which can be referred to as the initial sequence feature sample after amplification.
  • the training device calculates the reconstruction error between the amplified sequence feature and the reference sequence feature as a loss function according to the formula, and trains and updates the sequence feature amplification model according to the loss function.
  • the training device updates the model parameters in the sequence feature amplification model according to the loss function value.
• the training device can judge whether the model has converged according to the value of the loss function. If the sequence feature amplification model has converged, the training device can end the training and output the sequence feature amplification model to the prediction device, so that the prediction device can predict the structural information of the protein.
  • the training device may update the model parameters in the sequence feature amplification model according to the loss function value.
• the training device compares the above-mentioned loss function value with a preset loss function threshold. If the loss function value is less than the loss function threshold, the output of the sequence feature amplification model is already close to the result obtained from the query in the second database, indicating that the model can achieve a good feature amplification effect, and it is determined that the model has converged; conversely, if the loss function value is not less than the loss function threshold, the output of the sequence feature amplification model is still far from the result obtained from the query in the second database, indicating that the model has not yet achieved a good feature amplification effect, and it is determined that the model has not converged.
• the training device compares the above-mentioned loss function value with the loss function value obtained in the previous round of updating. If the difference between the two is less than a difference threshold, the accuracy of the sequence feature amplification model is changing only slightly and further training cannot achieve a significant improvement, so it is determined that the model has converged; conversely, if the difference between the loss function value obtained this time and that obtained in the previous round is not less than the difference threshold, the accuracy of the model is still improving considerably and further training may bring a significant improvement, so it is determined that the model has not converged.
• the training device compares the above-mentioned loss function value with both the loss function value obtained in the previous round of updating and the preset loss function threshold. If the loss function value is less than the loss function threshold, and the difference between the loss function value obtained this time and that obtained in the previous round is less than the difference threshold, it is determined that the model has converged.
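• The combined convergence criterion described above can be sketched as follows; the threshold values are illustrative assumptions, not values from the patent:

```python
# Combined convergence check: the model is judged to have converged when the
# current loss is below a preset loss threshold AND the change relative to
# the previous round's loss is below a difference threshold.
def has_converged(current_loss, previous_loss,
                  loss_threshold=0.05, diff_threshold=0.001):
    below_threshold = current_loss < loss_threshold
    small_change = abs(current_loss - previous_loss) < diff_threshold
    return below_threshold and small_change

r1 = has_converged(0.04, 0.0405)  # low loss, tiny change -> converged
r2 = has_converged(0.04, 0.10)    # low loss but still improving quickly
r3 = has_converged(0.20, 0.2005)  # stable, but loss still too high
```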
  • the prediction device can predict the structure information of the protein whose structure is unknown according to the sequence feature amplification model and the above-mentioned first database.
  • the prediction process can refer to the subsequent steps.
  • Step 405 The prediction device performs a sequence alignment query in the first database according to the amino acid sequence of the protein to obtain multiple sequence alignment data.
  • the protein in this step may be a protein that requires structural information prediction.
  • Step 406 The prediction device performs feature extraction on the multi-sequence alignment data to obtain initial sequence features.
  • Step 407 The prediction device processes the initial sequence feature through the sequence feature amplification model to obtain the amplified sequence feature of the protein.
• For the process from step 405 to step 407, reference may be made to the description in the embodiment shown in FIG. 3, which will not be repeated here.
  • Step 408 Predict the structural information of the protein based on the amplified sequence characteristics.
• the prediction device can process the amplified sequence feature through a protein structure information prediction model to obtain the protein structure information of the protein, where the protein structure information prediction model is a model obtained by training based on the sequence features of protein samples and the structural information of those protein samples.
  • the aforementioned protein structure information prediction model is an existing one, which is a machine learning model trained by other computer equipment.
  • the protein structure information prediction model used to predict the structure information of the protein may also be a model obtained through machine learning.
  • the training device can obtain several protein samples with known structural information and the amino acid sequence of each protein sample; then, the training device performs sequence alignment query in the third database according to the amino acid sequence of the protein sample to obtain multiple sequence alignment data , And perform feature extraction on the multi-sequence alignment data obtained by the query to obtain the sequence feature of the protein sample; then take the sequence feature of the protein sample as input and the structure information of the protein sample as the annotation information to train the above-mentioned protein structure information prediction model . After the protein structure information prediction model is trained, it can be applied to this step. The prediction device predicts the structure information of the protein according to the amplified sequence characteristics of the protein to be predicted and the protein structure information prediction model.
• in order to improve the accuracy of predicting the structure information of the protein from the amplified sequence features and the protein structure information prediction model, the above-mentioned second database can be used as the database used in the training process of the protein structure information prediction model (i.e., the third database); that is, the above-mentioned second database and the third database may be the same database.
• the above-mentioned second database and the third database are different databases.
• the third database may be a database with a larger data scale than the second database, and the similarity of the data distribution between the second database and the third database is higher than the similarity threshold.
  • the second database may be the UniRef90 database
  • the third database may be the UniRef100 database.
  • FIG. 6 shows a schematic diagram of protein structure information prediction involved in an embodiment of the present application.
  • the process of protein structure information prediction is as follows:
  • the prediction device obtains an amino acid sequence, and performs a multi-sequence alignment data query operation of the amino acid sequence on the UniRef50 database to obtain a multi-sequence alignment data result.
  • the prediction device performs feature extraction on the result of the multi-sequence alignment data to obtain the sequence feature before automatic amplification.
  • the prediction device inputs the sequence feature before automatic amplification into the trained sequence feature amplification model.
  • the sequence feature amplification model outputs the automatically amplified sequence features.
  • the prediction device inputs the automatically amplified sequence features into the protein structure information prediction model.
  • the protein structure information prediction model outputs the protein structure information prediction result corresponding to the amino acid sequence.
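• The prediction flow S61 to S66 can be sketched end to end as follows; every stage is a hypothetical stub (none of these function names come from the patent), so only the pipeline shape is shown. A real system would run a multi-sequence alignment query against the UniRef50 database and invoke the two trained models:

```python
# Stub pipeline illustrating the S61-S66 prediction flow described above.
def query_msa_uniref50(amino_acid_sequence):
    # Stub: a real system performs a sequence alignment query here (S61).
    return [amino_acid_sequence, amino_acid_sequence]

def extract_features(msa):
    # Stub: one pre-amplification feature value per residue position (S62).
    return [float(len(msa)) for _ in msa[0]]

def amplify_features(features):
    # Stub standing in for the trained sequence feature amplification model.
    return [f * 2.0 for f in features]

def predict_structure(features):
    # Stub standing in for the protein structure information prediction model.
    return {"length": len(features), "score": sum(features)}

sequence = "MKTAYIAK"
msa = query_msa_uniref50(sequence)         # S61
features = extract_features(msa)           # S62
amplified = amplify_features(features)     # S63-S64
prediction = predict_structure(amplified)  # S65-S66
```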
  • the training device and the prediction device may be the same computer device, that is, the computer device first trains to obtain the sequence feature amplification model, and then performs protein structure information according to the sequence feature amplification model prediction.
• the training device and the prediction device may be different computer devices; that is, the training device first trains to obtain the sequence feature amplification model and provides it to the prediction device, and the prediction device then predicts the structural information of the protein according to the sequence feature amplification model.
  • the sequence alignment query is performed on the amino acid sequence of the protein
  • the feature extraction is performed on the multi-sequence alignment data
• the amplified sequence feature of the protein is obtained through a sequence feature amplification model, and the structural information of the protein is then predicted.
• with the sequence feature amplification model, it is only necessary to perform the sequence alignment query in the first database with a smaller data scale, and a higher prediction accuracy can still be obtained.
• the smaller first database consumes less time for the sequence alignment query. Therefore, the above solution can improve the prediction efficiency of protein structure information while ensuring the prediction accuracy of protein structure information.
  • Fig. 7 is a block diagram showing the structure of an apparatus for predicting protein structure information according to an exemplary embodiment.
  • the protein structure information prediction device can be implemented as all or part of a computer device in a hardware or a combination of software and hardware, so as to perform all or part of the steps of the method shown in the corresponding embodiment of FIG. 3 or FIG. 4.
  • the protein structure information prediction device may include:
  • the data acquisition module 710 is configured to perform sequence alignment query in the first database according to the amino acid sequence of the protein to obtain multiple sequence alignment data;
  • the initial feature acquisition module 720 is configured to perform feature extraction on the multi-sequence alignment data to obtain initial sequence features
• the amplification feature acquisition module 730 is configured to process the initial sequence feature through a sequence feature amplification model to obtain the amplified sequence feature of the protein; the sequence feature amplification model is a machine learning model obtained by training based on an initial sequence feature sample and an amplified sequence feature sample; the initial sequence feature sample is obtained by performing a sequence alignment query in the first database based on an amino acid sequence sample, and the amplified sequence feature sample is obtained by performing a sequence alignment query in the second database based on the amino acid sequence sample; the data size of the second database is larger than the data size of the first database;
  • the structure information prediction module 740 is configured to predict the structure information of the protein based on the amplified sequence features.
  • the data distribution similarity between the first database and the second database is higher than a similarity threshold.
  • the first database is a database obtained after randomly removing a specified proportion of data on the basis of the second database.
• the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network composed of multi-layer long short-term memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
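• A brief sketch of why a fully convolutional architecture suits variable-length sequence features: a one-dimensional "same"-padded convolution maps an input of sequence length L to an output of the same length L, so proteins of any length can be processed uniformly. The single-channel, single-filter convolution below is purely illustrative:

```python
# Minimal 1-D "same"-padded convolution: the output has the same sequence
# length as the input, which is why fully convolutional networks handle
# L x D feature matrices of arbitrary L. One channel, one filter, for
# illustration only.
def conv1d_same(features, kernel):
    """features: list of floats (one channel); kernel: odd-length filter."""
    half = len(kernel) // 2
    padded = [0.0] * half + features + [0.0] * half  # zero padding
    return [
        sum(kernel[k] * padded[i + k] for k in range(len(kernel)))
        for i in range(len(features))
    ]

out = conv1d_same([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0])  # identity kernel
```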
  • the initial sequence feature and the amplified sequence feature are a position-specific scoring matrix.
  • the device further includes:
  • An amplified sample acquisition module configured to process the initial sequence feature sample through the sequence feature amplification model to obtain an amplified initial sequence feature sample
  • the model update module is used to update the sequence feature amplification model according to the amplified initial sequence feature sample and the amplified sequence feature sample.
  • the model update module includes:
  • a loss function acquisition sub-module configured to perform loss function calculation according to the amplified initial sequence feature sample and the amplified sequence feature sample to obtain a loss function value
  • the parameter update sub-module is used to update the model parameters in the sequence feature amplification model according to the loss function value.
  • the loss function acquisition sub-module includes:
  • An error calculation unit configured to calculate the reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample
  • the loss function acquiring unit is configured to acquire the reconstruction error as the loss function value.
  • the error calculation unit calculates a root mean square reconstruction error between the amplified initial sequence feature sample and the amplified sequence feature sample.
• the model update module is configured to update the model parameters in the sequence feature amplification model according to the loss function value.
  • the structure information prediction module 740 includes:
  • the structure information acquisition sub-module is used to predict the characteristics of the amplified sequence through a protein structure information prediction model to obtain the structure information of the protein;
  • the protein structure information prediction model is a model obtained by training based on the sequence characteristics of the protein sample and the structure information of the protein sample.
  • the sequence alignment query is performed on the amino acid sequence of the protein
  • the feature extraction is performed on the multi-sequence alignment data
• the amplified sequence feature of the protein is obtained through a sequence feature amplification model, and the structural information of the protein is then predicted.
• with the sequence feature amplification model, it is only necessary to perform the sequence alignment query in the first database with a smaller data scale, and a higher prediction accuracy can still be obtained.
• the smaller first database consumes less time for the sequence alignment query. Therefore, the above solution can improve the prediction efficiency of protein structure information while ensuring the prediction accuracy of protein structure information.
  • Fig. 8 is a schematic structural diagram of a computer device according to an exemplary embodiment.
  • the computer device may be implemented as a training device or a prediction device in each of the foregoing embodiments, or may also be implemented as a combination of a training device and a prediction device.
  • the computer device 800 includes a central processing unit (CPU) 801, a system memory 804 including a random access memory (RAM) 802 and a read only memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801 .
• the server 800 also includes a basic input/output system (I/O system) 806 that helps transfer information between various devices in the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
  • the basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse and a keyboard for the user to input information.
  • the display 808 and the input device 809 are both connected to the central processing unit 801 through the input and output controller 810 connected to the system bus 805.
  • the basic input/output system 806 may also include an input and output controller 810 for receiving and processing input from multiple other devices such as a keyboard, a mouse, or an electronic stylus.
  • the input and output controller 810 also provides output to a display screen, a printer, or other types of output devices.
  • the mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805.
  • the mass storage device 807 and its associated computer readable medium provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, disk storage or other magnetic storage devices.
  • the server 800 may be connected to the Internet or other network devices through the network interface unit 811 connected to the system bus 805.
• the memory also includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 801 executes the one or more programs to realize the steps performed by the computer equipment in the method for predicting the structure information of the protein shown in FIG. 3 or FIG. 4.
  • This application also provides a computer program product, which when the computer program product runs on a computer, causes the computer to execute the methods provided in the foregoing method embodiments.
  • FIG. 9 shows a structural block diagram of a terminal 900 provided by an exemplary embodiment of the present application.
• the terminal 900 can be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the terminal 900 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
  • the foregoing terminal may be implemented as the prediction device in each of the foregoing method embodiments. For example, it can be implemented as the prediction device 120 in FIG. 1.
  • the terminal 900 includes a processor 901 and a memory 902.
  • the processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
• the processor 901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 901 may also include a main processor and a coprocessor.
• the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 901 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used to render and draw content that needs to be displayed on the display screen.
  • the processor 901 may further include an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the memory 902 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 902 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
• the non-transitory computer-readable storage medium in the memory 902 is used to store at least one instruction, and the at least one instruction is executed by the processor 901 to realize the protein structure information prediction method provided in the method embodiments of the present application.
  • the terminal 900 may optionally further include: a peripheral device interface 903 and at least one peripheral device.
  • the processor 901, the memory 902, and the peripheral device interface 903 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 903 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 904, a touch display screen 905, a camera 906, an audio circuit 907, a positioning component 908, and a power supply 909.
  • the peripheral device interface 903 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 901 and the memory 902.
• the processor 901, the memory 902, and the peripheral device interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral device interface 903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 904 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 904 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and so on.
  • the radio frequency circuit 904 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
  • the radio frequency circuit 904 may also include a circuit related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 905 is used to display a UI (User Interface, user interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 905 also has the ability to collect touch signals on or above the surface of the display screen 905.
  • the touch signal can be input to the processor 901 as a control signal for processing.
  • the display screen 905 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • the display screen 905 may be a flexible display screen, which is disposed on the curved surface or the folding surface of the terminal 900.
  • the display screen 905 can also be configured as a non-rectangular irregular pattern, that is, a special-shaped screen.
  • the display screen 905 may be made of materials such as LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode, organic light-emitting diode).
  • the camera assembly 906 is used to capture images or videos.
  • the camera assembly 906 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • the camera assembly 906 may also include a flash.
  • the flash can be a single-color flash or a dual-color flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • the audio circuit 907 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input to the processor 901 for processing, or input to the radio frequency circuit 904 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 901 or the radio frequency circuit 904 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • the speaker When the speaker is a piezoelectric ceramic speaker, it can not only convert the electrical signal into human audible sound waves, but also convert the electrical signal into human inaudible sound waves for distance measurement and other purposes.
  • the audio circuit 907 may also include a headphone jack.
  • the positioning component 908 is used to locate the current geographic location of the terminal 900 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 908 may be a positioning component based on the GPS (Global Positioning System, Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of Russia.
  • the power supply 909 is used to supply power to various components in the terminal 900.
  • the power source 909 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line
  • a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 900 further includes one or more sensors 910.
  • the one or more sensors 910 include, but are not limited to: an acceleration sensor 911, a gyroscope sensor 912, a pressure sensor 913, a fingerprint sensor 914, an optical sensor 915, and a proximity sensor 916.
  • the acceleration sensor 911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 900.
  • the acceleration sensor 911 may be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 901 may control the touch screen 905 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 911.
  • the acceleration sensor 911 may also be used for game or user motion data collection.
  • the gyroscope sensor 912 can detect the body direction and the rotation angle of the terminal 900, and the gyroscope sensor 912 can cooperate with the acceleration sensor 911 to collect the user's 3D actions on the terminal 900.
  • the processor 901 can implement the following functions according to the data collected by the gyroscope sensor 912: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 913 may be provided on the side frame of the terminal 900 and/or the lower layer of the touch screen 905.
  • the processor 901 performs left and right hand recognition or quick operation according to the holding signal collected by the pressure sensor 913.
  • the processor 901 controls the operability controls on the UI interface according to the user's pressure operation on the touch display screen 905.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 914 is used to collect the user's fingerprint, and the processor 901 can identify the user's identity according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 can identify the user's identity according to the collected fingerprint. When it is recognized that the user's identity is a trusted identity, the processor 901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 914 may be provided on the front, back, or side of the terminal 900. When a physical button or a manufacturer logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical button or the manufacturer logo.
  • the optical sensor 915 is used to collect the ambient light intensity.
  • the processor 901 may control the display brightness of the touch screen 905 according to the ambient light intensity collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 905 is decreased.
  • the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
  • the proximity sensor 916 also called a distance sensor, is usually provided on the front panel of the terminal 900.
  • the proximity sensor 916 is used to collect the distance between the user and the front of the terminal 900.
  • the processor 901 controls the touch screen 905 to switch from the on-screen state to the off-screen state; when the proximity sensor 916 detects When the distance between the user and the front of the terminal 900 gradually increases, the processor 901 controls the touch display screen 905 to switch from the rest screen state to the bright screen state.
  • FIG. 9 does not constitute a limitation on the terminal 900, and may include more or fewer components than shown in the figure, or combine certain components, or adopt different component arrangements.
  • the program can be stored in a computer-readable storage medium.
  • the medium may be a computer-readable storage medium included in the memory in the foregoing embodiment; or may be a computer-readable storage medium that exists alone and is not assembled into the terminal.
  • the computer-readable storage medium stores at least one instruction, at least one program, code set or instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor In order to realize the protein structure information prediction method as described in FIG. 3 or FIG. 4.
  • the computer-readable storage medium may include: read only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), solid state drive (SSD, Solid State Drives), optical disks, and the like.
  • random access memory may include resistive random access memory (ReRAM, Resistance Random Access Memory) and dynamic random access memory (DRAM, Dynamic Random Access Memory).
  • ReRAM resistive random access memory
  • DRAM Dynamic Random Access Memory
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the protein structure information prediction method provided in the various alternative implementations of the foregoing aspects.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium mentioned can be a read-only memory, a magnetic disk or an optical disk, etc.

Abstract

A method, apparatus, device, and storage medium for predicting the structure information of a protein, relating to the field of biological information technology. The method includes: performing a sequence alignment query in a first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data; performing feature extraction on the multiple sequence alignment data to obtain an initial sequence feature; processing the initial sequence feature by means of a sequence feature amplification model to obtain an amplified sequence feature of the protein; and then predicting the structure information of the protein according to the amplified sequence feature. When the structure information of a protein is predicted on the basis of artificial intelligence, the above solution can improve the prediction efficiency of the structure information of the protein while guaranteeing the prediction accuracy.

Description

Method, apparatus, device, and storage medium for predicting protein structure information
This application claims priority to Chinese Patent Application No. 201911042649.9, entitled "Method, apparatus, device, and storage medium for predicting protein structure information", filed on October 30, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of biological information technology, and in particular to a method, apparatus, device, and storage medium for predicting the structure information of a protein.
Background
The actual role a protein plays in an organism is closely related to its three-dimensional structure; accurately determining the three-dimensional structure of a protein is therefore of great significance.
Since the three-dimensional structure of a protein is essentially determined by its corresponding amino acid sequence information, in the related art the structure information of a protein can be determined from its amino acid sequence. For example, when determining the structure information of a protein from its amino acid sequence, a multiple-sequence-alignment query operation is first performed in an amino acid sequence database according to the amino acid sequence of the protein so as to extract the sequence feature of that amino acid sequence, and the structure information of the protein is then predicted from the sequence feature. The accuracy of the sequence feature extraction is directly related to the data scale of the database: the larger the amino acid sequence database, the higher the accuracy of the sequence feature extraction.
However, in the related art described above, extracting a relatively accurate sequence feature requires performing the query operation on a database with a large data scale, and such a query takes a long time, which results in low efficiency in predicting the structure information of the protein.
Summary
Embodiments of this application provide a method, apparatus, device, and storage medium for predicting the structure information of a protein, which can improve the prediction efficiency of the structure information of a protein while guaranteeing the prediction accuracy. The technical solutions are as follows:
In one aspect, a method for predicting the structure information of a protein is provided, the method including:
performing a sequence alignment query in a first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data;
performing feature extraction on the multiple sequence alignment data to obtain an initial sequence feature;
processing the initial sequence feature by means of a sequence feature amplification model to obtain an amplified sequence feature of the protein, the sequence feature amplification model being a machine learning model trained with initial sequence feature samples and amplified sequence feature samples, the initial sequence feature samples being obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, the amplified sequence feature samples being obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples, and the data scale of the second database being larger than that of the first database; and
predicting the structure information of the protein by means of the amplified sequence feature.
In one aspect, an apparatus for predicting protein structure information is provided, the apparatus including:
a data acquisition module, configured to perform a sequence alignment query in a first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data;
an initial feature acquisition module, configured to perform feature extraction on the multiple sequence alignment data to obtain an initial sequence feature;
an amplified feature acquisition module, configured to process the initial sequence feature by means of a sequence feature amplification model to obtain an amplified sequence feature of the protein, the sequence feature amplification model being a machine learning model trained with initial sequence feature samples and amplified sequence feature samples, the initial sequence feature samples being obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, the amplified sequence feature samples being obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples, and the data scale of the second database being larger than that of the first database; and
a structure information prediction module, configured to predict the structure information of the protein by means of the amplified sequence feature.
In a possible implementation, the data distribution similarity between the first database and the second database is higher than a similarity threshold.
In a possible implementation, the first database is a database obtained by randomly removing a specified proportion of data from the second database.
In a possible implementation, the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
In a possible implementation, the initial sequence feature and the amplified sequence feature are position-specific scoring matrices.
In a possible implementation, the apparatus further includes:
an amplified sample acquisition module, configured to process the initial sequence feature samples by means of the sequence feature amplification model to obtain amplified initial sequence feature samples; and
a model update module, configured to update the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples.
In a possible implementation, the model update module includes:
a loss function acquisition submodule, configured to perform loss function computation according to the amplified initial sequence feature samples and the amplified sequence feature samples to obtain a loss function value; and
a parameter update submodule, configured to update the model parameters in the sequence feature amplification model according to the loss function value.
In a possible implementation, the loss function acquisition submodule includes:
an error computation unit, configured to compute the reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples; and
a loss function acquisition unit, configured to take the reconstruction error as the loss function value.
In a possible implementation, the error computation unit computes the root-mean-square reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples.
In a possible implementation, the model update module is configured to:
update the model parameters in the sequence feature amplification model according to the loss function value when it is determined, according to the loss function value, that the sequence feature amplification model has not converged.
In a possible implementation, the structure information prediction module includes:
a structure information acquisition submodule, configured to perform prediction on the amplified sequence feature by means of a protein structure information prediction model to obtain the structure information of the protein,
the protein structure information prediction model being a model trained with the sequence features of protein samples and the structure information of the protein samples.
In one aspect, a computer device is provided, the computer device including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the above method for predicting the structure information of a protein.
In one aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the above method for predicting the structure information of a protein.
According to one aspect of this application, a computer program product or computer program is provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for predicting the structure information of a protein provided in the various optional implementations of the above aspects.
The technical solutions provided in this application may include the following beneficial effects:
In the solutions shown in the embodiments of this application, a sequence alignment query is performed on the amino acid sequence of a protein, feature extraction is performed on the multiple sequence alignment data, the amplified sequence feature of the protein is obtained by means of a sequence feature amplification model, and the structure information of the protein is then predicted. With the help of the sequence feature amplification model, a sequence alignment query only needs to be performed in the first database, which has a small data scale, to achieve high prediction accuracy; at the same time, performing the sequence alignment query in the smaller first database takes less time. The above solutions can therefore improve the prediction efficiency of the structure information of a protein while guaranteeing the prediction accuracy.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this application.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application, and are used together with the specification to explain the principles of this application.
FIG. 1 is a framework diagram of model training and protein structure information prediction provided by an exemplary embodiment of this application;
FIG. 2 is a model architecture diagram of a machine learning model provided by an exemplary embodiment of this application;
FIG. 3 is a schematic flowchart of a method for predicting the structure information of a protein provided by an exemplary embodiment of this application;
FIG. 4 is a schematic flowchart of a machine learning model training and protein structure information prediction method provided by an exemplary embodiment of this application;
FIG. 5 is a schematic diagram of sequence feature automatic amplification model training involved in the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of protein structure information prediction involved in the embodiment shown in FIG. 4;
FIG. 7 is a structural block diagram of an apparatus for predicting the structure information of a protein according to an exemplary embodiment;
FIG. 8 is a schematic structural diagram of a computer device according to an exemplary embodiment;
FIG. 9 is a schematic structural diagram of a terminal according to an exemplary embodiment.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatuses and methods consistent with some aspects of this application as detailed in the appended claims.
It should be understood that "several" as mentioned herein means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
This application provides a method for predicting the structure information of a protein, which can identify the structure information of a protein by means of artificial intelligence (AI), thereby providing an efficient and highly accurate protein structure information prediction solution. For ease of understanding, several terms involved in this application are explained below.
1) Amino acid sequence
An amino acid is a compound in which a hydrogen atom on the carbon atom of a carboxylic acid is replaced by an amino group; an amino acid molecule contains both an amino group and a carboxyl group as functional groups. Similar to hydroxy acids, amino acids can be classified as α-, β-, γ-, ... ω-amino acids according to the position on the carbon chain at which the amino group is attached, but the amino acids obtained by protein hydrolysis are all α-amino acids, of which there are only about twenty; these are the basic units that make up proteins. The 20 amino acids that make up human proteins are glycine, alanine, valine, leucine, isoleucine, phenylalanine, proline, tryptophan, serine, tyrosine, cysteine, methionine, asparagine, glutamine, threonine, aspartic acid, glutamic acid, lysine, arginine, and histidine. A compound containing multiple peptide bonds, formed by the dehydration condensation of these 20 kinds of amino acid molecules, is called a polypeptide. A polypeptide usually takes a chain-like structure called a peptide chain. By coiling and folding, a peptide chain can form a protein molecule with a certain spatial structure.
2) Protein structure
Protein structure refers to the spatial structure of a protein molecule. Proteins are mainly composed of chemical elements such as carbon, hydrogen, oxygen, and nitrogen, and are an important class of biological macromolecules. All proteins are polymers formed by linking 20 different amino acids; after a protein is formed, these amino acids are also called residues.
The molecular structure of a protein can be divided into four levels, which describe its different aspects:
Primary structure: the linear amino acid sequence that makes up the polypeptide chain of the protein.
Secondary structure: stable structures formed by hydrogen bonds between the C=O and N-H groups of different amino acids, mainly α-helices and β-sheets.
Tertiary structure: the three-dimensional structure of a protein molecule formed by the arrangement of multiple secondary structure elements in three-dimensional space.
Quaternary structure: describes a functional protein complex molecule formed by interactions between different polypeptide chains (subunits).
3) Artificial intelligence
Artificial intelligence is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
4) Machine learning (ML)
Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behaviors so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence. Machine learning and deep learning usually include technologies such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
The solutions of the embodiments of this application include a model training stage and a prediction stage. FIG. 1 is a framework diagram of model training and protein structure information prediction according to an exemplary embodiment. As shown in FIG. 1, in the model training stage, a model training device 110 trains a machine learning model with the results of multiple-sequence-alignment query operations and sequence feature extraction operations performed on databases of different scales for the amino acid sequence corresponding to the same protein; in the prediction stage, a prediction device 120 can predict the structure information of the protein corresponding to an input amino acid sequence according to the trained machine learning model.
The model training device 110 and the prediction device 120 may be computer devices with machine learning capability. For example, the computer device may be a stationary computer device such as a personal computer, a server, or stationary scientific research equipment, or the computer device may be a mobile computer device such as a tablet computer or an e-book reader.
In a possible implementation, the model training device 110 and the prediction device 120 are the same device; alternatively, they are different devices. When they are different devices, the model training device 110 and the prediction device 120 may be devices of the same type; for example, both may be personal computers. Alternatively, they may be devices of different types; for example, the model training device 110 may be a server, while the prediction device 120 may be stationary scientific research equipment. The embodiments of this application do not limit the specific types of the model training device 110 and the prediction device 120.
FIG. 2 is a model architecture diagram of a machine learning model according to an exemplary embodiment. As shown in FIG. 2, the machine learning model 20 in the embodiments of this application may include two models. The sequence feature amplification model 210 is used to automatically amplify an input sequence feature and output the amplified sequence feature. In addition to outputting the amplified sequence feature, the sequence feature amplification model 210 also inputs the amplified sequence feature into the protein structure information prediction model 220, which performs protein structure information prediction according to the amplified sequence feature input by the sequence feature amplification model 210 and outputs the prediction result of the protein structure information.
In the machine learning model shown in FIG. 2, protein structure information prediction does not take only the feature sequence extracted through a multiple-sequence-alignment query in a single database as the input data to the protein structure information prediction model; instead, the amplified sequence feature is used as the input data for predicting the protein structure information. Compared with the sequence feature obtained by alignment against a single database, the automatically amplified sequence feature yields higher accuracy in protein structure information prediction.
Proteins play important practical roles in organisms; for example, a protein may cause a certain genetic disease, or a protein may make an organism immune to a specific disease. The role a protein plays in an organism is largely determined by its three-dimensional structure, and the three-dimensional structure of a protein is essentially determined by its corresponding amino acid sequence information.
The three-dimensional structure of a protein can be determined experimentally, for example by X-ray crystallography, nuclear magnetic resonance, or cryo-electron microscopy. Since the time and economic costs of determining protein three-dimensional structures experimentally are too high, directly predicting the three-dimensional structure of a protein from its corresponding amino acid sequence by computational rather than experimental methods is of great scientific significance and practical value.
In the process of predicting the three-dimensional structure of a protein by computational methods, whether part of the structure information of the protein can be predicted accurately largely determines the final prediction accuracy of the three-dimensional structure. The partial structure information of a protein includes the backbone dihedral angles or the secondary structure, among others. Therefore, in view of the conflict between prediction accuracy and computational efficiency in sequence-feature-based protein structure information prediction algorithms, the method for predicting the structure information of a protein proposed in this application can lower the requirement on the data scale of the amino acid sequence database, achieve protein structure information prediction accuracy comparable to that of traditional methods at a lower database storage and query cost, and improve the prediction accuracy and computational efficiency of protein structure information, thereby promoting improvement of the prediction accuracy of protein three-dimensional structures.
Please refer to FIG. 3, which shows a schematic flowchart of a method for predicting the structure information of a protein provided by an exemplary embodiment of this application. The method may be executed by a computer device, for example the prediction device 120 shown in FIG. 1. As shown in FIG. 3, the method may include the following steps:
Step 310: perform a sequence alignment query in a first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data.
In the embodiments of this application, the computer device may obtain the multiple sequence alignment data through a sequence alignment operation.
Sequence alignment refers to aligning multiple amino acid sequences and highlighting the similar structural regions among them. By comparing the amino acid sequences corresponding to known protein structures and functions with the amino acid sequence corresponding to an unknown protein structure and function, the homology between the two amino acid sequences is determined, so that the structure and function of the protein formed by the unknown amino acid sequence can subsequently be predicted.
In a possible implementation, the first database is a database containing several kinds of amino acid sequences.
Step 320: perform feature extraction on the multiple sequence alignment data to obtain an initial sequence feature.
In the embodiments of this application, for each amino acid sequence the prediction device may use the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST) to obtain the homologous sequences found in the first database through the multiple-sequence-alignment query operation, and then compare the homology information of the sequences to obtain a position-specific scoring matrix (PSSM), which may serve as the above sequence feature.
The position-specific scoring matrix may represent, after the amino acid sequences are aligned, the frequency value of the amino acid occurring at each corresponding position, or show the frequency of each amino acid at each corresponding position, or show the probability of each amino acid at each corresponding position.
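The per-position statistics described above can be illustrated with a small sketch. The code below builds a simple column-wise frequency profile from a toy, gap-free alignment; it is only an illustration, since a real PSSM as produced by PSI-BLAST additionally applies log-odds scoring against background frequencies and pseudocounts, and the `position_frequencies` helper and the toy alignment are invented for this example.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def position_frequencies(alignment):
    """Column-wise residue frequencies of a gap-free multiple sequence
    alignment (a PSSM-style profile without log-odds or pseudocounts)."""
    length = len(alignment[0])
    profile = []
    for col in range(length):
        counts = Counter(seq[col] for seq in alignment)
        profile.append({aa: counts.get(aa, 0) / len(alignment)
                        for aa in AMINO_ACIDS})
    return profile

msa = ["ACDA", "ACDG", "AADG"]   # toy alignment, not real homologs
profile = position_frequencies(msa)
print(profile[0]["A"])           # 1.0: every sequence has A at column 0
```

Each row of the resulting L x 20 profile sums to one, which is the "probability of each amino acid at each corresponding position" reading mentioned above.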
Step 330: process the initial sequence feature by means of a sequence feature amplification model to obtain an amplified sequence feature of the protein.
In the embodiments of this application, the prediction device may input the initial sequence feature into the sequence feature amplification model, and the sequence feature amplification model performs feature amplification on the initial sequence feature, that is, adds new features to the initial sequence feature, to obtain an amplified sequence feature with more comprehensive features.
The sequence feature amplification model is a machine learning model trained with initial sequence feature samples and amplified sequence feature samples; the initial sequence feature samples are obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, and the amplified sequence feature samples are obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples; the data scale of the second database is larger than that of the first database.
In the embodiments of this application, during the training of the sequence feature amplification model, the computer device may take the initial sequence feature samples as the input of the sequence feature amplification model and the amplified sequence feature samples as the annotation data of the initial sequence feature samples, and train the sequence feature amplification model accordingly.
In the embodiments of this application, the sequence feature amplification model may be a fully convolutional network model (Fully Convolutional Networks, FCN) for one-dimensional sequence data.
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons can respond to surrounding units within a certain coverage range, and it performs excellently for large-scale image processing. A CNN includes convolutional layers and pooling layers. In the development from CNN to FCN: a CNN usually connects several fully connected layers after the convolutional layers, mapping the feature map generated by the convolutional layers into a fixed-length feature vector, whereas an FCN replaces these fully connected layers with convolutions.
In a possible implementation, the sequence feature amplification model is a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network model composed of bidirectional LSTM units.
A recurrent neural network (RNN) is a class of recursive neural network that takes sequence data as input, recurses in the evolution direction of the sequence, and has all of its nodes (recurrent units) connected in a chain.
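A defining property of a fully convolutional model over one-dimensional sequence data is that it maps an L x D_in feature matrix to an L x D_out matrix without any fixed-length fully connected layer, so sequences of any length L can be amplified. The sketch below shows a single 1-D convolution layer with "same" zero padding in plain Python; the layer shapes, weights, and inputs are invented for illustration and do not reproduce the trained model of this application.

```python
def conv1d_same(features, kernels, bias):
    """One 1-D convolution layer with 'same' zero padding: `features` is
    an L x D_in list of rows, `kernels` is D_out x K x D_in, and the
    output keeps the sequence length L (here with D_out columns)."""
    L, d_in = len(features), len(features[0])
    d_out, k = len(kernels), len(kernels[0])
    pad = k // 2
    padded = ([[0.0] * d_in for _ in range(pad)] + features
              + [[0.0] * d_in for _ in range(pad)])
    out = []
    for pos in range(L):
        row = []
        for o in range(d_out):
            acc = bias[o]
            for t in range(k):
                for c in range(d_in):
                    acc += kernels[o][t][c] * padded[pos + t][c]
            row.append(acc)
        out.append(row)
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]      # L = 3, D_in = 2
kern = [[[1.0, 0.0], [0.0, 0.0], [0.0, 0.0]]]  # D_out = 1, K = 3
y = conv1d_same(x, kern, bias=[0.0])
print(len(y), len(y[0]))                       # 3 1: length L preserved
```

Stacking several such layers (with nonlinearities) gives an FCN whose final layer width D_out matches the dimension of the amplified sequence feature.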
Step 340: predict the structure information of the protein by means of the amplified sequence feature.
In the embodiments of this application, the structure information of the protein predicted by the prediction device may include, but is not limited to, the backbone dihedral angles of the protein and/or the secondary structure information of the protein.
A dihedral angle is defined between two adjacent amide planes, which can rotate about the shared Cα atom as a pivot: the angle of rotation about the Cα-N bond is called the φ angle, and the angle of rotation about the C-Cα bond is called the ψ angle. The φ angle and the ψ angle are referred to as the dihedral angles. In a protein, only the two bonds connected to the α-carbon atom, namely the Cα-N bond and the C-Cα bond, are single bonds that can rotate freely. The backbone of a peptide chain can be regarded as being composed of many planes separated by Cα atoms; the dihedral angles determine the relative positions of two peptide planes, that is, the position and conformation of the peptide chain backbone.
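For illustration only, a backbone dihedral angle of this kind can be computed from four consecutive atom coordinates with the standard atan2 construction (φ uses C of the previous residue, then N, Cα, C; ψ uses N, Cα, C, then N of the next residue). The helper below is a generic geometric sketch with made-up coordinates, not a method step of this application, and the sign convention of the result depends on the chosen formulation.

```python
import math

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in degrees defined by four 3-D points, using the
    common atan2 formulation over the three bond vectors."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    def dot(a, b): return sum(u * v for u, v in zip(a, b))
    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b0, b1), cross(b1, b2)       # normals of the two planes
    b1n = [u / math.sqrt(dot(b1, b1)) for u in b1]
    m1 = cross(n1, b1n)
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

# Last point twisted out of the plane of the first three by 90 degrees:
print(round(dihedral([0, 1, 0], [0, 0, 0], [1, 0, 0], [1, 0, 1]), 1))
```

For four coplanar points on the same side of the central bond the angle is 0; the twisted example above has magnitude 90 degrees.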
Protein secondary structure refers to the specific conformation formed by the backbone atoms of the polypeptide main chain coiling or folding along a certain axis, that is, the spatial arrangement of the backbone atoms of the peptide chain, without involving the amino acid residue side chains. The main forms of protein secondary structure include the α-helix, β-sheet, β-turn, and random coil. Because proteins have large molecular weights, different peptide segments of one protein molecule may contain different forms of secondary structure. In proteins, the main force maintaining secondary structure is the hydrogen bond. The secondary structure of a protein is not purely an α-helix or β-sheet structure but also includes combinations of these different conformation types, and the proportions of the different conformation types may vary among different proteins.
In summary, in the solutions shown in the embodiments of this application, a sequence alignment query is performed on the amino acid sequence of a protein, feature extraction is performed on the multiple sequence alignment data, the amplified sequence feature of the protein is obtained by means of a sequence feature amplification model, and the structure information of the protein is then predicted. With the help of the sequence feature amplification model, a sequence alignment query only needs to be performed in the first database, which has a small data scale, to achieve high prediction accuracy; at the same time, performing the sequence alignment query in the smaller first database takes less time. The above solutions can therefore improve the prediction efficiency of the structure information of a protein while guaranteeing the prediction accuracy.
Please refer to FIG. 4, which shows a schematic flowchart of a machine learning model training and protein structure information prediction method provided by an exemplary embodiment of this application. The solution is divided into two parts, machine learning model training and protein structure information prediction, and may be executed by computer devices, which may include the training device 110 and the prediction device 120 shown in FIG. 1. As shown in FIG. 4, the method may include the following steps:
Step 401: the training device performs a sequence alignment query in the first database according to amino acid sequence samples, and obtains initial sequence feature samples according to the query results.
In the embodiments of this application, the training device may perform a sequence alignment query in the first database according to the amino acid sequence samples to obtain multiple sequence alignment data, and then perform feature extraction on the multiple sequence alignment data to obtain the above initial sequence feature samples.
In the embodiments of this application, the amino acid sequence of a certain protein may be composed of multiple kinds of amino acids (for example, the 20 known basic amino acids). The amino acid sequence samples may be the amino acid sequences of currently known proteins, or they may be amino acid sequences generated randomly or according to certain rules.
In a possible implementation, the amino acid sequence samples include amino acid sequences whose protein structure information is known, or amino acid sequences whose protein structure information is unknown, or both at the same time.
Step 402: the training device performs a sequence alignment query in the second database according to the amino acid sequence samples, and obtains amplified sequence feature samples according to the query results.
In the embodiments of this application, the training device may perform a sequence alignment query in the second database according to the amino acid sequence samples to obtain multiple sequence alignment data, and then perform feature extraction on the multiple sequence alignment data to obtain the above amplified sequence feature samples.
The training device obtains the initial sequence feature samples and the amplified sequence feature samples from the first database and the second database, respectively, using the same amino acid sequence samples, and the initial sequence feature samples correspond one-to-one to the amplified sequence feature samples.
The initial sequence feature samples and the amplified sequence feature samples may be sequence features extracted with the same feature extraction algorithm; for example, both may be position-specific scoring matrices with elements of the same type.
The data scale of the second database is larger than that of the first database.
In the embodiments of this application, the first database and the second database are each amino acid sequence databases, each containing several amino acid sequences, and the number of amino acid sequences contained in the second database is greater than the number contained in the first database.
In a possible implementation, the data distribution similarity between the first database and the second database is higher than a similarity threshold.
In the embodiments of this application, to improve the accuracy of the subsequent training of the sequence feature amplification model, the first database and the second database may be databases with similar data distributions; that is, the similarity between the data distributions of the first database and the second database needs to be higher than a predetermined similarity threshold.
The similarity threshold may be a value preset by developers.
In a possible implementation, the first database and the second database are the same kind of database but with different data scales.
For example, the databases may be two existing databases with similar data distributions. For instance, the first database and the second database may be UniRef databases of different data scales; or, the first database and the second database may be the Swiss-Prot database and the TrEMBL database within the UniProtKB database.
The UniRef databases can be divided into three levels according to sequence identity, 100%, 90%, and 50%, namely the UniRef100, UniRef90, and UniRef50 databases; the data volumes of these three databases are reduced by 10%, 40%, and 70%, respectively, relative to the complete database.
In a possible implementation, the first database may be the UniRef50 database, and the second database may be the UniRef90 or UniRef100 database (the data scale of the UniRef50 database is smaller than that of the UniRef90 or UniRef100 database). Alternatively, the first database may be the UniRef90 database, and the second database may be the UniRef100 database.
In another possible implementation, the first database is a database obtained by randomly removing a specified proportion of data from the second database.
The specified proportion may be a proportion preset by developers.
In the embodiments of this application, the training device may randomly remove a specified proportion (for example, 50%) of the amino acid sequences from the second database to obtain the first database.
For example, the second database may be an existing database, such as the UniRef90 database (or another existing database), and the training device randomly removes half of the amino acid sequences from the UniRef90 database to obtain the above first database.
Step 403: the training device processes the initial sequence feature samples by means of the sequence feature amplification model to obtain amplified initial sequence feature samples.
In the embodiments of this application, the way the computer device processes the initial sequence feature samples by means of the sequence feature amplification model to obtain the amplified initial sequence feature samples is similar to the process of obtaining the amplified sequence feature in the embodiment shown in FIG. 3, and is not repeated here.
Different from the embodiment shown in FIG. 3, the sequence feature amplification model in this step may be a model that has not yet finished training.
Step 404: the training device updates the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples.
In the embodiments of this application, the training device performs loss function computation according to the amplified initial sequence feature samples and the amplified sequence feature samples to obtain a loss function value, and then updates the model parameters in the sequence feature amplification model according to the loss function value.
In a possible implementation, the training device computes the reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples, and takes the reconstruction error as the loss function value.
In a possible implementation, the reconstruction error is the root-mean-square reconstruction error; that is, when obtaining the reconstruction error, the training device computes the root-mean-square reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples, and takes the root-mean-square reconstruction error as the loss function value.
For example, let the length of the amino acid sequence sample be L and the feature dimension be D, let the automatically amplified initial sequence feature sample be x, and let the reference sequence feature (that is, the amplified sequence feature sample) be z; then x and z are both matrices of size L×D. The reconstruction error between the automatically amplified initial sequence feature sample and the reference sequence feature can be obtained by the root-mean-square reconstruction error computation, whose formula is:
$E_{\mathrm{RMS}} = \sqrt{\dfrac{1}{L \cdot D} \sum_{i=1}^{L} \sum_{j=1}^{D} \left(x_{ij} - z_{ij}\right)^{2}}$
where $x_{ij}$ and $z_{ij}$ are the elements in row i and column j of matrix x and matrix z, respectively.
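A direct transcription of this root-mean-square reconstruction error, applied to hypothetical small matrices, can be sketched as follows (the 2x2 matrices are invented for the example; in practice x and z would be L x D feature matrices):

```python
import math

def rms_reconstruction_error(x, z):
    """Root-mean-square reconstruction error between the amplified
    feature matrix x and the reference feature matrix z, both L x D."""
    L, D = len(x), len(x[0])
    total = sum((x[i][j] - z[i][j]) ** 2
                for i in range(L) for j in range(D))
    return math.sqrt(total / (L * D))

x = [[1.0, 2.0], [3.0, 4.0]]
z = [[1.0, 2.0], [3.0, 2.0]]   # differs by 2 in a single entry
print(rms_reconstruction_error(x, z))  # sqrt(4 / 4) = 1.0
```

The normalization by L·D makes the loss comparable across sequence samples of different lengths.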
For example, the above model training process may be as shown in FIG. 5. Please refer to FIG. 5, which shows a schematic diagram of sequence feature amplification model training involved in the embodiments of this application. As shown in FIG. 5, the training process of the sequence feature amplification model is as follows:
S51: the training device obtains an amino acid sequence sample and performs a multiple-sequence-alignment query operation for the sample on the UniRef50 database to obtain a multiple sequence alignment result.
S52: the training device performs feature extraction on the multiple sequence alignment result of S51 to obtain the sequence feature before automatic amplification, which may also be called the initial sequence feature sample.
S53: the training device performs a multiple-sequence-alignment query operation for the above amino acid sequence sample on the UniRef90 database to obtain a multiple sequence alignment result.
S54: the training device performs feature extraction on the multiple sequence alignment result of S53 to obtain the reference sequence feature, which may also be called the amplified sequence feature sample.
S55: the training device inputs the initial sequence feature sample into the sequence feature amplification model.
S56: the sequence feature amplification model outputs the amplified sequence feature, which may be called the amplified initial sequence feature sample.
S57: the training device computes, according to the above formula, the reconstruction error between the amplified sequence feature and the reference sequence feature as the loss function, and trains and updates the sequence feature amplification model according to the loss function.
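The loop S51 to S57 can be caricatured with a one-parameter stand-in: gradient descent on the squared reconstruction error between a scaled "amplified" feature and the reference feature. Everything here (the single scalar weight, the toy feature values, the learning rate) is invented for illustration; the actual amplification model is a neural network as described above, but the update rule has the same shape.

```python
def train_scaler(x_samples, z_samples, lr=0.1, epochs=200):
    """Toy stand-in for steps S51-S57: learn one scalar weight w so that
    w * x approximates the reference z, by gradient descent on the mean
    squared reconstruction error."""
    w = 0.0
    pairs = [(x, z) for xs, zs in zip(x_samples, z_samples)
             for x, z in zip(xs, zs)]
    for _ in range(epochs):
        # d/dw of mean((w*x - z)^2) over all feature entries
        grad = sum(2 * (w * x - z) * x for x, z in pairs) / len(pairs)
        w -= lr * grad
    return w

# Reference features are twice the initial ones, so w should approach 2:
w = train_scaler([[1.0, 2.0, 3.0]], [[2.0, 4.0, 6.0]])
print(round(w, 3))   # close to 2.0
```

In the real setting the gradient flows through all parameters of the FCN or LSTM model rather than a single scalar.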
In a possible implementation, when it is determined according to the loss function value that the sequence feature amplification model has not converged, the training device updates the model parameters in the sequence feature amplification model according to the loss function value.
Before step 404 above is executed, the training device may judge from the loss function value whether the model has converged. If the sequence feature amplification model has converged, the training device may end the training and output the sequence feature amplification model to the prediction device, which then predicts the structure information of proteins.
Otherwise, if it is determined that the sequence feature amplification model has not converged, the training device may update the model parameters in the sequence feature amplification model according to the loss function value.
In a possible implementation, when judging whether the model has converged, the training device compares the loss function value with a preset loss function threshold. If the loss function value is smaller than the loss function threshold, the output of the sequence feature amplification model is already close to the result obtained by querying the second database, indicating that the model can achieve a good feature amplification effect, and the model is judged to have converged. Otherwise, if the loss function value is not smaller than the loss function threshold, there is still a large gap between the output of the model and the result obtained by querying the second database, indicating that the model has not yet achieved a good feature amplification effect, and the model is judged not to have converged.
In another possible implementation, when judging whether the model has converged, the training device compares the loss function value with the loss function value obtained in the previous update round. If the difference between the loss function value obtained this time and the one obtained in the previous round is smaller than a difference threshold, the accuracy of the sequence feature amplification model is improving only slightly and continued training would bring no obvious improvement, so the model is judged to have converged; otherwise, if the difference is not smaller than the difference threshold, the accuracy is still improving considerably and continued training may bring obvious improvement, so the model is judged not to have converged.
In another possible implementation, when judging whether the model has converged, the training device compares the loss function value with the one obtained in the previous update round and, at the same time, compares the loss function value obtained this time with the loss function threshold. If the loss function value is smaller than the loss function threshold, and the difference between the loss function value obtained this time and the one obtained in the previous round is smaller than the difference threshold, the model is judged to have converged.
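The combined stopping rule in that last implementation can be sketched as a small predicate. The threshold values below are invented placeholders, since the description leaves them to the developers.

```python
def has_converged(loss, prev_loss, loss_threshold=0.01, delta_threshold=1e-4):
    """Combined convergence check: the loss is below a preset threshold
    AND the improvement over the previous round is below a difference
    threshold. `prev_loss` may be None on the first round."""
    below = loss < loss_threshold
    small_gain = prev_loss is not None and abs(prev_loss - loss) < delta_threshold
    return below and small_gain

print(has_converged(0.009, 0.00905))  # True: small loss, tiny improvement
print(has_converged(0.009, 0.5))      # False: still improving quickly
```

Dropping the `and` in favor of either condition alone recovers the two simpler implementations described first.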
After the training of the sequence feature amplification model is completed (that is, after the model is trained to convergence), the prediction device may predict the structure information of proteins with unknown structures according to the sequence feature amplification model and the above first database. For the prediction process, refer to the following steps.
Step 405: the prediction device performs a sequence alignment query in the first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data.
The protein in this step may be a protein whose structure information needs to be predicted.
Step 406: the prediction device performs feature extraction on the multiple sequence alignment data to obtain an initial sequence feature.
Step 407: the prediction device processes the initial sequence feature by means of the sequence feature amplification model to obtain the amplified sequence feature of the protein.
For the processes of steps 405 to 407 above, refer to the description in the embodiment shown in FIG. 3, which is not repeated here.
Step 408: predict the structure information of the protein by means of the amplified sequence feature.
In the embodiments of this application, the prediction device may perform prediction on the amplified sequence feature by means of a protein structure information prediction model to obtain the protein structure information of the protein; the protein structure information prediction model is a model trained with the sequence features of protein samples and the structure information of the protein samples.
In a possible implementation, the protein structure information prediction model is an existing machine learning model trained by other computer devices.
In the embodiments of this application, the protein structure information prediction model used to predict the structure information of a protein may also be a model obtained through machine learning.
For example, the training device may obtain several protein samples with known structure information and the amino acid sequence of each protein sample. Then, according to the amino acid sequences of the protein samples, the training device performs sequence alignment queries in a third database to obtain multiple sequence alignment data, and performs feature extraction on the multiple sequence alignment data obtained by the queries to obtain the sequence features of the protein samples. It then trains the above protein structure information prediction model with the sequence features of the protein samples as input and the structure information of the protein samples as annotation information. After the training of the protein structure information prediction model is completed, it can be applied in this step, and the prediction device predicts the structure information of the protein according to the amplified sequence feature of the protein to be predicted and the protein structure information prediction model.
In the embodiments of this application, to improve the accuracy of predicting the structure information of the protein according to the amplified sequence feature of the protein to be predicted and the protein structure information prediction model, the above second database may be used as the database used in the training of the protein structure information prediction model (that is, the third database); in other words, the second database and the third database may be the same database.
In a possible implementation, the second database and the third database are different databases; for example, the third database may be a database with a larger data scale than the second database, and the data distribution similarity between the second database and the third database is higher than the similarity threshold. For instance, the second database may be the UniRef90 database and the third database may be the UniRef100 database.
Please refer to FIG. 6, which shows a schematic diagram of protein structure information prediction involved in the embodiments of this application. As shown in FIG. 6, the process of protein structure information prediction is as follows:
S61: the prediction device obtains an amino acid sequence and performs a multiple-sequence-alignment query operation for the sequence on the UniRef50 database to obtain a multiple sequence alignment result.
S62: the prediction device performs feature extraction on the multiple sequence alignment result to obtain the sequence feature before automatic amplification.
S63: the prediction device inputs the sequence feature before automatic amplification into the trained sequence feature amplification model.
S64: the sequence feature amplification model outputs the automatically amplified sequence feature.
S65: the prediction device inputs the automatically amplified sequence feature into the protein structure information prediction model.
S66: the protein structure information prediction model outputs the protein structure information prediction result corresponding to the amino acid sequence.
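The data flow S61 to S66 can be summarized as a pipeline of four callables. The stand-in lambdas below are placeholders invented purely for illustration of the wiring and carry no biological meaning; in practice each stage would be the real component described above (the MSA query on the small database, PSSM extraction, the trained amplification model, and the structure prediction model).

```python
def predict_structure(sequence, query_msa, extract_features, amplify, predict):
    """End-to-end sketch of steps S61-S66."""
    msa = query_msa(sequence)          # S61: MSA query, e.g. on UniRef50
    features = extract_features(msa)   # S62: initial sequence features
    amplified = amplify(features)      # S63/S64: feature amplification
    return predict(amplified)          # S65/S66: structure prediction

# Wire it up with trivial stand-ins to show the data flow:
result = predict_structure(
    "ACDG",
    query_msa=lambda s: [s, s],
    extract_features=lambda msa: [[1.0] * 4],
    amplify=lambda f: [row + [0.5] for row in f],   # append a new feature
    predict=lambda f: {"secondary_structure": "helix", "dim": len(f[0])},
)
print(result)
```

Keeping the stages as separate callables mirrors the two-model architecture of FIG. 2, where the amplification model and the prediction model can be trained and replaced independently.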
In the solutions shown in the embodiments of this application, the training device and the prediction device may be the same computer device; that is, the computer device first trains and obtains the sequence feature amplification model, and then predicts the structure information of proteins according to the sequence feature amplification model.
Alternatively, the training device and the prediction device may be different computer devices; that is, the training device first trains and obtains the sequence feature amplification model and provides it to the prediction device, and the prediction device predicts the structure information of proteins according to the sequence feature amplification model.
In summary, in the solutions shown in the embodiments of this application, a sequence alignment query is performed on the amino acid sequence of a protein, feature extraction is performed on the multiple sequence alignment data, the amplified sequence feature of the protein is obtained by means of a sequence feature amplification model, and the structure information of the protein is then predicted. With the help of the sequence feature amplification model, a sequence alignment query only needs to be performed in the first database, which has a small data scale, to achieve high prediction accuracy; at the same time, performing the sequence alignment query in the smaller first database takes less time. The above solutions can therefore improve the prediction efficiency of the structure information of a protein while guaranteeing the prediction accuracy.
FIG. 7 is a structural block diagram of an apparatus for predicting the structure information of a protein according to an exemplary embodiment. The apparatus may be implemented as all or part of a computer device through hardware or a combination of software and hardware, to execute all or part of the steps of the method shown in the embodiment corresponding to FIG. 3 or FIG. 4. The apparatus may include:
a data acquisition module 710, configured to perform a sequence alignment query in a first database according to the amino acid sequence of a protein to obtain multiple sequence alignment data;
an initial feature acquisition module 720, configured to perform feature extraction on the multiple sequence alignment data to obtain an initial sequence feature;
an amplified feature acquisition module 730, configured to process the initial sequence feature by means of a sequence feature amplification model to obtain an amplified sequence feature of the protein, the sequence feature amplification model being a machine learning model trained with initial sequence feature samples and amplified sequence feature samples, the initial sequence feature samples being obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, the amplified sequence feature samples being obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples, and the data scale of the second database being larger than that of the first database; and
a structure information prediction module 740, configured to predict the structure information of the protein by means of the amplified sequence feature.
In a possible implementation, the data distribution similarity between the first database and the second database is higher than a similarity threshold.
In a possible implementation, the first database is a database obtained by randomly removing a specified proportion of data from the second database.
In a possible implementation, the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
In a possible implementation, the initial sequence feature and the amplified sequence feature are position-specific scoring matrices.
In a possible implementation, the apparatus further includes:
an amplified sample acquisition module, configured to process the initial sequence feature samples by means of the sequence feature amplification model to obtain amplified initial sequence feature samples; and
a model update module, configured to update the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples.
In a possible implementation, the model update module includes:
a loss function acquisition submodule, configured to perform loss function computation according to the amplified initial sequence feature samples and the amplified sequence feature samples to obtain a loss function value; and
a parameter update submodule, configured to update the model parameters in the sequence feature amplification model according to the loss function value.
In a possible implementation, the loss function acquisition submodule includes:
an error computation unit, configured to compute the reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples; and
a loss function acquisition unit, configured to take the reconstruction error as the loss function value.
In a possible implementation, the error computation unit computes the root-mean-square reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples.
In a possible implementation, the model update module is configured to:
update the model parameters in the sequence feature amplification model according to the loss function value when it is determined, according to the loss function value, that the sequence feature amplification model has not converged.
In a possible implementation, the structure information prediction module 740 includes:
a structure information acquisition submodule, configured to perform prediction on the amplified sequence feature by means of a protein structure information prediction model to obtain the structure information of the protein,
the protein structure information prediction model being a model trained with the sequence features of protein samples and the structure information of the protein samples.
In summary, in the solutions shown in the embodiments of this application, a sequence alignment query is performed on the amino acid sequence of a protein, feature extraction is performed on the multiple sequence alignment data, the amplified sequence feature of the protein is obtained by means of a sequence feature amplification model, and the structure information of the protein is then predicted. With the help of the sequence feature amplification model, a sequence alignment query only needs to be performed in the first database, which has a small data scale, to achieve high prediction accuracy; at the same time, performing the sequence alignment query in the smaller first database takes less time. The above solutions can therefore improve the prediction efficiency of the structure information of a protein while guaranteeing the prediction accuracy.
FIG. 8 is a schematic structural diagram of a computer device according to an exemplary embodiment. The computer device may be implemented as the training device or the prediction device in the above embodiments, or as a combination of the training device and the prediction device. The computer device 800 includes a central processing unit (CPU) 801, a system memory 804 including a random access memory (RAM) 802 and a read-only memory (ROM) 803, and a system bus 805 connecting the system memory 804 and the central processing unit 801. The server 800 further includes a basic input/output system (I/O system) 806 that helps transfer information between components within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809, such as a mouse or keyboard, for user input. The display 808 and the input device 809 are both connected to the central processing unit 801 through an input/output controller 810 connected to the system bus 805. The basic input/output system 806 may further include the input/output controller 810 for receiving and processing input from multiple other devices such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 810 also provides output to a display screen, printer, or other type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable medium provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
Without loss of generality, the computer-readable medium may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will know that the computer storage media are not limited to the above. The system memory 804 and the mass storage device 807 described above may be collectively referred to as the memory.
The server 800 may be connected to the Internet or other network devices through a network interface unit 811 connected to the system bus 805.
The memory further includes one or more programs stored in the memory, and the central processing unit 801 implements, by executing the one or more programs, the steps executed by the computer device in the protein structure information prediction method shown in FIG. 3 or FIG. 4.
This application also provides a computer program product which, when run on a computer, causes the computer to execute the methods provided by the above method embodiments.
FIG. 9 shows a structural block diagram of a terminal 900 provided by an exemplary embodiment of this application. The terminal 900 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 900 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names. The terminal may be implemented as the prediction device in the above method embodiments, for example as the prediction device 120 in FIG. 1.
Generally, the terminal 900 includes a processor 901 and a memory 902.
The processor 901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 901 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may further include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 902 is used to store at least one instruction, which is executed by the processor 901 to implement the protein structure information prediction method provided by the method embodiments of this application.
In some embodiments, the terminal 900 may optionally further include a peripheral device interface 903 and at least one peripheral device. The processor 901, the memory 902, and the peripheral device interface 903 may be connected through a bus or signal line. Each peripheral device may be connected to the peripheral device interface 903 through a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 904, a touch display screen 905, a camera 906, an audio circuit 907, a positioning component 908, and a power supply 909.
The peripheral device interface 903 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 901 and the memory 902. In some embodiments, the processor 901, the memory 902, and the peripheral device interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902, and the peripheral device interface 903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 904 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 904 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 904 may communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may further include circuits related to NFC (Near Field Communication), which is not limited in this application.
The display screen 905 is used to display a UI (User Interface), which may include graphics, text, icons, videos, and any combination thereof. When the display screen 905 is a touch display screen, it also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 901 as a control signal for processing. In this case, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, disposed on the front panel of the terminal 900; in other embodiments, there may be at least two display screens 905, respectively disposed on different surfaces of the terminal 900 or in a folding design; in still other embodiments, the display screen 905 may be a flexible display screen disposed on a curved or folding surface of the terminal 900. The display screen 905 may even be configured as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 905 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or videos. Optionally, the camera assembly 906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera on its back. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize the background blur function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 906 may further include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 901 for processing or to the radio frequency circuit 904 for voice communication. For stereo collection or noise reduction, there may be multiple microphones disposed at different parts of the terminal 900. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 907 may further include a headphone jack.
The positioning component 908 is used to locate the current geographic position of the terminal 900 to implement navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the terminal 900. The power supply 909 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 900 further includes one or more sensors 910, including but not limited to an acceleration sensor 911, a gyroscope sensor 912, a pressure sensor 913, a fingerprint sensor 914, an optical sensor 915, and a proximity sensor 916.
The acceleration sensor 911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 900. For example, the acceleration sensor 911 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 901 may control the touch display screen 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used to collect motion data for games or for the user.
The gyroscope sensor 912 can detect the body orientation and rotation angle of the terminal 900, and may cooperate with the acceleration sensor 911 to collect the user's 3D actions on the terminal 900. According to the data collected by the gyroscope sensor 912, the processor 901 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 913 may be disposed on the side frame of the terminal 900 and/or the lower layer of the touch display screen 905. When the pressure sensor 913 is disposed on the side frame of the terminal 900, it can detect the user's grip signal on the terminal 900, and the processor 901 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed on the lower layer of the touch display screen 905, the processor 901 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 905. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 914 is used to collect the user's fingerprint; the processor 901 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 identifies the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 914 may be disposed on the front, back, or side of the terminal 900. When a physical button or a manufacturer logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical button or the manufacturer logo.
The optical sensor 915 is used to collect the ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the touch display screen 905 according to the ambient light intensity collected by the optical sensor 915: when the ambient light intensity is high, the display brightness of the touch display screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 905 is decreased. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
The proximity sensor 916, also called a distance sensor, is usually disposed on the front panel of the terminal 900 and is used to collect the distance between the user and the front of the terminal 900. In one embodiment, when the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually decreases, the processor 901 controls the touch display screen 905 to switch from the screen-on state to the screen-off state; when the proximity sensor 916 detects that the distance between the user and the front of the terminal 900 gradually increases, the processor 901 controls the touch display screen 905 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in FIG. 9 does not constitute a limitation on the terminal 900, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, which may be the computer-readable storage medium included in the memory in the above embodiments, or a computer-readable storage medium that exists alone and is not assembled into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the protein structure information prediction method described in FIG. 3 or FIG. 4.
Optionally, the computer-readable storage medium may include read-only memory (ROM), random access memory (RAM), solid state drives (SSD), optical discs, and the like. The random access memory may include resistive random access memory (ReRAM) and dynamic random access memory (DRAM). The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
According to one aspect of this application, a computer program product or computer program is provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the protein structure information prediction method provided in the various optional implementations of the above aspects.
Those of ordinary skill in the art can understand that all or part of the steps for implementing the above embodiments can be completed by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included in the scope of protection of this application.

Claims (24)

  1. A protein structure information prediction method, performed by a computer device, the method comprising:
    performing a sequence alignment query in a first database according to an amino acid sequence of a protein, to obtain multiple sequence alignment data;
    performing feature extraction on the multiple sequence alignment data, to obtain an initial sequence feature;
    processing the initial sequence feature by using a sequence feature amplification model, to obtain an amplified sequence feature of the protein, the sequence feature amplification model being a machine learning model obtained through training with initial sequence feature samples and amplified sequence feature samples, the initial sequence feature samples being obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, the amplified sequence feature samples being obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples, and a data scale of the second database being greater than a data scale of the first database; and
    predicting structure information of the protein by using the amplified sequence feature.
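For illustration only, the four claimed steps can be sketched as a pipeline in which each stage is an interchangeable callable; all function names below are hypothetical placeholders, not claim language:

```python
def predict_structure(sequence, align_fn, extract_fn, amplify_fn, predict_fn):
    """Run the four claimed stages in order; each stage is passed in as a callable."""
    msa = align_fn(sequence)         # 1. alignment query against the first database
    initial = extract_fn(msa)        # 2. feature extraction -> initial sequence feature
    amplified = amplify_fn(initial)  # 3. sequence feature amplification model
    return predict_fn(amplified)     # 4. structure information prediction
```

A toy invocation with stub stages shows only the data flow, not any real alignment or model.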
  2. The method according to claim 1, wherein a data distribution similarity between the first database and the second database is higher than a similarity threshold.
  3. The method according to claim 2, wherein the first database is a database obtained by randomly removing a specified proportion of data from the second database.
  4. The method according to claim 1, wherein
    the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
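As a minimal, hypothetical illustration of the fully convolutional variant for one-dimensional sequence data, the basic building block is a 'same'-padded 1-D convolution over the per-position feature sequence (plain NumPy, single channel, no nonlinearity; not the claimed implementation):

```python
import numpy as np

def conv1d_same(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-padded 1-D (cross-)correlation: output length equals input length."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)  # zero-pad both ends of the sequence
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])
```

Stacking such layers (with multiple channels and nonlinearities) keeps the output the same length as the input, which is what lets the model map an initial per-residue feature sequence to an amplified one of identical length.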
  5. The method according to claim 1, wherein the initial sequence feature and the amplified sequence feature are position-specific scoring matrices.
  6. The method according to any one of claims 1 to 5, wherein after the performing a sequence alignment query in a first database according to an amino acid sequence of a protein to obtain multiple sequence alignment data, the method further comprises:
    processing the initial sequence feature samples by using the sequence feature amplification model, to obtain amplified initial sequence feature samples; and
    updating the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples.
  7. The method according to claim 6, wherein the updating the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples comprises:
    performing loss function calculation according to the amplified initial sequence feature samples and the amplified sequence feature samples, to obtain a loss function value; and
    updating model parameters in the sequence feature amplification model according to the loss function value.
  8. The method according to claim 7, wherein the performing loss function calculation according to the amplified initial sequence feature samples and the amplified sequence feature samples to obtain a loss function value comprises:
    calculating a reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples; and
    obtaining the reconstruction error as the loss function value.
  9. The method according to claim 8, wherein the calculating a reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples comprises:
    calculating a root-mean-square reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples.
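For example, with the feature samples stored as NumPy arrays, a root-mean-square reconstruction error of this kind could be computed as follows (a sketch; the array names are illustrative assumptions):

```python
import numpy as np

def rms_reconstruction_error(amplified_output: np.ndarray,
                             target_feature: np.ndarray) -> float:
    """RMS error between the model's amplified initial sequence feature sample
    and the amplified sequence feature sample from the larger database."""
    return float(np.sqrt(np.mean((amplified_output - target_feature) ** 2)))
```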
  10. The method according to claim 7, wherein the updating the model parameters in the sequence feature amplification model according to the loss function value comprises:
    updating the model parameters in the sequence feature amplification model according to the loss function value when it is determined, according to the loss function value, that the sequence feature amplification model has not converged.
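This convergence-gated update can be sketched as a loop that keeps applying parameter updates only while the loss function value indicates non-convergence; the step function, threshold, and iteration cap below are illustrative assumptions, not part of the claim:

```python
def train_until_converged(params, loss_fn, step_fn,
                          threshold=1e-3, max_iters=10000):
    """Update parameters only while loss_fn(params) >= threshold (non-converged)."""
    for _ in range(max_iters):
        loss = loss_fn(params)
        if loss < threshold:           # judged converged: stop updating
            break
        params = step_fn(params, loss) # otherwise apply one parameter update
    return params
```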
  11. The method according to any one of claims 1 to 5, wherein the predicting structure information of the protein by using the amplified sequence feature comprises:
    predicting the amplified sequence feature by using a protein structure information prediction model, to obtain the structure information of the protein,
    the protein structure information prediction model being a model obtained through training with sequence features of protein samples and structure information of the protein samples.
  12. A protein structure information prediction apparatus, used in a computer device, the apparatus comprising:
    a data acquisition module, configured to perform a sequence alignment query in a first database according to an amino acid sequence of a protein, to obtain multiple sequence alignment data;
    an initial feature acquisition module, configured to perform feature extraction on the multiple sequence alignment data, to obtain an initial sequence feature;
    an amplified feature acquisition module, configured to process the initial sequence feature by using a sequence feature amplification model, to obtain an amplified sequence feature of the protein, the sequence feature amplification model being a machine learning model obtained through training with initial sequence feature samples and amplified sequence feature samples, the initial sequence feature samples being obtained by performing a sequence alignment query in the first database according to amino acid sequence samples, the amplified sequence feature samples being obtained by performing a sequence alignment query in a second database according to the amino acid sequence samples, and a data scale of the second database being greater than a data scale of the first database; and
    a structure information prediction module, configured to predict structure information of the protein by using the amplified sequence feature.
  13. The apparatus according to claim 12, wherein a data distribution similarity between the first database and the second database is higher than a similarity threshold.
  14. The apparatus according to claim 13, wherein the first database is a database obtained by randomly removing a specified proportion of data from the second database.
  15. The apparatus according to claim 12, wherein the sequence feature amplification model is a fully convolutional neural network for one-dimensional sequence data, a recurrent neural network model composed of multiple layers of long short-term memory (LSTM) units, or a recurrent neural network composed of bidirectional LSTM units.
  16. The apparatus according to claim 12, wherein the initial sequence feature and the amplified sequence feature are position-specific scoring matrices.
  17. The apparatus according to any one of claims 12 to 16, further comprising:
    an amplified sample acquisition module, configured to process the initial sequence feature samples by using the sequence feature amplification model, to obtain amplified initial sequence feature samples; and
    a model update module, configured to update the sequence feature amplification model according to the amplified initial sequence feature samples and the amplified sequence feature samples.
  18. The apparatus according to claim 17, wherein the model update module comprises:
    a loss function acquisition submodule, configured to perform loss function calculation according to the amplified initial sequence feature samples and the amplified sequence feature samples, to obtain a loss function value; and
    a parameter update submodule, configured to update model parameters in the sequence feature amplification model according to the loss function value.
  19. The apparatus according to claim 18, wherein the loss function acquisition submodule comprises:
    an error calculation unit, configured to calculate a reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples; and
    a loss function acquisition unit, configured to obtain the reconstruction error as the loss function value.
  20. The apparatus according to claim 19, wherein the error calculation unit calculates a root-mean-square reconstruction error between the amplified initial sequence feature samples and the amplified sequence feature samples.
  21. The apparatus according to claim 18, wherein the model update module is configured to
    update the model parameters in the sequence feature amplification model according to the loss function value when it is determined, according to the loss function value, that the sequence feature amplification model has not converged.
  22. The apparatus according to any one of claims 12 to 16, wherein the structure information prediction module comprises:
    a structure information acquisition submodule, configured to predict the amplified sequence feature by using a protein structure information prediction model, to obtain the structure information of the protein,
    the protein structure information prediction model being a model obtained through training with sequence features of protein samples and structure information of the protein samples.
  23. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the protein structure information prediction method according to any one of claims 1 to 11.
  24. A computer-readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the protein structure information prediction method according to any one of claims 1 to 11.
PCT/CN2020/114386 2019-10-30 2020-09-10 Protein structure information prediction method, apparatus, device and storage medium WO2021082753A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20882879.8A EP4009328A4 (en) 2019-10-30 2020-09-10 METHOD, DEVICE AND APPARATUS FOR PREDICTING PROTEIN STRUCTURE INFORMATION, AND STORAGE MEDIUM
JP2022514493A JP7291853B2 (ja) 2019-10-30 2020-09-10 タンパク質構造情報予測方法及び装置、コンピュータデバイス、並びにコンピュータプログラム
US17/539,946 US20220093213A1 (en) 2019-10-30 2021-12-01 Protein structure information prediction method and apparatus, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911042649.9 2019-10-30
CN201911042649.9A CN110706738B (zh) 2019-10-30 2019-10-30 Protein structure information prediction method, apparatus, device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/539,946 Continuation US20220093213A1 (en) 2019-10-30 2021-12-01 Protein structure information prediction method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021082753A1 true WO2021082753A1 (zh) 2021-05-06

Family

ID=69203871

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114386 WO2021082753A1 (zh) 2019-10-30 2020-09-10 蛋白质的结构信息预测方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US20220093213A1 (zh)
EP (1) EP4009328A4 (zh)
JP (1) JP7291853B2 (zh)
CN (1) CN110706738B (zh)
WO (1) WO2021082753A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114300038A * 2021-12-27 2022-04-08 山东师范大学 Multiple sequence alignment method and system based on improved biogeography-based optimization algorithm
CN114300038B * 2021-12-27 2023-09-29 山东师范大学 Multiple sequence alignment method and system based on improved biogeography-based optimization algorithm

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706738B (zh) 2019-10-30 2020-11-20 腾讯科技(深圳)有限公司 Protein structure information prediction method, apparatus, device and storage medium
CN111243668B (zh) 2020-04-09 2020-08-07 腾讯科技(深圳)有限公司 Molecular binding site detection method and apparatus, electronic device, and storage medium
CN111755065A (zh) 2020-06-15 2020-10-09 重庆邮电大学 Protein conformation prediction acceleration method based on virtual network mapping and cloud parallel computing
CN112289370B (zh) 2020-12-28 2021-03-23 武汉金开瑞生物工程有限公司 Protein structure prediction method and apparatus
CN112837204A (zh) 2021-02-26 2021-05-25 北京小米移动软件有限公司 Sequence processing method, sequence processing apparatus, and storage medium
CN113255770B (zh) 2021-05-26 2023-10-27 北京百度网讯科技有限公司 Compound property prediction model training method and compound property prediction method
CN113837036A (zh) 2021-09-09 2021-12-24 成都齐碳科技有限公司 Biopolymer characterization method, apparatus, device, and computer storage medium
CN115881211B (zh) 2021-12-23 2024-02-20 上海智峪生物科技有限公司 Protein sequence alignment method, apparatus, computer device, and storage medium
CN114613427B (zh) 2022-03-15 2023-01-31 水木未来(北京)科技有限公司 Protein three-dimensional structure prediction method and apparatus, electronic device, and storage medium
CN115116559B (zh) 2022-06-21 2023-04-18 北京百度网讯科技有限公司 Method, apparatus, device, and medium for determining and training atomic coordinates in amino acids
CN115240044B (zh) 2022-07-22 2023-06-06 水木未来(北京)科技有限公司 Protein electron density map processing method and apparatus, electronic device, and storage medium
CN117292743A (zh) 2022-09-05 2023-12-26 北京分子之心科技有限公司 Method, device, medium, and program product for predicting protein complex structure

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100057419A1 (en) * 2008-08-29 2010-03-04 Laboratory of Computational Biology, Center for DNA Fingerprinting and Diagnostics Fold-wise classification of proteins
CN106951736A (zh) * 2017-03-14 2017-07-14 齐鲁工业大学 一种基于多重进化矩阵的蛋白质二级结构预测方法
CN108197427A (zh) * 2018-01-02 2018-06-22 山东师范大学 基于深度卷积神经网络的蛋白质亚细胞定位方法和装置
CN109411018A (zh) * 2019-01-23 2019-03-01 上海宝藤生物医药科技股份有限公司 根据基因突变信息对样本分类的方法、装置、设备及介质
CN110706738A (zh) * 2019-10-30 2020-01-17 腾讯科技(深圳)有限公司 蛋白质的结构信息预测方法、装置、设备及存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7157266B2 (en) * 1999-01-25 2007-01-02 Brookhaven Science Associates Llc Structure of adenovirus bound to cellular receptor car
WO2002064743A2 (en) * 2001-02-12 2002-08-22 Rosetta Inpharmatics, Inc. Confirming the exon content of rna transcripts by pcr using primers complementary to each respective exon
CN103175873B (zh) * 2013-01-27 2015-11-18 福州市第二医院 基于目标dna重复序列自身增强放大信号的dna电化学传感器
CN104615911B (zh) * 2015-01-12 2017-07-18 上海交通大学 基于稀疏编码及链学习预测膜蛋白beta‑barrel跨膜区域的方法
CN105574359B (zh) * 2015-12-15 2018-09-14 上海珍岛信息技术有限公司 一种蛋白质模板库的扩充方法及装置
CN107563150B (zh) * 2017-08-31 2021-03-19 深圳大学 蛋白质结合位点的预测方法、装置、设备及存储介质
CN109147868B (zh) * 2018-07-18 2022-03-22 深圳大学 蛋白质功能预测方法、装置、设备及存储介质
CN109300501B (zh) * 2018-09-20 2021-02-02 国家卫生健康委科学技术研究所 蛋白质三维结构预测方法及用其构建的预测云平台
CN109255339B (zh) * 2018-10-19 2021-04-06 西安电子科技大学 基于自适应深度森林人体步态能量图的分类方法
CN110097130B (zh) * 2019-05-07 2022-12-13 深圳市腾讯计算机系统有限公司 分类任务模型的训练方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4009328A4 *

Also Published As

Publication number Publication date
CN110706738A (zh) 2020-01-17
EP4009328A4 (en) 2022-09-14
JP7291853B2 (ja) 2023-06-15
US20220093213A1 (en) 2022-03-24
JP2022547041A (ja) 2022-11-10
EP4009328A1 (en) 2022-06-08
CN110706738B (zh) 2020-11-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20882879; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 20882879.8; Country of ref document: EP
ENP Entry into the national phase
    Ref document number: 2022514493; Country of ref document: JP; Kind code of ref document: A
ENP Entry into the national phase
    Ref document number: 2020882879; Country of ref document: EP; Effective date: 20220301
NENP Non-entry into the national phase
    Ref country code: DE