CN115035958A - Method, system, device and storage medium for optimizing protein structure prediction - Google Patents


Info

Publication number
CN115035958A
CN115035958A
Authority
CN
China
Prior art keywords
video memory
data
openfold
module
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210602616.0A
Other languages
Chinese (zh)
Inventor
刘鑫 (Liu Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210602616.0A
Publication of CN115035958A
Legal status: Pending

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B45/00 ICT specially adapted for bioinformatics-related data visualisation, e.g. displaying of maps or networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B25/00 ICT specially adapted for hybridisation; ICT specially adapted for gene or protein expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Data Mining & Analysis (AREA)
  • Biotechnology (AREA)
  • Artificial Intelligence (AREA)
  • Genetics & Genomics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a system, a device and a storage medium for optimizing protein structure prediction, wherein the method comprises the following steps: modifying and perfecting the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2; segmenting the data in openfold to carry out preliminary video memory (GPU memory) optimization; performing profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and performing data segmentation on that module for further video memory optimization. The invention can quickly realize the alphafold2 training task, support the training of long-sequence proteins, relieve the heavy hardware-resource demands of training, and improve training performance.

Description

Method, system, device and storage medium for optimizing protein structure prediction
Technical Field
The present invention relates to the field of deep learning, and more particularly, to a method, system, device and storage medium for optimizing protein structure prediction.
Background
With the development of deep learning, its applications have become ever wider, and more and more fields, including the biopharmaceutical industry, regard deep learning as an important direction for future development. In the field of structural biology, protein structure prediction has long been a problem of great concern. Traditionally, a protein structure could only be obtained through repeated experiments with the aid of cryo-electron microscopy; the time spent on each structure is measured in years, consuming enormous manpower and material resources. In recent years, however, with the rapid development of deep learning, the accuracy of protein structures obtained by deep learning prediction has at times surpassed experimental results, saving researchers a great deal of time and energy.
As a leading algorithm for protein structure prediction, alphafold2 attracted wide attention as soon as it was open-sourced. For researchers, however, there are many obstacles to further research or commercial use of alphafold2. Firstly, only the inference code of alphafold2 is open, so the model cannot be retrained or fine-tuned. Secondly, model training requires a large amount of video memory: during fine-tuning, a single attention logits tensor occupies as much as 20.25 GB, so the training of long-sequence proteins cannot be supported, and the whole training process requires a great amount of hardware resources.
Openfold is an open source project that reproduces alphafold2 using pytorch. The reproduction is relatively complete compared with other projects, but there are still many bugs when running the code through a full training run. Fastfold is a current video memory optimization method for alphafold2 inference, but it optimizes only the evoformer module of alphafold2, and much work is still needed to apply fastfold to alphafold2 training and realize the training of long-sequence proteins.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, a system, a computer device, and a computer-readable storage medium for optimizing protein structure prediction that help researchers quickly implement the training process of alphafold2 and support the training of long sequences. With the method, not only can the complete training of alphafold2 be realized quickly, but the time of each iteration step in the training process can also be reduced, providing a convenient starting point for researchers to carry out further work based on alphafold2 and saving a large amount of time, material resources and manpower.
In view of the above objects, an aspect of the embodiments of the present invention provides a method for optimizing protein structure prediction, comprising the following steps: modifying and perfecting the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2; segmenting the data in openfold to carry out preliminary video memory optimization; performing profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and performing data segmentation on the module with the largest current video memory consumption to perform video memory optimization.
In some embodiments, the modifying and perfecting of the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2 includes: using a while loop to control openfold to read the files step by step, and training openfold with FP32 precision.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: copying the same data into a plurality of copies, and dividing the data through a dispersion (scatter) function so that each device is allocated a part of the data for calculation; and in response to the computation being complete, merging the data using an aggregation (gather) function.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: modifying the distributed training ddp of pytorch into the ddp of apex, and modifying the softmax operator into the softmax of pytorch.
In another aspect of the embodiments of the present invention, there is provided a system for optimizing protein structure prediction, including: a modification module configured to modify and perfect the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2; a first video memory module configured to segment the data in openfold to perform preliminary video memory optimization; an analysis module configured to perform profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and a second video memory module configured to perform data segmentation on the module with the largest current video memory consumption so as to perform video memory optimization.
In some embodiments, the modification module is configured to: use a while loop to control openfold to read the files step by step, and train openfold with FP32 precision.
In some embodiments, the first video memory module is configured to: copy the same data into a plurality of copies, and divide the data through a dispersion (scatter) function so that each device is allocated a part of the data for calculation; and in response to the computation being complete, merge the data using an aggregation (gather) function.
In some embodiments, the first video memory module is configured to: modify the distributed training ddp of pytorch into the ddp of apex, and modify the softmax operator into the softmax of pytorch.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the method as above.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: the method helps researchers quickly implement the training process of alphafold2 and supports the training of long sequences; it not only realizes the complete training of alphafold2 quickly, but also reduces the time of each iteration step in the training process, providing a convenient starting point for researchers to carry out further work based on alphafold2 and saving a large amount of time, material resources and manpower.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other embodiments can be obtained by using the drawings without creative efforts.
FIG. 1 is a schematic diagram of an embodiment of a method for optimizing protein structure prediction provided by the present invention;
FIG. 2 is a schematic diagram of an embodiment of a system for optimizing protein structure prediction provided by the present invention;
FIG. 3 is a schematic diagram of the hardware structure of an embodiment of the computer apparatus for optimizing protein structure prediction provided by the present invention;
FIG. 4 is a schematic diagram of an embodiment of a computer storage medium for optimizing protein structure prediction provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that the expressions "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that happen to share the same name. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention, and this note will not be repeated in the following embodiments.
In a first aspect of embodiments of the present invention, embodiments of a method for optimizing protein structure prediction are presented. FIG. 1 is a schematic diagram of an embodiment of the method for optimizing protein structure prediction provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, modifying and perfecting the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2;
S2, segmenting the data in openfold to carry out preliminary video memory optimization;
S3, performing profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and
S4, performing data segmentation on the module with the largest current video memory consumption to perform video memory optimization.
Combining structural biology with artificial intelligence, alphafold2 has attracted extensive attention from the scientific and biological communities. Because it comes from DeepMind, alphafold2 was trained on TPUs, so the code itself has limited support for GPUs, and the deep learning framework used is DeepMind's in-house framework jax, which is inconvenient to use. Code reproduction shows that the training requires a great amount of hardware resources; how to realize the training of alphafold2, reduce video memory occupation, and realize the training of long-sequence proteins is a problem that many scholars and researchers urgently need to solve. The invention provides a method for realizing alphafold2 training and video memory optimization that helps researchers quickly implement the training process of alphafold2 and supports the training of long sequences. With the method, not only can the complete training of alphafold2 be realized quickly, but the time of each iteration step in the training process can also be reduced, providing a convenient starting point for researchers to carry out further work based on alphafold2 and saving a large amount of time, material resources and manpower.
The openfold code is first modified and perfected to realize an initial run-through of the second generation deep learning neural network structure alphafold2.
In some embodiments, the modifying and perfecting of the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2 includes: using a while loop to control openfold to read the files step by step, and training openfold with FP32 precision.
Openfold is often interrupted in the data-reading stage because some data files do not exist. The main reason is that some of the input cif files have replacement files, and some replacement files themselves have further replacement files, so the files need to be read step by step using a while loop; for files that ultimately do not exist, a try/except operation returns None (null). In addition, in the model preprocessing stage, openfold uses FP16 as the training precision, but training with FP16 for long periods was found to produce NaN losses, so openfold needs to be trained with FP32 precision.
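The step-by-step file reading described above can be sketched as follows. This is an illustrative sketch rather than openfold's actual code: the function name `resolve_cif` and the dictionary-shaped replacement map are assumptions; only the while-loop chase of replacement files and the try/except fallback to None mirror the text.

```python
import os

# Hypothetical sketch of the file-reading fix described above; the function
# name and replacement-map structure are illustrative, not openfold's API.
def resolve_cif(pdb_id, replacement_map, data_dir="cif"):
    """Follow replacement pointers step by step until a real .cif file is found."""
    current = pdb_id
    # A replacement file may itself have a replacement, hence the while loop.
    while current in replacement_map:
        current = replacement_map[current]
    path = os.path.join(data_dir, current + ".cif")
    try:
        # Touch the file to confirm it actually exists on disk.
        with open(path) as handle:
            handle.read(1)
        return path
    except FileNotFoundError:
        # Files that do not exist at the end of the chain are returned as None.
        return None
```

Entries that resolve to None can then be skipped by the data pipeline instead of interrupting training.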
The data in openfold is then segmented to perform preliminary video memory optimization.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: copying the same data into a plurality of copies, and dividing the data through a dispersion (scatter) function so that each device is allocated a part of the data for calculation; and in response to the computation being complete, merging the data using an aggregation (gather) function.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: modifying the distributed training ddp of pytorch into the ddp of apex, and modifying the softmax operator into the softmax of pytorch.
Fastfold optimizes the video memory of the evoformer module in alphafold2. Its main idea is to split the data into multiple parts, feed the parts to different devices for calculation, and then merge the output results; that is, the same data is spread across multiple devices for calculation, thereby reducing video memory. The first step in applying this method to openfold is to split the data: the dataloader and the distributed sampler first need to be rewritten, and the same data copied into multiple copies, so that the devices processing the same data are assigned the same batch of data. The data is then split through scatter (a dispersion function) so that each device is allocated a part of the data for calculation, and gather (an aggregation function) is used to merge the data after the calculation is finished. Because fastfold's data merging involves communication and blocks (hangs) with the ddp of pytorch, we modify the distributed training ddp of pytorch into the ddp of apex. Besides reducing cache by splitting data, fastfold also modifies the layernorm and softmax operators, but the new softmax has many problems and needs to be changed back to the softmax of pytorch when in use.
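The scatter/compute/gather pattern described above can be illustrated with plain Python lists standing in for tensors and devices. The function names `scatter_rows` and `gather_rows` are hypothetical stand-ins for the scatter and gather operations, not fastfold's or pytorch's actual API.

```python
# Illustrative sketch of splitting the same data across devices and merging
# the partial results; lists stand in for tensors, list slices for devices.
def scatter_rows(data, num_devices):
    """Split rows so each 'device' holds (and allocates memory for) only a slice."""
    chunk = (len(data) + num_devices - 1) // num_devices
    return [data[i * chunk:(i + 1) * chunk] for i in range(num_devices)]

def gather_rows(parts):
    """Merge the per-device partial results back into one output."""
    merged = []
    for part in parts:
        merged.extend(part)
    return merged

# Each shard is computed independently (here: a toy elementwise square),
# so the peak memory per device covers only its own slice of the data.
shards = scatter_rows(list(range(8)), num_devices=4)
results = gather_rows([[x * x for x in shard] for shard in shards])
```

In the real setting the per-shard computation is an attention or evoformer block and the merge is a collective communication step, which is where the blocking interaction with pytorch's ddp arises.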
Profile analysis is then performed on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption, and data segmentation is performed on that module for video memory optimization. The profile analysis shows that the largest video memory consumer in the alphafold2 structure is the extra_msa_stack part, so the data-segmentation idea is used to optimize the video memory of this module as well.
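Selecting the target module from such a profile can be sketched as below. The per-module figures are invented for illustration; in practice they would come from profiling the alphafold2 forward pass, where the text identifies extra_msa_stack as the largest consumer.

```python
# Hypothetical per-module peak video memory (GB); the numbers are invented
# for illustration and would come from an actual profiler run.
per_module_mem_gb = {
    "input_embedder": 1.2,
    "extra_msa_stack": 14.8,
    "evoformer": 9.5,
    "structure_module": 2.1,
}

# The module with the largest consumption is the next target for data splitting.
worst = max(per_module_mem_gb, key=per_module_mem_gb.get)
```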
After optimization, with 256 sequences the video memory is reduced from 28 GB to 13.6 GB, a reduction of more than 50%, and the training performance is improved by 89%. With 384 sequences, which originally ran out of memory, training can now proceed normally with a video memory occupation of 30.8 GB, which is sufficient to support training on a 40 GB A100.
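The reported figures can be checked with a line of arithmetic (the numbers are taken directly from the text above):

```python
# 256-sequence case: 28 GB before optimization, 13.6 GB after.
before_gb, after_gb = 28.0, 13.6
reduction = (before_gb - after_gb) / before_gb   # fraction of memory saved

# 384-sequence case: 30.8 GB occupied, leaving headroom on a 40 GB A100.
headroom_gb = 40.0 - 30.8
```

The reduction works out to about 51%, consistent with the "more than 50%" claim for the 256-sequence case.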
It should be noted that, the steps in the embodiments of the method for optimizing protein structure prediction described above can be mutually intersected, replaced, added, or deleted, and therefore, these reasonable permutation and combination transformations should also fall within the scope of the present invention, and should not limit the scope of the present invention to the embodiments.
In accordance with the above objects, in a second aspect of the embodiments of the present invention, a system for optimizing protein structure prediction is provided. As shown in fig. 2, the system 200 includes the following modules: a modification module configured to modify and perfect the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2; a first video memory module configured to segment the data in openfold to perform preliminary video memory optimization; an analysis module configured to perform profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and a second video memory module configured to perform data segmentation on the module with the largest current video memory consumption so as to perform video memory optimization.
In some embodiments, the modification module is configured to: use a while loop to control openfold to read the files step by step, and train openfold with FP32 precision.
In some embodiments, the first video memory module is configured to: copy the same data into a plurality of copies, and divide the data through a dispersion (scatter) function so that each device is allocated a part of the data for calculation; and in response to the computation being complete, merge the data using an aggregation (gather) function.
In some embodiments, the first video memory module is configured to: modify the distributed training ddp of pytorch into the ddp of apex, and modify the softmax operator into the softmax of pytorch.
In view of the above object, a third aspect of an embodiment of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor performing the steps of: S1, modifying and perfecting the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2; S2, segmenting the data in openfold to carry out preliminary video memory optimization; S3, performing profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and S4, performing data segmentation on the module with the largest current video memory consumption to perform video memory optimization.
In some embodiments, the modifying and perfecting of the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2 includes: using a while loop to control openfold to read the files step by step, and training openfold with FP32 precision.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: copying the same data into a plurality of copies, and dividing the data through a dispersion (scatter) function so that each device is allocated a part of the data for calculation; and in response to the computation being complete, merging the data using an aggregation (gather) function.
In some embodiments, the segmenting of the data in openfold for preliminary video memory optimization includes: modifying the distributed training ddp of pytorch into the ddp of apex, and modifying the softmax operator into the softmax of pytorch.
Fig. 3 is a schematic diagram of a hardware structure of an embodiment of the computer apparatus for optimizing protein structure prediction according to the present invention.
Taking the device shown in fig. 3 as an example, the device includes a processor 301 and a memory 302.
The processor 301 and the memory 302 may be connected by a bus or other means, and fig. 3 illustrates a connection by a bus as an example.
The memory 302 is a non-volatile computer-readable storage medium, and can be used for storing non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the method for optimizing protein structure prediction in the embodiments of the present application. The processor 301 executes various functional applications of the server and data processing, i.e., implements a method of optimizing protein structure prediction, by running non-volatile software programs, instructions, and modules stored in the memory 302.
The memory 302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the method of optimizing protein structure prediction, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 302 may optionally include memory located remotely from processor 301, which may be connected to local modules over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Computer instructions 303 corresponding to one or more methods of optimizing protein structure prediction are stored in the memory 302 and when executed by the processor 301 perform the method of optimizing protein structure prediction in any of the method embodiments described above.
Any of the embodiments of a computer apparatus for performing the method for optimizing protein structure prediction described above may achieve the same or similar effects as any of the corresponding method embodiments described above.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs a method of optimizing protein structure prediction.
FIG. 4 is a schematic diagram of an embodiment of a computer storage medium for optimizing protein structure prediction according to the present invention. Taking the computer storage medium as shown in fig. 4 as an example, the computer readable storage medium 401 stores a computer program 402 which, when executed by a processor, performs the method as described above.
Finally, it should be noted that, as one of ordinary skill in the art can appreciate that all or part of the processes of the methods of the above embodiments can be implemented by a computer program to instruct related hardware, and the program of the method for optimizing protein structure prediction can be stored in a computer readable storage medium, and when executed, the program can include the processes of the embodiments of the methods as described above. The storage medium of the program may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A method for optimizing protein structure prediction, comprising the steps of:
modifying and perfecting the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2;
segmenting data in openfold to carry out preliminary video memory optimization;
performing profile analysis on the second generation deep learning neural network structure alphafold2 to determine the module with the largest current video memory consumption; and
performing data segmentation on the module with the largest current video memory consumption so as to optimize the video memory.
2. The method as claimed in claim 1, wherein the modifying and perfecting of the openfold code to realize an initial run-through of the second generation deep learning neural network structure alphafold2 comprises:
the files are gradually read by using while loop control openfold, and the openfold is trained with FP32 precision.
3. The method of claim 1, wherein the segmenting data in openfold for preliminary video memory optimization comprises:
copying the same data into a plurality of copies, and dividing the data through a dispersion function so that each device is allocated a part of the data for calculation; and
in response to the computation being completed, the data is merged using an aggregation function.
4. The method as claimed in claim 3, wherein the slicing the data in openfold for preliminary video memory optimization comprises:
the distributed training ddp of the pitch is modified into ddp of the apex, and the softmax operator is modified into the softmax of the pitch.
5. A system for optimizing protein structure prediction, comprising:
a modification module, configured to modify and improve the OpenFold code to achieve a preliminary end-to-end run of the second-generation deep learning neural network AlphaFold2;
a first video memory module, configured to slice the data in OpenFold to perform preliminary video memory optimization;
an analysis module, configured to perform profiling analysis on the second-generation deep learning neural network AlphaFold2 to determine the module with the largest current video memory consumption; and
a second video memory module, configured to perform data slicing on the module with the largest current video memory consumption to further optimize the video memory.
6. The system according to claim 5, wherein the modification module is configured to:
use a while loop to control OpenFold to read the files gradually, and train OpenFold with FP32 precision.
7. The system according to claim 5, wherein the first video memory module is configured to:
replicate the same data into multiple copies and split the data through a scatter function, so that each device is assigned a portion of the data for computation; and
in response to the computation being completed, merge the data using a gather function.
8. The system according to claim 7, wherein the first video memory module is further configured to:
replace the distributed training DDP of PyTorch with the DDP of Apex, and replace the softmax operator with the softmax of PyTorch.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, wherein the instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 4.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
CN202210602616.0A 2022-05-30 2022-05-30 Method, system, device and storage medium for optimizing protein structure prediction Pending CN115035958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602616.0A CN115035958A (en) 2022-05-30 2022-05-30 Method, system, device and storage medium for optimizing protein structure prediction


Publications (1)

Publication Number Publication Date
CN115035958A 2022-09-09

Family

ID=83122510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602616.0A Pending CN115035958A (en) 2022-05-30 2022-05-30 Method, system, device and storage medium for optimizing protein structure prediction

Country Status (1)

Country Link
CN (1) CN115035958A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination