WO2021217138A1 - Method for efficiently optimizing a phenotype with a combination of a generative and a predictive model - Google Patents


Info

Publication number
WO2021217138A1
Authority
WO
WIPO (PCT)
Prior art keywords
genotype
vectors
genotype vectors
model
sample
Prior art date
Application number
PCT/US2021/029177
Other languages
French (fr)
Inventor
Eduardo ABELIUK
Andres Igor Perez Manriquez
Juan Andres Ramirez NEILSON
Diego Francisco Valenzuela Iturra
Original Assignee
TeselaGen Biotechnology Inc.
Priority date
Filing date
Publication date
Application filed by TeselaGen Biotechnology Inc.
Publication of WO2021217138A1


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B: BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B20/00: ICT specially adapted for functional genomics or proteomics, e.g. genotype-phenotype associations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/047: Probabilistic or stochastic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Definitions

  • a recurrent problem in synthetic biology is to find the genetic sequence that optimizes, for a given biological system, the production of a specific molecule or compound, or more generally that optimizes a specific metric that characterizes the phenotype of a given biological system.
  • this search can be quite expensive because it requires numerous experiments. Evaluating the performance and characterizing the phenotype of different genetic variants can consume a lot of time and resources.
  • FIG. 1 illustrates a flowchart for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • Fig. 2 illustrates a flowchart for encoding genotype information in a plurality of experimental data points corresponding to a set of constraints as a plurality of experiential genotype vectors according to an exemplary embodiment.
  • FIG. 3 illustrates a data flow chart showing the process for generating encoded experiential genotype vectors according to an exemplary embodiment.
  • Fig. 4 illustrates a data flow chart showing the process for generating encoded sample genotype vectors according to an exemplary embodiment.
  • Fig. 5 illustrates a flowchart for training a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints according to an exemplary embodiment.
  • Fig. 6 illustrates a diagram of the parameter adjustment algorithm of the surrogate model according to an exemplary embodiment.
  • Fig. 7 illustrates a flowchart for training a genotype generation model based at least in part on a plurality of sample genotype vectors according to an exemplary embodiment.
  • FIG. 8 illustrates a flowchart for concurrently training both a generator model function and a discriminator model function with sample genotype vectors according to an exemplary embodiment.
  • FIG. 9 illustrates a representative diagram of the adversarial training framework used for training a generative model according to an exemplary embodiment.
  • Fig. 10 illustrates a diagram of the parameter adjustment algorithm of the generative model according to an exemplary embodiment.
  • Fig. 11 illustrates a flowchart for generating a plurality of new genotype vectors with the genotype generation model according to an exemplary embodiment.
  • Fig. 12 illustrates a flowchart for updating the predictive model and determining whether to generate additional constructs according to an exemplary embodiment.
  • Fig. 13 illustrates a flowchart for generating a result based at least in part on the plurality of result genotypes when the user requests a combinatorial design according to an exemplary embodiment.
  • FIG. 14 illustrates high-level flowchart of the methods and system components described herein according to an exemplary embodiment.
  • Fig. 15 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • Fig. 16 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with embedding according to an exemplary embodiment.
  • FIG. 17 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with an encoder/decoder architecture according to an exemplary embodiment.
  • Fig. 18 illustrates the components of a specialized computing environment for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • the word “can” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must).
  • the words “include,” “including,” and “includes” mean including, but not limited to.
  • Applicant has discovered a method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a combination of a generative and a predictive model.
  • the prediction model is constructed specifically to optimize the phenotype of a biological system by generating phenotype predictions relating to genotypes which have not been experimentally characterized and which meet a user’s requirements.
  • the disclosed method, apparatus, and computer-readable medium are further configured to reduce the computational complexity of exploring the search space of possible genotypes using a variety of specialized heuristics adapted to this domain and, in some cases, adapted to the hardware that is used to apply the prediction model.
  • the novel method, apparatus, and computer-readable medium disclosed herein generate and progressively adjust a generative model together with a predictive model. These models are configured to generate new data that is optimized in the sense of a specific metric associated with a particular quality of interest.
  • This algorithm is based on a generative model capable of generating “new” valid (real-looking) data configured to optimize a particular quality or property, together with a predictive model capable of predicting the value of that property for the new data, which of course has not yet been experimentally characterized.
  • the described process for generating and adjusting a generative and a predictive model is described in the context of optimizing the phenotype of a biological system. These models are used in tandem to optimize the exploration of untested genotypes.
  • the described method of utilizing Sequential Model Based Optimization (SMBO) with a predictive model with uncertainty estimation as a surrogate together with a generative model as a candidate provider is applicable to any task involving optimization of the input of a black-box function (i.e., control parameters of a dynamic system).
  • the disclosed method, apparatus, and computer-readable medium serve as a solution for optimization problems where the dimensionality of the solution space is very large and where evaluating an acquisition function on all its points is not possible. Also, in input spaces with high dimensionality, the distribution of experimental data points has a small compact support. This makes it difficult (probabilistically) to sample new data points that are valid candidates.
  • the generative model is used to sample valid data points by correctly exploring the multidimensional space around the support of the experimental data. Also, the predictive model is used by the generator in order to improve its sampling towards the generation of more efficient candidates that can better explore the solution space.
  • Fig. 1 illustrates a flowchart for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • the disclosed process utilizes experiential genotype vectors to train a phenotype prediction model (also referred to herein as the “predictive model,” the “prediction model,” or the “predictor model”) and sample genotype vectors to train a genotype generation model (also referred to as the “generative model,” the “generation model,” or the “generator model”).
  • the experiential genotype vectors are determined based upon experimental data and the sample genotype vectors are determined based upon a sample database.
  • the experiential genotype vectors can reflect the underlying structure of the genotype information in the experimental data and the sample genotype vectors can reflect the underlying structure of the genotype information in the sample data.
  • the underlying dataset can be transformed/encoded to generate the experiential genotype vectors.
  • if the genotype information in the sample database is stored in a different format or includes different attributes, the underlying dataset can be transformed/encoded to generate the sample genotype vectors.
  • the experiential genotype vectors can be generated from genotype information in an experimental database based at least in part on one or more constraints.
  • Fig. 2 illustrates a flowchart for encoding genotype information in a plurality of experimental data points corresponding to a set of constraints as a plurality of experiential genotype vectors according to an exemplary embodiment.
  • the one or more constraints are particular to a specific user’s goals, experimental conditions, limitations, or other factors relating to the desired output from the predictive model.
  • the one or more constraints can include a plurality of desired phenotypic attributes.
  • the plurality of desired phenotypic attributes correspond to the phenotypes that a user is seeking to optimize through use of the system and subsequent experimentation.
  • the one or more constraints can also include a plurality of available genotypes.
  • the plurality of available genotypes can correspond to the genotypes that a particular user is able to create or has access to for experimental purposes.
  • because the novel method and system disclosed herein limit the search space for the optimization problem through the use of a generative model, it is not necessary for the constraints to include a plurality of available genotypes.
  • the limitations on available genotypes can also be input into the system through the choice of training data for the generative model, as discussed in greater detail below.
  • genotype refers to a genetic constitution or sequence and phenotype refers to an observable characteristic resulting from the interaction of a genotype with a particular environment.
  • a phenotype can include, for example, the ability of a particular genotype to produce a specified molecule, compound or metabolite (determined by the titer of the molecule), bacterial growth (determined by optical density data), resistance of a strain to extreme conditions and temperature, salinity, or pH conditions, etc.
  • the constraints can be received from a user via an input interface or in a communication via a network interface.
  • the constraints can also be received from a software process or computing system via a messaging protocol, a network connection, or other communication mechanism.
  • the step of receiving the constraints can include the user specifying the pool of variants that should be explored for each bin.
  • the user can use just the labels or, alternatively, the genetic/amino-acid sequences if using a sophisticated embedding approach.
  • the user also needs to specify the property to be optimized (this means that the user has to provide, for example, the name of the column that contains the target value in the database).
  • the algorithm will always try to maximize that phenotypic value, so the user should be aware of that and perform a transformation on the property if required to obtain a benefit from the process.
  • a common transformation is, for example, multiplying all values by -1 in order to minimize the original phenotype measurement.
  • genotype information in a plurality of experimental data points corresponding to the set of constraints is encoded as a plurality of experiential genotype vectors, the plurality of experimental data points comprising the genotype information and phenotype information corresponding to the genotype information.
  • This step can include, for example, interfacing and communicating with an experimental database storing all experimental data points and extracting the plurality of experimental data points that correspond to the set of constraints.
  • the experimental database can be a distributed database, such as a cloud database, that is accessible to a plurality of researchers. Each researcher can then upload experimental results to the database in order to provide additional training data for training the model, as will be discussed further below.
  • Each experimental data point can include phenotypic measurements and corresponding genotype data.
  • the experimental data point can include genotype data corresponding to a particular genetic sequence, gene, and/or gene fragment and can also include phenotypic measurements that correspond to that particular genetic sequence, gene, and/or gene fragment.
  • the phenotypic measurements can be measurements that were experimentally determined in previous experiments.
  • the experimental data points can be configured to link genotype data with phenotypic measurements in a memory of the database, such as through a relational database, directed graph, or other techniques.
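  • Purely as an illustration (not from the patent), one experimental data point linking genotype data to phenotypic measurements could be sketched as a simple record type; all field names here are hypothetical.

```python
# Hypothetical sketch of one experimental data point linking genotype data
# to phenotypic measurements; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ExperimentalDataPoint:
    construct_id: str                 # identifier of the genetic construct
    genotype: list                    # one variant label per bin
    phenotype: dict = field(default_factory=dict)  # measured attributes

point = ExperimentalDataPoint(
    construct_id="c-0001",
    genotype=["geneA_v1", "geneB_v3"],
    phenotype={"titer_g_per_L": 1.7},
)
```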
  • this step can include encoding all genotype information in the plurality of experimental data points as a plurality of experiential genotype vectors, irrespective of the constraints. For example, when the experimental dataset is small (say a few hundred constructs), all of these constructs can be used to generate the plurality of experiential genotype vectors. The experimental dataset contains only the experiments that the scientists have already performed in the lab. This group of samples is a subset of all the possible candidates that can be built by recombining the alleles (variants) present in those constructs. Thus, the group of all possible candidates is bigger, and can contain thousands or hundreds of thousands of candidates.
  • the step of encoding genotype information in a plurality of experimental data points corresponding to the set of constraints as a plurality of experiential genotype vectors can include sub-steps 202A and 202B.
  • the plurality of experimental data points are identified in a database of experimental data points based at least in part on one or more of: at least one available genotype in the plurality of available genotypes and at least one desired phenotypic attribute in the plurality of desired phenotypic attributes.
  • This step can conduct a search of the database for all experimental data points that have genotypes matching or interchangeable with at least one genotype listed in the plurality of available genotypes.
  • This step can also conduct a search of the database for all experimental data points that have phenotypic measurements matching at least one desired phenotypic attribute in the plurality of desired phenotypic attributes.
  • phenotypic attributes can include, for example, the ability of a genotype to produce a specified molecule or compound.
  • the genotypes associated with the identified plurality of experimental data points are encoded as a plurality of experiential genotype vectors.
  • This encoding process is discussed in greater detail below.
  • the genotypes associated with the identified plurality of experimental data points can be encoded in other ways, such as by representing the genotypes using a categorical or nominal representation corresponding to categories/sequences/sub-sequences, etc.
  • the phenotypic measurements in the identified plurality of experimental data points can optionally also be encoded using one or more of the above-described schemes to later enable more efficient analysis.
  • a similar process to step 202B can be performed for generating the sample genotype vectors.
  • genotype information in a plurality of sample genotypes of a sample database can be encoded as the plurality of sample genotype vectors.
  • the encoding process can be the same encoding process used to generate the experiential genotype vectors and is discussed in greater detail below.
  • while sample genotypes can optionally be labeled and include phenotypic and other attributes, this is not required for training the generative model. All that is required for the sample genotypes is that they include genotype information (i.e., the data can be unlabeled).
  • the plurality of sample genotypes can be selected in any way from the sample database, such as through random sampling.
  • a user can optionally provide sample constraints relating to the sample genotypes to be selected. These constraints can be used to filter the sample genotypes selected from the sample database.
  • the user can exercise control over the results produced by the generative model through selection of the sample database itself. Since the training data for the generative model is sourced from the sample database, the selection of the sample database (and the selection of samples within that sample database) can be used to control the type of genotypes produced by the system. For example, when designing a protein (a sequence of amino acids) a user can select a sample database that includes only feasible proteins to minimize the likelihood that the generative model will produce sequences of amino acid letters that are not feasible proteins.
  • the sample database can also be the same database used to generate the experiential genotype vectors (i.e., the experimental data).
  • Fig. 3 illustrates a data flow chart showing the process for generating encoded experiential genotype vectors according to an exemplary embodiment.
  • constraints 302 including a plurality of desired phenotypic attributes 302A and, optionally, a plurality of available genotypes 302B can be received.
  • This receiving step can optionally include encoding the plurality of available genotypes 302B with an encoder 304 to generate encoded available genotype information 306.
  • This data flow is shown by the dashed arrows.
  • the encoded available genotype information 306 can include a plurality of available genotype vectors.
  • the constraints 302 are applied to the experimental data store 301, containing multiple experimental data points, to identify a plurality of experimental data points 303 corresponding to the constraints 302.
  • Data point 301 A illustrates an example of how genotype data can be linked to corresponding phenotype data within each experimental data point.
  • the identified experimental data points 303 will include genotype information 303B and phenotype information 303A corresponding to the genotype information 303B.
  • the genotype information 303B is then supplied to the encoder 304, which encodes it to generate encoded experiential genotype vectors 305.
  • the different encoding techniques that can be used by the encoder 304 are discussed in greater detail below.
  • Fig. 4 illustrates a data flow chart showing the process for generating encoded sample genotype vectors according to an exemplary embodiment.
  • a sample database 401 includes multiple samples, such as sample 401A, including genotype data. All that is required is that each of the samples includes genotype data, but of course the samples can optionally include additional data, such as phenotype information.
  • a plurality of identified samples 403 are selected and extracted from the sample database 401. As discussed earlier, these samples can be selected in any way, such as through random sampling, a user-defined process, or filtering based upon one or more sample constraints.
  • the identified samples 403, including sample genotype information 403B, are provided to the encoder 404.
  • the samples 403 are encoded by the encoder 404 to generate encoded sample genotype vectors 405.
  • the disclosed systems and method can be implemented using SMBO.
  • the SMBO strategy for biological systems can be applied using different ways to represent data. The most direct way is to just use labels. With the “label representation” the gene variants and promoter sequences are represented with nominal variables, so the model is expected to learn from data how these labels are related to each other and how they affect the outcome. This kind of representation is very easy to implement and test, and it doesn’t need the variants’ genetic sequences; it just needs labels to be able to distinguish between nominal categories among the features. The drawback is that, as it can’t use sequence information, the model could miss underlying biochemical information that could be useful for the prediction task.
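  • A minimal sketch of this label representation (illustrative only, not code from the patent; the bin and variant names are hypothetical): each construct becomes a concatenation of per-bin one-hot vectors carrying no sequence information.

```python
# Sketch of the "label representation": each bin's variants are nominal
# categories, one-hot encoded with no sequence information.
import numpy as np

bins = {
    "promoter": ["p1", "p2", "p3"],
    "gene":     ["gA", "gB"],
}

def encode_labels(construct):
    """Concatenate one one-hot vector per bin."""
    parts = []
    for bin_name, variants in bins.items():
        one_hot = np.zeros(len(variants))
        one_hot[variants.index(construct[bin_name])] = 1.0
        parts.append(one_hot)
    return np.concatenate(parts)

print(encode_labels({"promoter": "p2", "gene": "gA"}))  # [0. 1. 0. 1. 0.]
```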
  • sequences can be represented as vectors that lie in a multidimensional space. These vectors encode sequence information and allow relationships in the multidimensional space that make biochemical and biophysical sense. This dense representation in a continuous space gives the model access to information about the sequences that allows it to identify, extract, and exploit the relationships between them.
  • n can take values between 1 and the maximum length of all the sequences, where length is measured in terms of the smallest subunits into which it is possible to divide the sequence.
  • for amino acid sequences, these n-grams correspond to (sub)sequences of n residues; for nucleotide sequences, they correspond to (sub)sequences of n nucleotides or codons, etc.
  • a parametric model, such as a neural network with one or more layers, can be utilized to learn a multidimensional representation for each token. This can be done using a representation learning algorithm (such as Word2Vec, GloVe, FastText, Autoencoders, Variational Autoencoders (VAEs), ProtVec and GeoVec, dna2vec, etc.).
  • the above method allows for representation of larger sequences using, for example, a weighted sum of n-grams.
  • This weighting can be learned by a parametric model such as a neural network with one or more layers. It can also be the arithmetic sum, the average, or a weighting by the inverse of the frequency of the tokens, among others.
  • the resulting representation contains enough information, for example, to group the sequences according to their biochemical and biophysical properties.
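  • A toy sketch of this n-gram aggregation, assuming a randomly initialized embedding table that stands in for one learned by a Word2Vec/ProtVec-style algorithm, and a plain average standing in for a learned weighting:

```python
# Sketch: represent a sequence as an aggregate of its n-gram (k-mer)
# embeddings. The embedding table here is random; in practice it would be
# learned with a representation learning algorithm.
import numpy as np

N, DIM = 3, 8                       # n-gram size and embedding dimensionality
rng = np.random.default_rng(0)
embedding_table = {}

def ngrams(seq, n=N):
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def embed(seq):
    vecs = []
    for gram in ngrams(seq):
        if gram not in embedding_table:          # lazily assign a vector
            embedding_table[gram] = rng.normal(size=DIM)
        vecs.append(embedding_table[gram])
    return np.mean(vecs, axis=0)                 # simple average aggregation

print(embed("MKTAYIAKQR").shape)                 # (8,)
```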
  • the SMBO might take advantage of this and use a surrogate model with an analytical optimum (such as a Gaussian Process) or a differentiable one (such as Deep Ensembles, Deep Gaussian Processes, Bayesian Networks, or other Bayesian approaches).
  • the optimum can then be searched for by means of the Gradient Descent algorithm, Stochastic Gradient Descent, or similar methods.
  • the above-described encoding/embedding and corresponding decoding schemes can be utilized for encoding/decoding the experiential genotype vectors (such as by encoder 304 in Fig. 3) and the sample genotype vectors (such as by encoder 404 in Fig. 4).
  • a phenotype prediction model is trained based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints.
  • the phenotype prediction model is a surrogate model (sometimes referred to as a metamodel or a substitute model) that is used to approximate an objective function.
  • a surrogate model (also referred to as a “surrogate function”) is a model that approximates the behavior of a more complex model or physical system as closely as possible. Surrogate models are explained in greater detail below and can be utilized when the computational complexity of a physical system, experimental data points, and/or constraints would result in computationally indeterminable or infeasible training or application steps.
  • Finding a suitable surrogate function may also be a non-trivial task. One approach is to represent it as a parametric model $\hat{f}_\theta$ with parameters $\theta$ such that $\hat{f}_\theta(x) \approx f(x)$. Attaining the optimal parameters is in itself another optimization problem that may be expressed as $\theta^* = \arg\min_\theta M(\hat{f}_\theta(x), f(x))$, where $M$ is some measure of distance between its arguments.
  • Sequential Model Based Optimization is used to train the surrogate model and is explained in greater detail below.
  • SMBO algorithms are a family of optimization methods based on the use of a predictive model to iteratively search for the optimum of an unknown function. They were originally designed for experimental design and oil exploration. SMBO methods are generally applicable to scenarios in which a user wishes to minimize some scalar-valued function $f(x)$ that is costly to evaluate. These methods progressively use the data that is compiled from a process or objective function to adjust a model (i.e., the surrogate model). This model is used on each iteration to make predictions of the objective function over a set of candidates, which are ranked according to their predicted score. On each iteration, the top-ranked candidate is suggested to be evaluated for the next iteration.
  • SMBO has never been applied before to the optimization of a biological system, which poses specific challenges that are addressed by the methods, apparatuses, and computer-readable media disclosed herein.
  • When ranking candidates, SMBO methods usually use a scalar score that combines the predicted value with an estimation of its uncertainty for each sample.
  • the intuition behind using the uncertainty is that the reliability of the predictions may decrease on unexplored areas of the solution space.
  • the consideration of an uncertainty term in the acquisition function promotes the exploration and the diversity of recommendations, helping to avoid local optima.
  • There are many options for acquisition functions in the literature. One of the most commonly used is the expected improvement (EI).
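  • For a Gaussian predictive distribution, EI has a standard closed form. The sketch below (written for maximization, consistent with the convention above that the algorithm always maximizes the phenotypic value) is illustrative rather than code from the patent.

```python
# Expected improvement (EI) for maximization under a Gaussian prediction.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """mu, sigma: surrogate mean/std at a candidate; f_best: current optimum."""
    sigma = np.maximum(sigma, 1e-12)        # avoid division by zero
    z = (mu - f_best) / sigma
    return (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(mu=1.2, sigma=0.3, f_best=1.0))
```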
  • One well-known approach is Bayesian Optimization, which uses Gaussian Processes (GP) as a surrogate model. This approach has been successfully applied in many fields; however, GP modeling cannot be applied directly to discrete variables (like genotypes).
  • Other approaches include Hyperopt (or the TPE algorithm), which uses a Tree-Structured Parzen Estimator, and SMAC, which uses Random Forests (RF) as the surrogate model.
  • $f(x^*)$ is the current optimum value and $\hat{f}(x)$ is the value of the surrogate’s prediction.
  • $\hat{f}(x)$ should be associated with a probability function.
  • Ensemble models (e.g., Random Forests, Deep Neural Network Ensembles, implicit ensembles) or Bayesian models (e.g., Deep Gaussian Processes, etc.) can be used for this purpose.
  • the RF prediction is used as an estimation of the statistical mean of the surrogate model’s predictions, for which a Gaussian distribution is assumed.
  • the calculation of the variance of the prediction considers the RF estimators’ deviation and the leaf training variance for each tree. Both estimations are combined using the law of total variance.
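  • A sketch of this mean/variance estimation, assuming a scikit-learn random forest; the leaf-variance bookkeeping is a simplified illustration of the law-of-total-variance combination described above, not the patent's exact implementation.

```python
# Gaussian surrogate from a random forest: mean = average tree prediction;
# variance = mean leaf training variance + variance of the tree means.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_mean_var(forest, X_train, y_train, X):
    tree_means, leaf_vars = [], []
    for tree in forest.estimators_:
        tree_means.append(tree.predict(X))
        train_leaves = tree.apply(X_train)   # leaf id of each training row
        leaf_var = {leaf: y_train[train_leaves == leaf].var()
                    for leaf in np.unique(train_leaves)}
        leaf_vars.append([leaf_var[l] for l in tree.apply(X)])
    tree_means, leaf_vars = np.array(tree_means), np.array(leaf_vars)
    # law of total variance: Var(y) = E[Var(y|tree)] + Var(E[y|tree])
    return tree_means.mean(axis=0), leaf_vars.mean(axis=0) + tree_means.var(axis=0)

X_train = np.random.rand(60, 4); y_train = np.random.rand(60)
rf = RandomForestRegressor(n_estimators=20).fit(X_train, y_train)
mu, var = rf_mean_var(rf, X_train, y_train, np.random.rand(5, 4))
```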
  • the new acquisition function is maximized to obtain the next candidate.
  • there are several heuristics that could be used to decide how to fabricate $x_j$’s evaluation value (e.g., use the mean predicted value $\mu$; the mean prediction plus the deviation $\mu + \sigma$; or the prediction mean minus the deviation $\mu - \sigma$, etc.).
  • Fig. 5 illustrates a flowchart for training a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints according to an exemplary embodiment.
  • one or more parameters are determined for the surrogate model, the one or more parameters being configured to maximize accuracy of the surrogate model while reducing a computational complexity of training and applying the model.
  • Fig. 6 illustrates a diagram of the parameter adjustment algorithm of the surrogate model according to an exemplary embodiment.
  • the surrogate is a model that not only makes predictions, but also quantifies the uncertainty of its estimations on data that have not yet been observed (i.e., not yet evaluated by the oracle, described in greater detail below). To adjust its parameters, it is required to observe a threshold amount of data that will depend on the nature of the problem as well as the architecture of the model.
  • Data points labeled by the oracle are used (Oracle samples).
  • the Oracle samples refer to real-world results, such as the experimental data points previously discussed (linking genotype data to phenotype data).
  • the Oracle samples can be derived in ways other than experimentation. For example, in certain scenarios, it may be possible to mathematically derive results by modeling the molecular/biological behavior. These mathematically derived results can also correspond to real-world behavior.
  • the tuple $(x_s, y_s)$ represents a labeled data point (where $x_s$ refers to a sample point and $y_s$ refers to its evaluation in the oracle), and $\hat{f}(x_s)$ corresponds to the surrogate model’s evaluation of the given sample.
  • the update of the parameters of the surrogate model is done so as to reduce the cost function that receives $(y_s, \hat{f}(x_s))$ as arguments; this update is indicated by the dotted arrow.
  • the SMBO logic can be applied to determine the next points to evaluate experimentally and to successively update the predictive model until the most optimal phenotype has been determined.
  • an objective function is determined based at least in part on the plurality of desired phenotypic attributes.
  • the process of recommending genotypes for experimentation involves maximization of the objective function, as well as maximization of an acquisition function and additional steps, as discussed in greater detail further below.
  • the objective function is iteratively adjusted by repeatedly selecting one or more experiential genotype vectors in the plurality of experiential genotype vectors that maximize an acquisition function of the phenotype prediction model and updating the objective function based at least in part on one or more experimentally-determined phenotypic attributes corresponding to the one or more experiential genotype vectors. For example, this can be performed by modeling the plurality of experiential genotype vectors as a random forest and training the random forest using the corresponding experimentally-determined phenotypic attributes.
  • The process of step 503 is repeated until all of the experiential genotype vectors in the plurality of experiential genotype vectors are processed. Specifically, at step 504 a determination is made regarding whether there are additional experiential genotype vectors in the plurality of experiential genotype vectors that have not yet been processed. If so, then step 503 repeats. Otherwise, the training is terminated at step 505. A toy sketch of this loop follows.
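  • The following runnable toy sketch of the iterative loop uses a random forest surrogate and a simple mean-plus-deviation acquisition score (the patent also mentions expected improvement); the oracle here is a synthetic stand-in for costly experimental evaluation, and all names are illustrative.

```python
# Toy SMBO loop: fit surrogate, rank candidates by mu + sigma, evaluate the
# top one with the (synthetic) oracle, and refit.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def smbo(candidates, oracle, n_iter=10, n_init=3):
    X = [candidates.pop(0) for _ in range(n_init)]       # cold-start designs
    y = [oracle(x) for x in X]
    for _ in range(n_iter):
        rf = RandomForestRegressor(n_estimators=50).fit(X, y)
        per_tree = np.array([t.predict(np.array(candidates))
                             for t in rf.estimators_])
        mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0)
        best = int(np.argmax(mu + sigma))                # explore + exploit
        X.append(candidates.pop(best))                   # recommended candidate
        y.append(oracle(X[-1]))                          # "experimental" result
    return X[int(np.argmax(y))], max(y)

rng = np.random.default_rng(1)
pool = [list(v) for v in rng.integers(0, 2, size=(200, 10))]
best_x, best_y = smbo(pool, oracle=lambda x: sum(x) + rng.normal(0, 0.1))
```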
  • the novel method and system disclosed herein utilizes a genotype generation model (also referred to as the “generative model” or the “generator model”) to generate candidate genotypes.
  • optimization problems are subject to circumstances or constraints on the system that limit the region of feasible solutions. Although in some cases it is difficult to know the specific rules that define this solution space, the above optimization problem can be reformulated as problem 1.1: find $x^* = \arg\max_{x \in \Omega} f(x)$, where $\Omega$ denotes the (possibly unknown) feasible solution space.
  • the objective is a black-box function and the feasibility of obtaining evaluations may be restricted by budget constraints of different types: computational, temporal, economic, or others; and/or
  • SMBO methods consider the optimization of an acquisition function to obtain candidates (new samples from the solution space to be tested).
  • Some Bayesian Optimization approaches use quasi-Newton methods such as BFGS or L-BFGS, which explore the solution space starting from some initialization points to optimal values of the surrogate function by following the direction of an approximate gradient.
  • Most of these methods require the restrictions that describe the solution space to be specifically defined.
  • Other methods generate random samples (or alternatively, samples are selected from all possible candidates by heuristic rules) to make massive evaluations of the surrogate function in order to search for optima.
  • Those methods also require the restrictions that confine the solution space to be explicitly defined.
  • As an example, consider the case of designing a protein (a sequence of amino acids) so that its properties optimize a specific function. This can be defined as an optimization problem which aims to find a set of amino acid sequences $x^*$ that optimize said function, subject to an unknown set of constraints which define the space of feasible proteins.
  • the dimensionality of the domain space (the domain of $f$) is vast due to the immense number of possible amino acid sequences, which is roughly given by $21^l$, where $l$ represents the length of a sequence within the feasible range and 21 is the number of possible amino acids.
  • let $P_{G_\theta}$ be the probability distribution of the generator model taking as input a random variable $z$, and let $M$ be some measure of probability distribution distance (e.g., the Kullback-Leibler divergence); optimization 1.3, $\theta^* = \arg\min_\theta M(P_{G_\theta}, P_{evidence})$, searches for the optimum parameters $\theta^*$ such that both distributions become the same.
  • this solution space could be represented by a subset of all real known protein sequences. These can be found in online public protein databases such as the “Worldwide Protein Data Bank.” With these elements, the parameters of the generator model can be fitted to model the probability distribution of this much more compact and tractable solution space by solving optimization 1.3. Then, this generator can be used within an SMBO process: one could simply sample a set of “new” elements from this distribution, rank them according to an acquisition function, and generate a list of “candidates” from the top-scored samples to be evaluated on the objective function $f(\cdot)$. Then, just as in any common SMBO approach, repeat this for each iteration of the optimization process of problem 1.1.
  • This approach is related to “Transfer Learning.” In Deep Learning, this term refers to the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned.
  • since the generator model can be trained using a dataset that is not specific to the optimization task, the approach may be described as a way of applying Transfer Learning to SMBO.
  • This relation implies the sharing of some interesting properties that come from this kind of learning, like model reusability (e.g., a protein generator model can be applied to several distinct tasks that involve proteins) and the possibility of achieving successful results without the need for a huge number of task-specific samples.
  • a genotype generation model is trained based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors.
  • Fig. 7 illustrates a flowchart for training a genotype generation model based at least in part on a plurality of sample genotype vectors according to an exemplary embodiment.
  • the genotype generation model can be a Generative Adversarial Network (GAN). These networks seek to train a generative model from a set of input data. During training, the generator is encouraged to fit the probability distribution that characterizes this data set (i.e., the evidence probability distribution). Generally, the goal is to use this trained generative model to create new data with properties similar to the original set.
  • a generator model function having a plurality of trainable generator parameters is stored.
  • the generator model function is configured to mimic the distribution of the plurality of sample genotype vectors.
  • the generator model function $G$ with trainable parameters $\theta_G$ mimics the distribution of the input data. For better readability the subindex $\theta_G$ is omitted in this document, but it is implicit in the $G$ function.
  • a discriminator model function having a plurality of trainable discriminator parameters is stored.
  • the discriminator model function is configured to estimate a probability that a data sample comes from the plurality of sample genotype vectors instead of from the generator model function.
  • the discriminator model function $D$ with trainable parameters $\theta_D$ estimates the probability that a data sample comes from the real evidence distribution instead of from the synthetic generator distribution. For better readability the subindex $\theta_D$ is omitted in this document, but it is implicit in the $D$ function.
  • the adversarial training procedure involves training both of these functions/models concurrently.
  • the generator model faces an "adversary": the discriminator model, trained to determine if a sample comes from the distribution of the generative model or from the distribution of the original real data.
  • the weights/parameters of these networks are adjusted using optimization methods based on gradient ascent/descent in which an objective function is maximized/minimized.
  • GANs are based on a “minimax game” between one agent (a neural network) and another agent (another neural network), and are defined as a “zero-sum game” under the Game Theory approach.
  • a special objective function is used for the adversarial training process.
  • the special objective function is a minimax objective function that is configured to be minimized by the generator model function and maximized by the discriminator model function.
  • the minimax objective function is one in which one agent attempts to minimize the function and the other agent attempts to maximize it.
  • the generator agent tries to minimize the objective function while the discriminator agent tries to maximize it:
  • $\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{evidence}}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))]$, where $x$ are the real samples coming from the evidence probability distribution $P_{evidence}$; $G(z)$ are the synthetic generated samples coming from the generator distribution, which is defined by the function $G$; and $z$ is the generator input, which is sampled from a simple noise distribution $P_z$ such as a uniform or normal (Gaussian) distribution.
  • both the generator model function and the discriminator model function are concurrently trained with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point, which is a minimum with respect to the strategy of one player and a maximum with respect to the strategy of the other one.
  • FIG. 8 illustrates a flowchart for concurrently training both a generator model function and a discriminator model function with sample genotype vectors according to an exemplary embodiment.
  • at step 801, one or more sample genotype vectors are sampled from the plurality of sample genotype vectors. Step 801 is repeated so that one or more sample genotype vectors are repeatedly sampled from the plurality of sample genotype vectors.
  • at step 802, one or more generated genotype vectors are generated with the generator model function. Step 802 is repeated so that one or more generated genotype vectors are repeatedly generated.
  • the discriminator model function is iteratively applied to the one or more sample genotype vectors and the one or more generated genotype vectors until the discriminator model function cannot distinguish between the one or more sample genotype vectors and the one or more generated genotype vectors. As shown in Fig. 8, this step repeats after each iteration. Application of the discriminator model function alternates between the one or more sample genotype vectors and the one or more generated genotype vectors.
  • the training process alternates between the sample genotype vectors and the generated genotype vectors.
  • the sequence of steps shown in Fig. 8 can be 801 → 803 → 802 → 803 → 801 → 803 ... etc. This allows the minimax objective function to reach the saddle point and terminate the training.
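  • The alternation can be sketched as a standard GAN training step (illustrative PyTorch code, not from the patent; architectures and dimensions are arbitrary): sample real vectors (801), generate synthetic ones (802), and update the discriminator and then the generator (803).

```python
import torch
import torch.nn as nn

DIM, NOISE = 32, 16
G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))
D = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real):
    ones = torch.ones(len(real), 1); zeros = torch.zeros(len(real), 1)
    fake = G(torch.randn(len(real), NOISE))
    # discriminator turn: maximize log D(real) + log(1 - D(fake))
    opt_d.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward(); opt_d.step()
    # generator turn: fool the discriminator (non-saturating form)
    opt_g.zero_grad()
    g_loss = bce(D(fake), ones)
    g_loss.backward(); opt_g.step()

for _ in range(200):                      # alternate until near convergence
    train_step(torch.randn(64, DIM))      # stand-in for sample genotype vectors
```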
  • Fig. 9 illustrates a representative diagram of the adversarial training framework used for training a generative model according to an exemplary embodiment.
  • the mathematical terms represent the gradients used to update the parameters of the generative and discriminative networks that are obtained from equation 1.1.
  • the term with the symbol $\nabla_{\theta_d}$ corresponds to the gradient taken with respect to the parameters of the Discriminator, and the term with the symbol $\nabla_{\theta_g}$ corresponds to the gradient taken with respect to the parameters of the Generator.
  • the Discriminator (D) is trained to "discern” (or classify) if the samples come from the Generator (G) or from the Real data (R).
  • the Discriminator (D) models a cost function related to the probability of performing the classification correctly. This signal serves to iteratively adjust, through the gradients shown, both the Generator (G) and the Discriminator (D) parameters. The updates of the weights of both networks are done in alternating turns.
  • the source of the samples to be discriminated is determined by the action represented by the "switch" at the center of the image.
  • the sampling action is represented by the letter S.
  • To sample from the distribution of the real data (R), an example is simply chosen randomly from said set. On the other hand, to sample from the generator, a random sample from a random noise source distribution is selected and then passed through the generator function G.
  • a GAN architecture converges when the Discriminator and the Generator reach a Nash Equilibrium. That is, the “game” ends when the optimization converges into a “saddle point” of the objective function, which is a minimum with respect to the strategy of one player and a maximum with respect to the strategy of the other one. This “competition” leads both players to optimize themselves until the generated samples are indistinguishable from the real samples (i.e., the generator distribution matches the real distribution).
  • Fig. 10 illustrates a diagram of the parameter adjustment algorithm of the generative model according to an exemplary embodiment. The operation of the parameter adjustment algorithm is described in greater detail below.
  • Fig. 10 shows a diagram of the parameter adjustment algorithm of the generative model G , which is done under a modified version of the generative adversarial training framework.
  • the discriminator D (or critic) provides an estimate of the Wasserstein distance between the real and generated distributions, and the generator is trained to minimize the Wasserstein distance estimate given by the critic.
  • feedback from the surrogate model can be incorporated by adding a regularization term to the generator cost function, and updating its parameters while the Discriminator's weights are fixed.
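  • A sketch of that modified generator objective, assuming a WGAN-style critic (no output sigmoid) and a differentiable surrogate; the weighting coefficient `lam` and the exact form of the regularization term are assumptions, not taken from the patent.

```python
# Sketch: generator cost = critic's Wasserstein distance estimate plus a
# regularization term pushing samples toward high surrogate predictions.
# The critic's weights are held fixed during this generator update.
import torch

def generator_loss(generator, critic, surrogate, z, lam=0.1):
    fake = generator(z)
    wasserstein_term = -critic(fake).mean()    # minimize estimated distance
    surrogate_term = -surrogate(fake).mean()   # reward predicted high scorers
    return wasserstein_term + lam * surrogate_term
```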
  • a plurality of new genotype vectors are generated with the genotype generation model.
  • Fig. 11 illustrates a flowchart for generating a plurality of new genotype vectors with the genotype generation model according to an exemplary embodiment.
  • one or more parameters are stored, the one or more parameters including a batch size and a selection rate.
  • a selection method is used to choose the most promising candidates resulting from the entire process to be evaluated by the Oracle (e.g., experimental testing). This selection method can be dependent on a number of different parameters, which can be provided by a user, set to some default value, or algorithmically determined.
  • One of the parameters used can be the batch size, which describes the number of candidates to be evaluated by the Oracle on each SMBO iteration.
  • Another parameter can be the selection rate, a value between 0 and 1 that denotes the size of the quantile of top candidates to be selected.
  • a set of new genotype vectors is generated with the generator model function.
  • the phenotype prediction model is applied to the plurality of new genotype vectors to generate a plurality of scores, the phenotype prediction model being configured to predict one or more phenotypic attributes of the new genotype vectors.
  • This step can include applying the objective function to the plurality of new genotype vectors to generate a plurality of prediction scores corresponding to the plurality of new genotype vectors.
  • This step can also include applying an acquisition function of the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of acquisition scores corresponding to the plurality of new genotype vectors. Additionally, this step can include determining an uncertainty score associated with each of the new genotype vectors in the plurality of new genotype vectors.
  • a plurality of result genotypes are determined based at least in part on a ranking of the plurality of new genotype vectors according to the plurality of scores.
  • This step can include ranking the plurality of new genotype vectors based at least in part on the plurality of prediction scores corresponding to the plurality of new genotype vectors and filtering out a percentage of the plurality of new genotype vectors below a predetermined ranking percentage to generate a plurality of result genotype vectors.
  • the predetermined ranking percentage can be set by a user, set to some default value, based upon prior results or the particulars of the constraints or the experimental data set, or based upon implementation details of the particular predictive model type utilized.
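  • A sketch of this rank-and-filter step, assuming candidates have already been scored by the objective (prediction scores) and the acquisition function; the two-stage cut below (prediction quantile first, then acquisition ranking) is one plausible reading of the steps above, with illustrative names.

```python
import numpy as np

def select_batch(candidates, pred_scores, acq_scores,
                 selection_rate=0.2, batch_size=8):
    order = np.argsort(pred_scores)[::-1]                     # best first
    keep = order[: max(1, int(len(order) * selection_rate))]  # top quantile
    best = keep[np.argsort(np.asarray(acq_scores)[keep])[::-1][:batch_size]]
    return [candidates[i] for i in best]

picked = select_batch([f"g{i}" for i in range(100)],
                      np.random.rand(100), np.random.rand(100))
```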
  • a result is generated based at least in part on the plurality of result genotypes, the result indicating one or more genetic constructs for testing.
  • this result can be a batch of single genetic constructs for testing or a combinatorial sequence that specifies multiple genetic constructs for testing.
  • a user can optionally select whether they wish to receive a list of single genetic constructs or a combinatorial sequence.
  • “Construct” is used as a synonym of “candidate.” Each construct represents a particular combination of genotype variants. For example, a combinatorial DNA design of complexity N will lead to the assembly of N constructs.
  • the step of generating a result can include ranking the plurality of result genotype vectors based at least in part on an acquisition score associated with each result genotype vector and selecting one or more result genotype vectors in the plurality of result genotype vectors based at least in part on the ranking.
  • This step can further include decoding the result genotype vectors to present the genotype information in a format that the user can understand and utilize or create in an experimental setting.
  • This step can be configured to select only the top-ranked result genotype vector.
  • the system must determine whether sufficient constructs have been generated and otherwise take the appropriate actions to update the model and generate additional constructs.
  • Fig. 12 illustrates a flowchart for updating the predictive model and determining whether to generate additional constructs according to an exemplary embodiment.
  • one or more selected result genotype vectors are the output of an initial application (evaluation) of the predictive model.
  • the result genotypes corresponding to the selected result genotype vectors are provided to a user.
  • the user experimentally determines phenotype data (i.e., Oracle determined phenotype measurements) corresponding to the result genotypes.
  • the phenotype information can be determined by the Oracle by any of the methods discussed herein, such as experimental testing or mathematical modeling of real-world values.
  • the experimental data points are updated with the genotype data of the results and the corresponding phenotype data.
  • a further step can be performed of re-training the phenotype prediction model based at least in part on the one or more result genotype vectors, the corresponding phenotype information, and the one or more constraints discussed previously.
  • phenotypes corresponding to the selected result genotype vector are algorithmically generated. These generated phenotypes are fabricated values and can be generated in a variety of ways. One way to generate these phenotypes is to use a phenotype prediction generated by the phenotype prediction model. Another way is to use the phenotype prediction minus the standard deviation of the prediction. Yet another way is to use some other linear combination of the predicted phenotype and its corresponding standard deviation. After the phenotypes are generated, the experimental data points are updated with the genotype data of the results and the corresponding phenotype data at step 1207.
  • at step 1208, the steps of encoding genotype information in the experimental data points, training the predictive model and optionally the generator model, generating new genotype vectors, applying the model, determining result genotypes, and generating a result are repeated with the updated experimental data points.
  • a determination of phenotype data corresponding to the selected one or more result genotype vectors is made and the plurality of experimental data points are updated with the phenotype data and genotype data corresponding to the selected one or more result genotype vectors.
  • the predictive model can be applied to generate either one result genetic construct at a time or multiple genetic constructs per iteration.
  • when multiple genetic constructs are determined per iteration, Applicant has discovered optimizations which reduce noise and improve computational efficiency. These optimizations for recommending multiple experiments (n-batch experiments) are discussed in greater detail below.
  • the optimization framework should provide several candidates per iteration step. For this, the disclosed method repeats the training and evaluation steps as many times as needed, following the constant liar method. The actual predicted value of the untested candidate is used as the fabricated value.
  • a subset is built from the α% of candidates with top predicted values. This limits the calculation time and ensures more consistency between different runs.
  • the idea is to identify an α that defines a list of top prediction candidates that will probably contain all candidates selected by the liar approach. We found that a value of α set to 60% worked on most of the experiments that we ran. It should be noted that the α value may change if a different number of selected candidates is required (we’ve set the limit to 100 candidates by default, as clients are rarely interested in having more).
  • the use of the α rule helps to cut the computation time down by nearly half without adding unnecessary randomness to the batch generation process.
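  • A toy sketch of the constant liar batch loop with the α pre-filter, reusing a random forest surrogate; the mean-plus-deviation acquisition score and the refitting details are simplified illustrations of the procedure described above, not the patent's exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_mu_sigma(rf, Xc):
    per_tree = np.array([t.predict(Xc) for t in rf.estimators_])
    return per_tree.mean(axis=0), per_tree.std(axis=0)

def recommend_batch(X, y, candidates, batch_size=5, alpha=0.6):
    X, y = list(X), list(y)
    rf = RandomForestRegressor(n_estimators=30).fit(X, y)
    mu, _ = rf_mu_sigma(rf, np.array(candidates))
    top = list(np.argsort(mu)[::-1][: max(1, int(alpha * len(candidates)))])
    batch = []
    while len(batch) < batch_size and top:
        mu, sigma = rf_mu_sigma(rf, np.array([candidates[i] for i in top]))
        j = int(np.argmax(mu + sigma))                 # acquisition maximizer
        pick = top.pop(j)
        batch.append(candidates[pick])
        X.append(candidates[pick]); y.append(mu[j])    # the "lie": predicted value
        rf = RandomForestRegressor(n_estimators=30).fit(X, y)  # refit
    return batch

rng = np.random.default_rng(0)
X0 = rng.integers(0, 2, (20, 6)).tolist(); y0 = [sum(x) for x in X0]
batch = recommend_batch(X0, y0, rng.integers(0, 2, (100, 6)).tolist())
```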
  • Fig. 12 describes the scenario where a user requested a batch of single genetic constructs. However, a user can also specify that the step of generating a result should return a combinatorial design that is used to generate multiple genetic constructs.
  • Fig. 13 illustrates a flowchart for generating a result based at least in part on the plurality of result genotypes when the user requests a combinatorial design according to an exemplary embodiment.
  • the plurality of result genotype vectors are filtered to remove one or more first result genotype vectors corresponding to one or more categories of genotypes having genotype vectors with acquisition scores below acquisition scores of genotype vectors in other categories of genotypes.
  • “category” is used herein as a synonym of “genotype variant.” For example, if each gene included in a design contains only one category (genotype variant), then the design would not be considered “combinatorial.”
  • a plurality of filtered genotype vectors are selected from the filtered plurality of result genotype vectors, the selected plurality of filtered genotype vectors corresponding to one or more additional categories of genotypes having genotype vectors with acquisition scores above acquisition scores of genotype vectors in other categories of genotypes.
  • a plurality of aggregate acquisition scores are determined corresponding to a plurality of combinations of genotype vectors in the selected plurality of filtered genotype vectors.
  • step 1304 the plurality of combinations of genotype vectors are ranked according to the plurality of aggregate acquisition scores.
  • one or more top-ranked combinations of genotype vectors are selected as the result, each combination of genotype vectors corresponding to two or more genetic constructs for testing.
  • Applicant has developed a method for returning a reduced combinatorial design as output (instead of recommending a linear list of constructs). This can streamline the process of genotype optimization.
  • the present section describes in detail a novel method to find the optimal combinatorial design out of the predictions over all single candidates.
  • Table 1 illustrates an example of a combinatorial design. It contains 2 genes and each gene has a different number of possible variants. This specific example represents a biochemical reaction that depends on two enzymes of different kinds. Those enzymes are encoded as genes. The scientist has found 2 valid sequence alternatives for the first enzyme, and 3 options for the second gene.
  • the first gene position or bin may take 1 of 2 different alternatives, while the second gene has 3 variants to choose from.
  • the data displayed in Table 1 represents a combinatorial design. Usually the scientist is searching for the best combination of the variants and looks for the one with the highest production rate of a certain product. Given the above example, there are 6 possible solutions for the problem, which are generated from the combinations of all variants. In the following table, each row represents one of these, also called “single solutions” or constructs:
  • Table 2 illustrates a list of the 6 singular solutions / constructs associated with the combinatorial example shown in Table 1.
  • the Combinatorial Solution option was implemented to suggest a “reduced” combinatorial design rather than a list of single candidates.
  • This approach can allow the user to test hundreds or thousands of different meaningful designs at each optimization step, instead of just a few.
  • this kind of solution may reduce experimental costs, increasing the number of samples tested on each iteration and improving the achieved optima. It may also help to reduce experimentation time.
  • the Combinatorial Output is a new step in the optimization process that runs (optionally) after all single candidates are evaluated. Considering that part of this method can be computationally demanding, the applicants created a first filtering stage, where some categories are discarded by using some heuristic rules, and then a fine-tuning stage where all the remaining categories are studied in detail.
  • the first stage uses two pre-filter methods. The first one finds, for each bin ‘b’, the worst performing categories, i.e., those where all of the singular constructs’ scores are below the ones of the other categories. After identifying these low-scored categories, the associated singular constructs are removed from the candidates list. Then, the second pre-filter is applied, which starts building a combinatorial design by collecting the best top ‘N’ performing categories according to a ranking based on the acquisition value of their corresponding singular candidates. The number ‘N’ of collected categories is given by a pre-determined combinatorial complexity threshold. The combinatorial complexity is given by the product formula below, where $n_0^{b}$ and $n_f^{b}$ correspond, respectively, to the initial number of categories and the final number of categories of bin ‘b’: $\text{complexity} = \prod_{b} n_f^{b}$ (reduced from the original $\prod_{b} n_0^{b}$). The final number of categories of each bin is predetermined by the user based on her needs.
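A minimal sketch of the second pre-filter and of the complexity computation follows. The dictionary layout, function names, and the acquisition numbers are illustrative assumptions, and the complexity follows the product reading given above:

```python
import numpy as np

def prefilter_categories(acq_by_category, n_final):
    """Second pre-filter (sketch): keep the best n_final categories per bin,
    ranked by the best acquisition value among each category's constructs."""
    kept = {}
    for b, cats in acq_by_category.items():
        ranked = sorted(cats, key=lambda c: max(cats[c]), reverse=True)
        kept[b] = ranked[: n_final[b]]
    return kept

def complexity(n_f):
    """Combinatorial complexity of the reduced design: product over bins of
    the final number of categories per bin."""
    return int(np.prod(list(n_f.values())))

# acq_by_category[bin][category] -> acquisition values of the singular
# constructs containing that category (hypothetical numbers).
acq_by_category = {
    "gene_1": {"g1_v1": [0.9, 0.4, 0.7], "g1_v2": [0.2, 0.1, 0.3]},
    "gene_2": {"g2_v1": [0.9, 0.2], "g2_v2": [0.4, 0.1], "g2_v3": [0.7, 0.3]},
}
n_final = {"gene_1": 1, "gene_2": 2}
print(prefilter_categories(acq_by_category, n_final), complexity(n_final))
```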
  • the fine-tuning stage calculates an aggregated score from the acquisition values of every single construct that belongs to each combinatorial candidate.
  • the user may select the score to be the average acquisition value of the constructs, or the maximum, or in fact any other combination of the statistics of the acquisition values (such as the mean or standard deviation). Based on this score, the best combinatorial designs are stored during execution and returned to the user after evaluating all combinatorial candidates.
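The aggregation step can be sketched as follows; the statistic names are illustrative assumptions, and any other combination of statistics of the acquisition values could be substituted:

```python
import numpy as np

def aggregate_score(acquisition_values, how="mean"):
    """Combine the acquisition values of the single constructs belonging to
    one combinatorial candidate into one score (mean, max and a mean/std
    combination shown as examples)."""
    v = np.asarray(acquisition_values)
    if how == "mean":
        return v.mean()
    if how == "max":
        return v.max()
    if how == "mean_minus_std":
        return v.mean() - v.std()
    raise ValueError(how)

print(aggregate_score([0.9, 0.4, 0.7], how="mean"))
```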
  • Table 3 illustrates acquisition values for each construct within a hypothetical (big) combinatorial design. Acquisition values will be combined to calculate the scores for each (reduced) combinatorial candidate.
  • Table 4 illustrates combinatorial candidates for the example problem with their respective scores.
  • the Combinatorial Solution method returns the top-ranked combinatorial designs according to the aggregated scores.
  • the user may build all the associated constructs from one or more proposed solutions and evaluate them. After that, she can feed the model with constructs’ data and generate a new set of combinatorial or singular candidates.
  • the computations of the fine-tuning stage can be vectorized and accelerated using graphical processing units (GPUs) by expressing the problem with the following matrices.
  • $S \in \mathbb{B}^{s \times k}$ is the binary matrix of single solutions. Each row represents a single solution and each column represents one of the $k$ total categories. Each component has a value of 1 if the category is present in the construct and 0 if not.
  • $T \in \mathbb{R}^{s}$ is the target vector. It contains the float-valued scores predicted for each single design.
  • $C \in \mathbb{B}^{c \times k}$ is the binary matrix of valid combinatorial solutions. Each row represents a combinatorial design and each column represents one of the $k$ total categories. Each component has a value of 1 if the category is present in the design and 0 if not.
  • the single scores associated with each combinatorial design can be obtained by means of boolean indexing in the target vector T. With those acquisition values, the combinatorial aggregated score can be easily calculated.
  • $c_f$ is the number of singular constructs associated with a combinatorial candidate. This value is equal across all valid combinatorial candidates, as they were selected to have the same complexity.
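A sketch of this Boolean-indexing computation with NumPy is shown below; the toy sizes and values of S, T, and C are assumptions, not values from the tables above:

```python
import numpy as np

# Toy instance: s = 4 single solutions, c = 1 combinatorial design,
# k = 5 categories (see the definitions of S, T and C above).
S = np.array([[1, 0, 1, 0, 0],      # each row: one single solution
              [1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1]], dtype=bool)
T = np.array([0.9, 0.4, 0.7, 0.2])  # predicted score per single solution
C = np.array([[1, 1, 1, 1, 0]], dtype=bool)  # one combinatorial design

for combo in C:
    # A single solution belongs to a combinatorial design when all of its
    # categories are contained in the design's categories.
    member = (S & ~combo).sum(axis=1) == 0
    scores = T[member]              # Boolean indexing into the target vector
    print(scores.mean())            # aggregated score (mean as example)
```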
  • Table 5 illustrates computation times of the combinatorial solution step. Results are shown for different numbers of categories per bin, $n_0$, of the original combinatorial design.
  • Fig. 14 illustrates a high-level flowchart of the methods and system components for efficiently optimizing a phenotype with a combination of a generative and a predictive model described herein according to an exemplary embodiment.
  • a “Selection Method” is used to suggest a list of candidates based on a set of criteria. These criteria may be non-trivial depending on the desired trade-off between exploration and exploitation. As discussed earlier, the Selection Method can be based on the parameters of batch size and a selection rate. The Selection Method can also be based upon the rankings or scores of the candidates/genotype vectors.
  • An “oracle” represents the “black-box” function. As discussed earlier, the oracle can correspond to the results of experimental testing in a lab or mathematical modeling of real-world behavior.
  • Two sets of data are also defined: a labelled set $\{(x_i, y_i)\}$, where $y_i$ is a possibly noisy measurement of the black-box function for input $x_i$, used to adjust the parameters $\theta_f$ of the predictive model, and another set (not necessarily labelled) $\{x_j\}$ used to adjust the parameters $\theta_G$ of the generative model.
  • Step 1: Adjust the surrogate/predictive model $f_{\theta}$ using the labelled dataset to adjust its parameters $\theta_f$ by optimizing problem 1.2.
  • Step 2: Adjust the generative model $G$ using the generator’s dataset to adjust its parameters $\theta_G$ by optimizing problem 1.3.
  • Step 3: Generate $k$ samples with the generative model.
  • Step 4: Use the “Selection Method” to choose the top $m$ samples considering the values provided by the acquisition function.
  • Step 5: Evaluate the $m$ selected samples with the “oracle”.
  • Step 6: Readjust the parameters of the predictive model considering the new information obtained from the oracle, using the labelled samples.
  • Step 7: Repeat from Step 3 until the desired number of genotypes is obtained, as summarized in the sketch below.
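The seven steps can be summarized in the following loop sketch; the model interfaces (fit, sample, acquisition) are assumed names for illustration, not the disclosed system's API:

```python
def smbo_loop(predictive, generative, oracle, labelled, unlabelled,
              k=1000, m=10, budget=5):
    """Sketch of Steps 1-7: alternate fitting, generation, selection
    and evaluation until the experimental budget is spent."""
    predictive.fit(labelled)                        # Step 1 (problem 1.2)
    generative.fit(unlabelled)                      # Step 2 (problem 1.3)
    for _ in range(budget):
        candidates = generative.sample(k)           # Step 3
        ranked = sorted(candidates,
                        key=predictive.acquisition, reverse=True)
        selected = ranked[:m]                       # Step 4: Selection Method
        new_points = [(x, oracle(x)) for x in selected]   # Step 5
        labelled.extend(new_points)
        predictive.fit(labelled)                    # Step 6
    return labelled                                 # Step 7: loop until done
```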
  • the overall process is shown in Fig. 14.
  • the iterative process begins with an initial experimental database of data points used to train the predictive model.
  • the initial experimental database of data points can then also be used to train the generative model.
  • a different data set, such as a sample database, can be used to train the generative model.
  • the training data selected for the generative model can be used to improve/refine results produced by the system by selecting training data which is more likely to contain an optimal genotype. Applicant notes that the order of training is unimportant, and that the predictive and generative model can be trained either together/jointly or taking turns.
  • the generative model is then used to synthesize a set of new data points known as “candidates,” which are evaluated and ranked using the prediction and uncertainty of the predictive model. Subsequently, the top-ranked candidates are tested experimentally, and these results are used to update the experimental database (Oracle samples). Optionally, the solution space dataset can be updated. The cycle shown in Fig. 14 is repeated until the stopping criteria are satisfied or optimal results are obtained.
  • both the Surrogate model and the Generator model are neural networks.
  • the adjustment of both models’ parameters is performed using gradient-based optimization techniques, like stochastic gradient descent (ascent), through which a loss function is minimized (maximized).
  • the optimization process can be streamlined if the available data is used to adjust the parameters of both models before starting.
  • the disclosed SMBO methods can use as surrogate any machine/deep learning model that can manage numerical and categorical inputs, and can be configured to output an estimation of uncertainty as well as a prediction value.
  • for example, a Random Forest Regressor (RF) can be used as the surrogate model.
  • the disclosed methods can use many different acquisition methods as scoring functions. The Expected Improvement was selected for the current implementation. However, the disclosed method is not limited to it, and any other score that can be used within the disclosed SMBO framework could also be applied.
  • Fig. 15 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • Fig. 15 shows in greater detail the components used to carry out multiple steps shown in Fig. 14.
  • the generative model is trained with samples obtained from the solution space. This training process may also include feedback results from the acquisition function (see additional features discussed below).
  • the surrogate function is trained using data processed by the oracle. Once the generator and the surrogate are trained, the generator is used to generate synthetic samples and each synthetic sample is scored by the acquisition function. After that, a Selection Method chooses the best-scored candidates and suggests them to be evaluated by the Oracle; evaluated samples may be added to the oracle dataset in order to be considered in the next SMBO iterations.
  • the SMBO strategy supports both categorical and numerical data types as inputs. However, depending on the surrogate model used, these inputs might need to be encoded into a different representation. For instance, when working with genetic sequences, which in their original form correspond to sequences of categorical symbols or tokens (e.g., the four nucleotides A, T, G, and C), one would likely use a different representation (which we generally call “embeddings”) so as to use them to feed a model (e.g., one-hot representation instead of strings) or even reduce dimensionality (as these sequences can be huge) and standardize the dimensions of the model’s input (considering that genetic sequences may have different lengths).
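For instance, a minimal one-hot encoder for nucleotide sequences, with zero-padding to standardize input dimensions, might look as follows; the fixed length is an assumption for illustration:

```python
import numpy as np

NUCLEOTIDES = "ATGC"

def one_hot(seq, length):
    """One-hot encode a DNA string and zero-pad to a fixed length so that
    sequences of different sizes share the same input dimensions."""
    x = np.zeros((length, len(NUCLEOTIDES)))
    for i, base in enumerate(seq[:length]):
        x[i, NUCLEOTIDES.index(base)] = 1.0
    return x.ravel()                      # flatten to one feature vector

print(one_hot("ATGGC", length=8).shape)   # (32,)
```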
  • An embedding may be constructed by training deep learning models on vast amounts of data.
  • the volume of the datasets used in SMBO is sometimes too low to be useful for building an effective embedding, so these encoder models tend to be pre-trained on external, much larger datasets. This exploits the advantages of “Transfer Learning,” a methodology that allows a model to apply knowledge from a large dataset to a more specific task.
  • the input data can be represented as numerical vectors that lie in a multidimensional space.
  • These numerical vectors encode sequence information, enabling relationships in the multidimensional space that reflect physical properties (e.g., biochemical properties).
  • This dense representation in a continuous space makes it possible for the model to have access to richer information about the sequences that allows the model to better identify, extract and exploit the relationships between them.
  • the inputs of the surrogate are continuous.
  • the SMBO approach might take advantage of this and use a surrogate model where optima can be obtained through Newton or quasi-Newton methods (such as Deep Ensembles, Deep Gaussian Processes, Bayesian Networks or other Bayesian approaches).
  • a useful additional feature of the disclosed method and system is to encode the original domain into a latent space. This can help to reduce the dimensionality of the generator and surrogate models.
  • the approach to encode samples can vary and depends upon the nature of the solution space. In the case of using a learnable embedding, samples from the solution space or another dataset could be used for training. An overview of this approach is shown in Fig. 16.
  • Fig. 16 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with embeddings according to an exemplary embodiment.
  • This extension of the proposed framework uses embeddings. With this extension, instead of using the original domain space, the generator and surrogate models are trained using encoded versions of the samples. Encoder and Decoder modules are used to map from the solution space to a latent space and vice versa.
  • A reduced version of this can be applied when using an encoder/decoder architecture as a generator. This is the case for some Autoencoders, Variational Autoencoders, some GAN architectures, and others.
  • the generative model can be divided into three separate modules: the first module is the encoder, which translates samples from the domain to the latent space, the second module is the decoder, which maps samples from the latent space to the problem domain, and the third module represents the sampling process from the latent space Z.
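A skeleton of this three-module split is sketched below; the linear maps stand in for trained encoder and decoder networks, and a standard normal sampler stands in for the latent sampling process:

```python
import numpy as np

class LatentGenerator:
    """Sketch of the three-module generator: an encoder (domain -> latent),
    a decoder (latent -> domain) and a sampling process over the latent
    space Z. Random linear maps stand in for the learned networks."""
    def __init__(self, W_enc, W_dec, latent_dim):
        self.W_enc, self.W_dec = W_enc, W_dec
        self.latent_dim = latent_dim

    def encode(self, x):              # module 1: domain -> latent space
        return x @ self.W_enc

    def decode(self, z):              # module 2: latent space -> domain
        return z @ self.W_dec

    def sample_latent(self, n):       # module 3: sampling process over Z
        return np.random.normal(size=(n, self.latent_dim))

    def generate(self, n):
        return self.decode(self.sample_latent(n))

g = LatentGenerator(np.random.randn(32, 4), np.random.randn(4, 32), 4)
print(g.generate(3).shape)            # (3, 32): three synthetic samples
```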
  • Fig. 17 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with an encoder/decoder architecture according to an exemplary embodiment.
  • This figure shows the modifications that can be made to the proposed architecture when the generator is built with an encoder/decoder architecture.
  • the Encoder module is applied to translate oracle samples into latent space and the surrogate model is trained to use the latent space as input domain.
  • the surrogate takes its inputs from a block that samples the latent space Z. The way this space is sampled will depend on the specific architecture of the generator.
  • the Decoder is later applied to all selected samples and maps them to Oracle’s domain. Both Encoder and Decoder models are trained by the Generative Model Fit process described earlier.
  • the first term of the above objective function will drive the generator’s distribution towards the real data probability distribution while the second term will increase the distribution’s support on regions of the solution space with a higher Acquisition value.
  • the optimization problem 1.3b might help to achieve a generator with a distribution that accelerates the exploration/exploitation process of the SMBO approach used to solve the main optimization 1.1
  • the proposed method can be implemented considering most GANs approaches.
  • the previous statement includes models that do not rely on a randomized input only, but also ones that use other kinds of input.
  • ACGANs use a mixed-type input which contains a randomized vector as well as label information that is used to condition sample generation.
  • the proposed method can be easily extended.
  • the samples used to train the generative model can be described by the expression $(x_i, c_i)$, where $c_i$ is a vector that contains additional information associated with the sample $x_i$. This information may encode not only label descriptors, but also context images, text, etc. Those approaches would require a generative model capable of processing that type of information.
  • One approach is to apply scalarization of the objective function. This can be achieved by combining all dimensions into one using some other function (e.g., linear combination) and then proceeding with the optimization of the scalar result with a single surrogate model. Scalarization can also be achieved by building a model per target component and calculating a score that combines individual predictions. Other approaches are oriented to search for the Pareto front. Within those methods, a model is built for each component and the selection criteria are changed to penalize non-dominant candidates.
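A minimal sketch of the linear-combination scalarization reads as follows; the two objective components and their weights are hypothetical:

```python
import numpy as np

def scalarize(predictions, weights):
    """Scalarization sketch: collapse a multi-dimensional objective into
    one value with a linear combination. Building one model per target
    component is the alternative mentioned above."""
    return float(np.dot(predictions, weights))

# Hypothetical two-component objective: titer and growth rate.
print(scalarize(np.array([3.2, 0.8]), weights=np.array([1.0, 0.5])))
```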
  • the Oracle will probably penalize candidates that lie outside the solution space. For example, if the objective is to maximize the production of a certain compound within a bacterium by altering some of its genes (where each alteration corresponds to a candidate), the cell may fail to perform the involved metabolic pathway (and won’t produce the target compound) if some gene was changed in a way that kills the organism.
  • the gene could be encoding an enzyme that works very well in the isolated pathway, but in the context of the living organism interacts with other elements in a negative way, driving the cell to death.
  • the Oracle will validate solution space compliance as well as the objective function value. This isn’t the common scenario in optimization problems, where usually the limits of the solution space are defined by a set of restriction rules and the objective function works in an independent fashion.
  • the proposed method can be extended to take advantage of that and use Oracle’s evaluations not just to train the surrogate model, but also to re-train the generator on each optimization step. This helps to reduce the number of iterations as the generator’s knowledge base will grow throughout the process.
  • the main approach considers a Selection Method that optimizes the acquisition function by looking for the best candidates in a set of samples synthesized by the generator. This task can be done in several different ways.
  • the proposed approach can be extended to maximize acquisition by using a Newton or quasi-Newton method that explores for the best combination of latent space features. Considering this extension, the steps of generating samples and using the selection method to select the top samples can be changed in order to apply a gradient descent algorithm from different starting points and, in that way, obtain multiple optimal samples.
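A sketch of this multi-start gradient ascent over latent features is given below, assuming the gradient of the acquisition function is available (e.g., via automatic differentiation); the toy concave acquisition is illustrative only:

```python
import numpy as np

def multi_start_ascent(acq_grad, dim, n_starts=8, steps=100, lr=0.05):
    """Maximize the acquisition over latent features by gradient ascent
    from several random starting points, yielding multiple optima.
    `acq_grad` returns the acquisition gradient at a latent point."""
    optima = []
    for _ in range(n_starts):
        z = np.random.normal(size=dim)
        for _ in range(steps):
            z = z + lr * acq_grad(z)
        optima.append(z)
    return optima

# Toy concave acquisition with maximum at z = 1 (gradient: 2 * (1 - z)).
samples = multi_start_ascent(lambda z: 2.0 * (1.0 - z), dim=4)
print(np.round(samples[0], 2))        # converges near [1. 1. 1. 1.]
```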
  • Fig. 18 illustrates the components of a specialized computing environment for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
  • Specialized computing environment 1800 can be made up of one or more computing devices that include a memory 1801 that is a non-transitory computer-readable medium and can be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
  • memory 1801 stores experimental data points 1801A, constraints 1801B, sample genotypes 1801C, phenotype prediction model 1801D, prediction model training software 1801E, genotype generation model 1801F, generation model training software 1801G, generator and discriminator model functions 1801H, genotype scoring and ranking software 1801I, combinatorial output software 1801J, encoding/decoding software 1801K, and genetic construct generation software 1801L.
  • Each of the software components in memory 1801 store specialized instructions and data structures configured to perform the methods for efficiently optimizing a phenotype with a combination of a generative and a predictive model described herein.
  • All of the software stored within memory 1801 can be stored as computer-readable instructions that, when executed by one or more processors 1802, cause the processors to perform the functionality described with respect to Figs. 1-17.
  • Processor(s) 1802 execute computer-executable instructions and can be real or virtual processors. In a multi-processing system, multiple processors or multicore processors can be used to execute computer-executable instructions to increase processing power and/or to execute certain software in parallel. As discussed earlier in the application, the processors can be specialized for the task of training and applying a predictive model, such as graphical processing units (GPUs).
  • Computing environment 1800 additionally includes a communication interface 1803, such as a network interface, which is used to communicate with devices, applications, or processes on a computer network or computing system, collect data from devices on a network, and implement encryption/decryption actions on network communications within the computer network or on data stored in databases of the computer network.
  • the communication interface conveys information such as computer-executable instructions, audio or video information, or other data in a modulated data signal.
  • a modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
  • Computing environment 1800 further includes input and output interfaces 1804 that allow users (such as system administrators) to provide input to the computing environment to cause it to display information, to edit data stored in memory 1801, or to perform other administrative functions.
  • an administrator can configure, add, or edit, for example, constraints, encoding software, or experimental data points stored in memory 1801.
  • An interconnection mechanism (shown as a solid line in Fig. 18), such as a bus, controller, or network interconnects the components of the computing environment 1800.
  • Input and output interfaces 1804 can be coupled to input and output devices.
  • the input and output interfaces 1804 can include, for example, Universal Serial Bus (USB) ports.
  • USB ports can allow for the connection of a keyboard, mouse, pen, trackball, touch screen, or game controller, a voice input device, a scanning device, a digital camera, remote control, or another device that provides input to the computing environment.
  • the computing environment 1800 can additionally utilize a removable or non-removable storage, such as magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, USB drives, or any other medium which can be used to store information and which can be accessed within the computing environment 1800.
  • Computing environment 1800 can be a set-top box, personal computer, or one or more servers, for example a farm of networked servers, a clustered server environment, or a cloud network of computing devices.

Abstract

A method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a combination of a generative and a predictive model, training a phenotype prediction model based on experiential genotype vectors, training a genotype generation model based on sample genotype vectors, generating new genotype vectors, applying the phenotype prediction model to the new genotype vectors to generate scores, determining result genotypes based on a ranking of the available genotypes according to the scores, and generating a result based on the result genotypes, the result indicating one or more genetic constructs for testing.

Description

METHOD FOR EFFICIENTLY OPTIMIZING A PHENOTYPE WITH A COMBINATION OF A GENERATIVE AND A PREDICTIVE MODEL
RELATED APPLICATION DATA
[0001] This application claims priority to U.S. Provisional Application No. 63/015,140, filed April 24, 2020, and titled “PREDICTIVE AND A GENERATIVE MODEL TO OPTIMIZE THE HIGHLY DIMENSIONAL EXPLORATION OF OPTIMUM SOLUTIONS,” the disclosure of which is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] A recurrent problem in synthetic biology is to find the genetic sequence that optimizes, for a given biological system, the production of a specific molecule or compound, or more generally that optimizes a specific metric that characterizes the phenotype of a given biological system. In general this search can be quite expensive because it requires numerous experiments. Evaluating the performance and characterizing the phenotype of different genetic variants can consume a lot of time and resources.
[0003] Instead of searching for an optimal genetic design within the universe of all possible genetic sequences, it becomes important to focus the search to certain known variants of genes or parts of DNA directly involved in the production of the compound, or phenotype of the corresponding biological system. Despite the narrowing of the search space to be explored, the number of possible genetic designs is typically quite large and it is necessary to have tools that allow finding the optimal genetic design with the smallest number of experiments as possible.
[0004] Unfortunately, even with a reduced search space, it is completely unfeasible for a geneticist to implement and collect results from even a small fraction of the possible genetic designs, as the number of combinatorial possibilities scales exponentially according to the number of component genotype variants. [0005] Additionally, attempts to reduce the search space using automated techniques and algorithms are also impractical due to both the exponential computational complexity of the search problem and the difficulty in quantifying the phenotype expressions for genotype sequences which have not previously been assessed experimentally.
[0006] Even predictive models that utilize an acquisition function to evaluate the search/solution space can sometimes be insufficient to identify an optimal sequence. For example, in situations where the dimensionality of the solution space is very large, evaluating an acquisition function on all its points is virtually impossible. Another problem is that in input spaces with high dimensionality, the distribution of experimental data points has a small compact support. This makes it difficult (probabilistically) to sample new data points that are valid candidates.
[0007] Accordingly, improvements are needed in technology for predictive modeling of the phenotype of a biological system and efficiently optimizing a phenotype.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Fig. 1 illustrates a flowchart for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
[0009] Fig. 2 illustrates a flowchart for encoding genotype information in a plurality of experimental data points corresponding to a set of constraints as a plurality of experiential genotype vectors according to an exemplary embodiment.
[0010] Fig. 3 illustrates a data flow chart showing the process for generating encoded experiential genotype vectors according to an exemplary embodiment.
[0011] Fig. 4 illustrates a data flow chart showing the process for generating encoded sample genotype vectors according to an exemplary embodiment.
[0012] Fig. 5 illustrates a flowchart for training a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints according to an exemplary embodiment. [0013] Fig. 6 illustrates a diagram of the parameter adjustment algorithm of the surrogate model according to an exemplary embodiment.
[0014] Fig. 7 illustrates a flowchart for training a genotype generation model based at least in part on a plurality of sample genotype vectors according to an exemplary embodiment.
[0015] Fig. 8 illustrates a flowchart for concurrently training both a generator model function and a discriminator model function with sample genotype vectors according to an exemplary embodiment.
[0016] Fig. 9 illustrates a representative diagram of the adversarial training framework used for training a generative model according to an exemplary embodiment.
[0017] Fig. 10 illustrates a diagram of the parameter adjustment algorithm of the generative model according to an exemplary embodiment.
[0018] Fig. 11 illustrates a flowchart for generating a plurality of new genotype vectors with the genotype generation model according to an exemplary embodiment.
[0019] Fig. 12 illustrates a flowchart for updating the predictive model and determining whether to generate additional constructs according to an exemplary embodiment.
[0020] Fig. 13 illustrates a flowchart for generating a result based at least in part on the plurality of result genotypes when the user requests a combinatorial design according to an exemplary embodiment.
[0021] Fig. 14 illustrates high-level flowchart of the methods and system components described herein according to an exemplary embodiment.
[0022] Fig. 15 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment. [0023] Fig. 16 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with embedding according to an exemplary embodiment.
[0024] Fig. 17 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with an encoder/decoder architecture according to an exemplary embodiment.
[0025] Fig. 18 illustrates the components of a specialized computing environment for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment.
DETAILED DESCRIPTION
[0026] While methods, apparatuses, and computer-readable media are described herein by way of examples and embodiments, those skilled in the art recognize that methods, apparatuses, and computer-readable media for efficiently optimizing a phenotype with a combination of a generative and a predictive model are not limited to the embodiments or drawings described. It should be understood that the drawings and description are not intended to be limited to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the appended claims. Any headings used herein are for organizational purposes only and are not meant to limit the scope of the description or the claims. As used herein, the word “can” is used in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words “include,” “including,” and “includes” mean including, but not limited to.
[0027] As discussed above, while running new experiments to optimize the phenotype of the biological system (such as the titer of a metabolite), it would be useful to have a system that optimizes the phenotype of a biological system based on experimental data previously obtained and that also reduces the computational complexity of the search problem so that a possible solution set can be determined in a feasible time. Researchers would then have additional information that would allow them to make better decisions regarding which experiments to perform. [0028] Additionally, improved methods and systems are needed for optimization problems where the dimensionality of the solution space is very large, and where evaluating an acquisition function on all its points is impossible.
[0029] Applicant has discovered a method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a combination of a generative and a predictive model. The prediction model is constructed specifically to optimize the phenotype of a biological system by generating phenotype predictions relating to genotypes which have not been experimentally characterized and which meet a user’s requirements. The disclosed method, apparatus, and computer-readable medium are further configured to reduce the computational complexity of exploring the search space of possible genotypes using a variety of specialized heuristics adapted to this domain and, in some cases, adapted to the hardware that is used to apply the prediction model.
[0030] The novel method, apparatus, and computer-readable medium disclosed herein generates and progressively adjusts a generative model together with a predictive model. These models are configured to generate new optimized data in the sense of a specific metric associated with a particular quality of interest. This algorithm is based on a generative model capable of generating “new” valid (real-looking) data configured to optimize a particular quality or property, and there is also a predictive model capable of predicting the value of such property for this new data, which of course has not yet been experimentally characterized.
[0031] The described process for generating and adjusting a generative and a predictive model is described in the context of optimizing the phenotype of a biological system. These models are used in tandem to optimize the exploration of untested genotypes. However, the described method of utilizing Sequential Model Based Optimization (SMBO) with a predictive model with uncertainty estimation as a surrogate together with a generative model as a candidate provider is applicable to any task involving optimization of the input of a black-box function (i.e., control parameters of a dynamic system).
[0032] The disclosed method, apparatus, and computer-readable medium serves as a solution for optimization problems where the dimensionality of the solution space is very large, and where evaluating an acquisition function on all its points is not possible. Also, in input spaces with high dimensionality, the distribution of experimental data points has a small compact support. This makes it difficult (probabilistically) to sample new data points that are valid candidates. In the proposed approach the generative model is used to sample valid data points by correctly exploring the multidimensional space around the support of the experimental data. Also, the predictive model is used by the generator in order to improve its sampling towards the generation of more efficient candidates that can better explore the solution space.
[0033] Fig. 1 illustrates a flowchart for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment. As shown in Fig. 1, the disclosed process utilizes experiential genotype vectors and to train a phenotype prediction model (also referred to herein as the “predictive model,” the “prediction model,” or the “predictor model”) and sample genotype vectors to train a genotype generation model (also referred to as the “generative model,” the “generation model,” or the “generator model”).
[0034] The experiential genotype vectors are determined based upon experimental data and the sample genotype vectors are determined based upon a sample database. In some cases the experiential genotype vectors can reflect the underlying structure of the genotype information in the experimental data and the sample genotype vectors can reflect the underlying structure of the genotype information in the sample data. However, in situations where the genotype information in the experimental database is stored in a different format or includes different attributes, the underlying dataset can be transformed/encoded to generate the experiential genotype vectors. Similarly, if the genotype information in the sample database is stored in a different format or includes different attributes, the underlying dataset can be transformed/encoded to generate the sample genotype vectors.
[0035] The experiential genotype vectors can be generated from genotype information in an experimental database based at least in part on one or more constraints. Fig. 2 illustrates a flowchart for encoding genotype information in a plurality of experimental data points corresponding to a set of constraints as a plurality of experiential genotype vectors according to an exemplary embodiment. [0036] At step 201 one or more constraints are received. The one or more constraints are constraints particular to a specific user’s goals, experimental conditions, limitations, or other factors relating to the desired output from the predictive model. The one or more constraints can include a plurality of desired phenotypic attributes. The plurality of desired phenotypic attributes correspond to the phenotypes that a user is seeking to optimize through use of the system and subsequent experimentation.
[0037] Optionally, the one or more constraints can also include a plurality of available genotypes. The plurality of available genotypes can correspond to the genotypes that a particular user is able to create or has access to for experimental purposes. As the novel method and system disclosed herein limit the search space for the optimization problem through the use of a generative model, it is not necessary for the constraints to include a plurality of available genotypes. The limitations on available genotypes can also be input into the system through the choice of training data for the generative model, as discussed in greater detail below.
[0038] One of skill in the art will of course understand that genotype refers to a genetic constitution or sequence and phenotype refers to an observable characteristic resulting from the interaction of a genotype with a particular environment. A phenotype can include, for example, the ability of a particular genotype to produce a specified molecule, compound or metabolite (determined by the titer of the molecule), bacterial growth (determined by optical density data), resistance of a strain to extreme conditions and temperature, salinity, or pH conditions, etc.
[0039] The constraints can be received from a user via an input interface or in a communication via a network interface. The constraints can also be received from a software process or computing system via a messaging protocol, a network connection, or other communication mechanism.
[0040] The step of receiving the constraints can include the user specifying the pool of variants that should be explored for each bin. The user can use just the labels or, alternatively, the genetic/amino-acid sequences if using a sophisticated embedding approach. The user also needs to specify the property to be optimized (this means that the user has to provide, for example, the name of the column that contains the target value in the database). The algorithm will always try to maximize that phenotypic value, so the user should be aware of that and perform a transformation on the property if required to obtain a benefit from the process. A common transformation is, for example, multiplying all values by -1 in order to minimize the original phenotype measurement.
[0041] At step 202 genotype information in a plurality of experimental data points corresponding to the set of constraints is encoded as a plurality of experiential genotype vectors, the plurality of experimental data points comprising the genotype information and phenotype information corresponding to the genotype information.
[0042] This step can include, for example, interfacing and communicating with an experimental database storing all experimental data points and extracting the plurality of experimental data points that correspond to the set of constraints. The experimental database can be a distributed database, such as a cloud database, that is accessible to a plurality of researchers. Each researcher can then upload experimental results to the database in order to provide additional training data for training the model, as will be discussed further below.
[0043] Each experimental data point can include phenotypic measurements and corresponding genotype data. For example, the experimental data point can include genotype data corresponding to a particular genetic sequence, gene, and/or gene fragment and can also include phenotypic measurements that correspond to that particular genetic sequence, gene, and/or gene fragment. The phenotypic measurements can be measurements that were experimentally determined in previous experiments. The experimental data points can be configured to link genotype data with phenotypic measurements in a memory of the database, such as through a relational database, directed graph, or other techniques.
[0044] Optionally, this step can include encoding all genotype information in the plurality of experimental data points as a plurality of experiential genotype vectors, irrespective of the constraints. For example, when the experimental dataset is small (say a few hundred constructs), all of these constructs can be used to generate the plurality of experiential genotype vectors. [0045] The experimental dataset only contains the experiments that the scientists have already performed in the lab. This group of samples is a subset of all the possible candidates that can be built by recombining the alleles (variants) present in those constructs. Thus, the group of all possible candidates is bigger and can contain thousands or hundreds of thousands of candidates.
[0046] As shown in Fig. 2, the step of encoding genotype information in a plurality of experimental data points corresponding to the set of constraints as a plurality of experiential genotype vectors (step 202) can include sub-steps 202A and 202B.
[0047] At step 202A the plurality of experimental data points are identified in a database of experimental data points based at least in part on one or more of: at least one available genotype in the plurality of available genotypes and at least one desired phenotypic attribute in the plurality of desired phenotypic attributes. This step can conduct a search of the database for all experimental data points that have genotypes matching or interchangeable with at least one genotype listed in the plurality of available genotypes. This step can also conduct a search of the database for all experimental data points that have phenotypic measurements matching at least one desired phenotypic attribute in the plurality of desired phenotypic attributes. As discussed earlier, phenotypic attributes can include, for example, the ability of a genotype to produce a specified molecule or compound.
[0048] At step 202B the genotypes associated with the identified plurality of experimental data points are encoded as a plurality of experiential genotype vectors. This encoding process is discussed in greater detail below. Of course, the genotypes associated with the identified plurality of experimental data points can be encoded in other ways, such as by representing the genotypes using a categorical or nominal representation corresponding to categories/sequences/sub-sequences, etc. As part of the encoding process, the phenotypic measurements in the identified plurality of experimental data points can optionally also be encoded using one or more of the above-described schemes to later enable more efficient analysis.
[0049] A similar process to step 202B can be performed for generating the sample genotype vectors. In particular, genotype information in a plurality of sample genotypes of a sample database can be encoded as the plurality of sample genotype vectors. The encoding process can be the same encoding process used to generate the experiential genotype vectors and is discussed in greater detail below.
[0050] Note that while the sample genotypes can optionally be labeled and include phenotypic and other attributes, this is not required for training the generative model. All that is required for the sample genotypes is that they include genotype information (i.e., unlabeled data).
[0051] The plurality of sample genotypes can be selected in any way from the sample database, such as through random sampling. A user can optionally provide sample constraints relating to the sample genotypes to be selected. These constraints can be used to filter the sample genotypes selected from the sample database.
[0052] The user can exercise control over the results produced by the generative model through selection of the sample database itself. Since the training data for the generative model is sourced from the sample database, the selection of the sample database (and the selection of samples within that sample database) can be used to control the type of genotypes produced by the system. For example, when designing a protein (a sequence of amino acids) a user can select a sample database that includes only feasible proteins to minimize the likelihood that the generative model will produce sequences of amino acid letters that are not feasible proteins. The sample database can also be the same database used to generate the experiential genotype vectors (i.e., the experimental data).
[0053] Fig. 3 illustrates a data flow chart showing the process for generating encoded experiential genotype vectors according to an exemplary embodiment. As shown in Fig. 3, constraints 302 including a plurality of desired phenotypic attributes 302A and, optionally, a plurality of available genotypes 302B can be received. This receiving step can optionally include encoding the plurality of available genotypes 302B with an encoder 304 to generate encoded available genotype information 306. This data flow is shown by the dashed arrows. When using certain encoding schemes, such as embedding schemes (discussed below), the encoded available genotype information 306 can include a plurality of available genotype vectors. [0054] As additionally shown in Fig. 3, the constraints 302 are applied to the experimental data store 301, containing multiple experimental data points, to identify a plurality of experimental data points 303 corresponding to the constraints 302. Data point 301A illustrates an example of how genotype data can be linked to corresponding phenotype data within each experimental data point. The identified experimental data points 303 will include genotype information 303B and phenotype information 303A corresponding to the genotype information 303B. The genotype information 303B is then supplied to the encoder 304, which encodes it to generate encoded experiential genotype vectors 305. The different encoding techniques that can be used by the encoder 304 are discussed in greater detail below.
[0055] Fig. 4 illustrates a data flow chart showing the process for generating encoded sample genotype vectors according to an exemplary embodiment. As shown in Fig. 4, a sample database 401 includes multiple samples, such as sample 401A, including genotype data. All that is required is that each of the samples include genotype data, but of course the samples can optionally include additional data, such as phenotype information. A plurality of identified samples 403 are selected and extracted from the sample database 401. As discussed earlier, these samples can be selected in any way, such as through random sampling, a user-defined process, or filtering based upon one or more sample constraints. The identified samples 403, including sample genotype information 403B, are provided to the encoder 404. The samples 403 are encoded by the encoder 404 to generate encoded sample genotype vectors 405.
Encoding DNA parts using embeddings for the Sequential Model Based Optimization (SMBO) strategy
[0056] As will be discussed in greater detail further below, the disclosed systems and methods can be implemented using SMBO. The SMBO strategy for biological systems can be applied using different ways to represent data. The most direct way is to just use labels. With the “label representation” the gene variants and promoter sequences are represented with nominal variables, so the model is expected to learn from data how these labels are related to each other and how they affect the outcome. This kind of representation is very easy to implement and test, and it doesn’t need the use of the variants’ genetic sequences. It just needs labels to be able to distinguish between nominal categories among the features. The drawback is that, as it can’t use sequence information, the model could miss the underlying biochemical information that can be useful for the prediction task.
[0057] In order to be able to use information from DNA/Protein sequences, alternative ways to represent data can be utilized. These methods utilize Natural Language Processing and are focused on building low-dimensional numerical representations (embeddings) of text sequences. The use of a continuous numerical representation provides biological and chemical information about the variants that the learner (surrogate model) could use to make better predictions, improving the efficiency in getting an optimal design.
[0058] The building of the embeddings requires the training of machine learning models on vast amounts of data. The number of sequences involved in the problems to which SMBO is applied is typically low, so the model that encodes the text information needs to be previously trained on large external datasets. This transfer learning methodology allows a model to apply knowledge from a large dataset to a more specific task.
Encoding via multidimensional representation of sequences
[0059] Through embedding techniques, sequences can be represented as vectors that lie in a multidimensional space. These vectors encode sequence information and allow relationships to be made in the multidimensional space that have biochemical and biophysical sense. This dense representation in a continuous space makes it possible for the model to have access to information about the sequences that allows the model to identify, extract and exploit the relationships between them.
[0060] To model biological sequences, natural language processing techniques can be utilized. For this, we consider a sequence as a composition of subunits represented by symbols or sequences of symbols, which we call tokens. Each sequence is related to a set of tokens. In our case, these tokens can be defined as n-grams, where an n-gram is a sequence of n contiguous sequence elements. The parameter n can take values between 1 and the maximum length of all the sequences, where the length is measured according to the maximum amount of the smallest subunits in which it is possible to divide the sequence. For example, in the case of proteins, these n-grams correspond to their (sub)sequences of n residues, and in the case of genes, correspond to their (sub)sequences of n nucleotides or codons, etc.
[0061] A parametric model, such as a neural network with one or more layers, can be utilized to learn a multidimensional representation for each token. This can be done using a representation learning algorithm (such as Word2Vec, GloVe, FastText, Autoencoders, Variational Autoencoders (VAEs), ProtVec and GeoVec, dna2vec, etc.).
[0062] The above method allows for representation of larger sequences using, for example, a weighted sum of n-grams. This weighting can be learned by a parametric model such as a neural network with one or more layers. It can also be the arithmetic sum, the average, the weighting by the inverse of the frequency of the tokens, among others. The resulting representation contains enough information, for example, to group the sequences according to their biochemical and biophysical properties.
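As an illustration, the sketch below builds n-grams and averages token vectors into a sequence embedding (the arithmetic-mean combination mentioned above). The token vectors shown are random stand-ins for embeddings that would come from a pre-trained model such as dna2vec:

```python
import numpy as np

def ngrams(seq, n):
    """All n-grams (subsequences of n contiguous elements) of a sequence."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def sequence_embedding(seq, token_vectors, n=3):
    """Represent a full sequence as the average of its n-gram embeddings."""
    vecs = [token_vectors[t] for t in ngrams(seq, n) if t in token_vectors]
    return np.mean(vecs, axis=0)

# Hypothetical 4-dimensional embeddings for a few codon-like 3-grams.
token_vectors = {t: np.random.randn(4) for t in ["ATG", "TGG", "GGC"]}
print(sequence_embedding("ATGGC", token_vectors))
```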
[0063] Similar methods can be used to train Language Models that directly translate sequences into vector representations, without defining explicitly an arithmetic for combining token representations. Some of these approaches include the training of Recurrent Neural Networks on token prediction tasks or use models like Autoencoders, VAEs, BERT, etc.
[0064] When using the embedding approach, the input of the surrogate will be fully continuous. Hence, the SMBO might take advantage of this and use a surrogate model with an analytical optimum (such as a Gaussian Process) or a derivable one (such as Deep Ensembles, Deep Gaussian Processes, Bayesian Networks or other Bayesian approaches). In case of using a derivable model, the optimum can be searched by means of the Gradient Descent algorithm, Stochastic Gradient Descent, or similar.
[0065] The above-described encoding/embedding and corresponding decoding schemes can be utilized for encoding/decoding the experiential genotype vectors (such as by encoder 304 in Fig. 3) and the sample genotype vectors (such as by encoder 404 in Fig. 4).
[0066] Returning to Fig. 1, at step 101 a phenotype prediction model is trained based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints. [0067] The phenotype prediction model is a surrogate model (sometimes referred to as a metamodel or a substitute model) that is used to approximate an objective function. A surrogate model (also referred to as a “surrogate function”) is a model that approximates the behavior of a more complex model or physical system as closely as possible. Surrogate models are explained in greater detail below and can be utilized when the computational complexity of a physical system, experimental data points, and/or constraints would result in computationally indeterminable or infeasible training or application steps.
Surrogate Model
[0068] In some real-world optimization problems, it is not always possible to directly obtain analytical solutions. These kinds of objective functions are called “black-box” functions and describe a situation where there is no expression of the objective function that can be analyzed, and it’s not possible to know its derivatives. Evaluating the function is restricted to querying at a point $x$ and getting a (possibly noisy) response. Due to the above-mentioned challenges, the optimization formulation can be updated by replacing the objective function with a surrogate objective function $\hat{f}$ such that they share the same optimal solutions. Thus solving the optimization for $\hat{f}$ must also solve the optimization for $f$, as expressed below.

[0069] $\arg\max_{x \in X} \hat{f}(x) = \arg\max_{x \in X} f(x)$

[0070] Finding a suitable $\hat{f}$ may also be a non-trivial task. One approach is to represent it as a parametric model with parameters $\theta$ such that $\hat{f} = f_{\theta}$. Attaining the optimal parameters is in itself another optimization problem that may be expressed as follows.

[0071] $\theta^{*} = \arg\min_{\theta} \sum_{i=1}^{N} M\left(f_{\theta}(x_i), f(x_i)\right) \quad (1.2)$

[0072] Where $M$ is some measure of distance between its arguments. In other words, we are looking for parameters $\theta$ such that the distance $M$ between $f_{\theta}(x_i)$ and $f(x_i)$ is minimized, for a subset of $N$ elements $\{x_i\}_{i=1}^{N}$ from the solution space $X$ (in practice this subset corresponds to the available experimental data). These elements are used to optimize the parametric function so that it is informative enough to perform a successful optimization.
[0073] Sequential Model Based Optimization is used to train the surrogate model and is explained in greater detail below.
Sequential Model Based Optimization (SMBO)
[0074] SMBO algorithms are a family of optimization methods based on the use of a predictive model to iteratively search for the optimum of an unknown function. They were originally designed for experimental design and oil exploration. SMBO methods are generally applicable to scenarios in which a user wishes to minimize some scalar-valued function $f(x)$ that is costly to evaluate. These methods progressively use the data that is compiled from a process or objective function to adjust a model (i.e., the surrogate model). This model is used on each iteration to make predictions of the objective function over a set of candidates, which are ranked according to their predicted score. On each iteration, the top ranked candidate is suggested to be evaluated for the next iteration.
[0075] SMBO has never been applied before to the optimization of a biological system, which poses specific challenges that are addressed by the methods, apparatuses, and computer- readable media disclosed herein.
[0076] When ranking candidates, SMBO methods usually use a scalar score that combines the predicted value with an estimation of its uncertainty for each sample. The intuition behind using the uncertainty is that the reliability of the predictions may decrease on unexplored areas of the solution space. Also, the consideration of an uncertainty term in the acquisition function promotes the exploration and the diversity of recommendations, helping to avoid local optima. There are many options for acquisition functions in the literature. One of the most commonly used is the expected improvement (EI).
[0077] Several SMBO methods can be found in the literature. These methods differ from the present system in the modeling approach. One of the best-known methods is called Bayesian Optimization, which uses Gaussian Processes (GP) as a surrogate model. This approach has been successfully applied in many fields; however, GP modeling cannot be applied directly to discrete variables (like genotypes). Other approaches include Hyperopt (or the TPE algorithm), which uses a Tree-Structured Parzen Estimator, and SMAC, which uses Random Forests (RF) as the surrogate model.
[0078] As noted before, one of the most common acquisition functions is called Expected Improvement (EI). The general formulation is:

$$EI(x) = \mathbb{E}\big[\max(f(x) - f(x^*),\, 0)\big]$$

[0079] where $f(x^*)$ is the current optimum value and $f(x)$ is the value of the surrogate's prediction. Since a random variable is required to calculate the expectation, $f(x)$ should be associated with a probability function. Unfortunately, with most machine learning models it is not trivial to obtain a distribution of a prediction (instead of just a plain prediction value) to be used as the surrogate. For this, a possible approach (among a few others) is to use ensemble models (e.g., Random Forests, Deep Neural Network Ensembles, implicit ensembles, Bayesian models, Deep Gaussian Processes, etc.). With Random Forests (RF), the RF prediction is used as an estimation of the statistical mean of the surrogate model's predictions, for which a Gaussian distribution is assumed. The calculation of the variance of the prediction considers the RF estimators' deviation and the leaf training variance for each tree. Both estimations are combined using the law of total variance.
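By way of illustration, a possible sketch of the EI calculation under the Gaussian assumption is shown below. For simplicity, the spread across ensemble members is used as the standard deviation; the disclosed calculation additionally combines the leaf training variance via the law of total variance.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(member_predictions, f_best):
    """EI under a Gaussian assumption on the surrogate's prediction.

    member_predictions: array of shape (n_members, n_candidates), holding
    each ensemble member's prediction for each candidate.
    """
    mu = member_predictions.mean(axis=0)           # estimated prediction mean
    sigma = member_predictions.std(axis=0) + 1e-9  # estimated prediction spread
    z = (mu - f_best) / sigma
    # Closed-form EI for maximization with a Gaussian predictive distribution.
    return (mu - f_best) * norm.cdf(z) + sigma * norm.pdf(z)
```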
[0080] Classic SMBO approaches are formulated to recommend one experimental observation per iteration. However, in the context of optimizing a biological system, a single experiment can typically make multiple evaluations of the objective function. In those cases, experimental recommendations should be grouped into batches, and the method should be able to suggest on each iteration a batch of $n$ candidates $x_j$, with $j \in \{1, \ldots, n\}$, instead of just one single recommendation.
[0081] The simplest way to recommend a batch of $n$ experiments is to take the $n$ optimal values from the acquisition function. However, this criterion (“take the $n$ designs with the highest acquisition values”) may not consider that some designs in the selected set may be very close to each other, reducing the exploration of the solution space and incurring wasteful experimental evaluations. A possible strategy to deal with batch suggestions is the constant liar technique. With this technique, the first candidate, $x_1$, is obtained by maximizing the acquisition function in the same way as for the single-sample case. However, to obtain $x_{j+1}$ it is assumed that the evaluation at point $x_j$ exists, and the model is retrained after adding a fabricated value for $x_j$'s evaluation into the training dataset. Then the new acquisition function is maximized to obtain the next candidate. There are several heuristics that could be used to decide how to fabricate $x_j$'s evaluation value (e.g., use the mean predicted value $m$; the mean prediction plus the deviation, $m + s$; or the prediction mean minus the deviation, $m - s$; etc.).
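By way of illustration, a minimal sketch of the constant liar batching, using the surrogate's mean prediction as the fabricated value (`fit_surrogate` and `acquisition` are hypothetical helpers, as above):

```python
import numpy as np

def constant_liar_batch(X, y, candidates, fit_surrogate, acquisition, n):
    """Suggest a batch of n candidates using the constant liar technique."""
    X_aug, y_aug, batch = list(X), list(y), []
    pool = list(candidates)
    for _ in range(n):
        model = fit_surrogate(X_aug, y_aug)
        scores = [acquisition(model, x) for x in pool]
        x_next = pool.pop(int(np.argmax(scores)))
        batch.append(x_next)
        # Fabricate ("lie about") the evaluation at x_next; here the
        # surrogate's mean prediction m is used as the fabricated value.
        X_aug.append(x_next)
        y_aug.append(float(model.predict([x_next])[0]))
    return batch
```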
[0082] Fig. 5 illustrates a flowchart for training a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints according to an exemplary embodiment.
[0083] At step 501 one or more parameters are determined for the surrogate model, the one or more parameters being configured to maximize accuracy of the surrogate model while reducing a computational complexity of training and applying the model.
[0084] Fig. 6 illustrates a diagram of the parameter adjustment algorithm of the surrogate model according to an exemplary embodiment. The surrogate is a model that not only makes predictions, but also quantifies the uncertainty of its estimations on data that have not yet been observed (i.e., not yet evaluated by the oracle, described in greater detail below). To adjust its parameters, a threshold amount of data must be observed, which will depend on the nature of the problem as well as the architecture of the model.
[0085] The current implementation considers a surrogate model based on Deep Ensembles (an ensemble of deep neural networks). This model is readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
[0086] The training scheme for the surrogate model is described in Fig. 6. Here $\hat{f}$ is evaluated at a sample $x_s$ and its output $\hat{f}(x_s)$ is compared with the target value $y_s$ that has been labeled by the oracle. Then the distance between them, which we call the cost, is computed. The surrogate model parameters $\theta_f$ are adjusted so that the value of this cost function is minimized. The parameters are adjusted using an optimizer with default parameters.
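By way of illustration, the cost-minimization loop for a single ensemble member might be sketched as follows, assuming a mean-squared-error cost and an optimizer with default parameters (the architecture shown is illustrative only, not the disclosed implementation):

```python
import torch
from torch import nn

def train_surrogate_member(x_s, y_s, n_epochs=200):
    """Fit one ensemble member by minimizing the cost between f_hat(x_s)
    and the oracle-labeled target y_s (MSE is used as the distance here)."""
    model = nn.Sequential(nn.Linear(x_s.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    optimizer = torch.optim.Adam(model.parameters())  # default parameters
    cost_fn = nn.MSELoss()
    for _ in range(n_epochs):
        optimizer.zero_grad()
        cost = cost_fn(model(x_s).squeeze(-1), y_s)   # distance to oracle labels
        cost.backward()                               # gradient w.r.t. theta_f
        optimizer.step()                              # adjust theta_f
    return model
```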
[0087] Fig. 6 shows a diagram of the parameter adjustment algorithm of the surrogate model $\hat{f}$. Data points labeled by the oracle are used (Oracle samples). The Oracle samples refer to real-world results, such as the experimental data points previously discussed (linking genotype data to phenotype data). Optionally, the Oracle samples can be derived in ways other than experimentation. For example, in certain scenarios, it may be possible to mathematically derive results by modeling the molecular/biological behavior. These mathematically derived results can also correspond to real-world behavior.
[0088] The tuple $(x_s, y_s)$ represents a labeled data point (where $x_s$ refers to a sample point and $y_s$ refers to its evaluation in the oracle), and $\hat{f}(x_s)$ corresponds to the surrogate model evaluation of the given sample. The parameters of the surrogate model are updated in a way that reduces the cost function that receives both values as arguments; this update is indicated by the dotted arrow.
[0089] Once the parameters that define the architecture of the model have been found, the SMBO logic can be applied to determine the next points to evaluate experimentally and to successively update the predictive model until the optimal phenotype has been determined.
[0090] Returning to Fig. 5, at step 502 an objective function is determined based at least in part on the plurality of desired phenotypic attributes. The process of recommending genotypes for experimentation involves maximization of the objective function, as well as maximization of an acquisition function and additional steps, as discussed in greater detail further below.
[0091] At step 503 the objective function is iteratively adjusted by repeatedly selecting one or more experiential genotype vectors in the plurality of experiential genotype vectors that maximize an acquisition function of the phenotype prediction model and updating the objective function based at least in part on one or more experimentally-determined phenotypic attributes corresponding to the one or more experiential genotype vectors. For example, this can be performed by modeling the plurality of experiential genotype vectors as a random forest and training the random forest using the corresponding experimentally-determined phenotypic attributes.
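By way of illustration, one possible sketch of such a random-forest surrogate using scikit-learn is shown below. The per-tree standard deviation is used here as a simplified uncertainty estimate; as noted above, the disclosed calculation additionally incorporates the leaf training variance via the law of total variance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_rf_surrogate(genotype_vectors, phenotypes):
    """Fit a Random Forest surrogate on experiential genotype vectors."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.asarray(genotype_vectors), np.asarray(phenotypes))
    return model

def predict_with_uncertainty(model, X):
    """Mean over trees as the prediction; spread across trees as uncertainty."""
    per_tree = np.stack([tree.predict(np.asarray(X)) for tree in model.estimators_])
    return per_tree.mean(axis=0), per_tree.std(axis=0)
```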
[0092] The process of step 503 is repeated until all of the experiential genotype vectors in the plurality of experiential genotype vectors are processed. Specifically, at step 504 a determination is made regarding whether there are additional experiential genotype vectors in the plurality of experiential genotype vectors that have not yet been processed. If so, then step 503 repeats. Otherwise, the training is terminated at step 505.
[0093] Once the phenotype prediction model is trained, it is necessary to evaluate genotypes within the solution space to determine the genotype that optimizes the phenotype. However, in cases where the domain space of the objective function is huge and highly dimensional, the solution space of the optimization problem may become very difficult to define and explore efficiently. To solve this problem, the novel method and system disclosed herein utilize a genotype generation model (also referred to as the “generative model” or the “generator model”) to generate candidate genotypes. The derivation of this solution and the use of the generative model in exploring the solution space are described in greater detail below.
Derivation of the proposed solution
[0094] In different areas, there is interest in designing and optimizing systems or processes so as to optimize a set of their properties or responses. Defining $f: X \to \mathbb{R}$ as the objective function of the optimization problem and $x \in X$ the domain of the system, this can be expressed as follows.

[0095] $$x^* = \arg\max_{x \in X} f(x)$$
[0096] Generally, optimization problems are subject to some circumstances or constraints on the system that limit the region of feasible solutions. Although in some cases it is difficult to know the specific rules that define this solution space, the above optimization problem can be reformulated as

[0097] $$x^* = \arg\max_{x \in U} f(x) \qquad (1.1)$$

[0098] where $U$ denotes the solution space, or the set of all feasible solutions.
[0099] Applicants discovered a method that solves this optimization problem when:
[0100] The objective $f$ is a black-box function, and the feasibility of obtaining evaluations may be restricted by budget constraints of different types: computational, temporal, economic, or others; and/or
[0101] Some of the fundamental restrictions of the optimization problem that describe the solution space may be non-trivial to define.
Sampling feasible solutions
[0102] SMBO methods consider the optimization of an acquisition function to obtain candidates (new samples from the solution space to be tested). Some Bayesian Optimization approaches use quasi-Newton methods such as BFGS or L-BFGS, which explore the solution space starting from some initialization points toward optimal values of the surrogate function by following the direction of an approximate gradient. Most of these methods require the restrictions that describe the solution space to be explicitly defined. Other methods generate random samples (or, alternatively, samples are selected from all possible candidates by heuristic rules) to make massive evaluations of the surrogate function in order to search for optima. Those methods also require the restrictions that confine the solution space to be explicitly defined.
[0103] As explained previously, in cases where the domain space of the objective function is huge and highly dimensional, the solution space of the optimization problem may become very difficult to define. For example (herein “Example 1”), consider the case of designing a protein (a sequence of amino acids) so that its properties optimize a specific function $f$. This can be defined as an optimization problem which aims to find a set of amino acid sequences $x^*$ that optimize said function, subject to an unknown set of constraints, which define the space of feasible proteins. The dimensionality of the domain space (the domain of $f$) is vast due to the immense number of possible amino acid sequences, which is roughly given by

$$\sum_{l = l_{min}}^{l_{max}} n_a^{\,l}$$

where $l$ represents the length of a sequence within the range $[l_{min}, l_{max}]$ and $n_a$ is the number of possible amino acids (21). The above number can be huge, but in practice only a small subset of these combinations actually constitutes the solution space, given that not all such sequences of amino-acid letters are feasible proteins. Most of them are only hypothetical molecules that won't behave as natural proteins (they may not be safe for any organism or even stable within certain temperature ranges, etc.), and almost all the restrictions that limit the solution space are unknown.
[0104] The applicant proposes a solution to deal with this type of immense and/or unknown solution space. This proposal solves these optimization problems in a clever way, by learning from samples of the solution space. The distribution of those samples is approximated by means of a generator model $G$ which is trained with samples that are not necessarily directly related to the optimization task at hand. These generator models come in many forms (Hidden Markov Models - HMMs, Variational AutoEncoders - VAEs, Generative Adversarial Networks - GANs, etc.), but usually what all of these do is model a certain probability distribution from which elements can then be sampled. One way to obtain a generator model is to use a parametric function $G(z; \theta_G)$ with adjustable parameters $\theta_G$ such that:

[0105] $$x_g = G(z; \theta_G), \quad z \sim P_z, \quad x_g \sim P_G$$

[0106] Meaning that elements $x_g$ can be sampled following distribution $P_G$, where $z$ represents latent random variables distributed according to a known distribution $P_z$, and $\theta_G$ are the parameters that are fitted in such a way that the generator function learns to map from a latent space $Z$ to the $X$ space, following the modelled distribution $P_G$.
[0107] Now, finding the parameters $\theta_G$ is a non-trivial task. Attaining an optimal configuration of them is in itself another optimization problem, which may be expressed as follows.

[0108] $$\theta_G^* = \arg\min_{\theta_G} D_M\big(P_G(x; \theta_G),\, P_U(x)\big) \qquad (1.3)$$

[0109] Where $P_G$ represents the probability distribution of the generator model taking as input a random variable $z \sim P_z$, $P_U$ is the probability distribution of the solution space $U$, and $D_M$ is some measure of probability distribution distance (e.g., the Kullback-Leibler divergence). In other words, optimization 1.3 searches for the optimum parameters $\theta_G^*$ such that both distributions become the same.
[0110] In practice, there are several ways of optimizing these parameters, which mainly depend on the type of generative model being implemented (e.g., HMMs, VAEs, or GANs). As explained in greater detail below, the optimization process usually works by adjusting the parameters $\theta_G$ such that the generator's distribution $P_G(x; \theta_G)$, with $z \sim P_z$, is able to model the distribution of a subset (of a given number of elements) from the solution space $U$.
[0111] In “Example 1”, this solution space could be represented by a subset of all real known protein sequences. These can be found in online public protein databases such as the “Worldwide Protein Data Bank”. With these elements, the parameters of the generator model can be fitted to model the probability distribution of this much more compact and tractable solution space by solving optimization 1.3. Then, this generator can be used within an SMBO process: one could simply sample a set of “new” elements from this distribution, rank them according to an acquisition function, and generate a list of “candidates” from the top-scored samples to be evaluated on the function $f$. Then, just as in any common SMBO approach, this is repeated for each iteration of the optimization process of problem 1.1.
[0112] This approach is related to “Transfer Learning.” In Deep Learning, this term refers to the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. Because in the proposed method the generator model can be trained using a dataset that is not specific to the optimization task, the approach may be described as a way of applying Transfer Learning to SMBO. This relation implies the sharing of some interesting properties that come from this kind of learning, like model reusability (e.g., a protein generator model can be applied to several distinct tasks that involve proteins) and the possibility of achieving successful results without the need for a huge number of task-specific samples.
[0113] Returning to Fig. 1, at step 102 a genotype generation model is trained based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors.
[0114] Fig. 7 illustrates a flowchart for training a genotype generation model based at least in part on a plurality of sample genotype vectors according to an exemplary embodiment.
[0115] The genotype generation model can be a Generative Adversarial Network (GAN). These networks seek to train a generative model from a set of input data. During training, the generator is encouraged to fit the probability distribution that characterizes this data set (i.e., the evidence probability distribution $p_{data}$). Generally, the goal is to use this trained generative model to create new data with properties similar to the original set.
[0116] There are several mechanisms by which the probabilistic distribution of a data set can be estimated. One of them, for example, estimates it through an adversarial training procedure with two functions, described below.
[0117] At step 701 a generator model function having a plurality of trainable generator parameters is stored. The generator model function is configured to mimic the distribution of the plurality of sample genotype vectors. The generator model function $G(z; \theta_G)$ with trainable parameters $\theta_G$ mimics the distribution of the input data. For better readability the subindex is omitted in this document, but it is implicit in the $G$ function.
[0118] At step 702 a discriminator model function having a plurality of trainable discriminator parameters is stored. The discriminator model function is configured to estimate a probability that a data sample comes from the plurality of sample genotype vectors instead of from the generator model function. The discriminator model function $D(x; \theta_D)$ with trainable parameters $\theta_D$ estimates the probability that a data sample comes from the real evidence distribution instead of from the synthetic generator distribution. For better readability the subindex is omitted in this document, but it is implicit in the $D$ function.
[0119] The adversarial training procedure involves training both of these functions/models concurrently. In this process, the generator model faces an “adversary”: the discriminator model, trained to determine whether a sample comes from the distribution of the generative model or from the distribution of the original real data.

[0120] In the traditional case, the weights/parameters of these networks are adjusted using optimization methods based on gradient ascent/descent in which an objective function is maximized/minimized.
[0121] However, instead of being trained as a traditional optimization problem, GANs are based on a "minimax game" and defined as a "zero-sum game" under the Game Theory approach. Under this perspective, one agent (a neural network) takes on the role of the Generator and another agent (another neural network) the role of the Discriminator. As explained below, a special objective function is used for the adversarial training process.
[0122] At step 703 a minimax objective function is stored that is configured to be minimized by the generator model function and maximized by the discriminator model function. A minimax objective function is one in which one agent attempts to minimize the function while the other agent attempts to maximize it. In this case, the generator agent tries to minimize the objective function while the discriminator agent tries to maximize it:
[0123] $$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

[0124] Where $x$ are the real samples coming from the evidence probability distribution $p_{data}(x)$; $x_g = G(z)$ are the synthetic generated samples coming from the generator distribution $p_g$, which is defined by the function $G(z; \theta_G)$; and $z$ is the generator input, which is sampled from a simple noise distribution $p_z(z)$ such as a uniform or normal Gaussian distribution.
[0125] At step 704 both the generator model function and the discriminator model function are concurrently trained with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point, which is a minimum with respect to the strategy of one player and a maximum with respect to the strategy of the other one.
[0126] Fig. 8 illustrates a flowchart for concurrently training both a generator model function and a discriminator model function with sample genotype vectors according to an exemplary embodiment.
[0127] At step 801 one or more sample genotype vectors are sampled from the plurality of sample genotype vectors. Step 801 is repeated so that one or more sample genotype vectors are repeatedly sampled from the plurality of sample genotype vectors.
[0128] At step 802 one or more generated genotype vectors are generated with the generator model function. Step 802 is repeated so that one or more generated genotype vectors are repeatedly generated.
[0129] At step 803 the discriminator model function is iteratively applied to the one or more sample genotype vectors and the one or more generated genotype vectors until the discriminator model function cannot distinguish between them. As shown in Fig. 8, this step repeats after each iteration. Application of the discriminator model function alternates between the one or more sample genotype vectors and the one or more generated genotype vectors.
[0130] The training process alternates between the sample genotype vectors and the generated genotype vectors. For example, the sequence of steps shown in Fig. 8 can be 801 → 803 → 802 → 803 → 801 → 803, etc. This allows the minimax objective function to reach the saddle point and terminate the training.
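By way of illustration, the alternating update scheme might be sketched as follows, assuming small multilayer-perceptron networks and the standard non-saturating GAN losses (the architectures and hyperparameters are illustrative only, not the disclosed implementation):

```python
import torch
from torch import nn

def train_gan(x_real_all, latent_dim=16, batch_size=32, steps=1000):
    """Alternate discriminator (803) and generator (802) updates."""
    data_dim = x_real_all.shape[1]
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(batch_size, 1), torch.zeros(batch_size, 1)
    for _ in range(steps):
        # Step 801: sample real genotype vectors from the training set.
        idx = torch.randint(0, x_real_all.shape[0], (batch_size,))
        x_real = x_real_all[idx]
        # Step 802: generate synthetic genotype vectors from latent noise.
        x_fake = G(torch.randn(batch_size, latent_dim))
        # Step 803 (discriminator turn): maximize log D(x) + log(1 - D(G(z))).
        loss_d = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Step 803 (generator turn): fool the discriminator.
        loss_g = bce(D(x_fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G, D
```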
[0131] Fig. 9 illustrates a representative diagram of the adversarial training framework used for training a generative model according to an exemplary embodiment. The mathematical terms represent the gradients used to update the parameters of the generative and discriminative networks that are obtained from equation 1.1. The term with the symbol $\nabla_{\theta_D}$ corresponds to the gradient taken with respect to the parameters of the Discriminator, and the term with the symbol $\nabla_{\theta_G}$ corresponds to the gradient taken with respect to the parameters of the Generator.
[0132] As shown in Fig. 9, the Discriminator (D) is trained to “discern” (or classify) whether the samples come from the Generator (G) or from the Real data (R). The Discriminator (D) models a cost function related to the probability of performing the classification correctly. This signal serves to iteratively adjust, through the gradients shown, both the Generator (G) and the Discriminator (D) parameters. The weights of the two networks are updated in turns. The source of the samples to be discriminated is determined by the action represented by the “switch” at the center of the image. The sampling action is represented by the letter S. To sample from the distribution of the real data (R), an example is simply chosen randomly from said set. On the other hand, to sample from the generator, a random sample from a random noise source distribution is selected and then passed through the generator function G.
[0133] Under this approach, a GAN architecture converges when the Discriminator and the Generator reach a Nash Equilibrium. That is, the “game” ends when the optimization converges into a “saddle point” of the objective function, which is a minimum with respect to the strategy of one player and a maximum with respect to the strategy of the other one. This “competition” leads both players to optimize themselves until the generated samples are indistinguishable from the real samples (i.e., the generator distribution matches the real distribution).
[0134] Fig. 10 illustrates a diagram of the parameter adjustment algorithm of the generative model according to an exemplary embodiment. The operation of the parameter adjustment algorithm is described in greater detail below.
[0135] The Wasserstein GAN with Gradient Penalty (WGAN-GP) was selected for generation. This approach is similar to the Wasserstein GAN, but to promote the “Lipschitz Condition” it uses a penalty term in the loss function instead of applying weight clipping.
[0136] To adjust the Generator parameters, data that satisfies the problem's constraints should be used. Parameters are modified through an adversarial training framework. Here, two sets of parameters are optimized: the Generator's (θG) and the Discriminator's (θD). Once the training is finished, the generator should be able to model the probability distribution of the training dataset, so it is expected that synthetic samples obtained from this generator will satisfy the constraints of the problem.
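By way of illustration, the gradient penalty term used by WGAN-GP to promote the Lipschitz condition might be sketched as follows (`critic` may be any differentiable network; the penalty weight of 10 is the value commonly used in the literature, not necessarily the one used here):

```python
import torch

def gradient_penalty(critic, x_real, x_fake, gp_weight=10.0):
    """WGAN-GP term: penalize critic gradient norms away from 1 (Lipschitz)."""
    eps = torch.rand(x_real.shape[0], 1)                  # interpolation factors
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=critic(x_hat).sum(),
                                inputs=x_hat, create_graph=True)[0]
    return gp_weight * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```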
[0137] Fig. 10 shows a diagram of the parameter adjustment algorithm of the generative model $G$, which is done under a modified version of the generative adversarial training framework. In particular, the discriminator $D$ (or critic) is trained to estimate the (scaled) Wasserstein distance between the probability distribution of the generator and the modelled probability distribution of the solution space $U$. The generator is trained to minimize the Wasserstein distance estimate given by the critic. Additionally, to guide the generator to model the distribution that solves the optimization problem, feedback from the surrogate model $\hat{f}$ can be incorporated by adding a regularization term to the generator cost function and updating its parameters $\theta_G$ while the Discriminator's weights are fixed.
[0138] Returning to Fig. 1, at step 103 a plurality of new genotype vectors are generated with the genotype generation model. Fig. 11 illustrates a flowchart for generating a plurality of new genotype vectors with the genotype generation model according to an exemplary embodiment.
[0139] At step 1101 one or more parameters are stored, the one or more parameters including a batch size and a selection rate. A selection method is used to choose the most promising candidates resulting from the entire process to be evaluated by the Oracle (e.g., experimental testing). This selection method can depend on a number of different parameters, which can be provided by a user, set to some default value, or algorithmically determined. One of the parameters used can be the batch size $m$, which describes the number of candidates to be evaluated by the Oracle on each SMBO iteration. Another parameter can be the selection rate $sr$, a value between 0 and 1 that denotes the size of the quantile of top candidates to be selected.
[0140] At step 1102 a set of new genotype vectors is generated with the generator model function. The size of the set of new genotype vectors (i.e., the quantity of new genotype vectors) can be determined based at least in part on one or more parameters. In one example, the size of the set of new genotype vectors is determined based at least in part on the batch size and the selection rate. For example, when looking for top-performing candidates, a total of $k = m / sr$ elements can be sampled from (generated by) the generator model function of the genotype generation model. For the overall process, candidates can be selected following the constant liar approach, which stops after choosing $m$ elements.
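By way of illustration, with a hypothetical batch size m = 10 and selection rate sr = 0.1, the generator would be asked for k = m / sr = 100 candidates:

```python
import math

def n_candidates_to_generate(batch_size, selection_rate):
    """k = m / sr: generator samples needed so the top quantile fills the batch."""
    return math.ceil(batch_size / selection_rate)

assert n_candidates_to_generate(10, 0.1) == 100
```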
[0141] Returning again to Fig. 1, at step 104 the phenotype prediction model is applied to the plurality of new genotype vectors to generate a plurality of scores, the phenotype prediction model being configured to predict one or more phenotypic attributes of the new genotype vectors.

[0142] This step can include applying the objective function to the plurality of new genotype vectors to generate a plurality of prediction scores corresponding to the plurality of new genotype vectors. This step can also include applying an acquisition function of the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of acquisition scores corresponding to the plurality of new genotype vectors. Additionally, this step can include determining an uncertainty score associated with each of the new genotype vectors in the plurality of new genotype vectors.
[0143] At step 105 a plurality of result genotypes are determined based at least in part on a ranking of the plurality of new genotype vectors according to the plurality of scores. This step can include ranking the plurality of new genotype vectors based at least in part on the plurality of prediction scores corresponding to the plurality of new genotype vectors and filtering out a percentage of the plurality of new genotype vectors below a predetermined ranking percentage to generate a plurality of result genotype vectors. The predetermined ranking percentage can be set by a user, set to some default value, based upon prior results or the particulars of the constraints or the experimental data set, or based upon implementation details of the particular predictive model type utilized.
[0144] At step 106 a result is generated based at least in part on the plurality of result genotypes, the result indicating one or more genetic constructs for testing. As discussed in greater detail below, this result can be a batch of single genetic constructs for testing or a combinatorial sequence that specifies multiple genetic constructs for testing. A user can optionally select whether they wish to receive a list of single genetic constructs or a combinatorial sequence.
[0145] Construct is used as a synonym of “candidate.” Each construct represents a particular combination of genotype variants. For example, a combinatorial DNA design of complexity N, will lead to the assembly of N constructs.
[0146] The step of generating a result can include ranking the plurality of result genotype vectors based at least in part on an acquisition score associated with each result genotype vector and selecting one or more result genotype vectors in the plurality of result genotype vectors based at least in part on the ranking. This step can further include decoding the result genotype vectors to present the genotype information in a format that the user can understand and utilize or create in an experimental setting.
[0147] This step can be configured to select only the top-ranked result genotype vector. However, in the scenario where a user has specified that they would like a batch of results (genetic constructs for testing), the system must determine whether sufficient constructs have been generated and otherwise take the appropriate actions to update the model and generate additional constructs.
[0148] Fig. 12 illustrates a flowchart for updating the predictive model and determining whether to generate additional constructs according to an exemplary embodiment. As shown in Fig. 12, one or more selected result genotype vectors are the output of an initial application (evaluation) of the predictive model. At step 1202 it is determined whether the quantity of results is greater than or equal to the requested batch size.
[0149] If so, then at step 1203 the result genotypes corresponding to the selected result genotype vectors are provided to a user. At step 1204 the user experimentally determines phenotype data (i.e., Oracle determined phenotype measurements) corresponding to the result genotypes. The phenotype information can be determined by the Oracle by any of the methods discussed herein, such as experimental testing or mathematical modeling of real-world values. At step 1205 the experimental data points are updated with the genotype data of the results and the corresponding phenotype data. At this point, a further step can be performed of re-training the phenotype prediction model based at least in part on the one or more result genotype vectors, the corresponding phenotype information, and the one or more constraints discussed previously.
[0150] If at step 1202 the results are determined to be less than the requested batch size, then the process proceeds to step 1206. At step 1206 phenotypes corresponding to the selected result genotype vector are algorithmically generated. These generated phenotypes are fabricated values and can be generated in a variety of ways. One way to generate these phenotypes is to use a phenotype prediction generated by the phenotype prediction model. Another way is to use the phenotype prediction minus the standard deviation of the prediction. Yet another way is to use some other linear combination of the predicted phenotype and its corresponding standard deviation. After the phenotypes are generated, the experimental data points are updated with the genotype data of the results and the corresponding phenotype data at step 1207.
[0151] At step 1208, the steps of encoding genotype information in the experimental data points, training the predictive model and optionally the generator model, generating new genotype vectors, applying the model, determining result genotypes, and generating a result are repeated with the updated experimental data points. This includes encoding genotype information in the updated plurality of experimental data points as an updated plurality of experiential genotype vectors, retraining the phenotype prediction model based at least in part on the updated plurality of experiential genotype vectors, the corresponding phenotype information, and the one or more constraints, optionally retraining the genotype generation model (if the same database of experiential genotype vectors are used as sample genotype vectors), applying the phenotype prediction model to remaining new genotype vectors in the plurality of new genotype vectors to generate an updated plurality of scores, determining an updated plurality of result genotypes based at least in part on an updated ranking of the remaining available genotypes according to the updated plurality of scores, and generating an additional result based at least in part on the updated plurality of result genotypes, the additional result indicating one or more additional genetic constructs for testing.
[0152] As shown in Fig. 12, regardless of whether the results are less than the requested batch size or greater than or equal to the requested batch size, a determination of phenotype data corresponding to the selected one or more result genotype vectors is made and the plurality of experimental data points are updated with the phenotype data and genotype data corresponding to the selected one or more result genotype vectors.
[0153] As discussed above, when a user requests a batch of single genetic constructs for testing, the predictive model can be applied to generate either one result genetic construct at a time or multiple genetic constructs per iteration. When multiple genetic constructs are determined per iteration, Applicant has discovered optimizations which reduce noise and improve computational efficiency. These optimizations are discussed in greater detail below.

Recommending multiple experiments (n batch experiments)
[0154] In order to recommend a set of N candidates simultaneously, the optimization framework should provide several candidates per iteration step. For this, the disclosed method repeats the training and evaluation steps as many times as needed, following the constant liar method. The actual predicted value of the untested candidate was used as the fabricated value.
[0155] When maximizing the acquisition function within the liar approach, considering that the problem is usually restricted to a fixed number of possible designs, the most accurate way to find the optimal candidate is to evaluate the acquisition function on all designs. The drawback of this approach is that the number of possible designs could be huge, implying large computation times. This problem can be overcome by randomly sampling a fixed number of designs (e.g., 10,000) from all possible designs for evaluation and selection of the optimum. However, this implementation may introduce an inconvenient noise factor into the formulation, making it difficult to achieve consistent predictions between models trained with the same data. To speed up the algorithm without compromising accuracy, the applicants introduced a new heuristic rule: instead of evaluating all possible designs or taking a random subset of all candidates, a subset is built from the a% of candidates with the top predicted values. This limits the calculation time and ensures more consistency between different runs. The idea is to identify a value of a that defines a list of top prediction candidates that will probably contain all candidates that the liar approach would have selected. We found that a value of a set to 60% worked on most of the experiments that we ran. It should be noted that the value of a may change if a different number of selected candidates is required (we have set the limit to 100 candidates by default, as clients are rarely interested in having more). The use of the a rule helps to cut the computation time down by nearly half without adding unnecessary randomness to the batch generation process.
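By way of illustration, the top-a% pre-filtering rule might be sketched as follows (`predictions` denotes the surrogate's mean predicted values for all designs):

```python
import numpy as np

def top_alpha_subset(designs, predictions, alpha=0.6):
    """Keep the alpha fraction of designs with the highest predicted values."""
    n_keep = max(1, int(len(designs) * alpha))
    order = np.argsort(predictions)[::-1]   # descending by predicted value
    return [designs[i] for i in order[:n_keep]]
```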
[0156] Fig. 12 described the scenario where a user requested a batch of single genetic constructs. However, a user can also specify that the step of generating a result should return a combinatorial design that is used to generate multiple genetic constructs.
[0157] Fig. 13 illustrates a flowchart for generating a result based at least in part on the plurality of result genotypes when the user requests a combinatorial design according to an exemplary embodiment.

[0158] At step 1301 the plurality of result genotype vectors are filtered to remove one or more first result genotype vectors corresponding to one or more categories of genotypes having genotype vectors with acquisition scores below acquisition scores of genotype vectors in other categories of genotypes.
[0159] The word “category” is used herein as a synonym of genotype variant. For example, if each gene included in a design contains only one category (genotype variant), then the design would not be considered “combinatorial.”
[0160] At step 1302 a plurality of filtered genotype vectors are selected from the filtered plurality of result genotype vectors, the selected plurality of filtered genotype vectors corresponding to one or more additional categories of genotypes having genotype vectors with acquisition scores above acquisition scores of genotype vectors in other categories of genotypes.
[0161] At step 1303 a plurality of aggregate acquisition scores are determined corresponding to a plurality of combinations of genotype vectors in the selected plurality of filtered genotype vectors.
[0162] At step 1304 the plurality of combinations of genotype vectors are ranked according to the plurality of aggregate acquisition scores.
[0163] Additionally, at step 1305 one or more top-ranked combinations of genotype vectors are selected as the result, each combination of genotype vectors corresponding to two or more genetic constructs for testing.
[0164] The process for generating a result based at least in part on the plurality of result genotypes when the user requests a combinatorial design corresponding to steps 1301-1305 is explained in greater detail below, with reference to specific examples.
Recommending combinatorial output
[0165] Applicant has developed a method for returning a reduced combinatorial design as output (instead of recommending a linear list of constructs). This can streamline the process of genotype optimization. The present section describes in detail a novel method to find the optimal combinatorial design out of the predictions over all single candidates.
[0166] When optimizing an organism by means of synthetic biology, some of the most common problems look as follows:
[Table 1: a combinatorial design with two gene bins; Gene 1 has 2 variant options and Gene 2 has 3 variant options.]
[0167] Table 1 illustrates an example of a combinatorial design. It contains 2 genes and each gene has a different number of possible variants. This specific example represents a biochemical reaction that depends on two enzymes of different kinds. Those enzymes are encoded as genes. The scientist has found 2 valid sequence alternatives for the first enzyme, and 3 options for the second gene.
[0168] Usually there is a set of bins or positions in a genetic design where, within each bin, there is a limited number of possibilities to choose from. In the example case, the first gene position or bin may take 1 of 2 different alternatives, while the second gene has 3 variants to choose from.
[0169] The data displayed in Table 1 represents a combinatorial design. Usually the scientist is searching for the best combination of the variants and looks for the one with the highest production rate of a certain product. Given the above example, there are 6 possible solutions for the problem, which are generated from the combinations of all variants. In the following table, each row represents one of these, also called “single solutions” or constructs:
[Table 2: the 6 single constructs formed by pairing each of the 2 Gene 1 variants with each of the 3 Gene 2 variants.]
[0170] Table 2 illustrates a list of the 6 singular solutions / constructs associated with the combinatorial example shown in Table 1.
[0171] As noted before, the scientist is looking for a construct within a combinatorial design that maximizes a specific experimental measurement. This is the kind of candidate solution that SMBO methods provide: single solutions. In this case, the scientist should build each of the proposed candidates, run experiments, evaluate them in the lab, feed the algorithm, and continue with the DBTL cycle until the criteria are met. This process works fine; however, there is room for improvement.
[0172] One of the shortcuts that biochemistry allows is to generate all of the constructs from a combinatorial design at once, with just a few biochemical reactions. This is not free, as sequencing and labelling all combinations from a huge combinatorial design can be very hard, but the applicants have found that in some cases the algorithm can take advantage of this property and streamline the optimization process.
[0173] As scientists can find it interesting to work with combinatorial designs of limited complexity (instead of working with a huge combinatorial design or lists of isolated constructs), the Combinatorial Solution option was implemented to suggest a “reduced” combinatorial design rather than a list of single candidates. This approach can allow the user to test hundreds or thousands of different meaningful designs at each optimization step, instead of just a few. Depending on the nature of the problem, this kind of solution may reduce experimental costs, hence increasing the number of samples tested on each iteration and improving the achieved optima. It may also help to reduce experimentation time.
[0174] The Combinatorial Output is a new step in the optimization process that runs (optionally) after all single candidates are evaluated. Considering that part of this method can be computationally demanding, the applicants created a first filtering stage, where some categories are discarded by using some heuristic rules, and then a fine-tuning stage where all the remaining categories are studied in detail.
[0175] The first stage uses two pre-filter methods. The first one finds, for each bin ‘b’, the worst performing categories, i.e., those for which all of the associated singular constructs' scores are below the ones of the other categories. After identifying these low-scored categories, the associated singular constructs are removed from the candidates list. Then the second pre-filter is applied, which starts building a combinatorial design by collecting the top ‘N’ best performing categories according to a ranking based on the acquisition value of their corresponding singular candidates. The number ‘N’ of collected categories is given by a pre-determined combinatorial complexity threshold. The combinatorial complexity is given by the product formula below, where $n_0^b$ and $n_f^b$ correspond, respectively, to the initial number of categories and the final number of categories of bin ‘b’. The final number of categories of each bin is predetermined by the user based on her needs.

$$\prod_{b} \binom{n_0^b}{n_f^b}$$
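By way of illustration, and using the reconstruction of the product formula given above (a product over bins of binomial coefficients, which is consistent with the numerical example in paragraphs [0185]-[0188] below), the complexity can be computed as follows:

```python
from math import comb

def combinatorial_complexity(n0_per_bin, nf_per_bin):
    """Product over bins of C(n0_b, nf_b): the number of reduced designs."""
    total = 1
    for n0, nf in zip(n0_per_bin, nf_per_bin):
        total *= comb(n0, nf)
    return total

# Paragraphs [0185]-[0188]: 6 bins, 7 initial and 2 final categories per bin.
assert combinatorial_complexity([7] * 6, [2] * 6) == 85_766_121
```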
[0176] Higher combinatorial complexity means a wider exploration space, increasing the chances of attaining global optima, but at the cost of a higher computational complexity. The result of the first stage will then be another combinatorial design, with lower complexity (due to the pre-filtering), that will be used as the input for the second stage.

[0177] The second stage (fine-tuning) is an exhaustive search that calculates a score for all possible combinatorial candidates of limited complexity that can be derived from the input combinatorial design (the one coming from the first stage). The implementation of this method is not trivial, as complexity scales quickly with the size of the input, and resources should be managed carefully. In what follows from this section, in addition to describing in detail the strategy used to find the optimum, the limitations of the algorithm are studied. The latter allow the definition of the combinatorial complexity threshold to be used in the filtering stage.
[0178] The fine-tuning stage basically calculates an aggregated score from the acquisition values of every single construct that belongs to each combinatorial candidate. The user may select the score to be the average acquisition value of the constructs, or the maximum, or, in fact, any other combination of the statistics of the acquisition values (such as the mean or standard deviation). Based on this score, the best combinatorial designs are stored during execution and returned to the user after evaluating all combinatorial candidates.
[0179] To better understand what the fine-tuning stage does, consider the acquisition function results of the constructs from the previous example:
[Table 3: the acquisition value computed for each of the 6 constructs of the example combinatorial design.]
[0180] Table 3 illustrates acquisition values for each construct within a hypothetical (big) combinatorial design. Acquisition values will be combined to calculate the scores for each (reduced) combinatorial candidate.
[0181] Depending on the requirements, the user can set the final desired complexity or, alternatively, the maximum number of categories per bin of the output combinatorial design. This will determine the number of possible different constructs that can be derived from the resulting combinatorial design and also define the set of all combinatorial candidates to be ranked. For instance, following our example case, if the scientist wants a combinatorial output with $n_f = 2$ final categories per bin, the number of constructs that can result from that output is $s_f = (n_f)^b = 4$, and the resulting candidates will be the ones listed in the following table. This table also shows the calculation of the candidates' scores:
[Table 4: the reduced combinatorial candidates for the example (2 final categories per bin), each with its member constructs and aggregated score.]
[0182] Table 4 illustrates combinatorial candidates for the example problem with their respective scores.
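By way of illustration, the enumeration and scoring of reduced combinatorial candidates might be sketched as follows, assuming the average acquisition value as the aggregate score (`acq` is a hypothetical mapping from construct tuples to acquisition values; the specific variant names and scores of Tables 1-4 are not reproduced here):

```python
from itertools import combinations, product

def score_reduced_designs(variants_per_bin, acq, nf=2):
    """Rank all reduced designs with nf categories per bin by mean acquisition."""
    per_bin_choices = [combinations(v, min(nf, len(v))) for v in variants_per_bin]
    results = []
    for choice in product(*per_bin_choices):
        constructs = list(product(*choice))       # member single constructs
        mean_score = sum(acq[c] for c in constructs) / len(constructs)
        results.append((choice, mean_score))
    return sorted(results, key=lambda t: t[1], reverse=True)
```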
[0183] The Combinatorial Solution method returns the top-ranked combinatorial designs according to the aggregated scores. The user may build all the associated constructs from one or more proposed solutions and evaluate them. After that, she can feed the model with the constructs' data and generate a new set of combinatorial or singular candidates.
[0184] From the example above, this problem (scoring all combinatorial candidates) seems very simple. However, the number of combinatorial candidates explodes with big combinatorial designs, so, from the computational complexity perspective, the implementation of this solution poses many challenges.
[0185] A typical strain optimization problem contains $b = 6$ bins and a number of original categories per feature $n_0 = 7$. The number of all single solutions will be given by the expression:

$$s = (n_0)^b$$

[0186] which here takes the value $s = 117{,}649$.
[0187] If the user wants to reduce to $n_f = 2$ final categories per feature, the number of combinatorial candidates will be given by the following expression:

$$c = \binom{n_0}{n_f}^{b}$$

[0188] which in this case takes the value $c = 85{,}766{,}121$ combinatorial candidates (and increases quickly with bigger values of $n_0$).
[0189] One way to achieve massive calculations these days is by using graphical processing units (GPUs) to take advantage of their parallel processing capabilities. To do so, it is important to find a valid representation of the problem that suits the available tools to exploit the hardware's parallelism. With that objective, the following definitions were made:
[0190] $S \in \mathbb{B}^{s \times k}$: the binary matrix of single solutions. Each row represents a single solution and each column represents one of $k$ total categories. Each component has a value of 1 if the category is present in the construct and 0 if not.
[0191] $T \in \mathbb{R}^{s}$: the target vector. It contains the float-valued scores predicted for each single design.

[0192] $C \in \mathbb{B}^{c \times k}$: the binary matrix of valid combinatorial solutions. Each row represents a combinatorial design and each column represents one of $k$ total categories. Each component has a value of 1 if the category is present in the design and 0 if not.
[0193] Considering that when a single construct is contained in a combinatorial design the two will share components with value “1” in $b$ (number of bins) dimensions, we can define the membership matrix $M \in \mathbb{B}^{c \times s}$ as:

$$M_{ij} = \mathbb{1}\big[(C S^T)_{ij} = b\big]$$

[0194] which is valued “1” in position $i,j$ iff the construct $j$ can be obtained from the combinatorial design $i$.
[0195] After constructing the Membership matrix M, the single scores associated with each combinatorial design can be obtained by means of boolean indexing in the target vector T. With those acquisition values, the combinatorial aggregated score can be easily calculated.
[0196] A more formal expression, specifically for the average acquisition score, denoted as the vector $A \in \mathbb{R}^{c}$ for all combinatorial candidates, is given by:

$$A = \frac{1}{c_f}\, M T$$

[0197] where $c_f$ is the number of singular constructs associated with a combinatorial candidate. This value is equal across all valid combinatorial candidates, as they were selected to have the same complexity.
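By way of illustration, a numpy sketch of this batched membership-and-averaging computation is shown below (GPU array libraries expose the same operations; the function and argument names are illustrative, and the membership test follows the reconstruction of paragraph [0193]):

```python
import numpy as np

def batched_average_scores(C, S, T, b, batch_size=1024):
    """Average acquisition score per combinatorial candidate, in batches.

    C: (c, k) binary matrix of combinatorial candidates.
    S: (s, k) binary matrix of single constructs.
    T: (s,) predicted scores for the single constructs.
    b: number of bins (a construct belongs to a design iff they share b ones).
    """
    scores = np.empty(C.shape[0])
    for start in range(0, C.shape[0], batch_size):
        Cb = C[start:start + batch_size]
        M = ((Cb @ S.T) == b).astype(T.dtype)   # membership matrix partition
        scores[start:start + batch_size] = (M @ T) / M.sum(axis=1)
    return scores
```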
[0198] The above formulation was implemented for GPU execution. It has to be run in batches, as the calculations may not fit in the processor's memory. Scores are calculated using partitions $C_b \in \mathbb{B}^{b_s \times k}$ of the combinatorial candidates, where $b_s$ is the batch size. The best-scored designs are stored during the process. The following table shows the execution times of the search for the best scores, given the list of scores for the singular constructs (tests were run on an AWS p2.xlarge machine instance):
[Table 5: execution times of the combinatorial scoring step for different numbers of original categories per bin n0.]
[0199] Table 5 illustrates computation times of the combinatorial solution step. Results are shown for different numbers of categories per bin $n_0$ of the original combinatorial design.
[0200] As shown in Table 5, for high values of $c$, calculations can become very time-consuming. Also, matrix $S$ increases its size with $n_0$, and may become large enough not to fit into memory. For this reason, the reach of the fine-tuning stage has to be limited to the available resources. Using Table 5, the applicants set the combinatorial complexity threshold to 8.6E+07, considering the machines that are currently available in their environment and the amount of time their users are willing to spend on this calculation.
[0201] Fig. 14 illustrates a high-level flowchart of the methods and system components for efficiently optimizing a phenotype with a combination of a generative and a predictive model described herein according to an exemplary embodiment.
[0202] The applicant has discovered a black-box optimization method and system for efficiently optimizing a phenotype with a combination of a generative and a predictive model. The disclosed methods and systems are based on SMBO and can be used to deal with immense domain spaces. Some of the terms used in the above-described sections and throughout this application are explained in greater detail below.
[0203] Parametric functions $G(z; \theta_G)$ (generative model) and $\hat{f}(x; \theta_f)$ (predictive model, a.k.a. surrogate) are defined with trainable parameters $\theta_G$ and $\theta_f$ respectively, where $Z$ is a given latent space and $U$ is the optimization problem's solution space, each with its own dimension.
[0204] A “Selection Method” is used to suggest a list of candidates based on a set of criteria. These criteria may be non-trivial depending on the trade-off between exploration and exploitation desired. As discussed earlier, the Selection Method can be based on the parameters of batch size and selection rate. The Selection Method can also be based upon the rankings or scores of the candidates/genotype vectors.
[0205] An “oracle” represents the “black-box” function $f$. As discussed earlier, the oracle can correspond to the results of experimental testing in a lab or mathematical modeling of real-world behavior.
[0206] Two sets of data are also defined: a labelled set of pairs $(x_i, y_i)$, with $y_i$ a possibly noisy measurement of the black-box function for input $x_i$, used to adjust the parameters $\theta_f$ of the predictive model; and another set (not necessarily labeled) of samples used to adjust the parameters $\theta_G$ of the generative model.
[0207] With the above terms, the disclosed method that optimizes problem 1.1 is described by the following steps:
[0208] Step 1: Adjust the surrogate/predictive model $\hat{f}$ using the labelled dataset to adjust parameters $\theta_f$ by optimizing problem 1.2.

[0209] Step 2: Adjust the generative model $G$ using the generator's dataset to adjust parameters $\theta_G$ by optimizing problem 1.3.

[0210] Step 3: Generate $k$ samples with the generator $G$.

[0211] Step 4: Use the “Selection Method” to choose the top $m$ samples considering the values provided by the acquisition function.

[0212] Step 5: Evaluate the $m$ selected samples with the “oracle”.

[0213] Step 6: Readjust the parameters $\theta_f$ of the predictive model considering the new information obtained from the oracle, using the labelled samples.

[0214] Step 7: Repeat from Step 3 until the number of desired genotypes is obtained.

[0215] As stated earlier, the overall process is shown in Fig. 14. Referring to Fig. 14, the iterative process begins with an initial experimental database of data points used to train the predictive model. The initial experimental database of data points can then also be used to train the generative model. Alternatively, a different data set, such as a sample database, can be used to train the generative model. The training data selected for the generative model can be used to improve/refine results produced by the system by selecting training data which is more likely to contain an optimal genotype. Applicant notes that the order of training is unimportant, and that the predictive and generative models can be trained either together/jointly or taking turns.
[0216] Continuing to refer to Fig. 14, the generative model is then used to synthesize a set of new data points known as “candidates” that are evaluated and ranked using the prediction and uncertainty of the predictive model. Subsequently, the top-ranked candidates are tested experimentally, and these results are used to update the experimental database (Oracle samples). Optionally, the solution space dataset can be updated. The cycle shown in Fig. 14 is repeated until the criteria are satisfied and optimal results are obtained.
[0217] The steps described above can be implemented in various ways. For example, in one implementation, both the Surrogate model and the Generator model are neural networks. In this case, the adjustment of both models’ parameters is performed using gradient-based optimization techniques, like stochastic gradient descent (ascent), through which a loss function is minimized (maximized). The optimization process can be streamlined if the available data is used to adjust the parameters of both models before starting.
[0218] The disclosed SMBO methods can use as surrogate any machine/deep learning model that can manage numerical and categorical inputs and can be configured to output an estimation of uncertainty as well as a prediction value. This includes most ensemble-based algorithms, for example Random Forests, XGBoost, and others; Bayesian approaches, like Bayesian Networks, can also be used. Additionally, this includes methods based on Deep Ensembles, Bayesian Deep Learning, etc. For example, an implementation can use the sklearn implementation of the Random Forest Regressor (RF) as the surrogate model.

[0219] The disclosed methods can use many different acquisition methods as scoring functions. The Expected Improvement was selected for the current implementation. However, the disclosed method is not limited to it, and any other score that can be used within the disclosed SMBO framework could also be applied.
[0220] Fig. 15 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment. Fig. 15 shows in greater detail the components used to carry out multiple steps shown in Fig. 14.
[0221] As shown in Fig. 15, the generative model is trained with samples obtained from the solution space. This training process may also include feedback results from the acquisition function (see additional features discussed below). The surrogate function is trained using data processed by the oracle. Once the generator and the surrogate are trained, the generator is used to generate synthetic samples, and each synthetic sample is scored by the acquisition function. After that, a Selection Method chooses the best-scored candidates and suggests them for evaluation by the Oracle; evaluated samples may be added to the oracle dataset in order to be considered in the next SMBO iterations.
[0222] There are various additional features and additional steps, some of which are described above, that can be utilized as part of the disclosed method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a combination of a generative and a predictive model. These additional features and additional steps (referred to as extensions) will now be described in greater detail below.
Using embeddings so that the surrogate evaluates samples in the latent-space domain
[0223] The SMBO strategy supports both categorical and numerical data types as inputs. However, depending on the surrogate model used, these inputs might need to be encoded into a different representation. For instance, when working with genetic sequences, which in their original form correspond to sequences of categorical symbols or tokens (e.g., the four nucleotides A, T, G, and C), one would likely use a different representation (generally called an "embedding") in order to feed the sequences to a model (e.g., a one-hot representation instead of strings), to reduce dimensionality (as these sequences can be huge), and to standardize the dimensions of the model's input (considering that genetic sequences may have different lengths).
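For instance, a one-hot encoding of nucleotide sequences, zero-padded to a fixed length so that all inputs share one shape, could look like the following sketch (the function name and padding convention are illustrative assumptions):

```python
import numpy as np

NUCLEOTIDES = "ATGC"

def one_hot(sequence: str, length: int) -> np.ndarray:
    """Encode a nucleotide string as a (length x 4) one-hot matrix,
    zero-padded so that sequences of different lengths share one shape."""
    matrix = np.zeros((length, len(NUCLEOTIDES)), dtype=np.float32)
    for i, base in enumerate(sequence[:length].upper()):
        matrix[i, NUCLEOTIDES.index(base)] = 1.0
    return matrix

x = one_hot("ATGGCA", length=10)   # use x.ravel() to feed a flat-input model
```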
[0224] An embedding may be constructed by training deep learning models on vast amounts of data. The volume of data available in an SMBO run is sometimes too low to be useful for building an effective embedding, so these encoder models tend to be pre-trained on external, much larger datasets. This exploits the advantages of "Transfer Learning," a methodology that allows a model to apply knowledge learned from a large dataset to a more specific task.
[0225] Through embedding techniques, the input data can be represented as numerical vectors that lie in a multidimensional space. These numerical vectors encode sequence information such that relationships in the multidimensional space reflect physical properties (e.g., biochemical properties). This dense representation in a continuous space gives the model access to richer information about the sequences, which allows the model to better identify, extract, and exploit the relationships between them.
[0226] When using these embeddings, the inputs of the surrogate are continuous. Hence, the SMBO approach might take advantage of this and use a surrogate model whose optima can be obtained through Newton or quasi-Newton methods (such as Deep Ensembles, Deep Gaussian Processes, Bayesian Networks, or other Bayesian approaches).
[0227] A useful additional feature of the disclosed method and system is to encode the original domain into a latent space. This can help to reduce the dimensionality of the generator and surrogate models. The approach to encoding samples can vary and depends upon the nature of the solution space. In the case of using a learnable embedding, samples from the solution space or another dataset could be used for training. An overview of this approach is shown in Fig. 16.
[0228] Fig. 16 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with embeddings according to an exemplary embodiment. This extension of the proposed framework uses embeddings. With this extension, instead of using the original domain space, the generator and surrogate models are trained using encoded versions of the samples. Encoder and Decoder modules are used to map from the solution space to a latent space and vice versa.

[0229] A reduced version of this can be applied when using an encoder/decoder architecture as a generator. This is the case for some Autoencoders, Variational Autoencoders, some GAN architectures, and others. In these cases, the generative model can be divided into three separate modules: the first module is the encoder, which translates samples from the domain to the latent space; the second module is the decoder, which maps samples from the latent space to the problem domain; and the third module represents the sampling process from the latent space Z.
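As one possible instantiation of the three modules described in paragraph [0229], a minimal variational-autoencoder-style split is sketched below; the use of PyTorch and the layer sizes are illustrative assumptions, not a required architecture:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Module 1: maps a sample from the problem domain to latent space."""
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.log_var = nn.Linear(128, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

class Decoder(nn.Module):
    """Module 2: maps a latent vector back to the problem domain."""
    def __init__(self, latent_dim, output_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                  nn.Linear(128, output_dim))

    def forward(self, z):
        return self.body(z)

def sample_latent(mu, log_var):
    """Module 3: reparameterized sampling from the latent space Z."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
```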
[0230] When a generative model satisfies the above property, adjustments can be made to the architecture with embeddings to reduce the models' complexity. These modifications are shown in Fig. 17.
[0231] Fig. 17 illustrates the components and process flow of the system for efficiently optimizing a phenotype with a combination of a generative and a predictive model with an encoder/decoder architecture according to an exemplary embodiment. This figure shows the modifications that can be made to the proposed architecture when the generator is built with an encoder/decoder architecture. Within this formulation, the Encoder module is applied to translate oracle samples into the latent space, and the surrogate model is trained to use the latent space as its input domain. When looking for optimal candidates, the surrogate takes its inputs from a block that samples the latent space Z. The way this space is sampled will depend on the specific architecture of the generator. The Decoder is later applied to all selected samples and maps them to the Oracle's domain. Both the Encoder and Decoder models are trained by the Generative Model Fit process described earlier.
Guided Generator Training (an extension of the generative process)
[0232] The applicant also proposes an extension to the above algorithm, specifically regarding optimization 1.3. This extension modifies problem 1.3 by including a measure function M and a new objective function Lb, as follows.

[0233] 1.3b:

Lb(θG) = L(θG) + λ·M(θG), maximized over θG

where L(θG) is the original objective of problem 1.3 and λ is a weighting factor.

[0234] Function M(θG) represents a measure on the Generator's distribution that quantifies a certain property, and Lb(θG) represents the objective function of the new optimization problem 1.3b. This modification changes the original objective of 1.3 such that it not only optimizes to mimic some real data probability distribution, but also adjusts the generator's parameters θG so that the generator's distribution has additional features.
[0235] This is particularly useful for narrowing down even further the exploration space of the main optimization problem 1.1. For instance, in "Example 1," the solution space X may still be quite large, due to the number of possible protein sequence combinations. So a very simple choice for the function M that could help with reducing the exploration space, and accelerating the optimization process even more, could be the expected acquisition value of generated samples:

M(θG) = E over x ∼ pθG of [ a(x) ]

where a(x) denotes the acquisition function evaluated on a generated sample x.

[0238] The first term of the above objective function, L(θG), will drive the generator's distribution towards the real data probability distribution, while the second term will increase the distribution's support on regions of the solution space with a higher acquisition value. With this particular selection of functions, the optimization problem 1.3b might help to achieve a generator with a distribution that accelerates the exploration/exploitation process of the SMBO approach used to solve the main optimization problem 1.1.
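For instance, a guided generator objective along the lines of problem 1.3b could be sketched as follows, assuming a generator G, a discriminator D with sigmoid (probability) outputs, a differentiable acquisition function acq (e.g., computed from a neural-network surrogate), and an illustrative weighting factor lambda_acq:

```python
import torch

def guided_generator_loss(G, D, acq, z, lambda_acq=0.1):
    """Sketch of objective 1.3b: a GAN term plus an acquisition-seeking term.

    The first term drives the generator's distribution towards the real data
    distribution; the second enlarges its support on regions of the solution
    space with high acquisition value.
    """
    fake = G(z)
    gan_term = -torch.log(D(fake) + 1e-8).mean()   # mimic p_data (non-saturating loss)
    acq_term = -acq(fake).mean()                   # favour high-acquisition regions
    return gan_term + lambda_acq * acq_term
```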
Additional generation input (an extension of the generative process)
[0239] Besides VAEs, HMMs, and other generative models, the proposed method can be implemented considering most GAN approaches. This includes models that do not rely on a randomized input only, but also ones that use other kinds of input. For example, ACGANs use a mixed-type input that contains a randomized vector as well as label information that is used to condition sample generation. In that case, and in many other approaches where additional information is required, the proposed method can be easily extended. For these situations, the samples used to train the generative model can be described by the expression (xi, ci), where ci is a vector that contains additional information associated with the sample xi. This information may encode not only label descriptors, but also context images, text, etc. Those approaches would require a generative model capable of processing that type of information.
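One way such additional inputs could be handled is sketched below, concatenating the random vector z with the condition vector c before the first layer (a PyTorch sketch in the spirit of ACGAN-style conditioning; the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generator that consumes a random vector z plus a condition vector c."""
    def __init__(self, z_dim, c_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim))

    def forward(self, z, c):
        # The condition c may encode labels or other context for the sample.
        return self.net(torch.cat([z, c], dim=-1))

g = ConditionalGenerator(z_dim=16, c_dim=4, out_dim=40)
sample = g(torch.randn(8, 16), torch.zeros(8, 4))   # batch of 8 conditioned samples
```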
[0240] Extension to Multi-objective optimization
[0241] The proposed approach could be extended to multi-objective optimization. Here, instead of optimizing within a scalar target range, these problems are characterized by the existence of an objective function with multidimensional output. This objective function can be described as a set of target functions f1(x), ..., fn(x) which have to be optimized simultaneously.
[0242] There are multiple ways to extend the proposed method to multi-objective optimization. One approach is to apply scalarization of the objective function. This can be achieved by combining all dimensions into one using some other function (e.g., a linear combination) and then proceeding with the optimization of the scalar result with a single surrogate model. Scalarization can also be achieved by building a model per target component and calculating a score that combines the individual predictions. Other approaches are oriented to searching for the Pareto front. Within those methods, a model is built for each component and the selection criterion is changed to penalize non-dominant candidates.
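Both scalarization variants can be sketched as follows; the weight vector and the per-objective surrogate models are illustrative assumptions:

```python
import numpy as np

def linear_scalarization(targets: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Collapse an (n_samples x n_objectives) target matrix into one scalar
    per sample via a weighted sum, so a single surrogate can be trained."""
    return targets @ weights

def combined_score(surrogates, X, weights):
    """Alternative: one fitted surrogate per objective, combined at scoring time."""
    preds = np.stack([s.predict(X) for s in surrogates], axis=1)
    return preds @ weights

weights = np.array([0.7, 0.3])   # illustrative relative importance of two targets
```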
Extend the database with which the generator is trained (an extension of the generative process)
[0243] In some applications, the Oracle will probably penalize candidates that lie outside the solution space. For example, if the objective is to maximize the production of a certain compound within a bacterium by altering some of its genes (where each alteration corresponds to a candidate), the cell may fail to perform the involved metabolic pathway (and won't produce the target compound) if some gene was changed in a way that kills the organism. The gene could encode an enzyme that works very well in the isolated pathway but, in the context of the living organism, interacts with other elements in a negative way, driving the cell to death. In this type of application, the Oracle will validate solution-space compliance as well as the objective function value. This isn't the common scenario in optimization problems, where usually the limits of the solution space are defined by a set of restriction rules and the objective function works in an independent fashion.
[0244] In situations where the restrictions of the problem are enforced by the Oracle's evaluations, the proposed method can be extended to take advantage of this and use the Oracle's evaluations not just to train the surrogate model, but also to re-train the generator at each optimization step. This helps to reduce the number of iterations, as the generator's knowledge base will grow throughout the process.
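A minimal sketch of this extension, assuming a hypothetical `generator.fit` interface and an `oracle_samples` list of (genotype, value) pairs produced during the current optimization step:

```python
def refit_generator(generator, generator_dataset, oracle_samples):
    """Grow the generator's training set with oracle-validated genotypes and
    re-train it, so its knowledge base expands at each SMBO iteration."""
    generator_dataset.extend(x for x, y in oracle_samples)  # keep genotypes only
    generator.fit(generator_dataset)
    return generator
```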
Extend the method to optimize acquisition by a Newton or quasi-Newton method
[0245] The main approach considers a Selection Method that optimizes the acquisition function by looking for the best candidates in a set of samples synthesized by the generator. This task can be done in several different ways. The proposed approach can be extended to maximize acquisition by using a Newton or quasi-Newton method that searches for the best combination of latent-space features. With this extension, the steps of generating samples and using the selection method to select the top samples can be changed in order to apply a gradient-based algorithm from different starting points, thereby obtaining multiple optimal samples.
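A sketch of this extension using SciPy's L-BFGS-B quasi-Newton optimizer with multiple random restarts over the latent space; `acq_on_latent` (the acquisition evaluated on a latent vector) and the box bounds are assumptions for the example:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_acquisition(acq_on_latent, latent_dim, n_restarts=10, bounds=(-3.0, 3.0)):
    """Maximize the acquisition over the latent space with L-BFGS-B,
    restarting from several random points to collect multiple optima."""
    rng = np.random.default_rng(0)
    results = []
    for _ in range(n_restarts):
        z0 = rng.uniform(*bounds, size=latent_dim)
        res = minimize(lambda z: -acq_on_latent(z),      # minimize the negative
                       z0, method="L-BFGS-B",
                       bounds=[bounds] * latent_dim)
        results.append((res.fun, res.x))
    # Best latent vectors first; decode each with the Decoder afterwards.
    return [z for _, z in sorted(results, key=lambda t: t[0])]
```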
[0246] Fig. 18 illustrates the components of a specialized computing environment for efficiently optimizing a phenotype with a combination of a generative and a predictive model according to an exemplary embodiment. Specialized computing environment 1800 can be made up of one or more computing devices that include a memory 1801, which is a non-transitory computer-readable medium and can be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
[0247] As shown in Fig. 18, memory 1801 stores experimental data points 1801A, constraints 1801B, sample genotypes 1801C, phenotype prediction model 1801D, prediction model training software 1801E, genotype generation model 1801F, generation model training software 1801G, generator and discriminator model functions 1801H, genotype scoring and ranking software 1801I, combinatorial output software 1801J, encoding/decoding software 1801K, and genetic construct generation software 1801L.
[0248] Each of the software components in memory 1801 stores specialized instructions and data structures configured to perform the methods for efficiently optimizing a phenotype with a combination of a generative and a predictive model described herein.
[0249] All of the software stored within memory 1801 can be stored as computer-readable instructions that, when executed by one or more processors 1802, cause the processors to perform the functionality described with respect to Figs. 1-17.
[0250] Processor(s) 1802 execute computer-executable instructions and can be real or virtual processors. In a multi-processing system, multiple processors or multicore processors can be used to execute computer-executable instructions to increase processing power and/or to execute certain software in parallel. As discussed earlier in the application, the processors can be processors specialized for the task of training and applying a predictive model, such as graphics processing units (GPUs).
[0251] Computing environment 1800 additionally includes a communication interface 1803, such as a network interface, which is used to communicate with devices, applications, or processes on a computer network or computing system, collect data from devices on a network, and implement encryption/decryption actions on network communications within the computer network or on data stored in databases of the computer network. The communication interface conveys information such as computer-executable instructions, audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
[0252] Computing environment 1800 further includes input and output interfaces 1804 that allow users (such as system administrators) to provide input to the system to display information, to edit data stored in memory 1801, or to perform other administrative functions. For example, an administrator can configure, add, or edit, for example, constraints, encoding software, or experimental data points stored in memory 1801.
[0253] An interconnection mechanism (shown as a solid line in Fig. 18), such as a bus, controller, or network interconnects the components of the computing environment 1800.
[0254] Input and output interfaces 1804 can be coupled to input and output devices. For example, Universal Serial Bus (USB) ports can allow for the connection of a keyboard, mouse, pen, trackball, touch screen, or game controller, a voice input device, a scanning device, a digital camera, remote control, or another device that provides input to the computing environment.
[0255] The computing environment 1800 can additionally utilize a removable or non-removable storage, such as magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, USB drives, or any other medium which can be used to store information and which can be accessed within the computing environment 1800.
[0256] Computing environment 1800 can be a set-top box, personal computer, or one or more servers, for example a farm of networked servers, a clustered server environment, or a cloud network of computing devices.
[0257] It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. For example, the steps or order of operation of one of the above-described methods could be rearranged or occur in a different series, as understood by those skilled in the art. It is understood, therefore, that this disclosure is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the following claims.

Claims:
1. A method executed by one or more computing devices for efficiently optimizing a phenotype with a combination of a generative and a predictive model, the method comprising: training, by at least one of the one or more computing devices, a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, corresponding phenotype information, and one or more constraints, the phenotype prediction model comprising a surrogate model; training, by at least one of the one or more computing devices, a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors; generating, by at least one of the one or more computing devices, a plurality of new genotype vectors with the genotype generation model; applying, by at least one of the one or more computing devices, the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores, the phenotype prediction model being configured to predict one or more phenotypic attributes of the new genotype vectors; determining, by at least one of the one or more computing devices, a plurality of result genotypes based at least in part on a ranking of the plurality of new genotype vectors according to the plurality of scores; and generating, by at least one of the one or more computing devices, a result based at least in part on the plurality of result genotypes, the result indicating one or more genetic constructs for testing.
2. The method of claim 1, further comprising: receiving, by at least one of the one or more computing devices, one or more constraints, the one or more constraints comprising a plurality of desired phenotypic attributes; and encoding, by at least one of the one or more computing devices, genotype information corresponding to the one or more constraints in a plurality of experimental data points as the plurality of experiential genotype vectors, the plurality of experimental data points comprising the genotype information and phenotype information corresponding to the genotype information.
3. The method of claim 2, wherein encoding genotype information in a plurality of experimental data points corresponding to the one or more constraints as a plurality of experiential genotype vectors comprises: identifying the plurality of experimental data points in a database of experimental data points based at least in part on at least one desired phenotypic attribute in the plurality of desired phenotypic attributes; and encoding genotypes associated with the identified plurality of experimental data points as the plurality of experiential genotype vectors.
4. The method of claim 1, further comprising: encoding, by at least one of the one or more computing devices, genotype information in a plurality of sample genotypes of a sample database as the plurality of sample genotype vectors.
5. The method of claim 1, wherein the phenotype prediction model is a surrogate model and wherein training a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the phenotype information, and the one or more constraints comprises: determining, by at least one of the one or more computing devices, one or more meta-parameters for the phenotype prediction model, the one or more meta-parameters being configured to maximize accuracy of the phenotype prediction model; determining an objective function based at least in part on the plurality of desired phenotypic attributes; and iteratively adjusting the objective function by repeatedly selecting one or more experiential genotype vectors in the plurality of experiential genotype vectors that maximize an acquisition function of the phenotype prediction model and updating the objective function based at least in part on one or more experimentally-determined phenotypic attributes corresponding to the one or more experiential genotype vectors.
6. The method of claim 5, wherein applying the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores comprises: applying the objective function to the plurality of new genotype vectors to generate a plurality of prediction scores corresponding to the plurality of new genotype vectors.
7. The method of claim 5, wherein applying the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores comprises: applying an acquisition function of the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of acquisition scores corresponding to the plurality of new genotype vectors.
8. The method of claim 1, wherein training a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors comprises: storing a generator model function having a plurality of trainable generator parameters that is configured to mimic the distribution of the plurality of sample genotype vectors; storing a discriminator model function having a plurality of trainable discriminator parameters that is configured to estimate a probability that a data sample comes from the plurality of sample genotype vectors instead of from the generator model function; storing a minimax objective function that is configured to be minimized by the generator model function and maximized by the discriminator model function; and concurrently training both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point.
9. The method of claim 8, wherein concurrently training both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point comprises: repeatedly sampling one or more sample genotype vectors from the plurality of sample genotype vectors; repeatedly generating one or more generated genotype vectors with the generator model function; and iteratively applying the discriminator model function to the one or more sample genotype vectors and the one or more generated genotype vectors until the discriminator model function cannot distinguish between the one or more sample genotype vectors and the one or more generated genotype vectors, wherein application of the discriminator model function alternates between the one or more sample genotype vectors and the one or more generated genotype vectors.
10. The method of claim 8, wherein generating a plurality of new genotype vectors with the genotype generation model comprises: storing one or more parameters, the one or more parameters comprising a batch size and a selection rate; and generating a set of new genotype vectors with the generator model function, the size of the set of new genotype vectors being determined based at least in part on the batch size and the selection rate.
11. The method of claim 1, further comprising: determining phenotype information corresponding to the one or more result genotype vectors in the plurality of result genotype vectors; and re-training, by at least one of the one or more computing devices, the phenotype prediction model based at least in part on the one or more result genotype vectors, the corresponding phenotype information, and the one or more constraints.
12. The method of claim 7, wherein generating a result based at least in part on the plurality of result genotypes comprises: filtering the plurality of result genotype vectors to remove one or more first result genotype vectors corresponding to one or more categories of genotypes having genotype vectors with acquisition scores below acquisition scores of genotype vectors in other categories of genotypes; selecting a plurality of filtered genotype vectors from the filtered plurality of result genotype vectors, the selected plurality of filtered genotype vectors corresponding to one or more additional categories of genotypes having genotype vectors with acquisition scores above acquisition scores of genotype vectors in other categories of genotypes; determining a plurality of aggregate acquisition scores corresponding to a plurality of combinations of genotype vectors in the selected plurality of filtered genotype vectors; ranking the plurality of combinations of genotype vectors according to the plurality of aggregate acquisition scores; and selecting one or more top-ranked combinations of genotype vectors as the result, wherein each combination of genotype vectors corresponds to two or more genetic constructs for testing.
13. An apparatus executed by one or more computing devices for efficiently optimizing a phenotype with a combination of a generative and a predictive model, the apparatus comprising: one or more processors; and one or more memories operatively coupled to at least one of the one or more processors and having instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to: train a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, corresponding phenotype information, and one or more constraints, the phenotype prediction model comprising a surrogate model; train a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors; generate a plurality of new genotype vectors with the genotype generation model; apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores, the phenotype prediction model being configured to predict one or more phenotypic attributes of the new genotype vectors; determine a plurality of result genotypes based at least in part on a ranking of the plurality of new genotype vectors according to the plurality of scores; and generate a result based at least in part on the plurality of result genotypes, the result indicating one or more genetic constructs for testing.
14. The apparatus of claim 13, wherein at least one of the one or more memories has further instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to: receive one or more constraints, the one or more constraints comprising a plurality of desired phenotypic attributes; and encode genotype information corresponding to the one or more constraints in a plurality of experimental data points as the plurality of experiential genotype vectors, the plurality of experimental data points comprising the genotype information and phenotype information corresponding to the genotype information.
15. The apparatus of claim 14, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to encode genotype information in a plurality of experimental data points corresponding to the one or more constraints as a plurality of experiential genotype vectors further cause at least one of the one or more processors to: identify the plurality of experimental data points in a database of experimental data points based at least in part on at least one desired phenotypic attribute in the plurality of desired phenotypic attributes; and encode genotypes associated with the identified plurality of experimental data points as the plurality of experiential genotype vectors.
16. The apparatus of claim 13, wherein at least one of the one or more memories has further instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to: encode genotype information in a plurality of sample genotypes of a sample database as the plurality of sample genotype vectors.
17. The apparatus of claim 13, wherein the phenotype prediction model is a surrogate model and wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to train a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the phenotype information, and the one or more constraints further cause at least one of the one or more processors to: determine one or more meta-parameters for the phenotype prediction model, the one or more meta-parameters being configured to maximize accuracy of the phenotype prediction model; determine an objective function based at least in part on the plurality of desired phenotypic attributes; and iteratively adjust the objective function by repeatedly selecting one or more experiential genotype vectors in the plurality of experiential genotype vectors that maximize an acquisition function of the phenotype prediction model and updating the objective function based at least in part on one or more experimentally-determined phenotypic attributes corresponding to the one or more experiential genotype vectors.
18. The apparatus of claim 17, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores further cause at least one of the one or more processors to: apply the objective function to the plurality of new genotype vectors to generate a plurality of prediction scores corresponding to the plurality of new genotype vectors.
19. The apparatus of claim 17, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores further cause at least one of the one or more processors to: apply an acquisition function of the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of acquisition scores corresponding to the plurality of new genotype vectors.
20. The apparatus of claim 13, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to train a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors further cause at least one of the one or more processors to: store a generator model function having a plurality of trainable generator parameters that is configured to mimic the distribution of the plurality of sample genotype vectors; store a discriminator model function having a plurality of trainable discriminator parameters that is configured to estimate a probability that a data sample comes from the plurality of sample genotype vectors instead of from the generator model function; store a minimax objective function that is configured to be minimized by the generator model function and maximized by the discriminator model function; and concurrently train both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point.
21. The apparatus of claim 20, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to concurrently train both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point further cause at least one of the one or more processors to: repeatedly sample one or more sample genotype vectors from the plurality of sample genotype vectors; repeatedly generate one or more generated genotype vectors with the generator model function; and iteratively apply the discriminator model function to the one or more sample genotype vectors and the one or more generated genotype vectors until the discriminator model function cannot distinguish between the one or more sample genotype vectors and the one or more generated genotype vectors, wherein application of the discriminator model function alternates between the one or more sample genotype vectors and the one or more generated genotype vectors.
22. The apparatus of claim 20, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate a plurality of new genotype vectors with the genotype generation model further cause at least one of the one or more processors to: store one or more parameters, the one or more parameters comprising a batch size and a selection rate; and generate a set of new genotype vectors with the generator model function, the size of the set of new genotype vectors being determined based at least in part on the batch size and the selection rate.
23. The apparatus of claim 13, wherein at least one of the one or more memories has further instructions stored thereon that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to: determine phenotype information corresponding to the one or more result genotype vectors in the plurality of result genotype vectors; and re-train the phenotype prediction model based at least in part on the one or more result genotype vectors, the corresponding phenotype information, and the one or more constraints.
24. The apparatus of claim 19, wherein the instructions that, when executed by at least one of the one or more processors, cause at least one of the one or more processors to generate a result based at least in part on the plurality of result genotypes further cause at least one of the one or more processors to: filter the plurality of result genotype vectors to remove one or more first result genotype vectors corresponding to one or more categories of genotypes having genotype vectors with acquisition scores below acquisition scores of genotype vectors in other categories of genotypes; select a plurality of filtered genotype vectors from the filtered plurality of result genotype vectors, the selected plurality of filtered genotype vectors corresponding to one or more additional categories of genotypes having genotype vectors with acquisition scores above acquisition scores of genotype vectors in other categories of genotypes; determine a plurality of aggregate acquisition scores corresponding to a plurality of combinations of genotype vectors in the selected plurality of filtered genotype vectors; rank the plurality of combinations of genotype vectors according to the plurality of aggregate acquisition scores; and select one or more top-ranked combinations of genotype vectors as the result, wherein each combination of genotype vectors corresponds to two or more genetic constructs for testing.
25. At least one non-transitory computer-readable medium storing computer-readable instructions for efficiently optimizing a phenotype with a combination of a generative and a predictive model that, when executed by one or more computing devices, cause at least one of the one or more computing devices to: train a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, corresponding phenotype information, and one or more constraints, the phenotype prediction model comprising a surrogate model; train a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors; generate a plurality of new genotype vectors with the genotype generation model; apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores, the phenotype prediction model being configured to predict one or more phenotypic attributes of the new genotype vectors; determine a plurality of result genotypes based at least in part on a ranking of the plurality of new genotype vectors according to the plurality of scores; and generate a result based at least in part on the plurality of result genotypes, the result indicating one or more genetic constructs for testing.
26. The at least one non-transitory computer-readable medium of claim 25, further storing computer-readable instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to: receive one or more constraints, the one or more constraints comprising a plurality of desired phenotypic attributes; and encode genotype information corresponding to the one or more constraints in a plurality of experimental data points as the plurality of experiential genotype vectors, the plurality of experimental data points comprising the genotype information and phenotype information corresponding to the genotype information.
27. The at least one non-transitory computer-readable medium of claim 26, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to encode genotype information in a plurality of experimental data points corresponding to the one or more constraints as a plurality of experiential genotype vectors further cause at least one of the one or more computing devices to: identify the plurality of experimental data points in a database of experimental data points based at least in part on at least one desired phenotypic attribute in the plurality of desired phenotypic attributes; and encode genotypes associated with the identified plurality of experimental data points as the plurality of experiential genotype vectors.
28. The at least one non-transitory computer-readable medium of claim 25, further storing computer-readable instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to: encode genotype information in a plurality of sample genotypes of a sample database as the plurality of sample genotype vectors.
29. The at least one non-transitory computer-readable medium of claim 25, wherein the phenotype prediction model is a surrogate model and wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to train a phenotype prediction model based at least in part on the plurality of experiential genotype vectors, the phenotype information, and the one or more constraints further cause at least one of the one or more computing devices to: determine one or more meta-parameters for the phenotype prediction model, the one or more meta-parameters being configured to maximize accuracy of the phenotype prediction model; determine an objective function based at least in part on the plurality of desired phenotypic attributes; and iteratively adjust the objective function by repeatedly selecting one or more experiential genotype vectors in the plurality of experiential genotype vectors that maximize an acquisition function of the phenotype prediction model and updating the objective function based at least in part on one or more experimentally-determined phenotypic attributes corresponding to the one or more experiential genotype vectors.
30. The at least one non-transitory computer-readable medium of claim 29, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores further cause at least one of the one or more computing devices to: apply the objective function to the plurality of new genotype vectors to generate a plurality of prediction scores corresponding to the plurality of new genotype vectors.
31. The at least one non-transitory computer-readable medium of claim 29, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to apply the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of scores further cause at least one of the one or more computing devices to: apply an acquisition function of the phenotype prediction model to the plurality of new genotype vectors to generate a plurality of acquisition scores corresponding to the plurality of new genotype vectors.
32. The at least one non-transitory computer-readable medium of claim 25, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to train a genotype generation model based at least in part on a plurality of sample genotype vectors, the genotype generation model being configured to generate new genotype vectors further cause at least one of the one or more computing devices to: store a generator model function having a plurality of trainable generator parameters that is configured to mimic the distribution of the plurality of sample genotype vectors; store a discriminator model function having a plurality of trainable discriminator parameters that is configured to estimate a probability that a data sample comes from the plurality of sample genotype vectors instead of from the generator model function; store a minimax objective function that is configured to be minimized by the generator model function and maximized by the discriminator model function; and concurrently train both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point.
33. The at least one non-transitory computer-readable medium of claim 32, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to concurrently train both the generator model function and the discriminator model function with the plurality of sample genotype vectors until the minimax objective function converges to a saddle point further cause at least one of the one or more computing devices to: repeatedly sample one or more sample genotype vectors from the plurality of sample genotype vectors; repeatedly generate one or more generated genotype vectors with the generator model function; and iteratively apply the discriminator model function to the one or more sample genotype vectors and the one or more generated genotype vectors until the discriminator model function cannot distinguish between the one or more sample genotype vectors and the one or more generated genotype vectors, wherein application of the discriminator model function alternates between the one or more sample genotype vectors and the one or more generated genotype vectors.
34. The at least one non-transitory computer-readable medium of claim 32, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate a plurality of new genotype vectors with the genotype generation model further cause at least one of the one or more computing devices to: store one or more parameters, the one or more parameters comprising a batch size and a selection rate; and generate a set of new genotype vectors with the generator model function, the size of the set of new genotype vectors being determined based at least in part on the batch size and the selection rate.
35. The at least one non-transitory computer-readable medium of claim 25, further storing computer-readable instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to: determine phenotype information corresponding to the one or more result genotype vectors in the plurality of result genotype vectors; and re-train the phenotype prediction model based at least in part on the one or more result genotype vectors, the corresponding phenotype information, and the one or more constraints.
36. The at least one non-transitory computer-readable medium of claim 31, wherein the instructions that, when executed by at least one of the one or more computing devices, cause at least one of the one or more computing devices to generate a result based at least in part on the plurality of result genotypes further cause at least one of the one or more computing devices to: filter the plurality of result genotype vectors to remove one or more first result genotype vectors corresponding to one or more categories of genotypes having genotype vectors with acquisition scores below acquisition scores of genotype vectors in other categories of genotypes; select a plurality of filtered genotype vectors from the filtered plurality of result genotype vectors, the selected plurality of filtered genotype vectors corresponding to one or more additional categories of genotypes having genotype vectors with acquisition scores above acquisition scores of genotype vectors in other categories of genotypes; determine a plurality of aggregate acquisition scores corresponding to a plurality of combinations of genotype vectors in the selected plurality of filtered genotype vectors; rank the plurality of combinations of genotype vectors according to the plurality of aggregate acquisition scores; and select one or more top-ranked combinations of genotype vectors as the result, wherein each combination of genotype vectors corresponds to two or more genetic constructs for testing.
PCT/US2021/029177 2020-04-24 2021-04-26 Method for efficiently optimizing a phenotype with a combination of a generative and a predictive model WO2021217138A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063015140P 2020-04-24 2020-04-24
US63/015,140 2020-04-24

Publications (1)

Publication Number Publication Date
WO2021217138A1 true WO2021217138A1 (en) 2021-10-28

Family

ID=78270083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/029177 WO2021217138A1 (en) 2020-04-24 2021-04-26 Method for efficiently optimizing a phenotype with a combination of a generative and a predictive model

Country Status (1)

Country Link
WO (1) WO2021217138A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160132635A1 (en) * 2013-06-14 2016-05-12 Keygene N.V. Directed strategies for improving phenotypic traits
WO2015173435A1 (en) * 2014-05-16 2015-11-19 Katholieke Universiteit Leuven, KU LEUVEN R&D Method for predicting a phenotype from a genotype
US20200202241A1 (en) * 2018-12-21 2020-06-25 TeselaGen Biotechnology Inc. Method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a specialized prediction model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YELMEN BURAK, DECELLE AURÉLIEN, ONGARO LINDA, MARNETTO DAVIDE, TALLEC CORENTIN, MONTINARO FRANCESCO, FURTLEHNER CYRIL, PAGANI LUCA: "Creating Artificial Human Genomes Using Generative Models", PLOS GENETICS, 7 October 2019 (2019-10-07), pages 1 - 26, XP055867428, Retrieved from the Internet <URL:https://www.biorxiv.org/content/10.1101/769091v2.full> [retrieved on 20210624] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283882A (en) * 2021-12-31 2022-04-05 华智生物技术有限公司 Nondestructive poultry egg quality character prediction method and system
CN114283882B (en) * 2021-12-31 2022-08-19 华智生物技术有限公司 Non-destructive poultry egg quality character prediction method and system

Similar Documents

Publication Publication Date Title
US11620544B2 (en) Method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a specialized prediction model
CN107862173B (en) Virtual screening method and device for lead compound
US11574703B2 (en) Method, apparatus, and computer-readable medium for efficiently optimizing a phenotype with a combination of a generative and a predictive model
Castro et al. Transformer-based protein generation with regularized latent space optimization
US20240029834A1 (en) Drug Optimization by Active Learning
Arowolo et al. A survey of dimension reduction and classification methods for RNA-Seq data on malaria vector
WO2019077494A1 (en) System, apparatus, and method for sequence-based enzyme ec number prediction by deep learning
US20200058376A1 (en) Bioreachable prediction tool for predicting properties of bioreachable molecules and related materials
Bi et al. A genetic algorithm-assisted deep learning approach for crop yield prediction
Pu et al. A novel artificial bee colony clustering algorithm with comprehensive improvement
WO2021217138A1 (en) Method for efficiently optimizing a phenotype with a combination of a generative and a predictive model
Sanchez Reconstructing our past: deep learning for population genetics
CN110914912A (en) Prioritizing genetic modifications to increase throughput for phenotypic optimization
KR20230018358A (en) Conformal Inference for Optimization
Poulakis Unsupervised AutoML: a study on automated machine learning in the context of clustering
Cingiz k-Strong Inference Algorithm: A Hybrid Information Theory Based Gene Network Inference Algorithm
Giannis Unsupervised AutoMl: A Study on Automated Machine Learning in the Context of Clustering
Wang Bayesian evolutionary optimization for heterogeneously expensive multi-objective problems
Jagtap Multilayer Graph Embeddings for Omics Data Integration in Bioinformatics
Barot Deep Learning for Protein Function Prediction and Novel Class Discovery
Moreno López Development of a predictive system for the generation of biosensor libraries with application to the dynamic regulation of bioproduction pathways
Hoffbauer et al. TransMEP: Transfer learning on large protein language models to predict mutation effects of proteins from a small known dataset
Abreu Development of DNA sequence classifiers based on deep learning
Al-Safarini USING A LOGICAL MODEL TO PREDICT THE FUNCTION OF GENE: A SYSTEMATIC REVIEW
Yang et al. Does Negative Sampling Matter? A Review with Insights into its Theory and Applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21793318

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21793318

Country of ref document: EP

Kind code of ref document: A1