WO2013188886A2 - Method and system for parallel batch processing of data sets using Gaussian process with batch upper confidence bound
- Publication number
- WO2013188886A2 (PCT/US2013/046196)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- batch
- input data
- function
- observations
- data
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
- G06N5/022—Knowledge engineering; Knowledge acquisition
- G06N5/025—Extracting rules from data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Definitions
- FIG. 2C is a graph 260 that shows the beginning of the next batch after decisions from the previous batch in FIGs. 2A-2B. Like elements such as the posterior mean 204 and the function f(x) 206 in FIG. 2A are labeled identically in FIG. 2C. The feedback from all decisions is obtained and new confidence intervals represented by an area 262 and corresponding batches represented by areas 264 and 266 are computed.
- one major computational bottleneck of applying the GP-BUCB algorithm is calculating the posterior mean μ_t(x) and variance σ²_t(x) for the candidate decisions.
- the mean is updated only whenever feedback is obtained; once the Cholesky factorization of K(X, X) + σ_n² I has been computed, which is necessary whenever new feedback arrives, predicting μ_t(x) requires O(t) additions and multiplications.
- σ²_t(x) must be recomputed for every x in D after every single round and requires solving a back-substitution, which requires O(t²) computations. Therefore, the variance computation dominates the computational cost of the GP-BUCB algorithm.
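To make the preceding cost discussion concrete, the following is a minimal, illustrative NumPy sketch of the posterior mean and variance computation (equations (1) and (2) of the detailed description below), using a Cholesky factorization for the mean and back-substitution for the variances. The squared-exponential kernel, the noise level, and all names are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Example squared-exponential kernel k(x, x'); any valid kernel could be substituted."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, sigma_n=0.1, kernel=rbf_kernel):
    """Posterior mean mu_{t-1}(x) and variance sigma^2_{t-1}(x) at the points Xstar."""
    K = kernel(X, X)                                   # kernel matrix of past observation inputs
    k_star = kernel(Xstar, X)                          # rows are k(x, X) for each candidate x
    L = np.linalg.cholesky(K + sigma_n**2 * np.eye(len(X)))     # Cholesky of K + sigma_n^2 I
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))         # [K + sigma_n^2 I]^{-1} y
    V = np.linalg.solve(L, k_star.T)                            # back-substitution, O(t^2) per x
    mean = k_star @ alpha
    var = kernel(Xstar, Xstar).diagonal() - np.sum(V**2, axis=0)
    return mean, np.maximum(var, 0.0)
```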
- σ²_t(x) is monotonically decreasing in t. This fact can be exploited to dramatically improve the running time of GP-BUCB, at least for finite (or discretized) decision sets D. Instead of recomputing σ_{t−1}(x) for all decisions x in every round t, an upper bound σ̂_{t−1}(x) may be maintained, initialized to σ̂_0(x) = σ_0(x). In every round, the GP-BUCB rule is applied with the upper bound to identify x_t, such that the decision rule becomes x_t = argmax_{x ∈ D} [ μ_{fb[t]}(x) + β_t^{1/2} σ̂_{t−1}(x) ], and only the variance of the identified candidate needs to be recomputed exactly.
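A minimal sketch of this lazy evaluation, under assumptions: the stored variance bounds are only tightened for the current maximizer of the upper-bound score, and a candidate is accepted once it remains the maximizer after its bound has been tightened to the exact value. The callback exact_sigma2 and all other names are hypothetical.

```python
import numpy as np

def lazy_gp_bucb_choice(mu_fb, sigma2_hat, beta_t, exact_sigma2):
    """Select one decision using upper bounds sigma2_hat on the posterior variances.

    mu_fb           : posterior mean at the last feedback round, one entry per candidate
    sigma2_hat      : current variance upper bounds (valid because variance is non-increasing in t);
                      tightened in place so later rounds can reuse them
    exact_sigma2(i) : recomputes the exact current posterior variance of candidate i
    """
    exact = np.zeros(len(mu_fb), dtype=bool)            # which entries are exact this round
    while True:
        scores = mu_fb + np.sqrt(beta_t) * np.sqrt(sigma2_hat)
        i = int(np.argmax(scores))
        if exact[i]:
            return i                                     # still the argmax with its true variance
        sigma2_hat[i] = exact_sigma2(i)                  # tighten only this candidate's bound
        exact[i] = True
```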
- the relative amount by which the confidence intervals can shrink with respect to a decision x is bounded by the worst-case (greatest) hallucinated conditional mutual information I(f(x); y_{fb[t]+1:t−1} | y_{1:fb[t]}).
- if a constant bound, C, is applied on the maximum conditional mutual information that can be accrued within a batch, it may be used to guide the choice of β_t to ensure that the algorithm is not overconfident.
- the machinery of the upper confidence bound algorithm may be used to derive the regret bound.
- the main result bounds the regret of the GP-BUCB algorithm in terms of a bound, C, on the maximum conditional mutual information. It holds under any of three different assumptions about the payoff function f, which may all be of interest. In particular, it holds even if the assumption that f is sampled from a GP is replaced by the assumption that f has low norm in the Reproducing Kernel Hilbert Space (RKHS) associated with the kernel function.
- the GP-BUCB algorithm may ensure a bound on the conditional mutual information gain with respect to f(x) by bounding the global conditional mutual information gain (i.e., the information gained with respect to f as a whole), or by noting that the information which can be gained by a set of B−1 observations can only decrease (submodularity), and γ_{B−1} can be calculated using only knowledge of the kernel function.
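Since the predictive variance, and hence the information gain, depends only on the kernel and the chosen inputs, such a bound can be estimated before any outputs are observed. The following hedged sketch greedily performs uncertainty sampling over the candidate set, accumulating ½ log(1 + σ_n⁻² σ²) at each step; by submodularity of the information gain, the greedy total scaled by 1/(1 − e⁻¹) upper-bounds what any B − 1 observations could gain. It reuses the gp_posterior and rbf_kernel helpers sketched above; names and kernel settings are illustrative assumptions.

```python
import numpy as np

def info_gain_bound(candidates, B, sigma_n=0.1, kernel=rbf_kernel):
    """Estimate a bound C on the information obtainable from any B-1 observations,
    using only the kernel (no observed outputs are required)."""
    X_sel = np.empty((0, candidates.shape[1]))
    greedy_gain = 0.0
    for _ in range(B - 1):
        if len(X_sel) == 0:
            var = kernel(candidates, candidates).diagonal().copy()    # prior variances
        else:
            _, var = gp_posterior(X_sel, np.zeros(len(X_sel)), candidates,
                                  sigma_n=sigma_n, kernel=kernel)
        i = int(np.argmax(var))                                       # uncertainty sampling step
        greedy_gain += 0.5 * np.log(1.0 + var[i] / sigma_n**2)        # information gained by this step
        X_sel = np.vstack([X_sel, candidates[i][None, :]])
    return greedy_gain / (1.0 - np.exp(-1.0))                         # submodularity-based upper bound
```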
- the local information gain with respect to any f(x), x ∈ D, t ∈ ℕ, is bounded by fixing the feedback times and then bounding the maximum conditional mutual information with respect to the entire function f which can be acquired by any algorithm which chooses any set of B − 1 or fewer observations. While this argument uses multiple upper bounds, any or all of which may be overly conservative, this approach is still applicable because such a bound C holds for any possible algorithm for constructing batches.
- the GP-BUCB algorithm is applied to the posterior Gaussian process distribution, conditioned on y_init.
- the initialization set D_init is constructed via uncertainty sampling.
- T_init may be chosen as a function of B such that the conditional mutual information which can be hallucinated within any batch following initialization is at most C. Using this choice of C bounds the post-initialization regret. In order to derive bounds on T_init, a bound on the maximum mutual information γ_T which is analytical and sublinear is required.
- the total regret of the initialization and the subsequent run of GP-BUCB is bounded, with probability 1 − δ, in terms of the regret of the GP-UCB algorithm run for T rounds with α_t dictated by the problem conditions and δ.
- let the feedback mapping fb[t] and the sequence of actions selected by the GP-BUCB algorithm be such that there exists some bound C, holding for all t ≥ 1 and x ∈ D, on I(f(x); y_{fb[t]+1:t−1} | y_{1:fb[t]}), the hallucinated information as of round t with respect to any value f(x).
- This requirement on fb[t] in terms of C may appear stringent, but it can be easily satisfied by on-line, data-driven construction of the mapping fb[t] after having pre-selected a value for C.
- the GP-AUCB algorithm controls feedback adaptively through pre-selecting a value of C limiting the amount of hallucinated batch information.
- the GP-AUCB algorithm chooses fb[t] online, using a limit on the amount of information hallucinated within the batch.
- Such adaptive batch length control is possible because the amount of hallucinated information may be measured online, even in the absence of the observations themselves, using the equality I(f; y_{fb[t]+1:t−1} | y_{1:fb[t]}) = ½ Σ_{τ=fb[t]+1}^{t−1} log(1 + σ_n^{−2} σ²_{τ−1}(x_τ)), which depends only on where the observations are made.
- the GP-AUCB algorithm can also be employed in the delay setting, but rather than using the hallucinated information to decide whether or not to terminate the current batch, the algorithm chooses whether or not to submit an action in this round; the algorithm submits an action if the hallucinated information is less than C and refuses to submit an action ("balks") if the hallucinated information is C or greater.
- the information gain locally under the GP-AUCB algorithm is bounded by the information gain with respect to f as a whole, which is constrained to be less than C by the stopping condition.
- this equality may be used to maintain a guarantee of confidence interval correctness for batches of variable length.
- the batch length may possibly become quite large as the shape of f is better and better understood and the variance of f(x_t) tends to decrease.
- when exploratory actions are chosen, the high information gain of these actions contributes to a relatively early arrival at the information gain threshold C and thus a relatively short batch length, even late in the algorithm's run.
- the batch length is chosen in response to the algorithm's need to explore or exploit, as dictated by the decision rule in equation (4) associated with the GP-BUCB algorithm, rather than simply following a monotonically increasing schedule.
- C may be selected to deliver batches with a specified minimum size.
- C may be set such that C > γ_{B_min − 1}, i.e., no set of queries of size less than B_min could possibly gain enough information to end the batch.
- if C is also chosen such that C ≤ γ_{B_min}, it is possible to select a batch of size B_min which does attain the required amount of information to terminate the batch, and thus B_min may be thought of as the minimum batch size which could be produced by the GP-AUCB algorithm.
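A minimal sketch, under the same assumptions as the earlier snippets (and reusing the gp_posterior and rbf_kernel helpers sketched above), of the adaptive batching just described: the information hallucinated by each selected action is accumulated online from the predictive variances alone, and the batch closes once the accumulated amount reaches the pre-selected bound C. The function names, the optional maximum batch size, and the kernel are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def assemble_gp_aucb_batch(X_fb, y_fb, candidates, C, beta_t,
                           sigma_n=0.1, kernel=rbf_kernel, max_size=100):
    """Assemble a variable-length batch, stopping once the hallucinated information reaches C."""
    mu_fb, _ = gp_posterior(X_fb, y_fb, candidates, sigma_n, kernel)   # mean frozen at feedback time
    X_hall = X_fb.copy()
    batch, info = [], 0.0
    while info < C and len(batch) < max_size:
        # Variance reflects real and hallucinated observation locations (their values are irrelevant).
        _, var = gp_posterior(X_hall, np.zeros(len(X_hall)), candidates, sigma_n, kernel)
        score = mu_fb + np.sqrt(beta_t) * np.sqrt(var)
        i = int(np.argmax(score))
        batch.append(candidates[i])
        info += 0.5 * np.log(1.0 + var[i] / sigma_n**2)    # information hallucinated by this action
        X_hall = np.vstack([X_hall, candidates[i][None, :]])
    return np.array(batch)
```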
- the GP-AUCB Local algorithm can take advantage of the efficient upper-bounding lazy variance technique discussed above, despite the fact that the local stopping condition nominally requires all of the variances to be calculated every round; this is because I(f(x); y_{fb[t]+1:t−1} | y_{1:fb[t]}) is itself bounded by the global hallucinated information gain, so no local stopping condition can be triggered while the global gain remains below C.
- the GP-AUCB Local algorithm may be run lazily until the global information gain reaches C, at which point it must be run non-lazily for the remainder of the batch.
- the GP-BUCB algorithm is able to handle parallel exploration problems (batches of B experiments executed concurrently). This approach generalizes the GP-UCB approach to the parallel setting. Near-linear speedup is possible for many commonly used kernel functions: as long as the batch size B grows at most polylogarithmically in the number of rounds T, the GP-BUCB regret bounds only increase by a constant factor independent of B as compared to the known bounds for the sequential algorithm.
- the GP-BUCB algorithm may also be drastically accelerated by using lazy evaluations of variance.
- the GP-BUCB algorithm is also able to handle delayed feedback problems (where each decision can only use feedback up to B rounds ago, or where at most B − 1 observations are pending).
- the GP-AUCB algorithm handles the same cases and provides the same regret bounds, with batches or delay queues of variable length, controlled by the algorithm in accordance with the data.
- the GP-BUCB and GP-AUCB algorithms are generally applicable to any application which requires multiple rounds or batches, where each experiment is cheap but takes too long to complete before the choice of the next location must be made, and where the evaluation of each experiment is costly.
- Some possible applications of the batch version of the algorithm include the selection of reagent combinations for chemical experiments, the selection of home-use electrical stimuli in such systems as the MyStim home epidural electrostimulation (EES) controller manufactured by Medtronic.
- the GP-BUCB and GP-AUCB algorithms are suitable for applications like image annotation where many human annotators are available, but the amount of time required to generate an annotation is comparatively long with respect to the run time of the algorithm. Additional applications may include robotics, fluid dynamics analysis, machine learning, biosystems modeling, chemical modeling, finite element analysis, etc.
- Additional applications of the delay version of the GP-BUCB and GP- AUCB algorithms include cases where the sequence of experiments is running continuously, but the evaluation of each experiment may be lengthy.
- one example is spinal cord therapy by epidural electrostimulation (EES), in which the feedback is electromyography (EMG) data that must be processed after each test.
- additional tests may be performed based on delayed feedback from previous tests without having to wait for intervening processing to be finished; the ability to select EES configurations without first processing the EMG data is therefore useful.
- the manual image annotation application mentioned above for the batch case could also be formulated in the delay case, where each annotator finishes working on their assigned image asynchronously, and relatively slowly. This capability would be very useful in a variety of on-line applications.
- the application of the system 100 may be made in epidural electrostimulation (EES) for spinal cord injury therapy alone or in combination with other interventions such as pharmacological agents and motor training.
- electrode arrays of increasing sophistication are being developed. Optimizing the stimuli delivered by these EES arrays is difficult as the numerous parameters yield large and complex stimulus spaces.
- a further complication is a lack of reliable predictive models for the physiological effects of a given stimulus, necessitating physical experiments with the EES system and patient.
- Another related application may be using the GP-AUCB algorithm to control costly finite element simulations of the electric field and voltage distribution in the spinal cord under epidural electro-stimulation, which would then be coupled to simulations of neurons.
- a variant of the GP-BUCB algorithm explained above may be used to actively optimize an EMG-based metric of lower spinal cord function in 4 complete spinal rats during 3-15 experimental sessions.
- the metric is the peak-to-peak (PtP) amplitude of the evoked EMG response, at a latency that implies a single interneuronal delay rather than direct activation of the motoneurons.
- this amplitude is treated as a surrogate for the ability of the spinal interneurons to transduce sensory information into muscle responses, under the assumption that supplying effectively transduced stimuli provides activity-based therapeutic excitation of the lower spinal cord.
- the GP-BUCB algorithm models the response function over stimuli and time as drawn from a Gaussian process, a probability distribution over functions. It then selects experimental stimuli to balance exploring poorly understood regions of the stimulus space and exploiting stimuli that produce strong responses.
- the GP-BUCB algorithm under this reward metric typically achieves superior performance in terms of time-averaged reward and parity in terms of the best single PtP response obtained, in comparison to an expert human experimenter's simultaneous efforts. Further, despite being initially unaware of the spinal location of effective stimuli, the algorithm learns a human-interpretable "shape" of the response function via active experimentation, allowing discovery of the gross functional organization of the spinal cord.
- Another application may be in the field of automated vaccine design.
- the GP-BUCB algorithm may evaluate a database which describes the binding affinity of various peptides with a Major Histocompatibility Complex (MHC) Class I molecule. This is of importance when designing vaccines to exploit peptide binding properties.
- Each of the peptides is described by a set of chemical features in R^45.
- the binding affinity of each peptide, which is treated as the reward or payoff, is described as an offset IC50 value.
- the experiments used an isotropic linear ARD kernel fitted on a different MHC molecule from the same data set.
- An example computer system 300 in FIG. 3 includes a processor 302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 304, and a static memory 306, which communicate with each other via a bus 308.
- the computer system 300 may further include a video display unit 310 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
- the computer system 300 also includes an input device 312 (e.g., a keyboard), a cursor control device 314 (e.g., a mouse), a disk drive unit 316, a signal generation device 318 (e.g., a speaker), and a network interface device 320.
- the disk drive unit 316 includes a machine-readable medium 322 on which is stored one or more sets of instructions (e.g., software 324) embodying any one or more of the methodologies or functions described herein.
- the instructions 324 may also reside, completely or at least partially, within the main memory 304, the static memory 306, and/or within the processor 302 during execution thereof by the computer system 300.
- the main memory 304 and the processor 302 also may constitute machine-readable media.
- the instructions 324 may further be transmitted or received over a network.
- while the machine-readable medium is shown in an example to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “machine-readable medium” can also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
- the term “machine-readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
- a variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system, or a floppy disk, hard disk, CD-ROM, DVD-ROM, or other computer readable medium that is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, may be used for the memory.
- each of the computing devices of the system 100 may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, micro-controllers, application specific integrated circuits (ASIC), programmable logic devices (PLD), field programmable logic devices (FPLD), field programmable gate arrays (FPGA), and the like, programmed according to the teachings as described and illustrated herein, as will be appreciated by those skilled in the computer, software, and networking arts.
- two or more computing systems or devices may be substituted for any one of the computing systems in the system 100. Accordingly, principles and advantages of distributed processing, such as redundancy, replication, and the like, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the system 100.
- the system 100 may also be implemented on a computer system or systems that extend across any network environment using any suitable interface mechanisms and communications technologies including, for example, telecommunications in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, a combination thereof, and the like.
- the operation of the example system 100 shown in FIG. 1, which may be controlled on the example workstation, will now be described with reference to FIG. 1 in conjunction with the flow diagram shown in FIG. 4.
- the flow diagram in FIG. 4 is representative of example machine readable instructions for implementing selection of a batch of input data for parallel evaluation.
- the machine readable instructions comprise an algorithm for execution by: (a) a processor, (b) a controller, and/or (c) one or more other suitable processing device(s).
- the algorithm may be embodied in software stored on tangible media such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital video (versatile) disk (DVD), or other memory devices, but persons of ordinary skill in the art will readily appreciate that the entire algorithm and/or parts thereof could alternatively be executed by a device other than a processor and/or embodied in firmware or dedicated hardware in a well-known manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), discrete logic, etc.).
- any or all of the components of the interfaces could be implemented by software, hardware, and/or firmware.
- some or all of the machine readable instructions represented by the flowchart of FIG. 4 may be implemented manually.
- while the example algorithm is described with reference to the flowchart illustrated in FIG. 4, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used.
- the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
- FIG. 4 is a flow diagram of the process for implementing selection of a batch of input data for parallel evaluation with a function.
- the system 100 selects an initial set of inputs from the dataset of available input data stored in the database 102 (400).
- the initial set of inputs is constructed via uncertainty sampling and is selected non-adaptively without prior feedback.
- Feedback is obtained from the initial set of inputs with the function (402).
- the Gaussian process module 104 then provides the Gaussian process posterior and a confidence bound decision rule conditioned on the feedback from the initial set of inputs (404).
- the Gaussian process module 104 thus models the function as drawn from a Gaussian process, determines a mean and a variance of the modeled function via observations, and determines an upper confidence bound from the mean and variance.
- the batch selection engine 106 determines whether there are sufficient sets of inputs, B, in the batch (406). If there are sufficient inputs, the batch is stored and is complete for evaluation by the function evaluation engine 108 (408). The input data of the batch is evaluated in parallel with the function and the resulting determined data outputs are stored in a memory device such as the output memory device 110.
- the batch selection engine 106 selects a set of inputs based on the Gaussian process posterior and a confidence bound decision rule (410). The batch selection engine 106 then hallucinates a corresponding observation based on the selected action (412). The batch selection engine 106 then updates the Gaussian process model using the hallucinated observations (414). The batch selection engine 106 then loops back to determine whether sufficient sets of inputs, B, have been assigned to the batch (406).
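To tie the steps of FIG. 4 together, the following hedged sketch shows one possible shape of the outer loop: assemble a batch with hallucinated observations (any of the batch-assembly sketches in this document could be passed in), evaluate the batch in parallel, and fold the real feedback back in before the next batch is selected. The use of concurrent.futures and every name here are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_parallel_batches(f, X_init, y_init, candidates, assemble_batch, n_batches):
    """Outer loop sketch of FIG. 4.

    f              : the (expensive) function being optimized; must be picklable for process pools
    assemble_batch : callable (X_obs, y_obs, candidates) -> array of selected inputs,
                     e.g. a GP-BUCB or GP-AUCB batch-assembly routine
    """
    X_obs, y_obs = X_init.copy(), y_init.copy()
    for _ in range(n_batches):
        batch = assemble_batch(X_obs, y_obs, candidates)        # build the batch (cf. 406-414)
        with ProcessPoolExecutor() as pool:                     # evaluate the batch in parallel (cf. 408)
            y_batch = np.array(list(pool.map(f, batch)))
        X_obs = np.vstack([X_obs, batch])                       # real feedback for the next batch
        y_obs = np.concatenate([y_obs, y_batch])
    return X_obs, y_obs
```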
Abstract
A method and system for selecting a batch of input data from available input data for parallel evaluation by a function is disclosed. The function is modeled as drawn from a Gaussian process. Observations are used to determine a mean and a variance of the modeled function. An upper confidence bound is determined from the determined mean and variance. A decision rule is applied to select input data from the available input data to add to the batch of input data. The selection of the input data is based on a domain-specific time varying parameter. Intermediate observations are hallucinated within the batch. The hallucinated observations are used with the decision rule to select subsequent input data from the available input data for the batch of input data. The input data of the batch is evaluated in parallel with the function. The resulting determined data outputs are stored.
Description
METHOD AND SYSTEM FOR PARALLEL BATCH PROCESSING OF DATA SETS USING GAUSSIAN PROCESS WITH BATCH UPPER CONFIDENCE
BOUND
COPYRIGHT
[0001] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
PRIORITY
[0002] The present application claims priority to U.S. Provisional Application 61/660,110 filed on June 15, 2012, which is hereby incorporated by reference in its entirety.
FEDERAL SUPPORT STATEMENT
[0003] The subject matter of this invention was made with government support under NS062009 awarded by the National Institutes of Health and under IIS0953413 awarded by the National Science Foundation and FA8650-11-1-7156 awarded by USAF/ESC. The government has certain rights in the invention.
TECHNICAL FIELD
[0004] The present invention relates generally to efficient selection of input data for maximizing a reward from a function, and more particularly, to a system for selecting a batch of input data for parallel evaluation to obtain optimal outputs from a function.
BACKGROUND
[0005] Many problems involving correlating inputs require optimizing an unknown reward function from which only noisy observations may be obtained. For example, in the field of spinal therapy, there may be multiple electrodes placed in different locations on the spine. Determining the optimal location and polarities (cathode or anode) of the multiple electrodes for the most optimal response from
electrical stimulation involves testing multiple combinations of locations and electrodes for outputs that reflect both reactions and noise. Analyzing the output of the function is used to continue to refine the input data sets to obtain the optimal location of electrodes to maximize desired attributes of the stimulus response.
[0006] A central challenge is choosing actions that both explore (estimate) a function and exploit knowledge about likely high reward regions for determining the best results based on the function. Carefully calibrating this exploration-exploitation tradeoff in selecting input data is especially important in cases where the experiments are costly in some sense, e.g., when each experiment takes a long time to perform and the time window available for experiments is short. This approach relies on the completion of a first experiment before other values may be tested, but such an approach requires time to conduct multiple trials which rely on the results of the previous trials.
[0007] In many applications, it is desirable to select a batch of input data to be evaluated in parallel, which increases the speed of obtaining solutions since the many inputs constituting a batch may be tested simultaneously. By parallelizing the experiments, substantially more information may be gathered in the same time frame; however, future actions must be chosen without the benefit of intermediate results. This involves choosing groups of experiments to run simultaneously. The challenge is to assemble groups of experiments which both explore the function and exploit what are currently known to be high-performing regions. This challenge is significant when dealing with the combinatorially large set of possible data inputs. Further, the statistical question of quantitatively how the algorithm's performance depends on the size of the batch (i.e., the degree of informational parallelism) is important to resolve.
[0008] Exploration-exploitation tradeoffs have been studied in context of the multi-armed bandit problem, in which a single action is taken at each round, and a corresponding (possibly noisy) reward is observed. Early work has focused on the case of a finite number of decisions and payoffs that are independent across the arms. In this setting, under some strong assumptions, optimal policies can be computed.
[0009] Optimistic allocation of actions according to upper-confidence bounds (UCB) on the payoffs has proven to be particularly effective. Recently, approaches for coping with large (or infinite) sets of decisions have been developed. In these cases, dependence between the payoffs associated with different decisions must be modeled
and exploited. Examples include bandits with linear or Lipschitz-continuous payoffs or bandits on trees. The exploration-exploitation tradeoff has also been studied in Bayesian global optimization and response surface modeling, where Gaussian process models are often used due to their flexibility in incorporating prior assumptions about the payoff function.
[0010] One natural application is the design of high-throughput experiments, where several experiments are performed in parallel, but only receive feedback after the experiments have concluded. In other settings, feedback may be received only after a delay. To enable parallel selection, one must account for the lag between decisions and observations. Most existing approaches that can deal with such delay result in a multiplicative increase in the cumulative regret as the delay grows. Only recently, methods have demonstrated that it is possible to obtain regret bounds that only increase additively with the delay (i.e., the penalty becomes negligible for large numbers of decisions). However, such approaches only apply to contextual bandit problems with finite decision sets, and thus not to settings with complex (even nonparametric) payoff functions.
[0011] There is therefore a need for a method to select a batch of input data for numerous evaluations of a function performed in parallel to maximize reward. There is also a need for a system to use existing Gaussian process models with upper confidence bounds in order to select a batch of input data without relying on previous evaluation output data. There is a further need to provide a process for selecting a batch with a variable length for function evaluation in parallel.
SUMMARY
[0012] According to one example, a method of selecting a batch of input data from available input data for parallel evaluation by a function is disclosed. The function is modeled as drawn from a Gaussian process. Observations are used to determine a mean and a variance of the modeled function. An upper confidence bound is determined from the determined mean and variance. A decision rule is applied to select input data from the available input data to add to the batch of input data. The selection is based on a domain-specific time varying parameter. Intermediate observations are hallucinated within the batch. The hallucinated observations with the decision rule are used to select subsequent input data from the
available input data for the batch of input data. The input data of the batch is evaluated in parallel with the function. The resulting determined data outputs are stored in a memory device.
[0013] Another example is a system for selecting a batch of input data from available input data for parallel evaluation by a function. The system includes a storage device including a database storing the available input data and a controller coupled to the storage device. The controller is operable to model the function as drawn from a Gaussian process. The controller is operable to use observations to determine a mean and a variance of the modeled function and determine an upper confidence bound from the determined mean and variance. The controller applies a decision rule to select input data from the available input data to add to the batch of input data, wherein the selection is based on a domain-specific time varying parameter. The controller hallucinates intermediate observations within the batch and selects subsequent input data from the available input data for the batch of input data using the hallucinated observations with the decision rule. The controller evaluates the input data of the batch in parallel with the function and stores the resulting determined data outputs.
[0014] Another example is a non-transitory, machine readable medium having stored thereon instructions for selecting a batch of input data from available input data for parallel evaluation by a function. The instructions, when executed by at least one machine processor, cause the machine to model the function as drawn from a Gaussian process. The instructions cause the machine to use observations to determine a mean and a variance of the modeled function and determine an upper confidence bound from the determined mean and variance. The instructions cause the machine to apply a decision rule to select input data from the available input data to add to the batch of input data, wherein the selection is based on a domain-specific time varying parameter. The instructions cause the machine to hallucinate intermediate observations within the batch and select subsequent input data from the available input data for the batch of input data using the hallucinated observations with the decision rule. The instructions cause the machine to evaluate the input data of the batch in parallel with the function.
[0015] Additional aspects of the invention will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments, which is made with reference to the drawings, a brief description of which is provided below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] FIG. 1 is a block diagram of a system for selecting batches of input data for parallel evaluation in a function;
[0017] FIG. 2A-2C are graphs of successive steps during the parallel selection of batches of input data in FIG. 1 resulting in different confidence bounds;
[0018] FIG. 3 is a block diagram of an example computing device in the system in FIG. 1; and
[0019] FIG. 4 is a flow diagram of the process run by the system in FIG. 1 to select batches of data for parallel processing.
[0020] While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
DETAILED DESCRIPTION
[0021] FIG. 1 shows a system 100 that optimizes processing a batch of input data for parallel evaluation by a function f. The system 100 selects a batch of input data from available input for evaluating the optimal outputs of the function, f, based on multi-variable data inputs. The system 100 includes a database 102 that stores available input data. The available input data may be derived from different possible values from inputs such as physical locations and dimensions, chemical compositions, electrical levels, etc. The system 100 determines which available inputs to assemble into a batch in order to obtain optimal outputs from the function, f. A Gaussian model determination module 104 outputs a model of the function as drawn from a Gaussian process. The Gaussian model determination module 104 uses observations to determine a mean and a variance. The module 104 determines an upper confidence bound from the determined mean and variance. A batch selection engine 106 is coupled to the input database 102 and the Gaussian process model engine 104. The results of the Gaussian determination module 104 are used by the batch selection engine 106 to apply a decision rule to select input data from the available input data to add to the batch of input data for evaluation that will have a high probability of obtaining the optimal outputs via algorithms which will be described below. Intermediate observations are hallucinated within the batch without actual feedback from outputs based on the inputs. The hallucinated observations with the decision rule are used by the batch selection engine 106 to select subsequent input data from the available input data for the batch of input data.
[0022] The system 100 includes a function evaluation engine 108 for evaluating the output of the function, f. The batch of input data selected by the batch selection engine 106 may be evaluated in parallel for the function, f, by the function evaluation engine 108. The function evaluation engine 108 is coupled to a memory device 110 that includes an output database. The batches of input data selected from the batch selection engine 106 are processed in parallel by the function evaluation engine 108 without relying on sequential feedback for each individual batch from the evaluation of a previous batch. The outputs of the function evaluation engine 108 are stored in the output database. The resulting data provides the optimal inputs for maximizing the reward (output) of the function, f. Additional real feedback may be provided from the results of the function evaluation engine 108 on further selections of additional batches of input data to further maximize the reward of the function. Such feedback may be delayed to speed the processing.
[0023] The memory 110 may store additional correlation data with the batches of input data producing optimal outputs from the function, f. A display device 112 is coupled to the memory 110 to display correlation between a batch of input data and the outputs of the function, f, to a user from the memory 110. The resulting correlation data may be used for optimizing the operation of the function such as by selecting physical positioning or input values for operation of a device, selection of compositions for a formulation, etc. A control workstation 114 may also be coupled to the memory device 110 for using the correlated data in the memory 110 to optimize the operation of a device using the function. The system 100 may be applied for functions where: 1) simulation is possible based on first principles/a reasonable model; 2) the desired phenomena are complex, chaotic, emergent, or uncertain
without executing the simulation; and 3) success is quantifiable upon receipt of the simulation results. The system 100 may also be run on a continuous basis to determine optimal inputs when the function changes over time.
[0024] The process of the system 100 begins with a Gaussian characterization of the function, f, by the Gaussian process module 104. As is known, a Gaussian process is a probability distribution across a class of typically smooth functions, which is parameterized by a kernel function k(x, x'), which characterizes the smoothness of the function, f, and a prior mean function μ(x), which is, for notational simplicity, assumed to be μ(x) = 0 without loss of generality. The function, f, is modeled from such a Gaussian process. It is assumed that the noise is independent identically distributed ("i.i.d.") Gaussian. Conditioned on a set of observations y_{1:t−1} = [y_1, ..., y_{t−1}]^T corresponding to X = (x_1, ..., x_{t−1}), at any x ∈ D the Gaussian posterior is f(x) | y_{1:t−1} ~ N(μ_{t−1}(x), σ²_{t−1}(x)), where:

μ_{t−1}(x) = k [K + σ_n² I]^{−1} y_{1:t−1}   (1)

and

σ²_{t−1}(x) = k(x, x) − k [K + σ_n² I]^{−1} k^T   (2)

where μ_{t−1}(x) is the posterior mean, σ_{t−1}(x) is the posterior standard deviation, k = k(x, X) is the row vector of kernel evaluations between x and X, and K = K(X, X) is the matrix of kernel evaluations between the past observations.
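For concreteness, the gp_posterior helper sketched earlier in this document computes exactly these two quantities; a hypothetical call might look as follows (the data and kernel settings are illustrative only).

```python
import numpy as np

# Hypothetical observed data: X holds past inputs, y the corresponding noisy outputs.
X = np.array([[0.1], [0.4], [0.9]])
y = np.array([0.2, 0.7, -0.1])
D = np.linspace(0.0, 1.0, 101)[:, None]       # discretized decision set

mu, var = gp_posterior(X, y, D, sigma_n=0.1)  # equations (1) and (2) at every x in D
```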
[0025] The maximum mutual information between the payoff function, f, and observations y_A of any set A ⊆ D of the T decisions evaluated up until time T is defined as γ_T = max_{A ⊆ D, |A| ≤ T} I(f; y_A). In the Gaussian process model, the conditional mutual information gain of a set of potential observations with respect to the payoff function measures the degree to which acquiring these observations will influence the model. The conditional mutual information gain of a set of observations y_A of a set A = {x_t, ..., x_T} with respect to f, given observations y_{1:t−1}, is

I(f; y_A | y_{1:t−1}) = ½ Σ_{τ=t}^{T} log(1 + σ_n^{−2} σ²_{τ−1}(x_τ)).

This sum may be calculated solely by knowing A, without needing the actual values of the observations, y_A.

[0026] The modeling of the function, f, as a sample from a Gaussian process by the Gaussian process module 104 in FIG. 1 quantifies predictive uncertainty. This may be used to guide exploration and exploitation. A selection rule for inputs is used to guide exploration and exploitation. Such a process may be based on a Gaussian process upper confidence bound (GP-UCB) selection rule such as:
x_t = argmax_{x ∈ D} [ μ_{t−1}(x) + α_t^{1/2} σ_{t−1}(x) ]   (3)
[0027] This decision rule uses α_t, a domain-specific time-varying parameter, to trade off exploitation (sampling x with high mean) and exploration (sampling x with high standard deviation) by changing the relative weighting of the posterior mean and standard deviation, respectively μ_{t−1}(x) and σ_{t−1}(x) from Equations (1) and (2) above. With a proper choice of the domain-specific time-varying parameter, α_t, the cumulative regret of the upper confidence bound grows sub-linearly for many commonly used kernel functions, providing the first regret bounds and convergence rates for Gaussian process optimization.
[0028] Implicit in the definition of the decision rule is the corresponding confidence interval, C_t^seq, where this confidence interval's upper confidence bound is the value of the argument of the decision rule. For this (or any) confidence interval, the difference between the uppermost limit and the lowermost limit is the width w, which in this example is 2 α_t^{1/2} σ_{t−1}(x). This confidence interval is based on the posterior over the function, f, given y_{1:t−1}. A new confidence interval is created for round t + 1 after adding y_t to the set of observations. The domain-specific time-varying parameter, α_t, is selected such that a union bound over all t ≥ 1 and x ∈ D yields a high-probability guarantee of confidence interval correctness. It is this guarantee which enables the construction of high-probability regret bounds. The cumulative regret of the selection rule may be bounded (up to logarithmic factors) as R_T = O*(√(T α_T γ_T)), where α_T is the confidence interval width multiplier described above and γ_T is the maximum mutual information between the payoff function f and the observations y_{1:T}. For many commonly used kernel functions, γ_T grows sub-linearly and α_T only needs to grow poly-logarithmically in T, implying that R_T is also sub-linear. Thus R_T/T → 0 as T → ∞, and the decision rule in equation (3) is a no-regret algorithm.
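A minimal sketch of one sequential GP-UCB selection step per equation (3), reusing the gp_posterior and rbf_kernel helpers sketched earlier in this document; the value of α_t is domain-specific and is passed in here as an assumed parameter.

```python
import numpy as np

def gp_ucb_select(X_obs, y_obs, candidates, alpha_t, sigma_n=0.1, kernel=rbf_kernel):
    """x_t = argmax_x [ mu_{t-1}(x) + sqrt(alpha_t) * sigma_{t-1}(x) ]   (equation (3))."""
    mean, var = gp_posterior(X_obs, y_obs, candidates, sigma_n, kernel)
    scores = mean + np.sqrt(alpha_t) * np.sqrt(var)
    return candidates[int(np.argmax(scores))]
```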
[0029] As will be detailed below, the performance of the decision rule may be generalized to batch and parallel selection (i.e., B > 1). In order to address a diverse set of inputs while relying upon only outdated feedback, a Gaussian process - Batch Upper Confidence Bound (GP-BUCB) algorithm of the batch selection module 106 of the system 100 encourages diversity in exploration, uses past information in a principled fashion, and yields strong performance guarantees. The GP-BUCB algorithm may also be modified to a Gaussian process - Adaptive Upper Confidence Bound (GP-AUCB) algorithm, which retains the theoretical guarantees of the GP-
BUCB algorithm but creates batches of variable length in a data-driven manner as will be explained below.
[0030] The Gaussian Process - Batch Upper Confidence Bound (GP-BUCB) algorithm in this example encourages diversity in exploration and has strong performance guarantees. A key property of Gaussian processes is that the predictive variance in equation (2) above depends only on where the observations are made (X = (x_1, ..., x_{t-1})), not on which values were actually observed (y_{1:t-1} = [y_1, ..., y_{t-1}]^T). Thus, it is possible to compute the posterior variance used in the sequential GP-UCB score even while previous observations are not yet available. To do so, observations are hallucinated for every observation not yet received, i.e., y_{t'} = μ_{fb[t']}(x_{t'}) for each pending decision x_{t'}. The approach towards parallel exploration alters equation (3) to sequentially choose decisions within the batch as

x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ_{t-1}(x)]     (4)

The role of the domain-specific time-varying parameter β_t is analogous to that of α_t in the GP-UCB algorithm in equation (3). The confidence interval for the GP-BUCB algorithm has a width of 2β_t^{1/2} σ_{t-1}(x). This approach naturally encourages diversity in exploration by taking into account the change in predictive variance: since the payoffs of "similar" decisions have similar predictive distributions, exploring one decision will automatically reduce the predictive variance of similar decisions.
[0031] One example of a subroutine to implement the GP-BUCB algorithm is as follows:

Input: Decision set D, GP prior μ_0, σ_0, kernel function k(·,·)
for t = 1, 2, ..., T do
  Choose x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ_{t-1}(x)]
  Compute σ_t(·)
  if t = fb[t + 1] then
    Obtain y_{t'} = f(x_{t'}) + ε_{t'} for t' ∈ {fb[t], ..., t}
    Perform Bayesian inference to obtain μ_t(·)
  end if
end for
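One way the GP-BUCB subroutine above might be realized in Python is sketched below. It reuses the illustrative gp_posterior helper from the earlier GP-UCB sketch (passed in as a parameter), holds β_t fixed within the batch, and assumes a simple fixed batch size; these choices are assumptions for illustration only.

import numpy as np

def gp_bucb_batch(X_cand, X_obs, y_obs, beta_t, batch_size, gp_posterior):
    """Select one batch of decisions with the GP-BUCB rule of equation (4).

    Each chosen decision is assigned a hallucinated observation equal to the
    posterior mean at the last feedback time; this leaves the posterior mean
    unchanged but shrinks the predictive variance near the chosen point.
    """
    X_run = list(X_obs)                       # inputs: real observations so far
    y_run = list(y_obs)                       # values: real + hallucinated
    mu_fb, _ = gp_posterior(np.array(X_run), np.array(y_run), X_cand)
    batch = []
    for _ in range(batch_size):
        # Variance conditions on real and hallucinated inputs; mean stays frozen at mu_fb.
        _, var = gp_posterior(np.array(X_run), np.array(y_run), X_cand)
        idx = int(np.argmax(mu_fb + np.sqrt(beta_t) * np.sqrt(var)))
        batch.append(idx)
        X_run.append(X_cand[idx])
        y_run.append(mu_fb[idx])              # hallucinated value y = mu_fb(x)
    return batch

In use, the indices returned here would be evaluated in parallel by the function evaluation engine, and the true outputs would then be fed back before the next batch is selected.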
[0032] The confidence intervals used in this approach are predicated on having information from the early decisions in the batch. Since that information is not currently available, the confidence intervals are "overconfident" about the knowledge
of the function f at those locations. This overconfidence requires compensation in a principled manner. As will be explained below, one approach to doing so is to increase the width of the confidence intervals (through proper choice of β_t), such that the confidence intervals used by the GP-BUCB algorithm are conservative, i.e., they contain the true function f(x) with high probability.
[0033] FIGS. 2A-2C are graphs showing the selection of confidence bounds to compensate for overconfidence in the GP-BUCB algorithm. FIG. 2A is a graph 200 showing confidence intervals, in dark areas 202, which are computed from previous noisy observations, shown as crosses, at the beginning of a batch. These observations shape the posterior mean represented by a line 204. The true, unknown function f(x) is represented by the dashed line 206. To avoid overconfidence, the GP-BUCB algorithm chooses batches using the confidence intervals represented by the areas 208 and 210 such that even in the worst case, the succeeding confidence intervals in the batch will contain the confidence intervals 202.
[0034] FIG. 2B is a graph 240 showing the last decisions from the end of the batch begun in FIG. 2A. Like elements such as the posterior mean 204 and the function f(x) 206 in FIG. 2A are labeled identically in FIG. 2B. As shown in FIG. 2B, the hallucinated observations are designated as stars and are used to shrink the outer posterior confidence intervals shown in areas 242 and 244 from their comparative values at the start of the batch, represented by the dashed lines 246 and 248. The areas 242 and 244 still contain the confidence intervals 202 as desired.
[0035] FIG. 2C is a graph 260 that shows the beginning of the next batch after decisions from the previous batch in FIGs. 2A-2B. Like elements such as the posterior mean 204 and the function f(x) 206 in FIG. 2A are labeled identically in FIG. 2C. The feedback from all decisions is obtained and new confidence intervals represented by an area 262 and corresponding batches represented by areas 264 and 266 are computed.
[0036] One major computational bottleneck of applying GP-BUCB is calculating the posterior mean μ_t(x) and variance σ_t²(x) for the candidate decisions. The mean is updated only when feedback is obtained; after computing the Cholesky factorization of K(X,X) + σ_n²I, which is necessary whenever new feedback arrives, predicting μ_t(x) requires O(t) additions and multiplications. In contrast, σ_t²(x) must be recomputed for every x in D after every single round, and requires solving a back-substitution, which requires O(t²) computations. Therefore, the variance computation dominates the computational cost of the GP-BUCB algorithm.
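To make this cost structure concrete, the sketch below separates the work performed once per feedback event (the Cholesky factorization and the weight vector that makes mean prediction cheap) from the per-candidate, per-round variance computation. The helper names are assumptions, and SciPy's triangular solver is used for the back-substitution.

import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

def factorize(K_XX, y_obs, noise_var):
    # Once per feedback event: Cholesky factorization of K(X,X) + sigma_n^2 I,
    # plus the weight vector alpha that makes each later mean prediction O(t).
    c, low = cho_factor(K_XX + noise_var * np.eye(len(y_obs)), lower=True)
    alpha = cho_solve((c, low), y_obs)
    return c, alpha

def predict_mean(k_xX, alpha):
    # O(t) per candidate x: a single dot product with the cached weights.
    return k_xX @ alpha

def predict_variance(k_xx, k_xX, c):
    # O(t^2) per candidate x: one triangular back-substitution; because it is
    # repeated every round, this step dominates the cost of GP-BUCB.
    v = solve_triangular(c, k_xX, lower=True)
    return max(k_xx - v @ v, 0.0)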
[0037] However, for any fixed decision x, σ_t²(x) is monotonically decreasing in t. This fact can be exploited to dramatically improve the running time of GP-BUCB, at least for finite (or discretized) decision sets D. Instead of recomputing σ_{t-1}(x) for all decisions x in every round t, an upper bound σ̂_t(x) may be maintained, initialized to σ̂_0(x) = ∞. In every round, the GP-BUCB rule is applied with the upper bound to identify the maximizer x_t of the upper bound, such that the decision rule is:
x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ̂_t(x)]     (5)
[0038] The upper bound is then recomputed as σ̂_t(x_t) ← σ_{t-1}(x_t). If x_t still lies in the argmax of equation (5), it is selected as the next decision, and σ̂_t(x) is set to σ̂_{t-1}(x) for all remaining decisions x in the batch. This concept of "lazy" variance calculation leads to dramatically improved computational speeds since the variance σ_{t-1}(x) does not have to be recalculated for every decision.
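A sketch of this "lazy" variance bookkeeping: stale standard deviations from earlier rounds serve as upper bounds (they can only decrease), and a candidate's variance is recomputed only while it sits at the top of the ranking. The heap-based bookkeeping and the recompute_sigma callback are illustrative assumptions.

import heapq
import numpy as np

def lazy_select(mu_fb, sigma_bounds, beta_t, recompute_sigma):
    """Return the argmax of mu_fb[i] + sqrt(beta_t) * sigma(i), using stale
    upper bounds sigma_bounds (updated in place) and refreshing the bound only
    for the candidate currently at the top of the ordering."""
    heap = [(-(mu_fb[i] + np.sqrt(beta_t) * sigma_bounds[i]), i)
            for i in range(len(mu_fb))]
    heapq.heapify(heap)                                # max-heap via negation
    while True:
        _, i = heapq.heappop(heap)
        sigma_bounds[i] = recompute_sigma(i)           # refresh just this candidate
        score = mu_fb[i] + np.sqrt(beta_t) * sigma_bounds[i]
        if not heap or score >= -heap[0][0]:
            return i                                   # still the argmax: select it
        heapq.heappush(heap, (-score, i))              # otherwise re-insert and retry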
[0039] As explained above, a proper choice of β_t, such that the confidence intervals used by the GP-BUCB algorithm contain the true function f(x) with high probability, is necessary. The choice for β_t is made based on the assumption that f is sampled from a known Gaussian process prior with known noise variance σ_n², under which the ratio of σ_{fb[t]}(x) to σ_{t-1}(x) is bounded as:

σ_{fb[t]}(x) / σ_{t-1}(x) ≤ exp( I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}) ).

[0040] Therefore, the relative amount by which the confidence intervals can shrink with respect to decision x is bounded by the worst-case (greatest) conditional mutual information I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}) obtained during selection of x_{fb[t]+1}, ..., x_{t-1}, i.e., those decisions for which feedback is not yet available. Thus, if a constant bound C is applied on the maximum conditional mutual information that can be accrued within a batch, it may be used to guide the choice of β_t to ensure that the algorithm is not overconfident.
[0041] The machinery of the upper confidence bound algorithm may be used to derive the regret bound. The main result bounds the regret of the GP-BUCB algorithm in terms of a bound C on the maximum conditional mutual information. It holds under any of three different assumptions about the payoff function f, which may all be of interest. In particular, it holds even if the assumption that f is sampled from a
GP is replaced by the assumption that f has low norm in the Reproducing Kernel Hilbert Space (RKHS) associated with the kernel function.
[0042] Given an upper bound C on the conditional mutual information gain arising from any batch with respect to f(x) for any x in D, and given that the problem conditions and the scheduling of α_t satisfy one of three sets of conditions (listed below), then, with probability at least 1-δ, where δ ∈ (0,1), any algorithm using the GP-BUCB decision rule with β_t = exp(2C) α_{fb[t]} suffers regret no more than

R_T ≤ √(C₁ T exp(2C) α_T γ_T) + 2.
[0043] These sets of assumptions are respectively: Set 1) D is a finite set, f is sampled from a known Gaussian process prior with known noise variance, and α_t = 2 log(|D| t² π² / (6δ)); Set 2) D is a compact and convex set in [0, l]^d, f is sampled from a known Gaussian process prior with known noise variance, k(x, x') satisfies a probabilistic smoothness condition on sample paths which involves parameters a and b, and α_t = 2 log(t² 2π² / (3δ)) + 2d log(t² d b l (log(4da/δ))^{1/2}); and Set 3) D is arbitrary, f has an RKHS norm bounded by a constant M, the noise on the observations forms an arbitrary martingale difference sequence, uniformly bounded by σ_n, and α_t = 2M² + 300 γ_t ln³(t/δ).
[0044] The functional significance of a bound C on the information hallucinated with respect to any f(x) arises through this quantity's ability to bound the degree of contamination of the confidence intervals of the GP-BUCB algorithm with hallucinated information. Two properties of the mutual information are particularly useful. These properties are monotonicity (adding an element x to the set A cannot decrease the mutual information between f and the corresponding set of observations y_A) and submodularity (the increase in mutual information between f and y_A with the addition of an element x to set A cannot be greater than the corresponding increase in mutual information if x is added to A', where A' ⊆ A).
[0045] The amount of information with respect to f(x) hallucinated within any batch may be bounded by several quantities. Two particularly useful bounds are given in the following series of inequalities:

I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ I(f; y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ γ_{B-1},

where B is the batch size or delay length. These inequalities imply that the GP-BUCB algorithm may ensure a bound on the conditional mutual information gain with respect to f(x) either by bounding the global conditional mutual information gain (i.e., the information gained with respect to f as a whole) or by noting that the information which can be gained by a set of B-1 observations can only decrease as observations accrue (submodularity), and γ_{B-1} can be calculated using only knowledge of the kernel function.
[0046] In the GP-BUCB algorithm described above, the local information gain with respect to any f(x), x ∈ D, t ∈ ℕ, is bounded by fixing the feedback times and then bounding the maximum conditional mutual information with respect to the entire function f which can be acquired by any algorithm choosing any set of B - 1 or fewer observations. While this argument uses multiple upper bounds, any or all of which may be overly conservative, the approach is still applicable because such a bound C holds for any possible algorithm for constructing batches. It is otherwise quite difficult to disentangle the role of C in setting the exploration-exploitation tradeoff parameter β_t from its role as a bound on how much information is hallucinated by the algorithm, since a larger value of C (and thus β_t) typically produces more information gain by promoting exploration under the GP-BUCB decision rule in equation (4).
[0047] Since the bound C is related to the maximum amount of conditional mutual information which could be acquired by a set of B-1 actions, C grows monotonically with B. With a larger set of pending actions, there is more potential for exploration which gains additional information. One easy upper bound for the information gained in any batch may be derived as follows. Mutual information is monotone and submodular, so the maximum conditional mutual information which can be gained by any set of observations is maximized when the set of observations obtained so far is empty. Letting the maximum mutual information with respect to f which may be obtained by any observation set of size B - 1 be denoted γ_{B-1}, choosing C = γ_{B-1} provides a bound on the possible local conditional mutual information gain for any t ∈ ℕ and x ∈ D. A sketch of one way to compute such a bound from the kernel alone appears below.
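The sketch below estimates γ_{B-1} from the prior kernel matrix over a discretized decision set: greedy uncertainty sampling gives a near-maximal information gain for B-1 observations, and because the objective is monotone submodular, dividing the greedy value by (1 - 1/e) yields a conservative upper bound usable as C. The function name, the rank-one variance update, and the (1 - 1/e) correction are illustrative assumptions.

import numpy as np

def greedy_info_gain_bound(K_DD, noise_var, B):
    """Upper bound on gamma_{B-1}, the maximum mutual information obtainable
    from any B-1 observations, computed from the kernel matrix alone."""
    K_work = K_DD.copy()
    var = K_work.diagonal().copy()          # prior variances sigma_0^2(x)
    gain = 0.0
    for _ in range(B - 1):
        i = int(np.argmax(var))             # greedy: most informative point
        gain += 0.5 * np.log(1.0 + var[i] / noise_var)
        # Rank-one update of the posterior covariance after "observing" x_i.
        k_i = K_work[:, i].copy()
        K_work -= np.outer(k_i, k_i) / (var[i] + noise_var)
        var = K_work.diagonal().copy()
    # Greedy achieves at least (1 - 1/e) of the submodular maximum.
    return gain / (1.0 - 1.0 / np.e)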
[0048] The selection of the domain-specific time-varying parameter β_t using the processes explained above in relation to C may not be necessary for numerous applications. In certain applications, the domain-specific time-varying parameter β_t may be similar to the parameter α_t used in the decision rule in equation (3), with an additional small multiplier applied, such that the GP-BUCB algorithm is more aggressive than theoretically allowable at the price of the regret bound.
[0049] To obtain regret bounds independent of batch size B, the monotonicity properties of conditional mutual information may be exploited. This can be done by structuring GP-BUCB as a two-stage procedure. First, an initialization set D_init of size |D_init| = T_init is selected non-adaptively (i.e., without any feedback). Following the selection of this entire set, feedback y_init for all decisions in D_init is obtained. In the second stage, the GP-BUCB algorithm is applied to the posterior Gaussian process distribution, conditioned on y_init. The regret of the two-stage algorithm is then bounded in terms of the length of the initialization period and γ_T^init, the maximum conditional mutual information gain which can result from the post-initialization acquisition of a set of observations of size T.
[0050] The initialization set D_init is constructed via uncertainty sampling. The process starts with D_0^init = { }, and for each t = 1, ..., T_init the most uncertain decision is determined as x_t^init = argmax_{x ∈ D} σ²_{t-1}(x) and D_t^init = D_{t-1}^init ∪ {x_t^init}. Uncertainty sampling is a special case of the GP-BUCB algorithm with a constant prior mean of 0 and the requirement that for all 1 ≤ t ≤ T_init, fb[t] = 0, i.e., no feedback is taken into account for the first T_init iterations. Under the above procedure, if uncertainty sampling is used to generate an initialization set D_init of size T_init, then the post-initialization information gain bound γ^init_{B-1} shrinks as T_init grows. Whenever γ_T is sublinear in T (i.e., γ_T = o(T)), the bound on γ^init_{B-1} converges to zero for sufficiently large T_init. For any constant C > 0, T_init may be chosen as a function of B such that γ^init_{B-1} ≤ C. Using this choice of C bounds the post-initialization regret. In order to derive bounds on T_init, a bound on γ_T which is analytical and sublinear is required.
[0051] For the Squared Exponential, Matérn, and Linear kernels, such bounds on γ_T exist, and so T_init can be shown to be finite in these cases. Practically, initialization should be stopped when there exists no batch of size B which would include too much hallucinated information regarding the function at any point; one sufficient condition is when there does not exist an x ∈ D such that (B-1) log(1 + σ_n^{-2} σ_t²(x)) > C. The analysis above shows that this will occur with a finite number of observations for particular kernels. Given that this initialization condition is satisfied, the total regret of the initialization and subsequent run of GP-BUCB is bounded, with probability 1-δ, by a term proportional to T_init plus the regret of the GP-UCB algorithm run for T rounds with α_t dictated by the problem conditions and δ. A sketch of this initialization procedure appears below.
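The sketch below runs uncertainty sampling on a discretized decision set until the sufficient condition just stated holds, i.e., until no single point x could contribute more than C of hallucinated information to a batch of size B. The function name, the precomputed kernel matrix, and the rank-one variance update are illustrative assumptions.

import numpy as np

def uncertainty_sampling_init(K_DD, noise_var, B, C, max_steps=10000):
    """Return indices of the non-adaptively chosen initialization set D_init.

    Stops when no x in D satisfies (B - 1) * log(1 + sigma_t^2(x) / sigma_n^2) > C,
    so that no batch of size B could hallucinate more than C about any f(x)."""
    K_work = K_DD.copy()
    var = K_work.diagonal().copy()
    D_init = []
    for _ in range(max_steps):
        if (B - 1) * np.log(1.0 + var.max() / noise_var) <= C:
            break                       # initialization condition satisfied
        i = int(np.argmax(var))         # most uncertain decision
        D_init.append(i)
        k_i = K_work[:, i].copy()
        K_work -= np.outer(k_i, k_i) / (var[i] + noise_var)
        var = K_work.diagonal().copy()
    return D_init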
[0052] While the analysis of the GP-BUCB algorithm above uses feedback mappings fb[t] specified by the problem instance, it may be useful to let the algorithm control when to request feedback, and to allow this feedback period to vary in some range not easily described by any constant B. Allowing the algorithm to control parallelism is desirable in situations where the cost of executing the algorithm's queries to the oracle depends on both the number of batches and the number of individual actions or experiments in those batches. This is the case, for example, in a chemical experiment in which cost is a weighted sum of the time to complete the batch of reactions and the cost of the reagents needed for each individual experiment. In such a case, confronting an initial state of relative ignorance about the reward function, it may be desirable to avoid using a wasteful level of parallelism.
[0053] In order to address these concerns, it may be required that the mapping fb[t] and the sequence of actions selected by the GP-BUCB algorithm be such that there exists some bound C, holding for all t ≥ 1 and x ∈ D, on I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}), the hallucinated information as of round t with respect to any value f(x). This requirement on fb[t] in terms of C may appear stringent, but it can be easily satisfied by on-line, data-driven construction of the mapping fb[t] after having pre-selected a value for C.
[0054] The GP-AUCB algorithm explained below controls feedback adaptively through pre-selecting a value of C limiting the amount of hallucinated batch information. The GP-AUCB algorithm chooses fb[t] online, using a limit on the amount of information hallucinated within the batch. Such adaptive batch length control is possible because the amount of hallucinated information may be measured online, even in the absence of the observations themselves, as

I(f; y_{fb[t]+1:t} | y_{1:fb[t]}) = (1/2) Σ_{t'=fb[t]+1}^{t} log(1 + σ_n^{-2} σ²_{t'-1}(x_{t'})).
[0055] When this value exceeds a pre-determined constant C, the algorithm terminates the batch, setting fb for the next batch to the current t (i.e., fb[t + 1] = t), and waits for the oracle to return values for the pending queries. The GP-AUCB algorithm is shown in the pseudocode below.
Input: Decision set D, GP prior μ_0, σ_0, kernel function k(·,·), information gain threshold C.
Set fb[t'] = 0 for all t' ≥ 1, G = 0.
for t = 1, 2, ..., T do
  if G > C then
    Obtain y_{t'} = f(x_{t'}) + ε_{t'} for t' ∈ {fb[t - 1], ..., t - 1}
    Perform Bayesian inference to obtain μ_{t-1}(·)
    Set G = 0
    Set fb[t'] = t - 1 for all t' ≥ t
  end if
  Choose x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ_{t-1}(x)]
  Set G = G + (1/2) log(1 + σ_n^{-2} σ²_{t-1}(x_t))
  Compute σ_t(·)
end for
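A Python sketch of the GP-AUCB loop above follows. It reuses the illustrative gp_posterior helper from the GP-UCB sketch; the oracle f_noisy, the β_t schedule (a callable beta(t)), and the horizon T are assumptions. The batch is terminated and feedback requested as soon as the hallucinated information G exceeds the threshold C.

import numpy as np

def gp_aucb(X_cand, f_noisy, beta, C, T, noise_var, gp_posterior):
    # Adaptive-batch GP upper confidence bound (GP-AUCB) sketch.
    X_obs, y_obs = [], []        # feedback actually received
    pending = []                 # submitted actions awaiting feedback
    G = 0.0                      # information hallucinated in the current batch
    mu_fb, _ = gp_posterior(np.empty((0, X_cand.shape[1])), np.empty(0), X_cand)
    for t in range(1, T + 1):
        if G > C:                # terminate the batch and obtain feedback
            for idx in pending:
                X_obs.append(X_cand[idx])
                y_obs.append(f_noisy(X_cand[idx]))
            pending, G = [], 0.0
            mu_fb, _ = gp_posterior(np.array(X_obs), np.array(y_obs), X_cand)
        # Predictive variance conditions on real and hallucinated (pending) inputs.
        X_all = X_obs + [X_cand[i] for i in pending]
        y_all = y_obs + [mu_fb[i] for i in pending]
        _, var = gp_posterior(np.array(X_all), np.array(y_all), X_cand)
        idx = int(np.argmax(mu_fb + np.sqrt(beta(t)) * np.sqrt(var)))
        pending.append(idx)
        G += 0.5 * np.log(1.0 + var[idx] / noise_var)
    return X_obs, y_obs, pending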
[0056] The GP-AUCB algorithm can also be employed in the delay setting, but rather than using the hallucinated information to decide whether or not to terminate the current batch, the algorithm chooses whether or not to submit an action in this round; the algorithm submits an action if the hallucinated information is < C and refuses to submit an action ("balks") if the hallucinated information is > C.
[0057] In comparison to the GP-BUCB algorithm described above, by concerning itself only with the batches actually chosen, rather than worst-case batches, the GP-AUCB algorithm eliminates the requirement that C be greater than the information which could be gained in any batch, and thus makes the information gain bounding argument less conservative. For such a C:

I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ I(f; y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ C, for all x ∈ D and all t ≥ 1.
Because of the monotonicity of conditional mutual information, the information gained locally under the GP-AUCB algorithm is bounded by the information gained with respect to f as a whole, which is constrained to be ≤ C by the stopping condition. Using such an adaptive stopping condition and the corresponding value of β_t, the relationship between hallucinated information and confidence interval width described above may be used to maintain a guarantee of confidence interval correctness for batches of variable length. In particular, the batch length may become quite large as the shape of f is better and better understood and the variance of f(x_t) tends to decrease. Further, if exploratory actions are chosen, the high information gain of these actions contributes to a relatively early arrival at the information gain threshold C and thus a relatively short batch length, even late in the algorithm's run.
[0058] In this way, the batch length is chosen in response to the algorithm's need to explore or exploit as dictated by the decision rule (equation (4)) associated with the GP-BUCB algorithm, not simply following a monotonically increasing schedule.
[0059] This approach allows the regret of GP-AUCB to be bounded for both the batch and delay settings. If GP-AUCB is employed with a specified constant value C for which I(f; y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ C for all t ∈ {1, ..., T}, a specified constant δ in the interval (0,1), one of the three sets of conditions of the regret bound explained above, the resulting feedback mapping fb[t], and β_t = exp(2C) α_{fb[t]} for all t ∈ {1, ..., T}, then the inequality R_T ≤ √(C₁ T exp(2C) α_T γ_T) + 2 holds with probability at least 1-δ.
[0060] C may be selected to deliver batches with a specified minimum size B_min. To ensure this occurs, C may be set such that C > γ_{Bmin-1}, i.e., no set of queries of size less than B_min could possibly gain enough information to end the batch. Further, if C is chosen such that C ≤ γ_{Bmin}, it is possible to select a batch of size B_min which does attain the required amount of information to terminate the batch, and thus B_min may be thought of as the minimum batch size which could be produced by the GP-AUCB algorithm.
[0061] It is also possible to choose a very small value for the constant C and produce nearly sequential actions early, while retaining late-run parallelism and producing a very low regret bound. This can be seen if B_min is set to 1; such a C must satisfy the inequalities γ_0 = 0 < C ≤ γ_1, i.e., C can be a very small positive number. With regard to action selection, choosing C to be a small positive value results in the GP-AUCB algorithm beginning its run by acting sequentially, since most actions gain information greater than C. However, the algorithm has the potential to construct batches of increasing length as T → ∞. Even assuming the worst case, in which all observations are independent and each gains the same amount of information, the batch length allowed with a given posterior is lower-bounded. If the algorithm converges toward the optimal subset, the variances of the actions selected can be expected to become very small, producing batches of very long length, even for very small C. Choosing C as a small positive value thus produces the potential for naturally occurring late-run parallelism for very little additional regret relative to GP-UCB.
[0062] It is also possible to calculate the ratio σ_{fb[t]}(x)/σ_{t-1}(x) directly, for every x in D, if the algorithm remembers σ_{fb[t]}(x) for each x. By setting a value C, and stopping the batch selection if there exists an x ∈ D such that σ_{fb[t]}(x)/σ_{t-1}(x) > exp(C), the algorithm also ensures that the confidence intervals used by the GP-BUCB decision rule contain f with high probability. This variant algorithm, termed GP-AUCB Local for its local ratio checking, has its regret bounded by the same result stated above. The GP-AUCB Local algorithm may have advantages in cases where many observations are needed to acquire a good initial understanding of f.
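A sketch of the local stopping test used by this GP-AUCB Local variant is shown below: the batch ends as soon as the variance ratio at any candidate implies that more than C of information has been hallucinated about that point. The stored arrays and the function name are illustrative assumptions.

import numpy as np

def local_stop(sigma_fb, sigma_now, C):
    """Return True when the current batch should be terminated.

    sigma_fb  : posterior std. dev. of each candidate at the last feedback time fb[t]
    sigma_now : posterior std. dev. of each candidate including hallucinated observations
    The batch stops if some x has sigma_fb(x) / sigma_now(x) > exp(C)."""
    ratio = np.asarray(sigma_fb) / np.maximum(np.asarray(sigma_now), 1e-12)
    return bool(np.any(ratio > np.exp(C)))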
[0063] Importantly, the GP-AUCB Local algorithm can take advantage of the efficient upper-bounding lazy variance technique discussed above, despite the fact that the local stopping condition nominally requires all of the variances to be calculated every round. This is because I(f(x); y_{fb[t]+1:t-1} | y_{1:fb[t]}) ≤ I(f; y_{fb[t]+1:t-1} | y_{1:fb[t]}), that is, the local information gain is bounded by the global information gain, and further, the global information gain may be easily calculated on-line using only the posterior variances of the actions actually selected, which are available to the lazy algorithm. If the GP-AUCB Local algorithm is set to stop when C or more information has been acquired with respect to f(x) for any x in D, checking this condition is only necessary once at least C information has been acquired within the batch with respect to f as a whole. Thus, the GP-AUCB Local algorithm may be run lazily until the global information gain reaches C, at which point it must be run non-lazily for the remainder of the batch.
[0064] Thus, the GP-BUCB algorithm is able to handle parallel exploration problems (batches of B experiments executed concurrently). This approach generalizes the GP-UCB approach to the parallel setting. Near-linear speedup is possible for many commonly used kernel functions: as long as the batch size B grows at most polylogarithmically in the number of rounds T, the GP-BUCB regret bounds only increase by a constant factor independent of B as compared to the known bounds for the sequential algorithm. The GP-BUCB algorithm may also be drastically accelerated by using lazy evaluations of variance. The GP-BUCB algorithm is also able to handle delayed feedback problems (where each decision can only use feedback up to B rounds ago, or where at most B-1 observations are pending). The GP-AUCB algorithm handles the same cases and provides the same regret bounds, with batches or delay queues of variable length, controlled by the algorithm in accordance with the data.
[0065] The above-mentioned GP-BUCB and GP-AUCB algorithms are generally applicable to any application which requires multiple rounds or batches, where each experiment is inexpensive to launch but takes too long to complete before the next input must be chosen, and where the evaluation of each experiment is costly. Some possible applications of the batch version of the algorithm include the selection of reagent combinations for chemical experiments and the selection of home-use electrical stimuli in such systems as the MyStim home epidural electrostimulation (EES) controller manufactured by Medtronic. Similarly, the GP-BUCB and GP-AUCB algorithms are suitable for applications like image annotation, where many human annotators are available but the amount of time required to generate an annotation is long compared to the run time of the algorithm. Additional applications may include robotics, fluid dynamics analysis, machine learning, biosystems modeling, chemical modeling, finite element analysis, and the like.
[0066] Additional applications of the delay version of the GP-BUCB and GP-AUCB algorithms include cases where the sequence of experiments runs continuously, but the evaluation of each experiment may be lengthy. One example is the clinical selection of spinal cord therapy epidural electrostimulation (EES) configurations, in which electromyography (EMG) is used as one of the primary measurements, but the processing of this EMG (and other sensor) data is time-consuming, on the order of the testing time for one to two experimental stimuli. Thus, additional processing may be performed based on delayed feedback from previous tests without having to wait for intervening processing to finish. The ability to select EES configurations without first processing the EMG data is useful. The manual image annotation application mentioned above for the batch case could also be formulated in the delay case, where each annotator finishes working on their assigned image asynchronously and relatively slowly. This capability would be very useful in a variety of on-line applications.
[0067] The system 100 may be applied to epidural electrostimulation (EES) for spinal cord injury therapy, alone or in combination with other
interventions such as pharmacological agents and motor training. To overcome variation with respect to patients, injuries, surgical placement, and spinal cord plasticity, electrode arrays of increasing sophistication are being developed. Optimizing the stimuli delivered by these EES arrays is difficult as the numerous parameters yield large and complex stimulus spaces. A further complication is a lack of reliable predictive models for the physiological effects of a given stimulus, necessitating physical experiments with the EES system and patient. Another related application may be using the GP-AUCB algorithm to control costly finite element simulations of the electric field and voltage distribution in the spinal cord under epidural electro-stimulation, which would then be coupled to simulations of neurons.
[0068] A variant of the GP-BUCB algorithm explained above may be used to actively optimize an EMG-based metric of lower spinal cord function in 4 complete spinal rats over 3-15 experimental sessions. The algorithm is used to choose cathode/anode pairs of electrodes for EES among 7 contacts (n = 2, array composed of parallel wires) or 27 contacts (n = 2, flexible parylene/platinum array) to maximize reward, defined as the peak-to-peak (PtP) amplitude of the evoked potential in the left tibialis anterior muscle in the interval 4.5-7.5 ms post-stimulus. This latency implies a single interneuronal delay, not direct activation of the motoneurons.
[0069] The amplitude is treated as a surrogate for the ability of the spinal interneurons to transduce sensory information into muscle responses, and it is assumed that supplying effectively transduced stimuli provides activity-based therapeutic excitation of the lower spinal cord. The GP-BUCB algorithm models the response function over stimuli and time as drawn from a Gaussian process, a probability distribution over functions. It then selects experimental stimuli to balance exploring poorly understood regions of the stimulus space and exploiting stimuli that produce strong responses.
[0070] Under this reward metric, the GP-BUCB algorithm typically achieves superior performance in terms of time-averaged reward, and parity in terms of the best single PtP response obtained, in comparison to an expert human experimenter's simultaneous efforts. Further, despite being initially unaware of the spinal location of effective stimuli, the algorithm learns a human-interpretable "shape" of the response function via active experimentation, allowing discovery of the gross functional organization of the spinal cord.
[0071] Another application may be in the field of automated vaccine design. The GP-BUCB algorithm may evaluate a database which describes the binding affinity of various peptides with a Major Histocompatibility Complex (MHC) Class I molecule. This is of importance when designing vaccines to exploit peptide binding properties. Each of the peptides is described by a set of chemical features in ℝ⁴⁵. The binding affinity of each peptide, which is treated as the reward or payoff, is described as an offset IC50 value. The experiments used an isotropic linear ARD kernel fitted on a different MHC molecule from the same data set.
[0072] An example computer system 300 in FIG. 3 includes a processor 302 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 304, and a static memory 306, which communicate with each other via a bus 308. The computer system 300 may further include a video display unit 310 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 300 also includes an input device 312 (e.g., a keyboard), a cursor control device 314 (e.g., a mouse), a disk drive unit 316, a signal generation device 318 (e.g., a speaker), and a network interface device 320.
[0073] The disk drive unit 316 includes a machine-readable medium 322 on which is stored one or more sets of instructions (e.g., software 324) embodying any one or more of the methodologies or functions described herein. The instructions 324 may also reside, completely or at least partially, within the main memory 304, the static memory 306, and/or within the processor 302 during execution thereof by the computer system 300. The main memory 304 and the processor 302 also may constitute machine-readable media. The instructions 324 may further be transmitted or received over a network.
[0074] While the machine-readable medium is shown in an example to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" can also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term "machine-readable medium" can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0075] A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium that is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, may be used for the memory.
[0076] Furthermore, each of the computing devices of the system 100 may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, micro-controllers, application specific integrated circuits (ASIC), programmable logic devices (PLD), field programmable logic devices (FPLD), field programmable gate arrays (FPGA), and the like, programmed according to the teachings as described and illustrated herein, as will be appreciated by those skilled in the computer, software, and networking arts.
[0077] In addition, two or more computing systems or devices may be substituted for any one of the computing systems in the system 100. Accordingly, principles and advantages of distributed processing, such as redundancy, replication, and the like, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the system 100. The system 100 may also be implemented on a computer system or systems that extend across any network environment using any suitable interface mechanisms and communications technologies including, for example telecommunications in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Network (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, a combination thereof, and the like.
[0078] The operation of the example system 100 shown in FIG. 1, which may be controlled on the example workstation, will now be described with reference to FIG. 1 in conjunction with the flow diagram shown in FIG. 4. The flow diagram in FIG. 4 is representative of example machine readable instructions for implementing selection of a batch of input data for parallel evaluation. In this example, the machine readable instructions comprise an algorithm for execution by: (a) a processor, (b) a controller, and/or (c) one or more other suitable processing device(s). The algorithm may be embodied in software stored on tangible media such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital video (versatile) disk (DVD), or
other memory devices, but persons of ordinary skill in the art will readily appreciate that the entire algorithm and/or parts thereof could alternatively be executed by a device other than a processor and/or embodied in firmware or dedicated hardware in a well-known manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), discrete logic, etc.). For example, any or all of the components of the interfaces could be implemented by software, hardware, and/or firmware. Also, some or all of the machine readable instructions represented by the flowchart of FIG. 4 may be implemented manually. Further, although the example algorithm is described with reference to the flowchart illustrated in FIG. 4, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
[0079] FIG. 4 is a flow diagram of the process for implementing selection of a batch of input data for parallel evaluation with a function. In the process, the system 100 selects an initial set of inputs from the dataset of available input data stored in the database 102 (400). The initial set of inputs is constructed via uncertainty sampling and is selected non-adaptively without prior feedback. Feedback is obtained from the initial set of inputs with the function (402). The Gaussian process module 104 then provides the Gaussian process posterior and a confidence bound decision rule conditioned on the feedback from the initial set of inputs (404). The Gaussian process module 104 thus models the function as drawn from a Gaussian process, determines a mean and a variance of the modeled function via observations, and determines an upper confidence bound from the mean and variance. The batch selection engine 106 determines whether there are sufficient sets of inputs, B, in the batch (406). If there are sufficient inputs, the batch is stored and is complete for evaluation by the function evaluation engine 108 (408). The input data of the batch is evaluated in parallel with the function and the resulting determined data outputs are stored in a memory device such as the output memory device 110.
[0080] If there are more sets of inputs needed, the batch selection engine 106 selects a set of inputs based on the Gaussian process posterior and a confidence bound
decision rule (410). The batch selection engine 106 then hallucinates a corresponding observation based on the selected action (412). The batch selection engine 106 then updates the Gaussian process model using the hallucinated observations (414). The batch selection engine 106 then loops back to determine whether sufficient sets of inputs, B, have been assigned to the batch (406).
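Tying the pieces together, the flow of FIG. 4 might be exercised as in the sketch below. It assumes the gp_posterior and gp_bucb_batch sketches above are in scope, and the toy objective, decision set, batch size, and β value are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)    # discretized decision set D

def f_noisy(x):
    # Toy payoff function with observation noise (illustrative only).
    return float(np.sin(6.0 * x[0]) + 0.1 * rng.standard_normal())

X_obs, y_obs = [], []
for rnd in range(5):                                   # five batches of size B = 4
    batch = gp_bucb_batch(X_cand, X_obs, y_obs, beta_t=4.0,
                          batch_size=4, gp_posterior=gp_posterior)
    outputs = [f_noisy(X_cand[i]) for i in batch]      # evaluated "in parallel"
    X_obs += [X_cand[i] for i in batch]                # feedback for the next batch
    y_obs += outputs                                   # stored data outputs

best = int(np.argmax(y_obs))
print("best observed input:", X_obs[best], "payoff:", y_obs[best])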
[0081] Each of these embodiments and obvious variations thereof is contemplated as falling within the spirit and scope of the claimed invention, which is set forth in the following claims.
Claims
1. A method of selecting a batch of input data from available input data for parallel evaluation by a function, the method comprising:
via a controller, modeling the function as drawn from a Gaussian process;
using observations to determine a mean and a variance of the modeled function;
determining an upper confidence bound from the determined mean and variance;
via the controller, applying a decision rule to select input data from the available input data to add to the batch of input data, wherein the selection is based on a domain-specific time varying parameter;
hallucinating intermediate observations within the batch;
via the controller, using the hallucinated observations with the decision rule to select subsequent input data from the available input data for the batch of input data; via the controller, evaluating the input data of the batch in parallel with the function; and storing the resulting determined data outputs in a memory device.
2. The method of claim 1, wherein the input data relates to peptide binding properties for automated vaccine design, and the data outputs are corresponding binding affinities.
3. The method of claim 1, wherein the input data relates to electrodes for electrical stimulation and locations on a spinal cord, and the data outputs relate to reaction to stimulation.
4. The method of claim 1, wherein the decision rule trades off exploitation with exploration in selecting the input data to add to the batch.
5. The method of claim 1, wherein a regret bound is limited by initializing the modeling with a finite set of observations.
6. The method of claim 1, further comprising selecting a second batch of input data for evaluation with the function after evaluating the input data of the batch, wherein the second batch of input data is selected with feedback from the evaluation of the first batch of data.
7. The method of claim 1, wherein the input data for the batch are determined according to:
x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ_{t-1}(x)]
wherein D is the plurality of available data inputs, x_t is input data selected for the batch, β_t is the domain-specific time-varying parameter to trade off exploitation and exploration, μ_{fb[t]}(x) is a posterior mean, and σ_{t-1}(x) is a standard deviation.
8. The method of claim 7, wherein the confidence intervals associated with the domain-specific time-varying parameter β_t contain the true function with high probability.
9. The method of claim 7, wherein variance is not recalculated for the next x in the batch that lies within the argmax.
10. The method of claim 1, wherein the number of data inputs in the batch has a variable length determined by the information gained by the evaluation of the function.
11. The method of claim 1, wherein the number of data inputs in the batch has a variable length determined by posterior uncertainty.
12. The method of claim 1, wherein the domain-specific time-varying parameter is scheduled according to exp(2C) α_{fb[t]}, wherein C is an upper bound on the conditional mutual information gain from the batch and α_{fb[t]} is chosen according to the domain and requirements of a regret bound.
13. The method of claim 1, wherein the domain-specific time-varying parameter is offset by an additive or subtractive value.
14. The method of claim 1, further comprising:
selecting an initialization set from the available input data by uncertainty sampling without prior feedback;
obtaining feedback outputs from the initialization set with the function; and
applying the outputs to the Gaussian process.
15. A system for determining a batch of input data from available input data for parallel evaluation by a function, the system comprising:
a storage device storing a database including the available input data;
a controller coupled to the storage device, the controller operable to:
model the function as drawn from a Gaussian process;
use observations to determine a mean and a variance of the modeled function; determine an upper confidence bound from the determined mean and variance; apply a decision rule to select input data from the available input data to add to the batch of input data, wherein the selection is based on a domain-specific time varying parameter;
hallucinate intermediate observations within the batch;
select subsequent input data from the available input data for the batch of input data using the hallucinated observations with the decision rule;
evaluate the input data of the batch in parallel with the function; and store the resulting determined data outputs.
16. The system of claim 15, wherein the decision rule trades off exploitation with exploration when selecting the input data for the batch.
17. The system of claim 15, wherein a regret bound is limited by initializing the function model with a finite set of observations.
18. The system of claim 15, wherein the controller is further operable to select a second batch of input data for evaluation with the function, wherein the second batch of input data is selected with feedback from the evaluation of the first batch of data.
19. The system of claim 15, wherein the input data for the batch is determined according to:
x_t = argmax_{x ∈ D} [μ_{fb[t]}(x) + β_t^{1/2} σ_{t-1}(x)]
wherein D is the plurality of available data inputs, x_t is input data selected for the batch, β_t is the domain-specific time-varying parameter to trade off exploitation and exploration, μ_{fb[t]}(x) is a posterior mean, and σ_{t-1}(x) is a standard deviation.
20. The system of claim 19, wherein the confidence intervals associated with the domain-specific time-varying parameter β_t contain the true function with high probability.
21. The system of claim 19, wherein variance is not recalculated for the next x in the batch that lies within the argmax.
22. The system of claim 15, wherein the number of data inputs in the batch has a variable length determined by the information gained by the evaluation of the function.
23. The system of claim 15, wherein the number of data inputs in the batch has a variable length determined by posterior uncertainty.
24. The system of claim 15, wherein the domain-specific time-varying parameter is scheduled according to exp(2C) α_{fb[t]}, wherein C is an upper bound on the conditional mutual information gain from the batch and α_{fb[t]} is chosen according to the domain and requirements of a regret bound.
25. The system of claim 15, wherein the domain- specific time-varying parameter is offset by an additive or subtractive value.
26. A non-transitory, machine readable medium having stored thereon instructions for selecting a batch of input data from available input data for parallel evaluation by a function, which when executed by at least one machine processor, cause the machine to:
model the function as drawn from a Gaussian process;
use observations to determine a mean and a variance of the modeled function;
determine an upper confidence bound from the determined mean and variance;
apply a decision rule to select input data from the available input data to add to the batch of input data, wherein the selection is based on a domain-specific time varying parameter;
hallucinate intermediate observations within the batch;
select subsequent input data from the available input data for the batch of input data using the hallucinated observations with the decision rule; and
evaluate the input data of the batch in parallel with the function.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261660110P | 2012-06-15 | 2012-06-15 | |
US61/660,110 | 2012-06-15 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2013188886A2 true WO2013188886A2 (en) | 2013-12-19 |
WO2013188886A3 WO2013188886A3 (en) | 2014-04-17 |
Family
ID=49756839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/046196 WO2013188886A2 (en) | 2012-06-15 | 2013-06-17 | Method and system for parallel batch processing of data sets using gaussian process with batch upper confidence bound |
Country Status (2)
Country | Link |
---|---|
US (1) | US9342786B2 (en) |
WO (1) | WO2013188886A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111508000A (en) * | 2020-04-14 | 2020-08-07 | 北京交通大学 | Deep reinforcement learning target tracking method based on parameter space noise network |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140365180A1 (en) * | 2013-06-05 | 2014-12-11 | Carnegie Mellon University | Optimal selection of building components using sequential design via statistical based surrogate models |
DE102013212840B4 (en) * | 2013-07-02 | 2022-07-07 | Robert Bosch Gmbh | Model calculation unit and control unit for calculating a data-based function model with data in various number formats |
US10438114B1 (en) | 2014-08-07 | 2019-10-08 | Deepmind Technologies Limited | Recommending content using neural networks |
US10789544B2 (en) * | 2016-04-05 | 2020-09-29 | Google Llc | Batching inputs to a machine learning model |
WO2018153806A1 (en) * | 2017-02-24 | 2018-08-30 | Deepmind Technologies Limited | Training machine learning models |
US11568236B2 (en) | 2018-01-25 | 2023-01-31 | The Research Foundation For The State University Of New York | Framework and methods of diverse exploration for fast and safe policy improvement |
KR20220047277A (en) | 2019-07-16 | 2022-04-15 | 길리애드 사이언시즈, 인코포레이티드 | HIV Vaccines, and Methods of Making and Using the Same |
EP3955128B1 (en) * | 2020-08-11 | 2024-03-13 | Schneider Electric Industries SAS | Optimization of files compression |
EP4277652A1 (en) | 2021-01-14 | 2023-11-22 | Gilead Sciences, Inc. | Hiv vaccines and methods of using |
CN114448495B (en) * | 2022-03-31 | 2023-06-13 | 四川安迪科技实业有限公司 | Equipment batch adding method and device based on TDMA satellite network management |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090150126A1 (en) * | 2007-12-10 | 2009-06-11 | Yahoo! Inc. | System and method for sparse gaussian process regression using predictive measures |
US20090164405A1 (en) * | 2007-12-21 | 2009-06-25 | Honda Motor Co., Ltd. | Online Sparse Matrix Gaussian Process Regression And Visual Applications |
US20100174514A1 (en) * | 2009-01-07 | 2010-07-08 | Aman Melkumyan | Method and system of data modelling |
WO2011032207A1 (en) * | 2009-09-15 | 2011-03-24 | The University Of Sydney | A method and system for multiple dataset gaussian process modeling |
US20110270788A1 (en) * | 2010-04-30 | 2011-11-03 | Moore Douglas A | Neural Network For Clustering Input Data Based On A Gaussian Mixture Model |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7024312B1 (en) * | 1999-01-19 | 2006-04-04 | Maxygen, Inc. | Methods for making character strings, polynucleotides and polypeptides having desired characteristics |
US7865529B2 (en) * | 2003-11-18 | 2011-01-04 | Intelligent Model, Limited | Batch processing apparatus |
US8290882B2 (en) * | 2008-10-09 | 2012-10-16 | Microsoft Corporation | Evaluating decision trees on a GPU |
US8562524B2 (en) * | 2011-03-04 | 2013-10-22 | Flint Hills Scientific, Llc | Detecting, assessing and managing a risk of death in epilepsy |
-
2013
- 2013-06-17 WO PCT/US2013/046196 patent/WO2013188886A2/en active Application Filing
- 2013-06-17 US US13/919,757 patent/US9342786B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090150126A1 (en) * | 2007-12-10 | 2009-06-11 | Yahoo! Inc. | System and method for sparse gaussian process regression using predictive measures |
US20090164405A1 (en) * | 2007-12-21 | 2009-06-25 | Honda Motor Co., Ltd. | Online Sparse Matrix Gaussian Process Regression And Visual Applications |
US20100174514A1 (en) * | 2009-01-07 | 2010-07-08 | Aman Melkumyan | Method and system of data modelling |
WO2011032207A1 (en) * | 2009-09-15 | 2011-03-24 | The University Of Sydney | A method and system for multiple dataset gaussian process modeling |
US20110270788A1 (en) * | 2010-04-30 | 2011-11-03 | Moore Douglas A | Neural Network For Clustering Input Data Based On A Gaussian Mixture Model |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111508000A (en) * | 2020-04-14 | 2020-08-07 | 北京交通大学 | Deep reinforcement learning target tracking method based on parameter space noise network |
CN111508000B (en) * | 2020-04-14 | 2024-02-09 | 北京交通大学 | Deep reinforcement learning target tracking method based on parameter space noise network |
Also Published As
Publication number | Publication date |
---|---|
WO2013188886A3 (en) | 2014-04-17 |
US20130339287A1 (en) | 2013-12-19 |
US9342786B2 (en) | 2016-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9342786B2 (en) | Method and system for parallel batch processing of data sets using Gaussian process with batch upper confidence bound | |
Ng et al. | A mixture model with random-effects components for clustering correlated gene-expression profiles | |
Donnelly et al. | Empirical logit analysis is not logistic regression | |
Zigler | The central role of Bayes’ theorem for joint estimation of causal effects and propensity scores | |
Claxton et al. | A dynamic programming approach to the efficient design of clinical trials | |
JP2022544859A (en) | Systems and methods for imputing data using generative models | |
Simon et al. | Using Bayesian modeling in frequentist adaptive enrichment designs | |
CA3151246A1 (en) | Computer-implemented method and apparatus for analysing genetic data | |
US20230352138A1 (en) | Systems and Methods for Adjusting Randomized Experiment Parameters for Prognostic Models | |
JP2024522840A (en) | Systems and methods for estimating treatment effects in randomized trials using covariate-corrected stratification and pseudovalue regression | |
Hong et al. | Functional hierarchical models for identifying genes with different time-course expression profiles | |
Borgstede | An evolutionary model of reinforcer value | |
Gharibvand et al. | Analysis of survival data with clustered events | |
Jenkins et al. | Inference from samples of DNA sequences using a two-locus model | |
Alban et al. | Learning personalized treatment strategies with predictive and prognostic covariates in adaptive clinical trials | |
Wei et al. | Fair adaptive experiments | |
Yang et al. | Applications of Bayesian statistical methods in microarray data analysis | |
US20230352125A1 (en) | Systems and Methods for Adjusting Randomized Experiment Parameters for Prognostic Models | |
Edhan et al. | Sex with no regrets: How sexual reproduction uses a no regret learning algorithm for evolutionary advantage | |
Soare et al. | Regression with n→ 1 by expert knowledge elicitation | |
JP2024536911A (en) | COMPUTER IMPLEMENTED METHOD AND APPARATUS FOR ANALYZING GENETIC DATA - Patent application | |
Weir et al. | Flexible design and efficient implementation of adaptive dose-finding studies | |
Nguyen et al. | Detecting differentially expressed genes with RNA-seq data using backward selection to account for the effects of relevant covariates | |
Kulasekera et al. | Multi-response based personalized treatment selection with data from crossover designs for multiple treatments | |
O'Malley et al. | Sample size calculation for a historically controlled clinical trial with adjustment for covariates |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13805025 Country of ref document: EP Kind code of ref document: A2 |