US8818922B2 - Method and apparatus for predicting application performance across machines with different hardware configurations - Google Patents
- Publication number
- US8818922B2 (application US13/171,812)
- Authority
- US
- United States
- Prior art keywords
- predictive model
- application
- performance
- model
- simulations
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3452—Performance evaluation by statistical analysis
- G06F11/3457—Performance evaluation by simulation
- G06F11/3409—Recording or statistical evaluation of computer activity for performance assessment
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/815—Virtual
- G06F2201/865—Monitoring of software
Definitions
- This application relates to system management and operation of large-scale systems and networks having heterogeneous components. More particularly, this application relates to a method and apparatus for predicting application performance across machines having hardware configurations with different hardware specifications or settings.
- A number of techniques have been proposed for accurately mapping application performance across machines with different hardware specifications and settings, but each is limited in one way or another. These techniques can be divided into two classes.
- The first class evaluates application performance on a number of different servers in advance and builds a model to summarize the application performance across those machines. In practice, however, it is difficult to collect enough data from machines with different hardware configurations. Lacking measurement data, such actual-evaluation-based techniques include only a limited number of hardware parameters and rely on simple models such as linear regression to learn their relationships. This simplification significantly reduces the accuracy of application performance prediction.
- The second class of techniques relies on software simulation to collect data for performance modeling.
- Simulation tools can construct a complete microprocessor pipeline in software to approximate the application performance on any specified hardware device.
- Sufficient data can therefore be collected from a wide range of hardware configurations to learn a complete model for predicting application performance.
- Software-based simulation, however, necessarily yields uncertain and inaccurate data due to specification inaccuracy, implementation imprecision, and other factors in those tools. As a consequence, the quality of the learned model is affected by those errors.
- A method for predicting performance of an application on a machine of a predetermined hardware configuration comprises: simulating, in a computer process, the performance of the application under a plurality of different simulated hardware configurations; building, in a computer process, a predictive model of the performance of the application based on the results of the simulations; obtaining the performance of the application on a plurality of actual machines, each of the machines having a different hardware configuration; and in a computer process, Bayesian reinterpreting the predictive model built from the results of the simulations using the performance of the application on the plurality of actual machines, to obtain a final predictive model of the performance of the application having an accuracy greater than the predictive model built from the results of the simulations.
- The building of the predictive model comprises modeling nonlinear dependencies between the simulated performance of the application and the simulated hardware configurations with a generalized linear regression model with an L1 penalty.
- The modeling of nonlinear dependencies comprises defining a set of basis functions to transform the original variables so that their nonlinear relationships can be included in the predictive model.
- The modeling of nonlinear dependencies comprises applying the L1-norm penalty on the coefficients of the generalized linear regression model to achieve sparseness of the predictive model's representation.
- The Bayesian reinterpreting of the predictive model comprises searching for an optimal solution for the linear regression model with the L1 penalty.
- The Bayesian reinterpreting of the predictive model built from the results of the simulations comprises relearning parameters of the linear regression model using the performance of the application on the plurality of actual machines.
- The Bayesian reinterpreting of the predictive model built from the results of the simulations comprises defining a prior distribution which embeds information learned from the simulations to restrict the values of the coefficients of the linear regression model.
- The Bayesian reinterpreting of the predictive model built from the results of the simulations comprises maximizing the posterior probability distribution of the model parameters so that the final predictive model comprises contributions from both the simulated and actual hardware configurations.
- An apparatus for predicting performance of an application on a machine of a predetermined hardware configuration.
- The apparatus comprises a processor executing instructions for simulating the performance of the application under a plurality of different simulated hardware configurations; building a predictive model of the performance of the application based on the results of the simulations; and Bayesian reinterpreting the predictive model built from the results of the simulations using the performance of the application on a plurality of actual machines each having a different hardware configuration, to obtain a final predictive model of the performance of the application having an accuracy greater than the predictive model built from the results of the simulations.
- The instructions for building the predictive model comprise instructions for modeling nonlinear dependencies between the simulated performance of the application and the simulated hardware configurations with a generalized linear regression model with an L1 penalty.
- The instructions for modeling nonlinear dependencies comprise instructions for defining a set of basis functions to transform the original variables so that their nonlinear relationships can be included in the predictive model.
- The instructions for modeling nonlinear dependencies comprise instructions for applying the L1-norm penalty on the coefficients of the linear regression model to achieve sparseness of the predictive model's representation.
- The instructions for Bayesian reinterpreting the predictive model comprise instructions for searching for an optimal solution for the linear regression model with the L1 penalty.
- The instructions for Bayesian reinterpreting the predictive model built from the results of the simulations comprise instructions for relearning parameters of the linear regression model using the performance of the application on the plurality of actual machines.
- The instructions for Bayesian reinterpreting the predictive model built from the results of the simulations comprise instructions for defining a prior distribution which embeds information learned from the simulations to restrict the values of the coefficients of the linear regression model.
- The instructions for Bayesian reinterpreting the predictive model built from the results of the simulations comprise instructions for maximizing the posterior probability distribution of the model parameters so that the final predictive model comprises contributions from both the simulated and actual hardware configurations.
- FIG. 1 illustrates an exemplary embodiment of application performance mapping across heterogeneous machines.
- FIG. 2 is a flowchart of a method for estimating application performance across heterogeneous machines according to the principles of the present disclosure.
- FIG. 3 illustrates the construction of a plurality of basis functions that are used to transform variables into a set of new representations in accordance with the process of block 202 of FIG. 2 .
- FIG. 4 is a flowchart detailing the prediction model enhancement processes represented by block 204 of the method of FIG. 2 .
- FIG. 5A is a graph illustrating the prior distribution P(θ|σ̃²).
- FIG. 5B is a graph illustrating the prior distribution P(σ̃²).
- FIG. 6 is a block diagram of an exemplary embodiment of a computer system or apparatus for implementing the method for estimating application performance across heterogeneous machines.
- FIG. 1 illustrates an exemplary embodiment of application performance mapping across heterogeneous machines (servers having hardware configurations with different hardware specifications or settings) used in an enterprise data center or cloud.
- Application A is first hosted by an operating system running on a first server (machine) 10 with a first hardware configuration x a and application A is then hosted by an operating system running on a second server (machine) 20 with a second (different) hardware configuration x b .
- The performance of application A on the first machine is represented as y a .
- When application A moves to the second machine 20 with the different hardware configuration x b , its performance changes to y b under the same workload due to the different computing capacity of the second machine.
- The inputs of the model include, without limitation, the number of data TLB entries, the number of instruction TLB entries, L1 cache size, L1 cache line size, L1 cache associativity (ways), L2 cache size, L2 cache latency, memory latency, load queue size, and issue queue size.
- The output of the model is application performance, represented in one embodiment as the average CPU cycles per instruction (CPI).
- The predictor x in the performance model represents various hardware specifications including, without limitation, data/instruction translation lookaside buffer (TLB) sizes, data/instruction level 1 (L1) cache sizes, level 2 (L2) cache sizes, L1 cache latency, L2 cache latency, and other hardware specifications.
- The hardware specifications can be obtained from the spec sheets of the corresponding machine.
- The response variable y measures the quality of serving the incoming workloads.
- The definition of that performance metric varies with the characteristics of the application. While some computation-intensive applications use the system throughput to measure the quality of service, some user-interactive applications rely on the request response time to describe the performance. Instead of focusing on those application-specific metrics, the method of the present disclosure uses machine CPU utilization for system performance, because CPU utilization has been shown to be highly correlated with high-level performance metrics such as throughput or request response time.
- Machine CPU utilization also depends on the intensity of the incoming workloads. Because the present method requires a performance variable whose value is determined only by the specifications of the underlying hardware, the method of the present disclosure removes the workload contribution by decomposing the machine CPU utilization as: CPU utilization = (number of issued instructions × CPI) / (CPU speed × measurement time).
- Machine CPU utilization is thus determined by the number of instructions issued by the application, the CPU cycles per instruction (CPI), and the CPU speed.
- The number of issued instructions is proportional to the intensity of the workloads, and the CPU speed is a parameter that can be obtained from the hardware specifications. Therefore, the method of the present disclosure focuses on the CPU cycles per instruction (CPI) as the performance variable y.
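The decomposition above can be sketched as a small helper. This is an illustrative function, not code from the patent; the name and argument units are assumptions.

```python
def cpu_utilization(instructions, cpi, cpu_hz, interval_s):
    """Decompose machine CPU utilization as described in the text:
    cycles consumed by the workload (instructions * CPI) divided by
    cycles available on the core (CPU speed * measurement interval)."""
    return (instructions * cpi) / (cpu_hz * interval_s)

# 2e9 instructions at 1.5 cycles/instruction on a 3 GHz core over 1 s
# consume exactly the 3e9 available cycles, i.e. utilization 1.0.
u = cpu_utilization(2e9, 1.5, 3e9, 1.0)
```

Holding the workload (instructions per second) fixed, utilization varies only through CPI and clock speed, which is why CPI is the hardware-determined quantity the model predicts.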
- The prediction model of the present disclosure can benefit many management tasks in a heterogeneous environment.
- For example, the prediction model can be used to determine the right number and types of new machines to purchase during system capacity planning, even when those machines are not yet available.
- The recent resurgence of virtualization technology has also introduced considerable interest in performance mapping across heterogeneous hardware, because virtualized applications are capable of migrating between different machines. If the original and destination machines differ, some management tools may require recalibration after migration, especially tools that rely on the relationship between the application performance and other system measurements such as the workload intensity. Model recalibration needs to be accomplished in real time so that it can take effect immediately after the migration.
- FIG. 2 is a flowchart of the method for estimating application performance across heterogeneous machines according to the principles of the present disclosure.
- First, the behavior of the application of interest is simulated under various hardware settings.
- A statistical model is then built to summarize the simulation results.
- The application is also evaluated on a number of actual hardware instances to account for errors in simulation.
- The actual hardware data is applied to the model learned from simulation using Bayesian learning theory to enhance its accuracy.
- Bayesian learning theory allows the method of the present disclosure to take full advantage of both actual-evaluation and simulation-based methods, thereby avoiding their shortcomings. As a consequence, the method of the present disclosure obtains a better performance prediction model than existing techniques.
- A simulation tool such as, but not limited to, PTLsim is used to collect data [x, y], where x represents the hardware specifications of the machine of interest and y is the application performance, i.e., the average CPU cycles per instruction (CPI) on that machine.
- A generalized linear regression with an L1 penalty is used in block 202 to model the non-linear dependencies between the application performance (response y) and the underlying hardware parameters (input variables x).
- A plurality of non-linear templates, based on domain knowledge, are generated to transform the original variables, and a set of polynomial basis functions is applied to the new variables.
- The method applies the L1 penalty on the regression coefficients, and an algorithm (described further on) is used to identify the optimal solution for that constrained regression problem.
- The sparse statistical model that results from this process can effectively predict the performance of the application based on simulation results.
- The process of block 204 comprises running the application on a limited number of actual hardware instances, and the process of block 206 uses Bayesian learning to enhance the model learned from simulation.
- The evaluation data from the actual hardware instances is used to relearn the parameters of the regression model from the simulation.
- The knowledge learned from simulation is used to restrict the values of the regression coefficients.
- Such a prior constraint is represented as a Gaussian distribution whose mean is the values of the corresponding coefficients learned from simulation.
- FIG. 3 illustrates the construction of a plurality of basis functions that are used to transform variables into a set of new representations in accordance with the process of block 202 of FIG. 2 .
- A set of new variables z is defined through a transformation of the original inputs.
- Such a transformation is based on the observation that a logarithmic relationship frequently appears between the application performance and many hardware parameters such as the TLB size, the cache size, and so on.
- The new set z contains the logarithmic transformation of all inputs x.
- Block 302 applies a polynomial kernel of order 2 on the variables z to obtain a pool of basis functions {φ1(z), φ2(z), . . . , φp(z)}.
- Those basis functions contain the terms of the variables z taken to polynomial degree at most 2.
- φk is used to denote the basis function φk(z).
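The basis-function construction of blocks 300-302 can be sketched as follows: a logarithmic transform of each input, followed by all polynomial terms of degree at most 2. This is an illustrative sketch; the function name and the inclusion of a constant term are assumptions not spelled out in the excerpt.

```python
import numpy as np
from itertools import combinations_with_replacement

def basis_functions(x):
    """Transform raw hardware parameters x into a pool of basis functions:
    z = log(x), then all terms of z of polynomial degree <= 2."""
    z = np.log(np.asarray(x, dtype=float))
    feats = [1.0]                      # degree-0 (constant) term
    feats += list(z)                   # degree-1 terms z_i
    feats += [z[i] * z[j]              # degree-2 terms z_i * z_j
              for i, j in combinations_with_replacement(range(len(z)), 2)]
    return np.array(feats)

# e.g. x = (data TLB entries, L1 cache size in bytes, L1 associativity)
phi = basis_functions([64, 32768, 8])
```

For d raw inputs this yields 1 + d + d(d+1)/2 basis functions, each of which becomes one column of the regression design matrix Φ.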
- where λ≥0 is a parameter to balance the tradeoff between the error and penalization parts in equation (3). Since the goal of regularization is to minimize the number of non-zero elements in β, a natural choice of g(β) would be the L0-norm ∥β∥0. However, since optimizing ∥β∥0 involves a combinatorial search that is hard to solve, g(β) is often chosen to be a relaxed form of the L0-norm; among the many relaxations, the L1-norm is the most effective.
- The optimization process of the present disclosure is based on the fact that the Laplacian prior of equation (5) can be rewritten as a hierarchical decomposition of two other distributions: a zero-mean Gaussian prior p(βi|τi) with variance τi, and an exponential hyperprior p(τi|γ) on that variance.
- Equation (9) cannot be maximized directly. Instead, the following expectation-maximization (EM) process is used to find the solution.
- The EM process is an iterative technique, which computes the expectation of the hidden variables τ and uses that expectation as the estimate of τ to find the optimal solution. Each iteration comprises an E-step and an M-step.
- The E-step computes the conditional expectation of Γ(τ) given y and the current estimates σ̂²(t) and β̂(t), denoted V(t).
- The M-step performs the maximization of equation (9) with respect to σ² and β, except that the matrix Γ(τ) is replaced with its conditional expectation V(t). Accordingly, the following equations are obtained:
- σ̂²(t+1) = argmaxσ² [ −n log σ² − ∥y − Φβ̂(t)∥²/σ² ] = ∥y − Φβ̂(t)∥²/n, (13)
- β̂(t+1) = argmaxβ [ −∥y − Φβ∥²/(2σ²) − βᵀV(t)β/2 ] = ( σ̂²(t+1) V(t) + ΦᵀΦ )⁻¹ Φᵀy. (14)
- the EM process is easy to implement, and converges to the maximum of posterior probability of equation (6) quickly.
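The iteration of equations (13) and (14) can be sketched as below. The E-step weight V(t) = diag(γ/|β̂i(t)|) is an assumption for this sketch (a common choice for the Laplacian-prior EM); the excerpt does not give the E-step formula explicitly, and the function name and defaults are illustrative.

```python
import numpy as np

def em_l1_regression(Phi, y, gamma=1.0, n_iter=50, eps=1e-8):
    """EM iteration for the L1-penalized regression: alternate between
    updating the noise level sigma^2 (eq. 13) and a reweighted
    ridge-type solve in which Gamma(tau) is replaced by its conditional
    expectation V(t) (eq. 14)."""
    n, p = Phi.shape
    beta = np.linalg.lstsq(Phi, y, rcond=None)[0]      # initialize at least squares
    for _ in range(n_iter):
        V = np.diag(gamma / (np.abs(beta) + eps))      # E-step: E[Gamma(tau)] (assumed form)
        resid = y - Phi @ beta
        sigma2 = resid @ resid / n                     # M-step, eq. (13)
        beta = np.linalg.solve(sigma2 * V + Phi.T @ Phi, Phi.T @ y)  # M-step, eq. (14)
    return beta, sigma2
```

Because the weight on each coefficient grows as the coefficient shrinks, irrelevant basis functions are driven toward zero across iterations, yielding the sparse model the text describes.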
- The initial data for constructing the model may contain errors.
- The application is therefore evaluated on a small number m of actual hardware instances to enhance the quality of prediction.
- The number of real evaluations m is much smaller than the size of the simulation data. If the generalized regression were learned in the same way as in the simulation, the model could contain large variances. Instead, the knowledge learned from simulation and the real evaluation data are combined to improve the prediction model.
- FIG. 4 illustrates a flowchart detailing the prediction model enhancement processes of block 204 of FIG. 2 .
- The actual evaluation measurements are transformed into a set of basis functions {φ̃i} in generally the same manner as described above with respect to blocks 300 and 302 of FIG. 3 , with the exception that, rather than including all the components in the regression (2), only the relevant basis functions are selected into the model, i.e., those with non-zero coefficients in the performance model learned from simulation.
- ỹ = θ1φ̃1 + θ2φ̃2 + . . . + θKφ̃K.
- With only a few real measurements, the least-squares solution θ̂ may not be accurate. Therefore, the knowledge learned from simulation is used to guide the estimation of the prediction model θ, thereby improving the quality of estimation. That is, the values of θ should be close to the corresponding coefficients β learned from simulation. The insight here is that although the coefficients β learned from simulation are not exact, they can still indicate the plausible range of θ values. Therefore, in block 402 , a prior constraint is added on θ, whose value follows a Gaussian distribution with mean β̄, the corresponding coefficient values learned during model construction, and covariance Σ:
- The distribution of θ is thus centered around the mean β̄ learned from simulation.
- P(σ̃²) = bᵃ/Γ(a) · (σ̃²)^−(a+1) · exp(−b/σ̃²), (19)
- where a, b are two parameters to control the shape and scale of the distribution, and Γ(a) is the gamma function of a.
- The final solution (prediction model) is obtained in block 404 by combining equations (16), (18), and (19) to express the posterior distribution of the model parameters: P(θ, σ̃² | ỹ, Φ̃) ∝ P(ỹ | Φ̃, θ, σ̃²) P(θ | σ̃²) P(σ̃²).
- The final prediction model θ* is a weighted average of the prior mean β̄ and the model θ̂ obtained from the standard least-squares solution of equation (17).
- The above Bayesian guided learning generates the final coefficients θ* for the performance model (15), combining the outcomes of the real evaluation and simulation processes.
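The combination step, θ* = (Φ̃ᵀΦ̃ + Σ⁻¹)⁻¹(Φ̃ᵀΦ̃θ̂ + Σ⁻¹β̄), can be sketched as below. This is an illustrative sketch under the Gaussian prior of block 402; the function and variable names are assumptions.

```python
import numpy as np

def bayes_combine(Phi_real, y_real, beta_prior, Sigma):
    """Weighted average of the least-squares estimate from real data
    (eq. 17) and the prior mean learned from simulation, via
    theta* = (Phi^T Phi + Sigma^-1)^-1 (Phi^T Phi theta_hat + Sigma^-1 beta_prior)."""
    G = Phi_real.T @ Phi_real                 # data precision (up to sigma^2)
    Sinv = np.linalg.inv(Sigma)               # prior precision
    theta_hat = np.linalg.lstsq(Phi_real, y_real, rcond=None)[0]   # eq. (17)
    return np.linalg.solve(G + Sinv, G @ theta_hat + Sinv @ beta_prior)
```

In the limits, abundant real data (large Φ̃ᵀΦ̃) makes θ* approach θ̂, while a tight prior (small Σ) makes θ* approach the simulation-learned coefficients, matching the weighted-average interpretation in the text.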
- FIG. 6 is a block diagram of an exemplary embodiment of a computer system or apparatus 600 for implementing the methods described herein.
- The computer system 600 includes at least one CPU 620 , at least one memory 630 for storing one or more programs which are executable by the processor(s) 620 for implementing the method described herein, one or more inputs 640 for receiving input data, and an output 660 for outputting data.
Description
y=β1φ1(z)+β2φ2(z)+ . . . +βpφp(z) (2)
where λ≧0 is a parameter to balance the tradeoff between the error and penalization parts in equation (3). Since the goal of regularization is to minimize the number of non-zero elements in β, a natural choice of g(β) would be the L0-norm of β, ∥β∥0. However, since optimizing ∥β∥0 involves a combinatorial search that is hard to solve, g(β) is often chosen to be some relaxed form of the L0-norm. Among many choices of relaxations, the L1-norm is the most effective. It is well known that with the L1-norm constraint, g(β)=∥β∥1, the optimal solution β is constrained to be on the axes in the coefficient space and thus is sparse, whereas alternatives such as the L2-norm do not have that property. Therefore, the L1-norm is used as the penalty function g(β) to enforce the sparseness of the solution β.
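The sparsity property described above is easiest to see in one dimension, where the L1 penalty soft-thresholds an observation to exactly zero while the L2 penalty only scales it. This is a minimal illustration, not part of the patent; the closed forms below are the standard scalar minimizers.

```python
def l1_shrink(yv, lam):
    # Minimizer of (yv - b)^2 + lam * |b|: soft thresholding at lam/2.
    mag = max(abs(yv) - lam / 2.0, 0.0)
    return mag if yv >= 0 else -mag

def l2_shrink(yv, lam):
    # Minimizer of (yv - b)^2 + lam * b^2: proportional shrinkage.
    return yv / (1.0 + lam)
```

A small observation is driven exactly to zero under the L1 penalty but merely reduced under L2, which is why the L1 penalty yields a sparse set of basis-function coefficients.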
where σ2 describes the noise level, and each coefficient βi is governed by a Laplacian prior
where γ is a predefined constant in the prior. The optimization of (3) maximizes the posterior distribution
p(β,σ2 |D,γ)∝p(y|β,σ 2)p(β|γ) (6)
Note that because the variance σ2 in (4) is also unknown, it is incorporated into the optimization process.
As a result, the distribution (6) can be rewritten as
p(y|β,σ 2)p(β|γ)=p(y|β,σ 2)p(β|τ)p(τ|γ). (8)
where Γ(τ)=diag(τ1−1, . . . , τp−1) is the diagonal matrix of the inverse variances of all the βi. By taking the derivatives with respect to β and σ2 respectively, the solution that maximizes equation (9) is obtained.
{tilde over (y)}=θ 1{tilde over (φ)}1+θ2{tilde over (φ)}2+ . . . +θK{tilde over (φ)}K. (15)
from which the following least square solution is obtained:
{circumflex over (θ)}=({tilde over (Φ)}T{tilde over (Φ)})−1{tilde over (Φ)}T {tilde over (y)}, (17)
where [{tilde over (Φ)},{tilde over (y)}] represents the real evaluation data, and {tilde over (σ)}2 is the measurement noise. Note that the symbol "~" is used to differentiate these variables from those in the simulation stage.
where a, b are two parameters to control the shape and scale of the distribution, and Γ(a) is the gamma function of a. In one exemplary embodiment, a=1, b=1 can be used to plot the curve of P({tilde over (σ)}2) shown in FIG. 5B.
P(θ,{tilde over (σ)}2 |{tilde over (y)},{tilde over (Φ)})∝P({tilde over (y)}|{tilde over (Φ)},θ,{tilde over (σ)}2)P(θ|{tilde over (σ)}2)P({tilde over (σ)}2) (20)
By integrating out {tilde over (σ)}2 in P(θ,{tilde over (σ)}2|{tilde over (y)},{tilde over (Φ)}), we obtain the marginal distribution for the prediction model θ as a multivariate t-distribution, whose maximum is found at
θ*=({tilde over (Φ)}T{tilde over (Φ)}+Σ−1)−1({tilde over (Φ)}T{tilde over (Φ)}{circumflex over (θ)}+Σ−1β̄), where β̄ is the vector of corresponding coefficients learned from simulation (the mean of the prior in equation (18)).
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/171,812 US8818922B2 (en) | 2010-06-29 | 2011-06-29 | Method and apparatus for predicting application performance across machines with different hardware configurations |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US35942610P | 2010-06-29 | 2010-06-29 | |
| US13/171,812 US8818922B2 (en) | 2010-06-29 | 2011-06-29 | Method and apparatus for predicting application performance across machines with different hardware configurations |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20110320391A1 US20110320391A1 (en) | 2011-12-29 |
| US8818922B2 true US8818922B2 (en) | 2014-08-26 |
Family
ID=45353462
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/171,812 Expired - Fee Related US8818922B2 (en) | 2010-06-29 | 2011-06-29 | Method and apparatus for predicting application performance across machines with different hardware configurations |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US8818922B2 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140026111A1 (en) * | 2011-04-11 | 2014-01-23 | Gregory Michael Stitt | Elastic computing |
| US20140304396A1 (en) * | 2013-04-09 | 2014-10-09 | International Business Machines Corporation | It system infrastructure prediction based on epidemiologic algorithm |
| WO2019017947A1 (en) * | 2017-07-20 | 2019-01-24 | Hewlett-Packard Development Company, L.P. | Predicting performance of a computer system |
| US20220092385A1 (en) * | 2020-09-18 | 2022-03-24 | Kabushiki Kaisha Toshiba | Information processing device and information processing system |
| US12405827B2 (en) | 2022-01-07 | 2025-09-02 | International Business Machines Corporation | Cognitive allocation of specialized hardware resources |
Families Citing this family (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8527704B2 (en) * | 2010-11-11 | 2013-09-03 | International Business Machines Corporation | Method and apparatus for optimal cache sizing and configuration for large memory systems |
| US9038043B1 (en) * | 2012-06-21 | 2015-05-19 | Row Sham Bow, Inc. | Systems and methods of information processing involving activity processing and/or optimization features |
| JP2014041566A (en) * | 2012-08-23 | 2014-03-06 | Nippon Telegr & Teleph Corp <Ntt> | Device, method, and program for linear regression model estimation |
| US10303618B2 (en) * | 2012-09-25 | 2019-05-28 | International Business Machines Corporation | Power savings via dynamic page type selection |
| US9363154B2 (en) | 2012-09-26 | 2016-06-07 | International Business Machines Corporation | Prediction-based provisioning planning for cloud environments |
| US20150019301A1 (en) * | 2013-07-12 | 2015-01-15 | Xerox Corporation | System and method for cloud capability estimation for user application in black-box environments using benchmark-based approximation |
| US9274918B2 (en) | 2013-07-25 | 2016-03-01 | International Business Machines Corporation | Prediction of impact of workload migration |
| US9715663B2 (en) | 2014-05-01 | 2017-07-25 | International Business Machines Corporation | Predicting application performance on hardware accelerators |
| JP6885394B2 (en) * | 2016-03-31 | 2021-06-16 | 日本電気株式会社 | Information processing system, information processing device, simulation method and simulation program |
| US10860618B2 (en) | 2017-09-25 | 2020-12-08 | Splunk Inc. | Low-latency streaming analytics |
| US10692031B2 (en) * | 2017-11-02 | 2020-06-23 | International Business Machines Corporation | Estimating software as a service cloud computing resource capacity requirements for a customer based on customer workflows and workloads |
| US10997180B2 (en) | 2018-01-31 | 2021-05-04 | Splunk Inc. | Dynamic query processor for streaming and batch queries |
| US10453167B1 (en) | 2018-04-18 | 2019-10-22 | International Business Machines Corporation | Estimating performance of GPU application for different GPU-link performance ratio |
| CN108829517B (en) * | 2018-05-31 | 2021-04-06 | 中国科学院计算技术研究所 | A training method and system for machine learning in a cluster environment |
| US20210209481A1 (en) | 2018-07-06 | 2021-07-08 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and systems for dynamic service performance prediction using transfer learning |
| US11188348B2 (en) * | 2018-08-31 | 2021-11-30 | International Business Machines Corporation | Hybrid computing device selection analysis |
| US10936585B1 (en) | 2018-10-31 | 2021-03-02 | Splunk Inc. | Unified data processing across streaming and indexed data sets |
| US11892933B2 (en) | 2018-11-28 | 2024-02-06 | Oracle International Corporation | Predicting application performance from resource statistics |
| US10467360B1 (en) * | 2019-01-02 | 2019-11-05 | Fmr Llc | System and method for dynamically determining availability of a computing resource |
| CN110362460B (en) * | 2019-07-12 | 2024-05-10 | 腾讯科技(深圳)有限公司 | Application program performance data processing method, device and storage medium |
| US11238048B1 (en) | 2019-07-16 | 2022-02-01 | Splunk Inc. | Guided creation interface for streaming data processing pipelines |
| US12164524B2 (en) | 2021-01-29 | 2024-12-10 | Splunk Inc. | User interface for customizing data streams and processing pipelines |
| US11663219B1 (en) * | 2021-04-23 | 2023-05-30 | Splunk Inc. | Determining a set of parameter values for a processing pipeline |
| US12242892B1 (en) | 2021-04-30 | 2025-03-04 | Splunk Inc. | Implementation of a data processing pipeline using assignable resources and pre-configured resources |
| US11989592B1 (en) | 2021-07-30 | 2024-05-21 | Splunk Inc. | Workload coordinator for providing state credentials to processing tasks of a data processing pipeline |
| US12164522B1 (en) | 2021-09-15 | 2024-12-10 | Splunk Inc. | Metric processing for streaming machine learning applications |
| CN113886920B (en) * | 2021-10-08 | 2024-06-11 | 中国矿业大学 | A bridge vibration response data prediction method based on sparse Bayesian learning |
| WO2024144780A1 (en) * | 2022-12-29 | 2024-07-04 | Rakuten Mobile, Inc. | Integrated application performance and infrastructure management |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7346736B1 (en) * | 2004-12-13 | 2008-03-18 | Sun Microsystems, Inc. | Selecting basis functions to form a regression model for cache performance |
| US20100131440A1 (en) * | 2008-11-11 | 2010-05-27 | Nec Laboratories America Inc | Experience transfer for the configuration tuning of large scale computing systems |
| US20110004426A1 (en) * | 2003-09-29 | 2011-01-06 | Rockwell Automation Technologies, Inc. | System and method for energy monitoring and management using a backplane |
| US20110044524A1 (en) * | 2008-04-28 | 2011-02-24 | Cornell University | Tool for accurate quantification in molecular mri |
- 2011-06-29: US application US13/171,812 filed; granted as US8818922B2; status: not active, Expired - Fee Related
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140026111A1 (en) * | 2011-04-11 | 2014-01-23 | Gregory Michael Stitt | Elastic computing |
| US9495139B2 (en) * | 2011-04-11 | 2016-11-15 | University Of Florida Research Foundation, Inc. | Elastic computing |
| US20140304396A1 (en) * | 2013-04-09 | 2014-10-09 | International Business Machines Corporation | It system infrastructure prediction based on epidemiologic algorithm |
| US9313114B2 (en) * | 2013-04-09 | 2016-04-12 | International Business Machines Corporation | IT system infrastructure prediction based on epidemiologic algorithm |
| US9699053B2 (en) | 2013-04-09 | 2017-07-04 | International Business Machines Corporation | IT system infrastructure prediction based on epidemiologic algorithm |
| WO2019017947A1 (en) * | 2017-07-20 | 2019-01-24 | Hewlett-Packard Development Company, L.P. | Predicting performance of a computer system |
| US11294788B2 (en) | 2017-07-20 | 2022-04-05 | Hewlett-Packard Development Company, L.P. | Predicting performance of a computer system |
| US20220092385A1 (en) * | 2020-09-18 | 2022-03-24 | Kabushiki Kaisha Toshiba | Information processing device and information processing system |
| US12405827B2 (en) | 2022-01-07 | 2025-09-02 | International Business Machines Corporation | Cognitive allocation of specialized hardware resources |
Also Published As
| Publication number | Publication date |
|---|---|
| US20110320391A1 (en) | 2011-12-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8818922B2 (en) | Method and apparatus for predicting application performance across machines with different hardware configurations | |
| Bouhlel et al. | A Python surrogate modeling framework with derivatives | |
| US11836576B2 (en) | Distributed machine learning at edge nodes | |
| Rosin | Multi-armed bandits with episode context | |
| CN114424204B (en) | Data evaluation using reinforcement learning | |
| WO2023060737A1 (en) | Expected value estimation method and apparatus in quantum system, device, and system | |
| Awad et al. | Hidden markov model | |
| Schreiber et al. | Exponential integrators with parallel-in-time rational approximations for the shallow-water equations on the rotating sphere | |
| Stafford | Continuous integration of data into ground-motion models using Bayesian updating | |
| WO2018144534A1 (en) | Hardware-based machine learning acceleration | |
| Hafeez et al. | Empirical analysis and modeling of compute times of cnn operations on aws cloud | |
| Sheikh et al. | A bayesian approach to online performance modeling for database appliances using gaussian models | |
| US11928556B2 (en) | Removing unnecessary history from reinforcement learning state | |
| Gohil et al. | The importance of generalizability in machine learning for systems | |
| Singh et al. | Improving the quality of software by quantifying the code change metric and predicting the bugs | |
| Coletti et al. | Bayesian backcalculation of pavement properties using parallel transitional Markov chain Monte Carlo | |
| Johnston et al. | OpenCL performance prediction using architecture-independent features | |
| Wang et al. | A bayesian approach to parameter inference in queueing networks | |
| US20090138237A1 (en) | Run-Time Characterization of On-Demand Analytical Model Accuracy | |
| US20190034825A1 (en) | Automatically selecting regression techniques | |
| US7120567B2 (en) | Method and apparatus for determining output uncertainty of computer system models | |
| KR20040054711A (en) | System and method for assigning an engine measure metric to a computing system | |
| KR20150064673A (en) | Method and device for determining a gradient of a data-based function model | |
| Giannakopoulos et al. | Towards an adaptive, fully automated performance modeling methodology for cloud applications | |
| Wang et al. | QMLE: A methodology for statistical inference of service demands from queueing data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 2011-08-12 | AS | Assignment | Owner: NEC LABORATORIES AMERICA, INC., NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; assignors: CHEN, HAIFENG; KANG, HUI; JIANG, GUOFEI; and others. Reel/frame: 026740/0392 |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| 2015-01-13 | AS | Assignment | Owner: NEC CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; assignor: NEC LABORATORIES AMERICA, INC. Reel/frame: 034765/0565 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (original event code: M1551); year of fee payment: 4 |
| | FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (original event code: REM.); entity status of patent owner: LARGE ENTITY |
| | LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (original event code: EXP.); entity status of patent owner: LARGE ENTITY |
| | STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| 2022-08-26 | FP | Lapsed due to failure to pay maintenance fee | Effective date: 2022-08-26 |