US11537615B2 - Using machine learning to estimate query resource consumption in MPPDB - Google Patents
Using machine learning to estimate query resource consumption in MPPDB
- Publication number
- US11537615B2 (application US15/959,442)
- Authority
- US
- United States
- Prior art keywords
- query
- resource
- execution
- time slot
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24542—Plan optimisation
- G06F16/24545—Selectivity estimation or determination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/217—Database tuning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24532—Query optimisation of parallel queries
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24547—Optimisations to support specific applications; Extensibility of optimisers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/283—Multi-dimensional databases or data warehouses, e.g. MOLAP or ROLAP
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Definitions
- Embodiments of the present disclosure relate to the field of machine learning, and in particular, to a method and an apparatus for using machine learning to estimate query resource consumption in a massively parallel processing database (MPPDB).
- Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from examples, data sets, direct experience, or instructions without being explicitly programmed.
- the primary aim is to allow computers to learn automatically without human intervention or assistance and to adjust actions accordingly.
- a method that includes receiving a query from a client device; generating a query plan for the query by parsing the query to determine operators of the query and a sequence of the operators; performing a query resource consumption estimation for the query based on the query plan using a predictive trained model generated using a machine learning technology that automates analytical model building; determining whether a currently available system resource is sufficient to initiate execution of the query based on the query resource consumption estimation of the query; initiating execution of the query in response to a determination that the currently available system resource is sufficient for executing the query based on the query resource consumption estimation of the query; receiving a result of the query after execution of the query is completed; and returning the result of the query to the client device.
- the method further includes executing the query based on a concurrency query execution plan in response to a determination that the currently available system resource is insufficient for initiating execution of the query based on the query resource consumption estimation of the query.
- the method further includes reducing the currently available system resource by the query resource consumption estimation for the query in response to initiating execution of the query.
- the method further includes increasing the currently available system resource by the query resource consumption estimation for the query in response to completing execution of the query.
- the process of generating the query plan for the query includes parsing the query into an execution hierarchical tree, where each tree node of the execution hierarchical tree represents an operator.
- the method further includes determining a number of instances each operator appears in the query and a sum of cardinalities for each instance of the operator.
- another implementation of the aspect provides that the machine learning technology utilizes an adaptive kernel that is configured to learn different kernel metrics for various system settings and data.
- another implementation of the aspect provides that the machine learning technology utilizes multi-level stacking technology configured to leverage the outputs of diverse base classifier models.
- another implementation of the aspect provides that the machine learning technology jointly performs query resource consumption estimation for a query and resource extreme event detection.
- the method further includes placing the query into a query execution queue in response to a determination that the currently available system resource is insufficient for initiating execution of the query based on the query resource consumption estimation of the query.
- a query management device that includes a network communication interface configured to enable communication over a network; a memory storage comprising instructions; and one or more processors in communication with the network communication interface and the memory, wherein the one or more processors execute the instructions to: receive a query from a client device; generate a query plan for the query by parsing the query to determine operators of the query and a sequence of the operators; perform a query resource consumption estimation for the query based on the query plan using a predictive trained model generated using a machine learning technology that automates analytical model building; determine whether a currently available system resource is sufficient to initiate execution of the query based on the query resource consumption estimation of the query; initiate execution of the query in response to a determination that the currently available system resource is sufficient for executing the query based on the query resource consumption estimation of the query; receive a result of the query after execution of the query is completed; and return the result of the query to the client device.
- a further implementation of the query management device provides that the processing unit executes the query based on a concurrency query execution plan in response to a determination that the currently available system resource is insufficient for initiating execution of the query based on the query resource consumption estimation of the query.
- a further implementation of the query management device provides that the processing unit executes the executable instructions to reduce the currently available system resource by the query resource consumption estimation for the query in response to initiating execution of the query.
- a further implementation of the query management device provides that the processing unit executes the executable instructions to increase the currently available system resource by the query resource consumption estimation for the query in response to completing execution of the query.
- a further implementation of the query management device provides that the process of generating the query plan for the query includes parsing the query into an execution hierarchical tree, where each tree node of the execution hierarchical tree represents an operator.
- a further implementation of the query management device provides that the processing unit executes the executable instructions to determine a number of instances each operator appears in the query and a sum of cardinalities for each instance of the operator.
- a further implementation of the query management device provides that the machine learning technology utilizes an adaptive kernel that is configured to learn different kernel metrics for various system settings and data.
- a further implementation of the query management device provides that the machine learning technology utilizes multi-level stacking technology configured to leverage outputs of diverse base classifier models.
- a further implementation of the query management device provides that the machine learning technology jointly performs query resource consumption estimation for a query and resource extreme event detection.
- FIG. 1 is a schematic diagram illustrating a high level system architecture of a co-prediction machine learning system in accordance with an embodiment of the present disclosure.
- FIG. 2 is a schematic diagram illustrating a simplified view of using machine learning for estimating system resource utilization tasks in accordance with an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram illustrating a predictive ML process in accordance with an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram illustrating a process for generating a query plan feature vector in accordance with an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram illustrating a process for predicting the workload in accordance with an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of a two-level stacking technique in accordance with an embodiment of the present disclosure.
- FIG. 7 is a flowchart illustrating a process for query concurrency management flow control in accordance with an embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of an apparatus in accordance with an embodiment of the present disclosure.
- an MPPDB is a database management system that partitions data across multiple servers or nodes, enabling queries to be split into a set of coordinated processes that are executed in parallel on one or more nodes to achieve faster results.
- a system resource is a component that provides certain capabilities and contributes to the overall system performance.
- Non-limiting examples of system resources include system memory, cache memory, hard disk space, a central processing unit (CPU), and input/output (I/O) channels.
- the disclosed embodiments comprise a workload management (WLM) component that uses innovative machine learning techniques to estimate query resource costs. As a result, query concurrency levels and system resource utilization are improved. In addition, resource extreme events such as, but not limited to, out of memory (OOM) occurrences are avoided.
- queries may be complex.
- The execution time of queries may range from seconds to hours (or even days).
- the long-running queries may require the use of resource-intensive operators, such as sort and/or hash-join.
- the operators use resources to sort their input data and/or to join their data sets.
- the amount of resources allocated to the operators affects the performance of the query such as, for example, the elapsed time.
- the larger the resource space that is assigned, the better the performance the query achieves.
- there are only limited resources available in a computer system, and these resources are shared by all concurrent queries (and operators). Therefore, a challenge for the database system is to design an effective strategy to manage query concurrency levels given limited system resources.
- effectively managing query concurrency levels may be an extremely difficult task because it is not easy to accurately estimate query resource consumption with current technologies.
- assume, for example, that the available memory of a computer system running a Gauss200 OLAP Data Warehouse system is 64 gigabytes (GB) and that three queries, Q1, Q2, and Q3, arrive at the system one after the other.
- the estimated memory cost of each of the three queries is 20 GB (i.e., <Q1, 20 GB>, <Q2, 20 GB> and <Q3, 20 GB>).
- Q1 is admitted into the database system first and starts executing because its memory cost is 20 GB, which is less than the system's current available memory of 64 GB.
- the query is accommodated.
- the system's current available memory is reduced to 44 GB.
- the memory bookkeeping is handled by a workload manager, e.g., a WLM component, in the database system.
- after being admitted into the database system, Q2 and Q3 start executing, as the current available memory of the computer system is 44 GB, which is sufficient for Q2, <Q2, 20 GB>, and Q3, <Q3, 20 GB>; the system is estimated to still have 4 GB left during execution of these three queries.
- an OOM issue may still occur in the system. The reason is that the estimated memory consumption is not accurate. For example, if each of the three queries consumed 24 GB of memory, instead of the estimated 20 GB, an OOM occurs because the actual required memory is 72 GB, which is greater than the available 64 GB of memory.
- This OOM issue could be avoided if the estimated resource consumption was accurate. For instance, if the estimated memory consumption for Q1, Q2, and Q3 were accurately estimated to be 24 GB, the system could initiate execution of Q1 as the system's current available memory is 64 GB. The system could also initiate execution of Q2 as the system's current available memory is 40 GB after accounting for Q1, which is greater than the estimated memory consumption for Q2. After accounting for both Q1 and Q2, the system's current available memory is 16 GB, which is not sufficient to execute Q3. The system places Q3 in an execution queue until the system's current available memory is sufficient to execute Q3. Thus, OOM is avoided due to accurately estimating the memory consumption for each of the queries.
- the WLM component may be configured to only execute a query if executing the query based on its estimated resource consumption does not reduce the available system resource beyond a particular threshold.
- the WLM may be configured to try and maintain at least 4 GB of available system memory at all times just in case a particular query or other system process exceeds expected memory consumption.
- the threshold enables the WLM to maintain a cushion or buffer to prevent OOM from occurring.
- to prevent the possibility of OOM occurring, the WLM may still queue a query with a query resource consumption estimation of 20 GB even if the available system memory (e.g., 20 GB) is nominally sufficient to execute it.
- the disclosed embodiments may be used for managing query concurrency control in an MPPDB. It should be noted that although the present disclosure uses the example of memory estimation in an MPPDB, the disclosed technology may be extended to estimate other types of system resource consumption. The technology may also be applied to other database systems.
- a database management system is configured to perform the tasks of (1) estimating resource utilization and (2) resource extreme event discovery.
- estimating resource utilization plays a role in query concurrent level management.
- resource extreme event discovery is important as it may directly lead to OOM and a system crash.
- these two tasks are related. For example, if the resource consumption in a given time unit is high, that unit has a higher probability of being an extreme event. On the other hand, if a time unit corresponds to low resource consumption, it has a lower chance of being an extreme event.
- a machine learning model, which has the capability to jointly learn the models of both query resource consumption estimation and extreme events discovery, is provided in the present disclosure.
- the disclosed embodiments accurately estimate resource cost as well as detect extreme resource events, which are meaningful as they may lead to OOM and/or a system crash. Jointly modeling these two related tasks leverages information from each task, and achieves better performance by mutually benefiting each other during the training phase.
- FIG. 1 is a schematic diagram illustrating a high level system architecture of a co-prediction machine learning system 100 for performing query resource consumption estimation and resource extreme events detection in accordance with an embodiment of the present disclosure.
- the co-prediction machine learning system 100 performs query resource consumption estimation and resource extreme events detection jointly or simultaneously, as the two tasks may provide mutual benefit to each other during the model training phase.
- the co-prediction machine learning system 100 may be generalized and work with any base machine learning technologies.
- the co-prediction machine learning system 100 provides an adaptive kernel learning method, which has the capability of learning the appropriate similarity or distance metric automatically for any given data and system setting. To solve the problem of over-fitting and concept drift, the co-prediction machine learning system 100 provides a robust machine learning prediction system with stacking techniques.
- the co-prediction machine learning system 100 achieves several innovative objectives, including predicting query resource costs before the system starts executing a query, predicting possible extreme resource events as well as their occurrence time points, improving the query concurrency level while controlling resource contention, and improving system performance and resource utilization while avoiding severe performance problems, such as OOM and system crashes.
- the co-prediction machine learning system 100 includes an input data module 102 , a query plan generation module 104 , a feature generation module 106 , a feature processing engine 108 , a predictive model module 110 , and a system resource log module 112 .
- the input data module 102 includes a tracked database (DB), a set of queries, and system logs of both the actual resource cost and the peak value, if any, at each time unit.
- the input queries and database may, for example, be a relational database or a non-structured query language (NoSQL) database, such as Neo4j or Titan for graph databases.
- the co-prediction machine learning framework may easily be used to estimate other system resource consumption tasks.
- the input queries are forwarded to the query plan generation module 104 , which generates query operation plans using a DB optimizer.
- the query plan generation module 104 is responsible for generating the query plan (e.g., operators and their sequential relations) for the input queries using a query optimizer of the data management system.
- the query plan generation module 104 may select a best query plan.
- the selected query plan, in the form of an ordered set of operators used to access data in a database system, is then forwarded to the feature generation module 106.
- the feature generation module 106 is responsible for generating a set of feature representations of each query, for modeling purposes, that may be helpful for understanding resource cost.
- a set of representative features is extracted for each query.
- the feature generation module 106 may consider the size of data, operators used in a query plan, orders/sequences of operators, and the selectivity at each operator in a query plan. In other embodiments, other types of DB features may also be considered.
- the features extracted from the queries are then passed to the feature processing engine 108 .
- the feature processing engine 108 may be used to further prepare the feature set extracted by the feature generation module 106, as the features generated from the feature generation module 106 may not be clean.
- the features may include duplicate information or noisy features, or may have too high a dimension.
- the feature processing engine 108 performs dimension reduction and kernel learning to enhance the quality of feature representation.
- the present disclosure may work with any types of feature extraction and/or feature reduction technologies, such as Principal Component Analysis (PCA), Probabilistic latent semantic indexing (PLSI), Latent Dirichlet allocation (LDA), and so on.
- the system resource log module 112 is used to capture the labels (e.g., resource utilization and peak values) from historical data for training purposes.
- the system resource log module 112 is responsible for pre-processing the input system logs and generating labels for both tasks.
- the pre-processing work may include removing background noise, extracting the peak values, and using resource utilization from system logs for training purposes.
- the predictive model module 110 is configured to train the co-predictive model to generate a predictive trained model. In one embodiment, given the predictive trained model and arriving queries without execution, the predictive model module 110 outputs the estimated resource cost and detected peak values (extreme event), if any, at each time unit. The details of each component are discussed below.
- FIG. 2 is a schematic diagram illustrating a simplified view of using machine learning for estimating system resource utilization tasks in accordance with an embodiment of the present disclosure.
- a machine learning model 208 may be employed to receive input workload and configuration data 202 of a system 204 .
- the machine learning model 208 is also configured to receive performance and resource utilization metrics data 206 of the system 204 given the input workload and configuration data 202 processed by the system 204 .
- the machine learning model 208 is configured to perform auto-extraction to identify the relationships between the input workload and configuration data 202 and the measured performance and resource utilization metrics data 206 .
- the machine learning model 208 may be configured to use various machine learning technologies for resource estimation in data management systems such as, but not limited to, regression, Canonical Correlation Analysis (CCA), and Kernel Canonical Correlation Analysis (KCCA).
- the disclosed embodiments may utilize a single variable regression or a multivariate regression to predict each performance metric of interest. For example, in one embodiment, using multivariate regression to predict each performance metric of interest, independent variables x1, x2, . . . , xn are defined for each workload feature and each performance metric is treated as a separate dependent variable y.
- the goal of regression is to solve the equation $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = y$ for the coefficients $a_1, \dots, a_n$.
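- As a minimal sketch of such a per-metric regression (the feature names, data values, and scikit-learn model choice are illustrative assumptions, not the patent's implementation):

```python
# A sketch of multivariate regression over workload features, one dependent
# variable per performance metric.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one workload: [sort_count, sort_cardinality_sum, join_count, input_rows]
X = np.array([
    [1, 3.0e4, 0, 1.0e5],
    [2, 4.8e4, 1, 2.5e5],
    [0, 0.0,   2, 5.0e5],
    [3, 9.0e4, 1, 1.2e5],
])
# One dependent variable y per performance metric, e.g., peak memory in GB.
y_memory = np.array([4.0, 9.5, 12.0, 11.0])

model = LinearRegression().fit(X, y_memory)             # fits a1*x1 + ... + an*xn + b = y
print(model.predict(np.array([[2, 4.8e4, 1, 2.5e5]])))  # estimate for an unseen query
```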
- CCA is a method for exploring the relationships between two multivariate sets of variables (vectors). For example, CCA considers pair-wise datasets and finds dimensions of maximal correlation across both datasets.
- KCCA is a variant of CCA that captures similarity using a kernel function. Given two multivariate datasets, KCCA computes basis vectors for subspaces in which the projections of the two datasets are maximally correlated.
- FIG. 3 is a schematic diagram illustrating a predictive ML process 300 in accordance with an embodiment of the present disclosure.
- FIG. 3 illustrates the transformation that the predictive ML process 300 imposes on a workload feature vector dataset 302 and a performance feature vector dataset 308.
- the predictive ML process 300 first creates the workload feature vector dataset 302 for all workloads of a system 304 .
- the performance feature vector dataset 308 is constructed for each corresponding observation of system resource utilization and performance of the system 304 based on the given workload.
- the predictive ML process 300 uses a query plan to generate the workload feature vector dataset 302 (denoted as xk).
- the predictive ML process 300 may construct the performance feature vector dataset 308 (denoted yk) from system performance logs.
- Each workload feature vector xk has a corresponding performance feature vector yk.
- the workload feature vector dataset 302 and the performance feature vector dataset 308 are inputted into a kernel function that respectively generates a kernel matrix Kx 306 and a kernel matrix Ky 310 .
- the kernel function is an algorithm that computes an inner product in feature space between two vector arguments.
- kernel functions include, but are not limited to, Gaussian, polynomial, linear, spline, ANOVA RBF, Bessel, Laplacian, and hyperbolic tangent.
- the predictive ML process 300 projects the feature vector xk of the workload feature vector dataset 302 and performance feature vector yk of the performance feature vector dataset 308 onto dimensions 314 , 316 of maximal correlation across the data sets by applying a KCCA algorithm 312 that takes the kernel matrices Kx 306 and Ky 310 and solves the following generalized eigenvector problem:
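- The equation itself is rendered as an image in the published patent and is not reproduced here; consistent with the kernel CCA literature (e.g., Bach et al., cited below), the generalized eigenvalue problem takes the form:

$$
\begin{bmatrix} 0 & K_x K_y \\ K_y K_x & 0 \end{bmatrix}
\begin{bmatrix} A \\ B \end{bmatrix}
= \lambda
\begin{bmatrix} K_x K_x & 0 \\ 0 & K_y K_y \end{bmatrix}
\begin{bmatrix} A \\ B \end{bmatrix}
$$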
- Kx represents the pairwise similarity kernel matrix of workload feature vector x
- Ky represents the pairwise similarity kernel matrix of performance feature vector y
- A represents a matrix consisting of the basis vectors of a subspace onto which Kx may be projected
- B represents a matrix consisting of the basis vectors of a subspace onto which Ky may be projected, such that Kx*A and Ky*B are maximally correlated.
- Kx*A is the workload projection
- Ky*B is the performance projection.
- the predictive ML process 300 computes basis vectors for subspaces, given two multivariate datasets, workload feature vector dataset 302 and performance feature vector dataset 308 , in which the projections of the two datasets are maximally correlated.
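- A minimal sketch of this KCCA step (the RBF kernel choice, regularization constant, and helper names are illustrative assumptions):

```python
# A sketch of KCCA: build kernel matrices, solve the generalized eigenproblem,
# and return maximally correlated projections of the two datasets.
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kcca(Kx, Ky, reg=1e-3, n_components=2):
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    Kx, Ky = H @ Kx @ H, H @ Ky @ H
    Z = np.zeros((n, n))
    lhs = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])  # cross-set correlation blocks
    rhs = np.block([[Kx @ Kx + reg * np.eye(n), Z],
                    [Z, Ky @ Ky + reg * np.eye(n)]])  # regularized within-set blocks
    vals, vecs = eigh(lhs, rhs)                   # generalized eigenproblem
    top = np.argsort(vals)[::-1][:n_components]
    A, B = vecs[:n, top], vecs[n:, top]
    return Kx @ A, Ky @ B                         # maximally correlated projections

# Usage with synthetic workload (x) and performance (y) features:
rng = np.random.default_rng(0)
Xw, Yp = rng.normal(size=(50, 6)), rng.normal(size=(50, 3))
proj_x, proj_y = kcca(rbf_kernel(Xw), rbf_kernel(Yp))
```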
- a nearest-neighbor search over query plan feature vectors is conducted to find the nearest neighbors of a query in the training set, and the average system performance of these nearest neighbors is used to predict the performance vector for the test query.
- in other embodiments, other machine learning technologies may be used, such as long short-term memory (LSTM) recurrent neural networks (RNN), support vector machines (SVM), and XGradient Boosting (XGBoosting).
- FIG. 4 is a schematic diagram illustrating a process 400 for generating a query plan feature vector in accordance with an embodiment of the present disclosure.
- the process 400 may be used by the predictive ML process 300 to generate the feature vector xk.
- the process 400 receives a query at block 402 .
- the process 400 at block 404 parses the query down into its operators to produce a query plan.
- An operator is a reserved word or a character used in a query to perform operation(s), such as comparisons and arithmetic operations.
- An operator may be used to specify a condition or to serve as conjunctions for multiple conditions in a statement.
- the process 400 at block 406 generates a query plan feature vector that includes, for each operator, the number of instances the operator appears in the query and the sum of cardinalities for each instance of the operator.
- the sum of cardinalities indicates the actual data size to be processed corresponding to an operator. For example, if a sort operator appears twice in a query plan with cardinalities 3000 and 45000, the query plan feature vector includes a "sort instance count" element containing the value 2 and a "sort cardinality sum" element containing the value 48000.
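- A minimal sketch of this feature construction (representing the plan as (operator, cardinality) pairs is an illustrative assumption):

```python
# A sketch of the query plan feature vector: per-operator instance counts
# followed by per-operator cardinality sums, in a fixed operator order.
from collections import defaultdict

def plan_feature_vector(plan_ops, operator_vocab):
    """plan_ops: list of (operator_name, cardinality) pairs from a query plan."""
    counts, card_sums = defaultdict(int), defaultdict(float)
    for op, cardinality in plan_ops:
        counts[op] += 1
        card_sums[op] += cardinality
    # A fixed ordering over the operator vocabulary keeps vectors comparable.
    return [counts[op] for op in operator_vocab] + \
           [card_sums[op] for op in operator_vocab]

ops = [("sort", 3000), ("hash-join", 120000), ("sort", 45000)]
print(plan_feature_vector(ops, ["sort", "hash-join", "scan"]))
# -> [2, 1, 0, 48000.0, 120000.0, 0.0]: sort count 2, sort cardinality sum 48000
```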
- FIG. 5 is a schematic diagram illustrating a process 500 for predicting the workload in accordance with an embodiment of the present disclosure.
- the process 500 begins with a query plan 502 for a query.
- the query plan 502 may be generated using the query plan generation module 104 as described in FIG. 1 .
- the query plan 502 is used to generate a query plan feature vector 504 as shown in the process 400 described in FIG. 4 .
- the query plan feature vector 504 is included in a feature vector dataset that includes the query plan feature vector of other queries for enabling query concurrency management.
- the process 500 applies the feature vector dataset of the query to KCCA process 506 to find the nearest neighbors of queries in the training set as shown in query plan projection 508 .
- the process 500 then correlates the nearest neighbors of queries in the training set to their performance projection 510 .
- the process 500 uses the average system performance of these nearest neighbors to generate the predicted performance vector 512 for the query.
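- A minimal sketch of this nearest-neighbor step (assumes projections such as those produced by a KCCA step like the sketch above; k is an illustrative choice):

```python
# A sketch of performance prediction: find the k nearest training queries in the
# projected space and average their performance vectors.
import numpy as np

def predict_performance(test_proj, train_proj, train_perf, k=3):
    """train_proj: (n, d) projected training queries; train_perf: (n, m) metrics."""
    dists = np.linalg.norm(train_proj - test_proj, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_perf[nearest].mean(axis=0)   # average of the nearest neighbors
```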
- one innovative feature of the disclosed embodiments is the use of joint modeling to jointly detect the resource extreme events and predict the resource consumption.
- the joint modeling is performed by a single machine learning model.
- two machine learning models may be trained, one for resource consumption and the other for extreme events detection.
- additional information may be gleaned to provide higher accuracy in predicting resource consumption and resource usage extreme events. For example, if these tasks were to be performed separately, each task might be modeled independently using least squares SVM (LS-SVM) as a base model.
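- The model equation appears as an image in the published patent and is not reproduced here; given the symbol definitions that follow, the standard LS-SVM regression objective it refers to has the form (a reconstruction, with φ denoting the feature map):

$$
\min_{W_u, b_u, e} \; \frac{1}{2}\lVert W_u \rVert^2 + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2
\quad \text{s.t.} \quad y_i = W_u^{\top}\varphi(x_i) + b_u + e_i, \; i = 1, \dots, N
$$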
- where W_u stands for the weight matrix of resource utilization
- e_i is the error for the i-th resource utilization prediction in model training
- x_i is the i-th query in the training set
- y_i is the resource utilization associated with the i-th query in the training set
- b_u, e, and γ are all parameters of the model for resource utilization prediction.
- the independent model for extreme events detection may be represented by the following equation:
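- The equation likewise appears as an image in the published patent; an analogous LS-SVM formulation (a reconstruction) is:

$$
\min_{W_p, b_p, e} \; \frac{1}{2}\lVert W_p \rVert^2 + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2
\quad \text{s.t.} \quad y_i = W_p^{\top}\varphi(x_i) + b_p + e_i, \; i = 1, \dots, N
$$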
- where W_p stands for the weight matrix of resource peak value (extreme event) detection
- e_i is the error for the i-th resource peak value prediction in model training
- x_i is the i-th query in the training set
- y_i is the resource peak value indicator (0/1), if any, associated with the i-th query in the training set
- b_p, e, and γ are all parameters of the model for resource peak value prediction.
- the disclosed embodiments recognize certain drawbacks of independent modeling, including overlooking the informative relationships between these two tasks. For example, when a data instance is predicted to have a high resource cost, it has a higher probability of being an extreme event than one with a lower predicted cost. Additionally, when a data instance is predicted to be an extreme event, it is more likely to be a query with a high volume of resource cost. Accordingly, the disclosed embodiments apply an innovative approach for predicting resource consumption and resource usage extreme events simultaneously. Thus, before a set of queries is executed, the disclosed embodiments will predict both (1) resource utilization and (2) any extreme event at each time unit.
- a non-limiting example using the co-prediction model, with LS-SVM as the base technology, may be represented by the following equation:
- the first six terms (those before the γ3 term) describe the utilization and peak value prediction learned by the model.
- the 7th term (the γ3 term) describes the graph constraint relations among utilization values.
- the 8th term (the γ4 term) describes the graph constraint relations among peak values.
- the last term (the γ5 term) describes the relationships between the two tasks.
- the disclosed embodiments may utilize adaptive kernel learning.
- existing machine learning technologies in the field select one of the existing kernel methods, with strong data distribution assumptions, which may not always hold in real applications.
- the disclosed embodiments utilize an adaptive kernel learning technology that has the capability to learn the most appropriate kernel metrics for various system settings and data with unknown distributions.
- the learnt kernel metric for each system has the capability to evolve over time based on the continuous collection of the system execution data and based on the fact that the distribution may change over time.
- the disclosed embodiments may utilize a supervised linear kernel for resource related features, where the weights for the features are estimated by aligning the induced weighted kernel to a ground truth kernel that is defined as follows:
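- Reconstructed from the definition that follows, the ground truth kernel is:

$$
G(x_i, x_j) =
\begin{cases}
1, & y_i = y_j \\
0, & \text{otherwise}
\end{cases}
$$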
- where G(x_i, x_j) stands for the ground truth similarity kernel matrix entry between feature vectors x_i and x_j, where the label of x_i is y_i and the label of x_j is y_j; when y_i and y_j share the same label, G(x_i, x_j) equals 1, and otherwise 0.
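- The alignment objective appears as an image in the published patent; consistent with the term descriptions below, a kernel-alignment objective of this kind has the form (a reconstruction):

$$
\min_{W} \; \lVert G - XWX^{\top} \rVert_F^2 + \lambda \lVert W \rVert_F^2, \qquad \lambda \in (0, 1)
$$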
- where G is the ground truth kernel matrix with entries G_ij between feature vectors x_i and x_j, X stands for the data matrix of the training set, XWXᵀ stands for the kernel matrix learnt from the training data, W is the kernel weight matrix that is learned in order to minimize the difference between the ground truth matrix G and the learnt matrix XWXᵀ, and λ is a tuning parameter with range (0, 1).
- the first term (before the + operator) is minimized when G and XWXᵀ are in agreement with each other.
- the second term (after the + operator) is a regularizer to keep the model parsimonious.
- the objective function is solved as the following equation:
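- The equation appears as an image in the published patent; consistent with the definitions below, the learned kernel is applied to test data as (a reconstruction):

$$
K(x, x^{*}) = \operatorname{sign}\left(x^{\top} W x^{*}\right), \qquad \operatorname{sign}\left(x^{\top} W x^{*}\right) = 1 \;\text{when}\; x^{\top} W x^{*} > 0
$$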
- where K(x, x*) is the kernel similarity between data feature vectors x and x*
- sign is the indicator function: sign(xᵀWx*) = 1 when xᵀWx* > 0
- x is a feature vector in the training set
- x* is a feature vector in the test set.
- a third innovative aspect of the disclosed embodiments involves stacking for a robust system.
- the existing approach is to train a model on a training set and to tweak the parameters by checking the "change point."
- many times the validation error (e.g., for offline training) and the test error (e.g., for online testing/applying) are not consistent.
- a reason for the inconsistency between offline validation and online testing/applying is overfitting and concept drift, as well as the strong correlation among base classifier models.
- various embodiments include a stacking technology to leverage the outputs of diverse base classifier models. This provides a robust system that has consistent validation and test performance.
- the disclosed embodiments may include a multi-level (deep) stacking technique with a set of local predictive models with diverse backgrounds.
- FIG. 6 is a schematic diagram of a two-level stacking technique 600 in accordance with an embodiment of the present disclosure.
- the two-level stacking technique 600 provides consistent validation and test performance.
- the two-level stacking technique 600 employs a five-fold stacking technique.
- the training data 602, in the first stage, is split into five folds and is used to train five models (Model 1, Model 2, Model 3, Model 4, and Model 5) using a leave-one-fold-out strategy (four folds of data for training purposes and the remaining fold for predicting).
- the trained models are also applied to the test data and generate one prediction for each model (labeled as New Feature 614 ).
- three different prediction scores are obtained on the training data 602 from the five models.
- the prediction scores may each be generated from various predictive models such as, but not limited to, SVM, KCCA, and XGBoosting.
- the average of the 5 groups of predicted scores from each model is determined.
- Model 6 comprises the computed averaged prediction scores on the test data for the different predictive models.
- the prediction scores 626 , 628 , 630 , 632 , and 634 are treated as new training data on a test data 636 .
- the average of the predictions on the test data from the first stage 620 is used as new test data 622.
- the final results are generated using the new set of training and test data with the predictive model 624 .
- the stacking technique has the advantage of avoiding overfitting, due to K-fold cross-validation, and of capturing non-linearity between features, due to treating model outputs as features.
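- A minimal sketch of the two-level stacking flow (the base and level-two model choices are illustrative assumptions):

```python
# A sketch of two-level stacking: five-fold out-of-fold predictions from diverse
# base models become the level-two training features; fold-averaged predictions
# on the test data become the level-two test features.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

def stack(X_train, y_train, X_test, base_models, n_folds=5):
    oof = np.zeros((len(X_train), len(base_models)))       # new training features
    new_test = np.zeros((len(X_test), len(base_models)))   # new test features
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for m, model in enumerate(base_models):
        fold_preds = np.zeros((n_folds, len(X_test)))
        for f, (tr, va) in enumerate(kf.split(X_train)):
            model.fit(X_train[tr], y_train[tr])      # leave-one-fold-out training
            oof[va, m] = model.predict(X_train[va])  # predict the held-out fold
            fold_preds[f] = model.predict(X_test)    # per-fold test predictions
        new_test[:, m] = fold_preds.mean(axis=0)     # average over the folds
    level2 = Ridge().fit(oof, y_train)               # level-two model
    return level2.predict(new_test)

# Usage with synthetic data:
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 4)), rng.normal(size=100)
print(stack(X[:80], y[:80], X[80:], [SVR(), GradientBoostingRegressor(), Ridge()]))
```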
- FIG. 7 is a flowchart illustrating a process 700 for query concurrency management flow control in accordance with an embodiment of the present disclosure.
- the process 700 begins, at block 702 , by receiving one or more queries to be executed.
- the one or more queries may be received locally or over a network from a single client device or from multiple client devices.
- a client device is any device that requests execution of a query.
- the one or more queries are passed to a query optimizer for generating a query plan.
- the query optimizer generates the query plan by parsing the query into an execution hierarchical tree to determine query operators and their sequential relations or a sequence of the operators and a number of instances of each operator.
- each tree node of the execution hierarchical tree represents an operator of the query.
- the process 700 applies machine learning (ML) to perform resource cost estimation for the query using the trained model and execution plan.
- the process 700 at block 706 outputs the estimated resource cost and detected peak values (extreme event) if any.
- the estimated memory costs of the three queries, Q1, Q2, and Q3, are 24 GB, 24 GB and 24 GB, respectively, and are represented as <Q1, 24 GB>, <Q2, 24 GB> and <Q3, 24 GB>.
- the process 700 determines if there is sufficient system resource available to initiate execution of the query. If the process 700 at block 708 determines that there is sufficient system resource available to initiate execution of the query, the process 700 passes the query to a workload manager (WLM) at block 712 that is configured to perform resource bookkeeping by reducing the current available system resource by the amount of system resource required to perform the query. The process 700 then passes the query to an executor at block 714 that is configured to execute the query.
- the process 700 receives the results of the query when execution of the query is completed. In one embodiment, the process at block 718 returns or transmits the result of the query to the client device that requested performance of the query. The process 700, at block 716, returns or frees up the resource used by the query via the WLM.
- if the process 700 at block 708 determines that there is insufficient system resource available, the process 700 passes the query to the WLM at block 710, which places the query into a query queue until there is sufficient resource to execute the query. The process 700 repeats for each arriving query.
- the process 700 may be simultaneously processing or managing more than one query at a time. For example, while a query is being executed, the process 700 may begin processing additional queries that it receives. For instance, using the above example, after Q1 begins executing, the process 700 also initiates execution of Q2 as the available memory of the computer system is 40 GB, which is greater than the estimated system memory cost for <Q2, 24 GB>. While Q1 and Q2 are executing, the system has 16 GB of system memory available. The system's currently available memory of 16 GB is insufficient for initiating execution of Q3, <Q3, 24 GB>; thus, Q3 is queued in the wait queue to avoid system OOM occurring.
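- A minimal sketch of this admission-control loop (the class and method names, the 4 GB cushion, and the estimated-cost interface are illustrative assumptions):

```python
# A sketch of WLM flow control: admit a query when its estimated cost fits within
# the available resource minus a safety cushion; otherwise queue it. Bookkeeping
# reserves the estimate on start and frees it on completion.
from collections import deque

class WorkloadManager:
    def __init__(self, total_memory_gb, cushion_gb=4):
        self.available = total_memory_gb
        self.cushion = cushion_gb
        self.wait_queue = deque()

    def submit(self, query, estimated_cost_gb):
        if self.available - estimated_cost_gb >= self.cushion:
            self.available -= estimated_cost_gb    # bookkeeping: reserve the estimate
            return "execute"
        self.wait_queue.append((query, estimated_cost_gb))
        return "queued"

    def complete(self, estimated_cost_gb):
        self.available += estimated_cost_gb        # bookkeeping: free the reservation
        while self.wait_queue:                     # re-check queued queries
            query, cost = self.wait_queue[0]
            if self.available - cost < self.cushion:
                break
            self.wait_queue.popleft()
            self.available -= cost

wlm = WorkloadManager(total_memory_gb=64)
for q in ("Q1", "Q2", "Q3"):
    print(q, wlm.submit(q, 24))   # Q1: execute, Q2: execute, Q3: queued
```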
- embodiments of the present disclosure provide co-prediction joint modeling for concurrent query resource utilization estimation and extreme events detection.
- the disclosed co-prediction framework could be generalized and work with any base machine learning technologies.
- an adaptive kernel method is provided.
- the adaptive kernel method automatically learns the most appropriate similarity function/distance metric for given data and system settings.
- a robust machine learning prediction system with a stacking technique is provided.
- Advantages and benefits of the disclosed embodiments include, but are not limited to, providing faster and more reliable system performance. For example, severe performance problems, such as OOM, are avoided, system performance becomes smooth and predictable, system resources are better utilized and managed, and resource utilization is improved as query concurrency levels are dynamically adjusted based on query costs. In addition, central processing unit (CPU) and Disk input/output (I/O) resource utilization may also be improved. As a result, embodiments of the present disclosure provide a better customer experience.
- FIG. 8 is a schematic diagram of a workload management device 800 according to an embodiment of the disclosure.
- the workload management device 800 is suitable for implementing the disclosed embodiments as described herein.
- the workload management device 800 comprises ingress ports 810 and receiver units (Rx) 820 for receiving data; a processor, logic unit, or CPU 830 to process the data; transmitter units (Tx) 840 and egress ports 850 for transmitting the data; and a memory 860 for storing the data.
- the workload management device 800 may also comprise optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 810 , the receiver units 820 , the transmitter units 840 , and the egress ports 850 for egress or ingress of optical or electrical signals.
- the processor 830 is implemented by hardware and software.
- the processor 830 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs).
- the processor 830 is in communication with the ingress ports 810 , receiver units 820 , transmitter units 840 , egress ports 850 , and memory 860 .
- the processor 830 comprises a co-prediction machine learning module 870 .
- the co-prediction machine learning module 870 implements the disclosed embodiments described above. For instance, the co-prediction machine learning module 870 implements, processes, prepares, or provides the various functions disclosed herein.
- the co-prediction machine learning module 870 therefore provides a substantial improvement to the functionality of the workload management device 800 and effects a transformation of the workload management device 800 to a different state.
- the co-prediction machine learning module 870 is implemented as instructions stored in the memory 860 and executed by the processor 830 .
- the memory 860 comprises one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
- the memory 860 may be volatile and/or non-volatile and may be read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Computational Mathematics (AREA)
- Pure & Applied Mathematics (AREA)
- Operations Research (AREA)
- Algebra (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/959,442 US11537615B2 (en) | 2017-05-01 | 2018-04-23 | Using machine learning to estimate query resource consumption in MPPDB |
| EP18794005.1A EP3607477A4 (de) | 2017-05-01 | 2018-04-25 | Using machine learning to estimate query resource consumption in MPPDB |
| CN201880026160.XA CN110537175B (zh) | 2017-05-01 | 2018-04-25 | Method and apparatus for using machine learning to estimate query resource consumption in MPPDB |
| PCT/CN2018/084464 WO2018201948A1 (en) | 2017-05-01 | 2018-04-25 | Using machine learning to estimate query resource consumption in mppdb |
| EP25156352.4A EP4542413A1 (de) | 2017-05-01 | 2018-04-25 | Using machine learning to estimate query resource consumption in MPPDB |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762492706P | 2017-05-01 | 2017-05-01 | |
| US15/959,442 US11537615B2 (en) | 2017-05-01 | 2018-04-23 | Using machine learning to estimate query resource consumption in MPPDB |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20180314735A1 US20180314735A1 (en) | 2018-11-01 |
| US11537615B2 true US11537615B2 (en) | 2022-12-27 |
Family
ID=63917198
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/959,442 Active 2038-12-12 US11537615B2 (en) | 2017-05-01 | 2018-04-23 | Using machine learning to estimate query resource consumption in MPPDB |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US11537615B2 (de) |
| EP (2) | EP3607477A4 (de) |
| CN (1) | CN110537175B (de) |
| WO (1) | WO2018201948A1 (de) |
Families Citing this family (35)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10942783B2 (en) | 2018-01-19 | 2021-03-09 | Hypernet Labs, Inc. | Distributed computing using distributed average consensus |
| US10878482B2 (en) | 2018-01-19 | 2020-12-29 | Hypernet Labs, Inc. | Decentralized recommendations using distributed average consensus |
| US11244243B2 (en) | 2018-01-19 | 2022-02-08 | Hypernet Labs, Inc. | Coordinated learning using distributed average consensus |
| US10909150B2 (en) * | 2018-01-19 | 2021-02-02 | Hypernet Labs, Inc. | Decentralized latent semantic index using distributed average consensus |
| US11048694B2 (en) * | 2018-04-26 | 2021-06-29 | International Business Machines Corporation | Median based estimation of database query responses |
| US11061902B2 (en) * | 2018-10-18 | 2021-07-13 | Oracle International Corporation | Automated configuration parameter tuning for database performance |
| US11429893B1 (en) * | 2018-11-13 | 2022-08-30 | Amazon Technologies, Inc. | Massively parallel real-time database-integrated machine learning inference engine |
| US12067009B2 (en) * | 2018-12-10 | 2024-08-20 | Teradata Us, Inc. | Predictive query parsing time and optimization |
| US11544236B2 (en) * | 2018-12-28 | 2023-01-03 | Teradata Us, Inc. | Machine-learning driven database management |
| CN109635118A (zh) * | 2019-01-10 | 2019-04-16 | 博拉网络股份有限公司 | 一种基于大数据的用户搜索匹配方法 |
| EP3690751A1 (de) * | 2019-01-31 | 2020-08-05 | Siemens Aktiengesellschaft | Verfahren zum herstellen eines extraktors für tiefe latente merkmale für industriesensordaten |
| US11138266B2 (en) * | 2019-02-21 | 2021-10-05 | Microsoft Technology Licensing, Llc | Leveraging query executions to improve index recommendations |
| US11971793B2 (en) | 2019-03-05 | 2024-04-30 | Micro Focus Llc | Machine learning model-based dynamic prediction of estimated query execution time taking into account other, concurrently executing queries |
| CN110166282B (zh) * | 2019-04-16 | 2020-12-01 | 苏宁云计算有限公司 | 资源分配方法、装置、计算机设备和存储介质 |
| CN111949631B (zh) * | 2019-05-14 | 2024-06-25 | 华为技术有限公司 | 一种确定数据库的配置参数的方法及装置 |
| CN110888859B (zh) * | 2019-11-01 | 2022-04-01 | 浙江大学 | 一种基于组合深度神经网络的连接基数估计方法 |
| US11748350B2 (en) * | 2020-02-21 | 2023-09-05 | Microsoft Technology Licensing, Llc | System and method for machine learning for system deployments without performance regressions |
| US11327969B2 (en) * | 2020-07-15 | 2022-05-10 | Oracle International Corporation | Term vector modeling of database workloads |
| CN111953701B (zh) * | 2020-08-19 | 2022-10-11 | 福州大学 | 基于多维特征融合和堆栈集成学习的异常流量检测方法 |
| CN114253938B (zh) * | 2020-09-22 | 2025-10-03 | 中兴通讯股份有限公司 | 数据管理方法、数据管理装置及存储介质 |
| US12387132B1 (en) * | 2020-09-29 | 2025-08-12 | Amazon Technologies, Inc. | Orchestration for building and executing machine learning pipelines on graph data |
| US11500830B2 (en) * | 2020-10-15 | 2022-11-15 | International Business Machines Corporation | Learning-based workload resource optimization for database management systems |
| US11500871B1 (en) * | 2020-10-19 | 2022-11-15 | Splunk Inc. | Systems and methods for decoupling search processing language and machine learning analytics from storage of accessed data |
| US11657069B1 (en) | 2020-11-25 | 2023-05-23 | Amazon Technologies, Inc. | Dynamic compilation of machine learning models based on hardware configurations |
| US11636124B1 (en) * | 2020-11-25 | 2023-04-25 | Amazon Technologies, Inc. | Integrating query optimization with machine learning model prediction |
| US11762860B1 (en) * | 2020-12-10 | 2023-09-19 | Amazon Technologies, Inc. | Dynamic concurrency level management for database queries |
| US20220222231A1 (en) * | 2021-01-13 | 2022-07-14 | Coupang Corp. | Computerized systems and methods for using artificial intelligence to optimize database parameters |
| US11568320B2 (en) | 2021-01-21 | 2023-01-31 | Snowflake Inc. | Handling system-characteristics drift in machine learning applications |
| CN114116778B (zh) * | 2021-09-26 | 2025-01-03 | 中国电子口岸数据中心成都分中心 | 一种数据库查询优化方法 |
| US12271572B2 (en) | 2021-10-25 | 2025-04-08 | Oracle International Corporation | Unified user interface for monitoring hybrid deployment of computing systems |
| US11907250B2 (en) * | 2022-07-22 | 2024-02-20 | Oracle International Corporation | Workload-aware data encoding |
| US11921692B1 (en) * | 2022-09-16 | 2024-03-05 | Capital One Services, Llc | Computer-based systems configured for automatically updating a database based on an initiation of a dynamic machine-learning verification and methods of use thereof |
| US20240311380A1 (en) * | 2023-03-16 | 2024-09-19 | Microsoft Technology Licensing, Llc | Query processing on accelerated processing units |
| US12158882B1 (en) * | 2023-10-03 | 2024-12-03 | Hitachi, Ltd. | Query based method to derive insight about manufacturing operations |
| US20250139103A1 (en) * | 2023-10-25 | 2025-05-01 | Uber Technologies, Inc. | Systems and Methods for Query Cancellation |
Citations (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040172378A1 (en) * | 2002-11-15 | 2004-09-02 | Shanahan James G. | Method and apparatus for document filtering using ensemble filters |
| US20100082599A1 (en) * | 2008-09-30 | 2010-04-01 | Goetz Graefe | Characterizing Queries To Predict Execution In A Database |
| US20100114865A1 (en) * | 2008-10-21 | 2010-05-06 | Chetan Kumar Gupta | Reverse Mapping Of Feature Space To Predict Execution In A Database |
| US20100257154A1 (en) | 2009-04-01 | 2010-10-07 | Sybase, Inc. | Testing Efficiency and Stability of a Database Query Engine |
| US20120246158A1 (en) | 2011-03-25 | 2012-09-27 | Microsoft Corporation | Co-range partition for query plan optimization and data-parallel programming model |
| US20130185730A1 (en) * | 2011-11-02 | 2013-07-18 | International Business Machines Corporation | Managing resources for maintenance tasks in computing systems |
| US20140372356A1 (en) | 2013-06-12 | 2014-12-18 | Microsoft Corporation | Predictive pre-launch for applications |
| US20150286684A1 (en) * | 2013-11-06 | 2015-10-08 | Software Ag | Complex event processing (cep) based system for handling performance issues of a cep system and corresponding method |
| CN105183850A (zh) | 2015-09-07 | 2015-12-23 | 百度在线网络技术(北京)有限公司 | 基于人工智能的信息查询方法及装置 |
| CN105279286A (zh) | 2015-11-27 | 2016-01-27 | 陕西艾特信息化工程咨询有限责任公司 | 一种交互式大数据分析查询处理方法 |
| US20160128083A1 (en) * | 2014-10-31 | 2016-05-05 | British Telecommunications Public Limited Company | Networked resource provisioning system |
| US20160188594A1 (en) * | 2014-12-31 | 2016-06-30 | Cloudera, Inc. | Resource management in a distributed computing environment |
| US20160217003A1 (en) * | 2013-06-24 | 2016-07-28 | Sap Se | Task Scheduling for Highly Concurrent Analytical and Transaction Workloads |
| US20180157978A1 (en) * | 2016-12-02 | 2018-06-07 | International Business Machines Corporation | Predicting Performance of Database Queries |
-
2018
- 2018-04-23 US US15/959,442 patent/US11537615B2/en active Active
- 2018-04-25 WO PCT/CN2018/084464 patent/WO2018201948A1/en not_active Ceased
- 2018-04-25 EP EP18794005.1A patent/EP3607477A4/de not_active Ceased
- 2018-04-25 CN CN201880026160.XA patent/CN110537175B/zh active Active
- 2018-04-25 EP EP25156352.4A patent/EP4542413A1/de active Pending
Patent Citations (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040172378A1 (en) * | 2002-11-15 | 2004-09-02 | Shanahan James G. | Method and apparatus for document filtering using ensemble filters |
| US20100082599A1 (en) * | 2008-09-30 | 2010-04-01 | Goetz Graefe | Characterizing Queries To Predict Execution In A Database |
| US20100114865A1 (en) * | 2008-10-21 | 2010-05-06 | Chetan Kumar Gupta | Reverse Mapping Of Feature Space To Predict Execution In A Database |
| US20100257154A1 (en) | 2009-04-01 | 2010-10-07 | Sybase, Inc. | Testing Efficiency and Stability of a Database Query Engine |
| CN102362276A (zh) | 2009-04-01 | 2012-02-22 | 赛贝斯股份有限公司 | 测试数据库查询引擎的效率和稳定性 |
| US20120246158A1 (en) | 2011-03-25 | 2012-09-27 | Microsoft Corporation | Co-range partition for query plan optimization and data-parallel programming model |
| US20130185730A1 (en) * | 2011-11-02 | 2013-07-18 | International Business Machines Corporation | Managing resources for maintenance tasks in computing systems |
| US20140372356A1 (en) | 2013-06-12 | 2014-12-18 | Microsoft Corporation | Predictive pre-launch for applications |
| US20160217003A1 (en) * | 2013-06-24 | 2016-07-28 | Sap Se | Task Scheduling for Highly Concurrent Analytical and Transaction Workloads |
| US20150286684A1 (en) * | 2013-11-06 | 2015-10-08 | Software Ag | Complex event processing (cep) based system for handling performance issues of a cep system and corresponding method |
| US20160128083A1 (en) * | 2014-10-31 | 2016-05-05 | British Telecommunications Public Limited Company | Networked resource provisioning system |
| US20160188594A1 (en) * | 2014-12-31 | 2016-06-30 | Cloudera, Inc. | Resource management in a distributed computing environment |
| CN105183850A (zh) | 2015-09-07 | 2015-12-23 | Baidu Online Network Technology (Beijing) Co., Ltd. | Artificial intelligence-based information query method and apparatus |
| CN105279286A (zh) | 2015-11-27 | 2016-01-27 | Shaanxi Aite Information Engineering Consulting Co., Ltd. | Interactive big data analysis query processing method |
| US20180157978A1 (en) * | 2016-12-02 | 2018-06-07 | International Business Machines Corporation | Predicting Performance of Database Queries |
Non-Patent Citations (9)
| Title |
|---|
| Archana Sulochana Ganapathi, "Predicting and Optimizing System Utilization and Performance via Statistical Machine Learning," Technical Report No. UCB/EECS-2009-181, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-181.html, Dec. 17, 2009, 111 pages. |
| Bach, et al., "Kernel Independent Component Analysis," Journal of Machine Learning Research 3 (2002), Jul. 2002, pp. 1-48. |
| Foreign Communication From A Counterpart Application, European Application No. 18794005.1, Extended European Search Report dated Jan. 23, 2020, 12 pages. |
| Foreign Communication From A Counterpart Application, PCT Application No. PCT/CN2018/084464, English Translation of International Search Report dated Jul. 27, 2018, 4 pages. |
| Hotelling, H., "Relations Between Two Sets of Variates," Biometrika, vol. 28, 1936, pp. 321-377. |
| Machine Translation and Abstract of Chinese Publication No. CN105279286, Jan. 27, 2016, 10 pages. |
| Mehta, A., et al., "Automated Workload Management for Enterprise Data Warehouses", XP055658091, IEEE Data Engineering Bulletin, vol. 31, Mar. 1, 2008, 10 pages. |
| Pavlo, Andrew, Van Aken, Dana, et al.: "Self-Driving Database Management Systems", 8th Biennial Conference on Innovative Data Systems Research CIDR 2017, Jan. 11, 2017 (2017-01-11), XP055612820 |
| Pavlo, A., et al., "Self-Driving Database Management Systems", XP055612820, 8th Biennial Conference on Innovative Data Systems Research CIDR 2017, Jan. 11, 2017, 6 pages. |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2018201948A1 (en) | 2018-11-08 |
| CN110537175B (zh) | 2022-07-22 |
| EP3607477A4 (de) | 2020-02-26 |
| CN110537175A (zh) | 2019-12-03 |
| EP3607477A1 (de) | 2020-02-12 |
| US20180314735A1 (en) | 2018-11-01 |
| EP4542413A1 (de) | 2025-04-23 |
Similar Documents
| Publication | Title |
|---|---|
| US11537615B2 (en) | Using machine learning to estimate query resource consumption in MPPDB |
| US11567937B2 (en) | Automated configuration parameter tuning for database performance |
| EP3776375B1 (de) | Learned optimizer for shared cloud |
| Raza et al. | Autonomic performance prediction framework for data warehouse queries using lazy learning approach |
| CN113168575A (zh) | Micro machine learning |
| US20190079846A1 (en) | Application performance control system for real time monitoring and control of distributed data processing applications |
| US20100114865A1 (en) | Reverse Mapping Of Feature Space To Predict Execution In A Database |
| Duggan et al. | Contender: A resource modeling approach for concurrent query performance prediction |
| CN114144770A (zh) | System and method for generating datasets for model retraining |
| US11941376B2 (en) | AI differentiation based HW-optimized intelligent software development tools for developing intelligent devices |
| US11366806B2 (en) | Automated feature generation for machine learning application |
| CN110490304B (zh) | Data processing method and device |
| US20230153394A1 (en) | One-pass approach to automated timeseries forecasting |
| Khoshkbarforoushha et al. | Resource usage estimation of data stream processing workloads in datacenter clouds |
| Nimmagadda | Model optimization techniques for edge devices |
| US12306833B2 (en) | Robust query execution plan selection using machine learning with predictive uncertainties |
| Sagaama et al. | Automatic parameter tuning for big data pipelines with deep reinforcement learning |
| US11921756B2 (en) | Automated database operation classification using artificial intelligence techniques |
| Karn et al. | Criteria for learning without forgetting in artificial neural networks |
| US20250173133A1 (en) | Systems and methods for improving computing performance by implementing an application package orchestrator in an electronic environment |
| US20240152805A1 (en) | Systems, methods, and non-transitory computer-readable storage devices for training deep learning and neural network models using overfitting detection and prevention |
| US20250260653A1 (en) | System and method for dynamic allocation of container session network resources via a machine learning model |
| US20250094863A1 (en) | Efficient optimization of machine learning performance |
| Jiang et al. | A Multi-output Gaussian Process Regression with Negative Transfer Mitigation for Generating Boundary Test Scenarios of Multi-UAV Systems |
| US20250077290A1 (en) | Distributed artificial intelligence software code optimizer |
Legal Events
| Code | Title | Description |
|---|---|---|
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| AS | Assignment | Owner name: FUTUREWEI TECHNOLOGIES, INC., TEXAS. ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LIU, LEI; ZHANG, MINGYI; DONG, YU; AND OTHERS; SIGNING DATES FROM 20180504 TO 20180524; REEL/FRAME: 046568/0014 |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | AWAITING TC RESP., ISSUE FEE NOT PAID |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | PATENTED CASE |