CN117172093A - Method and device for optimizing strategy of Linux system kernel configuration based on machine learning - Google Patents


Info

Publication number
CN117172093A
Authority
CN
China
Prior art keywords
parameter
load
recommendation
configuration
optimization
Prior art date
Legal status
Pending
Application number
CN202310909401.8A
Other languages
Chinese (zh)
Inventor
王新元
侯朋朋
何家泰
张开创
于佳耕
武延军
Current Assignee
Zhongke Nanjing Software Technology Research Institute
Institute of Software of CAS
Original Assignee
Zhongke Nanjing Software Technology Research Institute
Institute of Software of CAS
Priority date
Filing date
Publication date
Application filed by Zhongke Nanjing Software Technology Research Institute, Institute of Software of CAS filed Critical Zhongke Nanjing Software Technology Research Institute
Priority to CN202310909401.8A priority Critical patent/CN117172093A/en
Publication of CN117172093A publication Critical patent/CN117172093A/en
Pending legal-status Critical Current


Abstract

The invention discloses a machine-learning-based policy recommendation method and device for Linux system kernel configuration. The method comprises: collecting load data based on the current parameter configuration of the Linux system; identifying the load type of the current load from the load data; recommending the latest parameter configuration for the current load according to the historical optimal recommendation for that load; and generating a parameter space for the latest parameter configuration and performing parameter recommendation for the Linux system within that space using a parameter recommendation optimization model, obtaining a parameter recommendation result. The method provides a general optimization strategy for Linux kernel parameter configuration and can effectively improve resource utilization and load performance.

Description

Method and device for optimizing strategy of Linux system kernel configuration based on machine learning
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a machine-learning-based policy recommendation method and device for Linux system kernel configuration.
Background
System kernel configuration optimization is the process of adjusting and tuning the kernel of a computer operating system to improve the system's performance, security and stability. By configuring kernel parameters appropriately, the behavior and resource management of the operating system can be optimized for specific application requirements and hardware environments, achieving better performance and user experience. System kernel configuration optimization typically involves the following: hardware support, virtual memory and file systems, the scheduler and process management, network and security settings, and kernel module management. Linux systems typically ship with conservative default kernel parameters chosen for stability, which sacrifices considerable performance; these defaults can only sustain ordinary daily operation. Using the most naive and general measurement method, namely testing the latency of core operations such as system calls and context switches, performance regressions were found for some operations; in severe cases, such as the select system call, latency degraded by more than 100% several times between Linux kernel versions 4.15 and 4.20.
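The naive latency measurement described above can be sketched as follows. This times a cheap kernel round-trip from Python, so absolute numbers are only indicative of relative regressions between kernels; the helper name is illustrative, not the patent's tooling.

```python
import os
import time

def mean_call_latency_ns(fn, iterations=100_000):
    """Rough per-call latency of fn in nanoseconds, averaged over many calls."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn()
    return (time.perf_counter_ns() - start) / iterations

# os.getpid() is a cheap kernel round-trip on most platforms; the measured
# value varies by kernel version and hardware, which is exactly the kind of
# variation the background section attributes to kernel regressions.
lat = mean_call_latency_ns(os.getpid)
```

Comparing such averages across kernel versions (or across configurations) is how a simple regression like the select-call slowdown would surface.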
To address these problems and optimize the performance and resource quotas of the Linux system, so that it better utilizes hardware resources and improves kernel performance and stability, a number of feasible tuning methods exist. The mature tuning strategies fall mainly into three categories: 1. Manual tuning: drawing on expert knowledge and experience, valuable tunable parameters are selected, and according to the system load, performance and resource-utilization efficiency are improved by manually configuring and optimizing various aspects of the system. 2. Automated monitoring and adjustment: monitoring tools such as Nagios and Zabbix, combined with automation scripts, enable real-time monitoring and adjustment of system resources. For example, thresholds can be set on the usage of resources such as CPU, memory and disk, and when usage exceeds a threshold, corresponding optimization measures are taken automatically, such as starting additional processes, releasing caches, or adjusting scheduling algorithms. 3. Kernel tuning tools: the Linux system provides kernel tuning tools such as sysctl, tuned and perf. These tools help the user automatically analyze the system's performance bottlenecks and provide corresponding optimization suggestions; by running them and adjusting the system according to their output, performance can be improved quickly. However, these methods have the following disadvantages:
1. Manual tuning: it is time-consuming and demands highly specialized knowledge. Manual tuning requires in-depth understanding of the principles and characteristics of the system, the application and the associated tools, which takes considerable time and effort to learn. Furthermore, for complex systems and large-scale environments, manual tuning may require many trials and iterations. It also requires rich expertise and experience, including deep understanding of multiple fields such as operating systems, hardware architectures and applications;
2. Automated monitoring and adjustment: it lacks dynamism. Threshold-based adjustment strategies typically act on static, preset thresholds, so they cannot adapt to changes in the system environment and load, because thresholds are difficult to predict accurately or adjust to reality. When the system encounters a new load pattern or a change, a threshold strategy may fail to recognize and adapt in time, resulting in performance degradation or failure to realize the system's full potential;
3. Kernel tuning tools: the learning curve is long and configuration is complex. Configuring these tools and setting their parameters requires deep familiarity with the internal mechanisms of the operating system and the corresponding documentation. Learning how to use them can be a challenge for non-professional or novice users, and incorrect configuration may lead to system instability or performance degradation, so extensive knowledge and experience are needed to configure the tools correctly.
Disclosure of Invention
For these reasons, the invention discloses a machine-learning-based policy recommendation method and device for Linux system kernel configuration. After training, the system is continuously monitored; whenever the load condition on the system changes, the load is identified, and the closest matching tuning prior knowledge is selected from the existing tuning knowledge base for parameter recommendation. During construction of the tuning knowledge base, the optimal parameter configuration is recommended for the Linux system kernel through machine learning.
The technical scheme of the invention comprises the following steps:
a policy recommendation method for Linux system kernel configuration based on machine learning comprises the following steps:
based on the current parameter configuration of the Linux system, collecting load data;
according to the load data, the load type of the current load is identified;
according to the historical optimal recommendation of the current load, recommending the latest parameter configuration for the current load;
generating a parameter space of the latest parameter configuration, and recommending parameters of the Linux system in the parameter space by using a parameter recommendation optimization model to obtain a parameter recommendation result; the parameter recommendation optimization model is constructed based on a first machine learning method.
Further, identifying a load type of the current load according to the load data, including:
training a load category recognition model based on a second machine learning method;
preprocessing the load data;
and sending the preprocessed load data into the load type identification model to obtain the load type of the current load.
Further, the training a load category recognition model based on the second machine learning method includes:
collecting a plurality of groups of preprocessed load sample data;
training a plurality of decision trees using random forest techniques and designating an output tag as a load type for each load sample data; the split contribution degree of the features is measured through a base index or an information gain when each decision tree is trained, and the random sampling technology and the random selection feature technology are adopted to enable each decision tree to be independent;
and voting based on the result of each decision tree to obtain the output result of the load sample data.
Further, preprocessing the load data, including:
filling the missing values in the load data by adopting a moving average method to smooth the data;
and/or,
regarding invalid characteristic values in the load data, taking the change value of each point and the previous point as a new sequence;
and/or,
and filling the abnormal value in the load data with the average value of the front and rear data.
Further, the first machine learning method includes: bayesian optimization methods, genetic algorithms, particle swarm algorithms, or neural networks.
Further, generating a parameter space of the latest parameter configuration, and performing parameter recommendation of the Linux system in the parameter space by using a parameter recommendation optimization model to obtain a parameter recommendation result, wherein the method comprises the following steps:
selecting a proxy model type, wherein the proxy model type comprises: Gaussian process regression, Gaussian process regression with radial-basis-function similarity, or a random-forest-based proxy model;
selecting a sampling function, wherein the sampling function comprises: expected improvement, local expected improvement, probability of improvement, or lower confidence bound;
selecting a sampling strategy, wherein the sampling strategy comprises: a random sampling strategy, a Latin hypercube sampling strategy, or a Sobol sequence sampling strategy;
generating a Bayesian optimization model according to the selected proxy model type, the sampling function and the sampling strategy;
performing parameter optimization on the Bayesian optimization model based on a parameter space of sample parameter configuration, and obtaining a parameter recommendation optimization model;
generating a parameter space of the latest parameter configuration according to the types, the values, the step sizes and the ranges of different parameters in the latest parameter configuration;
and obtaining a parameter recommendation result corresponding to the parameter space of the latest parameter configuration according to the parameter recommendation optimization model.
Further, the method further comprises:
under the condition that the parameter recommendation result does not reach the ideal optimization effect, taking the parameter recommendation result as the current parameter configuration and returning to the step of collecting load data based on the current parameter configuration of the Linux system;
outputting the parameter recommendation result and updating the historical optimal recommendation of the current load based on the parameter recommendation result under the condition that the parameter recommendation result achieves an ideal optimization effect and the optimization effect corresponding to the parameter recommendation result is better than the optimization effect corresponding to the historical optimal recommendation;
and outputting the historical optimal recommendation of the current load under the condition that the parameter recommendation result reaches an ideal optimization effect and the optimization effect corresponding to the parameter recommendation result is worse than the optimization effect corresponding to the historical optimal recommendation.
A policy recommendation device for Linux system kernel configuration based on machine learning, comprising:
the data collection module is used for collecting load data based on the current parameter configuration of the Linux system;
the type identification module is used for identifying the load type of the current load according to the load data;
the first parameter recommendation module is used for recommending the latest parameter configuration for the current load according to the historical optimal recommendation of the current load;
the second parameter recommendation module is used for generating a parameter space of the latest parameter configuration, and performing parameter recommendation of the Linux system in the parameter space by using a parameter recommendation optimization model to obtain a parameter recommendation result; the parameter recommendation optimization model is constructed based on a first machine learning method.
An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the policy recommendation method for Linux system kernel configuration based on machine learning described in any of the above.
A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method of policy recommendation for a machine learning based Linux system kernel configuration as described in any of the above.
Compared with the prior art, the invention has at least the following advantages:
1. the invention performs load identification and parameter recommendation using machine learning methods including but not limited to Bayesian optimization, genetic algorithms, particle swarm algorithms, decision trees, random forests, and neural networks.
2. The invention builds a complete optimization logic chain, making optimization of the system kernel more efficient and sustainable.
3. In existing methods based on automated monitoring/adjustment and kernel tuning tools, parameters are mainly determined by operating-system developers and kernel teams according to the requirements and characteristics of the system, and the parameter-selection process involves inefficient trial and error. The invention uses a Bayesian optimization model to bring Linux kernel parameters into an optimization range, and with an efficient sampling function and sampling strategy it can find high-performance recommended configurations faster and more accurately.
4. Existing methods based on automated monitoring/adjustment and kernel tuning tools consider only the current resource situation and continuously optimize for the current system scene, without associating it with other states of the system, including historical states and potentially similar future states. The invention uses an ensemble learning method based on decision trees and random forests to record, learn and classify load types, so that tuning knowledge can be associated with and reused for similar loads.
drawings
FIG. 1 is a schematic diagram of a policy recommendation method for machine learning based Linux system kernel configuration of the present invention.
FIG. 2 is a flow chart of a policy recommendation method for Linux system kernel configuration based on machine learning.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention in any way.
The invention aims to provide a machine-learning-based policy recommendation method for load identification and parameter optimization of Linux system kernel configuration, which performs parameter recommendation and classification using machine learning methods including but not limited to Bayesian optimization, genetic algorithms, particle swarm algorithms, decision trees, random forests and neural networks, and is integrated into the Linux system environment. Taking Bayesian optimization as an example, the invention first collects historical data on load conditions on the system and recommends the optimal parameter configuration for the Linux kernel through Bayesian optimization. Meanwhile, labeled feature data of different load types are used to construct the identification model. After training, the system is continuously monitored; each time the load condition on the system changes, the load is identified, existing knowledge is used to rapidly recommend a potentially better parameter configuration, and further optimization is carried out on that basis.
The overall architecture of the invention is shown in figure 1, and the input configurable Linux system kernel parameters, a series of application load software and hardware parameters and characteristic values related to the running state of the system are output as a group of proper parameter recommendations. The invention is mainly divided into two parts: the system comprises a parameter tuning part and a load identification part, wherein when a recommended knowledge base is not available, only parameter tuning is performed, and load information is collected to perform load identification training; after certain data are accumulated, the load of subsequent deployment of the system is firstly identified and rapidly recommended, and then the parameter tuning process is carried out on the basis. The specific description is as follows:
1. parameter tuning part
The main purpose of the parameter tuning part is to use a Bayesian optimization method, a proxy-model-based global optimization technique, to gradually approximate the real objective function by inputting different parameter configurations as independent variables and observing the corresponding performance, so as to find a global optimum. It comprises three parts:
the method mainly aims at acquiring adjustable parameter information of a load to be optimized, and constructs a parameter space in an automatic mode according to the characteristics and requirements of the load so as to search in an optimization algorithm. The specific method comprises the following steps: firstly, a server monitoring a specific port is deployed, and parameter information is waited for to be transmitted, and can be communicated through a YAML file format or received through gRPC communication means. The parameters can be discrete, continuous or mixed, and after the parameter information is obtained, the generation of the parameter space is completed according to the type, the value, the step length, the range and other information of different parameters.
The parameter recommendation optimization model is generated using machine learning methods including but not limited to Bayesian optimization, genetic algorithms, particle swarm algorithms, and neural networks. Taking a Bayesian optimization model as an example: different proxy model types are selected, including Gaussian process regression (GP), Gaussian process regression with Radial Basis Function (RBF) similarity, and a random-forest-based proxy model (PRF); different sampling functions are selected, including Expected Improvement (EI), Local Expected Improvement (LEI), Probability of Improvement (PI), and Lower Confidence Bound (LCB); and different sampling strategies are selected, including random sampling, Latin Hypercube Sampling (LHS), and Sobol sequence sampling, on the basis of which a Bayesian optimization model is generated.
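As a concrete illustration of one of the sampling functions above, here is a stdlib sketch of Expected Improvement for a maximization objective, using the standard closed form under a Gaussian posterior; the `expected_improvement` helper is illustrative, not code from the patent.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for maximization: E[max(f - best - xi, 0)] when f ~ N(mu, sigma^2).

    mu, sigma: posterior mean and std. dev. of the surrogate at a candidate point
    best:      best objective value observed so far
    xi:        optional exploration margin
    """
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)  # degenerate (noise-free known point)
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (mu - best - xi) * cdf + sigma * pdf
```

Candidates with high posterior mean or high uncertainty both score well, which is the exploration/exploitation trade-off the sampling function encodes.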
The parameter optimization process uses machine learning methods including but not limited to Bayesian optimization, genetic algorithms, particle swarm algorithms, and neural networks. Taking the Bayesian optimization model as an example, the generated model performs parameter recommendation in the parameter space; after a recommendation is made, the system is continuously monitored, the performance of the recommended parameter configuration is received and fed back to the optimization model, and after the model is updated the next parameter recommendation is made.
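The recommend-measure-feedback loop just described can be sketched as follows. Pure random sampling stands in for the Bayesian surrogate here, so this shows only the loop structure, not the surrogate/acquisition machinery; all names are hypothetical.

```python
import random

def tune(space, evaluate, iterations=20, seed=0):
    """Iterate: sample a configuration, measure it, record the feedback.

    space:    dict mapping parameter name -> list of candidate values
    evaluate: callable taking a configuration dict, returning a score
              (e.g. measured RPS); higher is better
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    history = []  # (config, score) pairs a real Bayesian model would be fit on
    for _ in range(iterations):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = evaluate(cfg)          # observe performance of this configuration
        history.append((cfg, score))   # feedback to the (stand-in) model
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In the patent's scheme, the sampling step would be replaced by maximizing a sampling function such as EI over the proxy model's posterior.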
2. Load identification section
The main purpose of the load identification part is to identify and classify the loads in the Linux system by machine learning methods including but not limited to decision trees, random forests and neural networks, so that for different load types, prior information from the parameter tuning module or an expert knowledge base can be consulted for more accurate resource scheduling and management of the load. It comprises three parts:
the characteristic collection module, different types of applications running on the system, different pressure degrees of the same application and request distribution jointly form complex load conditions on the Linux system, software and hardware parameters of application programs, and system characteristic indexes such as CPU, memory, network and storage are closely related in load category, and data collection tools such as perf are needed to be used for real-time collection and storage. Due to the huge number of features, abnormal values and missing values of the collected load data are unavoidable. In order to enhance the robustness of the load identification model, filling the missing values by adopting a moving average method to smooth the data; and regarding invalid characteristic values, taking the change value of each point and the previous point as a new sequence, filling the abnormal value in the invalid characteristic values by using the average value of the front and rear data, and reserving 99.7% of normal distribution data for each characteristic column.
The ensemble learning classification module: with multiple groups of preprocessed sample data as input, the data are divided into a training set and a test set, and the required output label is designated as the load type of each sample to train a decision tree. The split contribution of features is measured by indexes such as the Gini index or information gain, yielding the importance of the different load features. On this basis, optimal features are selected from the training dataset to distinguish the categories, and the dataset is recursively partitioned according to the selected features to generate a decision tree. Overfitting occurs when a decision-tree model is too complex to accommodate new data; to avoid it, random forest techniques are used to train multiple decision trees, integrating multiple weak classifiers into a strong classifier, and the results of the individual trees are voted on to form a comprehensive decision. Random sampling and random feature selection make the decision trees mutually independent, improving the accuracy and stability of the overall classification.
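The two building blocks named above, Gini impurity for measuring split quality and majority voting across trees, can be sketched in a few lines; these are illustrative helpers, not the patent's code.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset: 1 - sum of squared class frequencies.

    0.0 means a pure node; higher values mean a more mixed node, so a split
    is preferred when it lowers the weighted impurity of the children.
    """
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def majority_vote(predictions):
    """Combine per-tree predicted labels for one sample into a forest decision."""
    return Counter(predictions).most_common(1)[0][0]
```

In a random forest each tree sees a bootstrap sample and a random feature subset, and `majority_vote` aggregates the independent trees into the final load-type prediction.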
The neural network classification module: if the application has many features, the historical database is rich, and the data volume is huge, a classification model based on an artificial neural network structure can be adopted, which can learn complex nonlinear patterns through combinations of many neuron nodes.
FIG. 2 shows the training process flow; the specific steps are as follows:
step 1, finishing continuous observation of load types of main flow, such as Web server load, database load, calculation load, memory load, network load and the like, and performing step 5-6 on collected performance indexes and characteristic indexes of the main flow load to generate an initial recommendation library and an identification model.
And 2, acquiring a series of characteristic values when the perf detects that the load of the Linux system changes.
And step 3, identifying the load type with the highest confidence coefficient through the initial identification model according to the past characteristic data.
And 4, performing configuration recommendation for the new load according to the historical optimal recommendation of different loads.
And 5, generating a parameter space according to the parameters of the loads, constructing a parameter optimization model for each load, and recommending the parameters.
And 6, obtaining performance of corresponding parameter configuration, feeding back to the optimization model, and carrying out next parameter recommendation after updating.
And 7, circularly performing the steps 3-6 until an ideal optimization effect is obtained, and recording the current configuration to the recommendation library for updating if the current optimal performance is better than the optimal configuration in the initial recommendation library.
Assume that in the current scenario the Linux system faces a certain Web server load and must process HTTP requests from clients and serve the corresponding Web page content. The load is reverse-proxied by Nginx, and its demands lie mainly in network bandwidth and handling large numbers of concurrent connections. The Linux system then has a series of network-related parameters to be optimized, such as net.core.somaxconn (defining the maximum number of connections waiting for service in the queue of each listening socket in the system), net.ipv4.tcp_max_syn_backlog (controlling the maximum length of the TCP SYN queue, i.e. the maximum number of connections simultaneously waiting to complete the three-way handshake), net.core.netdev_max_backlog (specifying the size of the receive queue of each network device), and the like, as well as a series of application software/hardware parameters and load running-state features, such as nginx_work_cpu_use (CPU usage of the Nginx worker processes), the physical memory actually occupied by the Nginx process, cpu.load_average (the system load average), tcp_connections (TCP connection statistics), and the like. The optimization process takes improving the stress-test indicator RPS (requests processed per second) as the optimization target, with the following specific steps:
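The kernel parameters named above are exposed under /proc/sys, with the dots in a sysctl name mapping to path separators. A small, hypothetical helper (not part of the patent) for locating and reading them could look like:

```python
def sysctl_path(name):
    """Map a sysctl name like 'net.core.somaxconn' to its /proc/sys path."""
    return "/proc/sys/" + name.replace(".", "/")

def read_sysctl(name):
    """Read the current value of a sysctl parameter (Linux only)."""
    with open(sysctl_path(name)) as f:
        return f.read().strip()
```

Writing a recommended value back would open the same path for writing (with root privileges), which is what applying a recommended configuration amounts to.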
step 1, deploying Nginx in a Linux environment in advance, performing a series of pressure tests of Apache bench and Webbench, continuously observing, obtaining performance of RPS (processing request number per second), continuously collecting the characteristic indexes, processing abnormal values and missing values occurring in the acquisition process, storing the abnormal values and the missing values as corresponding file formats, using macroscopic F1 (Macro F1) as an evaluation index, taking a load type as a target column, excluding irrelevant columns, generating random seeds, training, obtaining an initial recognition model, and recording an initial recommendation library according to relevant configuration of the best performance occurring in each process, wherein a record of a certain time is as follows: net.core.somaxconn=128, net.ipv4.tcp_max_syn_backlog=128, net.core.netdev_max_backlog=1000, and the class of the load was recorded as 'web1'.
Step 2, deploy the perf service in the Linux system and collect feature values such as cpu.load_average, network.tcp_connections and network.udp_connections, corresponding respectively to the system load average, TCP connection statistics and UDP connection statistics, capturing the current load condition and running state of the Linux system.
Step 3, input the information collected in the previous step into the initial identification model; the load type with the highest confidence is identified as 'web1'.
And 4, searching the load with the label of 'web1' in the initial recommendation library, acquiring the record, and recommending, namely configuring the recommendation as net.core.somaxconn=128, net.ipv4.tcp_max_syn_backlog=128 and net.core.netdev_max_backlog=1000.
And 5, generating a parameter space according to the parameters of the load, constructing a parameter optimization model for the load, and recommending parameters such as net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, net.core.netdev_max_backlog and the like in the parameter space.
And 6, acquiring the performance of RPS (request per second) of recommended parameter configuration, feeding back to the optimization model, and carrying out next parameter recommendation after updating.
And 7, circularly performing the steps 3-6 until an ideal optimization effect is obtained, and recording the current configuration to the recommendation library for updating if the current optimal performance is better than the optimal configuration in the initial recommendation library. For example, the record of the original recommendation library is covered with net.core.somaxconn=256, net.ipv4.tcp_max_syn_backlog=256, net.core.netdev_max_backlog=2000.
In summary, this machine-learning-based method for load identification and parameter-optimized policy recommendation for Linux system kernel configuration provides an optimization tool: the parameter optimization part comprises parameter space generation, parameter recommendation optimization model generation, and an iterative parameter optimization process; the load identification part comprises a feature collection module and classification modules including but not limited to ensemble learning and neural networks.

Claims (10)

1. The method for recommending the strategy of the kernel configuration of the Linux system based on the machine learning is characterized by comprising the following steps: based on the current parameter configuration of the Linux system, collecting load data;
according to the load data, the load type of the current load is identified;
according to the historical optimal recommendation of the current load, recommending the latest parameter configuration for the current load;
generating a parameter space of the latest parameter configuration, and recommending parameters of the Linux system in the parameter space by using a parameter recommendation optimization model to obtain a parameter recommendation result; the parameter recommendation optimization model is constructed based on a first machine learning method.
2. The method of claim 1, wherein identifying the load type of the current load according to the load data comprises:
training a load type identification model based on a second machine learning method;
preprocessing the load data;
and feeding the preprocessed load data into the load type identification model to obtain the load type of the current load.
3. The method of claim 2, wherein training the load type identification model based on the second machine learning method comprises:
collecting a plurality of groups of preprocessed load sample data;
training a plurality of decision trees using a random forest technique, with the output label designated as the load type of each load sample datum, wherein the split contribution of each feature is measured by the Gini index or information gain when each decision tree is trained, and random sampling and random feature selection techniques are used to keep the decision trees independent of one another;
and voting on the results of the decision trees to obtain the output result for the load sample data.
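The voting step of claim 3 can be sketched as a plurality vote over the per-tree predictions. The label names and the `majority_vote` helper below are illustrative assumptions; a real forest would first train each tree on a bootstrap sample with random feature subsets.

```python
from collections import Counter

def majority_vote(tree_outputs):
    """Combine per-tree predictions for one sample into a final label.
    Ties break toward the label seen first, since Counter preserves
    insertion order and most_common's sort is stable."""
    counts = Counter(tree_outputs)
    label, _ = counts.most_common(1)[0]
    return label

# Each decision tree predicts a load type independently;
# the forest output is the plurality label.
votes = ["io-intensive", "cpu-intensive", "io-intensive",
         "network-intensive", "io-intensive"]
```

Here `majority_vote(votes)` yields `"io-intensive"`, the label predicted by three of the five trees.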
4. The method of claim 2, wherein preprocessing the load data comprises:
filling missing values in the load data by a moving average method to smooth the data;
and/or,
for invalid feature values in the load data, taking the change between each point and the previous point as a new sequence;
and/or,
filling abnormal values in the load data with the average of the preceding and following data.
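The three preprocessing rules of claim 4 admit a compact sketch. The function names, the moving-average window size, and the outlier bounds below are illustrative assumptions, not values fixed by the patent.

```python
def fill_missing_moving_average(seq, window=3):
    """Rule 1: fill None entries with the moving average of preceding values."""
    out = []
    for v in seq:
        if v is None:
            recent = out[-window:]
            v = sum(recent) / len(recent) if recent else 0.0
        out.append(v)
    return out

def to_delta_sequence(seq):
    """Rule 2: replace each point with its change from the previous point
    (for counter-like features whose absolute value is not meaningful)."""
    return [b - a for a, b in zip(seq, seq[1:])]

def fill_outliers_with_neighbors(seq, low, high):
    """Rule 3: replace values outside [low, high] with the mean of the
    neighbouring points."""
    out = list(seq)
    for i, v in enumerate(out):
        if not (low <= v <= high):
            prev_v = out[i - 1] if i > 0 else None
            next_v = seq[i + 1] if i + 1 < len(seq) else None
            neighbours = [x for x in (prev_v, next_v) if x is not None]
            if neighbours:
                out[i] = sum(neighbours) / len(neighbours)
    return out
```

For example, `fill_missing_moving_average([1.0, 2.0, None, 4.0])` fills the gap with the average of the two preceding points, 1.5.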
5. The method of claim 1, wherein the first machine learning method comprises: bayesian optimization methods, genetic algorithms, particle swarm algorithms, or neural networks.
6. The method of claim 5, wherein generating the parameter space of the latest parameter configuration and performing parameter recommendation for the Linux system in the parameter space using the parameter recommendation optimization model to obtain the parameter recommendation result comprises:
selecting a proxy model type, wherein the proxy model type comprises: Gaussian process regression, Gaussian process regression with a radial basis function describing similarity, or a random-forest-based proxy model;
selecting a sampling function, wherein the sampling function comprises: expected improvement, local expected improvement, probability of improvement, or lower confidence bound;
selecting a sampling strategy, wherein the sampling strategy comprises: a random sampling strategy, a Latin hypercube sampling strategy, or a Sobol sequence sampling strategy;
generating a Bayesian optimization model according to the selected proxy model type, sampling function and sampling strategy;
optimizing the Bayesian optimization model based on a parameter space of sample parameter configurations to obtain the parameter recommendation optimization model;
generating the parameter space of the latest parameter configuration according to the type, value, step size and range of each parameter in the latest parameter configuration;
and obtaining the parameter recommendation result corresponding to the parameter space of the latest parameter configuration according to the parameter recommendation optimization model.
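The parameter-space generation step (enumerating candidates from each parameter's type, value, step size and range) can be sketched as a Cartesian product over per-parameter axes. The spec format and the `build_parameter_space` name are illustrative assumptions; the Bayesian surrogate and acquisition function would then search within this space.

```python
from itertools import product

def build_parameter_space(specs):
    """Enumerate candidate values for each parameter - integer parameters
    from (range, step), enumerated parameters from an explicit value
    list - and return every combination as a configuration dict."""
    axes = {}
    for name, spec in specs.items():
        if spec["type"] == "int":
            lo, hi = spec["range"]
            axes[name] = list(range(lo, hi + 1, spec["step"]))
        else:  # "enum"
            axes[name] = list(spec["values"])
    names = list(axes)
    return [dict(zip(names, combo)) for combo in product(*axes.values())]

# Hypothetical specs for two kernel parameters.
specs = {
    "net.core.somaxconn": {"type": "int", "range": (128, 512), "step": 128},
    "net.ipv4.tcp_syncookies": {"type": "enum", "values": [0, 1]},
}
space = build_parameter_space(specs)
```

With these specs the space has 4 x 2 = 8 candidate configurations, which a Bayesian optimizer would sample rather than exhaustively evaluate.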
7. The method of any one of claims 1 to 6, further comprising:
in a case where the parameter recommendation result does not reach the ideal optimization effect, taking the parameter recommendation result as the current parameter configuration and returning to the step of collecting load data based on the current parameter configuration of the Linux system;
in a case where the parameter recommendation result reaches the ideal optimization effect and the optimization effect corresponding to the parameter recommendation result is better than that of the historical optimal recommendation, outputting the parameter recommendation result and updating the historical optimal recommendation of the current load based on the parameter recommendation result;
and in a case where the parameter recommendation result reaches the ideal optimization effect and the optimization effect corresponding to the parameter recommendation result is worse than that of the historical optimal recommendation, outputting the historical optimal recommendation of the current load.
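The three branches of claim 7 amount to a small decision function. The score-based interface below (higher is better) and the `decide` name are illustrative assumptions; the patent leaves the concrete performance metric and "ideal effect" threshold open.

```python
def decide(result_score, history_best_score, target_score):
    """Three-branch control flow of claim 7 (sketch; scores are assumed
    'higher is better' performance metrics)."""
    if result_score < target_score:
        # Not ideal yet: use the result as the current configuration
        # and return to load-data collection.
        return ("iterate", history_best_score)
    if result_score > history_best_score:
        # Ideal and better than history: output it and update history.
        return ("output_new", result_score)
    # Ideal but no better than history: output the historical best.
    return ("output_history", history_best_score)
```

For instance, with a target of 60 a result scoring 50 triggers another iteration, 90 replaces a historical best of 80, and 70 falls back to the historical best.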
8. A policy recommendation device for Linux system kernel configuration based on machine learning, the device comprising: a data collection module for collecting load data based on the current parameter configuration of the Linux system;
a type identification module for identifying the load type of the current load according to the load data;
a first parameter recommendation module for recommending the latest parameter configuration for the current load according to the historical optimal recommendation of the current load;
and a second parameter recommendation module for generating a parameter space of the latest parameter configuration and performing parameter recommendation for the Linux system in the parameter space using a parameter recommendation optimization model to obtain a parameter recommendation result, wherein the parameter recommendation optimization model is constructed based on a first machine learning method.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the policy recommendation method for a machine learning based Linux system kernel configuration of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of policy recommendation for a machine learning based Linux system kernel configuration of any of claims 1 to 7.
CN202310909401.8A 2023-07-24 2023-07-24 Method and device for optimizing strategy of Linux system kernel configuration based on machine learning Pending CN117172093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310909401.8A CN117172093A (en) 2023-07-24 2023-07-24 Method and device for optimizing strategy of Linux system kernel configuration based on machine learning


Publications (1)

Publication Number Publication Date
CN117172093A true CN117172093A (en) 2023-12-05

Family

ID=88934465


Country Status (1)

Country Link
CN (1) CN117172093A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117789192A (en) * 2024-02-26 2024-03-29 浪潮计算机科技有限公司 Setting item management method, device, equipment and medium of basic input/output system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination