CN111930484A - Method and system for optimizing performance of thread pool of power grid information communication server


Info

Publication number
CN111930484A
Authority
CN
China
Prior art keywords
thread pool
operations
performance
task
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010727268.0A
Other languages
Chinese (zh)
Other versions
CN111930484B (en)
Inventor
祝晓辉
赵晓波
毕会静
易克难
王秉洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Training Center of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Training Center of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Training Center of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010727268.0A priority Critical patent/CN111930484B/en
Publication of CN111930484A publication Critical patent/CN111930484A/en
Application granted granted Critical
Publication of CN111930484B publication Critical patent/CN111930484B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/245 Classification techniques relating to the decision surface
    • G06F18/2451 Classification techniques relating to the decision surface linear, e.g. hyperplane
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and a system for optimizing the performance of a thread pool of a power grid information communication server. The method comprises the following steps: analyzing the factors that influence thread pool performance and establishing a thread pool performance model; inputting communication server performance test data into a thread pool tuning model based on a support vector machine to obtain the trained model's hyper-parameters; and judging, through the trained support vector machine prediction model, whether the current thread pool size is optimal, resetting the thread pool if it is not, and dynamically updating the training sample set with thread pool characteristic data that meets certain conditions. The dynamic thread pool intelligent tuning model provided by this scheme intelligently reduces the user response time of the server, clips peaks during access bursts in particular, and improves the execution efficiency of the server.

Description

Method and system for optimizing performance of thread pool of power grid information communication server
Technical Field
The invention relates to the field of smart power grids, and in particular to a method and a system for optimizing the performance of a thread pool of a power grid information communication server.
Background
With the increasing intelligence, networking and automation of power grids in China, information interaction within power information networks has become more frequent and deeper. The power grid information communication server carries the core services of information transmission in the grid information network and typically faces a large number of user requests, each of which usually requires only a short processing time. Communication servers therefore commonly employ thread pool technology to respond to such user requests efficiently and promptly. However, while a thread pool improves system performance, it raises a new problem: how to select an appropriate thread pool size to obtain the best server performance. If the thread pool is too large, its capacity to process user task requests in parallel increases, but so does the overhead the system incurs to maintain that many threads; moreover, the more threads there are, the more they compete for system resources, which can degrade system performance. If the thread pool is too small, its capacity to process user requests in parallel is weakened. Selecting an appropriate thread pool size is therefore a critical factor in server performance.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a system for optimizing the performance of a thread pool of a power grid information communication server that select an appropriate thread pool size and reduce the user response time of the server.
The technical scheme for solving the technical problems is as follows:
a method for optimizing the performance of a thread pool of a power grid information communication server comprises the following steps:
s1, analyzing factors influencing the performance of the thread pool, and establishing a thread pool performance model;
s2, inputting the performance test data of the communication server into a thread pool tuning model based on a support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
s3, judging whether the current thread pool size is the optimal size through the trained support vector machine prediction model, resetting the thread pool if the current thread pool size is not the optimal size, and dynamically updating the training sample set by selecting the thread pool characteristic data meeting certain conditions;
the thread pool tuning model is established according to the thread pool performance data throughput, the task operation time, the task blocking time and the corresponding optimal thread pool size, and the thread pool performance optimization is to select the appropriate thread pool size according to the user request number.
Further, the S1 specifically includes:
(1) Let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.
(2) The processing time of a task in the thread pool consists of the CPU operation time t_op occupied by the task and the time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait. The end-user task response time is therefore t_response = t_queue + t_op + t_wait.
(3) Let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).
(4) Let the time consumed blocking while waiting for system resources be T_block and the CPU operation time occupied by a thread in the pool be T_op; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block).
(5) The task operation time t_op is the CPU time consumed executing a user task once it has entered the thread pool. For each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and the other parameters, so t_op = T_op.
(6) Combining the above, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block).
(7) Optimizing thread pool performance means minimizing the user task response time t_response. If h is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.
Further, the S2 specifically includes:
s21, initializing the hyper-parameters of the support vector machine based on an improved fluid search optimization algorithm (IFSO); wherein the hyper-parameters comprise: penalty factor C, parameter gamma of radial basis kernel function;
and S22, performing cross training by using a support vector machine, and performing iterative optimization by using the obtained classification accuracy as an IFSO fitness function to finally obtain the optimal hyperparameter.
Further, the S22 specifically includes:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of each particle;
(7) And (5) repeating the steps (2) to (6) until a termination condition is met.
In order to improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted, namely diversified search in the first stage and refined exploration in the second stage.
Further, the S3 specifically includes:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the obtained optimal thread pool size is consistent with the current size; if not, resetting the thread pool and dynamically adjusting its size;
and judging whether the characteristic data meets the KKT (Karush-Kuhn-Tucker) condition; if so, replacing the point in the training sample set that most violates the KKT condition, and training and learning with the support vector machine to obtain a new classification hyperplane for each optimal thread pool size.
The beneficial effect of this further scheme is that, in the thread-pool tuning model based on the support vector machine, the KKT condition is used as the criterion for updating the training sample set, and the training sample set is kept at a fixed size, preventing it from growing without bound as new samples are continually introduced, so the tuning model can adapt to a complex and changeable environment.
A system for optimizing the performance of a thread pool of a power grid information communication server comprises: a thread pool performance functional-relationship establishing module, a support vector machine parameter selection module and a thread pool size tuning module;
the thread pool performance functional-relationship establishing module is used for analyzing the factors that influence thread pool performance, so as to optimize thread pool performance and thereby optimize server performance;
the support vector machine parameter selection module is used for inputting the performance test data of the communication server into a thread pool tuning model based on the support vector machine to obtain the hyperparameter of the trained thread pool tuning model;
the thread pool size tuning module is used for judging, through the trained support vector machine prediction model, whether the current thread pool size is optimal, resetting the thread pool if it is not, and dynamically updating the training sample set with thread pool characteristic data meeting certain conditions;
the thread pool tuning model is established from the thread pool performance data: throughput, task operation time, task blocking time and the corresponding optimal thread pool size; thread pool performance optimization consists in selecting an appropriate thread pool size according to the number of user requests.
Further, the thread pool performance functional-relationship establishing module is used for analyzing the factors that influence thread pool performance; the steps specifically include:
(1) Let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.
(2) The processing time of a task in the thread pool consists of the CPU operation time t_op occupied by the task and the time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait. The end-user task response time is therefore t_response = t_queue + t_op + t_wait.
(3) Let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).
(4) Let the time consumed blocking while waiting for system resources be T_block and the CPU operation time occupied by a thread in the pool be T_op; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block).
(5) The task operation time t_op is the CPU time consumed executing a user task once it has entered the thread pool. For each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and the other parameters, so t_op = T_op.
(6) Combining the above, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block).
(7) Optimizing thread pool performance means minimizing the user task response time t_response. If h is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.
Further, the support vector machine parameter selection module comprises: the device comprises a support vector machine parameter initialization module and a support vector machine parameter training module;
the support vector machine parameter initialization module is used for initializing the hyper-parameters of the support vector machine; wherein the hyper-parameters comprise: penalty factor C, parameter gamma of radial basis kernel function;
and the support vector machine parameter training module is used for iterative optimization, taking the obtained classification accuracy as the IFSO fitness function, to finally obtain the optimal hyper-parameters.
Further, the support vector machine parameter training module is specifically configured to calculate an optimal hyper-parameter applicable to the thread pool size tuning module, and the method specifically includes the steps of:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of the particle;
(7) and (5) repeating the steps (2) to (6) until a termination condition is met.
In order to improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted, namely diversified search in the first stage and refined exploration in the second stage.
Further, the thread pool size tuning module is specifically configured to:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the obtained optimal thread pool size is consistent with the current size; if not, resetting the thread pool and dynamically adjusting its size;
and judging whether the characteristic data meets the KKT (Karush-Kuhn-Tucker) condition; if so, replacing the point in the training sample set that most violates the KKT condition and submitting it to the support vector machine parameter training module to generate a new classification hyperplane for each optimal thread pool size.
The beneficial effect of adopting the further scheme is that:
the invention has the beneficial effects that:
according to the invention, an original training sample set is constructed by a large amount of communication server performance experimental data, the sizes of thread pools under different power grid scenes are dynamically adjusted by a trained support vector machine, and meanwhile, the training sample set is dynamically updated by adopting real-time performance experimental data, so that dynamic intelligent optimization of the communication server can be realized.
In the thread pool tuning model based on the support vector machine, a KKT condition is adopted as a judgment condition for updating a training sample set, the scale of a fixed size is maintained for the training sample set, the infinite increase of the sample set caused by continuous introduction of new samples is avoided, and the thread pool tuning model can adapt to a complex and changeable environment.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
Fig. 1 is a schematic flowchart of a method for optimizing the performance of a thread pool of a power grid information communication server according to an embodiment of the present invention;
FIG. 2 is a flowchart of an IFSO-based support vector machine algorithm according to another embodiment of the present invention;
FIG. 3 shows part of the normalized training sample set data provided by an embodiment of the present invention;
FIG. 4 is a graph of the classification accuracy of the IFSO-SVM provided by an embodiment of the present invention compared with SVMs optimized by different algorithms;
FIG. 5 is an iteration graph of the classification accuracy of the IFSO-SVM provided by an embodiment of the present invention compared with SVMs optimized by different algorithms;
FIG. 6 shows a comparison of the performance of a dynamic thread pool and a static thread pool intelligently adjusted by different optimization algorithms according to an embodiment of the present invention;
FIG. 7 is a graph illustrating efficiency improvement results of an IFSO-SVM according to an embodiment of the present invention compared to different algorithms;
fig. 8 is a structural diagram of a system for optimizing the performance of a thread pool of a power grid information communication server according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings, which are set forth to illustrate the invention but are not to be construed as limiting its scope.
In the power communication network environment, a communication server generally adopts thread pool technology to respond to user requests promptly and efficiently: the server creates a group of threads for the network application in advance; when a user service request arrives, a created thread in the pool is called directly to serve the user; and after the user task finishes, the threads are not destroyed but wait for the next group of user service requests. However, while the thread pool improves system performance, it also creates a new problem: if the thread pool is too large, its capacity to process user task requests in parallel increases, but the system also incurs more overhead maintaining that many threads; if the thread pool is too small, its capacity to process user task requests in parallel is weakened. The thread pool performance optimization problem is to select an appropriate thread pool size according to the number of user requests.
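As an illustration of the pre-created thread group just described, a minimal Java sketch follows; the class name, the fixed size of 30, and the dispatch method are illustrative assumptions, not taken from the patent:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CommServer {
    // A group of threads created in advance; workers are reused across
    // user requests instead of being created and destroyed per task.
    private final ExecutorService pool = Executors.newFixedThreadPool(30);

    // Called whenever a user service request arrives: an idle pooled
    // thread picks the task up and survives after the task completes.
    public void onRequest(Runnable userTask) {
        pool.submit(userTask);
    }
}
```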
As shown in fig. 1, a method for optimizing performance of a thread pool of a power grid information communication server according to an embodiment of the present invention includes:
s1, analyzing factors influencing the performance of the thread pool, and establishing a thread pool performance model;
(1) Let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.
(2) The processing time of a task in the thread pool consists of the CPU operation time t_op occupied by the task and the time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait. The end-user task response time is therefore t_response = t_queue + t_op + t_wait.
(3) Let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).
(4) Let the time consumed blocking while waiting for system resources be T_block and the CPU operation time occupied by a thread in the pool be T_op; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block).
(5) The task operation time t_op is the CPU time consumed executing a user task once it has entered the thread pool. For each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and the other parameters, so t_op = T_op.
(6) Combining the above, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block).
(7) Optimizing thread pool performance means minimizing the user task response time t_response. If h is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.
It can thus be concluded that the factors affecting thread pool performance are throughput, task operation time, task blocking time and thread pool size. In practice, throughput, task operation time and task blocking time are properties of the user tasks and cannot be forcibly adjusted. The thread pool size, however, can be adjusted dynamically in the thread pool program to adapt to objective changes in the user tasks, which is how the thread pool is tuned;
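A minimal sketch of such a dynamic adjustment follows, using the standard java.util.concurrent.ThreadPoolExecutor; the patent does not specify an implementation, so the class, the initial size of 30, and the resize ordering are illustrative assumptions:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizableThreadPool {
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            30, 30, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    /** Reset the pool to the size the classifier predicts as optimal. */
    public synchronized void resize(int optimalSize) {
        if (optimalSize == executor.getCorePoolSize()) {
            return;                                   // already at the optimum
        }
        if (optimalSize > executor.getMaximumPoolSize()) {
            executor.setMaximumPoolSize(optimalSize); // grow: raise ceiling first
            executor.setCorePoolSize(optimalSize);
        } else {
            executor.setCorePoolSize(optimalSize);    // shrink: lower core first;
            executor.setMaximumPoolSize(optimalSize); // idle surplus threads exit
        }
    }
}
```

Ordering the two setter calls this way keeps the invariant corePoolSize ≤ maximumPoolSize at every step, so the resize never throws.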
s2, inputting the performance test data of the communication server into a thread pool tuning model based on a support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
in one embodiment, a thread pool tuning model based on a support vector machine is established according to the thread pool performance data throughput, the task operation time, the task blocking time and the corresponding optimal thread pool size.
A support vector machine (SVM) is an intelligent method with strong generalization capability. Given training sample points (x_i, y_i), i = 1, …, n, with x ∈ R^n and y ∈ {−1, +1}, a separating hyperplane has the equation (w·x) + b = 0. There are infinitely many such hyperplanes; the optimal one can be found by solving
min_{w,b,ξ} (1/2)‖w‖² + C Σ_{i=1..n} ξ_i
s.t. y_i((w·x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, …, n,
where C is a penalty factor and the ξ_i are relaxation variables.
Introducing Lagrange multipliers α_i converts this into the dual problem
max_α Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j (x_i·x_j)
s.t. Σ_{i=1..n} α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, 2, …, n.
The optimal solution gives
w* = Σ_{i=1..n} α_i* y_i x_i,
and, from any support vector x_j,
b* = y_j − Σ_{i=1..n} α_i* y_i (x_i·x_j).
The optimal hyperplane equation is then
Σ_{i∈SV} α_i* y_i (x_i·x) + b* = 0,
where SV is the set of support vectors; the summation is actually performed over the support vectors only. An unknown sample x can therefore be classified by judging the sign of y = Σ_{i∈SV} α_i* y_i (x_i·x) + b*.
For the nonlinear case, a kernel function K(x, y) = φ(x)·φ(y) can be introduced to convert the problem into a linear separation problem in a high-dimensional feature space, so the dual problem of the nonlinear support vector machine becomes
max_α Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j K(x_i, x_j)
s.t. Σ_{i=1..n} α_i y_i = 0, 0 ≤ α_i ≤ C.
The optimal classification hyperplane is obtained by solving this problem, and the decision function of the support vector machine is
f(x) = sgn( Σ_{i∈SV} α_i* y_i K(x_i, x) + b* ).
commonly used kernel functions include linear kernel functions, polynomial kernel functions, and radial basis kernel functions. The radial basis kernel function has the capability of mapping the nonlinear classification problem into the linear classification problem of an infinite dimensional space, so that the radial basis kernel function is widely applied to the research of the support vector machine. The radial basis kernel function is defined as:
K(x_i, x) = exp(−γ‖x_i − x‖²), γ > 0,
where γ is a parameter of the radial basis kernel function.
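A small Java sketch of this kernel and the decision function above; the array-based representation of the support vectors is an illustrative assumption:

```java
public final class RbfSvm {
    /** K(xi, x) = exp(-gamma * ||xi - x||^2), gamma > 0. */
    static double rbf(double[] xi, double[] x, double gamma) {
        double sq = 0.0;
        for (int d = 0; d < xi.length; d++) {
            double diff = xi[d] - x[d];
            sq += diff * diff;
        }
        return Math.exp(-gamma * sq);
    }

    /** f(x) = sgn( sum over support vectors of alpha_i * y_i * K(x_i, x) + b ). */
    static int classify(double[][] supportVectors, double[] alphaTimesY,
                        double b, double gamma, double[] x) {
        double f = b;
        for (int i = 0; i < supportVectors.length; i++) {
            f += alphaTimesY[i] * rbf(supportVectors[i], x, gamma);
        }
        return f >= 0 ? +1 : -1;
    }
}
```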
The prediction performance of the support vector machine is determined by the penalty factor C and the radial basis kernel parameter γ. How to select C and γ so as to optimize the SVM's prediction performance therefore becomes a key problem. The direct method is exhaustive search, but its computational cost is high and it does not necessarily find the optimal parameter combination. More often, swarm intelligence optimization algorithms are used to search for the optimal parameter combination. In one embodiment, the fluid search optimization (FSO) algorithm, a relatively new algorithm with good performance, is introduced and improved to enhance the prediction performance of the support vector machine.
The basic FSO algorithm steps are as follows:
(1) Initialize the position of each fluid particle and adjust the parameters: initialize the fluid particle velocity V_i = 0, the direction of fluid motion to 0, the fluid density ρ_i = 1 and the normal pressure p_0 = 1 (i = 1, 2, …, n).
(2) Compute the objective function values; update the optimal objective value y_best, the optimal position X_best and the worst objective value y_worst; compute the fluid particle density ρ = m/l^D.
(3) Normalize the objective function values and compute the pressure of each fluid particle (formula shown in the accompanying figure).
(4) Compute the pressure exerted on the current particle by the other fluid particles, and the corresponding velocity direction (formulas shown in the accompanying figures).
(5) Compute the fluid velocity value v_i according to the Bernoulli equation (formula shown in the accompanying figure) and, with the direction from step (4), form the velocity vector V_i = direction · v_i · rand.
(6) Update the positions: X_{i+1} = X_i + V_i.
(7) Repeat steps (2) to (6) until the termination condition is met.
Because the original FSO algorithm is computationally heavy and easily falls into local optima, it is improved in two respects for the support vector machine optimization setting:
Improvement 1: since computing the pressure direction in step (4) is expensive, it is simplified to a cheaper form (formula shown in the accompanying figure).
Improvement 2: to improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted: diversified search in the first stage and refined exploration in the second. When the number of iterations reaches the threshold M' at the end of the first stage, the search space is reduced to the neighborhood of the current optimum and the pel length is reduced exponentially, i.e. l · e^{(M'−t)/σ}, for refined exploration, where σ sets the search precision.
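Below is a minimal Java sketch of this two-stage loop, under stated assumptions: the pressure and Bernoulli-velocity formulas appear only as figures in the original, so a simple random pull toward the best-known position stands in for the pressure-driven velocity; only the step-length shrink l · e^{(M'−t)/σ} follows the text. Objective, mPrime and the other names are illustrative.

```java
import java.util.Random;

public class Ifso {
    interface Objective { double eval(double[] x); }

    /** Two-stage IFSO sketch: diversified search until iteration mPrime,
     *  then refined exploration with exponentially shrinking step length. */
    static double[] optimize(Objective f, double[] lo, double[] hi,
                             int particles, int maxIter, int mPrime, double sigma) {
        Random rnd = new Random();
        int dim = lo.length;
        double[][] x = new double[particles][dim];
        double[] best = null;
        double bestVal = Double.POSITIVE_INFINITY;
        for (double[] xi : x)                       // (1) initialize positions
            for (int d = 0; d < dim; d++)
                xi[d] = lo[d] + rnd.nextDouble() * (hi[d] - lo[d]);
        double l = 1.0;                             // pel length (move scale)
        for (int t = 0; t < maxIter; t++) {
            for (double[] xi : x) {                 // (2) evaluate, track best
                double y = f.eval(xi);
                if (y < bestVal) { bestVal = y; best = xi.clone(); }
            }
            // second stage: shrink the step as l * e^{(M'-t)/sigma}
            double step = (t > mPrime) ? l * Math.exp((mPrime - t) / sigma) : l;
            // (3)-(6) pressure-driven move, simplified here to a random
            // pull toward the best position found so far
            for (double[] xi : x)
                for (int d = 0; d < dim; d++)
                    xi[d] += step * rnd.nextDouble() * (best[d] - xi[d]);
        }
        return best;                                // (7) after termination
    }
}
```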
Fig. 2 shows the IFSO-based support vector machine algorithm. First, the values of the SVM hyper-parameters C and γ are initialized randomly through IFSO; they are then passed to the SVM for cross-training, the resulting SVM classification accuracy is used as the IFSO fitness function for iterative optimization, and finally the optimal SVM hyper-parameters are found.
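A sketch of the fitness evaluation such a search would call, using the open-source LIBSVM Java API named later in the text; the wiring shown here is an assumption. It returns the 5-fold cross-validated accuracy at a candidate (C, γ), which IFSO maximizes (or minimizes as a negated value with the loop sketched above):

```java
import libsvm.*;

public class SvmFitness {
    /** Cross-validated classification accuracy of an RBF-kernel SVM at the
     *  candidate hyper-parameters (c, gamma); prob holds the training set. */
    static double fitness(svm_problem prob, double c, double gamma) {
        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.C_SVC;
        param.kernel_type = svm_parameter.RBF;
        param.C = c;
        param.gamma = gamma;
        param.cache_size = 100;
        param.eps = 1e-3;
        param.nr_weight = 0;                 // no per-class weighting
        double[] predicted = new double[prob.l];
        svm.svm_cross_validation(prob, param, 5, predicted); // 5-fold CV
        int correct = 0;
        for (int i = 0; i < prob.l; i++)
            if (predicted[i] == prob.y[i]) correct++;
        return (double) correct / prob.l;
    }
}
```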
Each sample in the training sample set is formed from the thread pool performance data obtained through experiments: throughput, task operation time, task blocking time and the optimal thread pool size. Throughput, task operation time and task blocking time constitute the three feature attributes of a sample, and the optimal thread pool size constitutes the sample's classification label. The correspondence between the feature attributes and the classification label is obtained through the learning and training of the support vector machine. Thus, when new feature data, i.e. a test sample, arrives, the classification label corresponding to it, i.e. the optimal thread pool size, can be obtained from this correspondence, providing a basis for adjusting the thread pool size. The effect of thread pool tuning is directly related to the selection of the support vector machine's training sample set: the more comprehensive the training samples, the more accurate the resulting optimal thread pool size. Fig. 1 shows the thread pool tuning model framework based on a support vector machine; the training process of the model consists of the following steps:
step1, classifying the performance characteristic data of the thread pool through experiments, taking throughput, task operation time and task blocking time as characteristic variables, classifying the characteristic variables into a plurality of classes according to the optimal thread pool size, and constructing an initial training sample set;
step2, training and learning through a support vector machine to obtain a classification hyperplane of each optimal thread pool size.
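For Step 1, one sample could be encoded for LIBSVM as in the sketch below; the field names follow the LIBSVM Java API, while the record layout and method are assumptions:

```java
import libsvm.*;

public class SampleBuilder {
    /** Encode one monitored record as a LIBSVM sample. prob.x and prob.y
     *  are assumed pre-allocated with prob.l entries. */
    static void addSample(svm_problem prob, int i, double throughput,
                          double opTime, double blockTime, int poolSizeClass) {
        double[] raw = { throughput, opTime, blockTime }; // 3 feature attributes
        svm_node[] features = new svm_node[3];
        for (int d = 0; d < 3; d++) {
            features[d] = new svm_node();
            features[d].index = d + 1;  // LIBSVM feature indices start at 1
            features[d].value = raw[d]; // values normalized beforehand
        }
        prob.x[i] = features;
        prob.y[i] = poolSizeClass;      // label: the optimal pool-size class
    }
}
```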
And S3, judging, through the trained support vector machine prediction model, whether the current thread pool size is the optimal size; if not, resetting the thread pool; and dynamically updating the training sample set with selected thread pool characteristic data that meets certain conditions.
The effect of thread pool tuning is directly related to the selection of the support vector machine's training sample set: the more comprehensive the training samples, the more accurate the resulting optimal thread pool size. Meanwhile, because actual conditions are complex and changeable, a fixed training sample set selected in advance should change to adapt to continually changing new situations. Therefore, the monitored thread pool characteristic data is judged against certain conditions: if the condition is met, the training sample set is dynamically updated; if not, the data is simply discarded. The specific steps are as follows:
step1, inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the optimal thread pool size category.
Step2 judges whether the obtained optimal thread pool size is consistent with the current size, if not, the thread pool is reset, and the size of the thread pool is dynamically adjusted.
Step 3, judging whether the characteristic data meets the KKT (Karush-Kuhn-Tucker) condition; if so, replacing the point in the training sample set that most violates the KKT condition and returning to Step 2; if not, discarding the data.
In the thread-pool tuning model based on the support vector machine, the KKT condition is used as the criterion for updating the training sample set. If the test sample meets the KKT condition, it falls into the support vector region and contributes to the classification decision function, so the training sample set needs to be updated and the support vector machine retrained; conversely, if the test sample does not satisfy the KKT condition, it does not fall into the support vector region and has no effect on the decision function, so the support vector machine does not need to be retrained.
The training sample set should be kept at a fixed size and not grow without bound as new samples are introduced. Therefore, when a new sample is added, the sample contributing least to the classification decision function, i.e. the one that most seriously violates the KKT condition, is removed, keeping the number of training samples unchanged.
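A sketch of the KKT test used to rank training points for eviction, based on the standard C-SVM optimality conditions (the tolerance handling is an assumption): for a sample with multiplier α and margin m = y·f(x), α = 0 requires m ≥ 1, 0 < α < C requires m = 1, and α = C requires m ≤ 1; the returned value measures how far the sample is from satisfying its case.

```java
public final class KktCheck {
    /** Degree to which a training sample violates the C-SVM KKT conditions.
     *  alpha: the sample's Lagrange multiplier; margin: y_i * f(x_i). */
    static double kktViolation(double alpha, double margin, double c, double tol) {
        if (alpha <= tol)     return Math.max(0.0, 1.0 - margin); // alpha = 0: need margin >= 1
        if (alpha >= c - tol) return Math.max(0.0, margin - 1.0); // alpha = C: need margin <= 1
        return Math.abs(margin - 1.0);                            // 0 < alpha < C: need margin = 1
    }
}
```

The training point with the largest violation value is the one "most violating the KKT condition" and is the candidate replaced when a qualifying new sample arrives.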
An original training sample set is constructed through a large amount of communication server performance experimental data, the sizes of thread pools under different power grid scenes are dynamically adjusted through a trained support vector machine, and meanwhile, the training sample set is dynamically updated through real-time performance experimental data, so that dynamic intelligent optimization of the communication server can be realized.
Preferably, in any of the above embodiments, S2 specifically includes:
s21, initializing the hyper-parameters of the support vector machine based on an improved fluid search optimization algorithm (IFSO); wherein the hyper-parameters comprise: penalty factor C, parameter gamma of radial basis kernel function;
and S22, performing cross training by using a support vector machine, and performing iterative optimization by using the obtained classification accuracy as an IFSO fitness function to finally obtain the optimal hyperparameter.
By adopting the IFSO-SVM-based thread pool tuning model, higher classification accuracy can be obtained and the optimization effect of FSO is improved, making it easier to escape local optima; the user response time of the server is intelligently reduced, peaks are clipped especially during access bursts, and the execution efficiency of the server is improved.
Preferably, in any of the above embodiments, S22 specifically includes:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of the particle;
(7) and (5) repeating the steps (2) to (6) until a termination condition is met.
In order to improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted, namely diversified search in the first stage and refined exploration in the second stage.
By simplifying the computation of the pressure direction and dividing the optimization process into two stages, diversified search and refined exploration, the original FSO's tendency to fall into local optima and its heavy computational load are avoided, greatly improving the convergence speed of model training.
Preferably, in any of the above embodiments, S3 specifically includes:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the obtained optimal thread pool size is consistent with the current size; if not, resetting the thread pool and dynamically adjusting its size;
and judging whether the characteristic data meets the KKT (Karush-Kuhn-Tucker) condition; if so, replacing the point in the training sample set that most violates the KKT condition, and training and learning with the support vector machine to obtain a new classification hyperplane for each optimal thread pool size.
In the thread-pool tuning model based on the support vector machine, the KKT condition is used as the criterion for updating the training sample set, and the training sample set is kept at a fixed size, which prevents it from growing without bound as new samples are continually introduced and allows the tuning model to adapt to a complex and changeable environment.
In other embodiments of the invention, following the thread pool performance optimization method of this scheme, the throughput, task operation time and task blocking time collected in real time by a power grid information communication server (configured with a 2.4 GHz Intel Xeon E5-2665 CPU, 32 GB of memory and a 10 TB hard disk) are used as feature variables, and the thread pool size is adjusted intelligently by the support vector machine. One group of server characteristic data was collected every 15 minutes, yielding 160 groups over the working days of one week. For the 160 groups of characteristic data, the optimal thread pool size was determined through simulation experiments, with user task response time as the evaluation index of server performance, and the training sample set was constructed.
Meanwhile, to eliminate the influence of differing dimensions (units) among the collected data features and to facilitate SVM training, the feature data are normalized:
g = (d − d_min) / (d_max − d_min),
where d is the original value of the feature, d_min is the minimum value of the feature data, d_max is the maximum value of the feature data, and g is the normalized feature value. Part of the normalized training sample set data is shown in fig. 3.
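A small sketch matching the formula above; per-column processing is an assumption about the data layout:

```java
public final class Normalizer {
    /** g = (d - dmin) / (dmax - dmin), applied to one feature column. */
    static double[] normalize(double[] column) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double d : column) {
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        double[] g = new double[column.length];
        for (int i = 0; i < column.length; i++) {
            g[i] = (column[i] - min) / (max - min); // assumes max > min
        }
        return g;
    }
}
```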
To verify the performance of the IFSO-SVM, experiments compared its test results against the original fluid search optimization FSO-SVM, the particle swarm optimization PSO-SVM, the artificial bee colony ABC-SVM and the firefly algorithm FA-SVM. The support vector machine classifier uses the open-source software LIBSVM, and the average classification accuracy of each model is evaluated by 5-fold cross-validation. The algorithm parameters are set as follows: the number of particles is 30 and the maximum number of iterations is 50. The FSO and IFSO parameters are: density limit ratio θ = 20%, diversified search ratio M' = 0.7 and σ = 40. The particle swarm parameters are c1 = c2 = 2, ω_begin = 0.9 and ω_end = 0.2. The artificial bee colony parameter is P_onlooker = P_employed = 0.5. The firefly algorithm parameters are α = 0.5 and β_min = 0.2, with the remaining parameter set to 1. The search range of the SVM parameter C is [0.01, 35000] and that of γ is [0.0001, 32].
Experiment 1: classification results of the IFSO-SVM. As can be seen from fig. 4, the IFSO-SVM achieves the highest classification accuracy of all the compared algorithms, followed in order by ABC-SVM, FSO-SVM, FA-SVM and PSO-SVM. The improved IFSO-SVM's average classification accuracy is 1.82 percentage points higher than the original FSO-SVM's and 1.61 percentage points higher than the second-placed ABC-SVM's. The improvement to FSO thus makes the global optimum easier to find and raises the SVM's classification accuracy.
Fig. 5 shows the iteration curves of classification accuracy obtained by the different algorithms optimizing the SVM. At the initial iteration, each algorithm initializes the SVM parameters randomly, so the obtained classification accuracies are essentially the same, around 85%. As the search iterations progress, the IFSO-SVM improves continually and finally attains a classification accuracy clearly higher than the other optimization algorithms, with ABC-SVM and FSO-SVM below it. Notably, the original FSO-SVM performs slightly worse than ABC-SVM, while IFSO exceeds the original FSO's classification accuracy throughout the iterations, showing that the improvements strengthen FSO's search, make it easier to jump out of local optima and yield a better classification result.
Experiment 2: server performance test. Fig. 6 shows the performance comparison between the dynamic thread pools intelligently adjusted by the different optimization algorithms and a static thread pool (size fixed at 30); the adjustment interval of the dynamic pools is 1 minute. The whole experiment lasted 45 minutes, with minutes 11 to 33 being the access peak period. As is apparent from fig. 6, the performance of the intelligent dynamic pools based on the various optimization algorithms is significantly better than that of the static pool. Among the dynamic pools, the original FSO-SVM's test results are comparable to the ABC-SVM's, similar to the results obtained during SVM training, while the user response time of the IFSO-SVM dynamic pool is shorter than that of the other compared algorithms. In particular, at the access peak, the rise in user response time of the IFSO-SVM dynamic pool is clearly gentler than that of the other optimization algorithms, showing that the IFSO improvement over the original FSO works well and can intelligently clip the peaks in server response time.
Fig. 7 shows the efficiency improvement of the IFSO-SVM over the other algorithms, including the average, minimum and maximum efficiency improvements within the 45 minutes. The performance of the dynamically tuned thread pool is clearly better than that of the static pool, and the IFSO-SVM improves on the other compared algorithms by 9.12% to 38.00% on average. It can therefore be concluded that the IFSO-SVM dynamic thread pool algorithm is effective for intelligent optimization of communication server performance.
In another embodiment of the invention, a system for optimizing the performance of a thread pool of a grid information communication server is provided. As shown in fig. 8, the system comprises: a thread pool performance functional-relationship establishing module 11, a thread pool tuning model training module 12 and a thread pool size tuning module 13;
the thread pool performance functional-relationship establishing module 11 is used for analyzing the factors that influence thread pool performance, so as to optimize thread pool performance and thereby optimize server performance;
the thread pool tuning model training module 12 is used for inputting the performance test data of the communication server into a thread pool tuning model based on a support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
the thread pool size tuning module 13 is configured to judge, through the trained support vector machine prediction model, whether the current thread pool size is optimal, reset the thread pool if it is not, and dynamically update the training sample set with thread pool characteristic data meeting certain conditions;
the thread pool tuning model is established from the thread pool performance data: throughput, task operation time, task blocking time and the corresponding optimal thread pool size; thread pool performance optimization consists in selecting an appropriate thread pool size according to the number of user requests.
An original training sample set is constructed through a large amount of communication server performance experimental data, the sizes of thread pools under different power grid scenes are dynamically adjusted through a trained support vector machine, and meanwhile, the training sample set is dynamically updated through real-time performance experimental data, so that dynamic intelligent optimization of the communication server can be realized.
Preferably, in any of the above embodiments, the thread pool performance functional-relationship establishing module 11 is configured to analyze the factors that influence thread pool performance; the steps specifically include:
(1) Let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.
(2) The processing time of a task in the thread pool consists of the CPU operation time t_op occupied by the task and the time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait. The end-user task response time is therefore t_response = t_queue + t_op + t_wait.
(3) Let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).
(4) Let the time consumed blocking while waiting for system resources be T_block and the CPU operation time occupied by a thread in the pool be T_op; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block).
(5) The task operation time t_op is the CPU time consumed executing a user task once it has entered the thread pool. For each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and the other parameters, so t_op = T_op.
(6) Combining the above, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block).
(7) Optimizing thread pool performance means minimizing the user task response time t_response. If h is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.
Preferably, in any of the above embodiments, the support vector machine parameter selection module 12 comprises: the device comprises a support vector machine parameter initialization module and a support vector machine parameter training module;
the support vector machine parameter initialization module is used for initializing the hyper-parameters of the support vector machine; wherein the hyper-parameters comprise: penalty factor C, parameter gamma of radial basis kernel function;
and the support vector machine parameter training module is used for iterative optimization, taking the obtained classification accuracy as the IFSO fitness function, to finally obtain the optimal hyper-parameters.
By adopting the IFSO-SVM-based thread pool tuning model, higher classification accuracy is obtained and the optimization effect of FSO is improved, making it easier to jump out of local optima; the user response time of the server is intelligently reduced, peaks are clipped especially during access bursts, and the execution efficiency of the server is improved.
Preferably, in any of the above embodiments, the support vector machine parameter training module is specifically configured to calculate an optimal hyperparameter applicable to the thread pool size tuning module 13, and the steps specifically include:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of the particle;
(7) and (5) repeating the steps (2) to (6) until a termination condition is met.
In order to improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted, namely diversified search in the first stage and refined exploration in the second stage.
The beneficial effect of this further scheme is that, by simplifying the computation of the pressure direction and dividing the optimization process into two stages, diversified search and refined exploration, the original FSO's tendency to fall into local optima and its heavy computational load are avoided, and the convergence speed of model training can be greatly improved.
Preferably, in any of the embodiments described above, the thread pool size tuning module 13 is specifically configured to:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the obtained optimal thread pool size is consistent with the current size; if not, resetting the thread pool and dynamically adjusting its size;
and judging whether the characteristic data meets the KKT (Karush-Kuhn-Tucker) condition; if so, replacing the point in the training sample set that most violates the KKT condition and submitting it to the support vector machine parameter training module to generate a new classification hyperplane for each optimal thread pool size.
In the thread-pool tuning model based on the support vector machine, the KKT condition is used as the criterion for updating the training sample set, and the training sample set is kept at a fixed size, preventing it from growing without bound as new samples are continually introduced, so the tuning model can adapt to a complex and changeable environment.
The above examples are intended only to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the invention.

Claims (10)

1. A method for optimizing the performance of a thread pool of a power grid information communication server is characterized by comprising the following steps:
s1, analyzing factors influencing the performance of the thread pool, and establishing a thread pool performance model;
s2, inputting the performance test data of the communication server into a thread pool tuning model based on a support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
s3, judging whether the current thread pool size is the optimal size through the trained support vector machine prediction model, resetting the thread pool if the current thread pool size is not the optimal size, and dynamically updating the training sample set by selecting the thread pool characteristic data meeting certain conditions;
and the thread pool tuning model is established according to the thread pool performance data (throughput, task operation time and task blocking time) and the corresponding optimal thread pool size.
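Read as a whole, claim 1 describes a monitor-predict-resize loop. A high-level sketch, with monitor.sample() and pool.resize() as hypothetical placeholders for whatever interfaces the communication server exposes and svm_model as the classifier trained in S2:

```python
import time

def tuning_loop(svm_model, pool, monitor, interval_s=5.0):
    """Periodically classify the live load (S3) and resize the pool when the
    predicted optimal size class differs from the current size."""
    while True:
        feats = monitor.sample()        # [throughput, task op time, task block time]
        best_size = int(svm_model.predict([feats])[0])
        if best_size != pool.size():
            pool.resize(best_size)      # reset the thread pool
        time.sleep(interval_s)
```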
2. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 1, wherein the S1 specifically includes:
(1) let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool;
(2) the processing time of a task in the thread pool comprises the CPU operation time t_op occupied by the task and the time t_wait for which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait, so the user task response time is t_response = t_queue + t_op + t_wait;
(3) let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait);
(4) let T_block be the time consumed while blocked waiting for system resources and T_op be the CPU operation time occupied by a thread in the pool; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block);
(5) the task operation time t_op is the CPU time consumed executing a user task after it enters the thread pool; for each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and other parameters, so t_op = T_op;
(6) in summary, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block);
(7) to optimize the performance of the thread pool, i.e. to make the user task response time t_response take its minimum value, and assuming the above expression is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0, where the derivative is taken with respect to the thread pool size n.
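The claim leaves f and g unspecified. Purely as an illustration, the sketch below fills them with assumed forms (per-thread queueing load for f, blocking plus per-thread scheduling overhead for g) to show that h(n, m, T_op, T_block) then has an interior minimum that a one-dimensional scan over n can locate; none of these formulas come from the patent.

```python
def h(n, m, t_op, t_block, switch_cost=2e-4):
    """Illustrative response-time model t_response = h(n, m, T_op, T_block)."""
    t_wait = t_block + switch_cost * n      # assumed g: blocking + context switching
    t_service = t_op + t_wait
    t_queue = m * t_service ** 2 / n        # assumed f: queueing load per thread
    return t_queue + t_op + t_wait

def n_best(m, t_op, t_block, n_max=512):
    """Scan integer pool sizes for the minimizer of h (the n_best of step (7))."""
    return min(range(1, n_max + 1), key=lambda n: h(n, m, t_op, t_block))

# e.g. 200 tasks/s, 5 ms CPU time and 20 ms blocking per task:
print(n_best(m=200, t_op=0.005, t_block=0.020))
```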
3. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 1, wherein the S2 specifically includes:
s21, initializing the hyper-parameters of the support vector machine; the hyper-parameters include: penalty factor C, parameter gamma of radial basis kernel function;
and S22, performing cross training by using a support vector machine, and performing iterative optimization by using the obtained classification accuracy as a fitness function of the improved fluid search optimization algorithm to finally obtain the optimal hyperparameter.
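S22's fitness function is concrete enough to sketch directly: the k-fold cross-validated classification accuracy of an RBF-kernel SVM at a candidate (C, gamma). scikit-learn is assumed for brevity and make_fitness is an illustrative helper name; X holds monitoring features (throughput, task operation time, task blocking time) and y the optimal pool size classes.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def make_fitness(X, y, folds=5):
    """Fitness of a hyper-parameter pair (C, gamma): mean k-fold CV accuracy."""
    def fitness(params):
        C, gamma = params
        clf = SVC(kernel="rbf", C=C, gamma=gamma)
        return cross_val_score(clf, X, y, cv=folds).mean()
    return fitness

# Plugged into the fso_optimize sketch given earlier, e.g.:
# bounds = np.array([[0.1, 100.0], [1e-4, 10.0]])   # search ranges for C, gamma
# (C_best, gamma_best), acc = fso_optimize(make_fitness(X, y), bounds)
```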
4. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 3, wherein the S22 specifically includes:
(1) initializing the position, velocity, density, moving direction and normal pressure of each fluid particle;
(2) calculating the objective function values; updating the optimal objective function value, the optimal position and the worst objective function value; and calculating the density of each fluid particle;
(3) normalizing the objective function values and calculating the pressure of each fluid particle;
(4) calculating the pressure exerted by the other fluid particles on the current particle and the resulting velocity direction;
(5) calculating the fluid velocity magnitude and velocity vector according to the Bernoulli equation;
(6) updating the particle positions;
(7) repeating steps (2) to (6) until a termination condition is met.
5. The method for optimizing the performance of the thread pool of the grid information communication server according to any one of claims 1 to 4, wherein the step S3 specifically includes:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the size of the obtained optimal thread pool is consistent with the current size, if not, resetting the thread pool, and dynamically adjusting the size of the thread pool;
and judging whether the characteristic data satisfies the KKT condition; if so, replacing the point most violating the KKT condition in the training sample set with it, and retraining the support vector machine to obtain new classification hyperplanes for each optimal thread pool size.
6. A system for optimizing the performance of a thread pool of a power grid information communication server, characterized by comprising: a function relation establishing module for the optimal thread pool performance, a support vector machine parameter selection module and a thread pool size optimizing module;
the function relation establishing module for the optimal thread pool performance is used for analyzing the factors influencing thread pool performance, so as to optimize the thread pool performance and thereby optimize the server performance;
the support vector machine parameter selection module is used for inputting the performance test data of the communication server into a thread pool tuning model based on the support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
the thread pool size optimizing module is used for judging whether the current thread pool size is the optimal size through the trained support vector machine prediction model, resetting the thread pool if it is not, and dynamically updating the training sample set with thread pool characteristic data meeting certain conditions;
and the thread pool tuning model is established according to the thread pool performance data (throughput, task operation time and task blocking time) and the corresponding optimal thread pool size.
7. The system for optimizing the performance of the thread pool of the power grid information communication server according to claim 6, wherein the function relation establishing module for the optimal thread pool performance analyzes the factors influencing thread pool performance through the following steps:
(1) let the user task response time be t_response, the waiting time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool;
(2) the processing time of a task in the thread pool comprises the CPU operation time t_op occupied by the task and the time t_wait for which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait, so the user task response time is t_response = t_queue + t_op + t_wait;
(3) let the system throughput be m, the thread pool size be n and the task operation time be t_op; the mathematical model of the task queuing time is then t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait);
(4) let T_block be the time consumed while blocked waiting for system resources and T_op be the CPU operation time occupied by a thread in the pool; the mathematical model of the task waiting time can then be written as t_wait = g(n, T_op, T_block);
(5) the task operation time t_op is the CPU time consumed executing a user task after it enters the thread pool; for each user task the operation time can be regarded as a constant, independent of throughput, thread pool size and other parameters, so t_op = T_op;
(6) in summary, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_op + t_wait
= f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),
which can be written as t_response = h(n, m, T_op, T_block);
(7) to optimize the performance of the thread pool, i.e. to make the user task response time t_response take its minimum value, and assuming the above expression is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0, where the derivative is taken with respect to the thread pool size n.
8. The system for optimizing the performance of the thread pool of the grid information communication server according to claim 6, wherein the support vector machine parameter selection module comprises: a support vector machine parameter initialization module and a support vector machine parameter training module;
the support vector machine parameter initialization module is used for initializing the hyper-parameters of the support vector machine, the hyper-parameters including the penalty factor C and the parameter gamma of the radial basis kernel function;
and the support vector machine parameter training module is used for performing cross-training and using the obtained classification accuracy as the fitness function of the improved fluid search optimization algorithm for iterative optimization, finally obtaining the optimal hyper-parameters.
9. The system according to claim 8, wherein the support vector machine parameter training module is specifically configured to calculate the optimal hyper-parameters applicable to the thread pool size optimizing module, the steps specifically including:
(1) initializing the position, velocity, density, moving direction and normal pressure of each fluid particle;
(2) calculating the objective function values; updating the optimal objective function value, the optimal position and the worst objective function value; and calculating the density of each fluid particle;
(3) normalizing the objective function values and calculating the pressure of each fluid particle;
(4) calculating the pressure exerted by the other fluid particles on the current particle and the resulting velocity direction;
(5) calculating the fluid velocity magnitude and velocity vector according to the Bernoulli equation;
(6) updating the particle positions;
(7) repeating steps (2) to (6) until a termination condition is met.
10. The system for optimizing the performance of the thread pool of the grid information communication server according to any one of claims 6 to 9, wherein the thread pool size optimizing module is specifically configured to:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the size of the obtained optimal thread pool is consistent with the current size, if not, resetting the thread pool, and dynamically adjusting the size of the thread pool;
and judging whether the characteristic data satisfies the KKT condition; if so, replacing the point most violating the KKT condition in the training sample set with it, and submitting the set to the support vector machine parameter training module to generate new classification hyperplanes for each optimal thread pool size.
CN202010727268.0A 2020-07-24 2020-07-24 Power grid information communication server thread pool performance optimization method and system Active CN111930484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010727268.0A CN111930484B (en) 2020-07-24 2020-07-24 Power grid information communication server thread pool performance optimization method and system


Publications (2)

Publication Number Publication Date
CN111930484A (en) 2020-11-13
CN111930484B CN111930484B (en) 2023-06-30

Family

ID=73314674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010727268.0A Active CN111930484B (en) 2020-07-24 2020-07-24 Power grid information communication server thread pool performance optimization method and system

Country Status (1)

Country Link
CN (1) CN111930484B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190258904A1 (en) * 2018-02-18 2019-08-22 Sas Institute Inc. Analytic system for machine learning prediction model selection
CN108984288A (en) * 2018-06-11 2018-12-11 山东中创软件商用中间件股份有限公司 Thread pool capacity adjustment method, device and equipment based on system response time
CN110401635A (en) * 2019-06-28 2019-11-01 国网安徽省电力有限公司电力科学研究院 A kind of tertiary-structure network penetrates design method
CN110399182A (en) * 2019-07-25 2019-11-01 哈尔滨工业大学 A kind of CUDA thread placement optimization method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shen Yang et al., "Machine learning-based intelligent optimization of power grid information communication servers", Science Technology and Engineering, vol. 20, no. 32, pp. 13302-13308 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN116401236A (en) * 2023-06-07 2023-07-07 瀚高基础软件股份有限公司 Method and equipment for adaptively optimizing database parameters
CN116401236B (en) * 2023-06-07 2023-08-18 瀚高基础软件股份有限公司 Method and equipment for adaptively optimizing database parameters

Also Published As

Publication number Publication date
CN111930484B (en) 2023-06-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant