CN111930484A - Method and system for optimizing performance of thread pool of power grid information communication server


Info

Publication number: CN111930484A (application CN202010727268.0A)
Authority: CN (China)
Prior art keywords: thread pool, performance, task, operations, size
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111930484B (en)
Inventors: 祝晓辉, 赵晓波, 毕会静, 易克难, 王秉洪
Current and original assignees: State Grid Corp of China SGCC; Training Center of State Grid Hebei Electric Power Co Ltd
Application filed by State Grid Corp of China SGCC and Training Center of State Grid Hebei Electric Power Co Ltd
Priority to CN202010727268.0A
Publication of CN111930484A; application granted; publication of CN111930484B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/245: Classification techniques relating to the decision surface
    • G06F18/2451: Classification techniques relating to the decision surface linear, e.g. hyperplane
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method and system for optimizing the performance of the thread pool of a power grid information communication server. The method includes: analyzing the factors that affect thread pool performance, so as to establish a thread pool performance model; inputting ICT server performance test data into a thread pool tuning model based on a support vector machine to obtain the hyperparameters of the trained tuning model; and judging, through the trained support vector machine prediction model, whether the current thread pool size is the optimal size, resetting the thread pool if it is not, and dynamically updating the training sample set with thread pool feature data that meets certain conditions. The dynamic thread pool intelligent tuning model proposed in this solution can intelligently reduce the user response time of the server, shaving the load peak especially at times of peak access, and improves the execution efficiency of the server.

Figure 202010727268

Description

A Method and System for Optimizing the Performance of a Thread Pool of a Power Grid Information Communication Server

Technical Field

The invention relates to the field of smart grids, and in particular to a method and system for optimizing the performance of a thread pool of a power grid information communication server.

Background

With the development of China's power grid toward intelligence, networking, and automation, information interaction between power information networks has become increasingly frequent and in-depth. The power grid information communication (ICT) server carries the core business of information transmission in the power grid information network and often faces a large number of user requests, while the processing time required by these user tasks is generally very short. Therefore, the ICT server generally adopts thread pool technology to respond to these user requests in a timely and efficient manner. While the thread pool improves system performance, however, it also raises a new problem: how to choose an appropriate thread pool size so as to obtain the best server performance. If the thread pool size is too large, the pool's ability to process user task requests in parallel increases, but so does the overhead the system incurs to maintain so many threads; in addition, the more threads there are, the more intense the competition for system resources becomes, which is likely to degrade system performance instead. If the thread pool size is too small, the pool's ability to process user requests in parallel is weakened. Therefore, choosing an appropriate thread pool size is a key factor in determining server performance.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a method and system for optimizing the performance of the thread pool of a power grid information communication server, which selects an appropriate thread pool size and reduces the user response time of the server.

The technical scheme by which the present invention solves the above technical problem is as follows:

A method for optimizing the performance of a power grid information communication server thread pool, comprising the following steps:

S1, analyzing the factors that affect thread pool performance, so as to establish a thread pool performance model;

S2, inputting ICT server performance test data into a thread pool tuning model based on a support vector machine, and obtaining the hyperparameters of the trained thread pool tuning model;

S3, judging, through the trained support vector machine prediction model, whether the current thread pool size is the optimal size; if not, resetting the thread pool, and selecting thread pool feature data that meets certain conditions to dynamically update the training sample set;

wherein the thread pool tuning model is established from the thread pool performance data (throughput, task operation time, and task blocking time) and the corresponding optimal thread pool size, and thread pool performance optimization means selecting an appropriate thread pool size according to the number of user requests.

Further, S1 specifically includes:

(1) Let the user task response time be t_response, the time a task waits in the queue be t_queue, and the in-pool processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.

(2) The processing time of a task in the thread pool consists of the CPU operation time t_op, during which the task occupies the CPU, and the waiting time t_wait, during which the task is suspended waiting for system resources; that is, t_pool = t_op + t_wait. Therefore the end-user task response time is t_response = t_queue + t_op + t_wait.

(3) Let the system throughput be m and the thread pool size be n; then the mathematical model of the task queuing time is t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).

(4) Let the time consumed while blocked waiting for system resources be T_block, and the time the threads in the pool occupy the CPU be T_op; then the mathematical model of the task waiting time can be written as t_wait = g(n, T_op, T_block).

(5) The task operation time t_op is the time a user task spends occupying the CPU to execute after entering the thread pool. For each user task, the operation time can be regarded as a constant, independent of throughput, thread pool size, and other parameters: t_op = T_op.

(6) In summary, the mathematical model of the user response time, which reflects thread pool performance, can be built as

t_response = t_queue + t_op + t_wait
           = f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),

which can be written as t_response = h(n, m, T_op, T_block).

(7) Making thread pool performance optimal means minimizing the user task response time t_response. If the above expression is continuously differentiable, a necessary condition for attaining the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.
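The model above leaves f and g abstract. As a purely illustrative sketch (the functional forms, constants, and parameter values below are assumptions, not part of the invention), one can instantiate h(n, m, T_op, T_block) and search the necessary condition numerically over integer pool sizes:

```python
def response_time(n, m, t_op, t_block, overhead=0.0002):
    """Illustrative instantiation of t_response = h(n, m, T_op, T_block).

    Assumed forms (NOT given by the invention):
      g: per-task waiting grows linearly with pool size n (resource contention),
      f: queueing delay blows up as the offered load per thread approaches 1.
    """
    t_wait = t_block + overhead * n          # g(n, T_op, T_block)
    t_pool = t_op + t_wait                   # t_pool = t_op + t_wait
    load = min(m * t_pool / n, 0.999)        # utilization of each pooled thread
    t_queue = t_pool * load / (1.0 - load)   # f(n, m, t_pool)
    return t_queue + t_pool                  # t_response

def best_pool_size(m, t_op, t_block, n_max=200):
    """Brute-force stand-in for solving h'(n_best, m, T_op, T_block) = 0."""
    return min(range(1, n_max + 1),
               key=lambda n: response_time(n, m, t_op, t_block))
```

Under these assumed forms the minimizer moves to a larger pool as throughput grows, matching the qualitative trade-off described above.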

Further, S2 specifically includes:

S21, initializing the hyperparameters of the support vector machine based on an improved fluid search optimization algorithm (IFSO), wherein the hyperparameters include the penalty factor C and the parameter γ of the radial basis kernel function;

S22, performing cross-training with the support vector machine, using the resulting classification accuracy as the fitness function of IFSO for iterative optimization, and finally obtaining the optimal hyperparameters.

Further, S22 specifically includes:

(1) Initialize the position and velocity of each fluid particle, the density and direction of motion of the fluid, and the ambient pressure;

(2) Compute the objective function values; update the best objective value, the best position, and the worst objective value; and compute the fluid particle densities;

(3) Normalize the objective function values and compute the pressure of each fluid particle;

(4) Compute the pressure and the velocity direction exerted on the current particle by the other fluid particles;

(5) Compute the fluid velocity magnitude and velocity vector according to the Bernoulli equation;

(6) Update the particle positions;

(7) Repeat steps (2)-(6) until the termination condition is satisfied.

To improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted: diversified search in the first stage and refined exploration in the second stage.
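The steps above can be sketched as follows. This is a loose, simplified interpretation of a fluid-search-style optimizer (the momentum, jitter, and stage-switch constants are assumptions, and a toy objective stands in for the negative cross-validated SVM accuracy over (C, γ)); the patent's exact IFSO update rules are not reproduced here:

```python
import math
import random

def fluid_search(objective, bounds, n_particles=20, n_iter=100, seed=42):
    """Simplified fluid-search-style minimizer following steps (1)-(7) above.

    In the invention the objective would be the negative classification
    accuracy of a cross-trained SVM at hyperparameters (C, gamma); any
    callable works here. Numeric constants are illustrative assumptions.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    # (1) initialize particle positions and velocities
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_x, best_f = None, float("inf")
    step = 1.0  # shrunk when the second, refined stage begins
    for it in range(n_iter):
        # (2) evaluate the objective; track best/worst values and best position
        vals = [objective(p) for p in pos]
        worst_f = max(vals)
        for p, v in zip(pos, vals):
            if v < best_f:
                best_f, best_x = v, list(p)
        # (3) normalize objective values into "pressures" in [0, 1]
        span = (worst_f - best_f) or 1.0
        pressure = [(v - best_f) / span for v in vals]
        for i in range(n_particles):
            # (4)+(5) Bernoulli-style speed v = sqrt(2*dp/rho) with rho = 1;
            # particles flow from high pressure toward the low-pressure best point
            speed = math.sqrt(2.0 * pressure[i])
            for d in range(dim):
                lo, hi = bounds[d]
                pull = (best_x[d] - pos[i][d]) * speed * rng.random()
                jitter = step * 0.01 * (hi - lo) * (rng.random() - 0.5)
                vel[i][d] = 0.5 * vel[i][d] + pull + jitter
                # (6) update the position, clamped to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
        # two-stage mechanism: switch to refined exploration halfway through
        if it == n_iter // 2:
            step = 0.1
    # (7) the loop repeats until the iteration budget (termination condition) is spent
    return best_x, best_f
```

For example, `fluid_search(fitness, [(0.01, 100.0), (0.0001, 10.0)])` would search a (C, γ) box given a cross-validation `fitness` callable.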

Further, S3 specifically includes:

inputting the performance monitoring data of the thread pool running in real time into the support vector machine as a test sample, to obtain the optimal thread pool size category to which it belongs;

judging whether the obtained optimal thread pool size matches the current size, and if not, resetting the thread pool and dynamically adjusting the thread pool size;

judging whether the feature data satisfies the KKT (Karush-Kuhn-Tucker) conditions, and if so, using it to replace the point in the training sample set that most violates the KKT conditions, then training the support vector machine to obtain new classification hyperplanes for each optimal thread pool size.

The beneficial effect of the above further scheme is that, in the SVM-based thread pool tuning model, the KKT conditions are used as the criterion for updating the training sample set. The training sample set is kept at a fixed size, preventing it from growing without bound as new samples are continually introduced, which enables the thread pool tuning model to adapt to a complex and changing environment.
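A minimal sketch of this fixed-size update rule follows. The numeric violation score and the `decision` callback are assumptions for illustration; the invention does not spell out the test, only that KKT satisfaction gates the swap:

```python
def kkt_violation(alpha, margin, C, tol=1e-3):
    """How strongly a training point (Lagrange multiplier alpha, margin
    y*f(x)) violates the KKT conditions of the soft-margin SVM."""
    if alpha < tol:            # alpha = 0 requires y*f(x) >= 1
        return max(0.0, 1.0 - margin)
    if alpha > C - tol:        # alpha = C requires y*f(x) <= 1
        return max(0.0, margin - 1.0)
    return abs(margin - 1.0)   # 0 < alpha < C requires y*f(x) = 1

def update_sample_set(samples, alphas, decision, C, new_x, new_y):
    """If the incoming feature point satisfies the KKT conditions, swap it in
    for the existing point that violates them most, keeping the set size fixed.
    Returns the replaced index, or -1 if no replacement was made."""
    violations = [kkt_violation(a, y * decision(x), C)
                  for (x, y), a in zip(samples, alphas)]
    if kkt_violation(0.0, new_y * decision(new_x), C) == 0.0 and max(violations) > 0.0:
        worst = max(range(len(samples)), key=violations.__getitem__)
        samples[worst] = (new_x, new_y)
        alphas[worst] = 0.0    # the new point enters as a non-support vector
        return worst           # the caller then retrains the SVM
    return -1
```

Because every swap preserves the set size, the retraining cost stays bounded no matter how many monitoring samples arrive, which is the stated benefit.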

A system for optimizing the performance of a thread pool of a power grid information communication server, comprising: a module for establishing the functional relationship for optimal thread pool performance, a support vector machine parameter selection module, and a thread pool size tuning module;

the module for establishing the functional relationship for optimal thread pool performance is used to analyze the factors that affect thread pool performance, so that by optimizing thread pool performance the goal of optimizing server performance can be achieved;

the support vector machine parameter selection module is used to input ICT server performance test data into the SVM-based thread pool tuning model and obtain the hyperparameters of the trained thread pool tuning model;

the thread pool size tuning module is used to judge, through the trained support vector machine prediction model, whether the current thread pool size is the optimal size, reset the thread pool if it is not, and select thread pool feature data that meets certain conditions to dynamically update the training sample set;

wherein the thread pool tuning model is established from the thread pool performance data (throughput, task operation time, and task blocking time) and the corresponding optimal thread pool size, and thread pool performance optimization means selecting an appropriate thread pool size according to the number of user requests.

Further, the module for establishing the functional relationship for optimal thread pool performance is used to analyze the factors that affect thread pool performance; its steps specifically include:

(1) Let the user task response time be t_response, the time a task waits in the queue be t_queue, and the in-pool processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.

(2) The processing time of a task in the thread pool consists of the CPU operation time t_op, during which the task occupies the CPU, and the waiting time t_wait, during which the task is suspended waiting for system resources; that is, t_pool = t_op + t_wait. Therefore the end-user task response time is t_response = t_queue + t_op + t_wait.

(3) Let the system throughput be m and the thread pool size be n; then the mathematical model of the task queuing time is t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).

(4) Let the time consumed while blocked waiting for system resources be T_block, and the time the threads in the pool occupy the CPU be T_op; then the mathematical model of the task waiting time can be written as t_wait = g(n, T_op, T_block).

(5) The task operation time t_op is the time a user task spends occupying the CPU to execute after entering the thread pool. For each user task, the operation time can be regarded as a constant, independent of throughput, thread pool size, and other parameters: t_op = T_op.

(6) In summary, the mathematical model of the user response time, which reflects thread pool performance, can be built as

t_response = t_queue + t_op + t_wait
           = f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),

which can be written as t_response = h(n, m, T_op, T_block).

(7) Making thread pool performance optimal means minimizing the user task response time t_response. If the above expression is continuously differentiable, a necessary condition for attaining the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.

Further, the support vector machine parameter selection module includes a support vector machine parameter initialization module and a support vector machine parameter training module;

the support vector machine parameter initialization module is used to initialize the hyperparameters of the support vector machine, wherein the hyperparameters include the penalty factor C and the parameter γ of the radial basis kernel function;

the support vector machine parameter training module is used to perform iterative optimization using the resulting classification accuracy as the fitness function of IFSO, finally obtaining the optimal hyperparameters.

Further, the support vector machine parameter training module is specifically used to compute the optimal hyperparameters for the thread pool size tuning module; its steps specifically include:

(1) Initialize the position and velocity of each fluid particle, the density and direction of motion of the fluid, and the ambient pressure;

(2) Compute the objective function values; update the best objective value, the best position, and the worst objective value; and compute the fluid particle densities;

(3) Normalize the objective function values and compute the pressure of each fluid particle;

(4) Compute the pressure and the velocity direction exerted on the current particle by the other fluid particles;

(5) Compute the fluid velocity magnitude and velocity vector according to the Bernoulli equation;

(6) Update the particle positions;

(7) Repeat steps (2)-(6) until the termination condition is satisfied.

To improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted: diversified search in the first stage and refined exploration in the second stage.

Further, the thread pool size tuning module is specifically used to:

input the performance monitoring data of the thread pool running in real time into the support vector machine as a test sample, to obtain the optimal thread pool size category to which it belongs;

judge whether the obtained optimal thread pool size matches the current size, and if not, reset the thread pool and dynamically adjust the thread pool size;

judge whether the feature data satisfies the KKT (Karush-Kuhn-Tucker) conditions, and if so, use it to replace the point in the training sample set that most violates the KKT conditions and pass it to the support vector machine parameter training module to generate new classification hyperplanes for each optimal thread pool size.

The beneficial effects of the present invention are:

The present invention constructs the original training sample set from a large amount of ICT server performance test data, dynamically adjusts the thread pool size for different power grid scenarios through the trained support vector machine, and dynamically updates the training sample set with real-time performance test data, thereby realizing dynamic, intelligent tuning of the ICT server.

In the SVM-based thread pool tuning model, the KKT conditions are used as the criterion for updating the training sample set. The training sample set is kept at a fixed size, preventing it from growing without bound as new samples are continually introduced, which enables the thread pool tuning model to adapt to a complex and changing environment.

Advantages of additional aspects of the invention will be set forth in part in the description that follows, and in part will become apparent from the description below or be learned through practice of the invention.

Description of the Drawings

Figure 1 is a schematic flowchart of the method for optimizing the performance of a power grid information communication server thread pool provided by an embodiment of the present invention;

Figure 2 is a flowchart of the IFSO-based support vector machine algorithm provided by other embodiments of the present invention;

Figure 3 shows part of the normalized training sample set data provided by an embodiment of the present invention;

Figure 4 shows the classification accuracy of IFSO-SVM compared with SVMs optimized by different algorithms, provided by an embodiment of the present invention;

Figure 5 is an iteration curve of the classification accuracy of IFSO-SVM compared with SVMs optimized by different algorithms, provided by an embodiment of the present invention;

Figure 6 shows the comparison between the performance of dynamic thread pools intelligently adjusted by different optimization algorithms and that of a static thread pool, provided by an embodiment of the present invention;

Figure 7 shows the efficiency improvement of IFSO-SVM compared with different algorithms, provided by an embodiment of the present invention;

Figure 8 is a structural diagram of the system for optimizing the performance of a power grid information communication server thread pool provided by an embodiment of the present invention.

Detailed Description

The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given are only used to explain the present invention and are not intended to limit its scope.

In the power communication network environment, the ICT server generally adopts thread pool technology to respond to user requests in a timely and efficient manner: the server creates a group of threads for the network application in advance, and when a user service request arrives, a thread already created in the pool can be called directly to serve the user; when the user task is completed, this group of threads is not destroyed but waits for the next batch of user service requests. While the thread pool improves system performance, however, it also creates a new problem: if the thread pool size is too large, the pool's ability to process user task requests in parallel increases, but so does the overhead the system incurs to maintain so many threads; if the thread pool size is too small, the pool's ability to process user requests in parallel is weakened. The thread pool performance optimization problem is to select an appropriate thread pool size according to the number of user requests.
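The reuse-and-resize pattern described above can be sketched with Python's standard library. `ThreadPoolExecutor` has no public resize API, so this wrapper (an illustrative assumption, not the invention's implementation) swaps in a new pool whenever the tuner picks a different size:

```python
from concurrent.futures import ThreadPoolExecutor

class ResizableThreadPool:
    """Thread pool whose size can be adjusted between batches of requests."""

    def __init__(self, size):
        self._size = size
        self._pool = ThreadPoolExecutor(max_workers=size)

    def submit(self, fn, *args, **kwargs):
        # Pool threads are created up front and reused; they are not destroyed
        # after a task finishes but wait for the next batch of user requests.
        return self._pool.submit(fn, *args, **kwargs)

    def resize(self, new_size):
        """Reset the pool when the tuner decides the current size is not optimal."""
        if new_size == self._size:
            return
        old_pool, self._pool = self._pool, ThreadPoolExecutor(max_workers=new_size)
        self._size = new_size
        old_pool.shutdown(wait=False)  # in-flight tasks finish on the old pool

    @property
    def size(self):
        return self._size
```

Recreating the executor is a simple way to honor the "reset the thread pool" step; a production server might instead adjust worker counts in place to avoid briefly holding two pools.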

As shown in Figure 1, an embodiment of the present invention provides a method for optimizing the performance of a power grid information communication server thread pool, the method comprising:

S1, analyzing the factors that affect thread pool performance, so as to establish a thread pool performance model;

(1) Let the user task response time be t_response, the time a task waits in the queue be t_queue, and the in-pool processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool.

(2) The processing time of a task in the thread pool consists of the CPU operation time t_op, during which the task occupies the CPU, and the waiting time t_wait, during which the task is suspended waiting for system resources; that is, t_pool = t_op + t_wait. Therefore the end-user task response time is t_response = t_queue + t_op + t_wait.

(3) Let the system throughput be m and the thread pool size be n; then the mathematical model of the task queuing time is t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait).

(4) Let the time consumed while blocked waiting for system resources be T_block, and the time the threads in the pool occupy the CPU be T_op; then the mathematical model of the task waiting time can be written as t_wait = g(n, T_op, T_block).

(5) The task operation time t_op is the time a user task spends occupying the CPU to execute after entering the thread pool. For each user task, the operation time can be regarded as a constant, independent of throughput, thread pool size, and other parameters: t_op = T_op.

(6) In summary, the mathematical model of the user response time, which reflects thread pool performance, can be built as

t_response = t_queue + t_op + t_wait
           = f(n, m, T_op + g(n, T_op, T_block)) + T_op + g(n, T_op, T_block),

which can be written as t_response = h(n, m, T_op, T_block).

(7) Making thread pool performance optimal means minimizing the user task response time t_response. If the above expression is continuously differentiable, a necessary condition for attaining the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.

因此,可以得出,影响线程池性能的因素为吞吐量、任务运算时间、任务阻塞时间和线程池尺寸。对于实际情况而言,吞吐量、任务运算时间和任务阻塞时间都属于用户任务部分,是无法强制调整的。而线程池尺寸却可以在线程池程序中动态调整,来适应客观的用户任务变化,达到线程池调优的目的;Therefore, it can be concluded that the factors affecting the performance of the thread pool are throughput, task operation time, task blocking time and thread pool size. For the actual situation, the throughput, task operation time and task blocking time belong to the user task part and cannot be adjusted forcibly. However, the thread pool size can be dynamically adjusted in the thread pool program to adapt to objective user task changes and achieve the purpose of thread pool tuning;
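The conclusion above, that for a fixed workload only the pool size n is tunable, can be illustrated with a small numeric sketch. The response-time function below is a hypothetical stand-in for h(n, m, T_op, T_block) (the text leaves h unspecified); it exists only to show a one-dimensional search for n_best.

```python
def response_time(n, m, t_op, t_block, cpus=8):
    """Hypothetical h(n, m, T_op, T_block): an M/M/n-style approximation,
    NOT the model from the text (which is left abstract there)."""
    t_exec = t_op * max(1.0, n / cpus) + t_block   # CPU contention beyond `cpus` threads
    service_rate = n / (t_op + t_block)            # tasks/s the pool can absorb
    rho = min(m / service_rate, 0.99)              # utilization, clipped for stability
    t_queue = rho / (1.0 - rho) * (t_op + t_block) / n
    return t_queue + t_exec                        # t_response = t_queue + t_op + t_wait

def best_pool_size(m, t_op, t_block, n_max=64):
    """Discrete search for n_best in place of solving h'(n_best, ...) = 0."""
    return min(range(1, n_max + 1), key=lambda n: response_time(n, m, t_op, t_block))
```

For example, with m = 50 tasks/s, t_op = 10 ms and t_block = 40 ms the search settles on an intermediate pool size: too few threads inflate t_queue, too many inflate contention.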

S2: feed the ICT server performance test data into the SVM-based thread pool tuning model to obtain the hyperparameters of the trained tuning model;

In one embodiment, an SVM-based thread pool tuning model is built from the thread pool performance data: throughput, task computation time, task blocking time, and the corresponding optimal thread pool size.

The support vector machine (SVM) is an intelligent method with strong generalization ability. Its principle is as follows: given training samples (xi, yi), i = 1…n, x ∈ R^n, y ∈ {−1, +1}, a separating hyperplane is written as (w·x) + b = 0. Infinitely many such hyperplanes exist; the optimal one is obtained by solving:

min_{w,b,ξ} (1/2)||w||² + C Σi ξi

s.t. yi((w·xi) + b) ≥ 1 − ξi, ξi ≥ 0, i = 1, …, n

where C is the penalty factor and ξi are the slack variables.

Introducing Lagrange multipliers αi and converting to the dual problem gives

max_α Σi αi − (1/2) Σi Σj αi αj yi yj (xi·xj), s.t. Σi αi yi = 0, 0 ≤ αi ≤ C

The optimal solution is

w* = Σi αi* yi xi, with 0 ≤ αi* ≤ C, i = 1, 2, …, n, and

b* = yj − Σi αi* yi (xi·xj), taken at any support vector xj.

The optimal hyperplane equation is then

f(x) = sgn(Σ_{i∈SV} αi* yi (xi·x) + b*)

where SV is the support vector set; the summation in the formula above effectively runs over the support vectors only. An unknown sample can thus be classified from the sign of this expression.

For the nonlinear case, introducing a kernel function K(x, y) = φ(x)·φ(y) converts the problem into a linear separation problem in a high-dimensional feature space, giving the dual problem of the nonlinear SVM:

max_α Σi αi − (1/2) Σi Σj αi αj yi yj K(xi, xj), s.t. Σi αi yi = 0, 0 ≤ αi ≤ C

Solving this problem yields the optimal classification hyperplane. The decision function of the SVM is:

f(x) = sgn(Σ_{i∈SV} αi* yi K(xi, x) + b*),

b* = yj − Σ_{i∈SV} αi* yi K(xi, xj)

Commonly used kernels include the linear kernel, polynomial kernels and the radial basis function (RBF) kernel. Because the RBF kernel is especially capable of mapping a nonlinear classification problem to a linear one in an infinite-dimensional space, it is widely used in SVM research. The RBF kernel is defined as:

K(xi, x) = exp(−γ||xi − x||²), γ > 0

where γ is the parameter of the RBF kernel.
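A direct transcription of this kernel (a minimal sketch; the inputs are plain Python sequences):

```python
import math

def rbf_kernel(x_i, x, gamma):
    """K(x_i, x) = exp(-gamma * ||x_i - x||^2), gamma > 0."""
    if gamma <= 0:
        raise ValueError("gamma must be positive")
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x))
    return math.exp(-gamma * sq_dist)
```

Identical points map to 1, and the value decays toward 0 as the points move apart, faster for larger γ.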

The predictive performance of the SVM is determined jointly by the penalty factor C and the RBF kernel parameter γ. How to choose C and γ so that the SVM performs best is therefore a question that deserves close attention. The most direct approach is exhaustive search, but its computational cost is high and it may still miss the optimal combination; the more common approach is to search for the optimal parameter pair with a swarm intelligence optimization algorithm. In one embodiment, the relatively new and well-performing fluid search optimization (FSO) algorithm is introduced and improved with the aim of raising the predictive performance of the SVM.

The basic FSO algorithm proceeds as follows:

(1) Initialize the position of each fluid particle and set the parameters: particle velocity Vi = 0, flow direction direction = 0, fluid density ρi = 1, and ambient pressure p0 = 1 (i = 1, 2, …, n).

(2) Compute the objective function values and update the best objective value y_best, the best position X_best and the worst objective value y_worst. Compute the fluid particle density ρ = m/l^D.

(3) Normalize the objective function values and compute each fluid particle's pressure from them (the pressure formula is given only as an image in the original).

(4) Compute the pressure exerted on the current particle by the other fluid particles, and from it the velocity direction (both formulas are given only as images in the original).

(5) Compute the fluid speed value vi from the Bernoulli equation (the formula is given only as an image in the original), and obtain the velocity vector from the direction of step (4): Vi = direction·vi·rand.

(6) Position update: Xi+1 = Xi + Vi.

(7) Repeat steps (2)-(6) until the termination condition is met.
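Steps (1)-(7) can be sketched as follows. Because the pressure, direction and Bernoulli formulas survive only as images in the original, this sketch substitutes simple stand-ins (normalized objective gap as pressure, v = sqrt(2p), movement toward the current best); the step size and bounds handling are also assumptions.

```python
import math
import random

def fso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Sketch of the FSO loop above; the per-step formulas are stand-ins."""
    rng = random.Random(seed)
    dim = len(bounds)
    # (1) initialize particle positions; velocities start at zero implicitly
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    best_x, best_y = None, float("inf")
    for _ in range(iters):
        # (2) evaluate the objective, track best and worst
        ys = [f(x) for x in X]
        y_lo, y_hi = min(ys), max(ys)
        if y_lo < best_y:
            best_y, best_x = y_lo, list(X[ys.index(y_lo)])
        span = (y_hi - y_lo) or 1.0
        for i, x in enumerate(X):
            # (3) normalized objective gap as a pressure stand-in
            p = (ys[i] - y_lo) / span
            # (5) Bernoulli-style speed from the pressure head
            v = math.sqrt(2.0 * p)
            for d in range(dim):
                # (4) direction stand-in: toward the current best position
                direction = 1.0 if best_x[d] > x[d] else -1.0
                step = v * rng.random() * 0.1 * (bounds[d][1] - bounds[d][0])
                # (6) position update, clipped to the search bounds
                x[d] = min(max(x[d] + direction * step, bounds[d][0]), bounds[d][1])
    # (7) the loop runs until the iteration budget is exhausted
    return best_x, best_y
```

In the IFSO-SVM setting, f would be the negated cross-validation accuracy over the (C, γ) search box.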

Because the original FSO algorithm is computationally heavy and prone to getting trapped in local optima, two improvements are made here in light of the SVM optimization task:

Improvement 1. Since computing the pressure direction in step (4) is expensive, it is simplified (the simplified formula is given only as an image in the original).

Improvement 2. To raise the precision of the fluid search, a two-stage optimization mechanism is adopted: diversified search in the first stage and refined exploration in the second. When the iteration count reaches the threshold M' at the end of the first stage, the search space shrinks to the neighbourhood of the current optimum and the cell length decays exponentially, l = l·e^((M'−t)/σ), for refined exploration, where σ sets the precision of the search.
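The second-stage shrink can be written out directly; l, M' and σ are as in the text, and keeping the first-stage length constant before the threshold is an assumption.

```python
import math

def cell_length(l0, t, m_threshold, sigma):
    """Cell length at iteration t: full length in stage 1, then
    l = l0 * exp((M' - t) / sigma) once t passes the threshold M'."""
    if t <= m_threshold:
        return l0
    return l0 * math.exp((m_threshold - t) / sigma)
```

Larger σ slows the decay, trading search precision against how quickly the space contracts around the current optimum.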

Figure 2 shows the IFSO-based SVM algorithm. IFSO first randomly initializes the SVM hyperparameters C and γ, which are handed to the SVM for cross-training; the resulting SVM classification accuracy serves as the IFSO fitness function for iterative optimization until the optimal SVM hyperparameters are found.

The thread pool performance data obtained by experiment (throughput, task computation time, task blocking time and the optimal thread pool size) form one sample of the training set. Throughput, task computation time and task blocking time are the sample's three feature attributes, and the optimal thread pool size is its classification label. SVM training learns the mapping between the feature attributes and the classification labels, so that when new feature data (a test sample) arrives, the corresponding label, i.e. the optimal thread pool size, can be read off from this mapping, providing the basis for resizing the pool. The tuning effect depends directly on the choice of the SVM training set: the more comprehensive the training samples, the more accurate the predicted optimal pool size. Figure 1 shows the framework of the SVM-based thread pool tuning model; the training process consists of the following steps:

Step 1: Classify the experimentally obtained thread pool performance data, taking throughput, task computation time and task blocking time as feature variables and dividing the data into classes by optimal thread pool size, to construct the initial training sample set;

Step 2: Train the SVM to obtain the classification hyperplane for each optimal thread pool size.
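The sample layout of Step 1, three feature attributes plus one size label, can be sketched as below. The nearest-neighbour predictor is only a stand-in for the trained SVM of Step 2, used here to show the feature/label flow.

```python
def make_sample(throughput, t_op, t_block, best_pool_size):
    """One training sample: feature triple plus the class label (Step 1)."""
    return ((throughput, t_op, t_block), best_pool_size)

def predict_pool_size(train_set, features):
    """Stand-in for the trained multi-class SVM (Step 2): nearest sample's label."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train_set, key=lambda s: sq_dist(s[0], features))[1]
```

In practice the features would be normalized first (as done later in the text) so that throughput does not dominate the distance.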

S3: use the trained SVM prediction model to judge whether the current thread pool size is optimal; if not, reset the pool, and use thread pool feature data that meets certain conditions to dynamically update the training sample set.

The tuning effect depends directly on the choice of the SVM training set: the more comprehensive the training samples, the more accurate the predicted optimal pool size. At the same time, because real conditions are complex and changeable, the fixed training set chosen in advance must also change in order to cope with new situations as they arise. To this end, the monitored thread pool feature data is screened against certain conditions: data that qualifies is used to dynamically update the training set, while data that does not is discarded. The specific steps are:

Step 1: Feed the real-time thread pool performance monitoring data into the SVM as a test sample and obtain the optimal thread pool size class it belongs to.

Step 2: Check whether the predicted optimal size matches the current size; if not, reset the thread pool and dynamically adjust its size.

Step 3: Check whether the feature data satisfies the Karush-Kuhn-Tucker (KKT) conditions. If it does, it replaces the point in the training set that most severely violates the KKT conditions and the procedure returns to Step 2; if it does not, it is discarded.

In the SVM-based thread pool tuning model, the KKT conditions serve as the criterion for updating the training set. If a test sample satisfies the KKT conditions, it falls within the support vector region and contributes to the classification decision function, so the training set is updated and the SVM retrained; otherwise the sample falls outside the support vector region, has no effect on the classification decision function, and no retraining is needed.

The training sample set should be kept at a fixed size rather than growing without bound as new samples are introduced. Therefore, when a new sample is added, the sample contributing least to the classification decision function, namely the one that most severely violates the KKT conditions, is removed, keeping the size of the training set constant.
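The fixed-size replacement policy described above can be sketched as follows; `satisfies_kkt` and `violation` are assumed callables exposed by the trained model, since the text does not specify how the KKT check is scored.

```python
def update_training_set(samples, new_sample, satisfies_kkt, violation):
    """Keep the set at a fixed size: a qualifying new sample replaces the
    existing sample that most severely violates the KKT conditions;
    a non-qualifying sample is discarded."""
    if not satisfies_kkt(new_sample):
        return samples
    worst = max(range(len(samples)), key=lambda i: violation(samples[i]))
    return samples[:worst] + [new_sample] + samples[worst + 1:]
```

After a replacement the SVM is retrained, matching the loop of Steps 2 and 3.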

Constructing the original training set from a large body of ICT server performance test data, dynamically adjusting the thread pool size for different power grid scenarios with the trained SVM, and dynamically updating the training set with real-time performance test data together enable dynamic, intelligent tuning of the ICT server.

Preferably, in any of the above embodiments, S2 specifically comprises:

S21: initializing the SVM hyperparameters based on an improved fluid search optimization (IFSO) algorithm, the hyperparameters comprising the penalty factor C and the RBF kernel parameter γ;

S22: cross-training the SVM and using the resulting classification accuracy as the IFSO fitness function for iterative optimization, finally obtaining the optimal hyperparameters.

Adopting the IFSO-SVM-based thread pool tuning model yields higher classification accuracy and a stronger FSO optimization effect, making it easier to escape local optima; it intelligently reduces the server's user response time and, especially during access peaks, shaves the peak, improving the server's execution efficiency.

Preferably, in any of the above embodiments, S22 specifically comprises:

(1) initializing the position and velocity of each fluid particle, the fluid density and direction of motion, and the ambient pressure;

(2) computing the objective function values, updating the best objective value, the best position and the worst objective value, and computing the fluid particle density;

(3) normalizing the objective function values and computing the fluid particle pressures;

(4) computing the pressure exerted by the other fluid particles on the current particle and the velocity direction;

(5) computing the fluid speed value and velocity vector from the Bernoulli equation;

(6) updating the particle positions;

(7) repeating steps (2)-(6) until the termination condition is met.

To raise the precision of the fluid search, a two-stage optimization mechanism is adopted, namely diversified search in the first stage and refined exploration in the second.

Simplifying the computation of the pressure direction and splitting the optimization into a diversified-search stage and a refined-exploration stage avoids the original FSO's tendency to fall into local optima and its heavy computational load, greatly speeding up convergence during model training.

Preferably, in any of the above embodiments, S3 specifically comprises:

feeding the real-time thread pool performance monitoring data into the SVM as a test sample and obtaining the optimal thread pool size class it belongs to;

checking whether the predicted optimal size matches the current size and, if not, resetting the thread pool and dynamically adjusting its size;

checking whether the feature data satisfies the Karush-Kuhn-Tucker (KKT) conditions and, if it does, replacing the point in the training set that most severely violates the KKT conditions and retraining the SVM to obtain a new classification hyperplane for each optimal thread pool size.

In the SVM-based thread pool tuning model, using the KKT conditions as the criterion for updating the training set while keeping the set at a fixed size prevents it from growing without bound as new samples are introduced, and lets the tuning model adapt to a complex and changing environment.

In other embodiments of the invention, following the thread pool performance optimization method of this scheme, throughput, task computation time and task blocking time collected in real time from a power grid information communication server (main configuration: 2.4 GHz Intel Xeon E5-2665 CPU, 32 GB RAM, 10 TB disk) serve as the feature variables, and the thread pool size is adjusted intelligently by the SVM. One set of server feature data is collected every 15 minutes; over the working days of one week, 160 sets are collected in total. Through simulation experiments on these 160 feature sets, with user task response time as the server performance metric, the optimal thread pool size is determined and the training sample set constructed.

At the same time, to remove the dimensional differences between the collected data features and ease SVM training, the feature data is normalized:

g = (d − dmin)/(dmax − dmin)

where d is the original feature value, dmin and dmax are the minimum and maximum of the feature, and g is the normalized feature value. Part of the normalized training sample set is shown in Figure 3.
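Applied per feature column, the normalization reads:

```python
def min_max_normalize(column):
    """g = (d - d_min) / (d_max - d_min) for every value d in one feature column."""
    d_min, d_max = min(column), max(column)
    span = d_max - d_min
    if span == 0:                      # constant column: map everything to 0
        return [0.0 for _ in column]
    return [(d - d_min) / span for d in column]
```

Each feature then lies in [0, 1], removing the scale gap between, for example, throughput (tasks/s) and task times (ms).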

To verify the performance of IFSO-SVM, experiments compared it with the original fluid search algorithm (FSO-SVM), particle swarm optimization (PSO-SVM), the artificial bee colony algorithm (ABC-SVM) and the firefly algorithm (FA-SVM). The SVM classifier used the open-source LIBSVM software, and the model's average classification accuracy was evaluated by 5-fold cross-validation. The algorithm parameters were set as follows: 30 particles and at most 50 iterations. FSO and IFSO: density limit ratio θ = 20%, diversified-search fraction M' = 0.7, σ = 40; PSO: c1 = c2 = 2, ω_begin = 0.9, ω_end = 0.2; ABC: P_onlooker = P_employed = 0.5; FA: α = 0.5, β_min = 0.2, γ = 1. The search range of the SVM parameter C was [0.01, 35000] and that of γ was [0.0001, 32].
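The 5-fold cross-validation used to score each (C, γ) candidate splits the sample indices as sketched below. This is a strided split; the exact partition is an implementation detail (LIBSVM's built-in cross-validation shuffles internally).

```python
def k_fold_indices(n_samples, k=5):
    """Return k (train_idx, test_idx) pairs covering all n_samples indices."""
    folds = [list(range(start, n_samples, k)) for start in range(k)]
    all_idx = set(range(n_samples))
    return [(sorted(all_idx - set(fold)), fold) for fold in folds]
```

The IFSO fitness of a candidate (C, γ) is then the classification accuracy averaged over the k held-out folds.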

Experiment 1: IFSO-SVM classification results. As Figure 4 shows, IFSO-SVM achieved the highest classification accuracy of all the compared algorithms, followed in order by ABC-SVM, FSO-SVM, FA-SVM and PSO-SVM. The improved IFSO-SVM's average classification accuracy was 1.82 percentage points higher than the original FSO-SVM's and 1.61 percentage points higher than that of the second-placed ABC-SVM. The improvements to FSO evidently make the global optimum easier to find and raise the SVM's classification accuracy.

Figure 5 plots the classification accuracy against iteration count for the SVMs optimized by the different algorithms. At the start of the iterations, since every algorithm initializes the SVM parameters randomly, the accuracies are essentially the same, around 85%. As the search proceeds, IFSO-SVM improves steadily and finally attains a higher classification accuracy than the other optimization algorithms, clearly above the ABC-SVM and FSO-SVM curves below it. Notably, the original FSO-SVM performs slightly worse than ABC-SVM, while IFSO beats the original FSO's accuracy throughout the iterations, showing that the improvements strengthen FSO's search, make it easier to escape local optima, and deliver better classification.

Experiment 2: server performance test. For one burst of increased traffic on the power grid ICT server, Figure 6 compares the performance of the dynamic thread pools intelligently adjusted by the different optimization algorithms with that of a static thread pool (size fixed at 30); the dynamic pools were re-adjusted every 1 minute. The experiment lasted 45 minutes, with the traffic peak between minutes 11 and 33. Figure 6 clearly shows that the intelligent dynamic pools based on the various optimization algorithms outperform the static pool. Among the dynamic pools, the original FSO-SVM's results are comparable to ABC-SVM's, in line with the SVM training results, while the IFSO-SVM dynamic pool yields lower user response times than all the compared algorithms. Especially during the traffic peak, the rise in the IFSO-SVM pool's user response time is markedly gentler than the other algorithms', showing that the improvements to FSO work well and can intelligently shave the peak in server response time.

Figure 7 shows IFSO-SVM's efficiency gains over the different algorithms: the average, minimum and maximum efficiency improvements within the 45 minutes. The dynamically tuned thread pools all clearly outperform the static pool, and IFSO-SVM improves on the other compared algorithms by 9.12% to 38.00% on average. It can therefore be concluded that the IFSO-SVM dynamic thread pool algorithm is effective for the intelligent optimization of ICT server performance.

In other embodiments of the invention, a system for optimizing the thread pool performance of a power grid information communication server is provided. As shown in Figure 8, the system comprises: a module 11 for establishing the functional relationship for optimal thread pool performance, a thread pool tuning model training module 12, and a thread pool size tuning module 13;

The functional relationship module 11 analyses the factors affecting thread pool performance so as to optimize the thread pool and thereby the server;

The thread pool tuning model training module 12 feeds the ICT server performance test data into the SVM-based thread pool tuning model to obtain the hyperparameters of the trained tuning model;

The thread pool size tuning module 13 uses the trained SVM prediction model to judge whether the current thread pool size is optimal; if not, it resets the thread pool and uses thread pool feature data meeting certain conditions to dynamically update the training sample set;

The thread pool tuning model is built from the thread pool performance data (throughput, task computation time, task blocking time and the corresponding optimal thread pool size); thread pool performance optimization here means selecting an appropriate thread pool size according to the number of user requests.

Constructing the original training set from a large body of ICT server performance test data, dynamically adjusting the thread pool size for different power grid scenarios with the trained SVM, and dynamically updating the training set with real-time performance test data together enable dynamic, intelligent tuning of the ICT server.

Preferably, in any of the above embodiments, the functional relationship module 11 analyses the factors affecting thread pool performance; the steps specifically comprise:

(1) Let the user task response time be t_response, the task's queuing time t_queue, and the task's in-pool processing time t_pool; then t_response = t_queue + t_pool;

(2) The processing time of a task in the thread pool consists of the CPU computation time t_op that the task occupies and the waiting time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_op + t_wait. The end-user task response time is therefore t_response = t_queue + t_op + t_wait;

(3) Let the system throughput be m, the thread pool size n, and the task computation time t_op. The task queuing time can then be modelled as t_queue = f(n, m, t_pool) = f(n, m, t_op + t_wait);

(4) Let T_block be the time a task spends blocked waiting for system resources and T_op the CPU time occupied by a pool thread. The task waiting time can then be modelled as t_wait = g(n, T_op, T_block);

(5) The task computation time t_op is the CPU time a user task consumes executing after entering the thread pool. For any given user task it can be treated as a constant, independent of throughput, thread pool size and the other parameters: t_op = T_op;

(6) In summary, the user response time reflecting thread pool performance can be modelled as

t_response = t_queue + t_op + t_wait

= f(n, m, T_op + g(n, T_op, T_block))

+ T_op + g(n, T_op, T_block),

which can be written as t_response = h(n, m, T_op, T_block);

(7) Optimizing thread pool performance means minimizing the user task response time t_response. If the expression above is continuously differentiable, a necessary condition for the minimum is t'_response = h'(n_best, m, T_op, T_block) = 0.

优选地,在上述任意实施例中,支持向量机参数选择模块12包括:支持向量机参数初始化模块和支持向量机参数训练模块;Preferably, in any of the above embodiments, the support vector machine parameter selection module 12 includes: a support vector machine parameter initialization module and a support vector machine parameter training module;

支持向量机参数初始化模块用于初始化支持向量机的超参数;其中,所述超参数包括:惩罚因子C、径向基核函数的参数γ;The support vector machine parameter initialization module is used to initialize the hyperparameters of the support vector machine; wherein, the hyperparameters include: the penalty factor C, the parameter γ of the radial basis kernel function;

支持向量机训练模块用于根据得到的分类准确率作为IFSO的适应度函数进行迭代寻优,最终得到最优的超参数。The support vector machine training module is used for iterative optimization according to the obtained classification accuracy as the fitness function of IFSO, and finally the optimal hyperparameters are obtained.

By adopting the IFSO-SVM-based thread pool tuning model, a higher classification accuracy is obtained and the optimization effect of FSO is improved, making it easier to escape local optima. The model intelligently reduces the user response time of the server and, especially during access peaks, plays a peak-shaving role, improving the execution efficiency of the server.

Preferably, in any of the above embodiments, the support vector machine parameter training module is specifically used to calculate the optimal hyperparameters for the thread pool size tuning module 13, with the following steps:

(1) Initialize the position and velocity of each fluid particle, the density of the fluid, the direction of motion, and the normal pressure;

(2) Calculate the objective function values; update the optimal objective function value, the optimal position, and the worst objective function value; and calculate the fluid particle density;

(3) Normalize the objective function values and calculate the pressure of the fluid particles;

(4) Calculate the pressure exerted by the other fluid particles on the current particle and the resulting velocity direction;

(5) Calculate the fluid velocity magnitude and velocity vector according to the Bernoulli equation;

(6) Update the position of each particle;

(7) Repeat steps (2)-(6) until the termination condition is satisfied.
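The patent does not give the particle update equations, so the following is a minimal sketch of steps (1)-(7) under stated assumptions: pressure is the normalized objective value, the velocity magnitude comes from a Bernoulli-style relation, particles move toward the best particle, and the two-stage mechanism is modeled by shrinking the random jitter halfway through. A toy quadratic stands in for the SVM classification-accuracy fitness:

```python
import math
import random

random.seed(0)

def objective(x):
    # toy stand-in for the IFSO fitness (the patent uses SVM classification
    # accuracy); lower is better, with the minimum at the origin
    return sum(xi * xi for xi in x)

DIM, N, ITERS, BOUND = 2, 20, 200, 5.0
RHO = 1.0  # assumed constant fluid density

# (1) initialize particle positions
pos = [[random.uniform(-BOUND, BOUND) for _ in range(DIM)] for _ in range(N)]
best_pos, best_val = None, float("inf")

for it in range(ITERS):                                   # (7) iterate
    vals = [objective(p) for p in pos]                    # (2) evaluate
    worst, best_iter = max(vals), min(vals)
    for p, v in zip(pos, vals):
        if v < best_val:
            best_val, best_pos = v, list(p)               # (2) track global best
    span = (worst - best_iter) or 1.0
    pressure = [(worst - v) / span for v in vals]         # (3) normalized pressure
    # two-stage mechanism: wide jitter early (diversified search),
    # small jitter late (refined exploration)
    jitter = BOUND * (0.1 if it < ITERS // 2 else 0.01)
    for i, p in enumerate(pos):
        dp = 1.0 - pressure[i]                            # (4) pressure gap to best
        speed = math.sqrt(2.0 * dp / RHO)                 # (5) Bernoulli: v = sqrt(2*dp/rho)
        for d in range(DIM):
            p[d] += 0.1 * speed * (best_pos[d] - p[d]) \
                    + random.uniform(-jitter, jitter)     # (4)+(6) move toward best
```

On this toy objective the tracked best value descends steadily toward zero; in the patent's setting the objective would instead be the cross-validated SVM accuracy as a function of (C, γ).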

To improve the accuracy of the fluid search algorithm, a two-stage optimization mechanism is adopted: diversified search in the first stage and refined exploration in the second.

The beneficial effect of this further scheme is that, by simplifying the calculation of the pressure direction and dividing the optimization into a diversified-search stage and a refined-exploration stage, it avoids the drawbacks of the original FSO, which is prone to falling into local optima and computationally expensive, and greatly improves the convergence speed of model training.

Preferably, in any of the above embodiments, the thread pool size tuning module 13 is specifically used to:

input the performance monitoring data of the thread pool running in real time into the support vector machine as a test sample, obtaining the optimal thread pool size category to which it belongs;

judge whether the obtained optimal thread pool size matches the current size; if not, reset the thread pool and dynamically adjust the thread pool size;

judge whether the feature data satisfies the KKT (Karush-Kuhn-Tucker) condition; if so, replace the point in the training sample set that most violates the KKT condition and pass the set to the support vector machine training module to generate new classification hyperplanes for the optimal thread pool sizes.
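The resize step above can be sketched as follows. This is an assumption-laden illustration, not the patent's implementation: Python's ThreadPoolExecutor has no public resize API, so the sketch swaps in a new executor whenever the predicted size class differs from the current size (the SIZE_CLASSES mapping is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical mapping from the SVM's output class to a pool size
SIZE_CLASSES = {0: 4, 1: 8, 2: 16, 3: 32}

class AdaptivePool:
    def __init__(self, size):
        self.size = size
        self.executor = ThreadPoolExecutor(max_workers=size)

    def adjust(self, predicted_class):
        """Reset the pool if the predicted optimal size differs from the current one."""
        target = SIZE_CLASSES[predicted_class]
        if target != self.size:
            old = self.executor
            self.executor = ThreadPoolExecutor(max_workers=target)
            self.size = target
            old.shutdown(wait=False)  # in-flight tasks on the old pool still complete
        return self.size

pool = AdaptivePool(4)
pool.adjust(2)  # predicted class 2 -> resize to 16 workers
result = pool.executor.submit(sum, [1, 2, 3]).result()
```

Swapping executors rather than mutating one in place keeps the sketch within the documented API; a server-side implementation (e.g. on the JVM, where ThreadPoolExecutor exposes setCorePoolSize) could resize in place instead.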

In the SVM-based thread pool tuning model, the KKT condition serves as the criterion for updating the training sample set. The training sample set is kept at a fixed size, preventing it from growing without bound as new samples are introduced, which allows the thread pool tuning model to adapt to complex and changing environments.
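The fixed-size sample-set update can be sketched with the standard soft-margin SVM KKT conditions (how the patent scores violations is not specified; here alpha is a training point's Lagrange multiplier, C the penalty factor, and margin = y_i * f(x_i) under the current decision function):

```python
C = 1.0  # SVM penalty factor

def kkt_violation(alpha, margin):
    """KKT violation of one training point under the current SVM solution."""
    if alpha == 0:
        return max(0.0, 1.0 - margin)   # non-SV: should lie outside the margin
    if alpha < C:
        return abs(1.0 - margin)        # free SV: should sit exactly on the margin
    return max(0.0, margin - 1.0)       # bound SV: should lie inside or on the margin

def update_samples(samples, new_sample):
    """Replace the worst KKT violator with the new sample, keeping |set| fixed.

    samples: list of (x, y, alpha, margin) tuples, where x is a feature vector
    of thread pool monitoring data and y the optimal-size class label.
    """
    worst = max(range(len(samples)),
                key=lambda i: kkt_violation(samples[i][2], samples[i][3]))
    out = list(samples)
    out[worst] = new_sample
    return out

samples = [
    (("m=80", "t_op=5"), +1, 0.0, 1.4),   # satisfied: non-SV outside margin
    (("m=120", "t_op=5"), -1, 0.5, 0.2),  # free SV off the margin: violation 0.8
    (("m=90", "t_op=6"), +1, 1.0, 0.9),   # satisfied: bound SV inside margin
]
updated = update_samples(samples, (("m=150", "t_op=7"), -1, 0.0, 1.2))
```

In this toy set the second point is the worst violator, so the arriving sample takes its slot and the set size stays at three.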

The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for optimizing the performance of a thread pool of a power grid information communication server is characterized by comprising the following steps:
s1, analyzing factors influencing the performance of the thread pool, and establishing a thread pool performance model;
s2, inputting the performance test data of the communication server into a thread pool tuning model based on a support vector machine to obtain the hyper-parameters of the trained thread pool tuning model;
s3, judging whether the current thread pool size is the optimal size through the trained support vector machine prediction model, resetting the thread pool if the current thread pool size is not the optimal size, and dynamically updating the training sample set by selecting the thread pool characteristic data meeting certain conditions;
and the thread pool tuning model is established according to the thread pool performance data (throughput, task operation time, and task blocking time) and the corresponding optimal thread pool size.
2. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 1, wherein the S1 specifically includes:
(1) let the user task response time be t_response, the queuing time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool;
(2) the processing time of a task in the thread pool comprises the CPU operation time t_operation occupied by the task and the waiting time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_operation + t_wait, and thus the end-user task response time t_response = t_queue + t_operation + t_wait;
(3) setting the system throughput as m, the thread pool size as n, and the task operation time as t_operation, the mathematical model of the task queuing time is t_queue = f(n, m, t_pool) = f(n, m, t_operation + t_wait);
(4) let the time consumed while blocked waiting for system resources be T_blocking and the CPU operation time occupied by a thread in the pool be T_operation; then the mathematical model of the task waiting time can be written as t_wait = g(n, T_operation, T_blocking);
(5) the task operation time t_operation refers to the time the CPU spends executing a user task after it enters the thread pool; for each user task, the operation time can be regarded as a constant, independent of throughput, thread pool size and other parameters, and t_operation = T_operation;
(6) in summary, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_operation + t_wait
= f(n, m, T_operation + g(n, T_operation, T_blocking)) + T_operation + g(n, T_operation, T_blocking),
which can be written as t_response = h(n, m, T_operation, T_blocking);
(7) to optimize the performance of the thread pool, i.e. to make the user task response time t_response take its minimum value: if the above expression is continuously differentiable, the necessary condition for the minimum is t'_response = h'(n_best, m, T_operation, T_blocking) = 0.
3. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 1, wherein the S2 specifically includes:
s21, initializing the hyper-parameters of the support vector machine; the hyper-parameters include: penalty factor C, parameter gamma of radial basis kernel function;
and S22, performing cross training by using a support vector machine, and performing iterative optimization by using the obtained classification accuracy as a fitness function of the improved fluid search optimization algorithm to finally obtain the optimal hyperparameter.
4. The method for optimizing the performance of the thread pool of the grid information communication server according to claim 3, wherein the S22 specifically includes:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of the particle;
(7) and (5) repeating the steps (2) to (6) until a termination condition is met.
5. The method for optimizing the performance of the thread pool of the grid information communication server according to any one of claims 1 to 4, wherein the step S3 specifically includes:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the size of the obtained optimal thread pool is consistent with the current size, if not, resetting the thread pool, and dynamically adjusting the size of the thread pool;
and judging whether the characteristic data meets the KKT condition or not, if so, replacing the point most violating the KKT condition in the training sample set, and performing training and learning through a support vector machine to obtain new classification hyperplanes of the sizes of all the optimal thread pools.
6. A system for optimizing the performance of a thread pool of a power grid information communication server is characterized by comprising the following steps: the function relation establishing module for the optimal performance of the thread pool, the support vector machine parameter selecting module and the thread pool size optimizing module;
the function relation establishing module with the optimal thread pool performance is used for analyzing factors influencing the thread pool performance so as to optimize the thread pool performance and achieve the aim of optimizing the server performance;
the support vector machine parameter selection module is used for inputting the performance test data of the communication server into a thread pool tuning model based on the support vector machine to obtain the hyperparameter of the trained thread pool tuning model;
the thread pool size optimizing module is used for judging whether the current thread pool size is the optimal size or not through a trained support vector machine prediction model, resetting the thread pool if the current thread pool size is not the optimal size, and dynamically updating a training sample set by using thread pool characteristic data meeting certain conditions;
and the thread pool tuning model is established according to the thread pool performance data (throughput, task operation time, and task blocking time) and the corresponding optimal thread pool size.
7. The system for optimizing the performance of the thread pool of the power grid information communication server according to claim 6, wherein the function relationship establishing module with the optimal performance of the thread pool is used for analyzing factors influencing the performance of the thread pool, and the steps specifically include:
(1) let the user task response time be t_response, the queuing time of a task in the queue be t_queue, and the processing time of a task in the thread pool be t_pool; then t_response = t_queue + t_pool;
(2) the processing time of a task in the thread pool comprises the CPU operation time t_operation occupied by the task and the waiting time t_wait during which the task is suspended waiting for system resources, i.e. t_pool = t_operation + t_wait, and thus the end-user task response time t_response = t_queue + t_operation + t_wait;
(3) setting the system throughput as m, the thread pool size as n, and the task operation time as t_operation, the mathematical model of the task queuing time is t_queue = f(n, m, t_pool) = f(n, m, t_operation + t_wait);
(4) let the time consumed while blocked waiting for system resources be T_blocking and the CPU operation time occupied by a thread in the pool be T_operation; then the mathematical model of the task waiting time can be written as t_wait = g(n, T_operation, T_blocking);
(5) the task operation time t_operation refers to the time the CPU spends executing a user task after it enters the thread pool; for each user task, the operation time can be regarded as a constant, independent of throughput, thread pool size and other parameters, and t_operation = T_operation;
(6) in summary, a mathematical model of the user response time reflecting thread pool performance can be constructed as
t_response = t_queue + t_operation + t_wait
= f(n, m, T_operation + g(n, T_operation, T_blocking)) + T_operation + g(n, T_operation, T_blocking),
which can be written as t_response = h(n, m, T_operation, T_blocking);
(7) to optimize the performance of the thread pool, the user task response time t_response is made to take its minimum value; if the above expression is continuously differentiable, the necessary condition for the minimum is t'_response = h'(n_best, m, T_operation, T_blocking) = 0.
8. The system for optimizing the performance of the thread pool of the grid information communication server according to claim 6, wherein the support vector machine parameter selection module comprises: the device comprises a support vector machine parameter initialization module and a support vector machine parameter training module;
the support vector machine parameter initialization module is used for initializing the hyper-parameters of the support vector machine; wherein the hyper-parameters comprise: penalty factor C, parameter gamma of radial basis kernel function;
and the support vector machine training module is used for performing iterative optimization as a fitness function of the improved fluid search optimization algorithm according to the obtained classification accuracy, and finally obtaining the optimal hyperparameter.
9. The system according to claim 8, wherein the support vector machine parameter training module is specifically configured to calculate an optimal hyper-parameter applicable to the thread pool size tuning module, and the method specifically includes the following steps:
(1) initializing the position, the speed, the density and the moving direction of each fluid particle and normal pressure;
(2) calculating an objective function value, updating an optimal objective function value, an optimal position and a worst objective function value, and calculating the density of fluid particles;
(3) normalizing the objective function value and calculating the pressure of the fluid particles;
(4) calculating the pressure and the speed direction of other fluid particles to the current particle;
(5) calculating a fluid velocity value and a velocity vector according to a Bernoulli equation;
(6) updating the position of the particle;
(7) and (5) repeating the steps (2) to (6) until a termination condition is met.
10. The system for optimizing the performance of the thread pool of the grid information communication server according to any one of claims 6 to 9, wherein the thread pool size optimizing module is specifically configured to:
inputting the performance monitoring data of the thread pool running in real time into a support vector machine as a test sample to obtain the size category of the optimal thread pool;
judging whether the size of the obtained optimal thread pool is consistent with the current size, if not, resetting the thread pool, and dynamically adjusting the size of the thread pool;
and judging whether the characteristic data meets the KKT condition or not, if so, substituting the point most violating the KKT condition in the training sample set, and handing the point to a support vector machine training module to generate a new classification hyperplane of each optimal thread pool size.
CN202010727268.0A 2020-07-24 2020-07-24 A method and system for optimizing the performance of a grid information communication server thread pool Active CN111930484B (en)


Publications (2)

Publication Number Publication Date
CN111930484A true CN111930484A (en) 2020-11-13
CN111930484B CN111930484B (en) 2023-06-30

Family

ID=73314674


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN116401236A (en) * 2023-06-07 2023-07-07 瀚高基础软件股份有限公司 Method and equipment for adaptively optimizing database parameters

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108984288A (en) * 2018-06-11 2018-12-11 山东中创软件商用中间件股份有限公司 Thread pool capacity adjustment method, device and equipment based on system response time
US20190258904A1 (en) * 2018-02-18 2019-08-22 Sas Institute Inc. Analytic system for machine learning prediction model selection
CN110401635A (en) * 2019-06-28 2019-11-01 国网安徽省电力有限公司电力科学研究院 A design method for isolation and penetration of internal and external nets
CN110399182A (en) * 2019-07-25 2019-11-01 哈尔滨工业大学 A CUDA thread placement optimization method


Non-Patent Citations (1)

Title
SHEN Yang et al.: "Intelligent Optimization of Power Grid Information Communication Servers Based on Machine Learning", Science Technology and Engineering, vol. 20, no. 32, pp. 13302-13308 *


Also Published As

Publication number Publication date
CN111930484B (en) 2023-06-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant