CN114201378A - Server performance prediction method, device, equipment, storage medium and program product - Google Patents

Server performance prediction method, device, equipment, storage medium and program product

Info

Publication number
CN114201378A
CN114201378A (application CN202111532565.0A)
Authority
CN
China
Prior art keywords
data
server
performance
historical
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111532565.0A
Other languages
Chinese (zh)
Inventor
钱宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202111532565.0A priority Critical patent/CN114201378A/en
Publication of CN114201378A publication Critical patent/CN114201378A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3447 Performance evaluation by modeling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, and in particular to a server performance prediction method, device, equipment, storage medium and program product. The method includes: acquiring real-time operating environment data of a server at a current time point; acquiring first historical performance data of the server in a first preset time period before the current time point; inputting the first historical performance data and the real-time operating environment data into a pre-trained target performance prediction model; and acquiring predicted performance data of the server for a second preset time period in the future, output by the target performance prediction model based on the first historical performance data and the real-time operating environment data. By jointly analyzing the historical performance data from a period before the current time point and the real-time operating environment data at the current time point, the method predicts performance data over a future period and can improve the accuracy and reliability of server performance prediction.

Description

Server performance prediction method, device, equipment, storage medium and program product
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for predicting server performance.
Background
In end-of-day batch processing mode, a bank core system uses a job scheduling tool to trigger batch jobs of related components according to predefined inter-job dependencies, thereby processing massive data for different business functions.
During end-of-day batch processing, executing a batch job may involve complex, large-volume file operations that occupy substantial server memory. The execution of jobs/job flows across the platform and its components is not strictly serial: jobs/job flows without mutual dependencies can be scheduled in parallel to improve efficiency and save time. Running multiple jobs/job flows in parallel inevitably consumes more server resources, causing server Central Processing Unit (CPU) utilization to soar and increasing disk and memory usage; in severe cases it brings the server down, disrupts batch execution, and raises both time and labor costs.
Therefore, during end-of-day batch processing, it is necessary to track the trend of server performance indicators and identify performance bottlenecks promptly, so that potential performance problems can be flagged in advance, hidden production risks are reduced, and the system runs stably and healthily.
One prior-art scheme monitors server CPU and disk usage in real time, but it cannot discover problems in advance; when a problem actually occurs, operation and maintenance personnel can only react to it, so server performance problems are not truly avoided. Another prior-art scheme predicts the server's performance state over a future period based only on the server's current performance indicators, but it does not fully consider the server's actual operating conditions, so prediction accuracy is poor.
Disclosure of Invention
In view of the foregoing problems in the prior art, it is an object of the present invention to provide a method, an apparatus, a device, a storage medium, and a program product for predicting server performance, which can improve the accuracy and reliability of server performance prediction.
In order to solve the above problem, the present invention provides a server performance prediction method, including:
acquiring real-time operation environment data of a server at a current time point;
acquiring first historical performance data of the server in a first preset time period before the current time point;
inputting the first historical performance data and the real-time operating environment data into a pre-trained target performance prediction model;
and acquiring predicted performance data of the server for a second preset time period in the future, which is output by the target performance prediction model based on the first historical performance data and the real-time operation environment data.
Further, the obtaining of the first historical performance data of the server in the first preset time period before the current time point includes:
determining a first time point sequence in a first preset time period before the current time point, wherein the first time point sequence comprises a preset number of first time points;
and acquiring historical performance data of the server at each first time point in the first time point sequence to obtain a historical performance data sequence as the first historical performance data.
Further, the target performance prediction model is obtained by pre-training based on second historical performance data and historical operating environment data of the server in a third preset time period before the current time point;
the training method of the target performance prediction model comprises the following steps:
acquiring second historical performance data and historical operating environment data of the server in a third preset time period before the current time point;
constructing a training sample data set based on the second historical performance data and the historical operating environment data;
and training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model.
Further, said constructing a training sample data set based on said second historical performance data and said historical operating environment data comprises:
determining a second time point sequence in a third preset time period before the current time point;
for each second time point in the second time point sequence, acquiring historical operating environment data of the server at the second time point as first input data;
acquiring second historical performance data of the server in a first preset time period before the second time point, wherein the second historical performance data is used as second input data;
acquiring second historical performance data of the server within a second preset time period after the second time point as output data;
and constructing a training sample data set by using the first input data, the second input data and the output data.
Further, the training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model includes:
constructing an initial performance prediction model for server performance prediction;
training the initial performance prediction model by using the training sample data set to obtain a trained performance prediction model;
and comparing the trained performance prediction model with the initial performance prediction model, and determining an optimal performance prediction model as a target performance prediction model.
Further, the method further comprises:
judging whether the predicted performance data meets a preset condition or not;
and when the predicted performance data meets a preset condition, generating and displaying an early warning notification message, wherein the early warning notification message is used for indicating that the server is abnormal.
Further, the predicted performance data comprises performance peak data of the server within a second preset time period in the future;
the judging whether the predicted performance data meets the preset condition comprises the following steps:
judging whether the performance peak data is larger than a preset peak threshold value or not;
when the performance peak data is larger than a preset peak threshold value, determining that the predicted performance data meets a preset condition;
and when the performance peak data is less than or equal to a preset peak threshold value, determining that the predicted performance data does not meet a preset condition.
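The peak-threshold judgment described above can be sketched in a few lines of Python. The metric names, message format, and dictionary-based interface are illustrative assumptions, not part of the disclosure:

```python
def exceeds_threshold(peak_value: float, threshold: float) -> bool:
    """True when a predicted peak is strictly larger than its preset threshold;
    values equal to or below the threshold raise no alert."""
    return peak_value > threshold


def check_predicted_peaks(predicted_peaks: dict, thresholds: dict) -> list:
    """Collect an early-warning message for every metric whose predicted
    peak meets the preset condition (peak > threshold)."""
    alerts = []
    for metric, peak in predicted_peaks.items():
        if exceeds_threshold(peak, thresholds[metric]):
            alerts.append(f"early warning: predicted {metric} peak {peak} "
                          f"exceeds threshold {thresholds[metric]}")
    return alerts
```

For example, with predicted peaks of 0.96 (CPU) and 0.70 (memory) against thresholds 0.90 and 0.85, only the CPU peak produces an early-warning message.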
Another aspect of the present invention provides a server performance prediction apparatus, including:
the operation environment data acquisition module is used for acquiring real-time operation environment data of the server at the current time point;
the historical performance data acquisition module is used for acquiring first historical performance data of the server in a first preset time period before the current time point;
the model input module is used for inputting the first historical performance data and the real-time operation environment data into a pre-trained target performance prediction model;
and the predicted performance data acquisition module is used for acquiring predicted performance data of the server for a second preset time period in the future, which is output by the target performance prediction model based on the first historical performance data and the real-time operation environment data.
Another aspect of the present invention provides an electronic device, including a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the server performance prediction method as described above.
Another aspect of the present invention provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the server performance prediction method as described above.
Another aspect of the present invention provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the server performance prediction method as described above.
Due to the technical scheme, the invention has the following beneficial effects:
according to the server performance prediction method provided by the embodiment of the invention, the historical performance data and the real-time operation environment data of the server are obtained, the performance data of the server in the next time period is predicted in real time by using the pre-trained performance prediction model, the value of the historical performance data can be fully utilized, and when the performance of the server is predicted in real time, the performance indexes of the server in a time period before the current time point are comprehensively analyzed, including the influence of external environmental factors, so that the accuracy and the reliability of server performance prediction are improved, the operation quality of the server is more accurately reflected, the problem is found in advance, the problem is solved, and the safety and the stability of the operation of the server are improved.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings used in the description of the embodiment or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the invention;
FIG. 2 is a flow diagram of a method for server performance prediction according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an end-of-day batch server performance prediction process provided by one embodiment of the present invention;
FIG. 4 is a schematic diagram of a training process of an end-of-day batch model provided by an embodiment of the invention;
FIG. 5 is a diagram of constructing a training sample data set according to an embodiment of the present invention;
FIG. 6 is a flow chart of a method for server performance prediction according to another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a server performance prediction apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In order to make the objects, technical solutions and advantages disclosed in the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the invention and are not intended to limit the embodiments of the invention. In the technical scheme of the embodiment of the invention, the data acquisition, storage, use, processing and the like all conform to relevant regulations of national laws and regulations.
Referring to the specification and fig. 1, a schematic diagram of an implementation environment of a server performance prediction method according to an embodiment of the present invention is shown. It should be noted that fig. 1 is only an example of an implementation environment in which the embodiment of the present invention may be applied to help those skilled in the art understand the technical content of the present invention, and does not mean that the embodiment of the present invention may not be applied to other devices, systems, environments or scenarios. As shown in fig. 1, the implementation environment may include at least a monitoring device 110 and at least one server 120. The monitoring device 110 and each server 120 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiment of the present invention.
The monitoring device 110 may be any of various electronic devices, including physical devices such as a smart phone, a tablet computer, a notebook computer, a desktop computer or a server, and may also include software running on such physical devices, such as application programs, but is not limited thereto. The operating system running on the monitoring device 110 may include, but is not limited to, an Android system, an iOS system, a Linux system, a Windows system, and the like.
The server 120 may include a server that operates independently, or a distributed server, or a server cluster composed of a plurality of servers, or may include a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform.
In this embodiment of the present invention, the monitoring device 110 may be installed with monitoring software, and the monitoring device 110 may interact with the server 120 through various monitoring software to monitor the operation state and various operation performance indexes of the server 120.
It should be noted that the server performance prediction method provided by the embodiment of the present invention may be generally executed by the monitoring device 110. Accordingly, the server performance prediction apparatus provided by the embodiment of the present invention may be generally disposed in the monitoring device 110. In some possible embodiments, the server performance prediction method provided by the embodiment of the present invention may also be executed by the server 120. Accordingly, the server performance prediction apparatus provided in the embodiment of the present invention may also be disposed in the server 120.
It should be noted that fig. 1 is only an example. It will be understood by those skilled in the art that although only 1 monitoring device and 2 servers are shown in fig. 1, the present invention is not limited to this embodiment, and there may be any number of monitoring devices and servers according to actual needs.
Referring to the specification, fig. 2 shows a flow of a server performance prediction method according to an embodiment of the present invention, which may be applied to the monitoring device in fig. 1, and specifically as shown in fig. 2, the method may include the following steps:
s210: and acquiring real-time running environment data of the server at the current time point.
In the embodiment of the present invention, the current time point may be the time point at which server performance prediction is performed, and the real-time operating environment data may be parameters that influence server performance; relative to the server's own performance indicators, these parameters can be regarded as external environmental factors. The real-time operating environment data may include, but is not limited to, the number of job flows/jobs running in parallel, file size, file count, file operation complexity, and the like.
The number of job flows/jobs running in parallel is positively correlated with the peak server CPU utilization over the coming period: the more job flows/jobs, the higher the CPU utilization peak. File size, file count and file operation complexity are positively correlated with the peak memory usage of the server over the coming period. For example, the larger the files in an RCSORT-type job and the more complex their operations, the higher the memory consumption peak; likewise, the more and larger the files in a DDMERGE-type job, the higher the memory consumption peak.
S220: and acquiring first historical performance data of the server in a first preset time period before the current time point.
In practical applications, predicting the server's state over a future period based only on its current performance data may yield wrong results. For example, for a bank core system platform server, suppose the current performance indicators are good. It is still possible that before batch processing stage T1 begins, only the platform's own job flows/jobs are running, and since they depend on one another they execute essentially single-threaded, leaving server resources ample. Once stage T1 begins, however, the job flows/jobs of all components are launched at once and many threads run simultaneously, causing CPU utilization to surge. As another example, when no RCSORT-type job is running, the Spark server is idle and its memory situation looks ideal; but once an RCSORT-type job runs with large files and complex operations, the Spark server's free memory drops sharply. Conversely, if the server's current performance looks poor but it happens to be in the middle of releasing resources, its subsequent performance indicators will keep improving. Clearly, the server's state over a future period cannot be predicted from its current state alone.
To improve the accuracy of server performance prediction, the embodiment of the invention introduces server performance indicators from a historical time period, using the rich information they contain to perform a comprehensive analysis of server performance that includes the influence of external environmental factors. The first preset time period may be set according to actual needs, which is not limited in the embodiment of the present invention.
In one possible embodiment, the obtaining of the first historical performance data of the server in the first preset time period before the current time point may include:
determining a first time point sequence in a first preset time period before the current time point, wherein the first time point sequence comprises a preset number of first time points;
and acquiring historical performance data of the server at each first time point in the first time point sequence to obtain a historical performance data sequence as the first historical performance data.
The time intervals between adjacent time points in the first time point sequence may be the same or different; preferably they are the same, i.e., the first time point sequence is a sequence of a preset number of first time points sampled periodically. Accordingly, the first historical performance data may be a sequence composed of a preset number of periodically collected historical performance data points. The preset number may be set according to actual needs, which is not limited in the embodiment of the present invention.
In practical application, with reference to fig. 3 of the specification, a fixed-length historical performance data queue (holding a preset number of entries) may be established in advance, and server performance data may be collected periodically and appended to it. Once the fixed-length queue is full, the first historical performance data is available, and the first prediction of server performance can be made using the operating environment data collected at the current time point together with the first historical performance data, yielding the server's performance data for a coming period. The collection periods for server performance data and operating environment data may be set according to actual needs and are not limited by the embodiment of the present invention; preferably, they are set equal to the collection period used when gathering training data for the performance prediction model.
When the next collection time point arrives, the newly collected server performance data is appended to the fixed-length historical performance data queue. Following the first-in-first-out behavior of a queue, the earliest entry is evicted and the latest performance data is added. The resulting new queue, combined with the latest operating environment data, triggers a new prediction; repeating this process allows the server's performance to be monitored continuously.
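The fixed-length first-in-first-out queue described above maps naturally onto Python's `collections.deque` with `maxlen`. The following is a minimal sketch; the window size and the `on_sample` helper are illustrative assumptions:

```python
from collections import deque

WINDOW = 5  # preset number of sampling points in the fixed-length queue (assumed)

history = deque(maxlen=WINDOW)  # FIFO: the oldest sample is evicted automatically


def on_sample(sample):
    """Append one periodically collected performance sample. Once the queue
    is full, each new sample yields a fresh prediction window; before that,
    no prediction can be made yet."""
    history.append(sample)
    if len(history) == WINDOW:
        return list(history)  # the first historical performance data sequence
    return None
```

Each call after the queue first fills returns the latest full window, with the oldest sample dropped automatically by `maxlen`.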
S230: inputting the first historical performance data and the real-time operating environment data into a pre-trained target performance prediction model.
S240: and acquiring the predicted performance data of the server output by the target performance prediction model based on the first historical performance data and the real-time operation environment data in a second preset time period in the future.
In the embodiment of the invention, the server's historical performance data and its performance data over a coming period are closely related, but because both are affected by many factors, the relationship is clearly not linear and is difficult to express explicitly. Neural network models are well suited to prediction problems in nonlinear systems, so a target performance prediction model can be trained in advance to predict server performance. The target performance prediction model may adopt a back-propagation (BP) neural network. It should be noted that this example does not limit the embodiments of the present invention; the target performance prediction model may also adopt other types of existing neural network architectures according to actual needs.
In the embodiment of the invention, the first historical performance data and the real-time operation environment data are input into the target performance prediction model, so that the predicted performance data of the server in a second preset time period in the future can be output. The predicted performance data may include, but is not limited to, CPU usage data and memory space occupation data, and the second preset time period may be set according to actual needs, which is not limited in this embodiment of the present invention.
In a possible embodiment, since in practical applications only potential performance bottlenecks matter, rather than every performance data point over the coming period, the predicted performance data may be the server's performance peak data within the second preset time period in the future, which may include, but is not limited to, peak CPU utilization and peak memory usage.
In practical application, with reference to fig. 3 of the specification, a Python script may be executed in advance to read the parameters and metadata of the pre-trained target performance prediction model into memory, so that the loaded model can be used directly in subsequent performance predictions.
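As a hedged illustration of such a script, the sketch below loads a stored model with `pickle` and assembles one input vector from the queued history and the real-time environment data. The file format, feature names and feature order are assumptions; in practice they must match however the model was actually trained and saved.

```python
import pickle

import numpy as np


def load_model(path):
    """Read the stored target performance prediction model into memory once,
    so later predictions reuse the loaded object (file format is assumed)."""
    with open(path, "rb") as f:
        return pickle.load(f)


def predict_peaks(model, history_seq, env_data):
    """Concatenate real-time operating environment features with the first
    historical performance data sequence and run one prediction.
    Feature names and their order are illustrative and must match training."""
    env_vec = [env_data["parallel_jobs"], env_data["file_count"],
               env_data["total_file_mb"]]
    x = np.concatenate([np.asarray(env_vec, dtype=float),
                        np.ravel(history_seq)]).reshape(1, -1)
    return model.predict(x)[0]  # e.g. [cpu_peak, memory_peak]
```

Loading once at startup and keeping the object in memory avoids re-reading the model file on every collection cycle.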
In one possible embodiment, the target performance prediction model may be obtained by pre-training based on second historical performance data and historical operating environment data of the server within a third preset time period before the current time point; specifically, the training method of the target performance prediction model may include the following steps:
Step one: acquiring second historical performance data and historical operating environment data of the server in a third preset time period before the current time point.
Step two: and constructing a training sample data set based on the second historical performance data and the historical operating environment data.
Step three: and training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model.
In the first step, the third preset time period may be set according to actual needs, which is not limited in the embodiment of the present invention. For example, assuming that the server performance prediction method provided by the embodiment of the present invention is applied to a daily end batch processing task process, a time period corresponding to a previous batch processing process of the current batch processing process may be used as the third time period, that is, the target performance prediction model is trained by using second historical performance data and historical operating environment data of the server in the previous batch processing process.
In the second step, the constructing a training sample data set based on the second historical performance data and the historical operating environment data may include:
determining a second time point sequence in a third preset time period before the current time point;
for each second time point in the second time point sequence, acquiring historical operating environment data of the server at the second time point as first input data;
acquiring second historical performance data of the server in a first preset time period before the second time point, wherein the second historical performance data is used as second input data;
acquiring second historical performance data of the server within a second preset time period after the second time point as output data;
and constructing a training sample data set by using the first input data, the second input data and the output data.
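The steps above can be sketched as follows; the function name and the list-based representation are illustrative, assuming performance samples and environment feature vectors are aligned and ordered oldest first:

```python
def build_samples(perf, env, hist_len, fut_len):
    """For each eligible second time point t: the environment data at t is
    the first input, the hist_len performance values before t are the
    second input, and the fut_len performance values after t are the output."""
    samples = []
    for t in range(hist_len, len(perf) - fut_len + 1):
        x = list(env[t]) + list(perf[t - hist_len:t])  # first + second input
        y = list(perf[t:t + fut_len])                  # output window
        samples.append((x, y))
    return samples
```

Each element of `samples` corresponds to one second time point in the sequence.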
The time intervals between adjacent time points in the second time point sequence may be the same or different. The method for acquiring the second historical performance data of the server in the first preset time period before each second time point is similar to the method for acquiring the first historical performance data of the server in the first preset time period before the current time point, and is not repeated herein. Correspondingly, the second input data may also be a historical performance data sequence composed of a preset number of periodically acquired historical performance data, with the same acquisition period as that used for the server performance data and operating environment data during server performance prediction.
The second historical performance data of the server in the second preset time period after the second time point may also include, but is not limited to, CPU usage data and memory space occupation data, and may include, for example, CPU usage peak data and memory space occupation peak data.
In practical application, referring to fig. 4 of the specification, when the server operates within a third preset time period before the current time point, performance data and operating environment data of the server may be periodically collected, and the collected data is recorded, so that the second historical performance data and the historical operating environment data may be obtained. And (4) constructing a training sample data set for model training by using the collected data, and finally finishing the model training.
In practical application, referring to fig. 5 in the specification, since the periodically collected server performance data forms a serial (time-ordered) sequence, the data may be preprocessed in advance into samples with an X-dimensional input and a Y-dimensional output, where X is the fixed queue length of the historical performance data (i.e., equal to the preset number) and Y is the queue length of the performance data in the second preset time period in the future.
Illustratively, as shown in fig. 5, each circle represents one performance datum; the performance data are arranged from top to bottom in recording order, i.e., from earliest to latest in time. The upper selection box marks the X-dimensional data input, and the lower selection box marks the Y-dimensional data output.
In practical application, after the data with X-dimensional input and Y-dimensional output are formed, training sample data can be constructed from them. Specifically, the X-dimensional data input, together with the operating environment data (including but not limited to the number of jobs running in parallel, file size, number of files, and file operation complexity) corresponding to a time point between the last datum of the X-dimensional input and the first datum of the Y-dimensional output, may be composed into an input vector for model training, and the largest datum in the Y-dimensional data output may be used as the output vector, thereby constructing one training sample.
For example, as shown in fig. 5, the X-dimensional data input and the operating environment data corresponding to a time point between datum A in the X-dimensional input and datum B in the Y-dimensional output may be composed into an input vector for model training, with the largest datum C in the Y-dimensional output as the output vector. Alternatively, the operating environment data corresponding to the time point of datum A or datum B may be combined with the X-dimensional data input to form the input vector.
In practical application, translating the upper and lower selection boxes downwards by one time period each yields a new pair of X-dimensional input and Y-dimensional output data, from which one training sample can be constructed as above. Repeating this translation, each downward shift by one time period produces a new X-dimensional input and Y-dimensional output and therefore a new training sample. Finally, a series of training samples is obtained; after normalizing each sample, the training sample data set for model training is complete.
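The sliding-window construction with a peak output, plus a per-sample min-max normalization, might look like the sketch below. Min-max scaling is only one common reading of "normalization processing"; the patent does not fix the method.

```python
def normalize(values):
    """Min-max scale a sample's values into [0, 1] (assumed normalization)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant windows
    return [(v - lo) / span for v in values]

def sliding_peak_samples(perf, x_len, y_len):
    """Each one-period downward shift of the X/Y selection boxes yields one
    sample: the X-length input window and the peak of the Y-length window."""
    return [
        (perf[t - x_len:t], max(perf[t:t + y_len]))
        for t in range(x_len, len(perf) - y_len + 1)
    ]
```

The environment features described earlier would be concatenated onto each X window before training.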
In the third step, the training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model may include:
constructing an initial performance prediction model for server performance prediction;
training the initial performance prediction model by using the training sample data set to obtain a trained performance prediction model;
and comparing the trained performance prediction model with the initial performance prediction model, and determining an optimal performance prediction model as a target performance prediction model.
Specifically, the performance prediction model that has been trained before the third preset time period may be used as the initial performance prediction model, that is, the parameters and information of the performance prediction model that has been trained before are used as the initial values of model training. If there is no trained performance prediction model, a neural network model may be constructed and initial values set as the initial performance prediction model. Wherein the neural network model may be a BP neural network model.
Specifically, after model training is completed to obtain a trained performance prediction model, the trained performance prediction model may be compared with the initial performance prediction model, and an optimal performance prediction model of the trained performance prediction model and the initial performance prediction model is selected as a target performance prediction model.
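A minimal sketch of that comparison step, assuming "optimal" means lower error on a held-out validation set (the patent leaves the selection criterion open):

```python
def mean_abs_error(model, X, y):
    """Illustrative error function for models callable on a single input."""
    return sum(abs(model(x) - t) for x, t in zip(X, y)) / len(y)

def select_model(candidate, incumbent, X_val, y_val, error=mean_abs_error):
    """Keep whichever model has the lower validation error; a tie goes to
    the newly trained candidate."""
    if error(candidate, X_val, y_val) <= error(incumbent, X_val, y_val):
        return candidate
    return incumbent
```

Here the candidate is the freshly trained model and the incumbent is the initial (previously trained) model.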
Specifically, when the error and accuracy of the target performance prediction model training reach acceptable ranges, the target performance prediction model can be used for predicting the server performance data in the future in real time.
In the embodiment of the invention, during operation of the server, performance data and operating environment data are periodically acquired and recorded. The previously trained performance prediction model is retrained on the recorded data to obtain a new performance prediction model, the new model is compared with the previously trained one, and the better model is selected as the target performance prediction model. In this way the fitting precision of the model is continuously improved while overfitting is avoided, yielding more accurate performance prediction results.
In a specific embodiment, with reference to fig. 4 in the specification, the server performance prediction method provided by the embodiment of the present invention may be applied to an end-of-day batch processing process. To predict the performance of a server while an end-of-day batch processing task runs, the training of the corresponding target performance prediction model may include acquiring performance data from the end-of-day batch processing server and constructing the model.
Specifically, since model training requires a large amount of training sample data, the server performance data, operating environment data, and corresponding time stamps may be periodically acquired during each of the previous N rounds of end-of-day batch processing, and the acquired data recorded in a file named for that day, in preparation for subsequent performance prediction model training, so that the value of the historical data is fully exploited. Data acquisition and recording may likewise be performed in each subsequent round of end-of-day batch processing, and after each round finishes, the idle time may be used to further optimize the previously trained performance prediction model.
Specifically, the training process of the performance prediction model may specifically include the following steps:
Step one: after the first N rounds of end-of-day batch processing are finished, a neural network model is constructed and trained using the server performance data and operating environment data collected during those N rounds, yielding a trained performance prediction model. The neural network model may be a BP neural network model, and the value of N may be set according to actual needs, which is not limited in this embodiment of the present invention.
Step two: after each subsequent round of end-of-day batch processing finishes, the model is further trained, starting from the previously trained performance prediction model, on the server performance data and operating environment data collected during that round. After training, the resulting model is compared with the previously trained one, and the better of the two is selected and stored. During the next round of end-of-day batch processing, this stored optimal model is used for server performance prediction.
For example, after the N-th round of end-of-day batch processing finishes, a performance prediction model may be constructed by training on the server performance data and operating environment data from the previous N rounds, and stored. After the M-th round finishes (M > N), the parameter values of the model obtained after round M-1 are used as the initial parameter values, and training on the server performance data and operating environment data collected during round M yields a new performance prediction model; the new model is compared with the model obtained after round M-1, and the better one is selected and stored. After round M+1 finishes, the parameter values of the model obtained after round M are used as the initial parameter values, training on the data collected during round M+1 yields a new model, which is again compared with the model obtained after round M, and the better one is selected and stored. Model training for each subsequent round of end-of-day batch processing proceeds likewise.
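One way to sketch a single round of this warm-started retraining with scikit-learn; validation-based selection and all hyperparameters are assumptions, and MLPRegressor with `warm_start=True` continues fitting from the existing coefficients rather than reinitializing:

```python
import copy

import numpy as np
from sklearn.neural_network import MLPRegressor

def round_update(model, X_round, y_round, X_val, y_val):
    """One end-of-day round: copy the stored model, continue training from
    its weights on this round's data only, then keep whichever of the two
    scores better on held-out data (assumed selection criterion)."""
    candidate = copy.deepcopy(model)
    candidate.set_params(warm_start=True, max_iter=300)
    candidate.fit(X_round, y_round)
    if candidate.score(X_val, y_val) >= model.score(X_val, y_val):
        return candidate
    return model
```

Training only on the new round's data, from the previous round's weights, matches the incremental scheme described above.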
During operation of the end-of-day batch processing task, the training process of the performance prediction model is aligned with the end-of-day batch processing time periods: a training data set is obtained automatically in each round, and each round of model training starts from the parameter information of the historical model. This avoids repeated data in training, keeps the amount of training data uniform across rounds, and improves training speed and efficiency. Meanwhile, comparing the newly trained performance prediction model with the historical model and keeping the better one counters the drop in accuracy and growth in error caused by over-training, ensuring the accuracy of the resulting model's server performance predictions.
Specifically, when the error and the accuracy of the obtained performance prediction model training reach acceptable ranges, the performance prediction model can be used as a target performance prediction model to be applied to the prediction of server performance data in a future period of time in real time.
In one possible embodiment, referring to fig. 6 in conjunction with the description, the method may further include the steps of:
s250: and judging whether the predicted performance data meets a preset condition or not.
In the embodiment of the invention, in order to avoid a server crash caused by over-pressure on the server's CPU, disk space, memory space, and the like, after the predicted performance data for a future period is obtained, it can be determined whether that data exceeds the bearing capacity of the server. If it does, an early warning can be triggered to notify business personnel to handle the situation, avoiding the impact a server crash would have on task running time and efficiency.
In one possible embodiment, the predicted performance data may include performance peak data for the server over a second predetermined time period in the future;
the determining whether the predicted performance data satisfies a preset condition may include:
judging whether the performance peak data is larger than a preset peak threshold value or not;
when the performance peak data is larger than a preset peak threshold value, determining that the predicted performance data meets a preset condition;
and when the performance peak data is less than or equal to a preset peak threshold value, determining that the predicted performance data does not meet a preset condition.
The preset peak threshold may correspond to a type of the performance peak data, and may be a CPU peak threshold when the performance peak data is CPU peak data, and may be a memory space peak threshold when the performance peak data is memory space peak data.
When the performance peak data is greater than the preset peak threshold, it can be determined that the bearing capacity of the server will be exceeded in a second preset time period in the future, and early warning needs to be performed. The preset peak threshold may be set according to an actual situation, which is not limited in the embodiment of the present invention.
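The judgment above reduces to a per-metric threshold comparison. The metric names and threshold values below are illustrative only; the patent leaves the preset peak thresholds to the implementer.

```python
PEAK_THRESHOLDS = {"cpu_pct": 90.0, "mem_pct": 85.0}  # illustrative values

def exceeds_threshold(peak_value, threshold):
    """Preset condition: the predicted peak is strictly greater than the
    preset peak threshold (the <= case does not trigger a warning)."""
    return peak_value > threshold

def metrics_needing_warning(predicted_peaks, thresholds=PEAK_THRESHOLDS):
    """Return the metrics whose predicted peak triggers an early warning."""
    return [name for name, peak in predicted_peaks.items()
            if exceeds_threshold(peak, thresholds[name])]
```

A non-empty result would then drive the early-warning notification described in step S260.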
S260: and when the predicted performance data meets a preset condition, generating and displaying an early warning notification message, wherein the early warning notification message is used for indicating that the server is abnormal.
In the embodiment of the invention, when the predicted performance data meets the preset condition, it can be concluded that the server's bearing capacity will be exceeded within the second preset time period in the future and that an early warning is needed; an early warning notification message can then be generated and displayed so that business personnel can handle the situation in time and the server's bearing capacity is not exceeded.
According to the embodiment of the invention, the obtained predicted performance data is monitored and handled accordingly: when it indicates that the server's bearing capacity will be exceeded, a corresponding early warning can be issued in time so that business personnel can intervene before the problem materializes. This ensures normal server operation and continuity of task execution, improves task execution efficiency, and improves the user experience.
Illustratively, with reference to fig. 3 of the specification, during the running of the end-of-day batch processing task, after the predicted performance data of the server in the second preset time period in the future is obtained, if the predicted performance data is determined to meet the preset condition (i.e., it is determined that the server's bearing capacity will be exceeded in that period), an early warning is started and the executing job is suspended so that business personnel can take corresponding measures; after the handling is completed, the suspended job is resumed to continue the end-of-day batch processing. During the end-of-day batch processing task, the processes of data acquisition, server performance prediction, judgment, and early warning can be carried out cyclically until the whole processing is finished.
In summary, according to the server performance prediction method provided by the embodiment of the invention, historical performance data and real-time operating environment data of the server are obtained, and the performance data of the server in the next time period is predicted in real time using the pre-trained performance prediction model. The value of the historical performance data is thus fully utilized: when predicting server performance in real time, the server's performance indicators over a time period before the current time point are comprehensively analyzed, including the influence of external environmental factors. This improves the accuracy and reliability of server performance prediction, reflects the operating quality of the server more precisely, and allows problems to be discovered and solved in advance, improving the safety and stability of server operation.
Referring to the specification and fig. 7, a structure of a server performance prediction apparatus 700 according to an embodiment of the present invention is shown. As shown in fig. 7, the apparatus 700 may include:
an operation environment data obtaining module 710, configured to obtain real-time operation environment data of the server at a current time point;
a historical performance data obtaining module 720, configured to obtain first historical performance data of the server in a first preset time period before the current time point;
a model input module 730, configured to input the first historical performance data and the real-time operating environment data into a pre-trained target performance prediction model;
a predicted performance data obtaining module 740, configured to obtain predicted performance data, output by the target performance prediction model based on the first historical performance data and the real-time operating environment data, of the server within a second preset time period in the future.
In one possible embodiment, the target performance prediction model is obtained by pre-training based on second historical performance data and historical operating environment data of the server in a third preset time period before the current time point; the apparatus 700 may further include a model training module, which may include:
the training data acquisition unit is used for acquiring second historical performance data and historical operating environment data of the server in a third preset time period before the current time point;
a training sample data construction unit, configured to construct a training sample data set based on the second historical performance data and the historical operating environment data;
and the model training unit is used for training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model.
In one possible embodiment, the apparatus 700 may further include:
the judging module is used for judging whether the predicted performance data meets a preset condition or not;
and the early warning notification message generation module is used for generating and displaying an early warning notification message when the predicted performance data meets a preset condition, wherein the early warning notification message is used for indicating that the server is abnormal.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus provided in the above embodiments and the corresponding method embodiments belong to the same concept, and specific implementation processes thereof are detailed in the corresponding method embodiments and are not described herein again.
An embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the server performance prediction method provided by the above method embodiment.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
Referring to the specification in conjunction with fig. 8, a block diagram of an electronic device 800 is shown, in accordance with one embodiment of the present invention. Electronic device 800 may include one or more processors 802, system control logic 808 coupled to at least one of the processors 802, system memory 804 coupled to the system control logic 808, non-volatile memory (NVM) 806 coupled to the system control logic 808, and a network interface 810 coupled to the system control logic 808.
The processor 802 may include one or more single-core or multi-core processors. The processor 802 may include any combination of general-purpose processors and dedicated processors (e.g., graphics processors, application processors, baseband processors, etc.). In embodiments herein, the processor 802 may be configured to perform one or more embodiments in accordance with the various embodiments as shown in fig. 2-6.
In some embodiments, the system control logic 808 may include any suitable interface controllers to provide any suitable interface to at least one of the processors 802 and/or any suitable device or component in communication with the system control logic 808.
In some embodiments, the system control logic 808 may include one or more memory controllers to provide an interface to the system memory 804. System memory 804 may be used to load and store data and/or instructions. The memory 804 of the device 800 may comprise any suitable volatile memory, such as suitable Dynamic Random Access Memory (DRAM), in some embodiments.
NVM/memory 806 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some embodiments, the NVM/memory 806 may include any suitable non-volatile memory such as flash memory and/or any suitable non-volatile storage device, such as at least one of a HDD (Hard Disk Drive), CD (Compact Disc) Drive, DVD (Digital Versatile Disc) Drive.
The NVM/storage 806 may comprise part of the storage resources installed on the device 800, or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 806 may be accessed over a network via the network interface 810.
In particular, system memory 804 and NVM/storage 806 may each include: a temporary copy and a permanent copy of instructions 820. The instructions 820 may include: instructions that when executed by at least one of the processors 802 cause the apparatus 800 to implement the server performance prediction method as illustrated in fig. 2-6. In some embodiments, the instructions 820, hardware, firmware, and/or software components thereof may additionally/alternatively be disposed in the system control logic 808, the network interface 810, and/or the processor 802.
Network interface 810 may include a transceiver to provide a radio interface for device 800 to communicate with any other suitable device (e.g., front end module, antenna, etc.) over one or more networks. In some embodiments, the network interface 810 may be integrated with other components of the device 800. For example, the network interface 810 may be integrated with at least one of the communication module of the processor 802, the system memory 804, the NVM/storage 806, and a firmware device (not shown) having instructions that, when executed by at least one of the processors 802, the device 800 implements one or more of the various embodiments illustrated in fig. 2-6.
The network interface 810 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 810 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one embodiment, at least one of the processors 802 may be packaged together with logic for one or more controllers of system control logic 808 to form a System In Package (SiP). In one embodiment, at least one of the processors 802 may be integrated on the same die with logic for one or more controllers of system control logic 808 to form a system on a chip (SoC).
The apparatus 800 may further comprise input/output (I/O) devices 812. The I/O devices 812 may include a user interface enabling a user to interact with the device 800, and a peripheral component interface enabling peripheral components to interact with the device 800 as well. In some embodiments, the device 800 further comprises a sensor for determining at least one of environmental conditions and location information associated with the device 800.
In some embodiments, the user interface may include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still image cameras and/or video cameras), a flashlight (e.g., a light emitting diode flash), and a keyboard.
In some embodiments, the peripheral component interfaces may include, but are not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some embodiments, the sensors may include, but are not limited to, a gyroscope sensor, an accelerometer, a proximity sensor, an ambient light sensor, and a positioning unit. The positioning unit may also be part of the network interface 810 or interact with the network interface 810 to communicate with components of a positioning network, such as Global Positioning System (GPS) satellites.
It is to be understood that the illustrated structure of the embodiments of the invention is not to be construed as a specific limitation to the electronic device 800. In other embodiments of the invention, the electronic device 800 may include more or fewer components than illustrated, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present invention further provides a computer-readable storage medium, which may be disposed in an electronic device to store at least one instruction or at least one program for implementing a server performance prediction method, where the at least one instruction or the at least one program is loaded and executed by the processor to implement the server performance prediction method provided by the foregoing method embodiment.
Optionally, in an embodiment of the present invention, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
An embodiment of the present invention also provides a computer program product comprising a computer program/instructions which is loaded and executed by a processor to implement the steps of the server performance prediction method provided in the various alternative embodiments described above, when the computer program product is run on an electronic device.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; identical or similar parts may be referred to across embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention are intended to be included within its scope.

Claims (11)

1. A method for server performance prediction, comprising:
acquiring real-time operation environment data of a server at a current time point;
acquiring first historical performance data of the server in a first preset time period before the current time point;
inputting the first historical performance data and the real-time operating environment data into a pre-trained target performance prediction model;
and acquiring the predicted performance data of the server output by the target performance prediction model based on the first historical performance data and the real-time operation environment data in a second preset time period in the future.
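The prediction flow of claim 1 can be sketched as follows. This is a minimal illustration only: the function name `predict_server_performance`, the flat feature layout, and the stand-in model are hypothetical assumptions and are not taken from the patent.

```python
from typing import Callable, List, Sequence

def predict_server_performance(model: Callable[[List[float]], List[float]],
                               real_time_env: Sequence[float],
                               first_historical: Sequence[float]) -> List[float]:
    """Feed first historical performance data and real-time operating
    environment data into a pre-trained target performance prediction
    model, and return predicted performance data for a second preset
    time period in the future."""
    # One plausible feature layout: the historical performance window
    # followed by the current operating-environment readings.
    features = list(first_historical) + list(real_time_env)
    return model(features)

# Trivial stand-in model: repeat the most recent feature for 3 future points.
model = lambda feats: [feats[-1]] * 3
predicted = predict_server_performance(model,
                                       real_time_env=[0.7],
                                       first_historical=[0.5, 0.6, 0.8])
```

In a real system the stand-in lambda would be replaced by the trained target performance prediction model of claims 3 to 5.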
2. The method of claim 1, wherein the obtaining first historical performance data of the server within a first preset time period before the current time point comprises:
determining a first time point sequence in a first preset time period before the current time point, wherein the first time point sequence comprises a preset number of first time points;
and acquiring historical performance data of the server at each first time point in the first time point sequence to obtain a historical performance data sequence as the first historical performance data.
3. The method of claim 1, wherein the target performance prediction model is pre-trained based on second historical performance data and historical operating environment data of the server within a third preset time period before the current time point;
the training method of the target performance prediction model comprises the following steps:
acquiring second historical performance data and historical operating environment data of the server in a third preset time period before the current time point;
constructing a training sample data set based on the second historical performance data and the historical operating environment data;
and training an initial performance prediction model based on the training sample data set to obtain the target performance prediction model.
4. The method of claim 3, wherein said constructing a set of training sample data based on said second historical performance data and said historical operating environment data comprises:
determining a second time point sequence in a third preset time period before the current time point;
for each second time point in the second time point sequence, acquiring historical operating environment data of the server at the second time point as first input data;
acquiring second historical performance data of the server in a first preset time period before the second time point, wherein the second historical performance data is used as second input data;
acquiring second historical performance data of the server within a second preset time period after the second time point as output data;
and constructing a training sample data set by using the first input data, the second input data and the output data.
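The sample construction of claim 4 amounts to a sliding window over a time-point sequence: for each second time point, the environment data at that point is the first input, the performance window before it is the second input, and the performance window after it is the output. A minimal sketch, assuming per-time-point series and integer window lengths (all names here are hypothetical):

```python
def build_training_samples(perf, env, before, after):
    """perf and env are equal-length per-time-point series; returns a
    list of (first_input, second_input, output) tuples, one per valid
    second time point."""
    samples = []
    for t in range(before, len(perf) - after):
        first_input = env[t]                # operating environment at time t
        second_input = perf[t - before:t]   # performance window before t
        output = perf[t + 1:t + 1 + after]  # performance window after t
        samples.append((first_input, second_input, output))
    return samples

perf = [10, 12, 11, 15, 14, 13]
env = ['e0', 'e1', 'e2', 'e3', 'e4', 'e5']
samples = build_training_samples(perf, env, before=2, after=1)
```

Here time points that lack a full window on either side are simply skipped; an implementation could instead pad or truncate the windows.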
5. The method of claim 3, wherein training an initial performance prediction model based on the set of training sample data to obtain the target performance prediction model comprises:
constructing an initial performance prediction model for server performance prediction;
training the initial performance prediction model by using the training sample data set to obtain a trained performance prediction model;
and comparing the trained performance prediction model with the initial performance prediction model, and determining an optimal performance prediction model as a target performance prediction model.
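The train-then-compare step of claim 5 can be sketched as selecting the better of two candidate models. `train_fn` and `eval_fn` below are hypothetical callables standing in for whatever training and evaluation procedures an implementation uses; the patent does not specify them.

```python
def select_target_model(initial_model, train_fn, eval_fn, samples):
    """Train the initial performance prediction model on the training
    sample data set, then compare the trained model with the initial
    one and keep whichever scores a lower evaluation error."""
    trained_model = train_fn(initial_model, samples)
    candidates = [initial_model, trained_model]
    return min(candidates, key=lambda m: eval_fn(m, samples))

# Stand-in demonstration: training halves a fictitious error metric.
initial = {"name": "initial", "error": 5.0}
train_fn = lambda m, s: {"name": "trained", "error": 2.0}
eval_fn = lambda m, s: m["error"]
target = select_target_model(initial, train_fn, eval_fn, samples=[])
```

With a lower-is-better error metric, the comparison reduces to a `min` over the two candidates; other selection criteria would swap the key function.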
6. The method of claim 1, further comprising:

judging whether the predicted performance data meets a preset condition;
and when the predicted performance data meets the preset condition, generating and displaying an early warning notification message, wherein the early warning notification message is used for indicating that the server is abnormal.
7. The method of claim 6, wherein the predicted performance data comprises performance peak data for the server over a second predetermined period of time in the future;
the judging whether the predicted performance data meets the preset condition comprises:
judging whether the performance peak data is larger than a preset peak threshold value;
when the performance peak data is larger than a preset peak threshold value, determining that the predicted performance data meets a preset condition;
and when the performance peak data is less than or equal to a preset peak threshold value, determining that the predicted performance data does not meet a preset condition.
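The threshold check of claims 6 and 7 is a simple comparison of the predicted peak against the preset peak threshold. A minimal sketch; the function name and the warning wording are illustrative assumptions:

```python
def check_predicted_peak(predicted, peak_threshold):
    """Return an early-warning notification message when the predicted
    performance peak exceeds the preset peak threshold, indicating the
    server may be abnormal; otherwise return None."""
    peak = max(predicted)  # performance peak data over the future window
    if peak > peak_threshold:
        return (f"Warning: predicted peak {peak} exceeds threshold "
                f"{peak_threshold}; server may be abnormal")
    return None  # preset condition not met, no early warning

warning = check_predicted_peak([0.6, 0.95, 0.7], peak_threshold=0.9)
no_warning = check_predicted_peak([0.6, 0.7], peak_threshold=0.9)
```

In an implementation the returned message would feed the generate-and-display step of claim 6 rather than being returned to the caller.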
8. A server performance prediction apparatus, comprising:
the operation environment data acquisition module is used for acquiring real-time operation environment data of the server at the current time point;
the historical performance data acquisition module is used for acquiring first historical performance data of the server in a first preset time period before the current time point;
the model input module is used for inputting the first historical performance data and the real-time operation environment data into a pre-trained target performance prediction model;
and the predicted performance data acquisition module is used for acquiring predicted performance data of the server, which is output by the target performance prediction model based on the first historical performance data and the real-time operation environment data, in a second preset time period in the future.
9. An electronic device, comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to implement the server performance prediction method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded and executed by a processor to implement the server performance prediction method according to any one of claims 1 to 7.
11. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the server performance prediction method according to any of claims 1-7.
CN202111532565.0A 2021-12-15 2021-12-15 Server performance prediction method, device, equipment, storage medium and program product Pending CN114201378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111532565.0A CN114201378A (en) 2021-12-15 2021-12-15 Server performance prediction method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111532565.0A CN114201378A (en) 2021-12-15 2021-12-15 Server performance prediction method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN114201378A true CN114201378A (en) 2022-03-18

Family

ID=80653867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111532565.0A Pending CN114201378A (en) 2021-12-15 2021-12-15 Server performance prediction method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114201378A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757450A (en) * 2022-06-13 2022-07-15 四川晨坤电气设备有限公司 Line loss prediction method and device for intelligent power distribution operation and maintenance network
CN114757450B (en) * 2022-06-13 2022-09-02 四川晨坤电气设备有限公司 Line loss prediction method and device for intelligent power distribution operation and maintenance network
CN115981969A (en) * 2023-03-10 2023-04-18 中国信息通信研究院 Monitoring method and device for block chain data platform, electronic equipment and storage medium
CN116701127A (en) * 2023-08-09 2023-09-05 睿至科技集团有限公司 Big data-based application performance monitoring method and platform
CN116701127B (en) * 2023-08-09 2023-12-19 睿至科技集团有限公司 Big data-based application performance monitoring method and platform

Similar Documents

Publication Publication Date Title
CN114201378A (en) Server performance prediction method, device, equipment, storage medium and program product
US11321141B2 (en) Resource management for software containers using container profiles
CN105700948A (en) Method and device for scheduling calculation task in cluster
CN111143039B (en) Scheduling method and device of virtual machine and computer storage medium
CN114416512A (en) Test method, test device, electronic equipment and computer storage medium
CN113672467A (en) Operation and maintenance early warning method and device, electronic equipment and storage medium
CN112162891A (en) Performance test method in server cluster and related equipment
US20140244846A1 (en) Information processing apparatus, resource control method, and program
US20190101911A1 (en) Optimization of virtual sensing in a multi-device environment
CN110347546B (en) Dynamic adjustment method, device, medium and electronic equipment for monitoring task
CN115994029A (en) Container resource scheduling method and device
CN115238837A (en) Data processing method and device, electronic equipment and storage medium
CN115219935A (en) New energy equipment health condition evaluation method, system, device and medium
CN113296951A (en) Resource allocation scheme determination method and equipment
CN111258866A (en) Computer performance prediction method, device, equipment and readable storage medium
CN116450485B (en) Detection method and system for application performance interference
US20230246981A1 (en) Evaluation framework for cloud resource optimization
CN116628231B (en) Task visual release method and system based on big data platform
CN113256044B (en) Policy determination method and device and electronic equipment
CN117076748B (en) Data acquisition method, device, computer equipment and storage medium
US20210357794A1 (en) Determining the best data imputation algorithms
CN115390995A (en) Method, device, equipment and medium for adjusting number of containers
CN113778727A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN118012719A (en) Container running state monitoring method, intelligent computing cloud operating system and computing platform
CN115599539A (en) Engine scheduling method based on task amount prediction and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination