CN111104299A - Server performance prediction method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN111104299A (application number CN201911207134.XA)
- Authority
- CN
- China
- Prior art keywords
- performance
- server
- basic training
- prediction
- lstm model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Quality & Reliability (AREA)
- Algebra (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Computational Mathematics (AREA)
- Computer Hardware Design (AREA)
- Pure & Applied Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The application discloses a server performance prediction method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: monitoring a server to obtain, as a basic training set, the numerical value of a performance monitoring index corresponding to each time point; initializing parameters of an LSTM model and selecting data within a preset time window from the basic training set as basic training data; and inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index. The method uses the server's performance monitoring index data as the basic data for prediction and uses the LSTM model to predict the trend and periodicity of the performance monitoring index over a future period of time, thereby predicting the server's performance and resource requirements for that period with high prediction accuracy and efficiency.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a server performance prediction method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In a cloud data center management platform, the performance indexes of servers need to be continuously monitored so that the physical performance state of each server can be checked in real time, the future performance and utilization rate of the server's physical equipment can be estimated, and a corresponding solution can be prepared in advance.
The monitored performance data is nonlinear and exhibits trend and periodicity, and it can be predicted by statistical methods, machine learning methods, neural network methods, and the like. Statistical methods, however, have lower prediction accuracy than neural networks, while machine learning methods require feature extraction and feature selection, a process with high time and labor costs, and some machine learning algorithms additionally suffer from slow model training.
Therefore, how to improve the accuracy and efficiency of server performance prediction is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a server performance prediction method and apparatus, an electronic device, and a computer-readable storage medium that improve the accuracy and efficiency of server performance prediction.
In order to achieve the above object, the present application provides a server performance prediction method, including:
monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
initializing parameters of an LSTM model, and selecting data in a preset time window from the basic training set as basic training data;
and inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
After the monitoring is performed on the server to obtain the numerical value of the performance monitoring index corresponding to each time point as the basic training set, the method further comprises the following steps:
preprocessing the basic training set;
Correspondingly, the selecting of data within a preset time window from the basic training set as the basic training data includes:
and selecting data in a preset time window from the preprocessed basic training set as basic training data.
Wherein, preprocessing the basic training set comprises:
formatting each of said time points as a UNIX timestamp;
and truncating the numerical value to a preset length.
After the basic training data is input into the LSTM model to obtain the prediction result corresponding to the performance monitoring index, the method further includes:
and displaying the prediction result.
After the basic training data is input into the LSTM model to obtain the prediction result corresponding to the performance monitoring index, the method further includes:
when the numerical value exceeding a preset range exists in the prediction result, sending alarm information; wherein, the alarm information comprises the time point corresponding to the numerical value.
After the basic training data is input into the LSTM model to obtain the prediction result corresponding to the performance monitoring index, the method further includes:
optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model;
and inputting the basic training data into the optimized LSTM model to obtain a prediction result corresponding to the performance monitoring index again.
Wherein, optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model comprises:
and optimizing the parameters of the LSTM model by using a Bayesian algorithm based on the prediction result to obtain an optimized LSTM model.
To achieve the above object, the present application provides a server performance prediction apparatus, including:
the monitoring module is used for monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
the selection module is used for initializing parameters of the LSTM model and selecting data in a preset time window from the basic training set as basic training data;
and the first prediction module is used for inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
To achieve the above object, the present application provides an electronic device including:
a memory for storing a computer program;
a processor for implementing the steps of the server performance prediction method as described above when executing the computer program.
To achieve the above object, the present application provides a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the server performance prediction method as described above.
According to the scheme, the server performance prediction method provided by the application comprises the following steps: monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set; initializing parameters of an LSTM model, and selecting data in a preset time window from the basic training set as basic training data; and inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
According to the server performance prediction method, the server's performance monitoring index data is used as the basic data for prediction, and the LSTM (Long Short-Term Memory) model is used to predict the trend and periodicity of the performance monitoring index over a future period of time, so as to predict the server's performance and resource requirements for that period. The performance state of the server is thus known in advance, server operation faults are prevented, and the operating reliability of cloud data center servers can be improved. Through its gating states the LSTM model forgets unimportant information and retains information that needs to be remembered over long spans, giving it a long-term memory of long-sequence performance data, so it can predict the trend of the performance data monitored on the server as well as periodically fluctuating performance indexes. The gating structure of the LSTM comprises an input gate, an output gate, a forget gate, and a cell state, which allows it to retain important data; it is therefore well suited to performance prediction on long time-series data and at the same time alleviates the gradient explosion and gradient vanishing problems that arise as gradients propagate through layers, giving higher prediction accuracy and efficiency. The application also discloses a server performance prediction apparatus, an electronic device, and a computer-readable storage medium, which can achieve the same technical effects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort. The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting it. In the drawings:
FIG. 1 is a flow diagram illustrating a method for server performance prediction in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of server performance prediction in accordance with an exemplary embodiment;
FIG. 3 is a block diagram illustrating a server performance prediction apparatus in accordance with an exemplary embodiment;
FIG. 4 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a server performance prediction method, which improves the accuracy and efficiency of server performance prediction.
Referring to fig. 1, which shows a flow chart of a server performance prediction method according to an exemplary embodiment, the method includes:
S101: monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
the execution subject of this embodiment is a processor of the monitoring system, and is intended to predict a value of a performance monitoring index in a future period of time based on the monitored performance monitoring index. In this step, the server is monitored for a period of time to obtain a value of a certain performance monitoring index at each time point within a period of time, where the performance monitoring index is not limited, and those skilled in the art can flexibly select the performance monitoring index according to actual conditions.
Preferably, after this step, the method further includes: preprocessing the basic training set. It can be understood that, to improve the prediction efficiency of the subsequent steps, the basic training set is preprocessed: each time point is formatted as a UNIX timestamp, and the numerical value of the performance monitoring index is truncated to a preset length, for example keeping 2 decimal places to discard unnecessary precision. That is, preprocessing the basic training set comprises: formatting each of the time points as a UNIX timestamp; and truncating the numerical value to a preset length.
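As a minimal sketch of this preprocessing, assuming the collected time points are datetime strings (the description does not fix their raw format) and keeping 2 decimal places as in the example above:

```python
# Preprocessing sketch: format time points as UNIX timestamps and truncate values
# to a preset number of decimal places (2, per the example in the text).
from datetime import datetime, timezone

def preprocess(basic_training_set, decimals=2, time_format="%Y-%m-%d %H:%M:%S"):
    """basic_training_set: iterable of (time_point_string, value) pairs (assumed format)."""
    preprocessed = []
    for time_point, value in basic_training_set:
        ts = datetime.strptime(time_point, time_format).replace(tzinfo=timezone.utc).timestamp()
        truncated = int(value * 10**decimals) / 10**decimals  # truncate, not round
        preprocessed.append((ts, truncated))
    return preprocessed
```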
S102: initializing parameters of an LSTM model, and selecting data in a preset time window from the basic training set as basic training data;
In this step, the parameters of the LSTM model, which may include the width and height of the network, etc., are initialized, and the input data of the LSTM model is determined. In this embodiment, the data within a preset time window is selected as the basic training data of the LSTM model; in a specific implementation, all data of the basic training set may be used for prediction, or only a part may be selected, which is not specifically limited in this embodiment.
It can be understood that, if the basic training set is subjected to the preprocessing step, the step is specifically to initialize parameters of the LSTM model, and select data in the preprocessed basic training set within a preset time window as the basic training data.
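As a sketch of how the basic training data could be assembled, the following selects the samples inside a preset time window and shapes them into LSTM input sequences; the window boundaries and window_size are illustrative parameters left open by the description.

```python
# Sketch of selecting data within a preset time window as basic training data and
# shaping it into LSTM input sequences (window boundaries and window_size are
# illustrative; the description leaves them to the implementation).
import numpy as np

def select_window(preprocessed, start_ts, end_ts):
    """Keep the (UNIX timestamp, value) pairs that fall inside the preset time window."""
    return [(ts, v) for ts, v in preprocessed if start_ts <= ts <= end_ts]

def make_sequences(values, window_size=24):
    """Slice a 1-D series into (samples, window_size, 1) inputs and next-step targets."""
    x, y = [], []
    for i in range(len(values) - window_size):
        x.append(values[i:i + window_size])
        y.append(values[i + window_size])
    return np.asarray(x, dtype="float32")[..., np.newaxis], np.asarray(y, dtype="float32")
```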
S103: and inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
In this step, the basic training data is input into the LSTM model, and the prediction result of the performance monitoring index is obtained. The mathematical expression of the LSTM is as follows:

i_t = sigm(W_xi x_t + W_hi h_{t-1})

f_t = sigm(W_xf x_t + W_hf h_{t-1})

o_t = sigm(W_xo x_t + W_ho h_{t-1})

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t-1})

h_t = o_t ⊙ tanh(c_t)

where i_t is the input gate, f_t is the forget gate, o_t is the output gate, c_t is the cell state, h_t is the hidden state, x_t is the input at time t, W_xi, W_xf, W_xo and W_xc are the input weight matrices, W_hi, W_hf, W_ho and W_hc are the hidden-state weight matrices, ⊙ denotes element-wise multiplication, and sigm is the sigmoid activation function, sigm(x) = 1 / (1 + e^(-x)).
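For concreteness, the gate equations above can be transcribed directly; the following NumPy sketch implements one LSTM time step exactly as written (bias terms are omitted, as in the equations), while in practice a framework LSTM implementation would be used.

```python
# Direct NumPy transcription of the gate equations above (bias terms omitted, as
# in the text); in practice a framework LSTM implementation would be used.
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM time step. W is a dict of weight matrices keyed as in the text
    (e.g. W["xi"] corresponds to W_xi)."""
    i_t = sigm(W["xi"] @ x_t + W["hi"] @ h_prev)                          # input gate
    f_t = sigm(W["xf"] @ x_t + W["hf"] @ h_prev)                          # forget gate
    o_t = sigm(W["xo"] @ x_t + W["ho"] @ h_prev)                          # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev)  # cell state
    h_t = o_t * np.tanh(c_t)                                              # hidden state
    return h_t, c_t
```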
It should be noted that, if the accuracy of the prediction result obtained in this step is not high, the parameters of the LSTM model may be optimized based on the prediction result, and the optimized LSTM model may be used to obtain the prediction result again. That is, after this step the method may further include: optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model; and inputting the basic training data into the optimized LSTM model to obtain the prediction result corresponding to the performance monitoring index again. The optimization algorithm is not limited here; for example, a Bayesian algorithm may be adopted, in which case optimizing the parameters of the LSTM model based on the prediction result to obtain the optimized LSTM model comprises: optimizing the parameters of the LSTM model with a Bayesian algorithm based on the prediction result to obtain the optimized LSTM model.
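As one way to realize this optimization, the following sketch uses scikit-optimize's gp_minimize as the Bayesian optimizer (an assumed choice; the text only says "a Bayesian algorithm"). The helper train_and_evaluate is hypothetical and stands for whatever routine trains the LSTM with the given width/height and returns a validation error to minimize.

```python
# Sketch of Bayesian tuning of the LSTM's width and height with scikit-optimize
# (an assumed choice of library; the text only says "a Bayesian algorithm").
from skopt import gp_minimize
from skopt.space import Integer

search_space = [
    Integer(16, 256, name="units"),    # network "width" (units per LSTM layer)
    Integer(1, 4, name="layers"),      # network "height" (number of stacked layers)
]

def objective(params):
    units, layers = params
    # train_and_evaluate is a hypothetical helper: it trains the LSTM with these
    # parameters on the basic training data and returns a validation error.
    return train_and_evaluate(int(units), int(layers))

result = gp_minimize(objective, search_space, n_calls=20, random_state=0)
print("optimized (units, layers):", result.x)
```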
Preferably, the method further comprises the following steps: and displaying the prediction result. In specific implementation, the prediction result is integrated and displayed to a physical infrastructure management platform, so that the future prediction result can be visually seen.
Preferably, the method further comprises the following steps: when the numerical value exceeding a preset range exists in the prediction result, sending alarm information; wherein, the alarm information comprises the time point corresponding to the numerical value.
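A minimal sketch of this check, with an assumed preset range and message format:

```python
# Sketch of the alarm check: report any predicted value outside a preset range
# together with its time point (the range and message format are assumptions).
def check_alarms(predictions, lower=0.0, upper=90.0):
    """predictions: iterable of (unix_timestamp, predicted_value) pairs."""
    alarms = []
    for time_point, value in predictions:
        if not (lower <= value <= upper):
            alarms.append({
                "time_point": time_point,
                "value": value,
                "message": f"predicted value {value} is outside [{lower}, {upper}]",
            })
    return alarms  # in practice forwarded to the alerting component of the monitoring system
```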
According to the server performance prediction method provided by this embodiment of the application, the server's performance monitoring index data is used as the basic data for prediction, and the LSTM model is used to predict the trend and periodicity of the performance monitoring index over a future period of time, so as to predict the server's performance and resource requirements for that period. The performance state of the server is thus known in advance, server operation faults are prevented, and the operating reliability of cloud data center servers can be improved. Through its gating states the LSTM model forgets unimportant information and retains information that needs to be remembered over long spans, giving it a long-term memory of long-sequence performance data, so it can predict the trend of the performance data monitored on the server as well as periodically fluctuating performance indexes. The gating structure of the LSTM comprises an input gate, an output gate, a forget gate, and a cell state, which allows it to retain important data; it is therefore well suited to performance prediction on long time-series data and at the same time alleviates the gradient explosion and gradient vanishing problems that arise as gradients propagate through layers, giving higher prediction accuracy and efficiency.
The embodiment of the application discloses a server performance prediction method, and compared with the previous embodiment, the embodiment further explains and optimizes the technical scheme. Specifically, the method comprises the following steps:
referring to fig. 2, a flow diagram of another method for server performance prediction is shown according to an exemplary embodiment, as shown in fig. 2, including:
S201: monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
S202: initializing parameters of an LSTM model;
S203: formatting each time point in the basic training set into a UNIX timestamp, and truncating the numerical value to a preset length;
S204: selecting data in a preset time window from the preprocessed basic training set as basic training data;
S205: inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index;
S206: optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model;
S207: inputting the basic training data into the optimized LSTM model to obtain a prediction result corresponding to the performance monitoring index again;
S208: displaying the prediction result;
S209: when the numerical value exceeding a preset range exists in the prediction result, sending alarm information; wherein, the alarm information comprises the time point corresponding to the numerical value.
An application embodiment provided by the present application is described below, which may specifically include the following steps:
Step one: select a data set based on time-series performance data (a long time series: 1 month, 3 months, half a year, or one year).
Step two: process the time-series performance data: format the time series into UNIX timestamps, truncate the monitored performance data to a preset length (keeping 2 decimal places) to discard unnecessary precision, and use the formatted server monitoring performance data as the basic training data.
Step three: optimize the network width and height parameters of the LSTM model (using the commonly used Bayesian parameter tuning to select the optimal parameters) to obtain the model initialization parameters.
Step four: train the long short-term memory network model, initializing the time-sequence length, the network width and height, and the batch size (a minimal training sketch is given after step eight).
Step five: the LSTM algorithm predicts and outputs the periodicity and trend of the performance data, and the model's initialization parameters are optimized according to the prediction results on the long-sequence monitoring performance data, so that the prediction accuracy becomes higher.
Step six: process the prediction result data, keeping 3 decimal places (the precision requirement on the prediction result is not high) and setting values smaller than 0 to 0 (server performance data has no negative values, so negative predictions have no reference value); this post-processing also appears in the sketch after step eight.
Step seven: output the results and integrate and display them on the (ISPIM) physical infrastructure management platform, so that the future prediction results can be viewed intuitively.
Step eight: by monitoring key performance indexes, damage to hardware equipment from overheating, overload operation of the server, and stalls and crashes of software service processes that affect normal business can be prevented; performance data for a future period (1 month, 3 months, half a year, one year, three years, etc.) can be evaluated, guaranteeing the normal operation of the cloud data center servers.
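For illustration, the following is a minimal sketch of steps four to six, assuming Keras as the framework (the description names no library); the sequence length, network width (units per layer), height (number of stacked layers), batch size, and prediction horizon are illustrative values, and random placeholder data stands in for the preprocessed monitoring series.

```python
# Minimal sketch of steps four to six, assuming Keras (the text names no library).
# Placeholder random data stands in for the preprocessed monitoring series;
# sequence length, width, height, batch size and horizon are illustrative.
import numpy as np
from tensorflow import keras

SEQ_LEN, UNITS, LAYERS, BATCH = 24, 64, 2, 32   # time-sequence length, width, height, batch size

rng = np.random.default_rng(0)
series = rng.random(1000).astype("float32")      # stand-in for one monitoring index

# Slice the series into input windows and next-step targets (input for step four).
x = np.stack([series[i:i + SEQ_LEN] for i in range(len(series) - SEQ_LEN)])[..., np.newaxis]
y = series[SEQ_LEN:]

# Step four: build and train the stacked LSTM.
model = keras.Sequential()
for i in range(LAYERS):
    model.add(keras.layers.LSTM(UNITS, return_sequences=(i < LAYERS - 1)))
model.add(keras.layers.Dense(1))                 # next value of the monitoring index
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=BATCH, validation_split=0.1, verbose=0)

# Step five: roll the model forward to predict a future period.
window = series[-SEQ_LEN:].tolist()
predictions = []
for _ in range(48):                               # 48 future points, arbitrary horizon
    inp = np.array(window[-SEQ_LEN:], dtype="float32")[np.newaxis, :, np.newaxis]
    nxt = float(model.predict(inp, verbose=0)[0, 0])
    predictions.append(nxt)
    window.append(nxt)

# Step six: keep 3 decimal places and clamp negative values to 0.
cleaned = [round(max(p, 0.0), 3) for p in predictions]
print(cleaned[:5])
```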
In the following, a server performance prediction apparatus provided in an embodiment of the present application is introduced; the server performance prediction apparatus described below and the server performance prediction method described above may be cross-referenced.
Referring to fig. 3, a block diagram of a server performance prediction apparatus according to an exemplary embodiment is shown, as shown in fig. 3, including:
the monitoring module 301 is configured to monitor a server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
a selecting module 302, configured to initialize parameters of the LSTM model, and select data in a preset time window from the basic training set as basic training data;
a first prediction module 303, configured to input the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
The server performance prediction apparatus provided by this embodiment of the application uses the server's performance monitoring index data as the basic data for prediction and uses the LSTM model to predict the trend and periodicity of the performance monitoring index over a future period of time, so as to predict the server's performance and resource requirements for that period, know the performance state of the server in advance, prevent server operation faults, and improve the operating reliability of cloud data center servers. Through its gating states the LSTM model forgets unimportant information and retains information that needs to be remembered over long spans, giving it a long-term memory of long-sequence performance data, so it can predict the trend of the performance data monitored on the server as well as periodically fluctuating performance indexes. The gating structure of the LSTM comprises an input gate, an output gate, a forget gate, and a cell state, which allows it to retain important data; it is therefore well suited to performance prediction on long time-series data and at the same time alleviates the gradient explosion and gradient vanishing problems that arise as gradients propagate through layers, giving higher prediction accuracy and efficiency.
On the basis of the above embodiment, as a preferred implementation, the apparatus further includes:
the preprocessing module is used for preprocessing the basic training set;
Correspondingly, the selecting module 302 is specifically a module for initializing parameters of the LSTM model and selecting data in a preset time window from the preprocessed basic training set as basic training data.
On the basis of the foregoing embodiment, as a preferred implementation manner, the preprocessing module is specifically a module that formats each time point into a UNIX timestamp and truncates the numerical value to a preset length.
On the basis of the above embodiment, as a preferred implementation, the apparatus further includes:
and the display module is used for displaying the prediction result.
On the basis of the above embodiment, as a preferred implementation, the apparatus further includes:
the alarm module is used for sending alarm information when the numerical value exceeding the preset range exists in the prediction result; wherein, the alarm information comprises the time point corresponding to the numerical value.
On the basis of the above embodiment, as a preferred implementation, the apparatus further includes:
the optimization module is used for optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model;
and the second prediction module is used for inputting the basic training data into the optimized LSTM model to obtain a prediction result corresponding to the performance monitoring index again.
On the basis of the above embodiment, as a preferred implementation manner, the optimization module is specifically a module that optimizes the parameters of the LSTM model by using a bayesian algorithm based on the prediction result to obtain an optimized LSTM model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present application further provides an electronic device. Referring to fig. 4, which is a structure diagram of an electronic device 400 provided in an embodiment of the present application, the electronic device 400 may include a processor 11 and a memory 12, and may also include one or more of a multimedia component 13, an input/output (I/O) interface 14, and a communication component 15.
The processor 11 is configured to control the overall operation of the electronic device 400 so as to complete all or part of the steps in the server performance prediction method. The memory 12 is used to store various types of data to support operation of the electronic device 400; such data may include, for example, instructions for any application or method operating on the electronic device 400, as well as application-related data such as contact data, messages, pictures, audio, video, and so forth. The memory 12 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 13 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may further be stored in the memory 12 or transmitted via the communication component 15. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 14 provides an interface between the processor 11 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 15 is used for wired or wireless communication between the electronic device 400 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G or 4G, or a combination of one or more of them, and the corresponding communication component 15 may accordingly include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-mentioned server performance prediction method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described server performance prediction method is also provided. For example, the computer readable storage medium may be the memory 12 described above including program instructions executable by the processor 11 of the electronic device 400 to perform the server performance prediction method described above.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Claims (10)
1. A method for server performance prediction, comprising:
monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
initializing parameters of an LSTM model, and selecting data in a preset time window from the basic training set as basic training data;
and inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
2. The method for predicting the performance of the server according to claim 1, wherein after monitoring the server to obtain the numerical value of the performance monitoring index corresponding to each time point as the basic training set, the method further comprises:
preprocessing the basic training set;
Correspondingly, the selecting of data within a preset time window from the basic training set as the basic training data includes:
and selecting data in a preset time window from the preprocessed basic training set as basic training data.
3. The method of server performance prediction according to claim 2, wherein preprocessing the base training set comprises:
formatting each of said time points as a UNIX timestamp;
and truncating the numerical value to a preset length.
4. The method for predicting performance of a server according to claim 1, wherein after inputting the basic training data into the LSTM model to obtain the prediction result corresponding to the performance monitoring indicator, the method further comprises:
and displaying the prediction result.
5. The method for predicting performance of a server according to claim 1, wherein after inputting the basic training data into the LSTM model to obtain the prediction result corresponding to the performance monitoring indicator, the method further comprises:
when the numerical value exceeding a preset range exists in the prediction result, sending alarm information; wherein, the alarm information comprises the time point corresponding to the numerical value.
6. The server performance prediction method according to any one of claims 1 to 5, wherein after the basic training data is input into the LSTM model to obtain the prediction result corresponding to the performance monitoring index, the method further includes:
optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model;
and inputting the basic training data into the optimized LSTM model to obtain a prediction result corresponding to the performance monitoring index again.
7. The server performance prediction method of claim 6, wherein the optimizing the parameters of the LSTM model based on the prediction result to obtain an optimized LSTM model comprises:
and optimizing the parameters of the LSTM model by using a Bayesian algorithm based on the prediction result to obtain an optimized LSTM model.
8. A server performance prediction apparatus, comprising:
the monitoring module is used for monitoring the server to obtain a numerical value of the performance monitoring index corresponding to each time point as a basic training set;
the selection module is used for initializing parameters of the LSTM model and selecting data in a preset time window from the basic training set as basic training data;
and the first prediction module is used for inputting the basic training data into the LSTM model to obtain a prediction result corresponding to the performance monitoring index.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the server performance prediction method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the server performance prediction method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911207134.XA CN111104299A (en) | 2019-11-29 | 2019-11-29 | Server performance prediction method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911207134.XA CN111104299A (en) | 2019-11-29 | 2019-11-29 | Server performance prediction method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111104299A true CN111104299A (en) | 2020-05-05 |
Family
ID=70421152
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911207134.XA Withdrawn CN111104299A (en) | 2019-11-29 | 2019-11-29 | Server performance prediction method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111104299A (en) |
- 2019-11-29 CN CN201911207134.XA patent/CN111104299A/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109684310A (en) * | 2018-11-22 | 2019-04-26 | 安徽继远软件有限公司 | A kind of information system performance Situation Awareness method based on big data analysis |
CN110008079A (en) * | 2018-12-25 | 2019-07-12 | 阿里巴巴集团控股有限公司 | Monitor control index method for detecting abnormality, model training method, device and equipment |
CN110275814A (en) * | 2019-06-28 | 2019-09-24 | 深圳前海微众银行股份有限公司 | A kind of monitoring method and device of operation system |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598050A (en) * | 2020-12-18 | 2021-04-02 | 四川省成都生态环境监测中心站 | Ecological environment data quality control method |
CN113518000A (en) * | 2021-05-12 | 2021-10-19 | 北京奇艺世纪科技有限公司 | Method and device for adjusting number of instances of online service and electronic equipment |
CN113746668A (en) * | 2021-08-09 | 2021-12-03 | 中铁信弘远(北京)软件科技有限责任公司 | Application process fault prediction method, device, equipment and readable storage medium |
CN113746668B (en) * | 2021-08-09 | 2024-04-02 | 中铁信弘远(北京)软件科技有限责任公司 | Application process fault prediction method, device, equipment and readable storage medium |
CN114138601A (en) * | 2021-11-26 | 2022-03-04 | 北京金山云网络技术有限公司 | Service alarm method, device, equipment and storage medium |
CN115185805A (en) * | 2022-09-13 | 2022-10-14 | 浪潮电子信息产业股份有限公司 | Performance prediction method, system, equipment and storage medium of storage system |
CN115185805B (en) * | 2022-09-13 | 2023-01-24 | 浪潮电子信息产业股份有限公司 | Performance prediction method, system, equipment and storage medium of storage system |
CN117807411A (en) * | 2024-02-29 | 2024-04-02 | 济南浪潮数据技术有限公司 | Server performance index prediction method and device and electronic equipment |
CN117807411B (en) * | 2024-02-29 | 2024-06-07 | 济南浪潮数据技术有限公司 | Server performance index prediction method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111104299A (en) | Server performance prediction method and device, electronic equipment and storage medium | |
CN114285728B (en) | Predictive model training method, traffic prediction device and storage medium | |
US10832150B2 (en) | Optimized re-training for analytic models | |
CN112256886B (en) | Probability calculation method and device in atlas, computer equipment and storage medium | |
CN111931345B (en) | Monitoring data prediction method, device, equipment and readable storage medium | |
CN111311014B (en) | Service data processing method, device, computer equipment and storage medium | |
CN111967917B (en) | Method and device for predicting user loss | |
CN112182118B (en) | Target object prediction method based on multiple data sources and related equipment thereof | |
US20220108147A1 (en) | Predictive microservices activation using machine learning | |
WO2023154538A1 (en) | System and method for reducing system performance degradation due to excess traffic | |
CN112489236A (en) | Attendance data processing method and device, server and storage medium | |
CN115526224A (en) | Stochastic event classification for artificial intelligence of information technology operations | |
CN112989203B (en) | Material throwing method, device, equipment and medium | |
US20220366230A1 (en) | Markov processes using analog crossbar arrays | |
US11409588B1 (en) | Predicting hardware failures | |
CN117472511A (en) | Container resource monitoring method, device, computer equipment and storage medium | |
US20210286699A1 (en) | Automated selection of performance monitors | |
CN107832578A (en) | Data processing method and device based on situation variation model | |
US11221938B2 (en) | Real-time collaboration dynamic logging level control | |
CN114501518B (en) | Flow prediction method, flow prediction device, flow prediction apparatus, flow prediction medium, and program product | |
CN113610174B (en) | Phik feature selection-based power grid host load prediction method, phik feature selection-based power grid host load prediction equipment and medium | |
CN113329128B (en) | Traffic data prediction method and device, electronic equipment and storage medium | |
CN112990826B (en) | Short-time logistics demand prediction method, device, equipment and readable storage medium | |
US20220013239A1 (en) | Time-window based attention long short-term memory network of deep learning | |
CN117130873B (en) | Task monitoring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200505 |