CN111475393A - Service performance prediction method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN111475393A
CN111475393A (application CN202010271099.4A)
Authority
CN
China
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202010271099.4A
Other languages
Chinese (zh)
Inventor
王晨
李文浩
彭宣榕
江婷
吴骏龙
严佳奇
廖玲
Current Assignee
Lazas Network Technology Shanghai Co Ltd
Original Assignee
Lazas Network Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Lazas Network Technology Shanghai Co Ltd filed Critical Lazas Network Technology Shanghai Co Ltd
Priority to CN202010271099.4A
Publication of CN111475393A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466: Performance evaluation by tracing or monitoring
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology


Abstract

Embodiments of the present disclosure provide a service performance prediction method and apparatus, an electronic device, and a readable storage medium. The service performance prediction method comprises: training a service performance model using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model; acquiring a mapping relationship of the call data between services at a first time point; generating a call volume of the service for a first input traffic according to the mapping relationship; and calculating second performance data of the service for the first input traffic using the generated call volume according to the trained service performance model. Service performance can thus be accurately predicted from historical data without consuming real online resources, improving service reliability.

Description

Service performance prediction method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of Internet technology, and in particular to a service performance prediction method and apparatus, an electronic device, and a readable storage medium.
Background
Currently, when a system platform such as an e-commerce platform faces a traffic peak, for example during a large promotion, the peak load on the services of a data center can be dozens of times higher than normal. Performance bottlenecks may then occur in some services of the system, rendering the entire service unavailable and disrupting business processes. Platforms usually expose such performance problems in advance by means of stress testing. In a conventional processor-monitoring approach, a uniform threshold is set for all nodes in a distributed system, and the system raises an alarm when processor consumption exceeds the threshold. However, stress-test thresholds estimated from experience are inaccurate and cannot reflect the dependency relationships between services; a real full-link stress test affects normal online services; and distributed processor monitoring cannot cover services that are mainly I/O-bound.
Disclosure of Invention
To solve the problems in the related art, embodiments of the present disclosure provide a service performance prediction method and apparatus, an electronic device, and a readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a service performance prediction method, including:
training a service performance model using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model;
acquiring a mapping relationship of the call data between services at a first time point;
generating a call volume of the service for a first input traffic according to the mapping relationship;
and calculating second performance data of the service for the first input traffic using the generated call volume according to the trained service performance model.
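The four steps above can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the patented implementation: a per-service linear fit stands in for the service performance model (the disclosure itself uses a neural network model), and all service names, mapping ratios, and numbers are hypothetical.

```python
# Minimal sketch of the four claimed steps; the linear model is a
# stand-in for the service performance model, and all names/values
# are hypothetical.

def train_model(history):
    """Fit performance = f(call volume) by simple least squares."""
    xs = [h[0] for h in history]  # call volumes (input feature)
    ys = [h[1] for h in history]  # first performance data (output target)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return lambda v: my + slope * (v - mx)

# Step 1: train one model per service from historical (call volume, perf) pairs.
history = {
    "order":   [(100, 10.0), (200, 20.0), (300, 30.0)],
    "payment": [(50, 5.0), (100, 15.0), (150, 25.0)],
}
models = {svc: train_model(h) for svc, h in history.items()}

# Step 2: mapping relationship of call data at the first time point,
# here expressed as each service's call volume per unit of input traffic.
mapping = {"order": 1.0, "payment": 0.5}

# Step 3: generate each service's call volume for the first input traffic.
first_input_traffic = 400
call_volumes = {svc: r * first_input_traffic for svc, r in mapping.items()}

# Step 4: compute second performance data with the trained models.
predicted = {svc: models[svc](call_volumes[svc]) for svc in models}
```

A real implementation would replace `train_model` with a per-service neural network, as the detailed embodiments describe.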
With reference to the first aspect, in a first implementation manner of the first aspect, the first time point is the time point in the historical data at which an input traffic peak most recently occurred in the system providing the service.
With reference to the first aspect or the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the call data of the service includes call data of the service itself and call data of a dependent service of the service.
With reference to the first aspect or the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the mapping relationship of the call data between the services comprises the proportional relationships among the call volumes of the services and the values of the call volumes.
With reference to the first aspect or the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the history data of the services includes a call volume between the services at the first time point, and the first input traffic is a preset value.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception rate of the system providing the service, and the first input traffic refers to a preset number of accesses to the system providing the service.
With reference to the first aspect or the first implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the service performance model is a neural network model.
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the present disclosure employs different neural network models for different services.
With reference to the first aspect or the first implementation manner of the first aspect, in an eighth implementation manner of the first aspect, the method further includes:
determining whether the second performance data of the service exceeds a first threshold;
and identifying the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold.
With reference to the eighth implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the identifying the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold includes:
adding the specific service to a specific service list.
With reference to the eighth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, the disclosure employs different first thresholds for different services.
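The threshold check of the eighth to tenth implementation manners can be sketched as follows; the per-service thresholds, metric values, and function name are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: each service has its own first threshold; services
# whose predicted (second) performance data exceeds their threshold are
# identified as specific services and collected into a specific service list.

def identify_specific_services(second_perf, thresholds):
    specific = []
    for svc, value in second_perf.items():
        if value > thresholds[svc]:  # different first threshold per service
            specific.append(svc)     # add to the specific service list
    return specific

second_perf = {"order": 92.0, "payment": 45.0, "search": 80.0}  # e.g. CPU %
thresholds = {"order": 85.0, "payment": 70.0, "search": 80.0}
specific_services = identify_specific_services(second_perf, thresholds)
```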
In a second aspect, an embodiment of the present disclosure provides a service performance prediction apparatus, including:
a training module configured to train a service performance model using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model;
an obtaining module configured to obtain a mapping relationship of the call data between services at a first time point;
a generating module configured to generate a call volume of the service for a first input traffic according to the mapping relationship;
and a calculation module configured to calculate second performance data of the service for the first input traffic using the generated call volume according to the trained service performance model.
With reference to the second aspect, in a first implementation manner of the second aspect, the first time point is the time point in the historical data at which an input traffic peak most recently occurred in the system providing the service.
With reference to the second aspect or the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the call data of the service includes call data of the service itself and call data of a dependent service of the service.
With reference to the second aspect or the first implementation manner of the second aspect, in a third implementation manner of the second aspect, the mapping relationship of the call data between the services comprises the proportional relationships among the call volumes of the services and the values of the call volumes.
With reference to the second aspect or the first implementation manner of the second aspect, in a fourth implementation manner of the second aspect, the history data of the services includes a call volume between the services at the first time point, and the first input traffic is a preset value.
With reference to the fourth implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception rate of the system providing the service, and the first input traffic refers to a preset number of accesses to the system providing the service.
With reference to the second aspect or the first implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the service performance model is a neural network model.
With reference to the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect, the present disclosure employs different neural network models for different services.
With reference to the second aspect or the first implementation manner of the second aspect, in an eighth implementation manner of the second aspect, the apparatus further includes:
a determination module configured to determine whether the second performance data of the service exceeds a first threshold;
and an identification module configured to identify the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold.
With reference to the eighth implementation manner of the second aspect, in a ninth implementation manner of the second aspect, the identifying module is further configured to:
adding the specific service to a specific service list.
With reference to the eighth implementation manner of the second aspect, in a tenth implementation manner of the second aspect, the disclosure employs different first thresholds for different services.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to the first aspect or any one of the first to tenth implementation manners of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a readable storage medium, on which computer instructions are stored, and the computer instructions, when executed by a processor, implement the method according to the first aspect, or any one of the first to tenth implementations of the first aspect.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
According to the technical solution provided by the embodiments of the present disclosure, a service performance model is trained using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model; a mapping relationship of the call data between services at a first time point is acquired; a call volume of the service is generated for a first input traffic according to the mapping relationship; and second performance data of the service for the first input traffic is calculated using the generated call volume according to the trained service performance model. Service performance can thus be accurately predicted from historical data without consuming real online resources, improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, the first time point is the time point in the historical data at which an input traffic peak most recently occurred in the system providing the service, which ensures the representativeness of the historical data at the first time point and thus the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, the call data of the service includes call data of the service itself and call data of the dependent services of the service, so that service invocation is fully accounted for and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the mapping relationship of the call data between the services comprises the proportional relationships among the call volumes of the services and the values of the call volumes, so that the call volumes of multiple different services can be accurately evaluated and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the historical data of the services includes the call volumes between the services at the first time point, and the first input traffic is a preset value, so that service invocation is fully accounted for and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception rate of the system providing the service, and the first input traffic refers to a preset number of accesses to the system providing the service, so that service performance is comprehensively evaluated and prediction accuracy is improved.
According to the technical solution provided by the embodiments of the present disclosure, the service performance model is a neural network model, which improves the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, different neural network models are employed for different services, which improves adaptability to different services and the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, the method further includes: determining whether the second performance data of the service exceeds a first threshold; and identifying the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold, thereby giving an early warning for services that exceed the performance threshold and improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, the identifying the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold includes adding the specific service to a specific service list, thereby giving an early warning for services that exceed the performance threshold and improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, different first thresholds are employed for different services, which adapts to the requirements of different services and improves service reliability.
According to the technical solution provided by the embodiments of the present disclosure, the training module is configured to train a service performance model using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model; the obtaining module is configured to obtain a mapping relationship of the call data between services at a first time point; the generating module is configured to generate a call volume of the service for a first input traffic according to the mapping relationship; and the calculation module is configured to calculate second performance data of the service for the first input traffic using the generated call volume according to the trained service performance model. Service performance can thus be accurately predicted from historical data without consuming real online resources, improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, the first time point is the time point in the historical data at which an input traffic peak most recently occurred in the system providing the service, which ensures the representativeness of the historical data at the first time point and thus the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, the call data of the service includes call data of the service itself and call data of the dependent services of the service, so that service invocation is fully accounted for and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the mapping relationship of the call data between the services comprises the proportional relationships among the call volumes of the services and the values of the call volumes, so that the call volumes of multiple different services can be accurately evaluated and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the historical data of the services includes the call volumes between the services at the first time point, and the first input traffic is a preset value, so that service invocation is fully accounted for and the accuracy of service performance prediction is improved.
According to the technical solution provided by the embodiments of the present disclosure, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception rate of the system providing the service, and the first input traffic refers to a preset number of accesses to the system providing the service, so that service performance is comprehensively evaluated and prediction accuracy is improved.
According to the technical solution provided by the embodiments of the present disclosure, the service performance model is a neural network model, which improves the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, different neural network models are employed for different services, which improves adaptability to different services and the accuracy of service performance prediction.
According to the technical solution provided by the embodiments of the present disclosure, the determination module is configured to determine whether the second performance data of the service exceeds a first threshold, and the identification module is configured to identify the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold, thereby giving an early warning for services that exceed the performance threshold and improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, the identification module is further configured to add the specific service to a specific service list, thereby giving an early warning for services that exceed the performance threshold and improving service reliability.
According to the technical solution provided by the embodiments of the present disclosure, different first thresholds are employed for different services, which adapts to the requirements of different services and improves service reliability.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
Fig. 1 shows a flow chart of a service performance prediction method according to an embodiment of the present disclosure;
Fig. 2 shows a flow chart of a service performance prediction method according to another embodiment of the present disclosure;
Fig. 3 shows a flow chart of an exemplary application scenario of a service performance prediction method according to an embodiment of the present disclosure;
Fig. 4 shows a schematic structural diagram of a service performance prediction apparatus according to an embodiment of the present disclosure;
Fig. 5 shows a schematic structural diagram of a service performance prediction apparatus according to another embodiment of the present disclosure;
Fig. 6 shows a structural block diagram of an electronic device according to an embodiment of the present disclosure;
Fig. 7 shows a schematic structural block diagram of a computer system suitable for implementing a service performance prediction method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having" are intended to indicate the presence of the features, numbers, steps, actions, components, parts, or combinations thereof disclosed in this specification, and are not intended to preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof are present or added.
It should be further noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
In the course of proposing the technical solutions provided according to the embodiments of the present disclosure, the inventors considered the following several solutions:
1. Empirical estimation. In this scheme, services that may hit capacity bottlenecks are predicted from the system platform's past traffic peak scenarios, such as large promotions.
The drawback of this scheme is that the business and the dependency relationships between services change constantly, so estimates based on historical experience are clearly not accurate enough.
2. Full-link or single-service stress testing. In this scheme, a full-link stress test is performed against the online real traffic of the system platform to evaluate in advance which services will hit capacity bottlenecks when a traffic peak scenario, such as a large promotion, arrives.
The drawbacks of this scheme are that the constructed stress-test data cannot accurately reflect the real online situation of the system platform, and that the stress test must be run on real online machine resources, affecting the normal operation of online services. Moreover, the performance of infrastructure such as middleware under a stress test differs from its performance in real scenarios, leading to large errors in the evaluation results.
3. Predicting CPU anomalies of a distributed system with machine learning. In this scheme, multiple TPS (accesses per second) values of each node in the distributed system of the system platform are acquired over a first time period. Predicted CPU utilization values of each node over a second time period are then computed from these TPS values using a preset CPU utilization prediction model, and nodes with abnormal CPU utilization are determined by comparing the predicted CPU utilization values with the actual CPU utilization values over the second time period.
The drawbacks of this scheme are that it can only acquire CPU data of individual nodes of a distributed service and cannot report the performance of all services across an entire data center; it can only detect CPU anomalies and provides no further performance data for reference; and it cannot accurately detect performance bottlenecks for services that are mainly I/O (input/output) bound.
In order to solve the problems in the related art, the inventors propose a technical solution provided according to an embodiment of the present disclosure.
According to the technical solution provided by the embodiments of the present disclosure, a service performance model is trained using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model; a mapping relationship of the call data between services at a first time point is acquired; a call volume of the service is generated for a first input traffic according to the mapping relationship; and second performance data of the service for the first input traffic is calculated using the generated call volume according to the trained service performance model. Service performance can thus be accurately predicted from historical data without consuming real online resources, improving service reliability.
Fig. 1 shows a flow chart of a service performance prediction method according to an embodiment of the present disclosure. As shown in Fig. 1, the service performance prediction method includes steps S101 to S104.
In step S101, a service performance model is trained using historical data of a service, wherein the historical data comprises call data of the service as an input feature of the service performance model and first performance data of the service as an output target of the service performance model.
In step S102, a mapping relationship of the call data between services at a first time point is acquired.
In step S103, a call volume of the service is generated for a first input traffic according to the mapping relationship.
In step S104, second performance data of the service for the first input traffic is calculated using the generated call volume according to the trained service performance model.
In one embodiment of the present disclosure, among a plurality of deployed services, a service performance model is trained using historical data of the services and is used to predict service performance during traffic peak periods, such as large promotional campaigns. When the service performance model is trained, the call data of the service, i.e., the number of times the service is called per unit time, is used as the input feature of the service performance model, and the first performance data of the service is used as the output target of the service performance model. The first performance data may include at least one of processor utilization, interface response time, request success rate, and exception rate, so as to meet the evaluation requirements of different kinds of services, such as data-processing services and I/O-interface services. Other data may also be used as inputs to the service performance model, and other performance metrics may be used as its output targets. In one embodiment of the present disclosure, the input features and output targets used to train the service performance model may be collected in real time while the service is running, to improve the training accuracy of the model. The call data of different services may have a mapping relationship that measures the ratios and absolute values of the call volumes of the different services. By acquiring the mapping relationship of the call data between the services at the first time point (a specific past time point), the call data between different services at that time point can be described more accurately. During traffic peak periods, the number of user accesses to the system providing the services increases dramatically. The call volume of each service can therefore be calculated from a preset number of accesses to the system during the traffic peak period.
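The assembly of training samples described above, with the call data of the service and of its dependent services as input features and the observed first performance data as the output target, might look like the following sketch; the dependency map, the metric tuple, and all names are hypothetical.

```python
# Illustrative sketch of building one training sample per time window.
# DEPENDENCIES and all service names/values are hypothetical, not from
# the patent.

DEPENDENCIES = {"order": ["payment", "inventory"]}

def build_sample(service, window):
    """window maps 'calls' (service -> calls per unit time) and
    'metrics' (service -> observed first performance data)."""
    # Input features: call count of the service itself plus those of
    # its dependent services.
    features = [window["calls"][service]]
    features += [window["calls"][dep] for dep in DEPENDENCIES.get(service, [])]
    # Output target, e.g. (cpu %, response ms, success rate, exception rate).
    target = window["metrics"][service]
    return features, target

window = {
    "calls": {"order": 120, "payment": 90, "inventory": 60},
    "metrics": {"order": (55.0, 80.0, 0.999, 0.001)},
}
sample = build_sample("order", window)
```

Samples like these, collected over many windows, would form the training set for a per-service neural network model.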
After the training of the service performance model is completed, second performance data of different services can be calculated according to the calculated calling amount of the different services in the service peak period, namely at least one of the utilization rate of a processor, the response time of an interface, the success rate of requests and the abnormal proportion of the service peak period, so that the service performance in the service peak period can be accurately predicted through historical data under the condition of not consuming real online resources, and the service reliability is improved.
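As a minimal illustration of this train-then-predict flow, the sketch below fits a per-service model on historical (call volume, processor utilization) pairs and evaluates it at a peak call volume. A linear least-squares fit merely stands in for the neural network model of the disclosure; all service data and numbers are invented.

```python
# Hypothetical sketch of steps S101-S104: train a per-service performance
# model on historical (call volume -> performance) pairs, then predict
# performance for the call volume expected at a traffic peak.

def train_performance_model(call_volumes, cpu_utilizations):
    """Fit performance = a * call_volume + b by least squares."""
    n = len(call_volumes)
    mean_x = sum(call_volumes) / n
    mean_y = sum(cpu_utilizations) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(call_volumes, cpu_utilizations))
    var = sum((x - mean_x) ** 2 for x in call_volumes)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_performance(model, peak_call_volume):
    a, b = model
    return a * peak_call_volume + b

# Historical samples: calls per second vs. measured CPU utilization (%).
history_calls = [100, 200, 300, 400]
history_cpu = [10.0, 20.0, 30.0, 40.0]
model = train_performance_model(history_calls, history_cpu)
peak_cpu = predict_performance(model, 800)  # call volume at the peak
print(round(peak_cpu, 6))  # 80.0 for this perfectly linear toy data
```

In the described scheme a trained neural network would replace the linear fit, but the input feature (call volume) and output criterion (first performance data) play the same roles.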
In one embodiment of the present disclosure, the first time point is the most recent time point in the historical data at which an input traffic peak occurred in the system providing the service, so that the calling data of the service is acquired under a high system load. The calling data acquired at the first time point can then be used as historical data to train the service performance model. The training data is thereby as close as possible to data from a traffic peak period, yielding a better training effect and ensuring the accuracy of service performance prediction for traffic peaks. The first time point may also be another time point, such as the time point with the largest peak among the last three input traffic peaks. In one embodiment of the present disclosure, the criterion for identifying a traffic peak may be any criterion known in the related art. In one embodiment of the present disclosure, the quantitative relationship between an input traffic value and the average input traffic over a specific period may be used as the criterion. For example, the maximum input traffic within a certain period may be taken as the traffic peak. As another example, an input traffic value that reaches or exceeds 3 times the average input traffic over a certain period may be regarded as a traffic peak.
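The 3-times-average criterion mentioned above can be sketched as follows; the window contents, the factor, and the traffic samples are illustrative assumptions, not values from the disclosure.

```python
# Illustrative peak-selection rule: treat a sample as a traffic peak when it
# reaches or exceeds 3x the average input traffic over the window, and take
# the most recent such point as the "first time point".

def find_latest_peak_index(traffic_samples, factor=3.0):
    """Return the index of the most recent sample >= factor * average, or None."""
    avg = sum(traffic_samples) / len(traffic_samples)
    for i in range(len(traffic_samples) - 1, -1, -1):
        if traffic_samples[i] >= factor * avg:
            return i
    return None

samples = [100, 120, 90, 900, 110, 1200, 100]
# average ~= 374.3, so the threshold is ~= 1122.9; only index 5 (1200) qualifies
print(find_latest_peak_index(samples))  # 5
```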
According to the technical scheme provided by the embodiment of the disclosure, the first time point is the most recent time point in the historical data at which an input traffic peak occurred in the system providing the service, so that the accuracy of the historical data at the first time point is ensured, and the accuracy of service performance prediction is ensured.
In one embodiment of the present disclosure, there may be dependency relationships among a plurality of different services. The call volume of a service may then include both the calls made to the service by users and the calls made to the services on which it depends. Taking these dependency relationships into account makes the calculation of service call volume more accurate and thereby improves the accuracy of service performance prediction. The dependency relationships between services may span one layer or multiple layers; calculating the call volumes across multiple layers of dependent services likewise makes the calculation more accurate and improves the accuracy of service performance prediction.
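The dependency-aware call volume calculation can be sketched as a propagation over an (assumed acyclic) dependency graph; the service names and call ratios below are invented for illustration.

```python
# Hedged sketch: the total call volume of a service is its direct user calls
# plus the calls induced by each service that depends on it, propagated
# through a possibly multi-layer dependency graph.

def total_call_volume(service, direct_calls, callers, memo=None):
    """
    direct_calls: {service: calls made by users}
    callers: {callee: [(caller, calls_to_callee_per_caller_call), ...]}
    total(s) = direct(s) + sum(total(caller) * ratio); graph assumed acyclic.
    """
    if memo is None:
        memo = {}
    if service in memo:
        return memo[service]
    total = direct_calls.get(service, 0.0)
    for caller, ratio in callers.get(service, []):
        total += total_call_volume(caller, direct_calls, callers, memo) * ratio
    memo[service] = total
    return total

direct = {"order": 1000.0, "inventory": 0.0}
# each call to "order" triggers 2 calls to "inventory"
callers = {"inventory": [("order", 2.0)]}
print(total_call_volume("inventory", direct, callers))  # 2000.0
```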
According to the technical scheme provided by the embodiment of the disclosure, the calling data of the service comprises the calling data of the service itself and the calling data of the dependent services of the service, so that the situation of service calling is fully considered, and the accuracy of service performance prediction is improved.
In one embodiment of the present disclosure, the mapping relationship of the call data between services includes the proportion of the call volume of each service and the specific value of the call volume of each service. These proportions and values may differ at different times. For example, on the system platform, the call volume proportion and call volume value of sales-related services are high at meal times, while during a promotional campaign of a communication carrier, the call volume proportion and call volume value of services related to virtual recharge cards are high. Through the proportion of the call volume of each service and the specific value of the call volume of each service, the mapping relationship of call data between services can be described more accurately, the call volumes of a plurality of different services can be evaluated more accurately, and the accuracy of service performance prediction is improved.
According to the technical scheme provided by the embodiment of the disclosure, the mapping relation of the calling data among the services is the proportional relation of the calling quantity of the services and the value of the calling quantity, so that the calling quantities of a plurality of different services are accurately evaluated, and the accuracy of service performance prediction is improved.
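A minimal sketch of generating per-service call volumes from a total input traffic via such a proportional mapping, assuming the proportions and baseline values were measured at the first time point; all service names and numbers are illustrative.

```python
# Sketch of step S103: the mapping relation is modeled here as a fixed
# proportion per service plus an optional fixed baseline volume.

def generate_call_volumes(total_input_traffic, proportions, baselines=None):
    """call_volume[s] = proportion[s] * total_input_traffic + baseline[s]"""
    baselines = baselines or {}
    return {s: p * total_input_traffic + baselines.get(s, 0.0)
            for s, p in proportions.items()}

peak_traffic = 1_000_000  # preset access count for the traffic peak period
proportions = {"search": 0.5, "order": 0.25, "payment": 0.25}
volumes = generate_call_volumes(peak_traffic, proportions)
print(volumes["order"])  # 250000.0
```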
In one embodiment of the present disclosure, the historical data of the services may be the call volumes between services at the first time point, and the first input traffic may be a value preset according to the number of accesses to the system providing the services during a traffic peak period. The historical data may also be other historical data, and the first input traffic may be another value. In this way, the situation of service calling is fully considered, and the accuracy of service performance prediction is improved.
According to the technical scheme provided by the embodiment of the disclosure, the historical data of the service comprises the call quantity between services at the first time point, and the first input flow is a preset value, so that the condition of service call is fully considered, and the accuracy of service performance prediction is improved.
In one embodiment of the disclosure, the measured first performance data of the service may be at least one of the processor utilization, interface response time, request success rate, and exception proportion of the system providing the service. The second performance data predicted by the trained service performance model may likewise be at least one of the processor utilization, interface response time, request success rate, and exception proportion of the system providing the service. The first input traffic may be a preset number of accesses to the system providing the service. In this way, the performance of various services, such as computation-intensive services and I/O interface services, can be evaluated comprehensively, and the accuracy of service performance prediction is improved.
According to the technical scheme provided by the embodiment of the disclosure, the first performance data and the second performance data refer to at least one of the processor utilization rate, the interface response time, the request success rate and the abnormal proportion of the system providing the service, and the first input flow refers to the preset access times aiming at the system providing the service, so that the service performance is comprehensively evaluated, and the service performance prediction accuracy is improved.
In one embodiment of the present disclosure, the service performance model may be a neural network model, for example a deep neural network with a plurality of hidden layers. The neural network model may be trained offline, which reduces the consumption of system resources, and stored in a database for subsequent use. The neural network is trained with the calling data of the service as input and the first performance data of the service as output, so that the performance of the service can be predicted more accurately.
According to the technical scheme provided by the embodiment of the disclosure, the service performance model is the neural network model, so that the accuracy of service performance prediction is improved.
In one embodiment of the present disclosure, the neural network model refers to an algorithmic mathematical model that simulates the behavioral characteristics of biological neural networks and performs distributed parallel information processing. Such a model processes information by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system. The specific type of neural network model may be selected from the related art and is not described in detail here.
In one embodiment of the present disclosure, different services may exhibit different processor utilization, interface response time, request success rate, exception proportion, and other performance under the same calling data. Different neural network models can therefore be adopted for different services, improving the adaptability to different services and the accuracy of service performance prediction.
According to the technical scheme provided by the embodiment of the disclosure, different neural network models are adopted for different services, so that the adaptability to different services is improved, and the accuracy of service performance prediction is improved.
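A toy forward pass of such a per-service network is sketched below; the single hidden layer, ReLU activation, and all weights are illustrative assumptions rather than the disclosure's actual architecture, and in the described scheme each service's network would be trained offline on that service's own historical data.

```python
# Minimal sketch of a per-service neural model: one hidden layer with ReLU,
# mapping a normalized call volume to a predicted CPU utilization (%).

def relu(x):
    return max(0.0, x)

def forward(call_volume_normalized, w1, b1, w2, b2):
    hidden = [relu(w * call_volume_normalized + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Separate (hypothetical) weight sets per service, one model per service:
models = {
    "search": ([1.0, 0.5], [0.0, 0.0], [40.0, 20.0], 5.0),
    "order":  ([2.0, 1.0], [0.0, 0.0], [30.0, 10.0], 2.0),
}
x = 0.8  # call volume normalized to [0, 1]
print(round(forward(x, *models["search"]), 2))  # 45.0
```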
A service performance prediction method according to another embodiment of the present disclosure is described below with reference to fig. 2. Fig. 2 shows a flow chart of a service performance prediction method according to another embodiment of the present disclosure. The service performance prediction method shown in fig. 2 includes steps S201 and S202, in addition to steps S101, S102, S103 and S104 which are the same as those in fig. 1.
In step S201, it is determined whether the second performance data of the service exceeds a first threshold.
In step S202, the service is identified as the specific service in response to determining that the second performance data of the service exceeds the first threshold.
In one embodiment of the present disclosure, a first threshold may be set. When it is determined that second performance data of the service, such as the processor utilization, interface response time, request success rate, or exception proportion, exceeds the first threshold, the service is identified as a specific service and given extra attention, so that services exceeding the performance threshold are warned about in advance and service reliability is improved. In one embodiment of the present disclosure, a specific service may be referred to as a performance bottleneck service, that is, a service whose performance becomes a bottleneck due to excessive input traffic.
According to the technical scheme provided by the embodiment of the disclosure, the method further comprises the following steps: determining whether second performance data for the service exceeds a first threshold; and in response to determining that the second performance data of the service exceeds the first threshold, identifying the service as a specific service, thereby early warning the service exceeding the performance threshold and improving service reliability.
In an embodiment of the disclosure, a specific service whose second performance data, such as processor utilization, interface response time, request success rate, or exception proportion, exceeds the first threshold may be added to a specific service list, and the specific services may be managed in a unified manner, so that services exceeding the performance threshold are warned about in advance and service reliability is improved. The specific services may also be managed in ways other than a specific service list, as long as early warning for services exceeding the performance threshold is achieved.
According to the technical solution provided by the embodiment of the present disclosure, identifying a service as a specific service by responding to a determination that second performance data of the service exceeds a first threshold includes: and adding the specific service to the specific service list, thereby early warning the service exceeding the performance threshold value and improving the service reliability.
In an embodiment of the present disclosure, because different services have different characteristics, such as processor load and I/O throughput, the first thresholds for the second performance data, such as processor utilization, interface response time, request success rate, and exception proportion, may differ between services, adapting to the requirements of different services and improving service reliability. In one embodiment of the present disclosure, the first threshold may also be the same for different services, simplifying the design.
According to the technical scheme provided by the embodiment of the disclosure, different first threshold values are adopted for different services, so that the requirements of different services are adapted, and the service reliability is improved.
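The per-service, per-metric threshold check of steps S201 to S202 can be sketched as follows; the metric names and threshold values are invented for illustration.

```python
# Sketch: compare predicted (second) performance data against per-service,
# per-metric first thresholds and collect the performance bottleneck services.

def find_bottlenecks(predictions, thresholds):
    """
    predictions: {service: {metric: predicted value}}
    thresholds:  {service: {metric: first threshold}}
    Returns the services whose prediction exceeds any of their thresholds.
    """
    bottlenecks = []
    for service, metrics in predictions.items():
        limits = thresholds.get(service, {})
        if any(value > limits.get(metric, float("inf"))
               for metric, value in metrics.items()):
            bottlenecks.append(service)
    return bottlenecks

preds = {"search": {"cpu": 92.0, "rt_ms": 80.0},
         "order":  {"cpu": 60.0, "rt_ms": 45.0}}
limits = {"search": {"cpu": 85.0, "rt_ms": 100.0},
          "order":  {"cpu": 85.0, "rt_ms": 100.0}}
print(find_bottlenecks(preds, limits))  # ['search']
```

Note that a missing threshold defaults to infinity here, i.e. a metric without a configured first threshold never flags a service; that default is a design assumption of this sketch.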
An exemplary application scenario of the service performance prediction method according to an embodiment of the present disclosure is described below with reference to fig. 3. Fig. 3 shows a flowchart of an exemplary application scenario of a service performance prediction method according to an embodiment of the present disclosure. It should be appreciated that the exemplary scenario illustrated in fig. 3 is merely for illustrating the concepts and principles of the present disclosure, and should not be construed as limiting the present disclosure, and does not imply that the present disclosure is only applicable to such an application scenario. As shown in fig. 3, an exemplary application scenario of the service performance prediction method includes: steps S301, S302, S303, S304, S305, S306.
In step S301, an expected overall machine room traffic is input. The expected overall machine room traffic may be the estimated overall machine room traffic during a traffic peak period. The traffic may be for a single machine room or for a plurality of distributed machine rooms.
In step S302, the call volume of each service is generated according to the mapping relationship. On the basis of the overall machine room traffic, the call volume of each service in the machine room is generated according to the mapping relationship between the call volumes of the services.
In step S303, an offline model calculation is performed. The offline model may be a trained neural network model, and different neural network models may be used for different services. Second performance data, such as processor utilization, interface response time, request success rate, and exception proportion, is calculated from the call volume of each service.
In step S304, it is determined whether a threshold is reached, that is, whether the second performance data, such as processor utilization, interface response time, request success rate, and exception proportion, reaches the threshold. If the threshold is not reached, step S305 is executed; if the threshold is reached, step S306 is executed.
In step S305, the service is normal.
In step S306, the service is saved to a local database and added to the performance bottleneck list. A service that reaches the threshold is stored in the local database, added to the performance bottleneck list, and monitored with early warning. More computing resources and I/O interface resources may subsequently be allocated to such a service, preventing service performance from deteriorating during the traffic peak period and affecting user experience.
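The overall scenario of steps S301 to S306 can be sketched end to end as follows; the linear stand-in for the offline model, the service names, and all numbers are assumptions made for illustration.

```python
# End-to-end sketch of the fig. 3 scenario: from an expected machine-room
# traffic figure to a performance-bottleneck list.

def predict_bottlenecks(total_traffic, proportions, models, thresholds):
    bottleneck_list = []
    for service, share in proportions.items():      # S302: per-service call volume
        call_volume = share * total_traffic
        slope, intercept = models[service]          # S303: offline model calculation
        predicted_cpu = slope * call_volume + intercept
        if predicted_cpu >= thresholds[service]:    # S304: threshold check
            bottleneck_list.append(service)         # S306: add to bottleneck list
    return bottleneck_list                          # services not listed: S305, normal

total = 1_000_000  # S301: expected overall machine room traffic
proportions = {"search": 0.6, "payment": 0.4}
models = {"search": (1.5e-4, 5.0), "payment": (1.5e-4, 5.0)}  # cpu% per call
limits = {"search": 80.0, "payment": 80.0}
print(predict_bottlenecks(total, proportions, models, limits))  # ['search']
```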
A service performance prediction apparatus according to an embodiment of the present disclosure is described below with reference to fig. 4. Fig. 4 is a schematic structural diagram of a service performance prediction apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the service performance prediction apparatus 400 includes: a training module 401, an obtaining module 402, a generating module 403 and a calculating module 404.
The training module 401 is configured to train a service performance model using historical data of the service, wherein the historical data of the service comprises invocation data of the service as input features of the service performance model and first performance data of the service as output criteria of the service performance model.
The obtaining module 402 is configured to obtain a mapping relationship of call data between services at a first point in time.
The generating module 403 is configured to generate a call amount of the service for the first input traffic according to the mapping relationship.
The calculation module 404 is configured to calculate second performance data of the service for the first input traffic with the call volume of the service generated for the first input traffic according to the trained service performance model.
According to the technical scheme provided by the embodiment of the disclosure, the training module is configured to train the service performance model by using the historical data of the service, wherein the historical data of the service comprises calling data of the service as the input feature of the service performance model and first performance data of the service as the output standard of the service performance model; the obtaining module is configured to obtain a mapping relation of call data between services at a first time point; the generation module is configured to generate a call amount of the service for the first input traffic according to the mapping relation; the calculation module is configured to calculate second performance data of the service aiming at the first input flow by using the call quantity of the service generated aiming at the first input flow according to the trained service performance model, so that the service performance is accurately predicted through historical data under the condition of not consuming real online resources, and the service reliability is improved.
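Under the assumption that the four modules cooperate as described, the decomposition can be sketched as a single class whose methods mirror modules 401 to 404; the linear fit and the scalar mapping are illustrative stand-ins for the service performance model and the mapping relationship.

```python
# Hedged sketch of the fig. 4 decomposition: train / obtain mapping /
# generate call volume / calculate, for a single service.

class ServicePerformancePredictor:
    def __init__(self):
        self.model = None
        self.mapping = None

    def train(self, call_volumes, performance_values):   # training module 401
        n = len(call_volumes)
        mx = sum(call_volumes) / n
        my = sum(performance_values) / n
        a = (sum((x - mx) * (y - my)
                 for x, y in zip(call_volumes, performance_values))
             / sum((x - mx) ** 2 for x in call_volumes))
        self.model = (a, my - a * mx)

    def obtain_mapping(self, proportion):                # obtaining module 402
        self.mapping = proportion

    def generate_call_volume(self, input_traffic):       # generating module 403
        return self.mapping * input_traffic

    def calculate(self, input_traffic):                  # calculating module 404
        a, b = self.model
        return a * self.generate_call_volume(input_traffic) + b

p = ServicePerformancePredictor()
p.train([100, 200, 300], [10.0, 20.0, 30.0])
p.obtain_mapping(0.5)
print(round(p.calculate(1000), 6))  # 50.0
```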
In one embodiment of the present disclosure, the first point in time is a point in time in the historical data at which an incoming traffic peak has occurred most recently in the system providing the service.
According to the technical scheme provided by the embodiment of the disclosure, the first time point is the time point of the input flow peak occurring in the historical data in the system for providing the service for the latest time, so that the accuracy of the historical data of the first time point is ensured, and the accuracy of service performance prediction is ensured.
In one embodiment of the present disclosure, the call data of the service includes call data of the service itself and call data of a dependent service of the service.
According to the technical scheme provided by the embodiment of the disclosure, the calling data of the service comprises the calling data of the service and the calling data of the dependent service of the service, so that the condition of service calling is fully considered, and the accuracy of service performance prediction is improved.
In one embodiment of the present disclosure, the mapping relationship of the call data between the services is a proportional relationship of the call volume of the services and a value of the call volume.
According to the technical scheme provided by the embodiment of the disclosure, the mapping relation of the calling data among the services is the proportional relation of the calling quantity of the services and the value of the calling quantity, so that the calling quantities of a plurality of different services are accurately evaluated, and the accuracy of service performance prediction is improved.
In one embodiment of the disclosure, the history data of the services includes call volume between the services at a first point in time, and the first input traffic is a preset value.
According to the technical scheme provided by the embodiment of the disclosure, the historical data of the service comprises the call quantity between services at the first time point, and the first input flow is a preset value, so that the condition of service call is fully considered, and the accuracy of service performance prediction is improved.
In one embodiment of the present disclosure, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception proportion of the system providing the service, and the first input traffic refers to a preset number of accesses to the system providing the service.
According to the technical scheme provided by the embodiment of the disclosure, the first performance data and the second performance data refer to at least one of the processor utilization rate, the interface response time, the request success rate and the abnormal proportion of the system providing the service, and the first input flow refers to the preset access times aiming at the system providing the service, so that the service performance is comprehensively evaluated, and the service performance prediction accuracy is improved.
In one embodiment of the present disclosure, the service performance model is a neural network model.
According to the technical scheme provided by the embodiment of the disclosure, the service performance model is the neural network model, so that the accuracy of service performance prediction is improved.
In one embodiment of the present disclosure, different neural network models are employed for different services.
According to the technical scheme provided by the embodiment of the disclosure, different neural network models are adopted for different services, so that the adaptability to different services is improved, and the accuracy of service performance prediction is improved.
A service performance prediction apparatus according to another embodiment of the present disclosure is described below with reference to fig. 5. Fig. 5 shows a schematic structural diagram of a service performance prediction apparatus according to another embodiment of the present disclosure. As shown in fig. 5, the service performance prediction apparatus 500 includes, in addition to the training module 401, the obtaining module 402, the generating module 403, and the calculating module 404 that are the same as those in fig. 4: a determination module 501 and an identification module 502.
The determining module 501 is configured to determine whether the second performance data of the service exceeds a first threshold.
The identification module 502 is configured to identify the service as the particular service in response to determining that the second performance data of the service exceeds the first threshold.
According to the technical scheme provided by the embodiment of the disclosure, the determining module is configured to determine whether the second performance data of the service exceeds a first threshold; the identification module is configured to identify the service as a specific service in response to determining that the second performance data of the service exceeds the first threshold, thereby early warning the service exceeding the performance threshold and improving service reliability.
In one embodiment of the present disclosure, the identification module 502 is further configured to: the specific service is added to the specific service list.
According to the technical scheme provided by the embodiment of the disclosure, the identification module is further configured to: and adding the specific service to the specific service list, thereby early warning the service exceeding the performance threshold value and improving the service reliability.
In one embodiment of the present disclosure, different first thresholds are employed for different services.
According to the technical scheme provided by the embodiment of the disclosure, different first threshold values are adopted for different services, so that the requirements of different services are adapted, and the service reliability is improved.
It will be appreciated by those skilled in the art that the embodiments discussed with reference to fig. 4 and 5 may employ some or all of the details of the embodiments described with reference to fig. 1-3 to provide the technical effects achieved by the embodiments described with reference to fig. 1-3 to the embodiments discussed with reference to fig. 4 and 5. For details, reference may be made to the description made above with reference to fig. 1 to 3, and details thereof are not repeated herein.
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
The foregoing embodiments describe the internal functions and structure of the service performance prediction apparatus. In one possible design, the service performance prediction apparatus may be implemented as an electronic device. As shown in fig. 6, the electronic device 600 may include a processor 601 and a memory 602.
The memory 602 stores a program supporting the processor in executing the service performance prediction method in any of the above embodiments, and the processor 601 is configured to execute the program stored in the memory 602.
The memory 602 is used to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor 601 to implement the steps of:
training a service performance model by using historical data of a service, wherein the historical data of the service comprises calling data of the service as an input feature of the service performance model and first performance data of the service as an output standard of the service performance model;
acquiring a mapping relation of calling data among the services at a first time point;
generating the calling amount of the service aiming at the first input flow according to the mapping relation;
and calculating second performance data of the service aiming at the first input flow by using the call quantity of the service generated aiming at the first input flow according to the trained service performance model.
According to the technical scheme provided by the embodiment of the disclosure, the first time point is a time point of a latest occurrence of an input traffic peak in a system providing the service in the historical data.
According to the technical scheme provided by the embodiment of the disclosure, the calling data of the service comprises calling data of the service and calling data of dependent services of the service.
According to the technical scheme provided by the embodiment of the disclosure, the mapping relation of the call data among the services is the proportional relation of the call volume of the services and the value of the call volume.
According to the technical scheme provided by the embodiment of the disclosure, the historical data of the services comprises the call volume between the services at the first time point, and the first input flow is a preset value.
According to the technical solution provided by the embodiment of the present disclosure, the first performance data and the second performance data refer to at least one of a processor utilization rate, an interface response time, a request success rate, and an exception proportion of a system providing a service, and the first input traffic refers to a preset number of accesses to the system providing the service.
According to the technical scheme provided by the embodiment of the disclosure, the service performance model is a neural network model.
According to the technical scheme provided by the embodiment of the disclosure, different neural network models are adopted for different services.
According to the technical solution provided by the embodiment of the present disclosure, the memory 602 is configured to store one or more computer instructions, where the one or more computer instructions are further executed by the processor 601 to implement the following steps:
determining whether second performance data for the service exceeds a first threshold;
identifying the service as a particular service in response to determining that the second performance data for the service exceeds a first threshold.
According to the technical solution provided by the embodiment of the present disclosure, identifying the service as the specific service in response to determining that the second performance data of the service exceeds the first threshold includes:
adding the specific service to a specific service list.
According to the technical scheme provided by the embodiment of the disclosure, different first threshold values are adopted for different services.
The processor 601 is configured to perform all or part of the aforementioned method steps.
It is to be noted that the processor 601 in the present embodiment may be implemented as two or more processors. A portion of the processor, for example, a central processing unit, executes a first data processing mode. Another part of the processor, for example, a graphics processor, performs a second data processing mode.
Exemplary embodiments of the present disclosure also provide a computer storage medium for storing computer software instructions for the service performance prediction apparatus described above, which include a program for executing the method of any one of the above embodiments, thereby providing the technical effects of the method.
FIG. 7 illustrates a schematic block diagram of a computer system suitable for use in implementing a service performance prediction method according to an embodiment of the present disclosure.
As shown in fig. 7, the computer system 700 includes a processor (CPU, GPU, FPGA, etc.) 701, which can perform part or all of the processing in the embodiments shown in the above-described drawings according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. Various programs and data necessary for the operation of the system 700 are also stored in the RAM 703. The processor 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, and the like. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as necessary. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is installed into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the methods described above with reference to the figures may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the methods of the figures. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus of the above-described embodiments, or a stand-alone computer-readable storage medium not assembled into any device. The computer-readable storage medium stores one or more programs which, when used by one or more processors, perform the methods described in the present disclosure and thereby provide the technical effects brought about by those methods.
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, and also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by interchanging the above features with (but not limited to) features disclosed in this disclosure that have similar functions.

Claims (10)

1. A method for predicting service performance, comprising:
training a service performance model by using historical data of a service, wherein the historical data of the service comprises call data of the service, used as input features of the service performance model, and first performance data of the service, used as output targets of the service performance model;
acquiring a mapping relationship of the call data among services at a first time point;
generating a call volume of the service for a first input traffic according to the mapping relationship; and
calculating second performance data of the service for the first input traffic by applying the trained service performance model to the call volume of the service generated for the first input traffic.
2. The method of claim 1, wherein the first time point is the time point in the historical data at which an incoming traffic peak most recently occurred in a system providing the service.
3. The method according to claim 1 or 2, wherein the call data of the service comprises call data of the service itself and call data of services on which the service depends.
4. The method according to claim 1 or 2, wherein the historical data of the service comprises call volumes between services at the first time point, and the first input traffic is a preset value.
5. The method of claim 4, wherein the first performance data and the second performance data refer to at least one of the processor utilization, interface response time, request success rate, and exception rate of a system providing the service, and wherein the first input traffic refers to a preset number of accesses to the system providing the service.
6. The method according to claim 1 or 2, wherein the service performance model is a neural network model.
7. The method of claim 1 or 2, further comprising:
determining whether the second performance data of the service exceeds a first threshold; and
identifying the service as a particular service in response to determining that the second performance data of the service exceeds the first threshold.
8. A service performance prediction apparatus, comprising:
a training module configured to train a service performance model using historical data of a service, wherein the historical data of the service comprises call data of the service, used as input features of the service performance model, and first performance data of the service, used as output targets of the service performance model;
an obtaining module configured to obtain a mapping relationship of the call data among services at a first time point;
a generating module configured to generate a call volume of the service for a first input traffic according to the mapping relationship; and
a calculating module configured to calculate second performance data of the service for the first input traffic by applying the trained service performance model to the call volume of the service generated for the first input traffic.
9. An electronic device comprising a memory and a processor, wherein
the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method of any one of claims 1-7.
10. A readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the method according to any one of claims 1-7.
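For illustration, the four steps of claim 1 and the threshold check of claim 7 can be sketched as follows. The traffic figures, the two-column call data, the least-squares linear fit (standing in for the neural network model of claim 6), and the 80% CPU threshold are hypothetical assumptions, not values from the disclosure:

```python
import numpy as np

# Hypothetical historical data: each row is one time point; the columns
# are the call volumes of the service itself and of one dependent service.
calls = np.array([[100., 40.],
                  [200., 80.],
                  [300., 120.],
                  [400., 160.]])
cpu = np.array([10., 20., 30., 40.])   # first performance data (CPU %) per time point

# Step 1: train the service performance model. A least-squares linear
# fit stands in here for the neural network of claim 6.
X = np.hstack([calls, np.ones((len(calls), 1))])   # add a bias column
w, *_ = np.linalg.lstsq(X, cpu, rcond=None)

# Step 2: mapping relationship at the first time point, expressed as
# calls generated per unit of incoming traffic (claim 2 suggests using
# the most recent traffic peak as this time point).
traffic_at_peak = 1000.0
calls_at_peak = np.array([400., 160.])
ratio = calls_at_peak / traffic_at_peak

# Step 3: generate the call volume for a preset first input traffic.
first_input_traffic = 2500.0
generated_calls = ratio * first_input_traffic

# Step 4: predict the second performance data with the trained model.
predicted_cpu = np.hstack([generated_calls, 1.0]) @ w

# Claim 7: flag the service as a "particular service" when the
# prediction exceeds a first threshold (80% CPU is an assumption).
first_threshold = 80.0
is_particular_service = predicted_cpu > first_threshold
```

Because call volumes scale linearly with input traffic in this toy data, the sketch predicts 100% CPU utilization for the 2500-unit traffic and flags the service; in practice a model trained on real historical data would replace the linear fit.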
CN202010271099.4A 2020-04-08 2020-04-08 Service performance prediction method and device, electronic equipment and readable storage medium Pending CN111475393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010271099.4A CN111475393A (en) 2020-04-08 2020-04-08 Service performance prediction method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010271099.4A CN111475393A (en) 2020-04-08 2020-04-08 Service performance prediction method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111475393A true CN111475393A (en) 2020-07-31

Family

ID=71750081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010271099.4A Pending CN111475393A (en) 2020-04-08 2020-04-08 Service performance prediction method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111475393A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617038A (en) * 2013-11-28 2014-03-05 北京京东尚科信息技术有限公司 Service monitoring method and device for distributed application system
CN107257289A (en) * 2017-04-24 2017-10-17 努比亚技术有限公司 Risk analysis device, monitoring system and monitoring method
CN107943579A (en) * 2017-11-08 2018-04-20 深圳前海微众银行股份有限公司 Resource bottleneck prediction method, device, system and readable storage medium
CN108259376A (en) * 2018-04-24 2018-07-06 北京奇艺世纪科技有限公司 Control method and related device for server cluster service traffic
CN108984304A (en) * 2018-07-11 2018-12-11 广东亿迅科技有限公司 Server expansion calculation method and device based on regression equation
CN109032914A (en) * 2018-09-06 2018-12-18 掌阅科技股份有限公司 Resource occupation data prediction method, electronic device, and storage medium
CN109062769A (en) * 2018-08-21 2018-12-21 南京星邺汇捷网络科技有限公司 Method, apparatus and device for IT system performance risk trend prediction
CN109309596A (en) * 2017-07-28 2019-02-05 阿里巴巴集团控股有限公司 Pressure testing method, device and server
US20190044825A1 * 2018-02-19 2019-02-07 GAVS Technologies Pvt. Ltd. Method and system to proactively determine potential outages in an information technology environment
CN109327353A (en) * 2018-09-29 2019-02-12 阿里巴巴集团控股有限公司 Service traffic determination method, apparatus and electronic device
US20190213099A1 * 2018-01-05 2019-07-11 NEC Laboratories Europe GmbH Methods and systems for machine-learning-based resource prediction for resource allocation and anomaly detection
CN110149396A (en) * 2019-05-20 2019-08-20 华南理工大学 Internet-of-things platform construction method based on microservice architecture


Similar Documents

Publication Publication Date Title
US8234229B2 (en) Method and apparatus for prediction of computer system performance based on types and numbers of active devices
US10819603B2 (en) Performance evaluation method, apparatus for performance evaluation, and non-transitory computer-readable storage medium for storing program
CN110618924B (en) Link pressure testing method of web application system
US8874642B2 (en) System and method for managing the performance of an enterprise application
CN110633194B (en) Performance evaluation method of hardware resources in specific environment
CN110502431B (en) System service evaluation method and device and electronic equipment
US20120317069A1 (en) Throughput sustaining support system, device, method, and program
CN113837596B (en) Fault determination method and device, electronic equipment and storage medium
CN110569166A (en) Abnormality detection method, abnormality detection device, electronic apparatus, and medium
US8180716B2 (en) Method and device for forecasting computational needs of an application
CN107480703B (en) Transaction fault detection method and device
CN116560794A (en) Exception handling method and device for virtual machine, medium and computer equipment
CN111897706A (en) Server performance prediction method, device, computer system and medium
CN111143209A (en) Layered pressure testing method and device, electronic equipment and storage medium
CN110413482B (en) Detection method and device
CN115292146B (en) System capacity estimation method, system, equipment and storage medium
CN111475393A (en) Service performance prediction method and device, electronic equipment and readable storage medium
CN111858267A (en) Early warning method and device, electronic equipment and storage medium
CN116701123A (en) Task early warning method, device, equipment, medium and program product
CN115509853A (en) Cluster data anomaly detection method and electronic equipment
CN108984271A (en) Load balancing method and related equipment
CN115222278A (en) Intelligent inspection method and system for robot
CN114819367A (en) Public service platform based on industrial internet
CN114063881B (en) Disk management method and device for distributed system
CN113742083A (en) Scheduling simulation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200731