CN114826951B - Service automatic degradation method, device, computer equipment and storage medium - Google Patents

Service automatic degradation method, device, computer equipment and storage medium

Info

Publication number
CN114826951B
CN114826951B
Authority
CN
China
Prior art keywords
flow
target
prediction model
value
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210608028.8A
Other languages
Chinese (zh)
Other versions
CN114826951A (en)
Inventor
程鹏
白佳乐
任政
谢伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210608028.8A priority Critical patent/CN114826951B/en
Publication of CN114826951A publication Critical patent/CN114826951A/en
Application granted granted Critical
Publication of CN114826951B publication Critical patent/CN114826951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/29 Flow control; Congestion control using a combination of thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a service automatic degradation method, apparatus, computer device and storage medium. It relates to the field of artificial intelligence and can be applied in the field of financial technology. The method comprises the following steps: determining target flow data from a plurality of historical flow data of a server; predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, the predicted flow result being used to represent the flow condition of the server in a prediction period; determining a target service to be degraded in the case that a target flow value exceeding a flow threshold exists in the predicted flow result; and performing degradation processing on the target service to limit its flow. With this method, the target service can be degraded before the flow of the server exceeds the flow threshold, sufficient server resources are reserved, and the stability of core services on the server is improved.

Description

Service automatic degradation method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method, apparatus, computer device, and storage medium for automatically degrading services.
Background
In practical application scenarios, multiple different services often run on a single server at the same time. To conserve resources, a server is typically provisioned with fewer resources than the total demand of all services combined. During peak service periods, the server may therefore be unable to allocate enough resources to execute tasks for every service. In that case, to keep core services running normally, non-core services must be degraded so that the resource usage of core services is guaranteed first.
In the related art, service degradation of non-core tasks is usually initiated only after a service call has already failed. Such degradation is not timely enough: by the time a call failure occurs, the core service may already have been affected, reducing its stability.
Disclosure of Invention
Based on this, it is necessary to provide a service automatic degradation method, apparatus, computer device and storage medium in order to solve the above technical problems.
In a first aspect, the present application provides a method for automatically degrading a service. The method comprises the following steps:
determining target flow data from a plurality of historical flow data of a server;
predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, wherein the predicted flow result is used for representing the flow condition of the server in a predicted period;
determining a target service to be degraded under the condition that a target flow value exceeding a flow threshold exists in the predicted flow result;
and carrying out degradation processing on the target service to realize current limiting on the target service.
In one embodiment, the determining the target service to be degraded in the case that the target flow value exceeding the flow threshold exists in the predicted flow result includes:
determining a target flow value greater than the flow threshold from the predicted flow result;
determining a flow difference value according to the target flow value and the flow threshold value;
and determining a target service to be degraded from the services with the target service attribute according to the flow difference, wherein the service attribute is used for representing the core degree of the service.
In one embodiment, the flow prediction model includes at least two prediction models, and the predicting the target flow data by using the flow prediction model to obtain a predicted flow result of the server includes:
predicting the target flow data through each prediction model to obtain a plurality of initial prediction results;
and carrying out fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain a prediction flow result.
In one embodiment, the method further comprises:
acquiring flow data corresponding to each flow characteristic from historical flow data corresponding to a sample server;
determining a target flow characteristic from the flow characteristics according to the flow data corresponding to the flow characteristics by adopting a characteristic weight algorithm;
constructing a training set according to the flow data corresponding to the target flow characteristics;
and training the initial prediction model corresponding to each prediction model through the training set to obtain the flow prediction model.
In one embodiment, the method further comprises:
determining the precision of each prediction model;
and determining the prediction weight of each prediction model according to the precision of each prediction model.
In one embodiment, the training, through the training set, the initial prediction model corresponding to each prediction model includes:
for any one of the prediction models, in the process of training its corresponding initial prediction model on the training set, determining the optimal hyperparameters of the prediction model using a particle swarm algorithm; during this determination, updating the inertia factor of the particle swarm algorithm in real time based on historical data from the particle motion process, and determining the optimal hyperparameters of the prediction model using the updated particle swarm algorithm.
In one embodiment, the updating the inertia factor in the particle swarm algorithm based on the historical data in the particle motion process in real time includes:
according to the historical data in the particle movement process, determining a historical maximum inertia factor, a historical minimum inertia factor, a particle historical minimum target value and a particle historical maximum target value of the particles in the movement process;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the particle historical minimum target value and the particle historical maximum target value;
and taking the difference value between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
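As a rough illustration, the inertia-factor update might take the following form in code. The exact adjustment formula is an assumption: this section only states that a first adjustment value is derived from the four historical quantities and subtracted from the historical maximum inertia factor.

```python
def updated_inertia(w_hist_max, w_hist_min, f_hist_min, f_hist_max):
    """Adaptive inertia factor for a particle swarm step (hypothetical form).

    w_hist_max / w_hist_min: historical maximum / minimum inertia factors.
    f_hist_min / f_hist_max: the particle's historical minimum / maximum
    objective (target) values during its motion.
    """
    if f_hist_max == 0:
        # Degenerate case: no meaningful ratio, keep the maximum inertia.
        return w_hist_max
    # Hypothetical adjustment: scale the inertia range by the ratio of the
    # particle's historical minimum to maximum objective value.
    adjustment = (w_hist_max - w_hist_min) * (f_hist_min / f_hist_max)
    # The patent specifies subtracting the adjustment from the historical
    # maximum inertia factor to obtain the current inertia factor.
    return w_hist_max - adjustment
```

A particle whose best (minimum) objective value is far below its worst receives a small adjustment and thus keeps a large inertia factor, favoring exploration; as the extrema converge, inertia shrinks toward the minimum, favoring exploitation.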
In a second aspect, the present application further provides a service automatic degradation device. The device comprises:
the first determining module is used for determining target flow data from a plurality of historical flow data of the server;
the prediction module is used for predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, wherein the predicted flow result is used for representing the flow condition of the server in a predicted period;
the second determining module is used for determining target service to be degraded under the condition that a target flow value exceeding a flow threshold exists in the predicted flow result;
and the degradation module is used for carrying out degradation processing on the target service so as to realize the current limiting of the target service.
In one embodiment, the downgrade module is further configured to:
determining a target flow value greater than the flow threshold from the predicted flow result;
determining a flow difference value according to the target flow value and the flow threshold value;
and determining a target service to be degraded from the services with the target service attribute according to the flow difference, wherein the service attribute is used for representing the core degree of the service.
In one embodiment, the flow prediction model includes at least two prediction models, and the prediction module is further configured to:
predicting the target flow data through each prediction model to obtain a plurality of initial prediction results;
and carrying out fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain a prediction flow result.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring flow data corresponding to each flow characteristic from the historical flow data corresponding to the sample server;
the third determining module is used for determining a target flow characteristic from the flow characteristics according to the flow data corresponding to the flow characteristics by adopting a characteristic weight algorithm;
the construction module is used for constructing a training set according to the flow data corresponding to the target flow characteristics;
and the training module is used for training the initial prediction model corresponding to each prediction model through the training set respectively to obtain the flow prediction model.
In one embodiment, the apparatus further comprises:
a fourth determining module, configured to determine the accuracy of each prediction model;
and a fifth determining module, configured to determine the prediction weight of each prediction model according to the accuracy of each prediction model.
In one embodiment, the training module is further configured to:
for any one of the prediction models, in the process of training its corresponding initial prediction model on the training set, determining the optimal hyperparameters of the prediction model using a particle swarm algorithm; during this determination, updating the inertia factor of the particle swarm algorithm in real time based on historical data from the particle motion process, and determining the optimal hyperparameters of the prediction model using the updated particle swarm algorithm.
In one embodiment, the training module is further configured to:
according to the historical data in the particle movement process, determining a historical maximum inertia factor, a historical minimum inertia factor, a particle historical minimum target value and a particle historical maximum target value of the particles in the movement process;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the particle historical minimum target value and the particle historical maximum target value;
and taking the difference value between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements any of the methods above.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs any of the methods above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements any of the methods above.
According to the service automatic degradation method, apparatus, computer device and storage medium, target flow data is determined and predicted through a flow prediction model, and the target service to be degraded is determined when a target flow value in the predicted flow result exceeds the flow threshold. In other words, the embodiments of the present application can perform degradation processing on the target service before the flow of the server actually exceeds the flow threshold, improving the timeliness of degradation. By limiting the flow of the target service in advance, sufficient server resources are reserved for periods of heavy traffic, so the stability of core services on the server can be improved.
Drawings
FIG. 1 is a flow diagram of a method for automatically degrading services in one embodiment;
FIG. 2 is a flow chart of step 106 in one embodiment;
FIG. 3 is a flow chart of step 104 in one embodiment;
FIG. 4 is a flow diagram of a method for automatically degrading services in one embodiment;
FIG. 5 is a flow diagram of a method for automatically degrading services in one embodiment;
FIG. 6 is a flow diagram of a method for automatically degrading services in one embodiment;
FIG. 7 is a schematic diagram of a method of automatic degradation of services in one embodiment;
FIG. 8 is a block diagram of an apparatus for automatically degrading services in one embodiment;
FIG. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a service automatic degradation method is provided, and this embodiment is applied to a terminal for illustration by using the method, it is understood that the method may also be applied to a server, and may also be applied to a system including the terminal and the server, and implemented through interaction between the terminal and the server. In this embodiment, the method includes the steps of:
Step 102, determining target flow data from a plurality of historical flow data of the server.
In this embodiment of the present application, the historical traffic data is traffic data generated on the server when the terminal invokes services, for example, a year-on-year traffic value, a period-on-period (ring ratio) traffic value, the number of call failures, call timeout durations, the caller, the callee, and the like. The target flow data is the flow data in the historical flow data that satisfies a preset condition.
For example, the flow data satisfying the preset condition may be flow data corresponding to a flow characteristic having a characteristic weight greater than a threshold value. The specific value of the threshold in the embodiment of the present application is not specifically limited, and may be selected empirically by those skilled in the art. In the embodiment of the present application, the flow characteristics refer to the type of flow data, and the characteristic weights are used for characterizing the capability of predicting the flow result of a certain type of flow data. For example, any feature weight algorithm may be used to determine the feature weight of each flow feature, and select a flow feature with a feature weight greater than the threshold value from the feature weights, so as to use the flow data corresponding to the flow feature with the feature weight greater than the threshold value as the target flow data.
It should be noted that the target flow data is not necessarily part of the historical flow data, and if the feature weights of all the flow features are greater than the threshold value, the flow data corresponding to all the flow features, that is, all the historical flow data, may be selected as the target flow data.
After the target flow data is selected, it may be preprocessed; for example, the data within each 1-second window may be averaged and normalized. The preprocessing details are not elaborated here; any approach that implements the above preprocessing operations is suitable for the embodiments of the present application.
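The preprocessing step above can be sketched as follows. The 1-second bucketing and min-max normalization are illustrative assumptions, since the patent does not fix a specific scheme:

```python
def preprocess(timestamps, values):
    """Average raw flow samples into 1-second buckets, then min-max normalize.

    timestamps: sample times in seconds (floats); values: flow measurements.
    Returns one normalized value per 1-second bucket, in time order.
    """
    # Group samples by whole-second bucket and average each bucket.
    buckets = {}
    for t, v in zip(timestamps, values):
        buckets.setdefault(int(t), []).append(v)
    series = [sum(buckets[s]) / len(buckets[s]) for s in sorted(buckets)]
    # Min-max normalize into [0, 1]; a constant series maps to all zeros.
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0] * len(series)
    return [(v - lo) / (hi - lo) for v in series]
```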
And 104, predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, wherein the predicted flow result is used for representing the flow condition of the server in a predicted period.
In the embodiment of the application, the target flow data can be predicted through the pre-trained flow prediction model to obtain the corresponding predicted flow result, and the predicted flow result can represent the flow condition of the server in the prediction period so as to judge whether degradation processing is required to be carried out on the target service according to the flow condition of the server in the prediction period. The predicted traffic result includes predicted traffic values of the server at various time points in the predicted time period.
And 106, determining the target service to be degraded in the case that the target flow value exceeding the flow threshold exists in the predicted flow result.
In this embodiment of the present application, the flow threshold refers to an upper limit of service flow that can be processed by the server at the same time, which is a preset numerical value, and the specific numerical value can be set by those skilled in the art according to requirements. The target service is a service that can be subjected to degradation processing in the case where the traffic value of the server exceeds the traffic threshold. For example, services may be divided into core services and non-core services according to importance levels of the services in the server, wherein the importance of the core services is higher than that of the non-core services. The target service may be a non-core service, i.e. a service of relatively low importance among all services handled by the server.
After the predicted flow result is obtained, the predicted flow value of the server at each time point in the predicted period in the predicted flow result can be compared with the flow threshold. If the predicted flow value at any time point is greater than the flow threshold, the flow value at the time point of the server exceeds the flow threshold, and the predicted flow value at the time point can be determined as the target flow value. After the predicted flow value and the flow threshold value of each time point of the server in the predicted period are compared, determining target service to be degraded according to each target flow value and the flow threshold value, and performing degradation treatment on the target service in advance so as to avoid the problem that the core service may call failure under the condition that the flow value of the server exceeds the flow threshold value.
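The comparison described above amounts to scanning the predicted series for points that exceed the threshold; a minimal sketch, where the function name and the list-of-values layout are assumptions:

```python
def find_target_flow_values(predicted_flow, flow_threshold):
    """Return (time_index, flow_value) pairs where the predicted flow
    exceeds the flow threshold; each such value is a target flow value."""
    return [(i, v) for i, v in enumerate(predicted_flow) if v > flow_threshold]
```

If the returned list is non-empty, degradation of target services is triggered ahead of the predicted overload.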
And step 108, performing degradation processing on the target service to realize the current limit on the target service.
In the embodiment of the application, performing degradation processing on the target service refers to limiting server resources that can be invoked by the target service, for example, temporarily masking the target service, and the like. The specific manner of degrading the target service is not particularly limited, and any manner of degrading the target service is applicable to the embodiment of the present application.
According to the service automatic degradation method provided by this embodiment of the present application, target flow data is determined and predicted through a flow prediction model, and the target service to be degraded is determined when a target flow value in the predicted flow result exceeds the flow threshold. That is, degradation processing can be performed on the target service before the flow of the server exceeds the flow threshold, improving the timeliness of degradation; by limiting the flow of the target service in advance, sufficient server resources are reserved for periods of heavy traffic, so the stability of core services on the server can be improved.
In one embodiment, as shown in fig. 2, in step 106, in the case that there is a target flow value exceeding the flow threshold in the predicted flow result, determining a target service to be downgraded includes:
step 202, determining a target flow value greater than a flow threshold from the predicted flow result.
In the embodiment of the application, the target flow value larger than the flow threshold in the predicted flow result can be determined according to the predicted flow result output by the flow prediction model, so that the target service needing degradation processing can be judged according to the target flow value and the flow threshold. The target flow value may be one or more.
Step 204, determining a flow difference value according to the target flow value and the flow threshold value.
It should be noted that, in the embodiments of the present application, the manner of determining the flow difference value according to the target flow value and the flow threshold value is not particularly limited. For example, in the case where the target flow value is one, the target flow value may be differenced from the flow threshold to determine the flow difference. In the case that the target flow value is a plurality of, the embodiment of the application can determine the flow difference by making the difference between the maximum target flow value and the flow threshold; the minimum target flow value and the flow threshold value can be differenced to determine the flow difference value; alternatively, a mean value of the target flow value may be taken, and the flow difference may be determined by making a difference between the mean value and the flow threshold.
And 206, determining a target service to be degraded from the services with the target service attribute according to the flow difference, wherein the service attribute is used for representing the core degree of the service.
In the embodiment of the application, the service attribute may be used to characterize the core degree of the service, for example, the service attribute may be divided into a first service attribute and a second service attribute, where the service with the first service attribute may be a core service, and the service with the second service attribute may be a non-core service. The target service attribute may be a second service attribute, that is, the embodiment of the present application may determine a target service to be downgraded from the non-core service.
For example, when determining the target service to be downgraded according to the traffic difference value, the number of target services to be downgraded may be determined according to the size of the traffic difference value, so that the number of target services is determined from the services (non-core services) having the target service attribute.
The number of target services is positively correlated with the magnitude of the flow difference: the smaller the flow difference, the fewer the target services; conversely, the larger the flow difference, the more the target services. In this way, the target services can be degraded so that the flow value of the server in the prediction period is reduced while the impact on non-core services in the server is kept as small as possible.
For example, when the flow difference is small, the flow value of the server only slightly exceeds the threshold during the prediction period. Degrading just a portion of the non-core services is then enough to bring the server's flow below the threshold and reserve sufficient resources for core services; fewer non-core services are degraded, reducing the impact on the remaining non-core services and preserving their stability. Conversely, when the flow difference is large, the server's flow exceeds the threshold by a larger margin during the prediction period. Degrading most of the target services is then needed to reduce the server's flow and reserve enough resources for core services, so more non-core services are degraded and the stability of core services is prioritized.
In determining the number of target services from the services (non-core services) having the target service attribute, the target services may be selected randomly or may be selected based on importance levels of the respective services, for example, a service having a lower importance level may be preferentially selected to alleviate the influence of a service having a higher importance level.
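The selection logic described above can be sketched as follows, assuming each non-core service carries an importance level and an estimated flow contribution (both hypothetical fields; the patent leaves the exact selection criterion open):

```python
def select_services_to_degrade(non_core_services, flow_difference):
    """Pick non-core services to degrade, lowest importance first, until
    their combined estimated flow covers the flow difference.

    non_core_services: list of (name, importance, estimated_flow) tuples,
    where lower importance means the service is degraded sooner.
    """
    chosen = []
    freed = 0.0
    for name, importance, estimated_flow in sorted(non_core_services,
                                                   key=lambda s: s[1]):
        if freed >= flow_difference:
            break
        chosen.append(name)
        freed += estimated_flow
    return chosen
```

A small flow difference thus degrades few services, while a large one degrades more, matching the positive correlation described above.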
With the service automatic degradation method described above, the difference between the server's predicted flow value in the prediction period and the flow threshold can be evaluated from the predicted flow result produced by the flow prediction model, and the target services to be degraded are then determined from that difference. That is, the method can degrade target services before the server's flow exceeds the flow threshold, improving the timeliness of degradation; by limiting the flow of target services in advance, sufficient server resources are reserved for periods of heavy traffic, improving the stability of core services on the server.
In one embodiment, as shown in fig. 3, the flow prediction model includes at least two prediction models, and in step 104, predicting the target flow data by using the flow prediction model to obtain a predicted flow result of the server includes:
and step 302, respectively predicting the target flow data through each prediction model to obtain a plurality of initial prediction results.
In this embodiment of the present application, the target flow data selected in the foregoing embodiment may be predicted by a plurality of prediction models respectively to obtain the initial prediction result of each prediction model. A prediction model is a pre-trained model for predicting the flow condition of the server in the prediction period. The embodiments of the present application do not specifically limit the model structure or the training process; any prediction model that can predict the flow value of the server in the prediction period from the target flow data is applicable, for example, the Prophet model or an LSTM (Long Short-Term Memory) network.
And step 304, carrying out fusion processing on a plurality of initial prediction results according to the prediction weights of the prediction models to obtain a prediction flow result.
In this embodiment of the present application, after obtaining the initial prediction results of each prediction model, fusion processing may be performed on each initial prediction result according to the prediction weights of each prediction model, for example: and weighting and summing all the initial prediction results according to the prediction weight of each prediction model to obtain a final prediction flow result.
The method for determining the prediction weights of the prediction models is not particularly limited, and all the methods for determining the prediction weights of the prediction models are applicable to the embodiments of the present application. For example: and determining the prediction weight according to the loss value of the prediction model during training.
According to the service automatic degradation method provided by the embodiment of the application, the target flow data can be predicted by adopting a plurality of prediction models, and the initial prediction results of the prediction models are obtained and then fused according to the prediction weights of the prediction models. The embodiment of the application adopts a plurality of prediction models to predict, so that errors generated in the prediction process of each prediction model can be made up, the problem of insufficient prediction precision of a single prediction model is solved, and the precision of flow prediction can be improved.
In one embodiment, as shown in fig. 4, the method further includes:
step 402, obtaining flow data corresponding to each flow characteristic from historical flow data corresponding to a sample server.
In the embodiment of the present application, a flow characteristic is a category of flow data. By way of example, flow characteristics may include the flow same-period ratio, the flow ring ratio, the number of call failures, the call timeout time, the caller, the callee, etc. The same-period ratio is the ratio of the current flow to the flow in the corresponding historical period, for example, the ratio of today's flow to the flow on the same day last week or last month. The ring ratio is the ratio of the current period's flow to the previous period's flow, for example, the ratio of today's flow to yesterday's flow, or of this week's flow to last week's flow.
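The two ratio characteristics can be illustrated with a small sketch; the function names are ours, not from the embodiment:

```python
# Illustrative computation of the two ratio characteristics (hypothetical names).
def same_period_ratio(current_flow, same_period_historical_flow):
    """Ratio of current flow to the flow in the corresponding past period
    (e.g. the same day last week or last month)."""
    return current_flow / same_period_historical_flow

def ring_ratio(current_period_flow, previous_period_flow):
    """Ratio of the current period's flow to the immediately preceding
    period's flow (e.g. today vs. yesterday)."""
    return current_period_flow / previous_period_flow

same_ratio = same_period_ratio(1200.0, 1000.0)  # 1.2
loop_ratio = ring_ratio(1200.0, 1500.0)         # 0.8
```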
And step 404, determining the target flow characteristics from the flow characteristics according to the flow data corresponding to the flow characteristics by adopting a characteristic weight algorithm.
In the embodiment of the present application, a feature weight algorithm may be used to determine the target flow characteristics from among all flow characteristics, based on the flow data corresponding to each characteristic. For example, the feature weight algorithm may compute a feature weight for each flow characteristic, rank the characteristics by weight from high to low, and select the characteristics with higher weights as the target flow characteristics. For example, if the flow characteristics are the flow same-period ratio, the flow ring ratio, the number of call failures, the call timeout time, the caller, and the callee, the target flow characteristics might be: the same-period ratio, the ring ratio, the number of call failures, and the call timeout time.
It should be noted that, the feature weight algorithm is not limited in particular in the embodiment of the present application, and any algorithm capable of determining the feature weight of each flow feature according to the flow data corresponding to each flow feature is suitable for the embodiment of the present application.
Taking the ReliefF algorithm, a multi-class feature extraction algorithm, as an example of a feature weight algorithm: in the embodiment of the present application, the total traffic data may be divided into two sample groups: flow data that triggered flow limiting (i.e., flow data generated when the flow value of the sample server exceeded the flow threshold) and flow data that did not trigger flow limiting (i.e., flow data generated when the flow value of the sample server did not exceed the flow threshold). The ReliefF algorithm first randomly selects one flow data sample a from all the flow data. It then takes the k nearest-neighbor samples of a from the sample group with the same classification as a, recorded as set H, and the k nearest-neighbor samples of a from each sample group with a different classification, recorded as set M. Then, for any feature A, the algorithm computes a first average (the mean difference between each element of H and a on feature A) and a second average (the mean difference between each element of M and a on feature A), and determines the feature weight of feature A from the first average and the second average.
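The ReliefF procedure just described can be sketched for the two-group case (triggered vs. did not trigger flow limiting). This is a simplified illustration, assuming Euclidean distance and omitting the usual normalization of feature differences; all names are hypothetical:

```python
import math
import random

def relieff_weights(samples, labels, k=2, iterations=10, seed=0):
    """Two-class ReliefF sketch. samples: list of feature vectors;
    labels: whether each sample triggered flow limiting."""
    rng = random.Random(seed)
    n_features = len(samples[0])
    weights = [0.0] * n_features

    def dist(x, y):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(x, y)))

    for _ in range(iterations):
        i = rng.randrange(len(samples))
        a, label_a = samples[i], labels[i]
        same = [s for s, l in zip(samples, labels) if l == label_a and s is not a]
        other = [s for s, l in zip(samples, labels) if l != label_a]
        hits = sorted(same, key=lambda s: dist(s, a))[:k]     # set H
        misses = sorted(other, key=lambda s: dist(s, a))[:k]  # set M
        for f in range(n_features):
            # first average: mean diff to near hits; second: to near misses
            first = sum(abs(h[f] - a[f]) for h in hits) / max(len(hits), 1)
            second = sum(abs(m[f] - a[f]) for m in misses) / max(len(misses), 1)
            weights[f] += second - first  # discriminative features score higher
    return weights

# Feature 0 separates the two groups; feature 1 is constant noise:
samples = [[0.0, 5.0], [0.1, 5.0], [0.2, 5.0],
           [10.0, 5.0], [10.1, 5.0], [9.9, 5.0]]
labels = [0, 0, 0, 1, 1, 1]
w = relieff_weights(samples, labels)
# w[0] > w[1]: feature 0 would be selected as a target flow characteristic
```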
Step 406, constructing a training set according to the flow data corresponding to the target flow characteristics.
And step 408, training the initial prediction models corresponding to the prediction models through the training set to obtain the prediction models.
According to the method and the device, a training set can be constructed from the flow data corresponding to the target flow characteristics screened out of all the flow characteristics, and the initial prediction model corresponding to each prediction model can then be trained to obtain prediction models capable of predicting the flow condition of the server in the prediction period.
It should be noted that, in the embodiments of the present application, the training process of the initial prediction model is not specifically limited. The training mode that the initial prediction model can be trained to obtain the trained prediction model is applicable to the embodiment of the application. For example: an ensemble learning algorithm, etc.
According to the service automatic degradation method provided by the embodiment of the application, a training set can be constructed from the flow data corresponding to the flow characteristics whose feature weights meet a preset condition, and the initial prediction models can be trained on this training set to obtain the trained prediction models. Because the flow data are screened so that only the flow data corresponding to the characteristics with larger feature weights, that is, the characteristics with stronger predictive power, form the training set, redundant data with weak predictive power are kept out of the initial prediction model during training, and the training speed of the initial prediction model can therefore be improved.
In one embodiment, as shown in fig. 5, the method further includes:
step 502, determining the accuracy of each prediction model.
In the embodiment of the application, the accuracy of a prediction model characterizes its ability to correctly predict the flow condition of the server in the prediction period from the flow data. After each prediction model is trained, it is validated using data that was not used during training, and the prediction weight of each prediction model is determined according to the accuracy obtained during this validation.
Note that, the manner in which the accuracy of each prediction model is determined in the embodiments of the present application is not particularly limited. Any way of determining the accuracy of each prediction model when each prediction model is tested is suitable for the embodiments of the present application.
Step 504, determining the prediction weight of each prediction model according to the precision of each prediction model.
In the embodiment of the present application, the prediction weight of each prediction model may be further determined according to the accuracy of each prediction model obtained in the above steps. For example, for any prediction model, the ratio of its accuracy to the sum of the accuracies of all prediction models may be used as its prediction weight. For example, if the accuracy of the first prediction model is X and the accuracy of the second prediction model is Y, the prediction weight of the first prediction model may be X/(X+Y), and the prediction weight of the second prediction model may be Y/(X+Y).
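The accuracy-proportional weighting just described reduces to a simple normalization; a sketch with hypothetical names:

```python
# Each model's prediction weight is its accuracy divided by the sum of
# all models' accuracies, so the weights always sum to 1.
def accuracy_weights(accuracies):
    total = sum(accuracies)
    return [a / total for a in accuracies]

# First model accuracy X = 0.9, second model accuracy Y = 0.6:
weights = accuracy_weights([0.9, 0.6])
# weights == [0.6, 0.4], i.e. X/(X+Y) and Y/(X+Y)
```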
According to the service automatic degradation method provided by the embodiment of the application, the prediction weight of each prediction model can be determined according to the precision of each prediction model. According to the method and the device for predicting the flow, the prediction model with low precision can be guaranteed to have low prediction weight, and the prediction model with high precision is guaranteed to have high prediction weight, so that the precision of flow prediction can be further improved.
In one embodiment, in step 408, training the initial prediction model corresponding to each prediction model through the training set to obtain each prediction model includes:
For any prediction model, in the process of training the corresponding initial prediction model on the training set, a particle swarm algorithm is used to determine the optimal super-parameters of the prediction model. While the particle swarm algorithm searches for these optimal super-parameters, the inertia factor in the algorithm is updated in real time based on historical data from the particle motion process, and the optimal super-parameters of the prediction model are determined using the updated particle swarm algorithm.
Super-parameters are external parameters of each prediction model that must be set manually, such as the number of iterations and the batch size. The inertia factor is the parameter in the particle swarm algorithm that controls how much of its previous velocity a particle retains: a larger inertia factor favors global search by the particles, while a smaller inertia factor favors local search.
In the embodiment of the application, the value of the inertia factor of each particle in the particle swarm algorithm can be updated firstly based on the historical data in the particle motion process, and then the optimal super-parameters of each prediction model are determined according to the updated particle swarm algorithm. When determining the optimal super-parameters of a certain prediction model according to the particle swarm optimization, namely searching the optimal values of the super-parameters of the prediction model, the super-parameters of the prediction model can be set as coordinates of particles in an n-dimensional space, and the optimal solution searched by the particle swarm optimization in the n-dimensional space is the optimal values of the super-parameters of the prediction model.
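A minimal particle swarm search over a super-parameter space might look like the following sketch. For simplicity it keeps the inertia factor fixed rather than updating it; the toy objective stands in for a model's validation loss, and all names and parameter choices are ours:

```python
import random

def pso_search(objective, bounds, n_particles=10, iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimization. bounds: (lo, hi) per dimension;
    each particle's coordinates are one candidate super-parameter setting."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity: inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "validation loss" whose optimum is at (epochs=30, batch=64):
loss = lambda p: (p[0] - 30) ** 2 + (p[1] - 64) ** 2
best, best_val = pso_search(loss, bounds=[(1, 100), (1, 256)])
```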
According to the service automatic degradation method provided by the embodiment of the application, the particle swarm algorithm can be updated by updating the inertia factors in the particle swarm algorithm, and then the optimal super parameters of each prediction model are determined by the updated particle swarm algorithm. The inertial factors of the particle swarm algorithm are adjusted due to the capability of the inertial factors to control the particles to conduct global searching and local searching, so that the speed of the particles reaching the optimal solution can be increased, the accuracy of searching the optimal solution by the particles is improved, and the training speed of the prediction model can be further increased.
In one embodiment, as shown in fig. 6, in the above embodiment, updating the inertia factor in the particle swarm algorithm based on the historical data in the particle movement process in real time includes:
Step 602, determining a historical maximum inertia factor, a historical minimum inertia factor, a particle historical minimum target value and a particle historical maximum target value of the particle in the movement process according to historical data in the particle movement process.
The historical maximum inertia factor is the maximum value of all inertia factor values taken by the particles in the motion process, the historical minimum inertia factor is the minimum value of all inertia factor values taken by the particles in the motion process, the particle historical minimum target value is the minimum value of all objective function values taken by the particles in the motion process, and the particle historical maximum target value is the maximum value of all objective function values taken by the particles in the motion process.
Step 604, determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the particle historical minimum target value, and the particle historical maximum target value.
In this embodiment of the present application, the first inertia factor adjustment value is used to adjust the inertia factor of the particle during its motion. It is positively related to the difference between the historical maximum inertia factor and the historical minimum inertia factor, and negatively related to the difference between the particle historical maximum target value and the particle historical minimum target value. Illustratively, the first inertia factor adjustment value may be determined by taking the difference between the historical maximum inertia factor and the historical minimum inertia factor, multiplying it by the difference between the current target value of the particle (i.e., the objective function value of the particle at the current time) and the particle historical minimum target value, and dividing by the difference between the particle historical maximum target value and the particle historical minimum target value (see formula (one)):
θ = (ω_max - ω_min) · (f - f_min) / (f_max - f_min) (formula (one))
wherein θ is the first inertia factor adjustment value, ω_max is the historical maximum inertia factor, ω_min is the historical minimum inertia factor, f is the current target value of the particle, f_min is the particle historical minimum target value, and f_max is the particle historical maximum target value.
Step 606, taking the difference between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
In this embodiment of the present application, the first inertia factor adjustment value may be subtracted from the historical maximum inertia factor to obtain the inertia factor of the particle swarm algorithm at the current moment (see formula (two)):
ω = ω_max - θ (formula (two))
wherein ω is the inertia factor of the particle swarm algorithm at the current moment, ω_max is the historical maximum inertia factor, and θ is the first inertia factor adjustment value.
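Formulas (one) and (two) can be transcribed directly; the variable names mirror the symbols in the text, and the sketch is illustrative rather than the embodiment's code:

```python
# Adaptive inertia factor from formulas (one) and (two).
def updated_inertia(omega_max, omega_min, f, f_min, f_max):
    """f: the particle's current target value; f_min / f_max: its
    historical minimum / maximum target values."""
    theta = (omega_max - omega_min) * (f - f_min) / (f_max - f_min)  # formula (one)
    return omega_max - theta                                         # formula (two)

# A particle at its historical minimum target value keeps omega_max;
# at its historical maximum target value it drops to omega_min:
w_best = updated_inertia(0.9, 0.4, f=10.0, f_min=10.0, f_max=20.0)   # 0.9
w_worst = updated_inertia(0.9, 0.4, f=20.0, f_min=10.0, f_max=20.0)  # 0.4
```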
According to the service automatic degradation method provided by the embodiment of the application, the inertia factor of the particles at the current moment can be adjusted according to the historical data of the particles in the motion process. The inertial factors of the particle swarm algorithm are adjusted due to the capability of the inertial factors to control the particles to conduct global searching and local searching, so that the speed of the particles reaching the optimal solution can be increased, the accuracy of searching the optimal solution by the particles is improved, and the training speed of the prediction model can be further increased.
In order for those skilled in the art to better understand the embodiments of the present application, the embodiments of the present application are described below by way of specific examples.
Illustratively, as shown in FIG. 7, a flow chart of a method of service auto-downgrading is shown.
The service automatic degradation method provided by the embodiment of the application employs several service degradation strategies simultaneously, including timeout degradation, failure-count degradation, fault degradation, and current-limiting degradation. For timeout degradation, failure-count degradation, and fault degradation, embodiments of the present application employ a fixed-threshold degradation policy. For example, for timeout degradation, the service may be degraded if the number of service timeouts exceeds a threshold; for failure-count degradation, the service may be degraded if the number of service call failures exceeds a threshold; and for fault degradation, the service may be degraded if the number of service faults exceeds a threshold.
For current limiting degradation, the embodiment of the application adopts an early warning degradation strategy, namely, the flow condition of the server in a prediction period is predicted through a flow prediction model, and a target service to be degraded is determined according to a predicted flow result, so that degradation processing is carried out on the target service in advance.
When training the flow prediction model, the embodiment of the application needs to determine the optimal super parameters of each prediction model in the flow prediction model. In the embodiment of the application, the optimal super-parameters of each prediction model can be determined through a particle swarm algorithm. The step of determining the optimal super parameter by using the particle swarm algorithm may refer to the related description of the foregoing embodiments, which are not described herein.
When the particle swarm algorithm is used to determine the optimal super-parameters of each prediction model, the algorithm itself may be updated. A drawback of the particle swarm algorithm is that during the search it easily becomes trapped in a local optimum, so that the global optimum cannot be found and the search precision is low. To address this drawback, the embodiment of the application updates the inertia factor in the particle swarm algorithm in real time during the search, based on historical data from the particle motion process. For the step of updating the inertia factor, refer to the related description of the foregoing embodiments, which is not repeated here.
After determining the optimal super parameters of the prediction model, the embodiment of the application trains the prediction model to obtain the flow prediction model. In the training process of the prediction model, the embodiment of the application can screen each flow characteristic in the historical flow data corresponding to the sample server, and only constructs a training set according to the flow data corresponding to the target flow characteristic with higher characteristic weight.
According to the method and the device, the prediction models are trained by adopting 80% of data in the flow data corresponding to the target flow characteristics, after model training is completed, the accuracy of the prediction models is checked by adopting the remaining 20% of data in the flow data corresponding to the target flow characteristics, and the prediction weights of the prediction models are determined according to the accuracy of the prediction models.
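The 80/20 split described above can be sketched as follows; a chronological split is assumed here, since the embodiment does not state whether the data are shuffled, and the names are ours:

```python
# Split the flow data for the target characteristics into a training
# portion (80%) and a validation portion (20%) used to check accuracy.
def split_train_validation(records, train_fraction=0.8):
    cut = int(len(records) * train_fraction)
    return records[:cut], records[cut:]

data = list(range(100))  # stand-in for 100 flow data records
train, val = split_train_validation(data)
# len(train) == 80, len(val) == 20
```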
In the practical use of the traffic prediction model, the embodiment of the present application needs to determine the target traffic data from a plurality of historical traffic data of the server. The target flow data may be, for example, flow data corresponding to the target flow characteristics determined in the predictive model training process above among a plurality of historical flow data of the server.
According to the embodiment of the application, the target flow data can be predicted through the flow prediction model, and degradation processing is carried out on the target service according to the predicted flow result.
According to the service automatic degradation method described above, the target flow data is determined and predicted by the flow prediction model, so the target service to be degraded can be identified whenever the predicted flow result indicates that the server's flow will exceed the flow threshold during the prediction period, and the target service can be degraded in a timely manner. That is, the service automatic degradation method provided by the embodiment of the application can degrade the target service before the server's flow exceeds the flow threshold, which improves the timeliness of degradation processing; by limiting the flow of the target service in advance, sufficient server resources are reserved for periods of heavy flow, so the stability of the core services in the server can be improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a service automatic degradation device for realizing the service automatic degradation method. The implementation of the solution to the problem provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the service automatic degradation device or devices provided below may be referred to the limitation of the service automatic degradation method hereinabove, and will not be described herein.
In one embodiment, as shown in fig. 8, there is provided an automatic service degradation apparatus, including: a first determination module 802, a prediction module 804, a second determination module 806, a demotion module 808, wherein:
a first determining module 802, configured to determine target traffic data from a plurality of historical traffic data of a server;
the prediction module 804 is configured to predict the target flow data through a flow prediction model, to obtain a predicted flow result of the server, where the predicted flow result is used to characterize a flow condition of the server in a prediction period;
a second determining module 806, configured to determine a target service to be degraded, in a case that a target flow value exceeding a flow threshold exists in the predicted flow result;
and the degradation module 808 is configured to perform degradation processing on the target service to implement current limiting on the target service.
According to the service automatic degradation device described above, the target flow data is determined and predicted by the flow prediction model, so the target service to be degraded can be identified whenever a target flow value exceeding the flow threshold exists in the predicted flow result, and the target service can be degraded in a timely manner. That is, the service automatic degradation device provided by the embodiment of the application can degrade the target service before the server's flow exceeds the flow threshold, which improves the timeliness of degradation processing; by limiting the flow of the target service in advance, sufficient server resources are reserved for periods of heavy flow, so the stability of the core services in the server can be improved.
In one embodiment, the downgrade module 808 is further configured to:
determining a target flow value greater than the flow threshold from the predicted flow result;
determining a flow difference value according to the target flow value and the flow threshold value;
and determining a target service to be degraded from the services with the target service attribute according to the flow difference, wherein the service attribute is used for representing the core degree of the service.
In one embodiment, the flow prediction model includes at least two prediction models, and the prediction module 804 is further configured to:
predicting the target flow data through each prediction model to obtain a plurality of initial prediction results;
and carrying out fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain a prediction flow result.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring flow data corresponding to each flow characteristic from the historical flow data corresponding to the sample server;
the third determining module is used for determining a target flow characteristic from the flow characteristics according to the flow data corresponding to the flow characteristics by adopting a characteristic weight algorithm;
The construction module is used for constructing a training set according to the flow data corresponding to the target flow characteristics;
and the training module is used for training the initial prediction model corresponding to each prediction model through the training set respectively to obtain the flow prediction model.
In one embodiment, the apparatus further comprises:
a fourth determining module, configured to determine the accuracy of each prediction model;
and a fifth determining module, configured to determine the prediction weight of each prediction model according to the accuracy of each prediction model.
In one embodiment, the training module is further configured to:
aiming at any prediction model, in the process of training an initial prediction model corresponding to the prediction model through the training set, determining the optimal super-parameters of the prediction model by adopting a particle swarm algorithm, and in the process of determining the optimal super-parameters of the prediction model by adopting the particle swarm algorithm, updating inertia factors in the particle swarm algorithm in real time based on historical data in the particle motion process, and determining the optimal super-parameters of the prediction model by adopting the updated particle swarm algorithm.
In one embodiment, the training module is further configured to:
According to the historical data in the particle movement process, determining a historical maximum inertia factor, a historical minimum inertia factor, a particle historical minimum target value and a particle historical maximum target value of the particles in the movement process;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the particle historical minimum target value and the particle historical maximum target value;
and taking the difference value between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
The various modules in the service automatic degradation device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method of service auto-downgrading.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this description.
The above embodiments represent only a few implementations of the present application; although they are described in specific detail, they are not to be construed as limiting the scope of the patent. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, and such modifications and improvements fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of automatic service degradation, the method comprising:
determining target flow data from a plurality of historical flow data of a server;
predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, wherein the predicted flow result is used for representing a flow condition of the server in a prediction period; the flow prediction model is obtained by determining optimal hyper-parameters of the flow prediction model through a particle swarm algorithm in the process of training an initial prediction model; and in the process of determining the optimal hyper-parameters through the particle swarm algorithm, inertia factors in the particle swarm algorithm are updated in real time based on historical data generated during particle motion;
determining a target service to be degraded under the condition that a target flow value exceeding a flow threshold exists in the predicted flow result;
performing degradation processing on the target service to implement rate limiting on the target service;
wherein updating the inertia factors in the particle swarm algorithm comprises the following steps:
subtracting a historical minimum inertia factor from a historical maximum inertia factor to obtain a first result; multiplying the first result by the difference between a particle's current target value and the particle's historical minimum target value to obtain a second result; and dividing the second result by the difference between the particle's historical maximum target value and its historical minimum target value to obtain a first inertia factor adjustment value;
subtracting the first inertia factor adjustment value from the historical maximum inertia factor to obtain a current inertia factor;
and updating the inertia factors in the particle swarm algorithm according to the current inertia factor.
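The inertia-factor update recited in claim 1 amounts to rescaling the inertia weight by the particle's position within its historical range of target values; a minimal sketch (the function and variable names are illustrative, not taken from the patent):

```python
def update_inertia(w_max, w_min, f_cur, f_min, f_max):
    """Adaptive inertia factor per claim 1.

    w_max, w_min : historical maximum / minimum inertia factors
    f_cur        : the particle's current target (objective) value
    f_min, f_max : the particle's historical minimum / maximum target values
    """
    # first result: spread between the historical max and min inertia factors
    first = w_max - w_min
    # second result: scale by how far the current target value sits
    # above the particle's historical minimum target value
    second = first * (f_cur - f_min)
    # first inertia factor adjustment value: normalize by the particle's
    # historical target-value range
    adjustment = second / (f_max - f_min)
    # current inertia factor: historical max minus the adjustment
    return w_max - adjustment
```

With w_max = 0.9 and w_min = 0.4, a particle sitting at its historical best (f_cur == f_min) keeps w = 0.9, while one at its historical worst drops to w = 0.4, so (under minimization) well-performing particles retain a large inertia weight.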
2. The method of claim 1, wherein the determining a target service to be downgraded if there is a target flow value in the predicted flow result that exceeds a flow threshold comprises:
determining a target flow value greater than a flow threshold from the predicted flow result;
determining a flow difference value according to the target flow value and the flow threshold value;
and determining, according to the flow difference value, a target service to be degraded from among services having a target service attribute, wherein the service attribute is used for representing the criticality of the service.
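One way to act on the flow difference value of claim 2 is a greedy selection that degrades the least-critical services first until the expected saved traffic covers the predicted overload; the tuple layout and the greedy policy below are assumptions for illustration, not fixed by the claim:

```python
def select_services_to_degrade(services, target_flow_value, flow_threshold):
    """Greedy selection (illustrative policy): degrade the least-critical
    services first until their combined expected traffic covers the
    predicted overload.

    services : list of (name, core_level, expected_traffic) tuples,
               where a lower core_level means a less critical service
    """
    flow_difference = target_flow_value - flow_threshold  # predicted overload
    to_degrade, saved = [], 0.0
    # iterate from least to most critical service
    for name, core_level, traffic in sorted(services, key=lambda s: s[1]):
        if saved >= flow_difference:
            break
        to_degrade.append(name)
        saved += traffic
    return to_degrade
```

For services [("recommend", 1, 300), ("search", 2, 500), ("pay", 3, 800)], a predicted flow value of 1200 against a threshold of 1000 degrades only "recommend"; a predicted value of 1700 also degrades "search".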
3. The method according to claim 1 or 2, wherein the flow prediction model includes at least two prediction models, and the predicting the target flow data by the flow prediction model to obtain the predicted flow result of the server includes:
predicting the target flow data through each prediction model to obtain a plurality of initial prediction results;
and carrying out fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain a prediction flow result.
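The fusion processing of claim 3 is, in its simplest reading, a per-time-point weighted sum of the models' forecasts; a sketch under that assumption:

```python
def fuse_predictions(predictions, weights):
    """Fuse per-model forecasts into one predicted flow result.

    predictions : one forecast per prediction model, each a list of
                  flow values over the same prediction period
    weights     : one prediction weight per model (assumed to sum to 1)
    """
    n_points = len(predictions[0])
    # weighted sum across models at each prediction time point
    return [
        sum(weight * forecast[t] for forecast, weight in zip(predictions, weights))
        for t in range(n_points)
    ]

fused = fuse_predictions([[100, 200], [120, 180]], [0.75, 0.25])
# → [105.0, 195.0]
```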
4. A method according to claim 3, characterized in that the method further comprises:
acquiring flow data corresponding to each flow characteristic from historical flow data corresponding to a sample server;
determining a target flow characteristic from the flow characteristics according to the flow data corresponding to the flow characteristics by adopting a characteristic weight algorithm;
constructing a training set according to the flow data corresponding to the target flow characteristics;
and training the initial prediction model corresponding to each prediction model through the training set to obtain the flow prediction model.
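Claim 4 leaves the feature weight algorithm open. As one plausible instantiation (purely illustrative, not specified by the patent), each candidate flow feature could be weighted by the absolute Pearson correlation between its values and the observed flow, keeping the top-weighted features as target flow characteristics:

```python
import math

def select_target_features(feature_data, observed_flow, top_k=2):
    """Weight each flow feature by |Pearson correlation| with the observed
    flow and keep the top_k features. The choice of Pearson correlation
    as the 'feature weight algorithm' is an assumption for illustration."""

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0  # 0 for constant series

    weights = {name: abs(pearson(values, observed_flow))
               for name, values in feature_data.items()}
    # highest-weighted features become the target flow characteristics
    return sorted(weights, key=weights.get, reverse=True)[:top_k]
```

A constant feature gets weight 0 and is dropped, while features that track the observed flow closely are kept for the training set.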
5. The method according to claim 4, wherein the method further comprises:
determining the accuracy of each prediction model;
and determining the prediction weight of each prediction model according to the accuracy of each prediction model.
6. The method of claim 5, wherein the determining the prediction weight of each prediction model according to the accuracy of each prediction model comprises:
for any one of the prediction models, taking the ratio of the accuracy of that prediction model to the sum of the accuracies of all the prediction models as the prediction weight of that prediction model.
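Claim 6 fixes each weight as an accuracy ratio, which guarantees the weights sum to 1; sketched:

```python
def prediction_weights(accuracies):
    """Each model's prediction weight is its accuracy divided by the sum
    of all models' accuracies (claim 6), so the weights sum to 1."""
    total = sum(accuracies)
    return [acc / total for acc in accuracies]
```

For example, two models with accuracies 0.9 and 0.6 receive weights 0.6 and 0.4, so the more accurate model contributes more to the fused predicted flow result.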
7. An apparatus for automatic degradation of services, the apparatus comprising:
the first determining module is used for determining target flow data from a plurality of historical flow data of the server;
the prediction module is used for predicting the target flow data through a flow prediction model to obtain a predicted flow result of the server, wherein the predicted flow result is used for representing a flow condition of the server in a prediction period; the flow prediction model is obtained by determining optimal hyper-parameters of the flow prediction model through a particle swarm algorithm in the process of training an initial prediction model; and in the process of determining the optimal hyper-parameters through the particle swarm algorithm, inertia factors in the particle swarm algorithm are updated in real time based on historical data generated during particle motion;
the second determining module is used for determining a target service to be degraded under the condition that a target flow value exceeding a flow threshold exists in the predicted flow result;
the degradation module is used for performing degradation processing on the target service to implement rate limiting on the target service;
wherein updating the inertia factors in the particle swarm algorithm comprises the following steps:
subtracting a historical minimum inertia factor from a historical maximum inertia factor to obtain a first result; multiplying the first result by the difference between a particle's current target value and the particle's historical minimum target value to obtain a second result; and dividing the second result by the difference between the particle's historical maximum target value and its historical minimum target value to obtain a first inertia factor adjustment value;
subtracting the first inertia factor adjustment value from the historical maximum inertia factor to obtain a current inertia factor;
and updating the inertia factors in the particle swarm algorithm according to the current inertia factor.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202210608028.8A 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium Active CN114826951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608028.8A CN114826951B (en) 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210608028.8A CN114826951B (en) 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114826951A CN114826951A (en) 2022-07-29
CN114826951B true CN114826951B (en) 2024-02-20

Family

ID=82519687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210608028.8A Active CN114826951B (en) 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114826951B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949666A (en) * 2021-11-15 2022-01-18 中国银行股份有限公司 Flow control method, device, equipment and system
CN114116207A (en) * 2021-11-11 2022-03-01 中国银行股份有限公司 Flow control method, device, equipment and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-class user equilibrium assignment model and solution algorithm for degraded road networks; Shi Feng; Luo Duangao; Journal of Transportation Systems Engineering and Information Technology; 2008-08-15 (No. 04); full text *
Traffic flow evolution on a degraded road network under predictive information; Kuang Aiwu; Zhang Shengwei; Qin Dingming; Journal of Changsha University of Science and Technology (Natural Science); 2019-12-28 (No. 04); full text *

Also Published As

Publication number Publication date
CN114826951A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN110289994B (en) Cluster capacity adjusting method and device
KR20210032140A (en) Method and apparatus for performing pruning of neural network
CN113326126A (en) Task processing method, task scheduling device and computer equipment
CN113408711A (en) Ship motion extremely-short-term forecasting method and system based on LSTM neural network
CN114168318A (en) Training method of storage release model, storage release method and equipment
CN111832693A (en) Neural network layer operation and model training method, device and equipment
CN114826951B (en) Service automatic degradation method, device, computer equipment and storage medium
CN114444676A (en) Model channel pruning method and device, computer equipment and storage medium
CN116524296A (en) Training method and device of equipment defect detection model and equipment defect detection method
CN116700955A (en) Job processing method, apparatus, computer device, and readable storage medium
CN116541128A (en) Load adjusting method, device, computing equipment and storage medium
CN115941696A (en) Heterogeneous Big Data Distributed Cluster Storage Optimization Method
WO2023113946A1 (en) Hyperparameter selection using budget-aware bayesian optimization
CN113485848A (en) Deep neural network deployment method and device, computer equipment and storage medium
CN113407192B (en) Model deployment method and device
CN114465957B (en) Data writing method and device
CN113723593B (en) Cut load prediction method and system based on neural network
CN113162780B (en) Real-time network congestion analysis method, device, computer equipment and storage medium
CN114841757A (en) Prediction model training method and device and price prediction method and device
CN117076093B (en) Storage resource scheduling method and device based on machine learning and storage medium
CN116610546A (en) Job early warning method, device, computer equipment and storage medium
CN115202591B (en) Storage device, method and storage medium of distributed database system
CN117372148A (en) Method, apparatus, device, medium and program product for determining credit risk level
CN117395179A (en) Distribution method and device for intention-driven network security monitoring resources
CN114090238A (en) Edge node load prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant