CN114826951A - Service automatic degradation method, device, computer equipment and storage medium - Google Patents

Service automatic degradation method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN114826951A
Authority
CN
China
Prior art keywords
flow
target
traffic
prediction model
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210608028.8A
Other languages
Chinese (zh)
Other versions
CN114826951B (en)
Inventor
程鹏
白佳乐
任政
谢伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210608028.8A
Publication of CN114826951A
Application granted
Publication of CN114826951B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/29 Flow control; Congestion control using a combination of thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a method, an apparatus, a computer device and a storage medium for automatically degrading services, relates to the field of artificial intelligence, and can be used in the field of financial technology. The method comprises the following steps: determining target traffic data from a plurality of pieces of historical traffic data of a server; predicting the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, wherein the predicted traffic result is used for representing the traffic condition of the server in a prediction period; determining a target service to be degraded when a target traffic value exceeding a traffic threshold exists in the predicted traffic result; and performing degradation processing on the target service to throttle (rate-limit) the target service. By adopting the method, the target service can be degraded before the traffic of the server exceeds the traffic threshold, sufficient server resources are reserved for the server, and the stability of the core services in the server is improved.

Description

Service automatic degradation method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a method, an apparatus, a computer device, and a storage medium for automatically degrading a service.
Background
In a practical application scenario, a plurality of different services often run simultaneously on one server. To conserve server resources, the server is typically provisioned with fewer resources than the total demand of all services, so during service peaks the server may be unable to allocate sufficient resources for every service to perform its tasks. At such times, in order to ensure the normal operation of core services, non-core services need to be degraded so that the resource usage of core services is guaranteed first.
In the related art, service degradation of non-core tasks is usually initiated only after a service invocation has failed. Such degradation is not timely enough: by the time an invocation failure occurs, the core service may already have been affected, reducing its stability.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device and a storage medium for automatically degrading a service.
In a first aspect, the present application provides a method for automatically downgrading a service. The method comprises the following steps:
determining target traffic data from a plurality of historical traffic data of a server;
predicting the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, wherein the predicted traffic result is used for representing the traffic condition of the server in a prediction time period;
determining a target service to be degraded when a target traffic value exceeding a traffic threshold exists in the predicted traffic result;
and performing degradation processing on the target service to throttle the target service.
In one embodiment, determining the target service to be degraded when a target traffic value exceeding the traffic threshold exists in the predicted traffic result includes:
determining a target traffic value greater than the traffic threshold from the predicted traffic result;
determining a traffic difference according to the target traffic value and the traffic threshold;
and determining the target service to be degraded from the services having a target service attribute according to the traffic difference, wherein the service attribute is used for representing the core degree (importance) of a service.
In one embodiment, the traffic prediction model includes at least two prediction models, and predicting the target traffic data through the traffic prediction model to obtain the predicted traffic result of the server includes:
predicting the target traffic data through each prediction model respectively to obtain a plurality of initial prediction results;
and fusing the plurality of initial prediction results according to the prediction weight of each prediction model to obtain the predicted traffic result.
In one embodiment, the method further comprises:
obtaining traffic data corresponding to each traffic feature from historical traffic data corresponding to a sample server;
determining target traffic features from the traffic features according to the traffic data corresponding to each traffic feature by using a feature weight algorithm;
constructing a training set according to the traffic data corresponding to the target traffic features;
and training the initial prediction models corresponding to the prediction models respectively through the training set to obtain the traffic prediction model.
In one embodiment, the method further comprises:
determining the accuracy of each prediction model;
and determining the prediction weight of each prediction model according to the accuracy of each prediction model.
In one embodiment, training the initial prediction models corresponding to the prediction models respectively through the training set includes:
for any prediction model, determining optimal hyper-parameters of the prediction model by using a particle swarm algorithm while training the initial prediction model corresponding to the prediction model through the training set; in the process of determining the optimal hyper-parameters, updating an inertia factor in the particle swarm algorithm in real time based on historical data of the particle motion process, and determining the optimal hyper-parameters of the prediction model with the updated particle swarm algorithm.
In one embodiment, the updating the inertia factor in the particle swarm algorithm in real time based on historical data in the particle motion process includes:
determining a historical maximum inertia factor, a historical minimum inertia factor, a historical minimum target value of the particle and a historical maximum target value of the particle according to historical data of the particle motion process;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the historical minimum target value of the particle and the historical maximum target value of the particle;
and using the difference between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
In a second aspect, the present application further provides an apparatus for automatically degrading services. The device comprises:
a first determining module, configured to determine target traffic data from a plurality of pieces of historical traffic data of a server;
the prediction module is used for predicting the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, and the predicted traffic result is used for representing the traffic condition of the server in a prediction time period;
a second determining module, configured to determine, when a target traffic value exceeding a traffic threshold exists in the predicted traffic result, a target service to be degraded;
and a degradation module, configured to perform degradation processing on the target service to throttle the target service.
In one embodiment, the degradation module is further configured to:
determining a target traffic value greater than the traffic threshold from the predicted traffic result;
determining a traffic difference according to the target traffic value and the traffic threshold;
and determining the target service to be degraded from the services having a target service attribute according to the traffic difference, wherein the service attribute is used for representing the core degree (importance) of a service.
In one embodiment, the traffic prediction model includes at least two prediction models, and the prediction module is further configured to:
predicting the target traffic data through each prediction model respectively to obtain a plurality of initial prediction results;
and fusing the plurality of initial prediction results according to the prediction weight of each prediction model to obtain the predicted traffic result.
In one embodiment, the apparatus further comprises:
an acquisition module, configured to acquire traffic data corresponding to each traffic feature from historical traffic data corresponding to a sample server;
a third determining module, configured to determine target traffic features from the traffic features according to the traffic data corresponding to each traffic feature by using a feature weight algorithm;
a construction module, configured to construct a training set according to the traffic data corresponding to the target traffic features;
and a training module, configured to train the initial prediction models corresponding to the prediction models respectively through the training set to obtain the traffic prediction model.
In one embodiment, the apparatus further comprises:
a fourth determining module, configured to determine the accuracy of each prediction model;
and a fifth determining module, configured to determine the prediction weight of each prediction model according to the accuracy of each prediction model.
In one embodiment, the training module is further configured to:
for any prediction model, determining optimal hyper-parameters of the prediction model by using a particle swarm algorithm while training the initial prediction model corresponding to the prediction model through the training set; in the process of determining the optimal hyper-parameters, updating an inertia factor in the particle swarm algorithm in real time based on historical data of the particle motion process, and determining the optimal hyper-parameters of the prediction model with the updated particle swarm algorithm.
In one embodiment, the training module is further configured to:
determining a historical maximum inertia factor, a historical minimum inertia factor, a historical minimum target value of the particle and a historical maximum target value of the particle according to historical data of the particle motion process;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the historical minimum target value of the particle and the historical maximum target value of the particle;
and using the difference between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that implements any of the above methods when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements any of the above methods.
In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program that, when executed by a processor, implements any of the above methods.
According to the method, the apparatus, the computer device and the storage medium for automatically degrading services, target traffic data is determined, the target traffic data is predicted through the traffic prediction model, the target service to be degraded is determined when a target traffic value exceeding the traffic threshold exists in the predicted traffic result, and the target service is degraded in time. That is, the method, apparatus, computer device and storage medium provided in the embodiments of the present application can degrade the target service before the traffic of the server exceeds the traffic threshold, which improves the timeliness of degradation. By throttling the target service in advance when the traffic of the server is high, sufficient server resources are reserved for the server, and the stability of the core services in the server can be improved.
Drawings
FIG. 1 is a flow diagram that illustrates a method for automatic degradation of services, according to one embodiment;
FIG. 2 is a schematic flow chart of step 106 in one embodiment;
FIG. 3 is a schematic flow chart of step 104 in one embodiment;
FIG. 4 is a flow diagram that illustrates a method for automatic degradation of services, according to one embodiment;
FIG. 5 is a flow diagram that illustrates a method for automatic degradation of services, according to one embodiment;
FIG. 6 is a flow diagram that illustrates a method for automatic degradation of services, according to one embodiment;
FIG. 7 is a diagram of a method for automatic degradation of services in one embodiment;
FIG. 8 is a block diagram of an apparatus for automatic degradation of service in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in fig. 1, a method for automatically degrading a service is provided. This embodiment is illustrated by applying the method to a terminal; it is to be understood that the method may also be applied to a server, or to a system including the terminal and the server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
Step 102, determining target traffic data from a plurality of pieces of historical traffic data of the server.
In the embodiment of the present application, the historical traffic data is traffic data generated on the server when the terminal invokes services, such as the year-on-year traffic ratio, the period-on-period traffic ratio, the number of call failures, the call timeout duration, the caller, the callee, and the like. The target traffic data is the traffic data, among the historical traffic data, that satisfies a preset condition.
For example, the traffic data satisfying the preset condition may be the traffic data corresponding to traffic features whose feature weights are greater than a threshold. The specific value of the threshold is not limited in the embodiment of the present application and may be chosen empirically by a person skilled in the art. In the embodiment of the present application, a traffic feature refers to a category of traffic data, and a feature weight is used to characterize the ability of a certain category of traffic data to predict the traffic result. For example, any feature weight algorithm may be used to determine the feature weight of each traffic feature, the traffic features whose feature weights are greater than the threshold are selected, and the traffic data corresponding to those traffic features is used as the target traffic data.
It should be noted that the target traffic data is not necessarily only part of the historical traffic data: if the feature weights of all traffic features are greater than the threshold, the traffic data corresponding to all traffic features, that is, all the historical traffic data, may be selected as the target traffic data.
After the target traffic data is selected, it may be preprocessed, for example by averaging and normalizing the data within each second. The specific preprocessing procedure is not detailed in this embodiment; any manner that can implement the above preprocessing operations is applicable.
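For illustration, the following Python sketch shows one possible form of such preprocessing, assuming raw samples arrive more than once per second, are averaged into one value per second, and are then min-max normalized; the function and variable names are illustrative and not prescribed by the embodiment.

```python
import numpy as np

def preprocess(timestamps, values):
    """timestamps: unix seconds (float); values: raw traffic samples."""
    per_second = {}
    for t, v in zip(timestamps, values):
        per_second.setdefault(int(t), []).append(v)
    # one averaged value per second, in chronological order
    series = np.array([np.mean(vs) for _, vs in sorted(per_second.items())])
    # min-max normalization; the small epsilon guards against a constant series
    return (series - series.min()) / (series.max() - series.min() + 1e-9)
```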
Step 104, predicting the target traffic data through the traffic prediction model to obtain a predicted traffic result of the server, wherein the predicted traffic result is used for representing the traffic condition of the server in a prediction period.
In the embodiment of the present application, the target traffic data can be predicted through a pre-trained traffic prediction model to obtain a corresponding predicted traffic result. The predicted traffic result represents the traffic condition of the server in the prediction period, so that whether degradation processing needs to be performed on a target service can be judged according to that traffic condition. The prediction period is a period of a certain length starting from the current time, and its specific length can be set as required by a person skilled in the art. The predicted traffic result includes a predicted traffic value of the server at each time point in the prediction period.
Step 106, determining the target service to be degraded when a target traffic value exceeding the traffic threshold exists in the predicted traffic result.
In the embodiment of the present application, the traffic threshold refers to the upper limit of service traffic that the server can handle at the same time. It is a preset value, and its specific value can be set as required by a person skilled in the art. The target service is a service that can be degraded when the traffic value of the server exceeds the traffic threshold. For example, the server may divide services into core services and non-core services according to their importance, where core services are more important than non-core services. The target service may be a non-core service, that is, a service of relatively low importance among all services handled by the server.
After the predicted traffic result is obtained, the predicted traffic value of the server at each time point in the prediction period can be compared with the traffic threshold. If the predicted traffic value at any time point is greater than the traffic threshold, the traffic at that time point will exceed the threshold, and the predicted traffic value at that time point can be determined as a target traffic value. After the comparison, the target service to be degraded can be determined according to each target traffic value and the traffic threshold, and the target service is degraded in advance, which avoids the risk that invocations of the core service fail when the traffic value of the server exceeds the traffic threshold.
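A minimal sketch of this comparison is shown below; the per-minute prediction values, time labels and the threshold are hypothetical and only illustrate how target traffic values are picked out of the predicted traffic result.

```python
# Hypothetical predicted traffic result (per-minute values) and traffic threshold.
predicted_traffic = {"10:00": 820.0, "10:01": 1040.0, "10:02": 990.0}
traffic_threshold = 1000.0

# Step 106: any predicted value above the threshold becomes a target traffic value.
target_values = {t: v for t, v in predicted_traffic.items() if v > traffic_threshold}
if target_values:
    print("degradation needed, target traffic values:", target_values)
```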
Step 108, performing degradation processing on the target service to throttle the target service.
In the embodiment of the present application, performing degradation processing on the target service refers to limiting the server resources that the target service can invoke, for example temporarily shielding the target service. The specific manner of degrading the target service is not limited in the embodiment of the present application; any manner capable of degrading the target service is applicable.
According to the method for automatically degrading services described above, target traffic data is determined, the target traffic data is predicted through the traffic prediction model, the target service to be degraded is determined when a target traffic value exceeding the traffic threshold exists in the predicted traffic result, and the target service is degraded in time. That is, the method provided in the embodiment of the present application can degrade the target service before the traffic of the server exceeds the traffic threshold, which improves the timeliness of degradation; by throttling the target service in advance, sufficient server resources are reserved for the server when its traffic is high, and the stability of the core services in the server can be improved.
In one embodiment, as shown in fig. 2, in step 106, in the case that there is a target traffic value exceeding the traffic threshold in the predicted traffic result, determining the target service to be degraded includes:
in step 202, a target flow rate value greater than a flow rate threshold is determined from the predicted flow rate result.
In the embodiment of the application, a target flow value which is greater than a flow threshold value in the predicted flow result can be determined according to the predicted flow result output by the flow prediction model, so that a target service which needs to be subjected to degradation processing can be judged according to the target flow value and the flow threshold value. The target flow rate value may be one or more.
And step 204, determining a flow difference value according to the target flow value and the flow threshold value.
In the embodiments of the present application, the manner of determining the flow rate difference value according to the target flow rate value and the flow rate threshold value is not particularly limited. For example, in the case where the target flow rate value is one, the target flow rate value may be differentiated from the flow rate threshold value to determine a flow rate difference value. In the case that a plurality of target flow values exist, the embodiment of the present application may determine a flow difference value by subtracting a maximum target flow value from a flow threshold value; or the minimum target flow value is differed with the flow threshold value to determine the flow difference value; alternatively, the flow rate difference may be determined by taking the average value of the target flow rate values and subtracting the average value from the flow rate threshold value.
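The three options above can be written compactly as follows; the helper name and the "mode" parameter are illustrative choices, not terminology from the embodiment.

```python
def traffic_difference(target_values, threshold, mode="max"):
    """target_values: predicted traffic values that exceed the threshold."""
    if mode == "max":
        reference = max(target_values)
    elif mode == "min":
        reference = min(target_values)
    else:  # "mean"
        reference = sum(target_values) / len(target_values)
    return reference - threshold

print(traffic_difference([1040.0, 1100.0], 1000.0, mode="max"))  # 100.0
```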
Step 206, determining the target service to be degraded from the services having the target service attribute according to the traffic difference, wherein the service attribute is used for representing the core degree (importance) of a service.
In this embodiment of the present application, the service attribute may be used to characterize a core degree of the service, for example, the service attribute may be divided into a first service attribute and a second service attribute, where the service having the first service attribute may be a core service, and the service having the second service attribute may be a non-core service. The target service attribute may be a second service attribute, that is, the target service to be degraded may be determined from the non-core service in the embodiment of the present application.
For example, when determining the target services to be degraded according to the traffic difference, the number of target services to be degraded may be determined according to the size of the traffic difference, so as to determine the number of target services from the services (non-core services) having the target service attribute.
The number of target services is positively correlated with the magnitude of the traffic difference: the smaller the traffic difference, the fewer target services are selected, and the larger the traffic difference, the more target services are selected. In this way, while the traffic value of the server in the prediction period is reduced by degrading the target services, the influence on non-core services in the server is kept as small as possible.
For example, when the traffic difference is small, the traffic value of the server exceeds the threshold only slightly in the prediction period. In this case, degrading only part of the non-core services is enough to reduce the traffic value of the server in the prediction period and reserve sufficient resources for the core services, so fewer non-core services are degraded, the influence on the other non-core services is reduced, and their stability is ensured. Conversely, when the traffic difference is large, the traffic value of the server exceeds the threshold by a large margin in the prediction period. In this case, more non-core services need to be degraded to reduce the traffic value of the server in the prediction period and reserve sufficient resources for the core services, so the stability of the core services is ensured first.
When selecting that number of target services from the services (non-core services) having the target service attribute, the target services may be selected randomly, or based on the importance of each service; for example, services with lower importance may be selected first to mitigate the influence on services with higher importance, as in the sketch below.
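The following sketch illustrates the selection; the mapping from traffic difference to the number of services (one more service per fixed amount of excess traffic) is an assumption, since the embodiment only requires a positive correlation, and the service names and importance levels are hypothetical.

```python
def pick_target_services(non_core_services, traffic_diff, traffic_per_service=50.0):
    """non_core_services: list of (name, importance); lower importance is degraded first."""
    # Assumed mapping: one more service per traffic_per_service of excess traffic.
    count = min(len(non_core_services), max(1, int(traffic_diff // traffic_per_service) + 1))
    ranked = sorted(non_core_services, key=lambda s: s[1])  # least important first
    return [name for name, _ in ranked[:count]]

services = [("report-export", 1), ("recommendation", 3), ("ad-push", 2)]
print(pick_target_services(services, traffic_diff=100.0))  # ['report-export', 'ad-push', 'recommendation']
```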
According to the method for automatically degrading services provided in the embodiment of the present application, the difference between the traffic value of the server in the prediction period and the traffic threshold can be judged from the predicted traffic result obtained through the traffic prediction model, and the target service to be degraded is then determined according to that difference. That is, the target service can be degraded before the traffic of the server exceeds the traffic threshold, which improves the timeliness of degradation; by throttling the target service in advance, sufficient server resources are reserved for the server when its traffic is high, and the stability of the core services in the server can be improved.
In an embodiment, as shown in fig. 3, the traffic prediction model includes at least two prediction models, and in step 104, predicting target traffic data by the traffic prediction model to obtain a predicted traffic result of the server includes:
and 302, respectively predicting the target flow data through each prediction model to obtain a plurality of initial prediction results.
In the embodiment of the application, the target flow data selected in the embodiment can be respectively predicted through a plurality of prediction models, so that the initial prediction result of each prediction model is obtained. The prediction model is a model which is trained in advance and used for predicting the flow condition of the server in the prediction time period, the model structure and the training process of the prediction model are not specifically limited in the embodiment of the application, and the prediction model which can predict the flow value of the server in the prediction time period according to the target flow data is suitable for the embodiment of the application. For example: prophet model, LSTM model (Long Short Term Memory network), etc.
And step 304, performing fusion processing on a plurality of initial prediction results according to the prediction weight of each prediction model to obtain a predicted flow result.
In the embodiment of the present application, after the initial prediction results of the prediction models are obtained, the initial prediction results may be fused according to the prediction weights of the prediction models, for example: and according to the prediction weight of each prediction model, weighting and summing the initial prediction results, and the like to obtain a final predicted flow result.
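A minimal sketch of the weighted-sum fusion is shown below, assuming each initial prediction result is a list of per-time-point values and the prediction weights sum to 1; the model names and values are hypothetical.

```python
def fuse_predictions(initial_results, weights):
    """initial_results: one list of per-time-point predictions per model."""
    return [sum(w * result[i] for result, w in zip(initial_results, weights))
            for i in range(len(initial_results[0]))]

prophet_pred = [900.0, 1020.0, 980.0]  # hypothetical outputs of two prediction models
lstm_pred = [880.0, 1060.0, 1000.0]
print(fuse_predictions([prophet_pred, lstm_pred], [0.6, 0.4]))  # [892.0, 1036.0, 988.0]
```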
The manner of determining the prediction weight of each prediction model is not limited in the embodiment of the present application; any manner that can determine the prediction weight of a prediction model is applicable, for example determining the prediction weight according to the loss value of the prediction model during training.
According to the method for automatically degrading services provided in the embodiment of the present application, a plurality of prediction models can be used to predict the target traffic data, and after the initial prediction results of the prediction models are obtained, they are fused according to the prediction weights of the prediction models. Using several prediction models compensates for the errors each individual model produces during prediction, alleviates the problem of insufficient accuracy of a single prediction model, and therefore improves the accuracy of traffic prediction.
In one embodiment, as shown in fig. 4, the method further includes:
Step 402, obtaining the traffic data corresponding to each traffic feature from the historical traffic data corresponding to the sample server.
In the embodiment of the present application, a traffic feature refers to a category of traffic data. Illustratively, the traffic features may include the year-on-year traffic ratio, the period-on-period traffic ratio, the number of call failures, the call timeout duration, the caller, the callee, and the like. The year-on-year traffic ratio is the ratio of the traffic in the current period to the traffic in the same historical period, for example the ratio of today's traffic to the traffic on the same day of the previous week or the previous month. The period-on-period traffic ratio is the ratio of the current traffic to the traffic of the immediately preceding period, for example the ratio of the current period's traffic to the previous period's traffic.
Step 404, determining target traffic features from the traffic features according to the traffic data corresponding to each traffic feature by using a feature weight algorithm.
In the embodiment of the present application, a feature weight algorithm may be used to determine the target traffic features from the traffic features according to the traffic data corresponding to each traffic feature. For example, the feature weight algorithm determines a feature weight for each traffic feature, the feature weights are ranked from high to low, and the traffic features with higher feature weights are determined as the target traffic features. For example, if the traffic features include the year-on-year traffic ratio, the period-on-period traffic ratio, the number of call failures, the call timeout duration, the caller and the callee, the target traffic features may include the year-on-year traffic ratio, the period-on-period traffic ratio, the number of call failures and the call timeout duration.
It should be noted that the feature weight algorithm is not limited in the embodiment of the present application; any algorithm that can determine the feature weight of each traffic feature according to the traffic data corresponding to that feature is applicable.
Taking the ReliefF algorithm as an example of the feature weight algorithm: ReliefF is a multi-class feature extraction algorithm. In the embodiment of the present application, all the traffic data may be divided into two sample groups: traffic data that triggered throttling (i.e., traffic data generated when the traffic value of the sample server exceeded the traffic threshold) and traffic data that did not trigger throttling (i.e., traffic data generated when the traffic value of the sample server did not exceed the traffic threshold). The ReliefF algorithm first randomly takes a traffic sample a from all the traffic data; it then takes the k nearest neighbours of a from the sample group with the same class as a, recorded as a set H, and the k nearest neighbours of a from the sample group with a different class, recorded as a set M. Then, for any feature A, the algorithm calculates a first average of the differences on feature A between each element of H and a, and a second average of the differences on feature A between each element of M and a, and determines the feature weight of feature A according to the first average and the second average.
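The following simplified sketch follows the two-class description above (throttled vs. not throttled). It is a rough illustration rather than a full ReliefF implementation; the sampling count, the distance metric and the weight update are assumptions.

```python
import numpy as np

def relieff_weights(X, y, n_samples=20, k=5, seed=0):
    """X: (n, d) array of traffic data; y: 0/1 labels (throttling triggered or not)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights = np.zeros(d)
    for _ in range(n_samples):
        i = rng.integers(n)
        a, label = X[i], y[i]
        same = X[y == label]
        diff = X[y != label]
        # k nearest hits (set H, skipping a itself) and k nearest misses (set M)
        H = same[np.argsort(np.linalg.norm(same - a, axis=1))[1:k + 1]]
        M = diff[np.argsort(np.linalg.norm(diff - a, axis=1))[:k]]
        # a feature gains weight when it differs more across classes than within a class
        weights += np.abs(M - a).mean(axis=0) - np.abs(H - a).mean(axis=0)
    return weights / n_samples
```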
Step 406, constructing a training set according to the traffic data corresponding to the target traffic features.
Step 408, training the initial prediction models corresponding to the prediction models respectively through the training set to obtain each prediction model.
In the embodiment of the present application, a training set can be constructed according to the traffic data corresponding to the target traffic features screened out of the traffic features, and the initial prediction model corresponding to each prediction model is then trained to obtain prediction models capable of predicting the traffic condition of the server in the prediction period.
It should be noted that the training procedure of the initial prediction models is not limited in the embodiments of the present application; any training manner that can train an initial prediction model and obtain a trained prediction model is applicable, for example ensemble learning algorithms.
According to the method for automatically degrading services provided in the embodiment of the present application, a training set can be constructed according to the traffic data corresponding to the traffic features whose feature weights satisfy the preset condition, and the initial prediction models are trained through this training set to obtain the trained prediction models. Because the traffic data is screened and only the traffic data with larger feature weights, that is, the traffic data corresponding to the traffic features with stronger predictive power, forms the training set, redundant data with weak predictive power is kept out of the initial prediction models during training, which improves the training speed of the initial prediction models.
In one embodiment, as shown in fig. 5, the method further includes:
step 502, determining the accuracy of each prediction model.
In the embodiment of the present application, the accuracy of a prediction model is used to characterize how well the prediction model predicts the traffic condition of the server in the prediction period from the traffic data. After the training of the prediction models is completed, each prediction model is checked with data that was not used during training, and the prediction weight of each prediction model is determined according to the accuracy obtained in this check.
The manner of determining the accuracy of each prediction model is not limited in the embodiments of the present application; any manner of determining the accuracy of a prediction model during the check is applicable.
Step 504, determining the prediction weight of each prediction model according to the accuracy of each prediction model.
In the embodiment of the present application, the prediction weight of each prediction model can be determined according to the accuracy of each prediction model obtained in the above steps. For example, for any prediction model, the ratio of its accuracy to the sum of the accuracies of all prediction models may be used as its prediction weight. For instance, if the accuracy of the first prediction model is X and the accuracy of the second prediction model is Y, the prediction weight of the first prediction model may be X/(X+Y) and the prediction weight of the second prediction model may be Y/(X+Y).
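In code, the normalization above looks as follows; the model names and accuracy values are hypothetical.

```python
accuracies = {"prophet": 0.92, "lstm": 0.88}  # hypothetical accuracies from the check
total = sum(accuracies.values())
prediction_weights = {name: acc / total for name, acc in accuracies.items()}
print(prediction_weights)  # {'prophet': 0.5111..., 'lstm': 0.4888...}
```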
According to the method for automatically degrading services provided in the embodiment of the present application, the prediction weight of each prediction model can be determined according to its accuracy: a prediction model with low accuracy receives a low prediction weight, and a prediction model with high accuracy receives a high prediction weight, which further improves the accuracy of traffic prediction.
In one embodiment, in step 408, the training of the initial prediction model corresponding to each prediction model through the training set respectively to obtain each prediction model includes:
for any prediction model, in the process of training the initial prediction model corresponding to the prediction model through the training set, determining the optimal hyper-parameters of the prediction model by using a particle swarm algorithm; in the process of determining the optimal hyper-parameters with the particle swarm algorithm, updating the inertia factor in the particle swarm algorithm in real time based on historical data of the particle motion process, and determining the optimal hyper-parameters of the prediction model with the updated particle swarm algorithm.
Hyper-parameters are external parameters of each prediction model that need to be set manually, such as the number of iterations and the batch size. The inertia factor is the parameter in the particle swarm algorithm that controls how much of its previous velocity a particle keeps: a larger inertia factor favours the global search of the particles, while a smaller inertia factor favours their local search.
In the embodiment of the present application, the inertia factor of each particle in the particle swarm algorithm can be updated based on historical data of the particle motion process, and the optimal hyper-parameters of each prediction model are then determined according to the updated particle swarm algorithm. When the optimal hyper-parameters of a prediction model are determined with the particle swarm algorithm, that is, when the optimal values of the model's hyper-parameters are searched for, the hyper-parameters of the prediction model can be treated as the coordinates of a particle in an n-dimensional space, and the optimal solution found by the particle swarm algorithm in that space is the optimal value of the hyper-parameters.
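The sketch below shows one standard particle swarm update in which a particle's coordinates stand for hyper-parameter values (e.g. number of iterations, batch size). The cognitive and social coefficients c1 and c2 and the random terms are part of the conventional update rule and are not specified in this embodiment, so treat them as assumptions.

```python
import random

def pso_step(position, velocity, personal_best, global_best, inertia, c1=2.0, c2=2.0):
    """One velocity/position update for a particle whose coordinates are hyper-parameters."""
    new_velocity = [inertia * v
                    + c1 * random.random() * (pb - x)   # pull toward the particle's own best
                    + c2 * random.random() * (gb - x)   # pull toward the swarm's best
                    for x, v, pb, gb in zip(position, velocity, personal_best, global_best)]
    new_position = [x + v for x, v in zip(position, new_velocity)]
    return new_position, new_velocity
```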
According to the method for automatically degrading services provided in the embodiment of the present application, the particle swarm algorithm can be updated by updating its inertia factor, and the optimal hyper-parameters of each prediction model are then determined through the updated particle swarm algorithm. Because the inertia factor controls the balance between the global and local search capability of the particles, adjusting the inertia factor of the particle swarm algorithm increases the speed at which the particles reach the optimal solution and the accuracy with which they find it, which in turn speeds up the training of the prediction models.
In one embodiment, as shown in fig. 6, in the above embodiment, updating the inertia factor in the particle swarm algorithm in real time based on the historical data of the particle motion process includes:
Step 602, determining a historical maximum inertia factor, a historical minimum inertia factor, a historical minimum target value of the particle and a historical maximum target value of the particle according to historical data of the particle motion process.
The historical maximum inertia factor is the maximum of all inertia factor values the particle has taken during its motion, and the historical minimum inertia factor is the minimum of those values. The historical minimum target value of the particle is the minimum of all objective function values the particle has taken during its motion, and the historical maximum target value of the particle is the maximum of those values.
Step 604, determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum inertia factor, the historical minimum target value of the particle and the historical maximum target value of the particle.
In the embodiment of the present application, the first inertia factor adjustment value is used to adjust the inertia factor of the particle during its motion; it is positively correlated with the difference between the historical maximum inertia factor and the historical minimum inertia factor, and negatively correlated with the difference between the historical maximum target value and the historical minimum target value of the particle. For example, the first inertia factor adjustment value may be obtained by multiplying the difference between the historical maximum inertia factor and the historical minimum inertia factor by the difference between the current target value of the particle (i.e., the value of the objective function of the particle at the current moment) and the historical minimum target value of the particle, and dividing by the difference between the historical maximum target value and the historical minimum target value of the particle, see formula (one):

θ = (ω_max - ω_min) × (f - f_min) / (f_max - f_min)   (formula (one))

wherein θ is the first inertia factor adjustment value, ω_max is the historical maximum inertia factor, ω_min is the historical minimum inertia factor, f is the current target value of the particle, f_min is the historical minimum target value of the particle, and f_max is the historical maximum target value of the particle.
Step 606, using the difference between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
In the embodiment of the present application, the first inertia factor adjustment value is subtracted from the historical maximum inertia factor to obtain the inertia factor of the particle swarm algorithm at the current moment, see formula (two):

ω = ω_max - θ   (formula (two))

wherein ω is the inertia factor of the particle swarm algorithm at the current moment, ω_max is the historical maximum inertia factor, and θ is the first inertia factor adjustment value.
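The two formulas translate directly into code, as in the sketch below; the fallback when f_max equals f_min is an assumption, since the formulas do not cover a zero denominator.

```python
def inertia_factor(f, f_min, f_max, w_min, w_max):
    """Adaptive inertia factor for the current moment, per formulas (one) and (two)."""
    if f_max == f_min:
        return w_max  # degenerate case not covered by the formulas; assumed fallback
    theta = (w_max - w_min) * (f - f_min) / (f_max - f_min)  # formula (one)
    return w_max - theta                                     # formula (two)

print(inertia_factor(f=0.4, f_min=0.1, f_max=0.9, w_min=0.4, w_max=0.9))  # 0.7125
```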
According to the method for automatically degrading services provided in the embodiment of the present application, the inertia factor of a particle at the current moment can be adjusted according to the historical data of its motion. Because the inertia factor controls the balance between the global and local search capability of the particles, adjusting the inertia factor of the particle swarm algorithm increases the speed at which the particles reach the optimal solution and the accuracy with which they find it, which in turn speeds up the training of the prediction models.
In order to make the embodiments of the present application better understood by those skilled in the art, the embodiments of the present application are described below by specific examples.
Illustratively, FIG. 7 shows a flow chart of a method for automatically degrading a service.
The method for automatically degrading services provided in the embodiment of the present application adopts several service degradation strategies at the same time, including timeout degradation, failure-count degradation, fault degradation and throttling degradation. For timeout degradation, failure-count degradation and fault degradation, a fixed-threshold degradation strategy is adopted. For example, for timeout degradation, a service may be degraded when the number of times the service has timed out exceeds a threshold; for failure-count degradation, a service may be degraded when the number of failed invocations of the service exceeds a threshold; for fault degradation, a service may be degraded when the number of faults of the service exceeds a threshold. A combined check of these fixed thresholds is sketched below.
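The following sketch illustrates the fixed-threshold checks; the counter names and the specific limit values are illustrative and not taken from the embodiment.

```python
def should_degrade(timeouts, failures, faults,
                   timeout_limit=10, failure_limit=20, fault_limit=5):
    """Returns True when any fixed-threshold degradation condition is met."""
    return (timeouts > timeout_limit
            or failures > failure_limit
            or faults > fault_limit)

print(should_degrade(timeouts=12, failures=3, faults=0))  # True: timeout count exceeds its limit
```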
For throttling degradation, an early-warning degradation strategy is adopted: the traffic condition of the server in the prediction period is predicted through the traffic prediction model, and the target service to be degraded is determined according to the predicted traffic result, so that the target service is degraded in advance.
When the traffic prediction model is trained, the embodiment of the present application needs to determine the optimal hyper-parameters of each prediction model in the traffic prediction model. In the embodiment of the present application, the optimal hyper-parameters of each prediction model can be determined through the particle swarm algorithm; for this step, reference may be made to the description of the foregoing embodiments, which is not repeated here.
When the optimal hyper-parameters of each prediction model are determined through the particle swarm algorithm, the particle swarm algorithm is updated. A drawback of the particle swarm algorithm is that it easily falls into a local optimum during the search, so that the global optimum cannot be found and the search accuracy is low. To address this drawback, the embodiment of the present application updates the inertia factor in the particle swarm algorithm in real time during the search, based on historical data of the particle motion process; for this step, reference may be made to the description of the foregoing embodiments, which is not repeated here.
After the optimal hyper-parameters of a prediction model are determined, the embodiment of the present application trains the prediction model to obtain the traffic prediction model. During the training of the prediction models, the traffic features in the historical traffic data corresponding to the sample server are screened, and the training set is constructed only from the traffic data corresponding to the target traffic features with higher feature weights.
In the embodiment of the present application, 80% of the traffic data corresponding to the target traffic features is used to train each prediction model. After model training is completed, the remaining 20% of the traffic data corresponding to the target traffic features is used to check the accuracy of each prediction model, and the prediction weight of each prediction model is determined according to its accuracy.
When the traffic prediction model is actually used, the embodiment of the present application needs to determine target traffic data from a plurality of pieces of historical traffic data of the server. For example, the target traffic data may be the traffic data, among the historical traffic data of the server, that corresponds to the target traffic features determined during the prediction model training described above.
The embodiment of the application can further predict the target traffic data through the traffic prediction model, and perform degradation processing on the target service according to the predicted traffic result.
According to the method for automatically degrading services provided in the embodiment of the present application, target traffic data is determined and predicted through the traffic prediction model; when the predicted traffic result indicates that the traffic of the server will exceed the traffic threshold in the prediction period, the target service to be degraded is determined and degraded in time, that is, before the traffic of the server actually exceeds the traffic threshold.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the execution order of these steps, and they may be performed in other orders. Moreover, at least a part of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an automatic service degradation device for implementing the above-mentioned automatic service degradation method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the service automatic degradation device provided below can be referred to the limitations of the service automatic degradation method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 8, there is provided a service automatic degradation apparatus, including: a first determination module 802, a prediction module 804, a second determination module 806, a degradation module 808, wherein:
a first determining module 802, configured to determine target traffic data from a plurality of historical traffic data of a server;
the prediction module 804 is configured to predict the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, where the predicted traffic result is used to represent a traffic condition of the server in a prediction time period;
a second determining module 806, configured to determine, when a target traffic value exceeding a traffic threshold exists in the predicted traffic result, a target service to be degraded;
a degradation module 808, configured to perform degradation processing on the target service to throttle the target service.
According to the above apparatus for automatically degrading services, target traffic data is determined, the target traffic data is predicted through the traffic prediction model, the target service to be degraded is determined when a target traffic value exceeding the traffic threshold exists in the predicted traffic result, and the target service is degraded in time. That is, the apparatus provided in the embodiment of the present application can degrade the target service before the traffic of the server exceeds the traffic threshold, which improves the timeliness of degradation; by throttling the target service in advance when the traffic of the server is high, sufficient server resources are reserved for the server, and the stability of the core services in the server can be improved.
In one embodiment, the degradation module 808 is further configured to:
determining a target traffic value greater than the traffic threshold from the predicted traffic result;
determining a traffic difference value according to the target traffic value and the traffic threshold;
and determining a target service to be degraded from the services having the target service attribute according to the traffic difference value, wherein the service attribute is used for representing the core degree of the service (a minimal sketch of this selection follows).
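A minimal sketch of that selection step, assuming each candidate service carries a numeric core degree and an estimate of the traffic it contributes (both attributes are illustrative, not part of the embodiment):

```python
from typing import List, NamedTuple


class Candidate(NamedTuple):
    name: str
    core_degree: int          # lower value = less critical, degraded first
    estimated_traffic: float  # traffic the service is expected to contribute


def select_targets(candidates: List[Candidate],
                   target_traffic_value: float,
                   traffic_threshold: float) -> List[Candidate]:
    """Pick services to degrade until the traffic difference value is covered."""
    excess = target_traffic_value - traffic_threshold  # traffic difference value
    chosen: List[Candidate] = []
    # Walk the candidates from least to most critical (ascending core degree).
    for svc in sorted(candidates, key=lambda c: c.core_degree):
        if excess <= 0:
            break
        chosen.append(svc)
        excess -= svc.estimated_traffic
    return chosen
```

The greedy order here is one reasonable interpretation of "according to the traffic difference value": the larger the predicted excess, the deeper the walk into the list of non-core services.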
In one embodiment, the traffic prediction model includes at least two prediction models, and the prediction module 804 is further configured to:
predicting the target traffic data through each prediction model to obtain a plurality of initial prediction results;
and performing fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain the predicted traffic result, as sketched below.
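One plausible reading of the fusion step is a weighted average of the per-model traffic curves, with the prediction weights assumed to be normalised to sum to one:

```python
from typing import List, Sequence


def fuse_predictions(initial_results: Sequence[Sequence[float]],
                     weights: Sequence[float]) -> List[float]:
    """Weighted fusion of several models' predicted traffic over the same horizon."""
    if len(initial_results) != len(weights):
        raise ValueError("one prediction weight is required per prediction model")
    horizon = len(initial_results[0])
    fused = [0.0] * horizon
    for result, weight in zip(initial_results, weights):
        for t, value in enumerate(result):
            fused[t] += weight * value
    return fused


# Example: two models, the first trusted twice as much as the second.
# fuse_predictions([[100.0, 120.0], [90.0, 110.0]], [2 / 3, 1 / 3])
# -> [96.66..., 116.66...]
```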
In one embodiment, the apparatus further comprises:
an acquisition module, configured to acquire the traffic data corresponding to each traffic characteristic from the historical traffic data corresponding to the sample server;
a third determining module, configured to determine a target traffic characteristic from the traffic characteristics, according to the traffic data corresponding to each traffic characteristic, by using a feature weight algorithm;
a construction module, configured to construct a training set according to the traffic data corresponding to the target traffic characteristic;
and a training module, configured to train the initial prediction model corresponding to each prediction model through the training set to obtain the traffic prediction model (a feature-selection and training sketch follows).
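The embodiment does not name a particular feature weight algorithm, so the sketch below substitutes the absolute Pearson correlation between each traffic characteristic and the traffic target as a stand-in weight, keeps the characteristics that clear a cut-off, and assembles the training rows from them; the cut-off value and the function names are assumptions.

```python
import math
from typing import Dict, List, Sequence, Tuple


def pearson(x: Sequence[float], y: Sequence[float]) -> float:
    """Plain Pearson correlation, used here as a stand-in feature weight."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y) if std_x and std_y else 0.0


def build_training_set(characteristics: Dict[str, Sequence[float]],
                       traffic_target: Sequence[float],
                       cutoff: float = 0.3) -> Tuple[List[str], List[List[float]]]:
    """Keep the target traffic characteristics whose weight clears the cut-off."""
    weights = {name: abs(pearson(values, traffic_target))
               for name, values in characteristics.items()}
    kept = [name for name, weight in weights.items() if weight >= cutoff]
    rows = [list(row) for row in zip(*(characteristics[name] for name in kept))]
    return kept, rows
```

Each retained row can then be fed to every base model's training routine to obtain the individual members of the traffic prediction model.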
In one embodiment, the apparatus further comprises:
a fourth determining module, configured to determine the accuracy of each prediction model;
and a fifth determining module, configured to determine the prediction weight of each prediction model according to the accuracy of each prediction model, as sketched below.
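A straightforward choice that is consistent with this description, though not fixed by the embodiment, is to normalise each model's measured accuracy into its fusion weight:

```python
from typing import List, Sequence


def prediction_weights(accuracies: Sequence[float]) -> List[float]:
    """Turn per-model accuracies (e.g. 1 - relative error) into fusion weights."""
    if not accuracies:
        return []
    total = sum(accuracies)
    if total <= 0:
        return [1.0 / len(accuracies)] * len(accuracies)  # fall back to equal weights
    return [accuracy / total for accuracy in accuracies]
```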
In one embodiment, the training module is further configured to:
aiming at any prediction model, in the process of training the initial prediction model corresponding to the prediction model through the training set, a particle swarm algorithm is used to determine the optimal hyper-parameters of the prediction model; in the process of determining the optimal hyper-parameters, the inertia factor in the particle swarm algorithm is updated in real time based on the historical data of the particle motion, and the updated particle swarm algorithm is then used to determine the optimal hyper-parameters of the prediction model.
In one embodiment, the training module is further configured to:
determining, according to the historical data of the particle motion, the historical maximum inertia factor, the historical minimum target value of the particles and the historical maximum target value of the particles during the motion;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum target value of the particles and the historical maximum target value of the particles;
and taking the difference between the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment (an assumed form of this update is sketched below).
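The embodiment does not give the exact adjustment formula, so the sketch below assumes one plausible form: the adjustment value grows as the spread between the particles' historical minimum and maximum target values narrows, which makes the inertia factor shrink as the swarm converges. The lower bound w_min and the normalisation are additional assumptions.

```python
def current_inertia(w_max_hist: float,
                    f_min_hist: float,
                    f_max_hist: float,
                    w_min: float = 0.4) -> float:
    """Inertia factor for the current iteration under an assumed adjustment rule."""
    # Normalised spread of the particles' historical target values, clipped to [0, 1].
    scale = max(abs(f_max_hist), abs(f_min_hist), 1e-12)
    spread = min(1.0, (f_max_hist - f_min_hist) / scale)
    # First inertia factor adjustment value (assumed form): large when the swarm
    # has converged (small spread), small while it is still exploring.
    adjustment = (w_max_hist - w_min) * (1.0 - spread)
    # Inertia factor = historical maximum inertia factor minus the adjustment value.
    return w_max_hist - adjustment
```

In a standard particle swarm update this value would then multiply the previous velocity, v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), before the optimal hyper-parameters are read off the global best position.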
The modules in the service automatic degradation apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor of the computer device, or may be stored in software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a service automatic degradation method.
Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of a part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in such a combination, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that several variations and improvements may be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. A method for automatic degradation of services, the method comprising:
determining target traffic data from a plurality of historical traffic data of a server;
predicting the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, wherein the predicted traffic result is used for representing the traffic condition of the server in a prediction time period;
determining a target service to be degraded under the condition that a target traffic value exceeding a traffic threshold exists in the predicted traffic result;
and performing degradation processing on the target service so as to limit the traffic of the target service.
2. The method of claim 1, wherein the determining a target service to be degraded under the condition that a target traffic value exceeding a traffic threshold exists in the predicted traffic result comprises:
determining a target traffic value greater than the traffic threshold from the predicted traffic result;
determining a traffic difference value according to the target traffic value and the traffic threshold;
and determining a target service to be degraded from the services having the target service attribute according to the traffic difference value, wherein the service attribute is used for representing the core degree of the service.
3. The method according to claim 1 or 2, wherein the traffic prediction model includes at least two prediction models, and the predicting the target traffic data by the traffic prediction model to obtain the predicted traffic result of the server includes:
predicting the target traffic data through each prediction model to obtain a plurality of initial prediction results;
and performing fusion processing on the plurality of initial prediction results according to the prediction weight of each prediction model to obtain the predicted traffic result.
4. The method of claim 3, further comprising:
obtaining traffic data corresponding to each traffic characteristic from historical traffic data corresponding to the sample server;
determining a target traffic characteristic from the traffic characteristics according to the traffic data corresponding to each traffic characteristic by using a feature weight algorithm;
constructing a training set according to the traffic data corresponding to the target traffic characteristic;
and respectively training the initial prediction models corresponding to the prediction models through the training set to obtain the traffic prediction model.
5. The method of claim 4, further comprising:
determining the accuracy of each prediction model;
and determining the prediction weight of each prediction model according to the accuracy of each prediction model.
6. The method according to claim 4, wherein the training the initial prediction model corresponding to each prediction model through the training set comprises:
aiming at any prediction model, adopting a particle swarm algorithm to determine the optimal hyper-parameter of the prediction model in the process of training the initial prediction model corresponding to the prediction model through the training set, updating an inertia factor in the particle swarm algorithm on the basis of historical data in the particle motion process in real time in the process of determining the optimal hyper-parameter of the prediction model by adopting the particle swarm algorithm, and adopting the updated particle swarm algorithm to determine the optimal hyper-parameter of the prediction model.
7. The method of claim 6, wherein the updating the inertia factor in the particle swarm algorithm in real time based on historical data during particle motion comprises:
determining a historical maximum inertia factor, a historical minimum target value of the particle and a historical maximum target value of the particle in the movement process according to historical data in the movement process of the particle;
determining a first inertia factor adjustment value according to the historical maximum inertia factor, the historical minimum target value of the particle and the historical maximum target value of the particle;
and taking the difference value of the historical maximum inertia factor and the first inertia factor adjustment value as the inertia factor of the particle swarm algorithm at the current moment.
8. An apparatus for automatic degradation of services, the apparatus comprising:
the system comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining target flow data from a plurality of historical flow data of a server;
the prediction module is used for predicting the target traffic data through a traffic prediction model to obtain a predicted traffic result of the server, and the predicted traffic result is used for representing the traffic condition of the server in a prediction time period;
a second determining module, configured to determine, when a target traffic value exceeding a traffic threshold exists in the predicted traffic result, a target service to be degraded;
and the degradation module is used for performing degradation processing on the target service so as to limit the traffic of the target service.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 7 when executed by a processor.
CN202210608028.8A 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium Active CN114826951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210608028.8A CN114826951B (en) 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114826951A true CN114826951A (en) 2022-07-29
CN114826951B CN114826951B (en) 2024-02-20

Family

ID=82519687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210608028.8A Active CN114826951B (en) 2022-05-31 2022-05-31 Service automatic degradation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114826951B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113949666A (en) * 2021-11-15 2022-01-18 中国银行股份有限公司 Flow control method, device, equipment and system
CN114116207A (en) * 2021-11-11 2022-03-01 中国银行股份有限公司 Flow control method, device, equipment and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
况爱武;张胜伟;覃定明;: "Traffic flow evolution in a degraded road network under predictive information", Journal of Changsha University of Science and Technology (Natural Science), no. 04, 28 December 2019 (2019-12-28) *
史峰;罗端高;: "Multi-class user equilibrium assignment model and solution algorithm for degraded road networks", Journal of Transportation Systems Engineering and Information Technology, no. 04, 15 August 2008 (2008-08-15) *

Also Published As

Publication number Publication date
CN114826951B (en) 2024-02-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant