CN110830384A - Method, device and system for limiting service flow - Google Patents

Method, device and system for limiting service flow

Info

Publication number
CN110830384A
CN110830384A (application CN201910943938.XA)
Authority
CN
China
Prior art keywords
service
data
server
cluster
flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910943938.XA
Other languages
Chinese (zh)
Other versions
CN110830384B (en)
Inventor
单健锋
江鹏辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Koubei Network Technology Co Ltd
Original Assignee
Zhejiang Koubei Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Koubei Network Technology Co Ltd filed Critical Zhejiang Koubei Network Technology Co Ltd
Priority to CN201910943938.XA priority Critical patent/CN110830384B/en
Publication of CN110830384A publication Critical patent/CN110830384A/en
Application granted granted Critical
Publication of CN110830384B publication Critical patent/CN110830384B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/20 Traffic policing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application discloses a method, a device and a system for limiting service traffic, relates to the technical field of traffic control, and can realize intelligent cluster-level rate limiting, improving the efficiency and accuracy of service traffic limiting. The method comprises the following steps: sending a service request carrying user service operation data and operation environment data, so that when the cluster needs to process concurrent service requests in a centralized manner, the service traffic processed by each server is limited according to the user service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster could bear when processing similar historical data; and receiving the business service corresponding to the service request. The method and the device are suitable for rate limiting of service traffic.

Description

Method, device and system for limiting service flow
Technical Field
The present application relates to the field of traffic control technologies, and in particular, to a method, an apparatus, and a system for limiting service traffic.
Background
To keep an online business system highly available, the traditional approach of manually configured rate limiting is commonly adopted. Specifically, code is written in advance to configure a fixed rate-limiting parameter; for example, a fixed maximum traffic rate may be set according to information such as the CPU performance of a server and its disk array. After a large number of concurrent service requests is received, service traffic is limited according to this parameter, reducing the number of concurrent requests the server has to process.
However, the traffic of certain business services changes with distinct regional and temporal characteristics, and the traffic a server can tolerate is dynamic. The rate-limiting parameters therefore have to be changed manually and frequently to match actual conditions. This makes the traditional approach inflexible: server resources cannot be fully utilized, and every change requires manual recoding, which reduces the efficiency of service traffic limiting and increases labor cost.
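For contrast, the fixed-parameter approach criticized here can be sketched as a token-bucket limiter whose capacity is hard-coded at deploy time. The class name and rate value below are illustrative, not from the patent:

```python
import time

class FixedRateLimiter:
    """Token-bucket limiter with a hard-coded capacity, illustrating
    the manually configured limit described in the Background."""

    def __init__(self, max_requests_per_sec: float):
        self.capacity = max_requests_per_sec      # fixed at deploy time
        self.tokens = max_requests_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the fixed capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.capacity)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                              # request rejected: limit reached

limiter = FixedRateLimiter(max_requests_per_sec=2)
```

Changing the limit means editing `max_requests_per_sec` and redeploying, which is exactly the manual-recoding cost the application sets out to eliminate.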
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, and a system for limiting service traffic, mainly aiming to solve the technical problems that the traditional rate-limiting approach reduces the efficiency of service traffic limiting and increases labor cost.
According to an aspect of the present application, there is provided a method for limiting traffic flow, which is applicable to a client side, the method including:
sending a service request carrying user service operation data and operation environment data, so that when a cluster needs to process concurrent service requests in a centralized manner, the service traffic processed by each server is limited according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster could bear when processing similar historical data;
and receiving the service corresponding to the service request.
Optionally, the maximum service traffic is calculated by an adversarial neural network model, and the method further includes:
collecting and uploading the user's historical service operation data and operation environment data, so as to create a training set and a test set corresponding to the adversarial neural network model.
Optionally, if the returned business service is not received within a preset time period after the service request is sent, the method further includes:
querying information of other clusters from which the business service can also be obtained;
and resending the service request according to the information of those other clusters.
Optionally, after receiving the service corresponding to the service request, the method further includes:
and outputting the received business service.
According to another aspect of the present application, there is provided another method for limiting traffic flow, which is applicable to a service side, the method including:
receiving concurrent service requests, wherein the service requests carry user service operation data and operation environment data;
when a cluster needs to process concurrent service requests in a centralized manner, limiting the service traffic processed by each server in the cluster according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server could bear when processing similar historical data.
Optionally, limiting the service traffic processed by each server in the cluster with reference to the maximum service traffic specifically includes:
forwarding the concurrent service requests to the servers in the cluster such that the service traffic processed by each server is less than or equal to its corresponding maximum service traffic.
Optionally, the method further includes:
receiving historical service operation data and historical operation environment data of each user uploaded by each client;
creating a training set and a testing set according to the historical service operation data and the historical operation environment data;
and training a generator with the training set and a discriminator with the test set, based on an adversarial neural network algorithm; if the generator passes the discriminator's verification, determining the verified generator as the adversarial neural network model, wherein the adversarial neural network model is used to calculate the maximum service traffic.
Optionally, creating the training set and the test set according to the historical service operation data and the historical operation environment data specifically includes:
querying load information of the servers in the cluster recorded when historical concurrent service requests carrying the historical service operation data and the historical operation environment data were processed in a centralized manner;
analyzing, according to the load information, the historical maximum service traffic the servers in the cluster could bear when processing those historical concurrent requests;
and creating the training set and the test set with the historical service operation data and historical operation environment data as sample feature data, and the historical maximum service traffic as the corresponding sample label data.
Optionally, the calculation of the maximum service traffic used to limit the service traffic processed by the servers in the cluster specifically includes:
inputting the user service operation data and operation environment data into the adversarial neural network model, and obtaining the maximum service traffic each server in the cluster could bear when processing similar historical data.
Optionally, before training the generator with the training set and the discriminator with the test set based on the adversarial neural network algorithm, the method further includes:
performing sparse-matrix processing on the training set and the test set;
and the training step specifically includes:
training the generator with the sparse-matrix-processed training set and the discriminator with the sparse-matrix-processed test set, based on the adversarial neural network algorithm.
Optionally, if the generator fails the discriminator's verification, the method further includes:
continuing to train the generator and the discriminator, verifying the new generator with the new discriminator after each round of training, until a new generator passes the new discriminator's verification, and determining that verified generator as the adversarial neural network model.
Optionally, before creating a training set and a test set according to the historical service operation data and the historical operation environment data, the method further includes:
performing deduplication and dirty/invalid-data filtering on the historical service operation data and the historical operation environment data;
creating a training set and a testing set according to the historical service operation data and the historical operation environment data, specifically comprising:
and creating a training set and a testing set according to the filtered historical business operation data and the filtered historical operation environment data.
According to another aspect of the present application, there is provided a traffic limiting apparatus, which is applicable to a client side, the apparatus including:
a sending module, configured to send a service request carrying user service operation data and operation environment data, so that when a cluster needs to process concurrent service requests in a centralized manner, the service traffic processed by each server is limited according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster could bear when processing similar historical data;
and the receiving module is used for receiving the service corresponding to the service request.
Optionally, the maximum service traffic is calculated by an adversarial neural network model, and the apparatus further includes:
an acquisition module, configured to collect the user's historical service operation data and operation environment data;
the sending module is further configured to upload the historical service operation data and operation environment data collected by the acquisition module, so as to create a training set and a test set corresponding to the adversarial neural network model.
Optionally, the apparatus further comprises: a query module;
the query module is used for querying information of other cluster servers which can also acquire the service if the returned service is not received within a preset time after the service request is sent;
and the sending module is further configured to resend the service request according to the information of the other cluster servers.
Optionally, the apparatus further comprises: and the output module is used for outputting the received business service.
According to another aspect of the present application, there is provided a traffic flow limiting apparatus, which is applicable to a service side, the apparatus including:
the receiving module is used for receiving concurrent service requests, wherein the service requests carry user service operation data and operation environment data;
and a limiting module, configured to, when the cluster needs to process the concurrent service requests in a centralized manner, limit the service traffic processed by each server in the cluster according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server could bear when processing similar historical data.
Optionally, the limiting module is specifically configured to forward the concurrent service request to the servers in the cluster, and make the service traffic processed by the servers in the cluster smaller than or equal to the maximum service traffic corresponding to each of the servers.
Optionally, the apparatus further comprises: a creating module and a training module;
the receiving module is also used for receiving historical service operation data and historical operation environment data of each user uploaded by each client;
the creating module is used for creating a training set and a testing set according to the historical service operation data and the historical operation environment data;
the training module is configured to train the generator with the training set and the discriminator with the test set based on an adversarial neural network algorithm; if the generator passes the discriminator's verification, the verified generator is determined as the adversarial neural network model, which is used to calculate the maximum service traffic.
Optionally, the creating module is specifically configured to query load information of servers in a cluster when historical concurrent service requests carrying the historical service operation data and the historical operation environment data are processed in a centralized manner;
analyzing historical maximum service flow which can be borne by the servers in the cluster when the servers process the historical concurrent service requests in a centralized manner according to the load information;
and establishing the training set and the testing set by taking the historical business operation data and the historical operation environment data as sample characteristic data and the historical maximum business flow as sample label data corresponding to the sample characteristic data.
Optionally, the apparatus further comprises: a computing module, configured to input the user service operation data and operation environment data into the adversarial neural network model and obtain the maximum service traffic each server in the cluster could bear when processing similar historical data.
Optionally, the creating module is further configured to perform sparse-matrix processing on the training set and the test set;
the training module is specifically configured to train the generator with the sparse-matrix-processed training set and the discriminator with the sparse-matrix-processed test set, based on the adversarial neural network algorithm.
Optionally, the training module is further configured to, if the generator fails the discriminator's verification, continue training the generator and the discriminator, verify the new generator with the new discriminator after each round of training until a new generator passes the new discriminator's verification, and determine that verified generator as the adversarial neural network model.
Optionally, the apparatus further comprises:
a filtering module, configured to perform deduplication and dirty/invalid-data filtering on the historical service operation data and the historical operation environment data;
the creating module is specifically configured to create a training set and a test set according to the filtered historical service operation data and historical operation environment data.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of throttling of traffic applicable on a client side.
According to yet another aspect of the present application, there is provided a client device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the above-mentioned method for throttling traffic applicable on the client side when executing the program.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of throttling service traffic applicable on a service side.
According to yet another aspect of the present application, there is provided a server apparatus, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the above-mentioned method for limiting traffic flow applicable to a server side when executing the program.
According to another aspect of the present application, there is provided a traffic limiting system, including: the client device and the server device.
Compared with the traditional rate-limiting approach, the method, device and system for limiting service traffic provided by the application can, when the cluster needs to process highly concurrent service requests in a centralized manner, limit the service traffic processed by each server in the cluster according to the user service operation data and operation environment data carried in the requests, with reference to the maximum service traffic each server could bear when processing similar historical data, thereby realizing intelligent cluster-level rate limiting. The highly concurrent service requests are forwarded to servers for processing on the premise that the maximum service traffic is not exceeded. No manual recoding is needed to set the limit value for each change of actual conditions; a suitable limit value is given automatically every time, which improves the efficiency of service traffic limiting and reduces labor cost. And because the rate limiting is timelier and more accurate, server downtime is further reduced, server performance is protected from large numbers of concurrent requests, and the processing capacity of the cluster is improved.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of this description, and to make the above and other objects, features, and advantages of the present application more readily understandable, the detailed description of the present application follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a method for limiting service traffic according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another method for limiting service traffic according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a further method for limiting service traffic according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an example application scenario provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a service traffic limiting apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another service traffic limiting apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a further service traffic limiting apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of yet another service traffic limiting apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a service traffic limiting system according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
This embodiment aims to solve the technical problems that the traditional rate-limiting approach reduces the efficiency of service traffic limiting and increases labor cost. It provides a method for limiting service traffic, applicable to the client side, as shown in fig. 1. The method includes:
101. The client sends a service request carrying user service operation data and operation environment data to the server side.
Then, when the cluster needs to process concurrent service requests in a centralized manner, the server side can limit the service traffic processed by each server according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster could bear when processing similar historical data. In this embodiment, the server side may be a rate-limiting device that forwards requests, such as a gateway device, a management server corresponding to the cluster, or other network front-end equipment. The service request is used to request a business service, for example a local life service such as a meal-ordering reservation, a hotel reservation, a wedding-photo reservation, or a train/air ticket reservation.
The service operation data may include page click data generated when the user obtains the business service from an application page (such as the clicked category, clicked service, and click path), application login data, and the like. The operation environment data may include the identifier of the application operated, the application version, the operating system version, payment environment data, and the like. The maximum service traffic a server in the cluster can bear corresponds to that server's limit value: the service traffic the server processes must not exceed this value, which keeps the server running safely and reduces the possibility of downtime caused by excessive load.
For example, when the server side receives a large number of concurrent service requests, the cluster needs to process these highly concurrent requests in a centralized manner. To prevent excessive traffic pressure from degrading the cluster's service performance, the maximum traffic each server in the cluster could bear when processing similar historical data can be calculated in real time from the user service operation data and operation environment data carried by the requests, and these reference values can then be used to adjust each server's limit value in real time. Rate limiting is thus achieved intelligently through per-server limit values.
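The real-time adjustment described above can be sketched as recomputing each server's limit from a model prediction on every batch of requests. `predict_max_flow` below is a hypothetical stand-in for the adversarial neural network model, and every feature name and number is an assumption made for illustration:

```python
def predict_max_flow(server_id: str, operation_data: dict, env_data: dict) -> int:
    """Hypothetical stand-in for the model: returns the maximum traffic
    a server could bear given the request's operation/environment data."""
    base = {"srv-1": 120, "srv-2": 80}          # illustrative per-server baselines
    surge = 20 if env_data.get("peak_hour") else 0
    return base[server_id] + surge

def refresh_limits(servers, operation_data, env_data):
    """Recompute each server's rate limit from the model output in real time,
    instead of keeping a fixed, manually coded value."""
    return {s: predict_max_flow(s, operation_data, env_data) for s in servers}

limits = refresh_limits(["srv-1", "srv-2"],
                        {"click_path": "order"},
                        {"peak_hour": True})
```

Each batch of concurrent requests would trigger a fresh `refresh_limits` call, so the limit values track conditions rather than staying fixed.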
102. The client receives the business service corresponding to the service request.
It should be noted that the intelligent cluster rate-limiting scheme provided by this embodiment may be applied to many rate-limiting scenarios for service traffic, to optimize the processing of large numbers of concurrent requests.
Compared with the prior art, when the cluster needs to process highly concurrent service requests in a centralized manner, the method of this embodiment can limit the service traffic processed by each server in the cluster according to the user service operation data and operation environment data carried in the requests, with reference to the maximum service traffic each server could bear when processing similar historical data, thereby realizing intelligent cluster-level rate limiting. The highly concurrent requests are forwarded to servers for processing on the premise that the maximum service traffic is not exceeded. No manual recoding is needed to set the limit value for each change of actual conditions; a suitable limit value is given automatically every time, improving the efficiency of service traffic limiting and reducing labor cost. And because the rate limiting is timelier and more accurate, server downtime is further reduced, server performance is protected from large numbers of concurrent requests, and the processing capacity of the cluster is improved.
Further, as a refinement and extension of the above embodiment, after step 102 the method may further include: the client outputs the received business service. For example, if the service request is processed successfully, the requested local life service is displayed, played, and so on; if the service request is abnormal, failure response information is obtained and then output as text, pictures, video, audio, light, vibration, and the like.
To ensure the accuracy of the maximum service traffic used to limit the traffic processed by servers in the cluster, the maximum service traffic may be calculated by an adversarial neural network model. Creating such a model requires a training set and a test set of relatively good data quality, so that the accuracy of the trained model can be guaranteed. Therefore, to obtain a training set and a test set containing relatively real data, the historical service operation data and operation environment data of users may optionally be collected by each client and uploaded to the server side. Accordingly, the method of this embodiment may further include: the client collects and uploads the user's historical service operation data and operation environment data, so as to create the training set and test set corresponding to the adversarial neural network model. The client can also continuously collect and upload new service operation data and operation environment data, so that the training set and test set are updated and the adversarial neural network model is retrained, further improving the model's accuracy.
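The train-until-the-discriminator-accepts control flow can be sketched with a deliberately toy generator and discriminator. This is not a real adversarial network, only the loop the text describes (accept the generator as the model only once it passes the discriminator's check, otherwise keep training); all numbers are illustrative:

```python
def train_until_accepted(train_labels, test_labels, tolerance=5.0, max_rounds=100):
    """Toy stand-in for the generator/discriminator loop: the 'generator' is a
    single max-traffic estimate fit on the training set, and the
    'discriminator' accepts it only if it agrees with the test set."""
    generator = sum(train_labels) / len(train_labels)   # trivial "generator"
    target = sum(test_labels) / len(test_labels)        # what the test set supports
    for _ in range(max_rounds):
        if abs(generator - target) <= tolerance:        # discriminator's check
            return generator                            # verified: becomes the model
        generator += 0.5 * (target - generator)         # otherwise keep training
    raise RuntimeError("generator never passed the discriminator")

# Labels here would be the historical maximum service traffic values.
model = train_until_accepted([100, 110, 120], [105, 115], tolerance=2.0)
```

A real implementation would train neural networks on the sample feature data and label data described in the claims; the loop structure, however, is the same.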
Sometimes, due to traffic limiting, network delay, server downtime, or similar causes, a service request receives no response for a long time after being sent, and the corresponding local life service cannot be returned. Therefore, to improve response speed, as an optional manner, if the client does not receive the returned business service within a preset duration after sending the service request, the method of this embodiment may further include: querying information of other clusters from which the business service can also be obtained; and then resending the service request according to that information.
The preset duration can be configured in advance according to actual requirements, avoiding an overlong unresponsive wait. In this embodiment, if the current cluster does not return a result within that time, the request can be sent to another cluster that can also provide the business service, so that the corresponding service is obtained quickly, unnecessary waiting is reduced, the request is answered promptly, and the user experience is improved.
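The timeout-and-failover behavior described here can be sketched as follows. The cluster names, the `send` callback, and `fake_send` are all hypothetical stand-ins for the client's actual transport:

```python
def request_with_failover(primary, fallbacks, send, timeout_s=2.0):
    """Send to the primary cluster; on timeout, fall back to other clusters
    that can also provide the business service (names are assumptions)."""
    try:
        return send(primary, timeout_s)
    except TimeoutError:
        for cluster in fallbacks:          # info on alternative clusters
            try:
                return send(cluster, timeout_s)
            except TimeoutError:
                continue
    raise TimeoutError("no cluster responded within the preset duration")

def fake_send(cluster, timeout_s):
    """Simulated transport: the primary cluster times out, the fallback answers."""
    if cluster == "cluster-a":
        raise TimeoutError                 # primary is rate-limited or down
    return f"service from {cluster}"

result = request_with_failover("cluster-a", ["cluster-b"], fake_send)
```

The preset duration maps to `timeout_s`, and the list of fallback clusters corresponds to the "other cluster server information" the client queries.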
The above embodiment describes the rate-limiting process from the client side. To fully illustrate the implementation, this embodiment further provides another method for limiting service traffic, applicable to the server side, as shown in fig. 2. The method includes:
201. The server side receives concurrent service requests.
The service requests carry user service operation data and operation environment data.
202. When the cluster needs to process concurrent service requests in a centralized manner, according to the service operation data and the operation environment data of each user, the maximum service flow which can be borne by each server when the server in the cluster processes the historical similar data is referred, and the service flow which is processed by each server in the cluster is limited.
After the server receives each service request (especially under the condition of relatively large request quantity) sent by each client side, in order to ensure high availability of the cluster system, the maximum service flow which can be borne by each server in the cluster when the server processes historical similar data can be calculated according to each user service operation data and operation environment data in the request, and then the flow limiting value is intelligently calculated, so that the overall performance of the cluster system is ensured.
For example, according to the maximum service traffic corresponding to each server in the cluster, the service requests are forwarded to the servers, and the service traffic processed by each server is kept below its respective maximum. Service requests blocked by the flow limit can be temporarily stored in a buffer queue to wait for processing by a server. When forwarding, a request can preferentially be sent to the server with stronger processing capability (for example, determined by the absolute difference between its currently processed traffic and its corresponding maximum service traffic).
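The forwarding preference can be sketched as a headroom comparison. The `current` and `maximum` fields are illustrative assumptions; the patent only states that the server with the larger gap between current traffic and its maximum is preferred.

```python
# Minimal sketch of the forwarding preference described above: among the
# servers that still have capacity, pick the one with the largest headroom
# (maximum bearable traffic minus currently processed traffic).

def pick_server(servers):
    """Return the server with the most remaining capacity, or None."""
    candidates = [s for s in servers if s["current"] < s["maximum"]]
    if not candidates:
        return None  # every server is at its limit; buffer the request
    return max(candidates, key=lambda s: s["maximum"] - s["current"])
```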
Compared with the prior art, this flow limiting method for the server side does not require manually re-coding and setting a flow limiting value for each situation; instead, an appropriate flow limiting value is produced automatically each time, which improves the efficiency of limiting service traffic and reduces labor cost. Because the limiting is more timely and accurate, server downtime is further reduced, server performance is protected from large numbers of concurrent requests, and the processing capacity of the cluster is improved.
Further, as a refinement and extension of the specific implementation of the foregoing embodiment, and in order to fully describe the specific implementation process of this embodiment, this embodiment provides another flow limiting method for service traffic. As shown in fig. 3, the method includes:
301. The server receives the historical service operation data and historical operation environment data of each user uploaded by each client.
The historical service operation data may include user click data from the service application pages of multiple ports (for example, clicks on meal pre-ordering, group tours, photography, hotel services, and the like in the local life application page); the historical operation environment data may include click operation environment data (for example, the network connected when the service was requested (e.g., wifi or a mobile network), user traffic, location, and so on) and payment environment data (for example, the network and location used for payment). The historical service operation data and historical operation environment data correspond to the request data produced when the user historically obtained services.
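A hypothetical example of one uploaded history record, combining the operation data and environment data listed above (all field names are assumptions; the patent does not fix a schema):

```python
# Hypothetical example of a single history record as uploaded by a client.
# Every key name here is an illustrative assumption, not the patent's schema.
history_record = {
    "user_id": "u-1001",
    "operation": {"page": "local_life", "item": "meal_preorder", "clicks": 3},
    "click_env": {"network": "wifi", "traffic_kb": 420, "location": "Hangzhou"},
    "payment_env": {"network": "4g", "location": "Hangzhou"},
}
```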
In this embodiment, in order to build the flow limiting model, data backflow needs to be performed first. Specifically, each client (e.g., an APP page) collects the historical service operation data and historical operation environment data of each user and uploads them to the server, so that the data needed by the algorithm model flows back to the server.
302. Invalid-data filtering, including deduplication and dirty-data removal, is performed on the received historical service operation data and historical operation environment data.
Filtering invalid data in this way ensures the quality of the subsequent data, avoids training the model on invalid samples, and safeguards the accuracy of the model's calculations.
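The deduplication and dirty-data removal can be sketched as below. The patent does not define the exact "dirty" criterion, so records with missing required fields are used as a stand-in here:

```python
# Sketch of step 302, assuming each record is a dict like the ones uploaded
# by the clients. "Dirty" is approximated as a record with a missing required
# field; this predicate is an assumption for illustration.

def filter_records(records, required=("user_id", "operation", "click_env")):
    seen, clean = set(), []
    for rec in records:
        key = repr(sorted(rec.items()))   # stable fingerprint for dedup
        if key in seen:
            continue                      # drop exact duplicates
        if any(rec.get(f) is None for f in required):
            continue                      # drop dirty/incomplete records
        seen.add(key)
        clean.append(rec)
    return clean
```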
303. A training set and a test set are created from the filtered historical service operation data and historical operation environment data.
In this embodiment, the training set and test set may be created by extracting data from the filtered historical service operation data and historical operation environment data in a certain proportion according to actual needs; for example, the filtered historical local life data is divided into two equal parts, one part used to create the training set and the other to create the test set.
As an optional manner, step 303 may specifically include: first, querying the load information of the servers in the cluster while they centrally processed the historical concurrent service requests carrying the historical service operation data and historical operation environment data; then, according to the queried load information (such as connection count, CPU load, I/O load, disk usage, and the like), analyzing the historical maximum service traffic each server in the cluster could bear while centrally processing those historical concurrent requests. For example, the historical maximum bearable service traffic of a server is taken as the traffic at which the load produced by that type of service request reaches a maximum load threshold (beyond which the server is likely to become abnormal). Finally, the training set and test set are created with the historical service operation data and historical operation environment data as sample feature data and the analyzed historical maximum service traffic as the corresponding sample label data. This approach guarantees the quality of the data in the created training and test sets, so that a model with high calculation accuracy can be trained.
The created training set and test set thus store the mapping between the sample feature data and the sample label data.
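Under the threshold rule above, labeling a server's historical maximum bearable traffic might look like the following sketch. The CPU-load metric and the 0.85 threshold are assumptions for illustration:

```python
# Sketch of the labeling rule in step 303: a server's historical maximum
# bearable traffic is approximated as the largest traffic observed while its
# load stayed at or below a maximum load threshold. Metric and threshold are
# illustrative assumptions.

MAX_CPU_LOAD = 0.85  # assumed threshold beyond which the server misbehaves

def label_max_traffic(load_samples):
    """load_samples: list of (traffic_qps, cpu_load) observed over time."""
    bearable = [qps for qps, cpu in load_samples if cpu <= MAX_CPU_LOAD]
    return max(bearable) if bearable else 0  # sample label for this server
```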
304. Based on an adversarial neural network algorithm, a generator is trained with the training set and a discriminator is trained with the test set; if the generator passes the discriminator's check, the generator that passed is determined to be the adversarial neural network model.
The adversarial neural network model is used to calculate the maximum service traffic.
In this embodiment, the discriminator is used to check the generator; if the generator passes the check, the generator is shown to meet the calculation standard and can serve as the adversarial neural network model.
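The patent specifies adversarial training of a generator checked by a discriminator but gives no architecture. The following is a deliberately minimal one-dimensional sketch of that adversarial loop (linear generator, logistic discriminator, hand-written gradients); it illustrates the training pattern only and is not the patent's model:

```python
import numpy as np

# Minimal adversarial training loop on 1-D data. The generator G(z) = a*z + b
# learns to imitate "real" traffic samples; the discriminator
# D(x) = sigmoid(w*x + c) learns to tell real from generated. All values,
# architectures, and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)
mu, sigma = 5.0, 1.0              # statistics of the "real" data to imitate
a, b = 1.0, 0.0                   # generator parameters
w, c = 0.1, 0.0                   # discriminator parameters
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    x_real = rng.normal(mu, sigma)          # a real sample
    z = rng.normal()                        # noise fed to the generator
    x_fake = a * z + b

    # discriminator update: raise D(real), lower D(fake)
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # generator update: raise D(fake), i.e. fool the discriminator
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * (-(1 - d_fake) * w * z)
    b -= lr * (-(1 - d_fake) * w)
```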
Further, in order to improve the accuracy of the data, before the generator is trained with the training set and the discriminator with the test set based on the adversarial neural network algorithm, as an alternative, the method of this embodiment further includes: performing sparse matrix processing on the training set and the test set. Accordingly, the model training process in step 304 may specifically include: based on the adversarial neural network algorithm, training the generator with the sparse-matrix-processed training set and the discriminator with the sparse-matrix-processed test set. By supplementing the sparse values of the data in the training set and test set in this way, more effective training data can be obtained.
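Assuming the feature data has been numerically encoded into a dense array with many zeros, the sparse matrix processing step might be sketched with SciPy as below; the CSR representation is an assumption, since the patent only names "sparse matrix processing":

```python
import numpy as np
from scipy import sparse

# Illustrative sparse matrix processing: store only the non-zero entries of
# an encoded feature matrix in CSR form. The matrix values are made up.
features = np.array([[0.0, 3.0, 0.0],
                     [1.0, 0.0, 0.0]])
csr = sparse.csr_matrix(features)    # compressed sparse row representation
dense_again = csr.toarray()          # round-trips back to the dense form
```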
Further, if the generator fails the discriminator's check, the method of this embodiment may further include: continuing to train the generator and the discriminator, verifying the new generator with the new discriminator after each round, until a new generator passes the new discriminator's verification, and determining that generator to be the adversarial neural network model. Recursively training until the model meets the requirement in this way improves the model's calculation accuracy. After the adversarial neural network model is obtained, the training result is engineered and connected to the business engineering system, so as to realize intelligent flow control.
305. When the cluster needs to centrally process concurrent service requests, the user service operation data and operation environment data carried by the requests are input into the adversarial neural network model, which outputs the maximum service traffic each server in the cluster can bear when processing historically similar data.
306. The concurrent service requests are forwarded to the servers in the cluster such that the service traffic processed by each server is less than or equal to its corresponding maximum service traffic.
This flow limiting approach reduces server downtime, protects server performance from large numbers of concurrent requests, and improves cluster processing capacity. If forwarding all of the service requests cannot keep the traffic processed by each server at or below its corresponding maximum, part of the requests are forwarded first so that the constraint holds; after that part has been processed, the remaining requests are forwarded, keeping the traffic below each server's flow limiting value throughout the forwarding process.
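Step 306 can be sketched as a dispatcher that never lets a server exceed its model-predicted cap and buffers the overflow, as described above. The per-request cap accounting shown is an illustrative assumption:

```python
from collections import deque

# Sketch of step 306: forward concurrent requests to servers without letting
# any server exceed the per-server maximum predicted by the model; requests
# that would exceed every cap wait in a buffer queue.

def dispatch(requests, caps):
    """caps: {server: max_requests}. Returns (assignments, buffered)."""
    load = {s: 0 for s in caps}
    assignments, buffered = {s: [] for s in caps}, deque()
    for req in requests:
        # prefer the server with the most remaining capacity
        free = {s: caps[s] - load[s] for s in caps}
        best = max(free, key=free.get)
        if free[best] > 0:
            assignments[best].append(req)
            load[best] += 1
        else:
            buffered.append(req)  # all servers at their limit; wait
    return assignments, buffered
```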
To further explain the implementation process of the method of this embodiment, the following application scenario is given, taking local-life service requests as an example but without limitation:
For example, as shown in fig. 4, data backflow is performed through a local life service data center: user click data, payment environment data, and the like on the local life application page are collected, the backflowed data is cleaned and analyzed, and the training set and test set of the model are created through feature matching. A model meeting the requirements is then trained from the training set and test set based on the adversarial neural network algorithm, with the training process recursively going through parameter verification, parameter adjustment, model calibration, and similar steps. After a qualified adversarial neural network model is obtained, the training result is engineered and connected to the business engineering system, realizing intelligent flow control of the business system's service traffic. For instance, when users place meal orders concentrated around noon, a large number of concurrent requests are sent, carrying data such as user traffic, ordering environment, and regional information. These data are input into the trained adversarial neural network model; the flow limiting model derives the data distribution from the characteristics of past similar data and outputs the maximum traffic each server in the cluster can bear, and the flow limiting value is adjusted in real time according to the current traffic data. In this way intelligent cluster flow limiting is achieved, and server downtime caused by centrally processing large numbers of concurrent requests is reduced. Finally, to maximize the success rate of request processing, an ordering request sent by a client can preferentially be sent to the server that can bear the most traffic.
This embodiment provides an intelligent flow limiting scheme based on an adversarial neural network to guarantee the availability of services and systems. Aimed at the traffic characteristics of local life services, an artificial intelligence model is obtained by training on traffic data with the adversarial neural network, so that the online system's flow limiting is adjusted intelligently. The scheme meets the traffic processing requirements of local life services, is more flexible than traditional flow limiting schemes, and improves the efficiency and accuracy of limiting service traffic.
Further, as a specific implementation of the method shown in fig. 1, an embodiment of the present application provides a flow limiting apparatus for service traffic applicable to the client side. As shown in fig. 5, the apparatus includes: a sending module 41 and a receiving module 42.
The sending module 41 may be configured to send a service request carrying user service operation data and operation environment data, so that when the cluster needs to centrally process concurrent service requests, the service traffic processed by each server is limited according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster can bear when processing historically similar data.
The receiving module 42 may be configured to receive the service corresponding to the service request.
In a specific application scenario, optionally, the maximum service traffic is calculated by an adversarial neural network model; correspondingly, as shown in fig. 6, the apparatus further includes: an acquisition module 43;
the acquisition module 43 is configured to collect historical service operation data and operation environment data of the user;
the sending module 41 may further be configured to upload the historical service operation data and historical operation environment data of the user collected by the acquisition module, so that a training set and a test set corresponding to the adversarial neural network model can be created.
In a specific application scenario, as shown in fig. 6, the apparatus further includes: a query module 44;
the query module 44 is configured to query, if the returned service is not received for a preset time period after the service request is sent, information of other cluster servers that can also obtain the service;
the sending module 41 may be further configured to resend the service request according to the information of the other cluster servers.
In a specific application scenario, as shown in fig. 6, the apparatus further includes: an output module 45; the output module 45 may be configured to output the received service.
It should be noted that for other corresponding descriptions of the functional units of the client-side flow limiting apparatus for service traffic provided in this embodiment, reference may be made to the corresponding descriptions in fig. 1, which are not repeated here.
Further, as a specific implementation of the methods shown in fig. 2 and fig. 3, an embodiment of the present application provides a flow limiting apparatus for service traffic applicable to the server side. As shown in fig. 7, the apparatus includes: a receiving module 51 and a limiting module 52.
The receiving module 51 is configured to receive concurrent service requests, where the service requests carry user service operation data and operation environment data;
the limiting module 52 is configured to, when the cluster needs to centrally process the concurrent service requests, limit the service traffic processed by each server in the cluster according to each user's service operation data and operation environment data, with reference to the maximum service traffic each server in the cluster can bear when processing historically similar data.
In a specific application scenario, the limiting module 52 may be specifically configured to forward the concurrent service requests to the servers in the cluster and keep the service traffic processed by each server in the cluster less than or equal to its corresponding maximum service traffic.
In a specific application scenario, as shown in fig. 8, the apparatus further includes: a creation module 54 and a training module 55;
the receiving module 51 is further configured to receive historical service operation data and historical operation environment data of each user, which are uploaded by each client;
the creating module 54 may be configured to create a training set and a test set according to the historical service operation data and the historical operation environment data;
the training module 55 is configured to train a generator with the training set and a discriminator with the test set based on an adversarial neural network algorithm, and, if the generator passes the discriminator's check, determine the generator that passed to be the adversarial neural network model, where the adversarial neural network model is used to calculate the maximum service traffic.
In a specific application scenario, the creating module 54 is specifically configured to: query the load information of the servers in the cluster while they centrally processed the historical concurrent service requests carrying the historical service operation data and historical operation environment data; analyze, according to the load information, the historical maximum service traffic each server in the cluster could bear while centrally processing those historical concurrent requests; and create the training set and test set with the historical service operation data and historical operation environment data as sample feature data and the historical maximum service traffic as the corresponding sample label data.
In a specific application scenario, as shown in fig. 8, the apparatus further includes: a calculation module 56;
the calculating module 56 may be configured to input the user service operation data and operation environment data into the adversarial neural network model to obtain the maximum service traffic each server in the cluster can bear when processing historically similar data.
In a specific application scenario, the creating module 54 may be further configured to perform sparse matrix processing on the training set and the test set;
the training module 55 is specifically configured to train the generator with the sparse-matrix-processed training set and the discriminator with the sparse-matrix-processed test set, based on the adversarial neural network algorithm.
In a specific application scenario, the training module 55 may be further configured to: if the generator fails the discriminator's check, continue training the generator and the discriminator, verify the new generator with the new discriminator after each round, and, once a new generator passes the new discriminator's verification, determine that generator to be the adversarial neural network model.
In a specific application scenario, as shown in fig. 8, the apparatus further includes: a filtration module 57;
the filtering module 57 may be configured to perform invalid-data filtering, including deduplication and dirty-data removal, on the historical service operation data and historical operation environment data.
The creating module 54 is specifically configured to create a training set and a test set according to the filtered historical service operation data and historical operation environment data.
It should be noted that for other corresponding descriptions of the functional units of the server-side flow limiting apparatus for service traffic provided in this embodiment, reference may be made to the corresponding descriptions in fig. 2 and fig. 3, which are not repeated here.
Based on the method shown in fig. 1, the present application correspondingly further provides a storage medium on which a computer program is stored; when executed by a processor, the program implements the client-side flow limiting method for service traffic shown in fig. 1. Based on the methods shown in fig. 2 and fig. 3, the present application further provides another storage medium on which a computer program is stored; when executed by a processor, the program implements the server-side flow limiting method for service traffic shown in fig. 2 and fig. 3.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
Based on the method shown in fig. 1 and the virtual apparatus embodiments shown in fig. 5 and fig. 6, in order to achieve the above object, an embodiment of the present application further provides a client device, which may specifically be a personal computer, a tablet computer, a smart phone, a smart watch, a smart bracelet, or another network device. The client device includes a storage medium and a processor; the storage medium is for storing a computer program; the processor is for executing the computer program to implement the client-side flow limiting method for service traffic shown in fig. 1.
Based on the methods shown in fig. 2 and fig. 3 and the virtual apparatus embodiments shown in fig. 7 and fig. 8, in order to achieve the above object, an embodiment of the present application further provides a server device, which may specifically be a gateway device, a server, or another network device. The device includes a storage medium and a processor; the storage medium is for storing a computer program; the processor is for executing the computer program to implement the server-side flow limiting method for service traffic shown in fig. 2 and fig. 3.
Optionally, both physical devices may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface may also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface), and the like.
Those skilled in the art will appreciate that the physical device structures of the client device and the server device provided in this embodiment do not constitute a limitation on the two physical devices, which may include more or fewer components, combine some components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the two physical devices described above, supporting the operation of the information processing program as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Based on the above, further, the present embodiment also provides a flow limiting system for service traffic, as shown in fig. 9, the system includes a server device 61, a client device 62;
therein, the client device 62 may be used to perform the method as shown in fig. 1, and the server device 61 may be used to perform the method as shown in fig. 2 and 3.
The client device 62 is configured to send a service request to the server device 61, where the service request carries user service operation data and operation environment data;
a server device 61 operable to receive concurrent service requests from the respective client devices 62; when a cluster needs to process concurrent service requests in a centralized manner, according to user service operation data and operation environment data carried in the service requests, maximum service flow which can be borne by a server in the cluster when the server processes historical similar data is referred, and the service flow processed by the server in the cluster is limited.
The client device 62 may also be configured to receive a returned service corresponding to the service request.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, and can also be implemented by hardware. By applying the technical scheme of the present application, an intelligent flow limiting scheme based on an adversarial neural network is provided to ensure the availability of services and systems. Aimed at the traffic characteristics of the service, an artificial intelligence model is obtained by training on service data with the adversarial neural network, so that the online system's flow limiting process is adjusted intelligently. The scheme meets the requirements of service traffic processing, is more flexible than traditional flow limiting schemes, and improves the efficiency and accuracy of limiting service traffic.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A method for limiting traffic flow, comprising:
sending a service request carrying user service operation data and operation environment data so as to limit the service flow processed by each server when the cluster needs to process concurrent service requests in a centralized manner according to the user service operation data and the operation environment data and by referring to the maximum service flow which can be borne by each server when the server in the cluster processes historical similar data;
and receiving the service corresponding to the service request.
2. The method of claim 1, wherein the maximum traffic flow is calculated by an adversarial neural network model, the method further comprising:
collecting and uploading historical business operation data and operation environment data of the user, so as to create a training set and a test set corresponding to the adversarial neural network model.
3. A method for limiting traffic flow, comprising:
receiving concurrent service requests, wherein the service requests carry user service operation data and operation environment data;
when a cluster needs to process concurrent service requests in a centralized manner, according to service operation data and operation environment data of each user, referring to maximum service flow which can be borne by each server when the server in the cluster processes historical similar data, and limiting the service flow processed by each server in the cluster.
4. A device for limiting traffic flow, comprising:
a sending module, configured to send a service request carrying user service operation data and operation environment data, so that when a cluster needs to centrally process concurrent service requests, according to each user service operation data and operation environment data, referring to a maximum service flow that can be borne by a server in the cluster when the server processes historical similar data, and limiting service flows that are processed by the server;
and the receiving module is used for receiving the service corresponding to the service request.
5. A device for limiting traffic flow, comprising:
the receiving module is used for receiving concurrent service requests, wherein the service requests carry user service operation data and operation environment data;
and the limiting module is used for referring to the maximum service flow which can be borne by the servers in the cluster when the servers in the cluster process the historical similar data according to the service operation data and the operation environment data of each user when the cluster needs to process the concurrent service requests in a centralized manner, and limiting the service flow processed by the servers in the cluster.
6. A storage medium having stored thereon a computer program, characterized in that the program, when being executed by a processor, implements the method of throttling traffic of any of claims 1 to 2.
7. A client device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method of throttling traffic flow of any one of claims 1 to 2 when executing the program.
8. A storage medium having stored thereon a computer program, characterized in that the program, when being executed by a processor, implements the method of throttling traffic of claim 3.
9. A server device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method of throttling traffic of claim 3 when executing the program.
10. A system for limiting traffic flow, comprising: a client device as claimed in claim 7 and a server device as claimed in claim 9.
CN201910943938.XA 2019-09-30 2019-09-30 Method, device and system for limiting service flow Active CN110830384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910943938.XA CN110830384B (en) 2019-09-30 2019-09-30 Method, device and system for limiting service flow

Publications (2)

Publication Number Publication Date
CN110830384A true CN110830384A (en) 2020-02-21
CN110830384B CN110830384B (en) 2023-04-18

Family

ID=69548618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910943938.XA Active CN110830384B (en) 2019-09-30 2019-09-30 Method, device and system for limiting service flow

Country Status (1)

Country Link
CN (1) CN110830384B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2214725A1 (en) * 1995-03-06 1996-09-12 James Francis Furka Service management operation and support system and method
KR20110028106A (en) * 2009-09-11 2011-03-17 한국전자통신연구원 Apparatus for controlling distributed denial-of-service attack traffic based on source IP history, and method thereof
CN102111276A (en) * 2009-12-29 2011-06-29 北京四达时代软件技术股份有限公司 Real-time charging method and system
CN107360117A (en) * 2016-05-09 2017-11-17 阿里巴巴集团控股有限公司 Method, apparatus and system for data processing
CN107592345A (en) * 2017-08-28 2018-01-16 中国工商银行股份有限公司 Transaction current-limiting apparatus, method and transaction system
CN107659595A (en) * 2016-07-25 2018-02-02 阿里巴巴集团控股有限公司 Method and apparatus for assessing the capability of a distributed cluster to process a specified service
CN109885399A (en) * 2019-01-17 2019-06-14 平安普惠企业管理有限公司 Data processing method, electronic device, computer equipment and storage medium
CN110061930A (en) * 2019-02-01 2019-07-26 阿里巴巴集团控股有限公司 Method and apparatus for determining a data traffic limit and a cut-off threshold

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANUJA NAGARE, SHALINI BHATIA: "Traffic Flow Control using Neural Network" *
周汝强: "Traffic Anomaly Detection Based on Network Traffic Modeling", China Master's Theses Full-text Database *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444400A (en) * 2020-04-07 2020-07-24 中国汽车工程研究院股份有限公司 Force and flow field data management method
CN113783799A (en) * 2020-06-09 2021-12-10 马上消费金融股份有限公司 Flow control method and device
CN111770002A (en) * 2020-06-12 2020-10-13 南京领行科技股份有限公司 Test data forwarding control method and device, readable storage medium and electronic equipment
CN112367266A (en) * 2020-10-29 2021-02-12 北京字节跳动网络技术有限公司 Current limiting method, current limiting device, electronic equipment and computer readable medium
CN112925639A (en) * 2021-02-10 2021-06-08 中国工商银行股份有限公司 Self-adaptive transaction current limiting method, device and system
CN115348208A (en) * 2021-04-27 2022-11-15 中移(苏州)软件技术有限公司 Flow control method and device, electronic equipment and storage medium
CN115348208B (en) * 2021-04-27 2024-04-09 中移(苏州)软件技术有限公司 Flow control method and device, electronic equipment and storage medium
CN113411233A (en) * 2021-06-17 2021-09-17 建信金融科技有限责任公司 Method and device for monitoring CPU utilization

Also Published As

Publication number Publication date
CN110830384B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110830384B (en) Method, device and system for limiting service flow
WO2020093694A1 (en) Method for generating video analysis model, and video analysis system
US11483218B2 (en) Automating 5G slices using real-time analytics
US20180260621A1 (en) Picture recognition method and apparatus, computer device and computer-readable medium
CN107741899B (en) Method, device and system for processing terminal data
US9588813B1 (en) Determining cost of service call
JP2019513246A (en) Training method of random forest model, electronic device and storage medium
CN110806954A (en) Method, device and equipment for evaluating cloud host resources and storage medium
CN112506619B (en) Job processing method, job processing device, electronic equipment and storage medium
CN109615022B (en) Model online configuration method and device
US20200394448A1 (en) Methods for more effectively moderating one or more images and devices thereof
CN109062807B (en) Method and device for testing application program, storage medium and electronic device
CN112380392A (en) Method, apparatus, electronic device and readable storage medium for classifying video
JP6900853B2 (en) Device linkage server and device linkage program
CN117834724B (en) Video learning resource management system based on big data analysis
EP3890312A1 (en) Distributed image analysis method and system, and storage medium
CN109409948B (en) Transaction abnormity detection method, device, equipment and computer readable storage medium
CN114253710A (en) Processing method of computing request, intelligent terminal, cloud server, equipment and medium
CN105530658A (en) Remote diagnosis method of wireless communication module, device and system
CN112001622A (en) Health degree evaluation method, system, equipment and storage medium of cloud virtual gateway
CN111062914A (en) Method, apparatus, electronic device and computer readable medium for acquiring facial image
US20180349190A1 (en) Process control program, process control method, information processing device, and communication device
CN107332824B (en) Cloud application identification method and device
CN113364625A (en) Data transmission method, intermediate transmission equipment and storage medium
CN112560938A (en) Model training method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant