CN110708258B - Flow control method, device, server and storage medium - Google Patents

Flow control method, device, server and storage medium Download PDF

Info

Publication number
CN110708258B
CN110708258B (application CN201910935923.9A)
Authority
CN
China
Prior art keywords
server
servers
flow
traffic
quota
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910935923.9A
Other languages
Chinese (zh)
Other versions
CN110708258A (en
Inventor
李俊良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910935923.9A priority Critical patent/CN110708258B/en
Publication of CN110708258A publication Critical patent/CN110708258A/en
Application granted granted Critical
Publication of CN110708258B publication Critical patent/CN110708258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a flow control method, a flow control device, a server and a storage medium. The method comprises the following steps: acquiring first information, wherein the first information represents the traffic generated on all second servers, the second servers are registered in a distributed cluster corresponding to a first service, and at least two second servers are registered in the distributed cluster; the traffic generated on each second server is obtained by an atomic lock counting the query requests received by the corresponding second server, the atomic lock being built into the corresponding second server; and performing flow control on the first service according to the traffic quota of each of the at least two second servers and the first information, wherein the traffic quota of each second server is determined based on the total traffic quota of the first service and the configuration parameters of the corresponding second server.

Description

Flow control method, flow control device, server and storage medium
Technical Field
The present invention belongs to the field of network technologies, and in particular, to a flow control method, apparatus, server, and storage medium.
Background
In business scenarios such as Spring Festival travel ticket grabbing, flash sales of online goods, and hot-event searches, a service system may encounter highly concurrent service requests within a short time, causing server load to rise steeply; the pressure on the servers then needs to be relieved through flow control. When the related art performs flow control on each server in a distributed cluster, the throughput of the distributed cluster is often low.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, a server, and a storage medium for flow control, so as to solve the problem that the throughput of a distributed cluster is low when flow control is performed in the related art.
The technical solutions of the embodiments of the present invention are realized as follows:
the embodiment of the invention provides a flow control method, which is applied to a first server and comprises the following steps:
acquiring first information; the first information represents the flow generated on all the second servers, the second servers are registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the flow generated on each second server is obtained by counting the query requests received by the corresponding second servers by the atomic lock; the atomic locks are arranged in the corresponding second servers;
performing flow control on the first service according to the traffic quota of each of the at least two second servers and the first information; wherein,
the traffic quota for each second server is determined based on the total traffic quota for the first service and the configuration parameters of the corresponding second server.
In the foregoing solution, the acquiring the first information includes:
receiving the generated flow reported by each of the at least two second servers;
and determining the first information by using the received generated flow reported by each second server.
In the foregoing solution, the performing, according to the traffic quota of each of the at least two second servers and the first information, the traffic control on the first service includes:
and when the flow generated by all the second servers in the distributed cluster exceeds the total flow quota, performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information.
In the foregoing solution, when performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information, the method includes:
sending a first instruction to at least one second server in the at least two second servers; the first instruction is used for indicating that the corresponding second server refuses to receive the query request when the generated flow exceeds the corresponding flow quota.
In the foregoing solution, when performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information, the method includes:
determining a difference between traffic generated on all second servers and the total traffic quota;
adding at least one second server in the distributed cluster, and setting a traffic quota for each added second server; wherein,
the number of second servers added is determined according to the difference value and the traffic quota.
In the foregoing solution, the method further includes:
acquiring second information, wherein the second information represents the upper flow limit of all third servers; the query request received by one second server is subjected to shunting processing through at least two third servers;
sending the second information to a corresponding second server so that the corresponding second server controls the flow of the corresponding third server according to the second information and the traffic generated on the corresponding third server; wherein,
the flow generated on each third server is obtained by counting the query request received by the corresponding third server by the atomic lock; the atomic lock is built in the corresponding third server.
The embodiment of the invention also provides a flow control method which is applied to a second server and comprises the following steps:
counting the received query request through a built-in atomic lock to obtain the flow generated on the second server;
sending the generated traffic to a first server, so that the first server performs flow control on a first service according to the traffic quota and first information of each of at least two second servers; wherein,
the second server is registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the first information represents the traffic generated on all the second servers; the traffic quota is determined by the first server based on a total traffic quota of the first service and a configuration parameter of the second server.
In the above scheme, the method further comprises:
receiving a first instruction; the first instruction is sent by the first server when traffic generated by all second servers in the distributed cluster exceeds the total traffic quota;
denying receipt of the query request when the generated traffic exceeds the traffic quota in response to the first instruction.
In the above scheme, the method further comprises:
receiving second information sent by the first server; the second information represents the upper flow limit of all the third servers; the query request received by the second server is subjected to shunting processing through at least two third servers;
controlling the flow of the corresponding third server according to the second information and the traffic generated on the corresponding third server; wherein,
the flow generated by each third server is obtained by counting the query request received by the corresponding third server by the atomic lock; the atomic lock is built in the corresponding third server.
The embodiment of the invention also provides a flow control device, which is arranged on the first server, and the device comprises:
a first acquisition unit configured to acquire first information; the first information represents the flow generated on all the second servers, the second servers are registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the flow generated on each second server is obtained by counting the query requests received by the corresponding second servers by the atomic lock; the atomic locks are arranged in the corresponding second servers;
a flow control unit, configured to perform flow control on the first service according to a traffic quota of each of the at least two second servers and the first information; wherein,
the traffic quota for each second server is determined based on the total traffic quota for the first service and the configuration parameters of the corresponding second server.
The embodiment of the present invention further provides a flow control device, which is arranged on a second server, and the device includes:
the statistical unit is used for carrying out statistics on the received query request through a built-in atomic lock to obtain the flow generated on the second server;
a sending unit, configured to send the generated traffic to a first server, so that the first server performs flow control on a first service according to a traffic quota and first information of each of at least two second servers; wherein,
the second server is registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the first information represents the traffic generated on all the second servers; the traffic quota is determined by the first server based on a total traffic quota of the first service and a configuration parameter of the second server.
An embodiment of the present invention further provides a server, including: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured to execute the steps of implementing any of the above methods applied to the first server when running the computer program; alternatively, the steps of implementing any of the methods described above as applied to the second server are performed.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of any one of the methods applied to the first server; alternatively, the computer program when executed by the processor performs the steps of any of the methods described above as applied to the second server.
In the embodiments of the present invention, the traffic generated by each second server is counted based on an atomic lock, achieving accurate traffic statistics. On this basis, the first server performing flow control in the distributed cluster can control traffic precisely and carry out service expansion or service degradation on the service, so that server hardware utilization is improved in high-concurrency business scenarios and the overall throughput of the distributed cluster is increased.
Drawings
Fig. 1 is a diagram of an implementation example of a distributed cluster according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of implementation of a first server side of a flow control method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart illustrating implementation of a first server side of a flow control method according to another embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating an implementation of a first server side of a flow control method according to another embodiment of the present invention;
fig. 5 is a schematic flowchart illustrating an implementation process of a first server side of a flow control method according to another embodiment of the present invention;
fig. 6 is a schematic diagram of a multi-layer architecture of a distributed cluster according to an embodiment of the present invention;
fig. 7 is a schematic flow chart illustrating an implementation of a first server side of a flow control method according to another embodiment of the present invention;
fig. 8 is a schematic flow chart of an implementation of a second server side of a flow control method according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of a second server side implementing the flow control method according to another embodiment of the present invention;
fig. 10 is a schematic flow chart of a second server side of a flow control method according to another embodiment of the present invention;
fig. 11 is a schematic diagram of a distributed cluster structure provided in an application embodiment of the present invention;
fig. 12 is a schematic structural diagram of a flow control device disposed in a first server according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a flow control device installed in a second server according to an embodiment of the present invention;
fig. 14 is a schematic diagram of a hardware composition structure of a server according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The technical means described in the embodiments of the present invention may be arbitrarily combined without conflict.
In addition, in the embodiments of the present invention, "first", "second", and the like are used for distinguishing similar objects, and are not necessarily used for describing a specific order or a sequential order.
Fig. 1 shows an implementation example of a distributed cluster provided in an embodiment of the present invention, and in order to better explain the embodiment of the present invention, first, a distributed cluster related in the embodiment of the present invention is explained with reference to fig. 1.
In the embodiment of the present invention, at least two second servers are registered in the distributed cluster. The distributed cluster provides service support for a business: the business is split into multiple sub-services deployed on different second servers, and the first server performs flow control on each second server in the distributed cluster. In practical applications, as an example shown in fig. 1, ZooKeeper serves as the cluster management component of the distributed cluster, and each second server 12 in the distributed cluster is registered in ZooKeeper as a search service instance. As shown in fig. 1, there are 4 registered nodes in ZooKeeper; each registered node is a second server, and the first server is a management and control node that performs flow control on the nodes registered in ZooKeeper based on a flow control policy.
Fig. 2 shows an implementation flow of the flow control method provided in the embodiment of the present invention, and in the embodiment of the present invention, an execution subject of the flow control method is the above first server.
S201: acquiring first information; the first information represents the flow generated on all the second servers, the second servers are registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the flow generated on each second server is obtained by counting the query requests received by the corresponding second servers by the atomic lock; the atomic locks are built into the respective second servers.
Here, the atomic lock is a lightweight lock variable whose granularity matches a single multiprocessor instruction; in practice it can be represented by an atomic type (e.g., std::atomic in C/C++) under Linux. In the embodiment of the present invention, each second server has an atomic lock built in, and the atomic lock performs accumulation through an atomic integer variable: each time the second server receives a query request, the atomic lock performs an increment or decrement operation. The traffic generated on the second server can therefore be counted through the atomic lock built into the second server.
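As a minimal illustration of this counting mechanism, the sketch below uses a `std::atomic<long>` incremented once per query request; the helper function and its names are illustrative, not taken from the patent.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Simulates `threads` worker threads each receiving `per_thread` query
// requests, all counted on one shared atomic integer (the "atomic lock").
long count_concurrently(int threads, int per_thread) {
    std::atomic<long> traffic{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&traffic, per_thread] {
            for (int i = 0; i < per_thread; ++i)
                traffic.fetch_add(1, std::memory_order_relaxed);  // one request counted
        });
    for (auto& w : workers) w.join();
    return traffic.load();
}
```

Because `fetch_add` is atomic, the final count equals the number of received requests exactly, even under contention, which is what makes the per-server traffic statistics reliable.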
In practical applications, because the atomic lock can only perform atomic accumulation on integer variables, other complex data structures, such as those involving non-integer variables, can first be decomposed into forms supported by the atomic lock, after which the flow control method provided by the embodiment of the present invention can be applied.
It should be noted that, in the embodiment of the present invention, each time the second server receives a query request, the atomic lock performs an increment or a decrement operation. The increment operation counts traffic forward: when a query request is received, the corresponding traffic value is increased by 1, which amounts to counting the received query requests. The decrement operation suits business scenarios such as advertising: for example, if an advertiser purchases one million units of traffic, the value is initially set to one million, and each time a query request is received the corresponding traffic value is decreased by 1 until the purchased traffic is consumed.
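The decrement variant can be sketched as a compare-and-swap loop over the remaining purchased volume; the `PurchasedQuota` name and structure are assumptions for illustration, not from the patent.

```cpp
#include <atomic>
#include <cassert>

// An advertiser purchases a fixed traffic volume; every served query
// request consumes one unit until the purchased quota is exhausted.
struct PurchasedQuota {
    std::atomic<long> remaining;
    explicit PurchasedQuota(long bought) : remaining(bought) {}

    // Returns true if a unit was consumed, false once the quota is used up.
    bool try_serve() {
        long cur = remaining.load();
        while (cur > 0)
            if (remaining.compare_exchange_weak(cur, cur - 1))
                return true;   // cur is refreshed on failure and we retry
        return false;
    }
};
```

The compare-and-swap loop guarantees the counter never goes below zero even when many threads serve requests concurrently.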
In practical applications, the query request may also be expressed as the number of Page Views (PV), i.e., the number of times clients access the service.
In practical applications, the traffic may be the queries per second (QPS), i.e., the number of query requests received by the second server per second, or the transactions per second (TPS). A transaction is the process in which a client sends a request to a server and the server responds to it; TPS is thus the number of transactions the second server processes per second. When the traffic is QPS or TPS, the atomic lock counts the query requests received by the second server each second, so the traffic generated on each second server can be obtained from the atomic lock's statistics. In practical applications, the traffic may also be the throughput over a set time period. Taking a set period of 5 seconds as an example, the corresponding throughput is the number of query requests the second server receives, or the number of transactions it processes, every 5 seconds. In this case, the traffic statistics effectively smooth peaks in QPS or TPS within the set period: only the total number of query requests received, or transactions processed, in the period is considered, not the instantaneous peak within it.
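One simple way to realize the per-second QPS statistic is a fixed-window counter keyed by the one-second window index. The patent does not fix a windowing scheme, so this single-threaded sketch is only illustrative.

```cpp
#include <cassert>

// Fixed-window counter: counts query requests per one-second window.
// The window index would be derived from the wall clock in practice;
// here it is passed in directly so the logic is easy to exercise.
class WindowedCounter {
    long window_ = -1;  // index of the current one-second window
    long count_ = 0;    // query requests counted within that window
public:
    // Records one query request; returns the count so far in this window.
    long record(long window_index) {
        if (window_index != window_) {  // a new second has started
            window_ = window_index;
            count_ = 0;
        }
        return ++count_;
    }
};
```

A 5-second throughput window works the same way with `window_index = seconds / 5`, which is what "peak clipping" over a set period amounts to.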
Here, during the online operation of the service, the second server reports the traffic generated on it to the first server according to the statistics of its built-in atomic lock. Because the atomic counter can be shared safely across threads, the traffic generated on each second server can be sent to the first server through distributed network registration, and the first server determines the first information from the traffic generated on each second server.
In practical application, the second server may report the generated traffic to the first server for aggregation by adopting a mode of reporting the traffic periodically. The first server receives the generated flow reported by each of the at least two second servers, and determines the first information by using the received generated flow reported by each of the at least two second servers.
S202: and performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information.
Here, the traffic quota of each second server is determined based on the total traffic quota of the first service and the configuration parameters of the corresponding second server. The total traffic quota caps the traffic brought by the service; that is, the traffic generated by the distributed cluster corresponding to the service must not exceed the total traffic quota. After determining the total traffic quota of the service, the first server determines each second server's traffic quota according to its configuration parameters, so that the traffic generated on each second server does not exceed its quota. The hardware configuration of a second server can be determined from its configuration parameters, and from this its traffic carrying capacity follows, so the traffic quota is configured according to each second server's carrying capacity. For example, if the total traffic quota of the service is 10 million accesses and every second server in the distributed cluster has the same hardware configuration, the total traffic quota can be allocated evenly across the second servers; as shown in fig. 1, with 4 second servers, each can be allocated 2.5 million accesses.
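The even split in the example above reduces to simple arithmetic; the helper below is a sketch for identically configured servers (a weighted split by configuration parameters would replace the plain division).

```cpp
#include <cassert>

// Even split of a total traffic quota across identically configured
// second servers. With heterogeneous hardware, each server's share
// would instead be weighted by its configuration parameters.
long per_server_quota(long total_quota, int server_count) {
    return total_quota / server_count;
}
```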
Here, the total traffic quota is defined in terms of the kind of traffic being controlled; for example, for a cloud service, the total traffic quota may be an access-count quota or a bandwidth quota.
In the distributed cluster, the first server performs flow control on the service corresponding to the distributed cluster according to the flow quota of the second server and the first information, and further performs flow control on the service when the service providing capability of the distributed cluster for the service reaches an upper limit. As shown in fig. 3, the performing, according to the traffic quota of each of the at least two second servers and the first information, the flow control on the first service includes:
s2021: and when the flow generated by all the second servers in the distributed cluster exceeds the total flow quota, performing flow control on the first service according to the flow quota of each second server in the at least two second servers and the first information.
The first information represents the traffic generated by each second server in the distributed cluster; the first server therefore adds up the traffic generated by each second server according to the first information to determine the traffic generated by all second servers in the distributed cluster. When the traffic generated by all second servers in the distributed cluster exceeds the total traffic quota, flow control is performed on the first service according to each second server's traffic quota and the first information.
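The controller-side check can be sketched as summing the reported per-server traffic (the first information) and comparing it against the total quota; the function name is illustrative.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Sums the per-server traffic reported to the first server (the "first
// information") and checks it against the service's total traffic quota.
bool over_total_quota(const std::vector<long>& per_server_traffic,
                      long total_quota) {
    long total = std::accumulate(per_server_traffic.begin(),
                                 per_server_traffic.end(), 0L);
    return total > total_quota;
}
```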
Here, when the service providing capability of the distributed cluster for the service has reached the upper limit, the first service may be generally controlled in a manner of service degradation or service expansion. In an embodiment, corresponding to the manner of service degradation, as shown in fig. 4, the performing, according to the traffic quota of each of the at least two second servers and the first information, the flow control on the first traffic includes:
s2022: sending a first instruction to at least one second server in the at least two second servers; the first instruction is used for indicating that the corresponding second server refuses to receive the query request when the generated flow exceeds the corresponding flow quota.
Here, the first server sends a first instruction to the second server. After receiving the first instruction, the second server determines its generated traffic from the statistics of its built-in atomic lock and compares it with its traffic quota; when the generated traffic exceeds the quota, the second server refuses to receive query requests, i.e., it no longer responds to query requests sent to it. This service-degradation mode of flow control safeguards the second server's throughput. For example, suppose the second server's QPS quota is 100 and, in a flash-sale scenario, the QPS arriving at the second server spikes to 1000 in a short time. Without service degradation, the sudden surge of traffic would cause access congestion and paralyze the second server, so that none of the 1000 query requests could be answered; with service degradation, the second server can at least ensure that 100 of the 1000 query requests are answered.
It should be noted that, because different sub-services of the business are deployed on different second servers, when performing service degradation the first instruction may be sent to the second servers where non-critical sub-services are deployed, according to service priority, so that the non-critical services are degraded while critical services continue to run normally.
In practical applications, when the second server refuses to receive query requests, the client that initiated a query request may, for example, find that the access page corresponding to the service fails to load or that an initiated search returns no results.
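A minimal sketch of the degradation check: once the first instruction is in effect, requests beyond the quota are refused, so at least `quota` requests in a burst are still served. The class name and structure are assumptions for illustration.

```cpp
#include <atomic>
#include <cassert>

// After the first instruction is received, each incoming query request is
// counted atomically and refused once the count has passed the quota.
class DegradingLimiter {
    std::atomic<long> traffic_{0};
    const long quota_;
public:
    explicit DegradingLimiter(long quota) : quota_(quota) {}

    // Returns true if the query request is accepted, false if refused.
    bool accept() {
        long seen = traffic_.fetch_add(1) + 1;  // atomic-lock style count
        return seen <= quota_;
    }
};
```

With a quota of 100 and a burst of 1000 requests, exactly the first 100 are accepted, matching the flash-sale example above.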
In an embodiment, corresponding to a manner of service volume expansion, as shown in fig. 5, when performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information, the method includes:
s2023: determining a difference between traffic generated on all second servers and the total traffic quota.
S2024: adding at least one second server in the distributed cluster, and setting a flow quota for each added second server; wherein the content of the first and second substances,
the number of second servers added is determined according to the difference and the traffic quota.
Service expansion adds second servers to the distributed cluster corresponding to the service so that the cluster's traffic processing capacity matches the generated traffic. As described above, the traffic quota of each second server in the distributed cluster is allocated from the total traffic quota of the service; once high concurrent traffic is generated on the distributed cluster and greatly exceeds the total traffic quota, the existing second servers clearly cannot absorb it. Here, because the traffic generated on each second server is obtained from atomic-lock statistics, which are accurate to each query request, the number of second servers to add to the distributed cluster, and the traffic quotas to allocate to them, can be determined precisely from the difference between the traffic generated by the distributed cluster and the total traffic quota. In practical applications, if the online second servers in the distributed cluster have the same hardware configuration as the candidate second servers, their traffic quotas are also the same, so dividing the difference between the traffic generated by all second servers and the total traffic quota of the service by the per-server traffic quota gives the number of second servers that need to be added.
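The scale-out arithmetic described above can be sketched as dividing the excess traffic by the per-server quota; rounding up (an assumption here) ensures the added capacity covers the whole excess.

```cpp
#include <cassert>

// Number of second servers to add: ceil(excess / per_server_quota),
// assuming the candidate servers share the online servers' hardware
// configuration and therefore the same per-server quota.
long servers_to_add(long generated_traffic, long total_quota,
                    long per_server_quota) {
    long excess = generated_traffic - total_quota;
    if (excess <= 0) return 0;  // within quota, no expansion needed
    return (excess + per_server_quota - 1) / per_server_quota;  // ceiling
}
```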
In the embodiment of the present invention, the first server allocates a traffic quota to each second server, and with reference to fig. 1, as shown in fig. 6, the second server obtains information from the first server, so as to determine the traffic quota. In the distributed cluster, a multi-layer mechanism is adopted to provide service support for services, as shown in fig. 6, each second server is configured with at least two third servers, and the query request received by the second server is distributed by the third servers. In practical application, the third server may be a database server, the second server serves as a traffic entry of the corresponding sub-service, the received query request is issued to one of the third servers, the third server responds to the query request, the database is used to search for the data requested by the query request, and the searched corresponding data is returned to the corresponding client, so that the response to the query request is completed.
As shown in fig. 7, the method further comprises:
S203: acquiring second information, wherein the second information represents the traffic upper limits of all third servers; and the query requests received by one second server are distributed among at least two third servers.
Generally speaking, different third servers acting as database servers run on machines of different models with different hardware configurations, and therefore provide different database query service capabilities. In practical application, before a second server registers with the distributed cluster and begins generating traffic, each third server determines its traffic upper limit through a stress test and reports that upper limit to the first server for recording; the query service capability of the third server is thus converted into a quantitative index, which the second server later uses to allocate traffic to the third servers reasonably through flow control. Here, the traffic upper limit may be a QPS upper limit or a TPS upper limit of the third server. In practical applications, the stress test may also be performed after the second server registers with the distributed cluster and generates traffic.
S204: and sending the second information to a corresponding second server so that the corresponding second server performs flow control on a corresponding third server according to the second information and the flow generated on the corresponding third server.
After acquiring the second information representing the traffic upper limit of each third server, the first server records those upper limits; the second server obtains from the first server an information list recording the traffic upper limit of each third server, and controls the third servers according to that list and the traffic generated on each third server. Here, each third server also has a built-in atomic lock, which counts the query requests received by the third server to obtain the traffic generated on it. The query requests received by a third server are those that the second server has distributed to it. In practical application, if the traffic generated by a third server exceeds or approaches its traffic upper limit, the second server may raise a traffic early warning for that third server, or distribute subsequent query requests to other third servers, thereby implementing flow control over the third server.
In practical applications, the third servers may be controlled in combination with a Round-Robin algorithm or a consistent hashing algorithm.
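As a concrete illustration of the distribution step above, the sketch below round-robins query requests across third servers while skipping any backend whose counted traffic has reached its upper limit; the class and server names are hypothetical, and a real second server would reset the counters each statistics period.

```python
from itertools import cycle

class Distributor:
    """Round-robin dispatch of query requests to third servers, respecting
    each backend's traffic upper limit (illustrative sketch)."""
    def __init__(self, limits):
        self.limits = dict(limits)             # server id -> traffic upper limit
        self.counts = {s: 0 for s in limits}   # per-backend request counters
        self._ring = cycle(list(limits))       # round-robin order

    def dispatch(self):
        # Try each backend at most once per dispatch attempt.
        for _ in range(len(self.limits)):
            server = next(self._ring)
            if self.counts[server] < self.limits[server]:
                self.counts[server] += 1
                return server
        return None  # all backends at their limit: raise a traffic early warning
```

With limits `{"db1": 2, "db2": 1}`, three requests land on db1, db2, db1 in turn, and a fourth dispatch returns `None`, signalling that every backend is saturated.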
Correspondingly, the embodiment of the invention provides a flow control method which is applied to a second server. Fig. 8 shows an implementation flow of a flow control method according to an embodiment of the present invention. As shown in fig. 8, the flow control method executed on the second server includes:
S801: counting the received query requests through a built-in atomic lock to obtain the traffic generated on the second server.
In the embodiment of the present invention, each second server has a built-in atomic lock that performs increment operations on an integer variable of an atomic type: each time the second server receives a query request, the atomic lock increments the variable, so the traffic generated on the second server can be obtained from the statistics of its built-in atomic lock.
In practical applications, the traffic may be QPS or TPS. For the second server, QPS is the number of query requests it receives per second; TPS is the number of transactions it processes per second, where a transaction is the process of a client sending a request to the server and the server responding to it. When the traffic is QPS or TPS, the atomic lock counts the query requests received by the second server each second, so the traffic generated on each second server can be obtained from the atomic lock's statistical result. In practical applications, the traffic may also be the throughput within a set time period. Taking a set time period of 5 seconds as an example, the corresponding throughput is the number of query requests the second server receives, or the number of transactions it processes, every 5 seconds. In this case the traffic statistics effectively perform peak clipping on QPS or TPS within the set time period: only the total number of query requests received or transactions processed in the period is considered, not the instantaneous peak within it.
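A minimal sketch of the atomic-lock statistics described above, assuming a lock-guarded integer (in a JVM service this would typically be an `AtomicLong`); the class name is illustrative, and `drain` models reading and resetting the counter once per second or per set time period.

```python
import threading

class QpsCounter:
    """Thread-safe per-period request counter, mimicking the built-in
    atomic-lock statistics on a second server (illustrative sketch)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def record_request(self):
        with self._lock:          # atomic increment per received query request
            self._count += 1

    def drain(self):
        """Read and reset the counter; called once per second (QPS/TPS) or
        once per set time period (throughput with peak clipping)."""
        with self._lock:
            count, self._count = self._count, 0
            return count
```

Because every increment happens under the lock, the count stays exact even when many request-handling threads record concurrently, which is the property the patent relies on for accurate flow control.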
S802: and sending the generated flow to a first server so that the first server performs flow control on the first service according to the flow quota of each of at least two second servers and the first information.
Here, during online operation of the service, each second server reports the traffic generated on it to the first server according to the statistical result of its built-in atomic lock. The first server determines the first information from the traffic generated on all second servers and performs flow control on the service corresponding to the distributed cluster according to the second servers' traffic quotas and the first information; in particular, it performs flow control on the service when the cluster's capacity to serve the service has reached its upper limit.
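The reporting loop just described can be sketched as follows; `report` stands in for the hypothetical RPC that delivers the statistic to the first server, and the `period` and `cycles` parameters are illustrative (a real reporter would run until shutdown).

```python
import threading
import time

def start_reporter(get_and_reset_count, report, period=1.0, cycles=None):
    """Background loop: every `period` seconds, drain the local atomic
    counter and report the generated traffic to the first server.
    `report` is a stand-in for the actual reporting RPC (hypothetical)."""
    def loop():
        n = 0
        while cycles is None or n < cycles:
            time.sleep(period)
            report(get_and_reset_count())  # e.g. QPS for the last period
            n += 1
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

With a 5-second period this naturally produces the peak-clipped throughput statistic mentioned earlier, since only the drained total per period is reported.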
In an embodiment, corresponding to the use of service degradation when the distributed cluster's capability to serve the service has reached its upper limit, as shown in fig. 9, the method further includes:
S803: receiving a first instruction; the first instruction is sent by the first server when the traffic generated by all second servers in the distributed cluster exceeds the total traffic quota.
S804: in response to the first instruction, refusing to receive the query request when the generated traffic exceeds the traffic quota.
When the traffic generated by the distributed cluster exceeds the total traffic quota of its service, the first server sends a first instruction to the second servers. After receiving the first instruction, a second server determines the traffic it has generated from its built-in atomic lock's statistics and compares it with its traffic quota; when the generated traffic exceeds the quota, the second server refuses to receive query requests, i.e. it no longer responds to query requests sent to it. This service-degradation mode of flow control safeguards the second server's throughput. For example, suppose the second server's QPS quota is 100 and, in a flash-sale traffic scenario, its QPS surges to 1000 within a short time. Without service degradation, the sudden burst of traffic would cause access congestion on the second server and paralyze it, so that none of the 1000 query requests could be answered; with service degradation, the second server can at least guarantee that 100 of the 1000 query requests are answered.
In practical application, when the second server refuses to receive query requests, the client that initiated a query request may find, for example, that the access page corresponding to the service fails to load, or that an initiated search returns no result.
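The degradation behaviour above (quota 100, burst of 1000) can be sketched as follows; the function name is illustrative, and in a real server the rejected requests would receive an error response rather than being collected in a list.

```python
def handle_requests(requests, quota):
    """Service-degradation sketch: once the per-period count exceeds the
    second server's traffic quota, further query requests are refused
    rather than queued (illustrative, not the patent's exact code)."""
    served, rejected = [], []
    count = 0
    for req in requests:
        count += 1               # atomic-lock style accounting per request
        if count <= quota:
            served.append(req)   # within quota: respond normally
        else:
            rejected.append(req) # degraded: refuse to receive the request
    return served, rejected
```

Feeding 1000 requests through a quota of 100 serves exactly the first 100 and rejects the remaining 900, matching the flash-sale example in the text.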
As an embodiment of the present invention, as shown in fig. 10, the method further includes:
S805: receiving second information sent by the first server; the second information represents the traffic upper limits of all third servers; and the query requests received by the second server are distributed among at least two third servers.
In the distributed cluster, a multi-layer mechanism supports the service: at least two third servers are configured below each second server, and the traffic generated on a second server is distributed among its third servers. In practical application, the third server may be a database server; the second server serves as the traffic entry of the corresponding sub-service and issues each received query request to one of the third servers, which responds to it by searching the database for the requested data and returning the found data to the corresponding client, completing the response. Generally speaking, different third servers acting as database servers provide different database query service capabilities. In practical application, before a second server registers with the distributed cluster and begins generating traffic, each third server determines its traffic upper limit through a stress test and reports it to the first server for recording. Here, the traffic upper limit may be a QPS upper limit or a TPS upper limit of the third server. In practical applications, the stress test may also be performed after the second server registers with the distributed cluster and generates traffic.
S806: and controlling the flow of the corresponding third server according to the second information and the flow generated on the corresponding third server.
After acquiring the second information representing the traffic upper limit of each third server, the first server records those upper limits; the second server obtains from the first server an information list recording the traffic upper limit of each third server, and controls the third servers according to that list and the traffic generated on each third server. Here, each third server also has a built-in atomic lock, which counts the query requests received by the third server to obtain the traffic generated by it. The query requests received by a third server are those that the second server has distributed to it. In practical application, if the traffic generated by a third server exceeds or approaches its traffic upper limit, the second server may raise a traffic early warning for that third server, or distribute subsequent query requests to other third servers, thereby implementing flow control over the third server.
In practical applications, the third servers may be controlled in combination with a Round-Robin algorithm or a consistent hashing algorithm.
In the embodiment of the invention, the traffic generated by the second servers is counted with atomic locks, achieving accurate traffic statistics. On that basis, the first server responsible for flow control in the distributed cluster can control traffic precisely and perform service expansion or service degradation for the service, improving server hardware utilization and the overall throughput of the distributed cluster in high-concurrency service scenarios.
In addition, because an atomic lock counts the traffic generated on each second server, traffic statistics are no longer limited to the granularity of the service system's whole interface but can be refined to each sub-service, giving fine-grained flow control. This is particularly effective in commercial service scenarios such as advertising and finance. For example, in an advertising scenario, the traffic corresponding to each advertiser can be counted in a refined manner, so that the actual ad-delivery effect can be analyzed and billed against the traffic quotas purchased by advertisers of different grades.
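A hedged sketch of the per-sub-service refinement just described: one counter per key (e.g. per advertiser) instead of a single interface-level counter. The class and key names are hypothetical, not the patent's implementation.

```python
import threading
from collections import defaultdict

class PerKeyTrafficStats:
    """Refined traffic statistics keyed by sub-service (e.g. advertiser),
    guarded by a lock for atomic-style accuracy (illustrative sketch)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = defaultdict(int)

    def record(self, key):
        with self._lock:          # atomic increment for this sub-service
            self._counts[key] += 1

    def snapshot(self):
        """Current per-key traffic, e.g. for billing or delivery analysis."""
        with self._lock:
            return dict(self._counts)
```

A billing pass can then compare each advertiser's counted traffic against that advertiser's purchased quota, which is exactly the per-grade accounting the paragraph describes.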
Fig. 11 is a schematic diagram of flow control provided by an application embodiment of the present invention. Referring to fig. 11, in a service system the background server is the second server of the embodiment of the present invention, with a degradation device and a reporting device built in. The degradation device has a built-in atomic lock used to perform atomic-lock-level statistics on the traffic generated on the background server, and is responsible for the service-degradation operations described above; the reporting device determines the traffic generated on the background server from the atomic lock's statistical result and a set time period, and is connected to the service system's monitoring system and scheduling system. The monitoring system presents the service system's flow-control status to operation and maintenance personnel; the scheduling system, i.e. the first server of the embodiment of the present invention, performs flow control on the service system according to the corresponding flow-control policy, for example refined flow control per Application Programming Interface (API) or per service scenario. An operator may configure the corresponding flow-control policy on the first server through the operation background in fig. 11, and the first server executes flow control according to that policy.
In fig. 11, the reporting device may also be configured in a real-time computing system, for example deployed in Flink.
To implement the method of the embodiment of the present invention, an embodiment of the present invention further provides a flow control device, which is disposed on the first server in fig. 1, and as shown in fig. 12, the flow control device includes:
a first obtaining unit 1201 for obtaining first information; the first information represents the flow generated on all the second servers, the second servers are registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the flow generated on each second server is obtained by counting the query requests received by the corresponding second servers by the atomic lock; the atomic locks are arranged in the corresponding second servers;
a flow control unit 1202, configured to perform flow control on the first service according to a flow quota of each of the at least two second servers and the first information; wherein
the traffic quota for each second server is determined based on the total traffic quota for the first service and the configuration parameters for the corresponding second server.
In an embodiment, the first obtaining unit 1201 is further configured to:
receiving the generated flow reported by each of the at least two second servers;
and determining the first information by using the received generated flow reported by each second server.
In an embodiment, the flow control unit 1202 is further configured to:
and when the flow generated by all the second servers in the distributed cluster exceeds the total flow quota, performing flow control on the first service according to the flow quota of each second server in the at least two second servers and the first information.
In an embodiment, when performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information, the flow control unit 1202 is further configured to:
sending a first instruction to at least one second server in the at least two second servers; the first instruction is used for indicating that the corresponding second server refuses to receive the query request when the generated flow exceeds the corresponding flow quota.
In an embodiment, when performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information, the flow control unit 1202 is further configured to:
determining a difference between traffic generated on all second servers and the total traffic quota;
adding at least one second server in the distributed cluster, and setting a flow quota for each added second server; wherein
the number of second servers added is determined according to the difference value and the traffic quota.
In one embodiment, the flow control device may further include:
the second acquisition unit is used for acquiring second information, and the second information represents the upper flow limits of all the third servers; the query request received by one second server is subjected to shunting processing through at least two third servers;
the issuing unit is used for issuing the second information to the corresponding second server so that the corresponding second server can control the flow of the corresponding third server according to the second information and the flow generated on the corresponding third server; wherein
the flow generated on each third server is obtained by counting the query request received by the corresponding third server by the atomic lock; the atomic lock is built in the corresponding third server.
In practical applications, the first obtaining unit 1201, the flow control unit 1202, the second obtaining unit, and the issuing unit may be implemented by a processor in the flow control device. Of course, the processor needs to run the program stored in the memory to implement the functions of the above-described program modules.
To implement the method of the embodiment of the present invention, an embodiment of the present invention further provides a flow control device, which is disposed on the second server in fig. 1, and as shown in fig. 13, the flow control device includes:
a counting unit 1301, configured to count the received query request through an internal atomic lock, to obtain a traffic generated on the second server;
a sending unit 1302, configured to send the generated traffic to a first server, so that the first server performs traffic control on a first service according to a traffic quota and first information of each of at least two second servers; wherein
the second server is registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the first information represents traffic generated on all the second servers; the traffic quota is determined by the first server based on a total traffic quota of the first service and a configuration parameter of the second server.
Wherein, in an embodiment, the flow control device further comprises:
a first receiving unit for receiving a first instruction; the first instruction is sent by the first server when traffic generated by all second servers in the distributed cluster exceeds the total traffic quota;
and the response unit is used for responding to the first instruction and refusing to receive the query request when the generated flow exceeds the flow quota.
Wherein, in an embodiment, the flow control device further comprises:
a second receiving unit, configured to receive second information sent by the first server; the second information represents the upper flow limit of all the third servers; the query request received by the second server is subjected to shunting processing through at least two third servers;
the control unit is used for controlling the flow of the corresponding third server according to the second information and the flow generated on the corresponding third server; wherein
the flow generated by each third server is obtained by counting the query request received by the corresponding third server by the atomic lock; the atomic lock is built in the corresponding third server.
In actual application, the statistical unit 1301, the sending unit 1302, the first receiving unit, the responding unit, the second receiving unit and the control unit may be implemented by a processor in the flow control device. Of course, the processor needs to run the program stored in the memory to implement the functions of the above-described program modules.
It should be noted that: in the flow control device provided in the embodiments of fig. 12 and 13, the division into the above program modules is merely illustrative; in practical applications, the processing may be distributed among different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above. In addition, the flow control device and the flow control method provided by the above embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not described here again.
Based on the hardware implementation of the program module, in order to implement the method of the embodiment of the present invention, the embodiment of the present invention further provides a server. Fig. 14 is a schematic diagram of a hardware composition structure of a server according to an embodiment of the present invention, and as shown in fig. 14, the server includes:
a communication interface 1401 capable of exchanging information with other devices such as a network device and the like;
the processor 1402 is connected to the communication interface 1401 to implement information interaction with other devices, and is configured to execute a method provided by one or more technical solutions of the first server side or execute a method provided by one or more technical solutions of the second server side when running a computer program. And the computer program is stored on the memory 1403.
Of course, in practice, the various components in the server are coupled together by a bus system 1404. It is understood that the bus system 1404 is used to enable connection and communication between these components. The bus system 1404 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, the various buses are labeled as bus system 1404 in fig. 14.
The memory 1403 in the embodiment of the present invention is used to store various types of data to support operations in the server. Examples of such data include: any computer program for operating on a server.
It will be appreciated that the memory 1403 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), Flash Memory, magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 1403 described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 1402, or implemented by the processor 1402. The processor 1402 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 1402. The processor 1402 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 1402 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 1403, and the processor 1402 reads the program in the memory 1403 and in combination with its hardware performs the steps of the aforementioned method.
Optionally, when the processor 1402 executes the program, implementing a corresponding process implemented by a server in each method according to the embodiment of the present invention, which is not described herein again for brevity.
In an exemplary embodiment, the present invention further provides a storage medium, specifically a computer-readable storage medium, for example, a memory storing a computer program, which is executable by a processor of a server to perform the steps of the foregoing method. The computer readable storage medium may be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash Memory, magnetic surface Memory, optical disk, or CD-ROM.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (13)

1. A flow control method applied to a first server, the method comprising:
acquiring first information; the first information represents the flow generated on all the second servers, the second servers are registered in a distributed cluster corresponding to the first service, and at least two second servers are registered in the distributed cluster; the flow generated on each second server is obtained by counting the query requests received by the corresponding second servers by the atomic lock; the atomic locks are arranged in the corresponding second servers;
performing flow control on the first service according to the flow quota of each of the at least two second servers and the first information; wherein
the traffic quota for each second server is determined based on the total traffic quota for the first service and the configuration parameters of the corresponding second server.
2. The method of claim 1, wherein obtaining the first information comprises:
receiving the generated flow reported by each of the at least two second servers;
and determining the first information by using the received generated flow reported by each second server.
3. The method according to claim 1, wherein the performing, according to the traffic quota of each of the at least two second servers and the first information, the flow control on the first service includes:
and when the flow generated by all the second servers in the distributed cluster exceeds the total flow quota, performing flow control on the first service according to the flow quota of each second server in the at least two second servers and the first information.
4. The method according to claim 3, wherein the performing the flow control on the first service according to the flow quota of each of the at least two second servers and the first information includes:
sending a first instruction to at least one second server in the at least two second servers; the first instruction is used for indicating that the corresponding second server refuses to receive the query request when the generated flow exceeds the corresponding flow quota.
5. The method of claim 3, wherein performing flow control on the first service according to the traffic quota of each of the at least two second servers and the first information comprises:
determining the difference between the traffic generated on all second servers and the total traffic quota; and
adding at least one second server to the distributed cluster and setting a traffic quota for each added second server, wherein the number of second servers added is determined according to the difference and the traffic quota.
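Claim 5 derives the number of servers to add from the overflow and the per-server quota; one natural reading (an assumption, since the patent leaves the rounding rule open) is a ceiling division of the difference by the quota each new server can absorb:

```python
import math

def servers_to_add(generated_total, total_quota, per_server_quota):
    """Number of second servers to add so the overflow traffic fits.
    Assumes each added server absorbs up to per_server_quota and the
    count is rounded up; the patent does not fix the exact rule."""
    difference = generated_total - total_quota
    if difference <= 0:
        return 0  # no overflow, nothing to add
    return math.ceil(difference / per_server_quota)

n = servers_to_add(12500, 10000, 1000)
# overflow of 2500 at 1000 per server -> 3 new servers
```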
6. The method of claim 1, further comprising:
acquiring second information, wherein the second information represents the traffic upper limits of all third servers, and the query requests received by one second server are distributed for processing across at least two third servers; and
sending the second information to the corresponding second server, so that the corresponding second server performs flow control on the corresponding third servers according to the second information and the traffic generated on the corresponding third servers, wherein the traffic generated on each third server is obtained by counting, with an atomic lock, the query requests received by that third server; the atomic lock is built into the corresponding third server.
7. A flow control method applied to a second server, comprising:
counting the received query requests with a built-in atomic lock to obtain the traffic generated on the second server; and
sending the generated traffic to a first server, so that the first server performs flow control on a first service according to the traffic quota of each of at least two second servers and first information, wherein the second server is registered in a distributed cluster corresponding to the first service, at least two second servers are registered in the distributed cluster, the first information represents the traffic generated on all second servers, and the traffic quota is determined by the first server based on the total traffic quota of the first service and the configuration parameters of the second server.
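The "built-in atomic lock" statistics of claim 7 amount to a thread-safe request counter. A minimal stand-in (class name assumed; Python's `threading.Lock` substitutes for whatever lock primitive an implementation would use) looks like this:

```python
import threading

class AtomicCounter:
    """Counts received query requests under a lock so that concurrent
    handler threads never lose an increment; a minimal stand-in for
    the atomic-lock traffic statistics described in claim 7."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def record_request(self):
        with self._lock:
            self._count += 1

    def traffic(self):
        with self._lock:
            return self._count

counter = AtomicCounter()
threads = [
    threading.Thread(
        target=lambda: [counter.record_request() for _ in range(1000)]
    )
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# 4 threads x 1000 requests: no increments lost under contention
```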
8. The method of claim 7, further comprising:
receiving a first instruction, the first instruction being sent by the first server when the traffic generated by all second servers in the distributed cluster exceeds the total traffic quota; and
in response to the first instruction, rejecting query requests when the generated traffic exceeds the traffic quota.
9. The method of claim 7, further comprising:
receiving second information sent by the first server, wherein the second information represents the traffic upper limits of all third servers, and the query requests received by the second server are distributed for processing across at least two third servers; and
performing flow control on the corresponding third servers according to the second information and the traffic generated on the corresponding third servers, wherein the traffic generated on each third server is obtained by counting, with an atomic lock, the query requests received by that third server; the atomic lock is built into the corresponding third server.
10. A flow control apparatus, provided on a first server, the apparatus comprising:
a first acquisition unit configured to acquire first information, wherein the first information represents the traffic generated on all second servers, the second servers are registered in a distributed cluster corresponding to a first service, and at least two second servers are registered in the distributed cluster; the traffic generated on each second server is obtained by counting, with an atomic lock, the query requests received by that second server; the atomic lock is built into the corresponding second server; and
a flow control unit configured to perform flow control on the first service according to the traffic quota of each of the at least two second servers and the first information, wherein the traffic quota of each second server is determined based on the total traffic quota of the first service and the configuration parameters of the corresponding second server.
11. A flow control apparatus, provided on a second server, the apparatus comprising:
a statistics unit configured to count the received query requests with a built-in atomic lock to obtain the traffic generated on the second server; and
a sending unit configured to send the generated traffic to a first server, so that the first server performs flow control on a first service according to the traffic quota of each of at least two second servers and first information, wherein the second server is registered in a distributed cluster corresponding to the first service, at least two second servers are registered in the distributed cluster, the first information represents the traffic generated on all second servers, and the traffic quota is determined by the first server based on the total traffic quota of the first service and the configuration parameters of the second server.
12. A server, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured, when running the computer program, to implement the steps of the method according to any one of claims 1 to 6, or to implement the steps of the method according to any one of claims 7 to 9.
13. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6, or implements the steps of the method according to any one of claims 7 to 9.
CN201910935923.9A 2019-09-29 2019-09-29 Flow control method, device, server and storage medium Active CN110708258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910935923.9A CN110708258B (en) 2019-09-29 2019-09-29 Flow control method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910935923.9A CN110708258B (en) 2019-09-29 2019-09-29 Flow control method, device, server and storage medium

Publications (2)

Publication Number Publication Date
CN110708258A CN110708258A (en) 2020-01-17
CN110708258B true CN110708258B (en) 2023-04-07

Family

ID=69196798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910935923.9A Active CN110708258B (en) 2019-09-29 2019-09-29 Flow control method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN110708258B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343240B (en) * 2020-02-12 2022-08-16 北京字节跳动网络技术有限公司 Service request processing method and device, electronic equipment and storage medium
CN111555986B (en) * 2020-04-26 2022-07-05 支付宝(杭州)信息技术有限公司 Congestion control method, device and equipment
CN113760940A (en) * 2020-09-24 2021-12-07 北京沃东天骏信息技术有限公司 Quota management method, device, equipment and medium applied to distributed system
CN113726885A (en) * 2021-08-30 2021-11-30 北京天空卫士网络安全技术有限公司 Method and device for adjusting flow quota
CN114443162B (en) * 2022-01-05 2023-05-23 福建天泉教育科技有限公司 Cloud primary micro-service flow control method and server
CN114745338A (en) * 2022-03-30 2022-07-12 Oppo广东移动通信有限公司 Flow control method, flow control device, storage medium and server
CN115242718B (en) * 2022-06-21 2024-01-30 平安科技(深圳)有限公司 Cluster current limiting method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572609A (en) * 2008-04-29 2009-11-04 成都市华为赛门铁克科技有限公司 Method and device for detecting and refusing service attack
CN103874134A (en) * 2012-12-15 2014-06-18 华为终端有限公司 Flow control method and device
CN104092650A (en) * 2013-12-04 2014-10-08 腾讯数码(天津)有限公司 Service distributing request method and device
CN106230823A (en) * 2016-08-01 2016-12-14 北京神州绿盟信息安全科技股份有限公司 A kind of flow statistical method and device
CN108259426A (en) * 2016-12-29 2018-07-06 华为技术有限公司 A kind of ddos attack detection method and equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6161147A (en) * 1995-03-31 2000-12-12 Sun Microsystems, Inc. Methods and apparatus for managing objects and processes in a distributed object operating environment
US5687372A (en) * 1995-06-07 1997-11-11 Tandem Computers, Inc. Customer information control system and method in a loosely coupled parallel processing environment
US6212573B1 (en) * 1996-06-26 2001-04-03 Sun Microsystems, Inc. Mechanism for invoking and servicing multiplexed messages with low context switching overhead
CN109087055B (en) * 2018-06-06 2022-04-08 北京达佳互联信息技术有限公司 Service request control method and device
CN109191162A (en) * 2018-07-06 2019-01-11 中国建设银行股份有限公司 Information processing method, system, device and storage medium
CN109150746B (en) * 2018-07-06 2022-08-30 南京星云数字技术有限公司 Global flow control method and device


Also Published As

Publication number Publication date
CN110708258A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110708258B (en) Flow control method, device, server and storage medium
JP6878512B2 (en) Rolling resource credits for scheduling virtual computer resources
JP6465916B2 (en) Providing resource usage information for each application
Ghosh et al. Biting off safely more than you can chew: Predictive analytics for resource over-commit in iaas cloud
US10270668B1 (en) Identifying correlated events in a distributed system according to operational metrics
US8745216B2 (en) Systems and methods for monitoring and controlling a service level agreement
US10783002B1 (en) Cost determination of a service call
CN110490728B (en) Transaction and transaction supervision method, device and equipment based on block chain
KR101865318B1 (en) Burst mode control
US20060085544A1 (en) Algorithm for Minimizing Rebate Value Due to SLA Breach in a Utility Computing Environment
US20200320520A1 (en) Systems and Methods for Monitoring Performance of Payment Networks Through Distributed Computing
WO2017166643A1 (en) Method and device for quantifying task resources
EP2652695A2 (en) Hybrid cloud broker
Song et al. A two-stage approach for task and resource management in multimedia cloud environment
US20200153645A1 (en) Increasing processing capacity of processor cores during initial program load processing
US20170063708A1 (en) Resource exchange service transaction for cloud computing
US20230281056A1 (en) Artificial Intelligence Application Task Management Method, System, Device, and Storage Medium
CN113034233A (en) Method, apparatus, medium, and program product for allocating resources in a reading application
CN114175602A (en) Authority management of cloud resources
Melo et al. Performance and availability evaluation of the blockchain platform hyperledger fabric
CN111182479B (en) Information sending control method and device
US20140351550A1 (en) Memory management apparatus and method for threads of data distribution service middleware
CN110489418B (en) Data aggregation method and system
CN110009320B (en) Resource conversion method, device, system, storage medium and computer equipment
Lang et al. Not for the Timid: On the Impact of Aggressive Over-booking in the Cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant