CN109286573B - Peak clipping system based on distributed token bucket - Google Patents

Peak clipping system based on distributed token bucket

Info

Publication number
CN109286573B
CN109286573B CN201811015371.1A CN109286573B
Authority
CN
China
Prior art keywords
service request
flow
layer system
token bucket
peak value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811015371.1A
Other languages
Chinese (zh)
Other versions
CN109286573A (en)
Inventor
胡昇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Fumin Bank Co Ltd
Original Assignee
Chongqing Fumin Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Fumin Bank Co Ltd filed Critical Chongqing Fumin Bank Co Ltd
Priority to CN201811015371.1A priority Critical patent/CN109286573B/en
Publication of CN109286573A publication Critical patent/CN109286573A/en
Application granted granted Critical
Publication of CN109286573B publication Critical patent/CN109286573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/215Flow control; Congestion control using token-bucket
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876Network utilisation, e.g. volume of load or congestion level
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/16Threshold monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The invention relates to the field of data information processing systems and provides a peak clipping system based on a distributed token bucket, addressing the problem in existing multi-system interaction that a lower-layer system can crash under the large data flow of service requests sent from the upper layer. The peak clipping system comprises a request receiving module, a flow judging module, an asynchronous message queue module and a consuming component. After the request receiving module receives a service request sent by the upper-layer system, the flow judging module judges whether the flow of the service request exceeds a preset flow peak value: if it does, the service request is sent to the asynchronous message queue module to queue and wait; if it does not, the service request is sent to the lower-layer system, which processes it. The consuming component then consumes the queued service requests until their flow no longer exceeds the flow peak value, at which point they too are sent to the lower-layer system for processing.

Description

Peak clipping system based on distributed token bucket
Technical Field
The invention relates to the field of data information processing systems, in particular to a peak clipping system based on a distributed token bucket.
Background
With the rise of SOA (service-oriented architecture) in recent years, more and more application systems are designed and deployed in a distributed manner. Systems have shifted from the original monolithic technical architecture to a service-oriented multi-system architecture, so a business process that used to complete inside one system is now realized through multiple interactions among multiple systems.
However, because systems differ in data transmission and reception capability, they also differ in how many service requests they can process. During interaction, if the processing capacity of the upper-layer system exceeds that of the lower-layer system, the upper layer can emit service requests at a data flow the lower layer cannot absorb; if the lower-layer system processes those requests directly, the excessive data flow may crash it. It is therefore necessary to clip the peaks of the transmitted service requests to prevent the lower-layer system from breaking down.
Disclosure of Invention
The invention aims to provide a peak clipping system based on a distributed token bucket, to solve the problem that, because the systems in present-day multi-system interaction differ in processing capacity, a lower-layer system is easily crashed by the excessive data flow of service requests transmitted from the upper layer.
The basic scheme provided by the invention is as follows: a peak clipping system based on a distributed token bucket comprises a request receiving module, a flow judging module, an asynchronous message queue module and a consuming component. After the request receiving module receives a service request sent by the upper-layer system, the flow judging module judges whether the flow of the service request exceeds a preset flow peak value. If it does, the service request is sent to the asynchronous message queue module to queue and wait; if it does not, the service request is sent to the lower-layer system, which processes it;
after the lower-layer system finishes processing, the consuming component consumes the queued service requests and the flow judging module judges whether the flow of a consumed request still exceeds the flow peak value. If it does not, the request is sent to the lower-layer system for processing; if it does, the consuming component continues consuming it, and the request is sent to the lower-layer system for processing only once its flow no longer exceeds the peak value.
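The flow described in the two paragraphs above can be sketched in a few lines. The code below is an illustrative single-process analogue (class and method names are my own, not from the patent), with an in-memory deque standing in for the asynchronous message queue:

```python
from collections import deque

class PeakClipper:
    """Illustrative sketch of the claimed system: requests whose flow would
    exceed the preset peak are queued; the rest go straight downstream."""

    def __init__(self, traffic_peak, lower_system):
        self.traffic_peak = traffic_peak      # preset flow peak value
        self.queue = deque()                  # asynchronous message queue
        self.lower_system = lower_system      # downstream request handler
        self.current_load = 0                 # flow now being processed below

    def receive(self, request, traffic):
        # Flow judging module: compare against the preset peak.
        if self.current_load + traffic > self.traffic_peak:
            self.queue.append((request, traffic))   # queue and wait
        else:
            self.current_load += traffic
            self.lower_system(request)

    def on_lower_system_done(self, traffic):
        # After the lower-layer system finishes, drain queued requests
        # whose flow now fits under the peak.
        self.current_load -= traffic
        while self.queue and self.current_load + self.queue[0][1] <= self.traffic_peak:
            request, t = self.queue.popleft()
            self.current_load += t
            self.lower_system(request)
```

A request that would push the load past the peak waits in the queue and is released only after downstream capacity frees up, which is exactly the split the basic scheme describes.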
Term definitions:
In this scheme, service requests travel from an upper-layer system to a lower-layer system.
Service: a function or capability that one entity unit provides to another entity unit.
Asynchronous message queue: a container that holds messages while they are in transit. A message is the carrier of the content exchanged between independent resources: a producer constructs the message and a consumer uses it. The queue is the carrier that stores messages: the producer places messages in the queue and the consumer takes them out.
Consumption: taking a service request out of the queue and applying some logic to it, for example displaying it, executing it, or using a filter module to filter specified content out of it.
The beneficial effects of the basic scheme, compared with existing peak clipping approaches, are as follows: 1. While service requests are being sent to the lower-layer system, requests with excessive flow are peak-clipped: a request exceeding the flow peak value is sent to the asynchronous message queue to wait, rather than being rejected outright or passed straight to the lower-layer system. In effect the scheme splits the requests bound for the lower-layer system into two parts, with the lower-layer system's processing capacity as the boundary. The first part lies within that capacity and is sent directly to the lower-layer system for processing; the second part, which exceeds it, waits in the asynchronous message queue and is sent down only after the first part has been processed. Every request the system receives is therefore processed and none is lost, while the flow reaching the lower-layer system always stays within its processing capacity, so the lower-layer system cannot crash while handling it;
2. The asynchronous message queue relieves pressure on the lower-layer system, which no longer faces a flood of simultaneous service requests. It also avoids the waste of existing preprocessing approaches, in which the second part of the requests is simply discarded and the lower-layer system sits idle after finishing the first part until the next requests arrive.
The first preferred scheme: as a refinement of the basic scheme, the flow judging module judges using a token bucket algorithm. Benefit: since the upper-layer system sends a large number of service requests downward during peak periods, a token bucket algorithm, which tolerates a certain degree of burst transmission, is used for the judgment; this tolerance buffers the pressure on the lower-layer system during the upper layer's peak period and prevents a large backlog of service requests.
Note: a token bucket can be viewed as a container of fixed capacity that stores tokens. The system injects tokens into the bucket at a specified rate; once the bucket is full, additional tokens overflow and the count no longer grows. When the system receives a unit of data (for network transmission, a packet or a byte; for a web service, a request; in this application, a service request), it takes a token out of the bucket and then processes the data or request. If the bucket holds no token, the data or request is discarded. Because tokens are generated at a constant rate, if they are not consumed, or are consumed more slowly than they are generated, they accumulate until the bucket is full; this accumulation is what allows a certain degree of burst transmission. Tokens generated after that overflow from the bucket, so the number of tokens held never exceeds the bucket's size.
Burst transmission: often called a data burst; in the communications field it generally means relatively high-bandwidth data transmission over a short time. In this application it refers to the upper-layer system sending a large number of service requests downward during its peak period.
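The token bucket just described can be condensed into a short sketch. This is a minimal single-process version (the refill rate, capacity and clock source are illustrative choices, not specified by the patent):

```python
import time

class TokenBucket:
    """Tokens accumulate at `rate` per second up to `capacity`; a full
    bucket stops accumulating, which is what permits limited bursts."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.last = time.monotonic()

    def allow(self, needed=1):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= needed:
            self.tokens -= needed       # take a token and admit the request
            return True
        return False                    # no token left: reject (or, here, queue)
```

In the scheme of this patent, a `False` result would route the service request to the asynchronous message queue rather than discarding it.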
The second preferred scheme: preferably, the maximum number of tokens in the token bucket matches the processing capacity of the lower-layer system. Benefit: with the maximum token count matched to the lower layer's capacity, even when some burst transmission is allowed the number of service requests the lower-layer system receives never exceeds what its processing capacity permits, so it cannot be crashed by an excessive request volume. In this scheme the maximum token count is the flow peak value.
The third preferred scheme: preferably, the maximum token count of the token bucket is dynamically adjustable. Benefit: once the maximum token count can be adjusted at runtime, it can be tuned to optimize the system.
The fourth preferred scheme: as a refinement of the first preferred scheme, the flow judging module implements the distributed token bucket algorithm on top of the distributed coordination component ZooKeeper. Benefit: when the content stored in ZooKeeper changes, ZooKeeper notifies all online clients of the update in real time through its callback mechanism, so traffic can be balanced across multiple machines in a distributed deployment, the flow toward downstream channels is controlled uniformly, and the system can conveniently be tuned at any time.
Note: ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby, an important component of Hadoop and HBase, and provides consistency services to distributed applications.
Client: the program that pairs with a server and provides local services to the user.
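In practice the fourth preferred scheme would typically use a ZooKeeper client library (for example kazoo, with a data watch on a configuration znode) to push a new maximum token count to every running instance. The sketch below only simulates that notification path with an in-memory stand-in, so the ZooKeeper mechanics are assumed rather than shown, and all names are illustrative:

```python
class DynamicTokenBucket:
    """Token bucket whose capacity (maximum token count) can be changed
    at runtime, e.g. from a configuration-watch callback."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = capacity

    def set_capacity(self, new_capacity):
        # Watch callback body: adopt the new peak and clamp stored tokens.
        self.capacity = new_capacity
        self.tokens = min(self.tokens, new_capacity)


class FakeConfigStore:
    """Stand-in for a ZooKeeper znode: notifies every watcher on each write."""

    def __init__(self):
        self.watchers = []

    def watch(self, callback):
        self.watchers.append(callback)

    def set(self, value):
        for cb in self.watchers:
            cb(value)   # ZooKeeper would deliver this via its watch mechanism


store = FakeConfigStore()
buckets = [DynamicTokenBucket(100) for _ in range(3)]   # three app instances
for b in buckets:
    store.watch(b.set_capacity)
store.set(50)   # an operator lowers the flow peak; every instance follows
```

Because every instance registers the same callback, one write to the shared configuration adjusts the flow peak uniformly across all machines, which is the balancing property the scheme attributes to ZooKeeper.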
The fifth preferred scheme: preferably, the consuming component is a Kafka consumer component. Benefit: Kafka is chosen for consumption because it explicitly supports partitioning of messages and, through the distributed consumption coordinated between the Kafka server cluster and the consumer machines, keeps each partition ordered.
Note: Kafka was originally developed by LinkedIn as a distributed, partitioned, multi-replica, multi-subscriber log system coordinated by ZooKeeper (it can also serve as an MQ system), commonly used for web/nginx logs, access logs and messaging services. LinkedIn contributed it to the Apache Foundation in 2010, where it became a top-level open-source project.
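The property the fifth preferred scheme relies on, namely that messages with the same key stay ordered within one partition, can be illustrated with a tiny in-memory analogue of a partitioned log (this is not the Kafka API; integer keys are used here so the partition assignment is deterministic):

```python
from collections import defaultdict

class MiniPartitionedLog:
    """In-memory stand-in for a partitioned topic: messages with the same
    key land in the same partition, and each partition preserves order."""

    def __init__(self, num_partitions):
        self.num_partitions = num_partitions
        self.partitions = defaultdict(list)

    def produce(self, key, message):
        # Kafka-style keyed partitioning: same key, same partition.
        self.partitions[hash(key) % self.num_partitions].append(message)

    def consume(self, partition):
        # A consumer owning this partition reads it strictly in order.
        return list(self.partitions[partition])

log = MiniPartitionedLog(num_partitions=2)
for i in range(6):
    log.produce(key=i % 2, message="req-%d" % i)   # alternate two requesters
```

With two partitions, requester 0's requests (req-0, req-2, req-4) and requester 1's (req-1, req-3, req-5) each stay in arrival order, which is the per-partition ordering guarantee the text attributes to Kafka.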
Drawings
Fig. 1 is a flow chart of an embodiment of a peak clipping system based on a distributed token bucket according to the present invention.
Detailed Description
The following describes the invention in further detail through a specific embodiment:
the peak clipping system based on the distributed token bucket shown in fig. 1 includes a request receiving module, a flow judging module, an asynchronous message queue module and a consuming component. After the request receiving module receives a service request sent by the upper-layer system, the flow judging module judges whether the flow of the service request exceeds a preset flow peak value. If it does, the service request is sent to the asynchronous message queue module to queue; if it does not, the service request is sent to the lower-layer system, which processes it;
after the lower-layer system finishes processing, the consuming component consumes the queued service requests and the flow judging module judges whether the flow of a consumed request still exceeds the flow peak value. If it does not, the request is sent to the lower-layer system for processing; if it does, the consuming component continues consuming it, and the request is sent to the lower-layer system for processing only once its flow no longer exceeds the peak value.
In this process the flow judging module judges with a token bucket algorithm; the maximum token count of the bucket matches the processing capacity of the lower-layer system and can be dynamically adjusted; the flow judging module implements the distributed token bucket algorithm on the distributed coordination component ZooKeeper; and the consuming component is a Kafka consumer component.
The flow of each outgoing service request is judged first, and requests exceeding the flow peak value are sent to the asynchronous message queue to wait, so the flow reaching the lower-layer system stays within its processing capacity and cannot crash it through excessive data flow. Once the lower-layer system has processed the requests it received, it can take new ones, and the requests in the asynchronous message queue can be sent down for processing;
likewise, to avoid crashing the lower-layer system, the flow of the requests in the asynchronous message queue must be judged before they are sent down. There are two outcomes. If the flow is not greater than the peak value, it is within the lower layer's processing capacity and the request is sent straight down for processing. If the flow is greater than the peak value, the consuming component consumes the request and the consumed request's flow is judged again, with the same two outcomes: if it no longer exceeds the peak it is sent down for processing; if it still does, the consuming component consumes it again and the judgment repeats, the request being sent to the lower-layer system only once its flow no longer exceeds the peak. The flow sent to the lower-layer system is thus guaranteed to stay within its processing capacity, and the lower-layer system cannot crash while processing it.
Suppose the flow of each service request lies in the range X ± x and the processing capacity of the lower-layer system is Y. After requests are received, if their total flow is less than Y the batch is sent straight to the lower-layer system for processing. If it is greater than Y, suppose 23 requests arrive and the total flow of the first 11 is below Y: after the token bucket judgment, the first 11 are sent directly to the lower-layer system and the remaining 12 are sent to the asynchronous message queue to wait. Once the lower-layer system has finished the first 11, the consuming component consumes the 12 queued requests; in this embodiment, consumption filters specified content out of the 12 requests. That content can be preset and is chosen so that removing it does not affect the processing of the request. The flow of the 12 filtered requests is then judged: if their total flow is below Y they are sent to the lower-layer system for processing, and if it is still above Y they are consumed again until the total falls below Y, at which point they are sent down.
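The 23-request embodiment above can be simulated end to end. The sketch below follows the numbers in the text (a peak of Y, 11 direct requests, 12 queued), but the halving filter and all function names are illustrative assumptions:

```python
def clip_and_process(requests, peak, filter_step):
    """Split requests at the peak, send the first batch straight down, then
    repeatedly 'consume' (filter) the queued batch until its flow fits."""
    processed, queued, load = [], [], 0.0
    for req in requests:
        if load + req["traffic"] <= peak:
            load += req["traffic"]
            processed.append(req)          # within capacity: sent directly
        else:
            queued.append(req)             # over the peak: asynchronous queue

    # After the lower-layer system finishes the first batch, drain the queue:
    # keep consuming until the queued batch's total flow is within the peak.
    while queued and sum(r["traffic"] for r in queued) > peak:
        queued = [filter_step(r) for r in queued]
    processed.extend(queued)
    return processed

# 23 requests of unit flow against a peak of 11: 11 go down directly,
# the remaining 12 queue and are filtered once before fitting under the peak.
requests = [{"id": i, "traffic": 1.0} for i in range(23)]
halve = lambda r: {**r, "traffic": r["traffic"] / 2}   # stand-in filter step
done = clip_and_process(requests, peak=11.0, filter_step=halve)
```

Here one filtering pass suffices (12 requests of flow 0.5 total 6, which is below 11); with a weaker filter the loop would simply run more times, matching the "continue consuming until the total flow is below Y" behaviour of the embodiment.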
Consider that when a service request does not meet the filling standard, the lower-layer system returns it during processing and the requester must amend and resubmit it; the request then has to queue again, which lengthens its processing time and adds processing load on the lower-layer system. The system therefore also includes a pre-audit subsystem comprising a distribution module, a verification code sending module, a confirmation module, a result processing module, a queue adjusting module and a verification code combination module. The verification code sending module pre-stores image segments that need to be recognized, for example image material uploaded by clients, such as a copy of a handwritten statement, which must be recognized before processing; several image segments can be spliced into a complete image.
The distribution module randomly sends a service request near the front of the asynchronous message queue to the requester of a request queued further back, and the verification code sending module sends that requester a pre-stored image segment. For example, if requester A's service request a is queued 8th and requester B's service request b is queued 18th, the distribution module sends request a to requester B. On receiving it, requester B audits request a and at the same time recognizes the received image segment, then submits the audit result and the recognized text through the confirmation module to the result processing module. If the audit result is negative, the asynchronous message queue returns the service request to its requester for refilling, that is, request a goes back to requester A to be amended or refilled. If the audit result is positive, the result processing module stores the recognized text in association with the service request.
Later, when the lower-layer system processes that request from the asynchronous message queue, that is, processes requester A's request a, two cases arise. If the request is processed normally, it met the filling standard and the audit result was accurate, meaning requester B audited correctly; the queue adjusting module therefore advances the auditor's own service request by one position, moving requester B's request b from 18th to 17th, and sends the associated stored text to the verification code combination module for storage. If the request cannot be processed normally, it did not meet the filling standard, meaning the auditor did not audit carefully; the queue adjusting module then moves the auditor's service request back one position, adjusting request b to 19th, and the submitted text is discarded. Once the text segments have been received, the verification code combination module combines them to obtain the full text of the image stored in the verification code sending module, which the system itself cannot recognize from the image.
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as specific structures and characteristics that were well known at the filing date or before the priority date, is not described here in detail, so that a person skilled in the art, aware of the common technical knowledge in the field before that date and able to apply routine experimentation, can combine it with one or more of the present teachings to implement the invention; certain typical known structures or methods pose no obstacle to such implementation. It should also be noted that a person skilled in the art may make several changes and modifications without departing from the structure of the invention; these also fall within the protection scope of the invention and do not affect the effect of its implementation or the practicability of the patent. The scope of protection of this application is determined by the content of the claims, and the detailed description in the specification may be used to interpret the claims.

Claims (5)

1. A peak clipping system based on a distributed token bucket, characterized in that: the system comprises a request receiving module, a flow judging module, an asynchronous message queue module and a consuming component, wherein after the request receiving module receives a service request sent by an upper-layer system, the flow judging module judges, using a token bucket algorithm, whether the flow of the service request exceeds a preset flow peak value; if it does, the service request is sent to the asynchronous message queue module to queue and wait, and if it does not, the service request is sent to a lower-layer system, which processes it;
after the lower-layer system finishes processing, the consuming component consumes the queued service requests and the flow judging module judges whether the flow of a consumed request exceeds the flow peak value; if it does not, the request is sent to the lower-layer system for processing, and if it does, the consuming component continues to consume it, the request being sent to the lower-layer system for processing only once its flow no longer exceeds the peak value.
2. The distributed token bucket based peak clipping system of claim 1, wherein: the maximum number of tokens of the token bucket matches the processing capacity of the lower-layer system.
3. The distributed token bucket based peak clipping system of claim 2, wherein: the maximum number of tokens for the token bucket may be dynamically adjusted.
4. The distributed token bucket based peak clipping system of claim 1, wherein: the flow judgment module realizes a distributed token bucket algorithm based on a distributed coordination component zookeeper.
5. The distributed token bucket based peak clipping system of claim 4, wherein: the consuming component is a Kafka consuming component.
CN201811015371.1A 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket Active CN109286573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811015371.1A CN109286573B (en) 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket

Publications (2)

Publication Number Publication Date
CN109286573A CN109286573A (en) 2019-01-29
CN109286573B CN109286573B (en) 2022-07-08

Family

ID=65183954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811015371.1A Active CN109286573B (en) 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket

Country Status (1)

Country Link
CN (1) CN109286573B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111176861A (en) * 2019-12-24 2020-05-19 深圳市优必选科技股份有限公司 Asynchronous service processing method, system and computer readable storage medium
CN112822080B (en) * 2020-12-31 2022-09-16 中国人寿保险股份有限公司上海数据中心 Bus system based on SOA architecture
CN113810307A (en) * 2021-10-11 2021-12-17 上海微盟企业发展有限公司 Data flow control method, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217495A (en) * 2008-01-11 2008-07-09 北京邮电大学 Traffic monitoring method and device applied under T-MPLS network environment
CN101556678A (en) * 2009-05-21 2009-10-14 中国建设银行股份有限公司 Processing method of batch processing services, system and service processing control equipment
CN101959236A (en) * 2009-07-13 2011-01-26 大唐移动通信设备有限公司 Traffic control method and device
CN105468784A (en) * 2015-12-24 2016-04-06 北京京东尚科信息技术有限公司 Method and device for processing highly concurrent traffic

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9219654B2 (en) * 2010-06-25 2015-12-22 Cox Communications, Inc. Preloading token buckets for dynamically implementing speed increases

Also Published As

Publication number Publication date
CN109286573A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN109194586B (en) Peak clipping processing method based on distributed token bucket
CN109286573B (en) Peak clipping system based on distributed token bucket
EP1774714B1 (en) Hierarchal scheduler with multiple scheduling lanes
US20190173969A1 (en) Push notification delivery system
CN101057481B (en) Method and device for scheduling packets for routing in a network with implicit determination of packets to be treated as a priority
CN109257293B (en) Speed limiting method and device for network congestion and gateway server
US8149846B2 (en) Data processing system and method
US10554430B2 (en) Systems and methods for providing adaptive flow control in a notification architecture
US7248593B2 (en) Method and apparatus for minimizing spinlocks and retaining packet order in systems utilizing multiple transmit queues
US20080317059A1 (en) Apparatus and method for priority queuing with segmented buffers
CN110661668B (en) Message sending management method and device
CN106453126A (en) Virtual machine traffic control method and device
US20020059365A1 (en) System for delivery and exchange of electronic data
CN104734983A (en) Scheduling system, method and device for service data request
CN105700940A (en) Scheduler and dynamic multiplexing method thereof
CN109257303A (en) QoS queue dispatching method, device and satellite communication system
US9268621B2 (en) Reducing latency in multicast traffic reception
Freund et al. Competitive on-line switching policies
EP0957602A2 (en) Multiplexer
WO2017036238A1 (en) Service node adjusting method, apparatus and device
CN109688171B (en) Cache space scheduling method, device and system
US9813352B2 (en) Method for prioritizing network packets at high bandwidth speeds
US20020196799A1 (en) Throttling queue
US8589605B2 (en) Inbound message rate limit based on maximum queue times
CN113626221A (en) Message enqueuing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant