CN109286573A - Peak clipping system based on distributed token bucket - Google Patents

Peak clipping system based on distributed token bucket

Info

Publication number
CN109286573A
CN109286573A (application CN201811015371.1A)
Authority
CN
China
Prior art keywords
service request
flow
peak
token bucket
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811015371.1A
Other languages
Chinese (zh)
Other versions
CN109286573B (en
Inventor
胡昇 (Hu Sheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Fumin Bank Co Ltd
Original Assignee
Chongqing Fumin Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Fumin Bank Co Ltd filed Critical Chongqing Fumin Bank Co Ltd
Priority to CN201811015371.1A priority Critical patent/CN109286573B/en
Publication of CN109286573A publication Critical patent/CN109286573A/en
Application granted granted Critical
Publication of CN109286573B publication Critical patent/CN109286573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H04L 47/215 — Traffic control in data switching networks; flow control / congestion control using token bucket
    • H04L 43/0876 — Monitoring or testing based on specific metrics; network utilisation, e.g. volume of load or congestion level
    • H04L 43/16 — Threshold monitoring
    • H04L 47/50 — Queue scheduling
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L 67/60 — Scheduling or organising the servicing of application requests using the analysis and optimisation of the required network resources
    • H04L 67/63 — Routing a service request depending on the request content or context
    • Y02D 30/50 — Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to the field of data-information processing systems. To address the problem, in today's interacting multi-system architectures, of a lower-layer system crashing because the data traffic of service requests transmitted from the upper layer is too large, a peak clipping system based on a distributed token bucket is provided, comprising a request receiving module, a flow judgment module, an asynchronous message queue module and a consumption component. After the request receiving module receives a service request sent by the upper-layer system, the flow judgment module judges whether the flow of the service request exceeds a preset peak flow. If it does, the service request is sent to the asynchronous message queue module to wait in line; if it does not, the service request is sent to the lower-layer system, which processes it. The consumption component consumes the queued service requests, and once the flow of a service request no longer exceeds the peak flow, sends it to the lower-layer system for processing.

Description

Peak clipping system based on distributed token bucket
Technical field
The present invention relates to the field of data-information processing systems, and in particular to a peak clipping system based on a distributed token bucket.
Background art
With the rise of SOA (service-oriented architecture) in recent years, more and more application systems adopt distributed design and deployment. A system evolves from its original monolithic architecture into a service-oriented multi-system architecture, and a business process that could originally be completed within one system is now realized through repeated interactions among multiple systems.
However, differences in the data transmission and reception capabilities of the individual systems make their processing capacities for service requests uneven. During interaction, if the processing capacity of the upper-layer system is greater than that of the lower-layer system, the upper-layer system can handle service requests with large data traffic, but directly forwarding those requests to the less capable lower-layer system is likely to crash it because of excessive data traffic. It is therefore necessary to perform peak clipping on the incoming service requests to prevent the lower-layer system from collapsing.
Summary of the invention
The invention is intended to provide a peak clipping system based on a distributed token bucket, to solve the problem that, when today's interacting multi-systems process service requests, the differing processing capacities of the systems make the lower-layer system prone to crashing because the data traffic of the service requests transmitted from the upper layer is excessive.
The present invention provides the following base scheme: a peak clipping system based on a distributed token bucket, comprising a request receiving module, a flow judgment module, an asynchronous message queue module and a consumption component. After the request receiving module receives a service request sent by the upper-layer system, the flow judgment module judges whether the flow of the service request exceeds a preset peak flow. If it does, the service request is sent to the asynchronous message queue module to wait in line; if it does not, the service request is sent to the lower-layer system, which processes it;
After the lower-layer system has processed the current service requests, the consumption component consumes the queued service requests, and the flow judgment module judges whether the flow of the consumed service request still exceeds the peak flow. If it does not, the service request is sent to the lower-layer system for processing; if it does, the consumption component continues to consume the service request until its flow no longer exceeds the peak flow, after which the request is sent to the lower-layer system for processing.
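The routing described in the base scheme can be sketched as follows. This is a minimal illustration, not the patented implementation: the numeric `flow` values, the halving stand-in for "consumption", and all names are hypothetical, and a plain in-process queue stands in for the asynchronous message queue module.

```python
import queue

PEAK_FLOW = 100  # preset peak flow (hypothetical units)

waiting = queue.Queue()   # stands in for the asynchronous message queue module
lower_layer_log = []      # stands in for requests delivered to the lower-layer system

def handle_request(request):
    """Route one service request: forward it if its flow is within the
    peak flow, otherwise enqueue it to wait in line."""
    if request["flow"] > PEAK_FLOW:
        waiting.put(request)             # exceeds peak flow -> queue
    else:
        lower_layer_log.append(request)  # within peak flow -> lower layer

def consume(request):
    """Consumption: logical processing (e.g. filtering specified content)
    that reduces the request's flow; halving is a stand-in."""
    request["flow"] //= 2
    return request

def drain_queue():
    """Once the lower-layer system is free, consume each queued request
    until it fits under the peak flow, then forward it."""
    while not waiting.empty():
        req = waiting.get()
        while req["flow"] > PEAK_FLOW:
            req = consume(req)
        lower_layer_log.append(req)

handle_request({"id": 1, "flow": 40})   # forwarded immediately
handle_request({"id": 2, "flow": 400})  # queued, consumed until within peak flow
drain_queue()
```

No request is dropped: the oversized one waits and is delivered later, which is the peak clipping behaviour the scheme claims.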
Definitions:
Service request: in this method, a service request is sent from the upper-layer system to the lower-layer system.
Business: a business refers to a service that one entity provides to another entity.
Asynchronous message queue: an asynchronous message queue is a container that holds messages while they are in transit. A message is the carrier of communication between independent resources: the producer constructs messages and the consumer uses them. The queue is the carrier that stores messages: the producer puts messages into the queue and the consumer takes them out.
Consumption: performing logical processing on a service request after it is taken out of the queue, for example displaying or executing the request, or using a filter module to filter specified content out of it.
The beneficial effects of the base scheme, compared with existing peak clipping approaches, are: 1. While service requests are being sent to the lower-layer system, large-flow traffic undergoes peak clipping: requests exceeding the peak flow are placed in the asynchronous message queue to wait in line, rather than being rejected outright or all forwarded to the lower-layer system. In other words, the method splits the service requests bound for the lower-layer system into two parts, with the processing capacity of the lower-layer system as the boundary. The first part lies within the lower-layer system's processing capacity and is sent directly to it for processing; the second part exceeds that capacity and is sent to the asynchronous message queue to wait. Once the first part has been processed, the second part is sent to the lower-layer system for processing. This ensures that every service request the system receives is eventually processed, avoiding the loss of requests, while also guaranteeing that the flow of the requests sent to the lower-layer system always stays within its processing capacity, so the lower-layer system will not crash while handling them;
2. The asynchronous message queue, on the one hand, relieves the pressure on the lower-layer system, which no longer faces a large number of service requests at once. On the other hand, in existing pre-processing approaches the second part of the service requests is discarded, and after the lower-layer system finishes the first part it sits idle until the next batch of requests arrives. In this method, once the lower-layer system has processed the first part, the second part is sent to it for continued processing; that is, the lower-layer system's idle time is used to process the second part. This not only avoids the loss of service requests but also exploits the idle time of the lower-layer system, improving its processing efficiency.
Preferred embodiment one: as a refinement of the base scheme, the flow judgment module judges using a token bucket algorithm. Beneficial effect: considering that the upper-layer system may send down a large number of service requests during peak periods, a token bucket algorithm, which tolerates a certain degree of burst transmission, is used for the judgment. During the upper-layer system's peak periods, this tolerance for bursts also buffers the pressure on the lower-layer system to some extent and avoids the accumulation of large numbers of service requests.
Explanation: a token bucket can be regarded as a container of a certain capacity that stores tokens. The system injects tokens into the bucket at a fixed rate; once the bucket is full and overflows, no more tokens are added. After the system receives one unit of data (for network transmission this may be a packet or a byte; for a web service, a request — in this application, a service request), it takes one token out of the bucket and then processes the data or request. If there are no tokens in the bucket, the data or request is discarded. A fixed-size token bucket generates tokens continuously at a constant rate. If tokens are not consumed, or are consumed more slowly than they are generated, they accumulate until the bucket is full, which is what permits a certain degree of burst transmission. Tokens generated beyond that point overflow from the bucket, so the maximum number of tokens the bucket can hold never exceeds its size.
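The explanation above can be written down as a compact implementation. This is a generic sketch of the classic token bucket, not the patent's code; the rate and capacity values are illustrative.

```python
import time

class TokenBucket:
    """Token bucket as described above: tokens accumulate at a fixed rate,
    capped at the bucket size; each request takes one token or is refused."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens generated per second
        self.capacity = capacity   # bucket size = largest tolerated burst
        self.tokens = capacity     # start full, so an initial burst passes
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # add tokens for the elapsed time, never exceeding the bucket size
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def allow(self):
        """Take one token if available; otherwise the request is refused
        (in this scheme, sent to the asynchronous message queue)."""
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
burst = [bucket.allow() for _ in range(12)]  # a 12-request burst against capacity 10
```

The first ten requests of the burst pass on the stored tokens; the remainder are refused until the bucket refills, which is exactly the "burst transmission to a certain degree" behaviour described above.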
Burst transmission: also commonly called a data burst; in the communications field it generally refers to transmitting data at relatively high bandwidth over a short time. In this application it refers to the large number of service requests the upper-layer system may send down during its peak periods.
Preferred embodiment two: as a refinement of embodiment one, the maximum token number of the token bucket matches the processing capacity of the lower-layer system. Beneficial effect: with the maximum token number matched to the lower-layer system's processing capacity, even when a certain degree of burst transmission is allowed, the maximum number of service requests the lower-layer system receives does not exceed what its processing capacity permits, avoiding a crash caused by receiving too many service requests. In this scheme the maximum token number is the peak flow.
Preferred embodiment three: as a refinement of embodiment two, the maximum token number of the token bucket can be adjusted dynamically. Beneficial effect: once the maximum token number is dynamically adjustable, the system can be tuned and optimized by adjusting it.
Preferred embodiment four: as a refinement of embodiment one, the flow judgment module implements a distributed token bucket algorithm based on the distributed coordination component ZooKeeper. Beneficial effect: when its content changes, ZooKeeper can notify all online clients through callbacks to update their information in real time, so that in a distributed multi-machine scenario the flow can be distributed evenly across machines and the downstream channel flow can be controlled uniformly, which makes it convenient to tune the system at any time.
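The coordination idea behind this embodiment can be sketched without a real ZooKeeper cluster. In the simulation below, a `Coordinator` object stands in for the znode that holds the global peak flow, and registered callbacks mimic ZooKeeper watches; in a real deployment the rate would live in a watched znode accessed through a client library such as kazoo. All class and method names here are hypothetical.

```python
class Coordinator:
    """Stand-in for the ZooKeeper znode holding the global rate.
    Registered callbacks mimic watches: every node is notified on
    change and recomputes its local share."""

    def __init__(self, global_rate):
        self.global_rate = global_rate
        self.watchers = []

    def watch(self, callback):
        self.watchers.append(callback)
        callback(self.global_rate)   # deliver the current value on registration

    def set_rate(self, rate):
        self.global_rate = rate
        for cb in self.watchers:     # "callback" notification to online clients
            cb(rate)

class Node:
    """One machine of the distributed token bucket: its local refill
    rate is an equal share of the global rate across all peers."""

    def __init__(self, coordinator, n_nodes):
        self.n_nodes = n_nodes
        self.local_rate = 0
        coordinator.watch(self.on_rate_change)

    def on_rate_change(self, global_rate):
        self.local_rate = global_rate / self.n_nodes

coord = Coordinator(global_rate=900)
nodes = [Node(coord, n_nodes=3) for _ in range(3)]
coord.set_rate(600)   # tune the peak flow at runtime; all nodes follow
```

Changing the global rate once re-balances every node's local bucket, which is the "uniform control of downstream channel flow" the embodiment describes.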
Explanation: ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby, an important component of Hadoop and HBase, and provides consistency services for distributed applications.
Client: a client (also called a user terminal) is the counterpart of a server — a program that provides local services to the user.
Preferred embodiment five: as a refinement of embodiment four, the consumption component is a Kafka consumption component. Beneficial effect: Kafka is chosen for consumption because it explicitly supports message partitioning; consumption is distributed across the Kafka server cluster and the consumer machines, and ordering is maintained within each partition.
Explanation: Kafka was originally developed by LinkedIn. It is a distributed, partitioned, multi-replica, multi-subscriber distributed log system coordinated by ZooKeeper (it can also be regarded as an MQ system), commonly used for web/Nginx logs, access logs, messaging services and so on. LinkedIn contributed it to the Apache Foundation in 2010, and it became a top-level open-source project.
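The per-partition ordering property that motivates this choice can be illustrated without a Kafka broker. The stdlib simulation below (all names hypothetical) shows the essential mechanism: messages with the same key land in the same partition, and a consumer drains one partition strictly in order, as a member of a Kafka consumer group would.

```python
from collections import defaultdict

def partition_for(key, n_partitions):
    """Key-based partitioning: the same key always maps to the same
    partition, so per-partition ordering gives per-key ordering."""
    return hash(key) % n_partitions

def produce(log, key, value, n_partitions=3):
    """Append a message to the partition chosen by its key."""
    log[partition_for(key, n_partitions)].append((key, value))

def consume_partition(log, partition):
    """Drain one partition in order, like a single consumer-group
    member assigned to that partition."""
    messages = log[partition]
    log[partition] = []
    return messages

topic = defaultdict(list)   # partition index -> ordered message list
for i in range(5):
    produce(topic, key="applicant-A", value=f"request-{i}")

p = partition_for("applicant-A", 3)
ordered = consume_partition(topic, p)
```

Because all of applicant A's requests share one key, they are consumed in exactly the order produced, even though other keys may be processed concurrently on other partitions.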
Description of the drawings
Fig. 1 is a flow chart of an embodiment of the peak clipping system based on a distributed token bucket according to the present invention.
Specific embodiment
The invention is explained in further detail below through specific embodiments:
As shown in Fig. 1, the peak clipping system based on a distributed token bucket comprises a request receiving module, a flow judgment module, an asynchronous message queue module and a consumption component. After the request receiving module receives a service request sent by the upper-layer system, the flow judgment module judges whether the flow of the service request exceeds the preset peak flow. If it does, the service request is sent to the asynchronous message queue module to wait in line; if it does not, the service request is sent to the lower-layer system, which processes it;
After the lower-layer system has processed the current service requests, the consumption component consumes the queued service requests, and the flow judgment module judges whether the flow of the consumed service request still exceeds the peak flow. If it does not, the service request is sent to the lower-layer system for processing; if it does, the consumption component continues to consume the service request until its flow no longer exceeds the peak flow, after which the request is sent to the lower-layer system for processing.
In the above process, the flow judgment module judges using a token bucket algorithm; the maximum token number of the token bucket matches the processing capacity of the lower-layer system and can be adjusted dynamically; the flow judgment module implements the distributed token bucket algorithm based on the distributed coordination component ZooKeeper; and the consumption component is a Kafka consumption component.
The flow of incoming service requests is judged first, and requests exceeding the peak flow are sent to the asynchronous message queue to wait in line. In this way, the flow of the service requests sent to the lower-layer system stays within the range its processing capacity can handle, so excessive data traffic will not cause it to crash. Once the lower-layer system has processed the requests it received, it can handle new ones, and the queued requests in the asynchronous message queue can then be sent to it for processing;
Likewise, to keep the lower-layer system from crashing, the flow of the service requests in the asynchronous message queue must be judged before they are sent down. The judgment has two possible results. If the flow of a service request does not exceed the peak flow, it is within the lower-layer system's processing capacity and can be sent directly to it for processing. If the flow exceeds the peak flow, the consumption component consumes the service request and the flow of the consumed request is judged again, once more with two possible results: if it no longer exceeds the peak flow, the request is sent directly to the lower-layer system; if it still does, the consumption component consumes it again, and the judgment repeats until the flow no longer exceeds the peak flow, at which point the request is sent to the lower-layer system. This guarantees that the flow of every service request sent to the lower-layer system is within its processing capacity, so the lower-layer system will not crash while handling them.
For example, suppose the flow of every service request lies within an assumed range X ± x and the processing capacity of the lower-layer system is Y. After service requests are received, if their flow is less than Y, the batch is sent directly to the lower-layer system for processing. If the flow is greater than Y — say 23 service requests are sent in total and the total flow of the first 11 is less than Y — then after the token bucket algorithm's judgment, the first 11 requests are sent directly to the lower-layer system for processing, while the remaining 12 are sent to the asynchronous message queue to wait in line. Once the lower-layer system has finished processing the first 11 requests, the consumption component consumes the 12 queued items. In this embodiment, consumption is done by filtering: specified information in the 12 service requests is filtered out. The specified content can be set in advance, and filtering it out does not affect the processing of the requests. The flow of the 12 filtered requests is then judged: if their total flow is less than Y, they are sent to the lower-layer system for processing; if it is still greater than Y, consumption continues until the total flow drops below Y, after which they are sent to the lower-layer system for processing.
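The 23-request example can be replayed numerically. This is an illustrative sketch only: the flow values and the shrink factor standing in for "filtering out specified content" are hypothetical, chosen so that the first 11 requests fit under Y and the remaining 12 must be consumed twice.

```python
def peak_clip(requests, capacity_y):
    """Split a batch at the lower-layer capacity boundary Y: the longest
    prefix whose total flow stays under Y goes straight down, the rest
    waits in the queue."""
    direct, queued, total = [], [], 0
    for flow in requests:
        if not queued and total + flow < capacity_y:
            total += flow
            direct.append(flow)
        else:
            queued.append(flow)
    return direct, queued

def consume_by_filter(queued, shrink=0.5):
    """Consumption by filtering: removing the pre-set specified content
    reduces each request's flow (the 0.5 shrink factor is hypothetical)."""
    return [flow * shrink for flow in queued]

Y = 120
requests = [10] * 11 + [20] * 12        # 23 requests; the first 11 total 110 < Y
direct, queued = peak_clip(requests, Y)

while sum(queued) >= Y:                 # keep consuming until total flow < Y
    queued = consume_by_filter(queued)
```

After the split, 11 requests go straight to the lower layer and 12 wait; two rounds of filtering bring the queued total under Y, at which point they too can be sent down.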
Considering that when a service request does not meet the filling-in standard, the lower-layer system will return the request when it comes to process it, and the applicant must modify and resubmit it — which means the applicant's service request has to queue again, extending its processing time, while the lower-layer system's workload also increases — this system further includes a preliminary-review subsystem. The preliminary-review subsystem comprises a distribution module, a verification-code sending module, a confirmation module, a result processing module, an adjustment module and a verification-code combination module. The verification-code sending module pre-stores image segments that need recognition, such as image information uploaded by clients — for example a copy of a handwritten explanation — which must be recognized before processing; several image segments can be assembled into one complete image. The distribution module randomly sends a service request queued near the front of the asynchronous message queue to the applicant of a service request queued further back, while the verification-code sending module sends one pre-stored image segment to that applicant. For example, if applicant A's service request a is queued 8th and applicant B's service request b is queued 18th, the distribution module sends service request a to applicant B, who reviews it upon receipt.
That is, applicant B receives service request a and reviews it, while also recognizing the image segment received; the confirmation module then sends the review result and the text recognized from the image segment to the result processing module. If the review result is negative, the service request is sent back to the applicant through the asynchronous message queue to be filled in again — that is, service request a is returned to applicant A for modification or rewriting. If the review result is positive, the result processing module stores the recognized text in association with the service request. Later, when the lower-layer system processes the service requests in the asynchronous message queue — that is, when it processes applicant A's requested service — if the request can be processed normally, the request meets the filling-in standard, the review result was accurate, and applicant B's review was correct. The adjustment module therefore moves the queue position of the review submitter's request — applicant B's service request b — forward by one, to 17th, and the associatively stored text is sent to the verification-code combination module for storage. If the requested service cannot be processed normally, the request did not meet the filling-in standard, meaning the reviewer was not careful during the review; the adjustment module then moves the review submitter's request back by one, so applicant B's service request b drops to 19th, and the submitted text is discarded. After the text fragments are received, the verification-code combination module combines them into the text of the corresponding verified image information in the verification-code sending module, sparing the system the work of recognizing the image information itself.
What has been described above is only an embodiment of the present invention; common knowledge such as well-known specific structures and characteristics is not described here at length. A person of ordinary skill in the art to which this invention belongs knows all the common technical knowledge in the field before the filing date or the priority date, can access all the prior art in the field, and has the ability to apply routine experimental means as of that date. Under the guidance provided by this application, a person of ordinary skill in the art can improve and implement this scheme in combination with their own abilities, and some typical known structures or known methods should not become obstacles to their implementing this application. It should be pointed out that, for those skilled in the art, several modifications and improvements can also be made without departing from the structure of the invention, and these should likewise be regarded as falling within the protection scope of the present invention; they will not affect the effect of implementing the invention or the practicability of the patent. The scope of protection claimed by this application shall be governed by the content of the claims, and the records in the specification, such as the specific embodiments, may be used to interpret the content of the claims.

Claims (6)

1. A peak clipping system based on a distributed token bucket, characterized by comprising a request receiving module, a flow judgment module, an asynchronous message queue module and a consumption component, wherein after the request receiving module receives a service request sent by an upper-layer system, the flow judgment module judges whether the flow of the service request exceeds a preset peak flow; if the flow of the service request exceeds the peak flow, the service request is sent to the asynchronous message queue module to wait in line; if the flow of the service request does not exceed the peak flow, the service request is sent to a lower-layer system, and the lower-layer system processes the service request;
after the lower-layer system has processed the service requests, the consumption component consumes the queued service requests, and the flow judgment module judges whether the flow of the consumed service request exceeds the peak flow; if it does not, the service request is sent to the lower-layer system for processing; if it does, the consumption component continues to consume the service request until its flow no longer exceeds the peak flow, after which the service request is sent to the lower-layer system for processing.
2. The peak clipping system based on a distributed token bucket according to claim 1, characterized in that the flow judgment module judges using a token bucket algorithm.
3. The peak clipping system based on a distributed token bucket according to claim 2, characterized in that the maximum token number of the token bucket matches the processing capacity of the lower-layer system.
4. The peak clipping system based on a distributed token bucket according to claim 3, characterized in that the maximum token number of the token bucket can be adjusted dynamically.
5. The peak clipping system based on a distributed token bucket according to claim 2, characterized in that the flow judgment module implements a distributed token bucket algorithm based on the distributed coordination component ZooKeeper.
6. The peak clipping system based on a distributed token bucket according to claim 5, characterized in that the consumption component is a Kafka consumption component.
CN201811015371.1A 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket Active CN109286573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811015371.1A CN109286573B (en) 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket


Publications (2)

Publication Number Publication Date
CN109286573A true CN109286573A (en) 2019-01-29
CN109286573B CN109286573B (en) 2022-07-08

Family

ID=65183954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811015371.1A Active CN109286573B (en) 2018-08-31 2018-08-31 Peak clipping system based on distributed token bucket

Country Status (1)

Country Link
CN (1) CN109286573B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111176861A (en) * 2019-12-24 2020-05-19 深圳市优必选科技股份有限公司 Asynchronous service processing method, system and computer readable storage medium
CN112822080A (en) * 2020-12-31 2021-05-18 中国人寿保险股份有限公司上海数据中心 Bus system based on SOA architecture
CN113810307A (en) * 2021-10-11 2021-12-17 上海微盟企业发展有限公司 Data flow control method, system and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217495A (en) * 2008-01-11 2008-07-09 北京邮电大学 Traffic monitoring method and device applied under T-MPLS network environment
CN101556678A (en) * 2009-05-21 2009-10-14 中国建设银行股份有限公司 Processing method of batch processing services, system and service processing control equipment
CN101959236A (en) * 2009-07-13 2011-01-26 大唐移动通信设备有限公司 Traffic control method and device
US20110320631A1 (en) * 2010-06-25 2011-12-29 Cox Communications, Inc. Preloading token buckets for dynamically implementing speed increases
CN105468784A (en) * 2015-12-24 2016-04-06 北京京东尚科信息技术有限公司 Method and device for processing highly concurrent traffic

Similar Documents

Publication Publication Date Title
CN109194586A (en) Peak clipping processing method based on distributed token bucket
CN111614718B (en) Enterprise communication channel fusion method, device, equipment and readable storage medium
CN109286573A (en) Peak clipping system based on distributed token bucket
CN110113387A Processing method, apparatus and system based on a distributed batch processing system
EP0950952A2 (en) Server workload management in an asynchronous client/server computing system
CN107621973A Cross-cluster task scheduling method and device
CN110661668B (en) Message sending management method and device
CN106330987A (en) Dynamic load balancing method
CN104657207B Scheduling method, service server and scheduling system for remote authorization requests
CN104734983B Scheduling system, method and device for service data requests
CN105264509A (en) Adaptive interrupt coalescing in a converged network
CN109600798A Multi-domain resource allocation method and device in a network slice
CN103023980A (en) Method and system for processing user service request by cloud platform
CN102904961A (en) Method and system for scheduling cloud computing resources
CN105096122A (en) Fragmented transaction matching method and fragmented transaction matching device
CN109542608A Cloud manual-task scheduling method based on a hybrid queuing network
CN111401991A (en) Data information distribution method and device, storage medium and computer equipment
CN102609307A (en) Multi-core multi-thread dual-operating system network equipment and control method thereof
CN109309646A Multimedia transcoding method and system
CN109242240A Task development cloud platform based on unit-time allocation and timeliness control
CN106549782A Bandwidth scheduling method and device for associated flows in a data center
US20090132582A1 (en) Processor-server hybrid system for processing data
CN105550025A (en) Distributed IaaS (Infrastructure as a Service) scheduling method and system
CN105893160B Multi-interface data scheduling method
CN115391053B (en) Online service method and device based on CPU and GPU hybrid calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant