CN104798356A - Method and apparatus for controlling utilization in a horizontally scaled software application - Google Patents


Info

Publication number
CN104798356A
CN104798356A (Application CN201380060597.2A)
Authority
CN
China
Prior art keywords
flow
application instance
local
application
limit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201380060597.2A
Other languages
Chinese (zh)
Other versions
CN104798356B (en)
Inventor
帕·卡尔森
米卡埃尔·卡尔松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB
Publication of CN104798356A
Application granted
Publication of CN104798356B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 - Managing SLA; Interaction between SLA and QoS
    • H04L41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 - Arrangements for monitoring or testing data switching networks
    • H04L43/02 - Capturing of monitoring data
    • H04L43/026 - Capturing of monitoring data using flow identification
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 - Arrangements for monitoring or testing data switching networks
    • H04L43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 - Network utilisation, e.g. volume of load or congestion level
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/101 - Server selection for load balancing based on network conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 - Managing SLA; Interaction between SLA and QoS
    • H04L41/5019 - Ensuring fulfilment of SLA
    • H04L41/5022 - Ensuring fulfilment of SLA by giving priorities, e.g. assigning classes of service
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/20 - Traffic policing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Stored Programmes (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention comprises an apparatus and method for distributed traffic control in a horizontally scaled application, in which a software-based application is implemented as a number of peer application instances that each provides a portion of the application's overall capability or capacity. An apparatus that includes a distributed traffic controller is instantiated or otherwise implemented at each application instance, and these apparatuses collectively operate to limit the overall utilization of the application by individual clients or affiliated groups of clients according to, e.g., Service Level Agreements or SLAs, and further operate to prevent disproportionate utilization of any one of the application instances. Advantageously, such operations are accomplished according to the teachings herein using efficient information propagation protocols between the distributed traffic controllers.

Description

Method and apparatus for controlling utilization in a horizontally scaled software application
Technical field
The present invention relates generally to distributed processing and, in particular, to horizontally scaled processing systems.
Background
In a horizontally scaled system of the type contemplated herein, an overall software application is realized as multiple peer application instances, each of which provides the full functionality of the application and represents a portion of the application's total capacity or performance capability. However, existing solutions for managing application traffic from a pool of clients rest on a number of assumptions that are generally invalid for horizontally scaled systems.
Such operation stems from the traditional assumptions that traffic control for a client pool is performed in a single instance, e.g., that the whole system is built on a single hardware server and that all traffic is routed through a single point, at which the traffic can be observed and controlled. In a horizontally scaled system, however, hardware and/or software instances may come and go at any time, e.g., due to failures, upgrades, and so on.
Perhaps more critically, distributing traffic from a pool of clients to the peer application instances of a horizontally scaled application may leave some application instances over-used while others are under-used. For example, a given client, or at least given connections originating from the same client context, may be "stickier" than other clients or connections. In this regard, a "sticky" connection is long-lived and associated with ongoing application traffic.
Round-robin "load distribution" schemes as contemplated herein assign the application traffic from the different client pools to the individual application instances without regard to the fact that the sticky connections produced by such distribution may aggregate at one or more of the application instances. Moreover, synchronizing the state of traffic-control parameters between peer application entities may be expensive in terms of the network bandwidth and/or the number of messages needed to reach and maintain a synchronized state.
Summary of the invention
The present invention includes an apparatus and method for distributed traffic control in a horizontally scaled application, wherein a software-based application is implemented as multiple peer application instances, each providing a portion of the application's total capability or capacity. An apparatus comprising a distributed traffic controller is instantiated or otherwise realized at each application instance, and these apparatuses operate cooperatively to limit the overall utilization of the application by individual clients or affiliated client groups according to, e.g., Service Level Agreements (SLAs), and further operate to prevent disproportionate utilization of any one of the application instances. Advantageously, these operations are accomplished according to the teachings herein using efficient information-propagation protocols between the distributed traffic controllers.
In a more detailed example, the teachings herein disclose a method of controlling the utilization of a software application by individual clients. The application is implemented as multiple peer application instances that receive application traffic from any one or more of a plurality of clients, and the method is implemented at each application instance.
With that understanding, the method comprises: classifying the application traffic entering the application instance into flows corresponding to different ones of the clients and/or different types of application traffic; and estimating a local demand value for each flow with respect to the application instance. The method further comprises exchanging local demand information with one or more other ones of the application instances. The exchange comprises sending the local demand values estimated for the flows at the application instance, and receiving similarly estimated local demand values for all like flows at the other application instances.
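As a rough illustration only, the classification and local-demand steps might be sketched as follows. The message fields (`client_domain`, `traffic_type`) and the rate-based demand estimate are assumptions made for the sketch; the method itself leaves the concrete data structures and estimator open.

```python
from collections import defaultdict

def flow_key(message):
    # A flow is defined by its classification parameters; here an
    # assumed (client_domain, traffic_type) pair taken from the message.
    return (message["client_domain"], message["traffic_type"])

def classify(messages):
    # Logically separate the traffic entering one application instance
    # into per-flow buckets.
    flows = defaultdict(list)
    for m in messages:
        flows[flow_key(m)].append(m)
    return flows

def estimate_local_demand(flows, interval_s):
    # Local demand per flow, here estimated as an arrival rate
    # (messages per second) over the measurement interval.
    return {k: len(v) / interval_s for k, v in flows.items()}

messages = [
    {"client_domain": "acme", "traffic_type": "create"},
    {"client_domain": "acme", "traffic_type": "create"},
    {"client_domain": "acme", "traffic_type": "read"},
]
flows = classify(messages)
demand = estimate_local_demand(flows, interval_s=1.0)
print(demand)  # {('acme', 'create'): 2.0, ('acme', 'read'): 1.0}
```

The `demand` mapping is what one instance would then send to its peers during the exchange of local demand information.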
According to the method, the exchanged local demand information is used at each application instance to determine a total demand value for each flow at that application instance. The total demand value is determined with respect to the application as a whole. In this sense, in a non-limiting example, the total demand value determined for a given flow at a given application instance can be understood as the sum of the local demand value estimated for that flow at that application instance and the local demand values estimated for all like flows at the other application instances.
Advantageously, the method continues by using the total demand values so determined to compute a local utilization limit for each flow at the application instance.
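One plausible rule, consistent with the text but not mandated by it, is a proportional share: an instance receives the fraction of the flow's SLA-granted rate that matches its fraction of the flow's total demand. A minimal sketch, with all parameter values illustrative:

```python
def total_demand(local, like_remote):
    # Total demand for a flow: its local demand at this instance plus
    # the local demands of all like flows at the other instances.
    return local + sum(like_remote)

def local_utilization_limit(granted_rate, local, like_remote):
    # Assumed proportional-share rule: this instance's limit is its
    # share of the total demand, applied to the SLA-granted rate.
    total = total_demand(local, like_remote)
    return granted_rate if total == 0 else granted_rate * local / total

# Flow granted 100 msg/s overall; this instance sees 30 msg/s of local
# demand, while two peers report 50 and 20 msg/s for like flows.
limit = local_utilization_limit(100.0, 30.0, [50.0, 20.0])
print(limit)  # 30.0
```

As the reported demands of like flows change, recomputing `limit` shifts capacity toward the instances where the flow's traffic actually lands.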
Correspondingly, the method further comprises marking the application traffic in each flow as policy-conformant or policy-non-conformant according to whether the flow's local utilization limit is exceeded. This operation can be understood as first-level policing, in which the per-flow utilization limits are applied.
As a second step, or second-level policing, the method additionally comprises determining whether the aggregation of the application traffic of all flows at the application instance exceeds a local aggregate utilization limit. According to the method, the buffering of the aggregate application traffic destined for the application instance is controlled based on whether the local aggregate utilization limit is exceeded and/or based on the distinction between policy-conformant and policy-non-conformant traffic. For example, while individual flows may be constrained in response to policy-non-conformant traffic, it may also be the case that buffering the aggregate application traffic involves at least applying different buffering priorities to policy-conformant and policy-non-conformant traffic during periods in which the local aggregate utilization limit is exceeded.
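The two policing levels can be combined as in the following sketch. The specific marking and admission policies here (admit conformant flows first when the aggregate limit is exceeded) are assumptions consistent with, but not dictated by, the text:

```python
def first_level_mark(flow_rates, flow_limits):
    # First level: mark each flow conformant/non-conformant against
    # its per-flow local utilization limit.
    return {f: flow_rates[f] <= flow_limits[f] for f in flow_rates}

def second_level_admit(flow_rates, marks, aggregate_limit):
    # Second level: when the aggregate exceeds the local aggregate
    # utilization limit, buffer policy-conformant flows with priority.
    if sum(flow_rates.values()) <= aggregate_limit:
        return set(flow_rates)  # everything admitted
    admitted, used = set(), 0.0
    ordered = sorted(flow_rates, key=lambda f: not marks[f])  # conformant first
    for f in ordered:
        if used + flow_rates[f] <= aggregate_limit:
            admitted.add(f)
            used += flow_rates[f]
    return admitted

rates = {"a": 40.0, "b": 80.0}
limits = {"a": 50.0, "b": 60.0}
marks = first_level_mark(rates, limits)  # "a" conformant, "b" not
admitted = second_level_admit(rates, marks, aggregate_limit=100.0)
print(marks, admitted)  # {'a': True, 'b': False} {'a'}
```

Here flow "b" exceeds its own limit and the aggregate limit is exceeded, so only the conformant flow "a" is admitted in full.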
The above method, and variations or extensions of it, are realized in one or more embodiments of the teachings herein using an apparatus that comprises a distributed traffic controller and a communication controller. The apparatus may be software-based, e.g., implemented as a logical or functional circuit arrangement according to the execution of computer program instructions stored in a computer-readable medium. In an example case, the apparatus is implemented as part of each application instance, or as a companion program that executes with the application instance in an associated host operating-system environment.
In an example configuration, the distributed traffic controller classifies the application traffic entering its associated application instance into flows and applies token-bucket-based first-level policing to each flow. That is, a per-flow token-bucket policing scheme is applied to the traffic in each flow, so that the application traffic in a flow is marked as policy-conformant or policy-non-conformant according to whether the flow's local utilization limit is exceeded and, optionally, first-level traffic conditioning is applied on a per-flow basis (e.g., by dropping some of the application traffic from the flow).
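A per-flow token bucket of the kind referenced above could look like the following minimal sketch; the rate and burst values are illustrative, since the text does not fix parameters:

```python
class TokenBucket:
    """Per-flow policer: traffic is conformant while tokens remain."""

    def __init__(self, rate, burst):
        self.rate = rate      # token refill rate, tokens per second
        self.burst = burst    # bucket depth, i.e. maximum burst size
        self.tokens = burst   # bucket starts full
        self.last = 0.0

    def conforms(self, now, cost=1.0):
        # Refill for the elapsed time, capped at the bucket depth,
        # then spend a token if one is available.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True   # mark policy-conformant
        return False      # mark policy-non-conformant

tb = TokenBucket(rate=1.0, burst=2.0)
marks = [tb.conforms(t) for t in (0.0, 0.1, 0.2, 0.3)]
print(marks)  # [True, True, False, False]
```

The first two messages fit within the burst allowance; the later ones arrive faster than the refill rate and are marked non-conformant. Adjusting `rate` to the flow's current local utilization limit ties the bucket to the distributed demand calculation.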
These per-flow utilization limits are determined from the local demand values estimated by the distributed traffic controller for the flows of its paired application instance, together with the local demand values estimated by the other distributed traffic controllers for all like flows at the other application instances. In other words, each flow at each application instance is defined by its classification parameters, e.g., traffic type, client domain, and so on, and any flow at another application instance having the same classification parameters is a like flow. The aggregate or total demand associated with any given flow therefore depends on the local demands of all like flows across all application instances.
The communication controller paired with the distributed traffic controller exchanges the local demand information, thereby propagating the local demand values among all of the distributed traffic controllers. This makes it possible to compute the total demand values in consideration of the corresponding flow demands at all other application instances, and to dynamically and accurately adjust the per-flow local utilization limits.
As a further advantage, each distributed traffic controller applies second-level policing to the aggregation of application traffic at its application instance, i.e., to the aggregate flow combining all of the individual flows at the application instance. The aggregate-level policing may involve selectively conditioning the aggregate flow according to whether the local aggregate utilization limit is exceeded.
In effect, the share of the granted flow capacity, or of other application resources, allotted to a given flow at a given application instance changes as the total demand associated with that flow changes. Of course, the present invention is not limited to the above features and advantages. Indeed, those skilled in the art will recognize additional features and advantages upon reading the following detailed description and upon viewing the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of one embodiment of a distributed processing system implementing a horizontally scaled application.
Fig. 2 is a block diagram showing example details of the distributed processing system of Fig. 1.
Fig. 3 is a block diagram of one embodiment of a distributed traffic controller as contemplated herein.
Fig. 4 is a block diagram of further example details of the distributed traffic controller of Fig. 3.
Fig. 5 is a logic flow diagram of one embodiment of a distributed traffic control method as contemplated herein.
Figs. 6A and 6B are block diagrams providing further example details of a traffic classifier and a per-flow policing arrangement that may be realized in the distributed traffic controller.
Fig. 7 is a block diagram illustrating an embodiment of per-flow policing based on marking application traffic as policy-conformant or policy-non-conformant.
Figs. 8A, 8B and 9-11 are logic flow diagrams of token-bucket-based traffic policing performed by the distributed traffic controller according to one or more embodiments taught herein.
Fig. 12 is a signal flow diagram of an embodiment of the exchange of local demand information between distributed traffic controllers.
Fig. 13 is a logic flow diagram of an embodiment of generating and sending synchronization (SYN) messages as part of the exchange of local demand information.
Fig. 14 is a logic flow diagram of an embodiment of receiving and processing certain message types received at a distributed traffic controller as part of the exchange of local demand information.
Fig. 15 is a schematic diagram illustrating an example of distributed traffic control as provided by the distributed traffic control teachings herein.
Detailed description
Fig. 1 shows a software-based application 10, referred to in this discussion simply as the "application 10." Pools of clients 12-1, 12-2, etc., use the application 10, and it will be appreciated that each client 12-1, 12-2, etc., consumes a particular portion of its total capacity or capability when doing so. For ease of reference, the reference number 12 without a suffix is used to refer generically to any one or more of the clients 12-1, 12-2, etc. The terms "client 12" and "clients 12" therefore refer, respectively, to any one of the clients and to any two or more of them.
Further, the term "client" as used herein carries several meanings. In general, a client 12 comprises some software component instance, instantiated in a system or system device, that generates one or more flows of application traffic towards the application 10. An example client 12 may generate several different message types, e.g., create, read, update, etc., and each message type can be regarded as an individual traffic "flow" with respect to the application 10. Each such flow may be governed by a Service Level Agreement or SLA, negotiated between the organization providing the application 10 and a subscribing organization that uses the application via one or more clients 12. Multiple users affiliated with the subscribing organization may run multiple like clients 12 or multiple different clients 12, each utilizing the application 10 according to the SLA terms, which may apply to the collective utilization of the application 10 by all such clients 12.
With the above scheme in mind, it can be seen that the application 10 is implemented as multiple peer application instances 14-1, 14-2, etc. Unless suffixes are needed for clarity, this discussion uses the term "application instance 14" to refer generically to any given one of the application instances 14-1, 14-2, etc., and likewise uses the term "application instances 14" to refer to any given two or more of them.
Each application instance 14 operates as a copy of the application 10 and therefore provides the full functionality of the application 10, but provides only a portion of the total application capability or capacity. That total capability or capacity, which may be measured in terms of transactions per second or the like, is represented in horizontally scaled form by the set of peer application instances 14. The application 10 at large, and each of the application instances 14, will be understood as comprising, or being represented by, functional circuit arrangements realized in the digital processing circuits and associated memory of one or more computer systems, e.g., servers running operating systems within which the application instances 14 execute. The drawings depict such processing circuitry in an overall sense as a "distributed processing system 16."
Individual ones of the clients 12 are communicatively coupled to individual ones of the application instances 14 through one or more computer networks 18, e.g., one or more public or private data networks, which may include the Internet and may include Virtual Private Network (VPN) connections supported therein. Each client 12 sends application traffic to the application 10, where that traffic comprises, e.g., request messages sent according to a defined protocol. A load balancer 20 receives the incoming application traffic from the client pools 12 and distributes it to respective ones of the application instances 14 using, e.g., a round-robin distribution function, in which each new application message, or batch of new application messages, entering the load balancer 20 is distributed to the next one of the application instances 14.
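The round-robin distribution performed by the load balancer 20 can be illustrated with a minimal sketch (the instance names are assumed for illustration):

```python
import itertools

def round_robin(instances):
    # Cycle over the application instances; each new message goes to
    # the next instance, regardless of message cost or stickiness.
    return itertools.cycle(instances)

chooser = round_robin(["inst-1", "inst-2", "inst-3"])
assignment = [next(chooser) for _ in range(5)]
print(assignment)  # ['inst-1', 'inst-2', 'inst-3', 'inst-1', 'inst-2']
```

Note that such a distributor counts messages, not load, which is exactly why sticky connections can pile up unevenly, as discussed next.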
The application traffic entering any one of the application instances 14 therefore comprises any number of application traffic flows 22, as noted above. In other words, for any given application instance 14, the incoming application traffic may comprise multiple types of messages from multiple ones of the clients 12.
Processing detailed later herein logically separates the application traffic entering each application instance 14 into the individual flows 22, where each flow 22 generally represents application traffic of a given type and having a given client association. The client association may be specific, e.g., traffic from the client 12-1 or 12-2, etc., or it may be a domain-level association, e.g., any clients 12 associated with the same SLA or other subscription permissions, where those subscription permissions give all such clients 12 access to the application 10. In this regard, it should also be noted that the application traffic entering any given application instance 14 from the load balancer 20 generally is not yet logically classified into the flows 22; that classification is performed in conjunction with the distributed traffic control taught herein. The placement of the reference number "22" in Fig. 1 is thus meant only to indicate that the incoming traffic comprises traffic associated with any number of flows 22, as that term is defined herein.
Although the "fair" traffic distribution approach taken by the load balancer 20 may distribute the initial request messages evenly to the application instances 14, some of those requests are served quickly, while others involve "sticky" or persistent connections, with follow-on traffic anchored to the sticky connection. The application instances 14 receiving the incoming request messages or other types of traffic therefore do not all incur the same processing load, and load imbalances can arise when traffic is distributed without regard to its stickiness.
Because the mixed flows of application traffic from the various clients 12 into any one of the application instances 14 represent arbitrary combinations of sticky and non-sticky transactions, each application instance 14 includes, or is paired with, an apparatus 28 configured to control the maximum overall utilization of the software application 10 by particular flows of application traffic. In brief, the apparatuses 28 provide sophisticated distributed traffic control across the set of application instances 14, without impeding the performance of the application 10 and without requiring extensive signaling among themselves.
It will be appreciated that the apparatuses 28 may represent functional circuit arrangements realized, e.g., in the digital processing circuitry of the distributed processing system 16 by the execution of stored computer program instructions, where those stored instructions embody the processing logic described herein for the apparatus 28. It will also be understood that the apparatus 28 may be replicated at each application instance, so that a like apparatus 28 is realized for each of the application instances 14 comprising the overall application 10. Each such apparatus 28 may be implemented in the computer program comprising the application instance 14, or may be implemented as an attached or "companion" program with respect to the application instance 14.
The apparatus 28 includes a distributed traffic controller 30, abbreviated "DTC" in the drawings; that abbreviation is used hereafter for convenience. The DTC 30 is configured to estimate, for each flow 22 of application traffic at the application instance 14, a local demand value with respect to the application instance 14.
The apparatus 28 also includes a communication controller 32, depicted in the drawings by the abbreviation "CC," which is likewise used hereafter. The CC 32 is configured to exchange local demand information with one or more other ones of the application instances 14. Exchanging local demand information comprises sending the local demand values estimated at the application instance 14 for each flow 22 of application traffic at the application instance 14, and receiving similarly estimated local demand values for the like flows 22 at other ones of the application instances 14.
As will be detailed later herein, two flows 22 of application traffic at two different application instances 14 are considered "like flows" if they comprise the same type of application traffic and originate from the same client 12, or from the same client domain or context. The term "domain" refers to the case in which potentially many clients 12 are identified with a single subscribing entity, so that all application traffic originating from those clients 12 belongs to the same client domain or context and, as a whole, is subject to the SLA signed by the subscribing entity. More simply, a flow 22 at one of the application instances 14 is "like" another flow 22 at another application instance 14 if the two flows have the same classification, e.g., both comprise the same type of application traffic and both are associated with the same client context.
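The "like flow" relation thus reduces to equality of classification parameters. A minimal sketch, with the tuple fields assumed for illustration:

```python
def classification(flow):
    # A flow's classification: assumed (traffic_type, client_context) pair.
    return (flow["traffic_type"], flow["client_context"])

def are_like(flow_a, flow_b):
    # Flows at different application instances are "like" when their
    # classification parameters are identical.
    return classification(flow_a) == classification(flow_b)

f1 = {"traffic_type": "create", "client_context": "acme-sla"}  # at instance 14-1
f2 = {"traffic_type": "create", "client_context": "acme-sla"}  # at instance 14-2
f3 = {"traffic_type": "read", "client_context": "acme-sla"}
print(are_like(f1, f2), are_like(f1, f3))  # True False
```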
It is also noted that the separation of logical or functional circuitry proposed for the apparatus 28 in Fig. 1 can have certain advantages; e.g., separating the apparatus 28 into the logical controllers 30 and 32 lets one such controller handle the distributed traffic control at its respective application instance 14, while the other handles the information exchange between the apparatuses 28 that maintains the synchronized state among all of the DTCs 30. However, combined control circuitry may also be realized, and/or other functional divisions may be used to implement the apparatus 28. The depicted arrangement is therefore not to be understood as limiting.
The DTC 30 is further configured to determine, in an overall sense with respect to the application 10, a total demand value for each flow 22 at the application instance 14. This can be understood as evaluating the local demand value determined for each flow 22 at the application instance 14 together with the local demand values of all like flows 22 at the other application instances 14. The determination of the total demand value associated with each flow 22 is therefore based on the exchange of local demand information between the CCs 32.
The DTC 30 is further configured to: compute, for each flow 22, a local utilization limit according to the total demand value determined for that flow 22; mark the application traffic in each flow 22 as policy-conformant or policy-non-conformant according to whether the flow's local utilization limit is exceeded; determine whether the aggregation of the application traffic in all of the flows 22 at the application instance 14 exceeds a local aggregate utilization limit; and control, on a per-flow and/or aggregate-flow basis, the buffering of the aggregate application traffic destined for the application instance 14, based on whether the local aggregate utilization limit is exceeded and on the distinction between policy-conformant and policy-non-conformant traffic.
In at least some embodiments, therefore, the DTC 30 can be understood as policing the application traffic in each individual flow 22 against a per-flow local utilization limit, determined at the application instance 14 using the demand (utilization) information determined for the flow 22 at the DTC 30 and for the like flows 22 at the other application instances 14, where a flow's local utilization limit is proportional to the flow's share of the aggregate demand represented by the flow 22 and all of its like flows 22. This allows the application traffic of all like flows 22 across all of the application instances 14 to be jointly policed, and overall or network-wide control to be applied to a client's application traffic, without a centralized flow-control mechanism.
In some embodiments, the DTC 30 is configured to cooperate with the CC 32 to exchange the local demand information by communicating with one or more other ones of the application instances 14 via a gossip-based anti-entropy protocol, whereby the local demand values estimated at any one of the application instances 14 are propagated to every other one of the application instances 14. See, e.g., Van Renesse, R., Dumitriu, D., Gough, V., & Thomas, C., "Efficient Reconciliation and Flow Control for Anti-Entropy Protocols," Second Workshop on Large-Scale Distributed Systems and Middleware (LADIS 2008), Yorktown Heights, NY: ACM (ISBN: 978-1-60558-296-2). See also Bailly, F., & Longo, G., "Biological organization and anti-entropy," J. Biol. Syst. 17(1): 63-96 (2009). These two references provide example details of gossip-based information exchange and are incorporated herein by reference.
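A gossip-style anti-entropy exchange of local demand values might be sketched as follows. The reconciliation rule used here (keep the newest version per origin-and-flow key) is an assumption for illustration; the protocol details are deferred to the cited references.

```python
import random

def reconcile(state_a, state_b):
    # Anti-entropy merge: per (origin, flow) key, keep the entry with
    # the highest version number, so both peers converge to one view.
    merged = dict(state_a)
    for key, (version, demand) in state_b.items():
        if key not in merged or merged[key][0] < version:
            merged[key] = (version, demand)
    return merged

def gossip_round(states):
    # Each node exchanges full state with one randomly chosen peer.
    nodes = list(states)
    for node in nodes:
        peer = random.choice([n for n in nodes if n != node])
        merged = reconcile(states[node], states[peer])
        states[node] = dict(merged)
        states[peer] = dict(merged)

# Each node initially knows only its own local demand for flow "f".
states = {n: {(n, "f"): (1, 10.0 * (i + 1))} for i, n in enumerate("ABC")}
for _ in range(8):  # a few rounds suffice for three nodes
    gossip_round(states)
print(states["A"] == states["B"] == states["C"])  # True
```

Each round spreads knowledge multiplicatively, which is what keeps the message count low relative to all-to-all synchronization of the traffic-control state.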
Returning to the exemplary details of the DTC 30, multiple methods of estimating the local demand value of each flow 22 are contemplated herein. In a non-limiting example, the DTC 30 is configured to estimate the local demand of each flow 22 in at least one of the following ways: counting the number of protocol sessions active for the flow 22 at the application instance 14; estimating an expected flow rate based on whether any new traffic appears in the flow 22 within a defined interval; and estimating the expected flow rate based on a measured arrival rate of application traffic in the flow 22.
Each DTC 30 is also configured to determine the total demand value of each flow 22 at its corresponding application instance 14. This total demand value is, of course, determined with respect to the application 10 as a whole, e.g., by summing the local demand value estimated for the flow 22 with the local demand values estimated by the DTCs 30 at the other application instances 14 for the like flows 22. This information is known through the exchange of local demand information between the DTCs 30 via the CCs 32.
Regarding determination of the local utilization limit of each flow 22 at any given one of the application instances 14, in some embodiments the corresponding DTC 30 is configured to compute the local utilization limit of a flow 22 by computing a local flow-rate limit for the application traffic in that flow 22. In this regard, bear in mind that each application instance 14 sees a mix of application traffic from the various clients 12, as dynamically distributed by the load balancer 20; the DTC 30 at any given application instance can therefore be configured to sort or otherwise classify the incoming application traffic into distinct flows 22 based on traffic type, origin identifiers associated with the requests, and so on. For example, a flow 22 may comprise all application requests from a given client 12 carried in SOAP/HTTP or Telnet messages, where all of the requests have the same user ID. Each flow 22 may be associated with a specific service level agreement, or SLA. The DTC 30 thus must operate in a decentralized manner while still fulfilling the overall SLA commitments that the application 10 is intended to meet with respect to the clients 12 and their corresponding or combined application traffic flows 22.
In some embodiments, each given DTC 30 is further configured to compute the local flow-rate limit for the application traffic in each flow 22 by scaling a known total maximum flow-rate limit for all like flows 22 by a scale factor, where the scale factor is determined as the ratio of the flow's local demand value to the total demand value of the flow 22 and all of its like flows 22 at the other application instances. The total maximum flow-rate limit may derive from an SLA or other preconfigured constraint, and may be a configuration data item stored in memory that is included in, or accessible to, each apparatus 28. Note, too, that the DTC 30 may be configured to compute the local utilization limit of a flow 22 further by computing a local burst-size limit for the application traffic in the flow 22.
Thus, at each application instance 14, the application traffic entering the application instance 14 from the load balancer 20 can be classified into flows 22, where each flow 22 is subject to policing (e.g., maximum flow-rate and/or burst-size limits) and the aggregate flow of application traffic of such flows 22 at the application instance 14 is further constrained according to an aggregate utilization limit. This operation of each DTC 30 with respect to its respective one of the application instances 14 permits decentralized control of application utilization: it prevents any one client 12, or multiple clients 12 together, from overloading a given application instance 14, while still ensuring that the application 10 meets its SLA requirements with respect to those clients 12.
Regarding control of the buffering of the aggregate flow of application traffic going to the application instance 14, in some embodiments the DTC 30 is configured to buffer the aggregate application traffic in one or more delay buffers and to empty the one or more delay buffers toward the application instance 14 according to a priority-differentiated scheme that, in general, imposes shorter buffering delays on policy-conforming traffic than on non-conforming traffic.
For example, if the local aggregate utilization limit is exceeded, the DTC 30 regulates the aggregate flow of application traffic going to the application instance 14, e.g., by emptying the buffers according to a priority-differentiated scheme that disfavors non-conforming application traffic relative to conforming application traffic. Because non-conforming application traffic represents local overuse by a given flow 22, this has the effect of throttling or suppressing one or more of the flows 22 at the application instance 14.
Of course, as noted above, the DTC 30 may be further configured to prioritize any given application traffic within the one or more delay buffers according to one or more service parameters defined in any SLA applicable to each flow 22. Moreover, as noted, application traffic is marked as policy-conforming or non-conforming on a per-flow basis. Thus, at any given time, one flow 22 may be overusing the application instance 14, with its traffic marked as non-conforming, while the traffic of another flow 22 that is within its local utilization limit is marked as conforming.
By classifying the incoming application traffic into flows 22 and marking the application traffic of each flow 22 as policy-conforming or non-conforming, the DTC 30 can be further configured to throttle or selectively admit the application messages in each flow 22 as needed, thereby maintaining compliance with the maximum service levels guaranteed in any SLAs applicable to the flows 22. Accordingly, application traffic can be subjected to selective packet dropping, or to other types of rate limiting or shaping, at each application instance 14, both per flow 22 and for the aggregate flow represented by the combination of the flows 22.
Fig. 2 shows that the application instances 14 and their associated apparatuses 28 can be implemented on the same or separate computing systems (e.g., the same or separate server hardware), and can be implemented using virtualized servers. In the illustrated example, application instances 14-1 and 14-2 are implemented on a virtualized server residing on a first physical server, with the virtualized server providing the operating-system environment for their execution. Their associated apparatuses 28, of course, reside in that same operating-system environment.
Application instances 14-3 and 14-4, with their associated apparatuses 28, also reside on this same server, but are implemented outside the virtualized server hosting application instances 14-1 and 14-2. Two additional servers are shown, each hosting a respective one of two further application instances 14-5 and 14-6. These additional servers may or may not be co-located with the other servers, but they are at least communicatively linked, to provide for the exchange of local demand information between the CCs 32 in the corresponding apparatuses 28.
Fig. 3 shows an exemplary functional circuit implementation of the DTC 30 included in the apparatus 28 implemented in conjunction with each application instance 14. In the illustrated example, the DTC 30 comprises an SLA classifier 40, an SLA limiter 42, a demand store 44, a rate calculator 46, and a demand distributor and receiver 48, which can be understood as comprising the CC 32, or elements thereof, for the exchange of local demand information.
The input application traffic of the DTC 30 is the aggregate of all application traffic sent to the application instance by the load balancer 20 (not shown), and thus comprises a mix of traffic from any number of flows 22. Likewise, the application traffic leaving the DTC 30 and entering the application instance 14 represents an aggregate flow. However, as compared to the input aggregate seen by the DTC 30 from the load balancer 20, the aggregate flow of traffic from the DTC 30 to the application instance may be policed, shaped, or otherwise regulated. For example, the aggregate application traffic leaving the DTC 30 may be rate limited or otherwise shaped relative to the aggregate application traffic entering the DTC 30.
For a better understanding of the operation of the SLA limiter 42 in an exemplary configuration, Fig. 4 shows a functional circuit arrangement of the SLA limiter 42 according to one embodiment. In the illustrated embodiment, the SLA limiter 42 includes a token bucket selector 50, referred to in the drawings and hereafter as the "TB selector 50." The SLA limiter 42 further includes a traffic policer 52, which logically includes, and operates using, token buckets "A," "B," "C," etc., for respective flows 22 (labeled "A," "B," "C," etc.). Additionally, the SLA limiter 42 includes a TB policing SLA limiter 54, a queue processor 56, and corresponding high-priority and low-priority queues 58 and 60, which in practice may comprise multiple high-priority and low-priority queues and may be implemented in working memory available to the apparatus 28 in its host operating environment. These entities, as a whole, can be understood as an SLA enforcer 62.
While exemplary details are given for the operation of these detailed functional circuit arrangements, they serve to illustrate the overall method of operation implemented by each apparatus 28. Fig. 5 provides an example of that method, denoted in the diagram as "method 500." Method 500 will be understood as a method of controlling the maximum overall utilization of the software application 10 by individual clients 12, where the application 10 is implemented across multiple peer application instances 14 that receive application traffic dynamically distributed to them from multiple clients 12.
The method 500, at each application instance 14, comprises: dividing the application traffic entering the application instance 14 into flows 22 (block 502); estimating a local demand value for each flow 22 with respect to the application instance 14 (block 504); exchanging local demand information with one or more other ones of the application instances 14 (block 506), including sending the local demand values estimated at the application instance 14 for each flow 22 at the application instance 14, and receiving like estimated local demand values for the like flows 22 at the other application instances; determining total demand values for the flows 22 with respect to the application 10 based on the exchanged local demand information (block 508); computing the local utilization limit of each flow 22 from the total demand value determined for that flow 22 (block 510); and marking the application traffic in each flow 22 as policy-conforming or non-conforming according to whether the local utilization limit of the flow 22 is exceeded (block 512).
Method 500 further comprises: determining whether the aggregate of the application traffic of all flows 22 at the application instance 14 exceeds the local aggregate utilization limit (block 514); and controlling the buffering of the aggregate application traffic going to the application instance based on whether the local aggregate utilization limit is exceeded and/or based on a differentiation, e.g., by buffering priority, between policy-conforming and non-conforming traffic.
Method 500 controls application traffic across any number of servers/application instances 14 in a decentralized manner, its chief advantage being the controlled sharing of application resources among any number of clients 12. In short, with the apparatuses 28 and the method 500 contemplated herein, there is no central point that enforces or controls resource allocation. Rather, each apparatus 28 acts as a traffic regulator for its respective one of the application instances 14 and runs two main algorithms: a local traffic control algorithm provided by the DTC 30 and a state propagation algorithm provided by the CC 32.
Although the following details may vary in some respects, in one or more embodiments the DTC 30 at each application instance 14 periodically computes and stores a "demand" for each of one or more classified flows 22, based on the application traffic input to the application instance 14. These periodically estimated demand values are efficiently propagated to the other DTCs 30 using the propagation algorithm; that is, the local demand information computed at each DTC 30 is shared with the other DTCs 30. Further, each DTC 30 computes resource limit values, in the same unit of measure (e.g., rate), based on its own computed demands, the demands computed by the other DTCs 30, and a configured or otherwise known value representing the total application capacity. Each DTC 30 regulates the application traffic processed by its respective application instance 14 in each flow 22, based on the applicable per-flow limits at that application instance and independently of its peers, thereby providing fair or balanced sharing of the application capacity.
The application instances 14 therefore need not share state information themselves, and no application instance 14 has a special role relative to its peer application instances 14. Likewise, no apparatus 28 has a special role relative to its peer apparatuses 28. Each apparatus 28 simply operates, in like manner, on the local demand values propagated between the apparatuses 28 to provide an independent traffic regulation function at each application instance, yet this allows the application 10 as a whole to meet the SLA requirements of each client 12 without allowing any one of those clients to overuse the application 10. The apparatuses 28 thus provide sufficient coordination between the application instances 14 to enforce SLAs at the application level.
As noted above, the classification of traffic into distinct flows can be based on the origin associated with the application traffic input to any given application instance 14. For example, a flow may be the requests from a client 12 carried in SOAP/HTTP or Telnet messages, where all such requests have the same user ID. Each flow can accordingly be associated with a specific SLA, e.g., a minimum request rate and/or maximum burst size that the application 10 is to satisfy overall.
To further the discussion of exemplary operational details, returning to Fig. 3, the SLA classifier 40 reads or otherwise determines the client identity for given application traffic, e.g., a given incoming request message, and tags the traffic with the SLA classification applicable to that client 12. Note that the SLA classifier 40 could also be implemented within the application instance 14, such that the incoming application traffic is identified and tagged by the application instance 14, passed to the apparatus 28 for controlled buffering (as taught herein), and the resulting regulated application traffic is then returned by the DTC 30 to the application instance 14 for processing.
Application client identities (IDs) may be stored in the demand store 44. The SLA limiter 42 operates as a traffic shaper by enforcing the SLA tags set by the SLA classifier 40, where an SLA tag can be understood as a type of traffic classification or identifier. In this regard, traffic shaping can be implemented using token bucket schemes. See, e.g., Kim, Han Seok; Park, Eun-Chan; Heo, Seo Weon, "A Token-Bucket Based Rate Control Algorithm with Maximum and Minimum Rate Constraints," IEICE Transactions on Communications, Volume E91.B, Issue 5, pp. 1623-1626, which is incorporated herein by reference.
The rate calculator 46 reads the information in the demand store 44 at a given interval "A" and updates all of the token bucket rates associated with the TB selector 50 in the SLA limiter 42 (see, e.g., the token buckets for clients A, B, and C shown in the traffic policer 52 of Fig. 4). The demand distributor and receiver 48 reads the demand store 44 at a given interval "B" and uses a gossip-based anti-entropy protocol algorithm to synchronize with the local demand information at the other application instances 14. Intervals A and B need not be equal; e.g., interval B may be longer than interval A, and the absolute values of the two intervals may be set as needed or desired in the context of a given application. Shorter intervals give better "system" responsiveness, but increase the signaling overhead between the peer apparatuses 28. There is considerable flexibility here, however, because the rate calculator 46 and the DTC 30 generally run as processes separate from the CC 32 within the apparatus 28.
Turning to the exemplary SLA limiter implementation shown in Fig. 4, it can be seen that the SLA limiter 42 performs traffic shaping based on the operation of several functional circuits. Operationally, the SLA limiter 42 enforces, for each arriving application message, the computed local utilization limit (e.g., a local rate limit) for processing by the application instance 14. The SLA limiter 42 performs the following logical processing operations for each application message entering the application instance 14: (1) the TB selector 50 reads the SLA tag from the application message and, based on the tag, selects the particular token bucket policing instance from the existing set of TB policing instances in the traffic policer 52. In other words, the flow 22 to which the application message belongs is identified, and the applicable token bucket (e.g., for flow A, B, or C) is identified.
The application message is evaluated by the appropriate one of the TB policing instances in the traffic policer 52. The application message is marked as policy-conforming or non-conforming according to whether the application traffic of its flow 22 exceeds the local utilization limit computed for that flow 22. Correspondingly, the queue processor 56 uses the policing result from the traffic policer 52 to select the correct queue (low priority or high priority). As noted above, the queues 58 and 60 comprised in the SLA enforcer 62 can also be organized according to other priority divisions (e.g., based on SLA minimum service rates, causing some buffered application traffic to be buffered at higher priority than other buffered application traffic).
The traffic policer 52 can be understood as applying a first level, or first step, of traffic policing, performed on a per-flow basis. Correspondingly, the SLA enforcer 62 can be understood as applying a second level, or second step, of traffic policing, the key difference being that the policing enforced by the SLA enforcer 62 operates on the aggregate of all traffic flows 22 of the apparatus 28/application instance 14. In an exemplary configuration, the SLA enforcer 62 controls the buffering of the aggregate application traffic going to the application instance 14 based on whether the local aggregate utilization limit is exceeded and/or based on the differentiation between policy-conforming and non-conforming traffic. For example, controlling the buffering may mean that messages are not buffered as long as the local aggregate utilization limit is not exceeded, or messages may end up in different buffers, with each buffer emptied at a different priority.
Further, the local aggregate utilization limit can be expressed in terms of one or more flow parameters, such as a maximum aggregate flow rate and/or a maximum aggregate burst size.
In one approach, the queue processor 56 checks whether the buffers 58, 60 contain any application traffic. If all such queues are empty, the SLA enforcer 62 applies no traffic regulation, and a given application message is delivered to the application instance 14 without priority-differentiated buffering delay.
On the other hand, if the buffers 58, 60 are not empty, the queue processor 56 empties the buffers 58, 60 according to a defined priority scheme, e.g., emptying the application traffic buffered in the high-priority buffer at a rate of 0.99*R_tot and the application traffic in the low-priority buffer at a rate of 0.01*R_tot, where R_tot denotes the maximum aggregate flow rate.
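As a non-limiting illustration only, the 0.99/0.01 draining split can be sketched as follows. The function and queue names are hypothetical, and a real implementation would pace releases over time rather than computing a per-period batch as done here for simplicity.

```python
def drain_order(high_queue, low_queue, r_tot, duration_s, high_share=0.99):
    """Sketch: release up to high_share*r_tot messages/s from the
    high-priority buffer and (1-high_share)*r_tot from the low-priority
    buffer over a period of duration_s seconds."""
    released = []
    high_budget = int(high_share * r_tot * duration_s)
    low_budget = int((1.0 - high_share) * r_tot * duration_s)
    # Drain high priority first, up to its budget, then low priority.
    released += [high_queue.pop(0) for _ in range(min(high_budget, len(high_queue)))]
    released += [low_queue.pop(0) for _ in range(min(low_budget, len(low_queue)))]
    return released
```

For example, with R_tot = 100 messages per second and both buffers backlogged, one second of draining releases 99 high-priority messages and 1 low-priority message.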
Thus, in at least some embodiments herein, different flows 22 can be mapped to different buffers, the high and low priorities represented by buffers 58 and 60 being examples. All application traffic can be placed into such buffers and then released according to the local utilization limits and/or according to the local aggregate utilization limit.
Fig. 6A shows exemplary operation of the SLA classifier 40, which "filters" or otherwise logically processes all application traffic input to the application instance from, e.g., the load balancer 20 or another source. As a result of its processing, e.g., via the SLA tagging described above, the input application traffic is classified into flows 22, where each flow 22 comprises all application traffic associated with the same client context. In the example, it can be seen that the SLA classifier 40 classifies the input application traffic into multiple flows 22, e.g., FLOWA, FLOWB, FLOWC, through FLOWN.
Fig. 6 B extends this phase homogeneous turbulence process example according to an embodiment by showing the operation of SLA limiter 42 to each stream 22.But before further investigation details, it will be helpful for introducing the mark being used for this type of process:
-" flow_x " represents between all application examples 14 for the contextual all applied business of same client;
-" flow_x; i " represent that any given application example 14 place is for the contextual all applied business of same client, namely, " i " indicates the application-specific example in application example 14, therefore will be appreciated that, the DTC 30 at application example 14-i place estimates flow_x, the local requirements of i, and based on receiving flow_x, y, flow_x, z etc. estimate the total demand value be associated of flow_x, and wherein, " y " and " z " represents flow_x other examples at corresponding other application examples 14-y and 14-z place;
-" d_x, i " represents for the local requirements estimated by flow_x, i;
-" r_x, i " represents this locality stream utilance restriction for flow_x, i in flow rate, and other restrictions can additionally or alternatively be suitable for, such as, maximum burst size, it is represented as " b_x, i ";
-" r_tot; i " this locality polymerization utilance restriction that can be applicable to the polymerization of all streams at given application example 14-i place of the restriction of " b_tot, i " expression flow rate and burst sizes restricted representation---such as, r_tot, i=r_x, i+r_y, i+r_z, i, wherein, r_y, i and r_z, i represents the maximum flow rate restriction of flow_y and flow_z at application example 14-i place;
With the above notation, "R_x" may be used to denote the maximum overall utilization for all instances flow_x,i of flow_x. Similarly, "B_x" may be used to denote the maximum total burst-size limit for all instances flow_x,i of flow_x.

Further, "R_tot" may be used to denote the maximum aggregate flow rate of the aggregate of all flow instances across all of the application instances 14. Similarly, "B_tot" may be used to denote the maximum aggregate burst-size limit of the aggregate of all flow instances across all application instances.
With the above notation in mind, each apparatus 28 performs the following operations at each application instance i, e.g., at regular intervals: estimate d_x,i for each flow_x by, e.g., counting the number of protocol sessions used by flow_x, or estimating the recent expected flow rate of flow_x (e.g., assuming the future flow rate equals the current arrival rate), or setting d_x,i to "1" if any application message has recently been observed in flow_x and to "0" otherwise, which binary approach can be particularly advantageous when the load balancer 20 distributes the traffic fully evenly.
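The three demand-estimation options just described can be sketched as follows, purely for illustration; the function names and units (seconds, messages per second) are assumptions, not part of the disclosure.

```python
def estimate_demand_binary(last_arrival_ts, now, interval_s):
    """Binary estimate: d_x,i = 1 if any message was observed in the
    flow within the last interval_s seconds, else 0."""
    return 1 if (now - last_arrival_ts) <= interval_s else 0

def estimate_demand_sessions(active_sessions):
    """Demand as the count of protocol sessions active for the flow."""
    return len(active_sessions)

def estimate_demand_rate(arrival_count, window_s):
    """Demand as the recent arrival rate (messages/s), assumed to
    persist into the near future."""
    return arrival_count / window_s
```

Any of the three yields a number that can be announced to peers and summed into D_x; the binary form trades precision for robustness when the load balancer spreads traffic evenly.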
Additionally, at each application instance i, each apparatus 28 performs the following operations at regular intervals (not necessarily the same interval as that used for the demand estimation):
- "announce" all d_x,i values of all flows 22 at application instance i, where the announcement can be accomplished using the gossip-based anti-entropy protocol;
- compute estimated total demand values for all flows 22 at application instance i based on the demands known from the other application instances; e.g., the total demand of a given flow_x is D_x = the sum of d_x,i over all application instances i = 1 to N; and
- adjust the r_x,i of each flow_x,i at application instance i so that the local allocation is proportional to the state (demand) at the other instances; e.g., compute the local utilization limit of each flow_x,i, expressed as a flow rate, according to r_x,i = R_x * (d_x,i / D_x).
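As a non-limiting illustration, the proportional adjustment above can be sketched as follows. The function name and the zero-demand fallback are assumptions; the disclosure does not specify behavior when D_x = 0.

```python
def local_rate_limit(R_x, d_local, d_all):
    """Proportional share: r_x,i = R_x * (d_x,i / D_x), where D_x is the
    sum of the demand values reported by all instances for this flow.
    Falls back to an even split when total demand is zero (assumption)."""
    D_x = sum(d_all)
    if D_x == 0:
        return R_x / len(d_all)
    return R_x * (d_local / D_x)
```

For example, with R_x = 100 and demands of 1, 1, and 0 reported by three instances, the instances obtain local rate limits of 50, 50, and 0, respectively.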
Note that the adjustment step above illustrates a simple, exemplary local utilization limit determination. The local utilization limit of each flow 22 at a given application instance 14 may include enforcement of minimum as well as maximum flow rates. Note, too, that b_x,i and B_x can be updated in a similar fashion (e.g., using a similar linear expression). Further, in some embodiments, e.g., for traffic shaping under a maximum transactions-per-second constraint, r_tot and b_tot can also be adjusted, e.g., r_tot,i = R_tot multiplied by the sum of d_x,i over all x, divided by the sum of d_x,i over all x and all N instances.
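Purely as an illustrative sketch of the r_tot,i expression just given (names and the idle-system fallback are assumptions):

```python
def local_aggregate_rate_limit(R_tot, demands_local, demands_all):
    """r_tot,i = R_tot * (sum of d_x,i over all flows x at instance i)
                       / (sum of d_x,j over all flows x and instances j).
    demands_local is this instance's per-flow demand list; demands_all is
    the list of such lists for all N instances."""
    total = sum(sum(per_instance) for per_instance in demands_all)
    if total == 0:
        # Even split when no demand is reported anywhere (assumption).
        return R_tot / len(demands_all)
    return R_tot * (sum(demands_local) / total)
```

An instance carrying half of the system-wide demand thereby obtains half of R_tot as its local aggregate rate limit.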
The above operations enforce the R_tot limit on the distributed system 16, i.e., in the overall sense of the application 10. Similar enforcement can be carried out with respect to the maximum aggregate burst size, and so on.
Thus, still in the context of Fig. 6B, it will be appreciated that the SLA limiter 42 performs multiple operational steps, including a first operational step of classifying application traffic according to priorities defined with respect to the corresponding local utilization rates, which is a form of "policing." The classified application messages from all of the application traffic flows 22 are aggregated into one of the priority queues realized in the buffers 58, 60. The priority queues are the single checkpoint at which the apparatus 28 performs aggregate application traffic regulation. Priority-queue service parameters, e.g., maximum flow rate and maximum burst size, are enforced on the arriving traffic, comprising all ingress flows 22, exemplified here as flows 22-1, 22-2, etc., where each flow 22 corresponds to a different client context. Here, the achievable traffic rate can be a maximum usable rate, which may be given by the maximum possible "load," or may be defined administratively through dimensioning and/or management policies, e.g., license quotas. Such rates total up to, e.g., R_tot.
The service parameter values of FLOWA are rapidly synchronized with the parameter values of the like FLOWAs at the other application instances 14, i.e., the flows having the same application traffic type and client domain. Using the notation described above, like flows 22 can be expressed as different instances of a given flow_x, e.g., flow_x,i at application instance 14-i, flow_x,j at application instance 14-j, and flow_x,k at application instance 14-k. Here, "x" denotes the common client context, and "i," "j," and "k" denote the different ones of the application instances 14 that receive application traffic belonging to that client context.
Fig. 7 illustrates processing performed "within" an instance of the traffic policer 52 and/or the TB policing SLA limiter 54. In one or more embodiments, the policing algorithm used by both such processing units is the same.
Figs. 8A and 8B set out exemplary details of the token bucket algorithm executed in the sub-blocks of the traffic policer 52 and the TB policing SLA limiter 54. In the illustrated example, the algorithm is denoted "method 800." The algorithms run in these sub-blocks can be identical, the difference being that entity 52 operates per client flow using per-client-flow utilization limits, while entity 54 operates on the aggregate traffic flow using the aggregate utilization limit. Accordingly, two entry points into the token bucket processing of method 800 can be seen, namely entry "B" (in Fig. 8A, for entity 52) and entry "K" (in Fig. 8B, for entity 54), along with two corresponding exit points, "C" (for entity 52) and "L" (for entity 54).
Processing in method 800 begins with receiving a signal indicating that it is time to perform a token bucket update (step 1). The number of tokens to add to the token bucket for a given flow 22 is determined, which represents enforcement of the local utilization limit for that flow 22 (step 2). This determination is based on several factors, including, e.g., the delta time since the most recent update, the traffic service or traffic class, and so on. Step 3 comprises determining the number of tokens to set in the token bucket at issue, based on, e.g., the time since the most recent update, the traffic service class, the current token bucket cache size, the maximum burst rate, and so on. With these determinations made, steps 4-11 set out an exemplary method for determining whether an application message is policy-conforming or non-conforming.
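A minimal token-bucket policer consistent with the refill (steps 1-3) and conformance-decision (steps 4-11) stages just described might look as follows. This is an illustrative sketch under assumed units (one token per message, time in seconds), not the disclosed circuit; the same class could serve both the per-flow policing (rate r_x,i, burst b_x,i) and the aggregate policing (rate r_tot,i, burst b_tot,i).

```python
class TokenBucket:
    """Sketch of a policer: tokens refill at `rate` up to `burst`; a
    message is policy-conforming if a token is available when it arrives."""

    def __init__(self, rate, burst):
        self.rate = float(rate)     # tokens (messages) per second
        self.burst = float(burst)   # bucket capacity
        self.tokens = float(burst)
        self.last_update = 0.0

    def update(self, now):
        # Refill stage: add tokens for the elapsed time, capped at burst.
        elapsed = now - self.last_update
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last_update = now

    def conforms(self, now, cost=1.0):
        # Decision stage: consume a token if available (conforming),
        # otherwise mark the message non-conforming.
        self.update(now)
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

With rate 2 and burst 2, a burst of three back-to-back messages yields two conforming and one non-conforming decisions, and conformance resumes once the bucket refills.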
Fig. 9 depicts the algorithm run in the queue processor 56 in one or more embodiments, generally denoted "method 900." The illustrated method 900 comprises processing steps 1-11, operates on the aggregated message flow in each given apparatus 28, and is executed for each item of application traffic entering the apparatus 28, e.g., for each newly input application message.
Note that item "R" in the flow diagram corresponds to the method 1100 depicted in Fig. 11, which is executed as a separate process and used for dequeuing each item of the application traffic (e.g., each request or other application message comprised in the application traffic entering the apparatus 28 and/or application instance 14) whenever more than one message is present in any one of the queues of the queue processor 56.
In a more detailed example of the processing in Fig. 9, each input message has an associated priority, which the algorithm uses to determine the characteristics of the servicing of the message (e.g., delay, dropping, reordering). The algorithm operates in two modes. When a new message arrives, the following occurs:
If there are queued messages, the new message is immediately enqueued. The queue is selected based on the message's priority.
If all queues are empty, the recently observed traffic (i.e., the arrival process of messages) is checked to determine whether the current message is within the local aggregate utilization limit of the flow 22 to which it belongs. This check is based on the token bucket algorithm detailed herein. This action enforces a sustainable rate and a maximum burst size on messages arriving immediately after one another. If the message is within the local aggregate utilization limit, it is released directly for processing by the application instance 14. If the message violates the limit, it is placed in the queue matching its priority.
When the first message is enqueued, the time period until that message can be released is calculated. This time period is derived from the rate limit (r_tot,i from the local aggregate utilization limit) and the time elapsed since the last message was admitted. (Accounting for elapsed time prevents enforcing too low a rate.) The algorithm then schedules a sleep until the time period has passed.
When the time period has passed, a request is released from one of the queues. Which queue to release from is determined by queue priority, with higher-priority queues selected with higher probability than lower-priority queues. If the selected queue is empty, a lower-priority queue is selected (this process repeats until a message is found, "wrapping around" to higher priorities if necessary).
If messages remain in any queue, a new sleep is scheduled; otherwise, no action is taken.
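The release step of this two-mode algorithm can be sketched as follows. The selection weights here are illustrative assumptions (the text says only that higher-priority queues are selected with higher probability, without specifying the distribution):

```python
import random
from collections import deque

def select_and_release(queues, weights, rng=random):
    """Sketch of the queue-release step: draw a queue with
    priority-weighted probability; if the drawn queue is empty, try
    lower priorities, wrapping around to higher ones if necessary.

    `queues` is a list of deques ordered from highest to lowest
    priority; `weights` are illustrative selection probabilities."""
    n = len(queues)
    start = rng.choices(range(n), weights=weights)[0]
    for offset in range(n):
        # Walk from the drawn queue toward lower priorities, modulo n,
        # so the search wraps around to the highest priority.
        q = queues[(start + offset) % n]
        if q:
            return q.popleft()
    return None  # all queues empty
```

A caller would invoke this once per expired sleep period, then schedule a new sleep if any queue is still non-empty.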
Fig. 10 shows a method 1000 of updating the session table and token bucket policing parameters, as performed, for example, in the rate calculator 46. One sees processing Steps 1-6, wherein the anti-entropy data structure (aEDS) at a given device 28 is updated based on local demand exchanges between the device 28 and one or more of its peer devices (Step 1), and the updated information is written into the corresponding session table, identifying the application/server together with a timestamp and version information (Steps 2 and 3). Processing continues with updating the local utilization limits of the connected clients 12, which are enforced via the token bucket processes implemented in the traffic monitor 52 (Step 4), waiting for a defined aEDS update interval to expire (Step 5), and signaling/triggering a new update (Step 6).
Such processing includes, for example, regularly or periodically exchanging local demand information between the devices 28. Further, it should be understood that in one or more embodiments the aEDS comprises a data structure containing all relevant demand estimates for the flows 22 at the application instance 14.
One method of estimating the demand of a given flow 22 at any given one of the devices 28 bases the potential demand on the number of client sessions currently open for the flow 22 at the application instance 14. Considering that each additional session represents the possibility of more requests being directed toward that particular application instance 14, the count of sessions open for each flow 22 serves as a rough estimate of the demand that the flow 22 imposes on the application instance 14.
Of course, other metrics can be used to represent demand. For example, the number of sessions, connections, or transactions can be used, or the count or rate supported/allocated by the application instance 14, and/or the observed arrival times of application messages, or the current load of the server on which the application instance 14 resides. The DTC 30 in the device 28 operating at each application instance 14 regularly calculates an appropriate/fair share of the configured application capacity, based on which the local traffic control parameters (e.g., local utilization limits) applied on a per-flow basis at the traffic monitor 52 are set. The configured application capacity can be understood as a configured overall system rate, which is divided locally among the application instances 14 and enforced locally at each application instance 14 by the companion devices 28.
As mentioned above, the capacity calculations (r_x,i and r_tot,i) for each flow 22 of application traffic can be based on the known demand of similar flows 22 at the other application instances 14 and on the SLAs configured for the clients 12. In one approach, the local utilization limit used to police the application traffic of a given flow 22 at any given one of the application instances 14 is a capacity allocation calculated in proportion to the ratio of local demand to total demand, e.g., local utilization limit of the flow = application capacity allocated for such flows * (local demand of the flow / total demand of the flow). Using the earlier notation, the local utilization limit of flow x at application instance i is given by:
r_x,i = R_x * (d_x,i / D_x).
The allocated capacity may be known, for example, from the client identifier associated with the flow 22. Further, as mentioned above, the total demand of a flow 22 can be calculated as the sum of the local demand estimated at the involved application instance 14 and the estimated local demands of all similar flows 22 at the other application instances. Of course, other algorithms may be used to calculate the local utilization limit of each flow. Improved overall fairness can be obtained by allocating minimum and maximum capacity allocations for each flow 22 at an application instance 14.
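The proportional allocation r_x,i = R_x * (d_x,i / D_x), together with the optional per-flow minimum and maximum allocations mentioned above, might be computed as in the following sketch; the function and parameter names are illustrative, not from the patent:

```python
def local_utilization_limit(R_x, d_x_i, demands_all,
                            r_min=0.0, r_max=float('inf')):
    """Sketch of the proportional allocation r_x,i = R_x * (d_x,i / D_x).

    R_x is the capacity allocated for flow x across the whole
    application, d_x_i the local demand estimate at instance i, and
    demands_all the local demand estimates of flow x at all instances
    (including i), so that D_x = sum(demands_all). The optional
    min/max caps reflect the per-flow allocations mentioned in the
    text; handling of D_x == 0 is an assumption made here."""
    D_x = sum(demands_all)
    if D_x == 0:
        return r_min  # no demand anywhere: fall back to the floor
    return min(r_max, max(r_min, R_x * (d_x_i / D_x)))
```

For example, if a flow is allocated 800 msg/sec application-wide and an instance sees 3 of the 8 open sessions, its local limit would come out to 300 msg/sec under this scheme.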
As for the details of exchanging local demand between the devices 28 at different application instances 14, Fig. 12 shows an exemplary three-way reconciliation handshake between two peer devices 28. Assuming messages are exchanged between the device 28 at application instance 14-1 and the device 28 at application instance 14-2, the three messages are SYN, ACK, and ACK2. The SYN message contains all entries from the demand table of application instance 14-1, with ID and version information but without the demand data. The ACK message contains, based on the demand table information at application instance 14-2, the corresponding newer entries and the versions of missing entries.
In other words, the device 28 at application instance 14-2 compares its demand data with the demand data received from the device 28 of application instance 14-1, and provides in the ACK message both updates and requests for any missing entries, i.e., demand data for any flows 22 not accounted for in the information maintained at the device 28 of application instance 14-1. Similarly, the ACK2 message returned to application instance 14-2 contains the information available at application instance 14-1 that was missing at application instance 14-2. In this manner, all information for all flows 22 at all application instances 14 is propagated among all of the devices 28 of the respective application instances 14, without local demand information necessarily being exchanged directly between every possible pairing of devices 28.
Thus, Fig. 12 can be understood as a non-limiting example of the gossip-based anti-entropy protocol employed by the CCs 32 in the devices 28 to share local demands. Preferably, any anti-entropy algorithm selected for exchanging such information uses a three-way reconciliation handshake. Reconciliation is always performed between two peers that know each other. Not all peers need to know each other, but there must be at least one application instance 14 initially known to all application instances, referred to as a "seed."
More specifically, there is at least one device 28 having the local demands of all other devices 28, so that each new application instance/device 28 added in support of the overall application 10 can begin reconciliation handshakes with this seed device 28. This is needed only at startup, so that peers can begin communicating with each other. There should be at least two seeds in each anti-entropy cluster, to avoid a single point of failure. After several messages back and forth within the cluster, all peers know each other.
When a new application instance 14 is placed into a cluster, it begins reconciliation handshakes with known peers. Peer selection is random, which is an effective means of ensuring fast dissemination.
Figs. 13 and 14 illustrate in further detail exemplary anti-entropy algorithms for exchanging local demand information between the devices 28. These algorithms run, for example, in the CC 32 implemented at each such device 28.
In Fig. 13, one sees processing Steps 1-4, denoted generally as "method 1300." Method 1300 comprises a first step in which the algorithm waits for a signal indicating that it is time to exchange local demand information (e.g., by sending information from the previously mentioned aEDS). Step 2 comprises selecting, possibly at random, a peer CC 32 with which to exchange local demand information, and Steps 3 and 4 comprise sending a SYN signal to the selected peer CC 32 and the signaling for sending the aEDS.
In other words, Step 3 sends the aEDS to the random peer, and Step 4 returns the algorithm to the wait state described in Step 1. Obviously, a wait step, e.g., a timer, can be applied between Steps 3 and 4 to control the rate at which SYN requests can be sent.
In Fig. 14, one sees processing Steps 1-11 for handling the various types of received handshake messages. These steps are denoted generally as "method 1400" and proceed from the following: waiting to receive an aEDS signal from a peer (Step 1), and continuing with determining which type of aEDS message has been received (Step 2). For example, the message may be a SYN message containing an aEDS request (i.e., a request for the local demand information at the device 28 of the receiving CC 32). For such a message, the method 1400 comprises processing the received SYN message, determining which local entries to send and which remote entries to request, and sending the resulting ACK message (Steps 3, 6, and 9).
For a received ACK message, the method 1400 comprises processing the ACK message, determining which local entries to return in an ACK2 message, updating the local demand information with the data from the ACK, and then sending the ACK2 message back to the CC 32 from which the received ACK message originated (Steps 4, 7, and 10). Similar processing is shown for receiving an ACK2 message (Steps 5 and 8).
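Under the assumption that each demand table maps a flow ID to a (version, demand) pair, the SYN/ACK/ACK2 reconciliation of methods 1300 and 1400 can be sketched roughly as follows; all names here are illustrative, not from the patent:

```python
def syn(table):
    # SYN: entry IDs and versions only, no demand data.
    return {fid: ver for fid, (ver, _dem) in table.items()}

def ack(table, syn_msg):
    # ACK: entries the receiver has that are newer than (or unknown to)
    # the sender, plus requests for entries the receiver is missing or
    # stale on.
    updates = {fid: entry for fid, entry in table.items()
               if fid not in syn_msg or entry[0] > syn_msg[fid]}
    requests = [fid for fid, ver in syn_msg.items()
                if fid not in table or table[fid][0] < ver]
    return updates, requests

def ack2(table, requests):
    # ACK2: the requested entries, returned to the ACK sender.
    return {fid: table[fid] for fid in requests if fid in table}

def reconcile(a, b):
    """Sketch of the three-way handshake of Fig. 12: `a` plays the
    role of application instance 14-1, `b` of 14-2. Tables map
    flow_id -> (version, demand)."""
    updates, requests = ack(b, syn(a))
    a.update(updates)
    b.update(ack2(a, requests))
```

After one exchange, the two peers hold identical demand tables; repeated random pairings spread entries to all peers, as described above.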
As one of many example applications of the teachings herein, a horizontally scaled application 10 can use the disclosed devices 28, and their functional equivalents, in a telecom provisioning system that allows multiple users to share the same hardware platform while setting guaranteed service level agreements per user. In the telecom industry, many operators have begun dividing their business into smaller organizational units, and wish to share the hardware investment among all such sub-operators. The method 500 and device 28 taught herein for controlling utilization in the horizontally scaled application 10 allow any operator to use the present invention to share a given hardware investment among any number of smaller organizational units. Further, the teachings herein enable the collection of traffic pattern data, for right-sizing such a system for each client.
In another example, the method 500 and device 28 are used in a "transactions per second" or "pay as you go" business model. In such models, a client pays a license fee granting the client a defined maximum number of transactions per second (TPS). Thus, a given client may have any number of clients 12 generating application traffic toward the application 10, and the devices 28 operate with respect to the application instances 14 comprising the application 10, to limit the maximum TPS provided by the application 10 to that client and to balance the distribution of transactions among the application instances 14.
In another example, the teachings herein provide per-user separation of software utilization in a scalable "cloud" service. This is possible because these teachings enable per-user application utilization, identified by user name, application ID, IP address, or any other identifier, to be separated in any given horizontally scaled hardware/software system. By granting per-user availability limits on the user software, the cloud provider prevents any single user from unfairly monopolizing the cloud service. Obviously, this benefit flows regardless of whether the host operating system allows per-user restrictions to be applied.
As another example application of the teachings herein, Fig. 15 provides a simple application of the distributed traffic control taught herein. There are two application instances 14, with four connected clients A, AS, B, and C, where AS denotes a "sticky" client. The object here is to enforce an application message rate on the overall application, where the overall application is represented by the two application instances 14 running on two different servers.
The configured rate per application instance is 1000 application messages per second (msg/sec). This value yields 2000 msg/sec for the whole two-node cluster. The spare capacity per virtual server for redundancy scenarios is 200 msg/sec, totaling 400. The application provides the following rates to the application clients 12: 600 msg/sec to the application clients of user A, 800 msg/sec to the application clients of user B, and 600 msg/sec to the application clients of user C. Other attributes of the illustrated hypothetical scenario include the fact that the 200 msg/sec spare capacity is unallocated, i.e., it is used for redundancy in the case of a fault.
The subscriber client AS (see "p1" in the diagram) starts a synchronization cycle later on, and it will be appreciated that AS represents multiple sticky client connections. Application client C increases its load (see "p3" in the diagram) from 400 msg/sec to 600 msg/sec on the second synchronization cycle. Here, each synchronization cycle is understood as a local recalculation of demand information, based on inputs from the locally known demand information at the application instance 14 and the remote demand information.
In another example, a horizontally scaled application 10 is used by a larger telecom operator organization. The organization is divided into smaller parts, e.g., 19 different organizational units. The distributed processing system 16 comprises ten different servers running the application, which is horizontally scaled across these servers via load balancing of incoming requests. The total provisioning capacity of the system is 10*400 = 4000 provisioning requests/second. Each organizational subunit obtains a configurable share of the 4000 requests/second capacity, where the share is based on how many subscribers each subunit is responsible for. The provisioning requests create, modify, or delete mobile subscriptions in a Home Location Register (HLR) subscriber database.
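The configurable capacity split in this HLR provisioning example can be illustrated with a short sketch; the subunit names and subscriber counts below are invented for illustration, only the 10*400 = 4000 requests/second total comes from the text:

```python
def subunit_shares(total_capacity, subscribers):
    """Sketch of the configurable capacity split: each organizational
    subunit receives a share of the total provisioning capacity
    proportional to the number of subscribers it is responsible for."""
    total_subs = sum(subscribers.values())
    return {unit: total_capacity * n / total_subs
            for unit, n in subscribers.items()}
```

With three hypothetical subunits holding 50%, 30%, and 20% of the subscribers, the 4000 requests/second would split into 2000, 1200, and 800 requests/second, respectively.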
The ten servers (each running an application instance 14 of the provisioning software application 10) form a "cluster" fed by the load balancer 20. Each application instance 14 in the cluster receives traffic from the load balancer, which distributes the incoming provisioning service requests among the ten servers. Because of the sophisticated utilization limiting provided by the devices 28 paired with respective ones of the application instances 14, the load balancer 20 can be very simple, e.g., round-robin distribution of service requests.
Two different protocols are used in this example, namely TELNET and HTTP. The longer session-based/long-lived TELNET connections make it harder to distribute the overall load evenly over the cluster. HTTP is a stateless protocol that the load balancer can distribute on a per-request basis. The provisioning capacity can be increased by 400 requests/second per server simply by adding new servers. However, the mix of these protocols makes the traffic load unevenly distributed. Advantageously, because the local demand estimation performed by the devices 28 accounts for sticky connections, load balancing is achieved even with a very simple traffic distribution scheme in the cluster.
In another example, the application instances 14 comprise 20 heterogeneous load balancers distributing load in a large network of 16000 virtualized and non-virtualized servers. In this example, the distributed method taught herein divides the load balancer capacity into different flows based on the application ID of the application messages. The object in this case is to bill users of the load balancers based on the load balancer capacity placed at their disposal, in a hosted-server provider context, rather than billing users for dedicated hardware costs.
Obviously, modifications and other embodiments of the disclosed invention will come to mind to one skilled in the art having the benefit of the teachings presented in the foregoing description and the associated drawings. Therefore, it is to be understood that the invention is not limited to the specific embodiments disclosed, and that modifications and other embodiments are intended to be included within the scope of this disclosure. Although specific terms may be employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (20)

1. A method (500) of controlling utilization of a software application (10) by individual clients (12), wherein said application is implemented as a plurality of peer application instances (14) that receive application traffic from any one or more of a plurality of clients (12), said method (500) comprising, at each application instance (14):
classifying (502) application traffic incoming to said application instance (14) into flows (22) corresponding to different ones of said clients (12) and/or different types of application traffic;
estimating (504) a local demand of each flow (22) with respect to said application instance (14);
exchanging (506) local demand information with one or more other ones of said application instances (14), including sending the local demands estimated for said flows (22) at said application instance (14), and receiving similarly estimated local demands for all like flows (22) at other ones of said application instances (14);
determining (508) a total demand value of each flow (22) with respect to said application (10) based on the exchanged local demand information;
calculating (510) a local utilization limit of each flow (22) as a function of the total demand value determined for said flow (22);
marking (512) the application traffic in each flow (22) as policy-conformant or policy non-conformant according to whether the local utilization limit of said flow (22) is exceeded;
determining (514) whether an aggregation of the application traffic of all flows (22) at said application instance (14) exceeds a local aggregate utilization limit; and
controlling buffering of the aggregate application traffic going to said application instance (14) on a per-flow and/or aggregate-flow basis, based on whether said local aggregate utilization limit is exceeded and in distinction between policy-conformant and policy non-conformant traffic.
2. The method (500) of claim 1, wherein exchanging said local demand information comprises communicating with the one or more other ones of said application instances (14) via a gossip-based anti-entropy protocol that propagates the local demands estimated at any one of said application instances (14) to all other ones of said application instances (14).
3. The method (500) of claim 1 or 2, wherein estimating the local demand of each flow (22) comprises at least one of: counting a number of protocol sessions active for said flow (22) at said application instance (14); estimating an expected flow rate of said flow (22) based on whether any new application traffic has been received in said flow (22) within a defined interval; and estimating an expected flow rate of said flow (22) based on a measured arrival rate of the application traffic in said flow (22).
4. The method (500) of any of claims 1 to 3, wherein determining the total demand value of said flow (22) comprises summing the local demand estimated for said flow (22) at said application instance (14) with the local demands estimated for all like flows (22) at other ones of said application instances (14), as learned through the exchange of said local demand information.
5. The method (500) of any of claims 1 to 4, wherein calculating the local utilization limit of each flow (22) comprises calculating a local flow rate limit of said flow (22).
6. The method (500) of claim 5, wherein calculating the local flow rate limit of said flow (22) comprises calculating said local flow rate limit from a total maximum flow rate limit known for said flow (22) and all of its like flows (22) with respect to said application (10), by scaling said total maximum flow rate limit by a scaling factor determined from the ratio of the local demand of said flow (22) to the total demand value of said flow (22).
7. The method (500) of claim 5, wherein calculating the local utilization limit of each flow (22) further comprises calculating a local burst size limit for the application traffic in said flow (22).
8. The method (500) of any of the preceding claims, wherein controlling buffering of the aggregate application traffic going to said application instance (14) comprises preferentially processing policy-conformant traffic relative to policy non-conformant traffic, including emptying one or more delay buffers (58, 60) of said application instance (14) according to a prioritization scheme that generally imposes shorter buffering delays on said policy-conformant traffic as compared to said policy non-conformant traffic.
9. The method (500) of claim 8, further comprising further prioritizing the aggregate application traffic in said one or more delay buffers according to one or more service parameters defined in respective service level agreements, SLAs, associated with each of said flows (22), so as to meet one or more minimum application traffic parameters defined for said flows (22) in one or more of said SLAs.
10. The method (500) of any of the preceding claims, wherein said marking step further includes a policing step comprising throttling or selectively admitting application messages in the application traffic in each flow (22), as needed to remain conformant with a guaranteed maximum service level in a respective service level agreement, SLA, known for said flow (22).
11. A device (28) for controlling utilization of a software application (10) by individual clients (12), wherein said application (10) is implemented as a plurality of peer application instances (14) that receive application traffic from any one or more of a plurality of clients (12), said device (28) being implemented at each application instance (14) and comprising:
a distributed traffic controller (30) configured to classify application traffic incoming to said application instance (14) into flows (22) corresponding to different ones of said clients (12) and/or different types of application traffic, and to estimate a local demand of each flow (22) with respect to said application instance (14); and
a communication controller (32) configured to exchange local demand information with one or more other ones of said application instances (14), including sending the local demands estimated at said application instance (14) for said flows (22) at said application instance (14), and receiving similarly estimated local demands for all like flows (22) at other application instances (14); and
wherein said distributed traffic controller (30) is further configured to:
determine a total demand value of each flow (22) with respect to said application (10) based on the exchanged local demand information;
calculate a local utilization limit of each flow (22) as a function of the total demand value determined for said flow (22);
mark the application traffic in each flow (22) as policy-conformant or policy non-conformant according to whether the local utilization limit of said flow (22) is exceeded;
determine whether an aggregation of the application traffic of all flows (22) at said application instance (14) exceeds a local aggregate utilization limit; and
control buffering of the aggregate application traffic going to said application instance (14) on a per-flow and/or aggregate-flow basis, based on whether said local aggregate utilization limit is exceeded and in distinction between policy-conformant and policy non-conformant traffic.
12. The device (28) of claim 11, wherein said distributed traffic controller (30) is configured to cooperate with said communication controller (32) to exchange said local demand information by communicating with the one or more other ones of said application instances (14) via a gossip-based anti-entropy protocol that propagates the local demands estimated at any one of said application instances (14) to all other ones of said application instances (14).
13. The device (28) of claim 11 or 12, wherein said distributed traffic controller (30) is configured to estimate the local demand of each flow (22) by at least one of: counting a number of protocol sessions active for said flow (22) at said application instance (14); estimating an expected flow rate of said flow (22) based on whether any new application traffic has been received in said flow (22) within a defined interval; and estimating an expected flow rate of said flow (22) based on a measured arrival rate of the application traffic in said flow (22).
14. The device (28) of any of claims 11 to 13, wherein said distributed traffic controller (30) is configured to determine the total demand value of each flow (22) by summing the local demand estimated for said flow (22) at said application instance (14) with the local demands estimated for all like flows (22) at other application instances (14), as learned by exchanging said local demand information.
15. The device (28) of any of claims 11 to 14, wherein said distributed traffic controller (30) is configured to calculate the local utilization limit of each flow (22) by calculating a local flow rate limit of said flow (22).
16. The device (28) of claim 15, wherein said distributed traffic controller (30) is configured to calculate the local flow rate limit of said flow (22) by calculating said local flow rate limit from a total maximum flow rate limit known for said flow (22) with respect to said application, scaled by a scaling factor determined from the ratio of the local demand of said flow (22) to the total demand value of said flow (22).
17. The device (28) of claim 15, wherein said distributed traffic controller (30) is further configured to calculate the local utilization limit of each flow (22) by calculating a local burst size limit of said flow (22).
18. The device (28) of any of claims 11 to 17, wherein said distributed traffic controller (30) is configured to control buffering of the aggregate application traffic going to said application instance (14) by buffering the aggregate application traffic in one or more delay buffers (58, 60), and emptying the one or more delay buffers (58, 60) of said application instance (14) according to a prioritization scheme that generally imposes shorter buffering delays on said policy-conformant traffic as compared to said policy non-conformant traffic.
19. The device (28) of claim 18, wherein said distributed traffic controller (30) is further configured to further prioritize any aggregate application traffic in said one or more delay buffers (58, 60) according to one or more service parameters defined in respective service level agreements, SLAs, associated with each of said flows (22) at said application instance, so as to meet one or more minimum application traffic parameters defined in one or more of said SLAs.
20. The device (28) of any of claims 11 to 19, wherein, in conjunction with marking the application traffic in each flow (22) as policy-conformant or policy non-conformant, said distributed traffic controller (30) is further configured to throttle or selectively admit application messages in each flow (22), as needed to remain conformant with a guaranteed maximum service level in a respective service level agreement, SLA, known for said flow (22).
CN201380060597.2A 2012-11-21 2013-09-10 Method and apparatus for controlling utilization in a horizontally scaled software application Active CN104798356B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/683,549 2012-11-21
US13/683,549 US9112809B2 (en) 2012-11-21 2012-11-21 Method and apparatus for controlling utilization in a horizontally scaled software application
PCT/SE2013/051048 WO2014081370A1 (en) 2012-11-21 2013-09-10 Method and apparatus for controlling utilization in a horizontally scaled software application

Publications (2)

Publication Number Publication Date
CN104798356A true CN104798356A (en) 2015-07-22
CN104798356B CN104798356B (en) 2018-02-16

Family

ID=49382562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380060597.2A Active CN104798356B (en) Method and apparatus for controlling utilization in a horizontally scaled software application

Country Status (7)

Country Link
US (1) US9112809B2 (en)
EP (1) EP2923479B1 (en)
CN (1) CN104798356B (en)
BR (1) BR112015011655A2 (en)
MX (1) MX340418B (en)
MY (1) MY172784A (en)
WO (1) WO2014081370A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104519021B (en) * 2013-09-29 2018-07-20 新华三技术有限公司 The method and device for preventing malicious traffic stream from attacking
EP2919510A1 (en) * 2014-03-10 2015-09-16 Telefonaktiebolaget L M Ericsson (publ) Technique for controlling bandwidth usage of an application using a radio access bearer on a transport network
JP6237397B2 (en) * 2014-03-27 2017-11-29 富士通株式会社 Control device and communication method
US10680957B2 (en) 2014-05-28 2020-06-09 Cavium International Method and apparatus for analytics in a network switch
US11838851B1 (en) * 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
EP3016345A1 (en) * 2014-10-28 2016-05-04 Alcatel Lucent Method and device for controlling a content delivery device
CN107111520A (en) 2014-11-11 2017-08-29 统有限责任两合公司 Method and system for the real time resources consumption control in DCE
US9871733B2 (en) * 2014-11-13 2018-01-16 Cavium, Inc. Policer architecture
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US9654483B1 (en) * 2014-12-23 2017-05-16 Amazon Technologies, Inc. Network communication rate limiter
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US9959152B2 (en) * 2015-02-27 2018-05-01 Matrixx Software, Inc. Adaptive quota management system
US10463957B2 (en) * 2015-03-17 2019-11-05 Amazon Technologies, Inc. Content deployment, scaling, and telemetry
US20160285957A1 (en) * 2015-03-26 2016-09-29 Avaya Inc. Server cluster profile definition in a distributed processing network
US10110458B2 (en) * 2015-10-27 2018-10-23 Nec Corporation VM-to-VM traffic estimation in multi-tenant data centers
US10552768B2 (en) * 2016-04-26 2020-02-04 Uber Technologies, Inc. Flexible departure time for trip requests
KR102424356B1 (en) * 2017-11-06 2022-07-22 삼성전자주식회사 Method, Apparatus and System for Controlling QoS of Application
BR112020019697A2 (en) * 2018-03-27 2021-01-05 Netflix, Inc. TECHNIQUES FOR DESIGNED ANTIENTROPY REPAIR
CN115668881A (en) * 2020-05-19 2023-01-31 起元技术有限责任公司 Optimization of communications in a distributed computing network
CN112394960B (en) * 2020-11-23 2024-06-07 中国农业银行股份有限公司 Control method and device for service flow, electronic equipment and computer storage medium
CN114884889B (en) * 2022-07-11 2022-10-14 三未信安科技股份有限公司 Combined current limiting method for distributed service

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418494B2 (en) * 2002-07-25 2008-08-26 Intellectual Ventures Holding 40 Llc Method and system for background replication of data objects
CN101491025A (en) * 2006-07-10 2009-07-22 国际商业机器公司 Method for distributed traffic shaping across a cluster
CN101496005A (en) * 2005-12-29 2009-07-29 亚马逊科技公司 Distributed replica storage system with web services interface
CN102498696A (en) * 2009-07-17 2012-06-13 英国电讯有限公司 Usage policing in data networks

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4009344A (en) * 1974-12-30 1977-02-22 International Business Machines Corporation Inter-related switching, activity compression and demand assignment
US7286474B2 (en) * 2002-07-12 2007-10-23 Avaya Technology Corp. Method and apparatus for performing admission control in a communication network
US7305431B2 (en) * 2002-09-30 2007-12-04 International Business Machines Corporation Automatic enforcement of service-level agreements for providing services over a network
US7716180B2 (en) * 2005-12-29 2010-05-11 Amazon Technologies, Inc. Distributed storage system with web services client interface
JP5023899B2 (en) * 2007-09-03 2012-09-12 日本電気株式会社 Stream data control system, stream data control method, and stream data control program
EP2138972A1 (en) * 2008-06-24 2009-12-30 Nicolas Duchamp Method for automatically classifying money transfers made on a bank account
US8606899B1 (en) * 2012-05-29 2013-12-10 Sansay, Inc. Systems and methods for dynamic session license control

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418494B2 (en) * 2002-07-25 2008-08-26 Intellectual Ventures Holding 40 Llc Method and system for background replication of data objects
CN101496005A (en) * 2005-12-29 2009-07-29 亚马逊科技公司 Distributed replica storage system with web services interface
CN101491025A (en) * 2006-07-10 2009-07-22 国际商业机器公司 Method for distributed traffic shaping across a cluster
CN102498696A (en) * 2009-07-17 2012-06-13 英国电讯有限公司 Usage policing in data networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROBBERT VAN RENESSE et al.: "Efficient Reconciliation and Flow Control for Anti-Entropy Protocols", ACM 2008, ISBN: 978-1-60558-296-2 *

Also Published As

Publication number Publication date
EP2923479A1 (en) 2015-09-30
MY172784A (en) 2019-12-12
CN104798356B (en) 2018-02-16
MX2015006471A (en) 2015-08-14
WO2014081370A1 (en) 2014-05-30
MX340418B (en) 2016-07-08
BR112015011655A2 (en) 2017-07-11
EP2923479B1 (en) 2017-06-28
US9112809B2 (en) 2015-08-18
US20140143300A1 (en) 2014-05-22

Similar Documents

Publication Publication Date Title
CN104798356A (en) Method and apparatus for controlling utilization in a horizontally scaled software application
CN109618002B (en) Micro-service gateway optimization method, device and storage medium
CN107733689A Priority-based dynamic weighted round-robin scheduling method
CN101009655B (en) Traffic scheduling method and device
CN107995045B (en) Adaptive service function chain path selection method and system for network function virtualization
Liu et al. eBA: Efficient bandwidth guarantee under traffic variability in datacenters
US20030200317A1 (en) Method and system for dynamically allocating bandwidth to a plurality of network elements
Ayoubi et al. MINTED: Multicast virtual network embedding in cloud data centers with delay constraints
CN103457881B System for pass-through forwarding of execution data
RU2643666C2 Method and device for controlling virtual output queue authorization, and computer storage media
US11929911B2 (en) Shaping outgoing traffic of network packets in a network management system
Wang et al. Optimizing network slice dimensioning via resource pricing
CN116389491B (en) Cloud edge computing power resource self-adaptive computing system
JP2016208195A (en) Packet relay device, copy function distribution method in packet relay device
Liu et al. Revenue maximizing online service function chain deployment in multi-tier computing network
Bruschi et al. Move with me: Scalably keeping virtual objects close to users on the move
CN109862591A QoS-based air-interface slice bandwidth borrowing and cache sharing method
WO2024055780A1 (en) Computing power network information announcement and routing decision-making method and apparatus, and medium
Buzhin et al. Evaluation of Telecommunication Equipment Delays in Software-Defined Networks
Guan et al. Virtual network embedding supporting user mobility in 5G metro/access networks
Chakravarthy et al. Software-defined network assisted packet scheduling method for load balancing in mobile user concentrated cloud
KR20120055947A Method and apparatus for providing subscriber-aware per flow
Narman et al. DDSS: Dynamic dedicated servers scheduling for multi priority level classes in cloud computing
Bonald et al. The multi-source model for dimensioning data networks
Wang et al. Efficient and fair: Information-agnostic online coflow scheduling by combining limited multiplexing with drl

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant