WO2014186756A1 - Input-output prioritization for database workload - Google Patents

Input-output prioritization for database workload

Info

Publication number
WO2014186756A1
WO2014186756A1 (PCT/US2014/038477)
Authority
WO
WIPO (PCT)
Prior art keywords
capacity
request
token
bucket
tokens
Prior art date
Application number
PCT/US2014/038477
Other languages
French (fr)
Inventor
David Craig YANACEK
Bjorn Patrick SWIFT
Wei Xiao
Kiran-Kumar Muniswamy-Reddy
Miguel Mascarenhas FILIPE
Yijun Lu
Original Assignee
Amazon Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amazon Technologies, Inc. filed Critical Amazon Technologies, Inc.
Priority to CN201480034626.2A priority Critical patent/CN105431815B/en
Priority to JP2016514144A priority patent/JP6584388B2/en
Priority to EP14798257.3A priority patent/EP2997460A4/en
Priority to CA2912691A priority patent/CA2912691C/en
Publication of WO2014186756A1 publication Critical patent/WO2014186756A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/284Relational databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases

Definitions

  • DBMS: Database management systems
  • A DBMS may be operated by a third-party provider that hosts the DBMS on servers in a datacenter and provides the DBMS as a service to various entities such as corporations, universities, government agencies, and other types of customers. In order to host the DBMS and provide the service to the various entities, the provider typically maintains significant resources in hardware, software, and infrastructure.
  • The provider may incur various ongoing costs related to operating the DBMS, such as power, maintenance costs, and the salaries of technical personnel. Accordingly, in order to provide a responsive service to the various entities, the provider may attempt to maximize the capacity and utilization of the hardware and other resources installed at its data centers.
  • FIG. 1 is a block diagram depicting a database management system exposed to end users through a web service and limiting capacity consumption through the use of a token allocation and consumption mechanism.
  • FIG. 2 is a flowchart depicting enforcement of a provisioned capacity using a token allocation and consumption mechanism.
  • FIG. 3A is a diagram depicting allocation of tokens to a token bucket and multiple request types consuming tokens from the token bucket.
  • FIG. 3B is a diagram depicting allocating tokens to multiple token buckets that are categorized according to request type and withdrawing tokens from the bucket based on processing a corresponding request type.
  • FIG. 4 is a diagram depicting dividing request types into a plurality of request classes and associating the classes with admittance policies that may control which token bucket determines admittance and which bucket or buckets tokens are withdrawn from.
  • FIG. 5 is a flowchart depicting an embodiment for obtaining and applying an admittance policy based on a classification of a request.
  • FIG. 6A is a diagram depicting dividing request types into a plurality of request classes and associating the classes with admittance policies that govern request admittance and token withdrawal from a hierarchical arrangement of token buckets.
  • FIG. 6B is a diagram depicting deducting tokens from a parent bucket in a hierarchical arrangement of token buckets.
  • FIG. 6C is a diagram depicting deducting tokens from a child bucket in a hierarchical arrangement of token buckets.
  • FIG. 6D is a diagram depicting deducting, from a parent bucket, more tokens than are available to the parent.
  • FIG. 6F is a diagram depicting a hierarchical arrangement of token buckets where two or more classes of equal priority share a parent bucket.
  • FIG. 7A is a diagram depicting an illustrative example of a user interface for obtaining customer-provided information pertaining to request classes, token buckets, and admittance policies.
  • FIG. 7B is a diagram depicting an illustrative example of a user interface for obtaining customer-provided information pertaining to request classes, token buckets, and admittance policies, adapted for an approach that utilizes a hierarchical token-bucket model.
  • FIG. 8 is a flowchart depicting steps for receiving capacity allocation and admittance policy information in conjunction with a customer's definition of a new table, and using the information to create token buckets and apply admittance policy on a per-partition basis.
  • FIG. 9 is a block diagram depicting an embodiment of a computing environment in which aspects of the present disclosure may be practiced.
  • A provider may host a DBMS in a datacenter and provide access to the DBMS as a service to various entities. In that regard, the DBMS may be exposed through a web service, a web application, a remote procedure call and so forth. These mechanisms and others may be referred to herein as services. In some embodiments, a DBMS may provide a front-end that exposes two or more of the services to end users of the entities or customers.
  • End users may send requests that include various operations and queries to be performed on the DBMS through the use of application programming interface ("API") calls to the service.
  • A request may comprise, for example, an invocation of an API on behalf of a customer, as well as an invocation of an operation on a DBMS on behalf of a customer.
  • The provider may also require payment from a customer in exchange for the use of this capacity. However, the profitability of the endeavor may depend on a customer paying an amount that is proportional to the capacity consumed on its behalf. A limit on capacity consumption may be imposed on a customer and enforced through various techniques such as throttling, queuing, and so forth. When usage exceeds the amount provisioned to the customer, requests for services on behalf of a customer may be rejected or suspended.
  • An end user may invoke operations on the DBMS by sending a request including an identifier of the operation and one or more parameters.
  • Any number of operations may be identified, and may include operations such as reading or writing data, performing queries on data, and various data definition and schema-related instructions such as creating and deleting tables.
  • The parameters that may be included with the request may be of any type, such as textual values, enumerated values, integers and so forth. The particular combination of parameters will vary based on the type of operation being invoked.
  • A DBMS is a software and hardware system for maintaining an organized collection of data. In a DBMS, data is typically organized by associations between key values and additional data. The nature of the associations may be based on real-world relationships that exist in the collection of data, or it may be arbitrary. Various operations may be performed by a DBMS, including data definition, queries, updates, and administration. Some DBMSs provide for interaction with the database using query languages such as structured query language ("SQL").
  • A DBMS may comprise various architectural components, such as a storage engine that acts to store data on one or more storage devices such as solid-state drives.
  • The provider may host any number of DBMSs within its datacenters.
  • DBMSs may operate on any number of computing nodes and may be associated with various storage devices and connected using a wide variety of networking equipment and topologies. Moreover, a variety of DBMSs may be hosted, including relational databases, object-oriented databases, no structured query language ("NoSQL") databases and so forth.
  • A limit on capacity consumption may be imposed on a customer. In embodiments, a customer may be prevented from consuming capacity above a set level.
  • The customer's level of capacity consumption may be limited through various estimation and measurement techniques. Because of the wide variety of computing resources that may be involved in processing a request, capacity consumption may be difficult to determine.
  • Quantities such as the amount of data sent to or received from a client application may be employed to estimate the capacity consumed by processing a certain request.
  • A query request may scan a database table in order to determine rows that conform to the constraints specified in the query.
  • The number of rows returned may be a proxy for capacity consumption.
  • The query may have been limited in scope and, therefore, is likely to have consumed less capacity than a query that resulted in many rows of data being returned. More details of example embodiments are described below in connection with the figures.
  • A provider hosts one or more DBMSs within a data center and provides access to the various DBMSs through a web service.
  • FIG. 1 depicts an environment for hosting provisioned web services and databases within a data center.
  • End-user applications 102 may be connected to elements within the data center by communications network 103, gateway 104, and router 106. Those of ordinary skill in the art will recognize that this network configuration is one of many possible configurations that may be incorporated into embodiments of the present disclosure.
  • Web service 110 may provide various APIs to provide functions related to the operation of database 116. In some cases, the APIs may serve as light-weight wrappers built on top of more complex database interfaces or protocols.
  • The depicted API might provide access to a query function of database 116 through use of an interface adhering to representational state transfer ("REST") principles.
  • End-user applications 102 may then invoke the API, using comparatively simple REST semantics, to query a key-value database without needing to understand the technical details of database 116.
  • Web service 110 and database 116 may operate on a variety of platforms, such as one or more computers, virtual machines or other forms of computing services, which may collectively be referred to as computing nodes. Operation of these nodes, as well as associated storage devices, network infrastructure and so forth, involves various costs. These costs include those related to hardware and software acquisition, maintenance, power, personnel and so forth. The costs may also include factors such as opportunity cost incurred when consumption of resources by one customer prevents utilization of the service by another.
  • Operations performed by web service 110 and database 116 on behalf of a customer may be correlated to consumption of an amount of capacity on a given computing node.
  • The correlation may allow a hosting service provider to calculate the costs incurred by providing the service. For example, if a given customer invokes a web service that utilizes one hundred percent of a computing node's capacity over an entire day, the cost of providing the service may be the sum of acquisition, maintenance and operating costs for the computing node prorated for a twenty-four hour period.
  • Acceptance policy 108 involves determining whether or not a request should be processed. In general terms, a goal of acceptance policy 108 may be to ensure that requests performed on behalf of a customer are not permitted to exceed a provisioned amount of capacity.
  • A customer may be provisioned twenty-five percent of a computing node's capacity. Acceptance policy 108 may then be set to limit that customer's average consumption of capacity to no more than twenty-five percent. In some embodiments, peak usage may be permitted to rise above that amount for a limited period of time.
  • Embodiments may employ a token bucket model to limit capacity consumption.
  • A token bucket may be seen conceptually as a collection of tokens, each of which represents a unit of work that the owner of the bucket is authorized to perform. Tokens may be added to a bucket at an accumulation rate, which may for example be based on a level of service. When work is performed, a number of tokens equivalent to the amount of work performed is withdrawn from the bucket. If no tokens are available, the work may not be performed. Using this approach, over time the amount of work that may be performed is limited by the rate at which tokens are added to the bucket.
  • a token may be considered to represent a unit of cost related to performing the database operation.
  • The cost of a request to perform an operation on database 116 might correspond to the size of data returned when the operation is executed.
  • The cost of performing the operation, as measured in tokens, may be determined by dividing the size of the data by a size-per-token value.
  • A requested operation may be deemed to cost at least one token, but the full cost may not be known until after the operation has actually been performed.
  • A request may be admitted when at least one token is available.
  • The request may then be processed and the total cost of the request determined based on one or more measured quantities.
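  • As a rough illustration of this admit-then-settle pattern, the sketch below (in Python) admits a request whenever at least one token is present and settles the full cost once the size of the returned data is known. The BYTES_PER_TOKEN value and the function names are assumptions made for the example, not values taken from the disclosure.

```python
import math

BYTES_PER_TOKEN = 1024      # assumed size-per-token value
tokens_available = 100.0    # assumed current token balance for one bucket


def token_cost(bytes_returned: int) -> int:
    """At least one token, otherwise the data size divided by the size per token."""
    return max(1, math.ceil(bytes_returned / BYTES_PER_TOKEN))


def handle_request(execute):
    """Admit when at least one token is present, then settle the measured cost."""
    global tokens_available
    if tokens_available < 1:
        raise RuntimeError("request rejected: insufficient capacity")
    tokens_available -= 1                               # provisional charge at admission
    result, bytes_returned = execute()                  # process the request, measure its output
    tokens_available -= token_cost(bytes_returned) - 1  # settle once the real cost is known
    return result
```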
  • Tokens may accumulate in a virtual container such as token bucket 112.
  • The token bucket may be considered to represent an association between units of permitted capacity consumption, represented by tokens, and an entity such as a customer or service.
  • Token bucket 112 may be associated with all operations performed on that table.
  • Other embodiments might associate a token bucket with a table partition, customer, computing node and so forth.
  • Token accumulation policy 114 may govern the addition of tokens into token bucket 112. In an embodiment, accumulation policy 114 comprises a rate of addition and a maximum token capacity. For example, a policy might indicate that a given bucket should accumulate tokens at a rate of twenty per second but that no more than one hundred tokens should be allowed to accumulate.
  • Tokens and token buckets may be represented by various structures.
  • Acceptance policy 108, token bucket 112 and token accumulation policy 114 may be implemented by a module of functionality, such as a software library, executable program and so forth.
  • The module may represent tokens and token accumulation by recording a current number of tokens, a maximum number of tokens, a rate of accumulation and a last token addition time. When determining whether or not to admit or reject the request, the module may first update the current number of tokens based on the rate of accumulation, the last time tokens were added and the current time.
  • The number of new tokens accumulated may be determined by multiplying the rate of accumulation by the amount of time that elapsed since the last update to the count of available tokens.
  • This value may be added to the count of currently available tokens, but not allowed to exceed the maximum number of tokens allowed in the bucket.
  • Other techniques for maintaining the token bucket, such as those based on a periodically invoked routine, are also possible.
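  • The record described in the preceding items (current count, maximum, accumulation rate, last addition time) can be sketched roughly as follows. This is a minimal Python illustration of a lazily refilled bucket using the stated twenty-per-second, one-hundred-maximum example; the class and method names are invented here and are not taken from the disclosure.

```python
import time


class TokenBucket:
    """Token bucket refilled lazily from an accumulation rate and a last-update time."""

    def __init__(self, rate_per_sec: float, max_tokens: float):
        self.rate = rate_per_sec             # accumulation rate
        self.max_tokens = max_tokens         # maximum allowed to accumulate
        self.tokens = max_tokens             # current count (may go negative)
        self.last_update = time.monotonic()  # last token addition time

    def _refill(self) -> None:
        now = time.monotonic()
        # New tokens = rate of accumulation * elapsed time, capped at the maximum.
        self.tokens = min(self.max_tokens,
                          self.tokens + self.rate * (now - self.last_update))
        self.last_update = now

    def available(self) -> float:
        self._refill()
        return self.tokens

    def deduct(self, amount: float) -> None:
        """Withdraw tokens; the balance is allowed to fall below zero."""
        self._refill()
        self.tokens -= amount


# Example accumulation policy: twenty tokens per second, at most one hundred accumulated.
bucket = TokenBucket(rate_per_sec=20, max_tokens=100)
if bucket.available() >= 1:
    bucket.deduct(1)
```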
  • FIG. 2 depicts an embodiment of applying acceptance and token accumulation policies. Although depicted as a sequence of operations starting at operation 200 and ending with operation 216, those of ordinary skill in the art will appreciate that the operations depicted are intended to be illustrative of an embodiment and that at least some of the depicted operations may be altered, omitted, reordered or performed in parallel.
  • A request to perform a service is received.
  • The request might comprise a database query.
  • The cost of the database query may be determined based upon the amount of data returned by performing the query, possibly measured by the number of bytes of data returned to the end user.
  • Operation 204 depicts updating the count of available tokens. In an embodiment, the number of available tokens may be adjusted based on a last update time, a token accumulation rate and the current time. The maximum number of available tokens may also be limited.
  • Moreover, because various embodiments may allow the current token count to fall below zero, it may be the case that no tokens are available for deduction.
  • Operation 206 depicts determining if a token is available for deduction. Some embodiments may consider one token to be sufficient to admit the request, while others may attempt to estimate the number of tokens, i.e., the capacity, processing the request will consume. As used herein, the terms sufficient tokens or sufficient capacity may refer to one token, a fixed number of tokens, a number of tokens based on an estimate of capacity that will be utilized by processing a request and so forth. If insufficient tokens are available, the request is rejected as depicted by operation 208. A client application and/or the customer of the hosting service may be notified of the rejection.
  • The embodiment depicted by FIG. 2 may allow the count of tokens currently in a bucket to fall below zero.
  • A negative token balance may correspond to a blackout period during which no requests that depend upon that token bucket may be admitted.
  • The length of the blackout period can depend upon the current negative token count and the rate of token accumulation. For example, if the token accumulation rate is ten per second, and the number of available tokens is -100, the length of the blackout period may be ten seconds.
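  • Under those numbers, the blackout length is simply the token deficit divided by the accumulation rate; the helper below restates that arithmetic (the function name is illustrative).

```python
def blackout_seconds(current_tokens: float, accumulation_rate: float) -> float:
    """Seconds until a negative balance returns to zero at the given refill rate."""
    if current_tokens >= 0:
        return 0.0
    return -current_tokens / accumulation_rate


# Example from the text: -100 tokens refilling at 10 tokens per second => 10 seconds.
print(blackout_seconds(-100, 10))  # 10.0
```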
  • A query request 304 may require little data outflow and cause a comparatively small amount of token outflow 312.
  • A previously executed maintenance task may have caused a blackout period, and the query requests may be rejected despite their low cost. It may be advantageous to avoid such situations.
  • FIG. 3B depicts dividing provisioned capacity into two separate token buckets 314 and 316. Token inflow is divided equally as depicted by token inflow 308a and token inflow 308b.
  • The cost of performing the long-running maintenance tasks and the queries remains constant, and thus token outflow rates 310 and 312 remain unchanged.
  • This arrangement prevents query requests from being blocked by executing a long-running maintenance task. However, it may be the case that the maintenance task is rarely called. If so, much of the capacity reserved to the long-running task may be wasted.
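  • One way to picture the split shown in FIG. 3B is to feed each request type its own bucket with a share of the provisioned inflow. The sketch below reuses the illustrative TokenBucket class from above; the total rate and the 50/50 division are assumptions that mirror the figure's equal split rather than values from the disclosure.

```python
# Two buckets fed by equal shares of an assumed provisioned rate, as in FIG. 3B.
PROVISIONED_RATE = 100  # assumed total tokens per second for the table

buckets = {
    "maintenance": TokenBucket(rate_per_sec=PROVISIONED_RATE / 2, max_tokens=50),
    "query":       TokenBucket(rate_per_sec=PROVISIONED_RATE / 2, max_tokens=50),
}


def admit(request_type: str) -> bool:
    """A long-running maintenance task can no longer black out inexpensive queries."""
    return buckets[request_type].available() >= 1
```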
  • Request admittance may be determined based on more than one bucket, and tokens may be deductible from more than one bucket. For example, one embodiment might divide request types into high, medium and low priorities. A high-priority request might be able to withdraw from any bucket, the medium-priority request from two of the three buckets, and the low-priority request from only one. Categories of similar request types may be described as classes. These classes may then be associated with an admittance policy.
  • The admittance policy may comprise an indication of which token buckets should be used to determine whether or not a request should be admitted, and a technique or approach for withdrawing tokens once the total cost of the request is known.
  • FIG. 4 depicts an embodiment for allocating capacity based on request classes and admittance policies.
  • Incoming requests may be classified as belonging to a class of requests, seen as class "A" requests 400, class "B" requests 402 and class "C" requests 404.
  • An admittance policy may then be applied to determine which buckets are involved in admitting or rejecting the request and how tokens should be deducted from those buckets.
  • Each class of requests may be associated with an admittance policy.
  • The policy may invoke a variety of logical and procedural mechanisms related to the use of tokens. One aspect of an admittance policy involves determining whether or not a request should be admitted for processing.
  • A policy 406 might specify that class "A" requests 400 should be admitted if at least one token is available in bucket "Y" 414.
  • A second policy 408 might specify that class "B" requests should be admitted if a token exists in either bucket "X" 412 or bucket "Y" 414.
  • A third policy 410 for class "C" requests 404 might indicate that requests should be admitted based on bucket "X" 412 alone.
  • These examples are illustrative and many other combinations are possible.
  • A request may be admitted based on a variable or predefined token threshold.
  • One example involves admitting requests only when a bucket has a number of tokens equal to a predicted number of tokens that might be consumed. Another example involves using a moving average of previous costs to set the minimum number of tokens. Numerous additional embodiments are possible.
  • Admittance policy may also involve determining how tokens are deducted.
  • When a request is first admitted, one or more tokens may be deducted from the bucket upon which the admittance was based.
  • The total cost of the request may not be known until the request has been at least partially processed. Therefore, at some time after a request has been admitted, a second token deduction may occur.
  • The policy may specify one or more buckets as the target of the deduction, and may also specify a distribution between the buckets.
  • Tokens may be withdrawn from the same bucket that determined the admittance. For example, if a request were admitted based on the presence of a token in bucket "X" 412, depicted in FIG. 4, the full token cost could also be deducted from bucket "X" 412.
  • Another embodiment deducts first from the bucket that determined the admittance, and then from one or more subsequent buckets. For example, if a request was allowed based on an available token in bucket "X" 412, a portion of the total cost, once determined, could be deducted from bucket "X" 412 and a portion from bucket "Y" 414. The amount deducted from the first bucket 412 may be determined to prevent the available token count from falling below a threshold value such as zero. The remainder may be deducted from the second bucket or the last bucket in a series of buckets, possibly causing that bucket's available token count to fall below zero.
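  • A possible rendering of that split deduction, using the illustrative TokenBucket class from above: charge the admitting bucket only down to a floor, and let the last bucket in the series absorb the remainder, possibly going negative. The function name and the floor default are assumptions for the example.

```python
def deduct_with_overflow(primary: TokenBucket, secondary: TokenBucket,
                         cost: float, floor: float = 0.0) -> None:
    """Charge the admitting bucket down to `floor`, then the remainder to the next bucket."""
    from_primary = max(0.0, min(cost, primary.available() - floor))
    primary.deduct(from_primary)           # e.g. bucket "X", kept at or above the floor
    secondary.deduct(cost - from_primary)  # e.g. bucket "Y", may fall below zero
```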
  • FIG. 5 depicts an embodiment for obtaining and applying an admittance policy. Although depicted as a sequence of operations beginning with operation 500 and ending with operation 516, those of ordinary skill in the art will appreciate that the depicted operations are intended to be illustrative, and that at least some of the depicted operations may be altered, omitted, reordered or performed in parallel.
  • A process for applying an admittance policy may involve receiving a request, as depicted by operation 502.
  • The class to which the request belongs may then be determined. This may be done through a variety of means. In an embodiment, the API associated with a request invocation may allow for one or more parameters that identify the class.
  • One example involves a textual parameter that names the class that the request corresponds to.
  • Requests may be classified based on their type. For example, write requests may be classified separately from read requests. Other embodiments might analyze requests to determine their potential costs and assign requests with similar cost levels to the same class.
  • Request class may be based on factors in addition to or instead of those specified in request parameters. For example, a given customer identifier or security role might be associated with a request class. The customer or role might be available from the context in which a request was invoked, or it might be specified as a parameter in the request. Other potential factors include the source internet protocol address of the request, the particular API being invoked and so forth. In addition, configuration or other mechanisms may be used to define classification rules.
  • A configuration associated with a customer, a web service, an API or other entity might be associated with a request.
  • Various default rules, possibly specified in the configuration, might also apply.
  • A default value might be applied when no other classification rule is applicable.
  • Embodiments may allow default values to be overridden.
  • Embodiments may also allow for certain default values to be fixed, so that they cannot be overridden.
  • A corresponding admittance policy may be received, obtained or otherwise readied for application as depicted by operation 504. This may involve accessing a record that describes the admittance policy, such as a list of buckets from which tokens may be withdrawn.
  • The record might, for example, be stored in a database, embedded in a code resource, configuration file and so forth.
  • Structures representing the buckets may be part of the policy description, or in other words the buckets and the policy definition may comprise an integrated structure.
  • Some embodiments may omit this step and apply policy by selecting an appropriate path of execution in the instructions that perform the various techniques described herein.
  • The request may be admitted or rejected by operation 506.
  • The admittance policy may describe one or more buckets which will be checked for an available token. Some embodiments may require multiple tokens or base admittance on the token count being at least above a threshold level. Some embodiments may allow a request to be admitted when the token count is negative, and the policy description might indicate a threshold negative value below which requests should not be admitted.
  • Operation 506 may also involve deducting at least one token.
  • The number of tokens deducted may be the same amount of tokens that was used to determine whether or not the request should be admitted. In this way, another request will not be admitted based on the same token or set of tokens.
  • Embodiments may also synchronize access to the buckets in order to prevent multiple requests from being admitted based on the same token or set of tokens.
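  • A simple way to provide that synchronization is to serialize the check-and-deduct step, for example with a lock around the bucket. This is only a sketch of the idea using the illustrative TokenBucket from above; a production system might instead use finer-grained or distributed coordination.

```python
import threading

bucket_lock = threading.Lock()


def try_admit(bucket: TokenBucket, required: float = 1.0) -> bool:
    """Check availability and deduct under one lock so that the same token
    cannot be used to admit two concurrent requests."""
    with bucket_lock:
        if bucket.available() >= required:
            bucket.deduct(required)
            return True
        return False
```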
  • the request may be processed as depicted by operation 508.
  • The total cost of the request may be determined based at least in part on processing the request. In various embodiments, the size of data returned to the client of the service may be used. For example, if the request was a database query, a token cost could be derived from the total size of the data returned to the client after executing the query. In other embodiments, various performance metrics might be collected while the request is processed and used as a basis of a cost determination.
  • Various embodiments may deduct the cost of the operation once the request has been performed. This step may be performed in compliance with the admittance policy, as depicted by operation 514.
  • the admittance policy may describe various approaches to deducting the token cost, such as distributing among multiple buckets or deducting the cost from the bucket that determined the admittance.
  • Various other policies may be employed.
  • Token buckets may have a hierarchical relationship. This approach may allow for convenient administration of the selected admittance policy because it allows a prioritization scheme to be defined with a one-to-one mapping between request classes and token buckets.
  • The hierarchical token buckets may be conveniently defined by specifying parent-child relationships between token buckets along with respective maximum token capacities.
  • FIG. 6A is illustrative of an embodiment employing an example of hierarchical token buckets. It depicts two token buckets. Bucket "X" 608 is a parent bucket of 610, and has a maximum capacity of thirty tokens.
  • Bucket "Y" 610 is a child of parent "X" 608, and has a maximum capacity of five tokens.
  • The tokens are shared between the buckets in hierarchical fashion, with a child bucket having access to all of its parent's tokens.
  • Class "A" requests 600 are associated with Bucket "Y" 610 based on application of class "A" policy 604.
  • Class "B" requests are associated with Bucket "X" 608 based on class "B" policy 606.
  • Admittance policies may comprise a one-to-one mapping between a request class and a bucket which may be used to determine whether or not to admit a request and from which token bucket, or buckets, the tokens are withdrawn.
  • An admittance policy may also comprise additional elements, such as the minimum number of available tokens needed to admit a request. Various methods and techniques described herein regarding non-hierarchical approaches may also be applied to hierarchical approaches.
  • A request may be admitted based on the availability of at least one token in the token bucket that is mapped to the request's class. For example, class "A" requests 600 may be admitted when Bucket "Y" 610 has an available token. Similarly, class "B" requests 602 may be admitted when Bucket "X" 608 has an available token. Some embodiments may require that more than one token be available, based for example on an estimated cost of processing the request.
  • The token or tokens required for admittance may be deducted when the request is admitted for processing. Embodiments may deduct one token upon admitting the request. The remaining cost, once known, may be deducted from the same bucket that determined admittance.
  • FIG. 6B is a diagram depicting an operation 650 deducting two tokens from Bucket "X" 608.
  • The pre-deduction state of Buckets "X" 608 and "Y" 610 is depicted by figure element 651, and the post-deduction state by element 652.
  • The tokens are deducted from Bucket "X" 608 to result in a token count of twenty-eight.
  • The tokens available to "Y" 610 remain unchanged.
  • Operation 660 depicts deducting two tokens from Bucket "Y" 610. The state before deduction is depicted by figure element 661.
  • The state after the deduction, as depicted by element 662, shows that the token counts available to both Bucket "X" 608 and "Y" 610 have been reduced by two.
  • This approach reflects sharing available tokens between the two buckets in a hierarchical fashion. When a request is admitted or processed on the basis of tokens being available in a child bucket, the tokens may be withdrawn from the child bucket and each of its parents.
  • various embodiments may require that, in order for a request to be admitted, a token must be available in both the child bucket and in each of its parents.
  • A request could be admitted on the basis of the state before deduction 661, because both Bucket "Y" 610 and its parent Bucket "X" 608 have at least one token.
  • On the other hand, embodiments may permit a parent bucket, such as Bucket "X" 608, to process requests even if there are insufficient tokens in a child bucket.
  • Embodiments may require at least one token to be present in parent bucket "X" 608. Accordingly, in these embodiments no requests dependent on Bucket "Y" 610 would be admitted until the number of tokens available to both Bucket "X" 608 and Bucket "Y" 610 rises above zero.
  • Embodiments may not reduce the number of tokens in a child bucket when tokens are deducted from its parent.
  • Embodiments may require at least one token to be present in each of the child bucket's parents.
  • Preventing the child's token count from going negative may help to prevent blackout periods for services associated with the child bucket.
  • Factors that may determine the length of a blackout period include the degree to which token counts in a child bucket and its parents are negative, and the child and parent buckets' respective refill rates. For example, in FIG. 6D, a request dependent on Bucket "Y" 610 may not be admitted until at least one token is available in both Bucket "X" 608 and Bucket "Y" 610. The rate at which Bucket "X" 608 refills with tokens may therefore influence the length of blackout periods seen with requests dependent on Bucket "Y" 610. Bucket "X" 608 may, however, be assigned an accumulation rate that is proportionally higher than that assigned to Bucket "Y" 610.
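  • The parent/child behavior discussed above can be sketched as a bucket that also walks its ancestor chain: admission requires tokens in the child and in each parent, and a deduction is applied to the child and to every ancestor. This is one reading of the embodiments described; the class below extends the illustrative TokenBucket from earlier, and the accumulation rates in the example are invented (the figures only give maximum capacities of thirty and five).

```python
class HierarchicalBucket(TokenBucket):
    """Illustrative child bucket that shares admissions and deductions with its ancestors."""

    def __init__(self, rate_per_sec: float, max_tokens: float, parent=None):
        super().__init__(rate_per_sec, max_tokens)
        self.parent = parent

    def chain(self):
        node = self
        while node is not None:
            yield node
            node = node.parent

    def can_admit(self, required: float = 1.0) -> bool:
        # Stricter embodiments: the child and each of its parents must have tokens.
        return all(b.available() >= required for b in self.chain())

    def deduct_all(self, amount: float) -> None:
        # Withdrawing from a child also withdraws from every parent (as in FIG. 6C).
        for b in self.chain():
            b.deduct(amount)


# Example mirroring FIG. 6A: parent "X" holds up to thirty tokens, child "Y" up to five.
bucket_x = HierarchicalBucket(rate_per_sec=10, max_tokens=30)
bucket_y = HierarchicalBucket(rate_per_sec=10, max_tokens=5, parent=bucket_x)
if bucket_y.can_admit():
    bucket_y.deduct_all(2)   # both "X" and "Y" drop by two
```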
  • FIG. 6E depicts operation 680 deducting more tokens from Bucket "Y" 610 than are available to it.
  • Bucket "Y" 610 has five tokens available to it, as depicted in the portion of the figure at element 681.
  • Following the deduction, Bucket "Y" 610 has negative thirty tokens available to it.
  • Bucket "X" 608 is left with negative five tokens.
  • This state, depicted by element 682, reflects deducting from child Bucket "Y" 610, upon which admittance is based, and from its parent Bucket "X" 608.
  • Embodiments of hierarchical token buckets may employ a variety of algorithms, data structures and so forth.
  • A record may be used to track the number of tokens available to a parent token bucket.
  • The number of tokens available to its children may then be determined based on a set of rules, an algorithm and so forth.
  • The number of tokens available to a child token bucket may be determined based on the number of tokens available to the parent token bucket and the maximum number of tokens the child token bucket is allowed to accumulate.
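  • One simple realization of that rule, assumed here for illustration rather than taken from the disclosure, is to store a balance only on the parent and cap what a child may see at its own maximum:

```python
def child_available(parent_tokens: float, child_max: float) -> float:
    """Tokens visible to a child when only the parent's balance is stored:
    limited by the parent's balance and by the child's own maximum capacity."""
    return min(parent_tokens, child_max)


# Example: a parent holding 30 tokens exposes at most 5 to a child capped at 5.
print(child_available(30, 5))  # 5
```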
  • A hierarchy of token buckets may accumulate new tokens as a group.
  • A token accumulation rate may be associated with the hierarchy that includes Buckets "X" 608 and "Y" 610.
  • The number of tokens available to both Buckets "X" 608 and "Y" 610 may increase up to their respective maximums.
  • FIG. 6F depicts a hierarchical arrangement of token buckets in which two or more request classes share buckets having equal priority.
  • Class "A" requests 6000 may be directed to Bucket "X" 6010, and class "B" requests 6002 may be directed to Bucket "Y" 6012.
  • Parent Bucket "P" 6008 may have a total capacity of 100, corresponding to Bucket "X" 6010 receiving an allocation of 80 tokens per second and Bucket "Y" 6012 receiving an allocation of 20 tokens per second.
  • The maximum capacity of the buckets may be the same as their respective rates of token allocation.
  • An admission policy may be defined for the arrangement depicted in FIG. 6F in which the child buckets share equal priority.
  • The admission policy may proceed as follows: Upon the arrival of a request corresponding to class "A" requests 6000, Bucket "Y" 6012 may be consulted for token availability. If at least one token is available, the request may be admitted. The count of available tokens in Bucket "Y" 6012 and the Parent Bucket may each be reduced upon admittance and again upon processing the request. If insufficient tokens are available in Bucket "Y" 6012, the request may be admitted if there is at least one token in Parent Bucket "P" 6008.
  • A token may be deducted from Parent Bucket "P" 6008 upon admittance and again upon processing the request.
  • Class "B" requests 6002 may be processed in a similar manner by defining class "B" policy 6006 in a manner similar to class "A" policy 6004, adjusting for differences in token allocation rates.
  • A consequence of this approach involves requests from each class having access to their provisioned capacity, while being able to draw on additional capacity if the sibling bucket is underutilizing the capacity provisioned to it. For example, if only class "A" requests 6000 are issued, there will be up to 100 tokens per second available for processing them. If the workload is mixed and class "A" requests 6000 consume ten tokens per second, then class "B" requests 6002 will be able to consume up to 90 tokens per second.
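  • The spill-over admission just described can be sketched as follows, again reusing the illustrative TokenBucket from above. The 80/20 split follows the numbers in the figure description, but the mapping of classes to child buckets, the function name and the single-step charging are simplifications assumed for the example.

```python
# Shared parent with two equal-priority children (FIG. 6F): a class consults its own
# child bucket first and may spill over into the parent's unused capacity.
parent_p = TokenBucket(rate_per_sec=100, max_tokens=100)
children = {
    "A": TokenBucket(rate_per_sec=80, max_tokens=80),   # assumed class-to-child mapping
    "B": TokenBucket(rate_per_sec=20, max_tokens=20),
}


def admit_and_charge(request_class: str, cost: float) -> bool:
    child = children[request_class]
    if child.available() >= 1:
        child.deduct(cost)     # charge the class's own allocation...
        parent_p.deduct(cost)  # ...and the shared parent
        return True
    if parent_p.available() >= 1:
        parent_p.deduct(cost)  # draw on capacity the sibling is not using
        return True
    return False
```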
  • Aspects of the admittance policy may be defined by a customer of a hosting provider.
  • The definition may be performed by the customer's interaction with a user interface provided by the service provider.
  • A web page might allow the user to define the classes and the respective buckets from which tokens may be withdrawn.
  • The user might also set various additional parameters, such as a minimum token amount for requests in each class.
  • Various embodiments may provide means for administering the policies that govern input/output prioritization, which may for example include defining request classes, token buckets, admittance policies and so forth.
  • User interface 700 may comprise part of a sequence of user interface pages presented to a user during creation of a hosted database table.
  • The previous step 701 user interface component may represent a button, hyperlink or similar element for navigating to a previous point in a user interface wizard for defining a hosted database table.
  • The create table 702 user interface element may indicate that the user has finished supplying parameters to the table creation process and that a table should be created. The depicted user interface elements are intended to be generalized examples of an approach to providing such an interface, and should not be construed as limiting the scope of the present disclosure.
  • User interface 700 may contain one or more policy definitions 704a, 704b and 704c, which may be used to supply parameters indicative of the relationship between a class of request and one or more buckets from which tokens may be withdrawn, and possibly other various elements of an admittance policy.
  • Policy definition element 704a may comprise a class indication 706a, primary bucket indication 708a, and secondary bucket indication 710a.
  • The class indication 706a may comprise various parameters that describe a class, which may include a class name and one or more request types. In some embodiments, the request classes and request types are pre-defined by the hosting service provider.
  • A number of additional class indications may be presented in user interface 700, such as policy definition 704b comprising class indication 706b, primary bucket indication 708b and secondary bucket indication 710b, and policy definition 704c comprising class indication 706c, primary bucket indication 708c and secondary bucket indication 710c.
  • Primary bucket element 708a and secondary bucket element 710a indicate the token buckets that comprise part of the admittance policy, as well as their respective priorities.
  • The token bucket indicated by 708a would be consulted first in applying admittance decisions for requests that fall into the class specified by class indication 706a.
  • The token bucket specified by secondary token bucket 710a would be consulted second. Policy definitions 704b and 704c may refer to the same token buckets or to overlapping sets of token buckets.
  • FIG. 7B depicts an illustrative embodiment of a user interface for defining admittance policies employing hierarchical token buckets.
  • User interface 750 may comprise part of a table definition process that employs previous step 751 and create table 752 user interface elements to navigate to other steps in the process and to indicate that the process should be completed.
  • User interface 750 may comprise one or more policy definitions 754a, 754b and 754c that allow the customer to supply parameters for creating admittance policies and hierarchical token buckets.
  • Policy definition 754a might include a bucket name 756a user interface element indicative of the name of a token bucket. In some embodiments this may be a drop box or other user interface element containing predefined bucket names.
  • Request classes 760a might comprise a combo box, list box or other user interface element allowing request classes to be assigned to the bucket indicated by bucket name 756a.
  • The child token bucket 758a user interface element might be used to specify one or more child token buckets, such as Bucket "Y" 610 depicted in FIG. 6A. This may be a list box or other user interface element allowing one or more child token buckets to be selected. In some embodiments, the parent token bucket might be specified in place of one or more child token buckets.
  • User interface 750 may allow for a number of additional policy definitions to be defined. For example, user interface 750 also contains user interface elements for defining or editing policy definition 754b, which comprises bucket name 756b, request classes 760b and child bucket 758b, and policy definition 754c, which comprises bucket name 756c, request classes 760c and child bucket 758c.
  • User interfaces 700 and 750 may use alternative representations for directly or indirectly specifying a token bucket and corresponding request classes.
  • User interface 700 might present a choice of high, medium or low priority that could be associated with a request class.
  • The relationships between buckets could be inferred from the priority level selected by a user.
  • A similar approach could be employed in user interface 750.
  • Embodiments may employ user interfaces similar to those depicted in FIGS. 7A and 7B to allow customers to subsequently edit request classes, buckets, relationships between buckets and so forth. One example involves changing the definition of a table.
  • A customer might make various modifications to the definition of a database table, for example by adding additional columns.
  • The modified table might be associated with a different usage pattern. Accordingly, the customer might also specify a change in the capacity allocated to the table, the admittance policy, the number of buckets and so forth. Various user interface elements or APIs might be used to supply the relevant information.
  • The user interfaces depicted in FIGS. 7A and 7B may be implemented using a wide variety of technologies, including thick-client or other architectures.
  • A web server operating in a hosting provider's data center serves hypertext markup language ("HTML") forms to a customer's browser.
  • A web service within the hosting provider's data center receives and processes the information supplied by the customer.
  • FIG. 8 is a flowchart depicting a process for creating a database table with associated token buckets and admittance policies. Although depicted as a sequence of operations starting with operation 800 and ending with operation 814, it will be appreciated by those of ordinary skill in the art that at least some of the depicted operations may be altered, omitted, reordered or performed in parallel. For example, the information indicated by operations 802-808 may be received concurrently at the hosting provider data center.
  • Operation 802 depicts receiving information indicative of a table definition.
  • Embodiments of the present disclosure may allocate capacity on a per-table basis or on a per-partition basis if the defined table involves more than one partition. A partition may comprise a subdivision of a table, each of which may be maintained by a DBMS operating on one or more computing nodes. Because capacity may be allocated on a per-table or per-partition basis, admittance policies and token buckets may be defined on a similar basis.
  • Operation 804 depicts receiving information indicative of one or more classes of requests. These classes may be defined by the user, for example through a user interface such as those depicted in FIGS. 7A and 7B. In other embodiments, the hosting provider may predefine the request classes. In an embodiment, the request classes are labeled as "high," "medium" and "low."
  • Operation 806 depicts receiving information indicative of the buckets that should be created. In some embodiments the information may comprise a listing or count of the buckets to be created, while in others the information may be inferred. For example, a one-to-one correspondence between request classes and token buckets may be inferred. In an embodiment, three token buckets may be created to correspond to the "high," "medium" and "low" request classes.
  • Information indicative of one or more admittance policies may be received. This information may comprise a mapping between request classes and buckets, and may include information indicative of the order in which buckets should be consulted to determine admittance, a method of deducting tokens and so forth.
  • The information may be combined with other information referred to by operations 802-806. Certain aspects of the information may be determined inferentially.
  • A policy description that references a bucket may be used to infer that the referenced bucket should be created.
  • a partitioning scheme may be determined.
  • The table or other service to be hosted may be divided among multiple computing nodes. Accordingly, embodiments may determine how many and which computing nodes to involve, as well as other aspects of partitioning the table, such as determining how to divide the data maintained by the table. For services not involving tables, this may involve determining how to divide the workloads handled by the respective partitions.
  • Per-customer capacity may, for example, be divided evenly between the partitions, or it may be divided based on the amount of workload a partition or computing node is expected to handle. For example, if a first partition is expected to handle three-fourths of a table's workload, it may be allocated three-fourths of the capacity.
  • Allocating per-customer capacity to a partition may involve assigning a proportion of a total amount of token generation to a partition. For example, it may be determined based at least in part on a customer's service tier that the customer should be allocated a given quantity of tokens per second. Continuing the previous example, three-fourths of that capacity could be allocated to one partition or computing node and the remaining one-fourth to another. This is depicted by operation 810.
  • The total per-customer capacity assigned to a partition may be suballocated to the token buckets to be created on that partition, as depicted by operation 812. Continuing the previous example, if three-fourths of the total capacity corresponded to token generation at a rate of seventy-five tokens per second, then a total of seventy-five tokens per second could be allocated to the token buckets associated with that partition or computing node. If there were three token buckets for that partition, then each could be allocated twenty-five tokens per second.
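  • The suballocation arithmetic in that example (a 100-token-per-second customer allocation, three-fourths of it to one partition, split across three buckets) reduces to a proportional division; the helper below restates it, with an illustrative function name.

```python
def suballocate(total_rate: float, partition_share: float, bucket_count: int) -> float:
    """Tokens per second granted to each bucket on a partition."""
    return total_rate * partition_share / bucket_count


# Example from the text: 100 tokens/s for the customer, three-fourths to this
# partition, three buckets on the partition => 25 tokens/s per bucket.
print(suballocate(100, 0.75, 3))  # 25.0
```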
  • A per-bucket allocation rate may be used to create, initialize or otherwise represent a token bucket. In various embodiments, creating a bucket may comprise initializing various data structures, such as a record comprising a maximum token capacity, a current capacity, a token allocation rate and a last addition time.
  • Numerous other embodiments are possible. For example, in some embodiments there may not be a one-to-one correspondence between logically defined buckets and data structures stored in memory.
  • operation 802 could comprise receiving information indicative of a change to the definition of an existing table.
  • Information pertaining to request classes, bucket definitions and relationships, admittance policies, partitioning scheme and so forth can be received subsequent to their initial definition, and the corresponding entities and relationships updated accordingly.
  • FIG. 9 is a diagram depicting an example of a distributed computing environment in which aspects of the present invention may be practiced.
  • Various users 900a may interact with various client applications, operating on any type of computing device 903a, to communicate over communications network 904 with processes executing on various computing nodes 910a, 910b and 910c within a data center 920.
  • Client applications 903b may communicate without user intervention.
  • Communications network 904 may comprise any combination of communications technology, including the internet, wired and wireless local area networks, fiberoptic networks, satellite communications and so forth. Any number of networking protocols may be employed.
  • Data center 920 may be configured to communicate with additional data centers, such that the computing nodes and processes executing thereon may communicate with computing nodes and processes operating within other data centers.
  • Processes on computing node 910a may execute in conjunction with an operating system, or alternatively may execute as a bare-metal process that directly interacts with physical resources, such as processors 916, memories 918 or storage devices 914.
  • Computing nodes 910b and 910c are depicted as operating on virtual machine host 912, which may provide shared access to various physical resources, such as physical processors, memory and storage devices. Any number of virtualization mechanisms might be employed to host the computing nodes.
  • The various computing nodes depicted in FIG. 9 may be configured to host web services, database management systems, business objects, monitoring and diagnostic facilities, and so forth.
  • A computing node may refer to various types of computing resources, such as personal computers, servers, clustered computing devices and so forth.
  • When implemented in hardware form, computing nodes are generally associated with one or more memories configured to store computer-readable instructions, and one or more processors configured to read and execute the instructions.
  • A hardware-based computing node may also comprise one or more storage devices, network interfaces, communications buses, user interface devices and so forth.
  • Computing nodes also encompass virtualized computing resources, such as virtual machines implemented with or without a hypervisor, virtualized bare-metal environments, and so forth.
  • A virtualization-based computing node may have virtualized access to hardware resources, as well as non-virtualized access. The computing node may be configured to execute an operating system, as well as one or more application programs.
  • a computing node might also comprise bare-metai application programs.
  • Each of the processes, methods and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computers or computer processors.
  • The code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical disc and/or the like.
  • The processes and algorithms may be implemented partially or wholly in application-specific circuitry.
  • The results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of non-transitory computer storage such as, e.g., volatile or non-volatile storage.
  • In addition, certain method or process blocks may be omitted in some implementations.
  • The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate.
  • Described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state.
  • The example blocks or states may be performed in serial, in parallel or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments.
  • The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from or rearranged compared to the disclosed example embodiments.
  • ASICs application-specific integrated circuits
  • controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers)
  • FPGAs field-programmable gate arrays
  • CPLDs complex programmable logic devices
  • Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, memory, a network, or a portable media article to be read by an appropriate drive or via an appropriate connection.
  • the . systems, modules and data structures may also be transmitted as generated data signals (e.g.
  • one or more memories having stored thereon computer readable instructions that, upon execution, cause the system at least to:
  • The system further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to: determine that a second capacity indicator associated with a second token bucket of the plurality of token buckets is indicative of a lack of capacity to perform the operation on behalf of the customer.
  • A computer-implemented method for prioritizing capacity consumption comprising: receiving a request to perform an operation on one or more computing nodes, the request comprising information indicative of a request class, the operation to be performed on behalf of a customer
  • the first data structure comprises a first capacity indicator
  • the first capacity indicator is indicative of a capacity of the one or more computing nodes to perform the operation on behalf of the customer
  • A non-transitory computer-readable storage medium having stored thereon instructions that, upon execution by a computing device, cause the computing device to at least:
  • A system for prioritizing capacity consumption comprising:
  • one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
  • the data structures comprise a first capacity indicator
  • The system of clause 23, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
  • The term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some or all of the elements in the list.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Pharmaceuticals Containing Other Organic And Inorganic Compounds (AREA)

Abstract

A database management system may be operated by a third-party provider that hosts the system in a datacenter and provides access to the system to end users on behalf of various entities. Limits on total capacity consumption may be imposed, but may result in service outages when capacity consumption exceeds those limits. Requests to perform operations on the system may be classified. The request classifications may be associated with policies for admitting or rejecting the request. One or more token buckets representative of capacity available to the request to perform the operation may be used to determine to admit the request and updated based on the cost of performing the operation.

Description

INPUT-OUTPUT PRIORITIZATION FOR DATABASE WORKLOAD
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Patent Application No. 13/897,232, filed May 17, 2013, the disclosure of which is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Database management systems ("DBMS") may be operated by a third-party provider that hosts the DBMS on servers in a datacenter and provides the DBMS as a service to various entities such as corporations, universities, government agencies, and other types of customers. In order to host the DBMS and provide the service to the various entities, the provider typically maintains significant resources in hardware, software, and infrastructure. In addition, the provider may incur various ongoing costs related to operating the DBMS, such as power, maintenance costs, and the salaries of technical personnel. Accordingly, in order to provide a responsive service to the various entities, the provider may attempt to maximize the capacity and utilization of the hardware and other resources installed at its datacenters.
BRIEF DESCRIPTION OF DRAWINGS
[0003] The drawings provided herein are designed to illustrate example embodiments and are not intended to limit the scope of the disclosure.
[0004] FIG. 1 is a block diagram depicting a database management system exposed to end users through a web service and limiting capacity consumption through the use of a token allocation and consumption mechanism.
[0005] FIG. 2 is a flowchart depicting enforcement of a provisioned capacity using a token allocation and consumption mechanism.
[0006] FIG. 3A is a diagram depicting allocation of tokens to a token bucket and multiple request types consuming tokens from the token bucket.
[0007] FIG. 3B is a diagram depicting allocating tokens to multiple token buckets that are categorized according to request type and withdrawing tokens from the bucket based on processing a corresponding request type.
[0008] FIG. 4 is a diagram depicting dividing request types into a plurality of request classes and associating the classes with admittance policies that may control which token bucket determines admittance and which bucket or buckets tokens are withdrawn from.
[0009] FIG. 5 is a flowchart depicting an embodiment for obtaining and applying an admittance policy based on a classification of a request.
[0010] FIG. 6A is a diagram depicting dividing request types into a plurality of request classes and associating the classes with admittance policies that govern request admittance and token withdrawal from a hierarchical arrangement of token buckets.
[0011] FIG. 6B is a diagram depicting deducting tokens from a parent bucket in a hierarchical arrangement of token buckets.
[0012] FIG. 6C is a diagram depicting deducting tokens from a child bucket in a hierarchical arrangement of token buckets.
[0013] FIG. 6D is a diagram depicting deducting, from a parent bucket, more tokens than are available to the parent.
[0014] FIG. 6E is a diagram depicting deducting, from a child bucket, more tokens than are available to either the child bucket or the parent bucket.
[0015] FIG. 6F is a diagram depicting a hierarchical arrangement of token buckets where two or more classes of equal priority share a parent bucket.
[0016] FIG. 7A is a diagram depicting an illustrative example of a user interface for obtaining customer-provided information pertaining to request classes, token buckets, and admittance policies.
[0017] FIG. 7B is a diagram depicting an illustrative example of a user interface for obtaining customer-provided information pertaining to request classes, token buckets, and admittance policies, adapted for an approach that utilizes a hierarchical token-bucket model.
[0018] FIG. 8 is a flowchart depicting steps for receiving capacity allocation and admittance policy information in conjunction with a customer's definition of a new table, and using the information to create token buckets and apply admittance policy on a per-partition basis.
[0019] FIG. 9 is a block diagram depicting an embodiment of a computing environment in which aspects of the present disclosure may be practiced.
DETAILED DESCRIPTION
[0020] As noted above, a provider may host a DBMS in a datacenter and provide access to the DBMS as a service to various entities. In that regard, the DBMS may be exposed through a web service, a web application, a remote procedure call and so forth. These mechanisms and others may be referred to herein as services. In some embodiments, a DBMS may provide an integrated front-end that exposes two or more of the services to end users of the entities or customers. Through the services, the end users may make requests that include various operations and queries to be performed on the DBMS through the use of application programming interface ("API") calls to the service. A request may comprise, for example, an invocation of an API on behalf of a customer, as well as an invocation of an operation on a DBMS on behalf of a customer.
[0021] The provider may also require payment from a customer in exchange for the use of this capacity. However, the profitability of the endeavor may depend on a customer paying an amount that is proportional to the capacity consumed on its behalf. A limit on capacity consumption may be imposed on a customer and enforced through various techniques such as throttling, queuing, and so forth. When usage exceeds the amount provisioned to the customer, requests for services on behalf of a customer may be rejected or suspended. This may be disadvantageous to the customer in a variety of circumstances. For example, the service may be a component of an e-commerce web site or similar application, which may become non-functional if requests for the service are rejected. However, it may be that not all requests for services are equally important to the customer. While various requests such as displaying the contents of an e-commerce shopping cart or processing a customer order may be of high importance, others may not. For example, certain types of requests with low relative importance may include maintenance tasks, report generation, data mining and so forth. These tasks may also happen to consume a relatively large portion of capacity, and are therefore more likely to cause outages, blackout periods, or delays caused when a customer's provisioned capacity has been exceeded.
[0022] An end user may invoke operations on the DBMS by sending a request including an identifier of the operation and one or more parameters. Any number of operations may be identified, and may include operations such as reading or writing data, performing queries on data, and various data definition and schema-related instructions such as creating and deleting tables. The parameters that may be included with the request may be of any type, such as textual values, enumerated values, integers and so forth. The particular combination of parameters will vary based on the type of operation being invoked.
[0023] A DBMS is a software and hardware system for maintaining an organized collection of data. In a DBMS, data is typically organized by associations between key values and additional data. The nature of the associations may be based on real-world relationships that exist in the collection of data, or it may be arbitrary. Various operations may be performed by a DBMS, including data definition, queries, updates, and administration. Some DBMSs provide for interaction with the database using query languages such as structured query language ("SQL"), while others use APIs containing operations such as put() and get() and so forth. Interaction with the database may also be based on various protocols or standards, such as hypertext markup language ("HTML") and extended markup language ("XML"). A DBMS may comprise various architectural components, such as a storage engine that acts to store data on one or more storage devices such as solid-state drives.
[0024] The provider may host any number of DBMSs within its datacenters. The DBMSs may operate on any number of computing nodes and may be associated with various storage devices and connected using a wide variety of networking equipment and topologies. Moreover, a variety of DBMSs may be hosted, including relational databases, object-oriented databases, non-structured query language ("NoSQL") databases and so forth.
[0025] As noted earlier, a limit on capacity consumption may be imposed on a customer. In embodiments, a customer may be prevented from consuming above a set level of capacity. The customer's level of capacity consumption may be limited through various estimation and measurement techniques. Because of the wide variety of computing resources that may be involved in processing a request, capacity consumption may be difficult to determine. However, various measurable quantities may serve as reasonable proxies for capacity consumption. In various embodiments, quantities such as the amount of data sent to or received from a client application may be employed to estimate the capacity consumed by processing a certain request. For example, a query request may scan a database table in order to determine rows that conform to the constraints specified in the query. The number of rows returned may be a proxy for capacity consumption. For example, if a single row of data is returned, the query may have been limited in scope and, therefore, is likely to have consumed less capacity than a query that resulted in many rows of data being returned. More details of example embodiments are described below in connection with the figures.
[0026] As noted above, in an example embodiment a provider hosts one or more DBMSs within a data center and provides access to the various DBMSs through a web service. FIG. 1 depicts an environment for hosting provisioned web services and databases within a data center. End-user applications 102 may be connected to elements within data center 120 by communications network 103, gateway 104, and router 106. Those of ordinary skill in the art will recognize that this network configuration is one of many possible configurations that may be incorporated into embodiments of the present disclosure.
[0027] Web service 110 may provide various APIs to provide functions related to the operation of database 116. In some cases, the APIs may serve as light-weight wrappers built on top of more complex database interfaces or protocols. For example, depicted API 111 might provide access to a query function of database 116 through use of an interface adhering to representational state transfer ("REST") principles. End-user applications 102 may then invoke API 111, using comparatively simple REST semantics, to query a key-value database without needing to understand the technical details of database 116.
[0028] Web service 110 and database 116 may operate on a variety of platforms such as one or more computers, virtual machines or other forms of computing services, which may collectively be referred to as computing nodes. Operation of these nodes, as well as associated storage devices, network infrastructure and so forth, involves various costs. These costs include those related to hardware and software acquisition, maintenance, power, personnel and so forth. The costs may also include factors such as opportunity cost incurred when consumption of resources by one customer prevents utilization of the service by another.
[0029] Operations performed by web service 110 and database 116 on behalf of a customer may be correlated to consumption of an amount of capacity on a given computing node. The correlation may allow a hosting service provider to calculate the costs incurred by providing the service. For example, if a given customer invokes a web service that utilizes one hundred percent of a computing node's capacity over an entire day, the cost of providing the service may be the sum of acquisition, maintenance and operating costs for the computing node, prorated for a twenty-four hour period.
[0030] Accordingly, consumption of capacity may be limited through various means such as the embodiment depicted in FIG. 1. Acceptance policy 108 involves determining whether or not a request should be processed. In general terms, a goal of acceptance policy 108 may be to ensure that requests performed on behalf of a customer are not permitted to exceed a provisioned amount of capacity. For example, a customer may be provisioned twenty-five percent of a computing node's capacity. Acceptance policy 108 may then act to limit that customer's average consumption of capacity to no more than twenty-five percent. In some embodiments, peak usage may be permitted to rise above that amount for a limited period of time.
[0031] When a customer's capacity has been exceeded, acceptance policy 108 may reject incoming requests. Depending on the nature of the request, this may have consequences that are important to the customer. For example, a customer might run a shopping web site which directs requests to database 116 to retrieve the contents of an end user's shopping cart. If the request is rejected, an error message rather than a completed sale might result. On the other hand, some types of requests can be postponed without significant consequences. Possible examples include maintenance tasks, report generation and so forth. Accordingly, acceptance policy 108 may be implemented to account for the type of request being invoked when making admit or reject decisions.
[0032] Embodiments may employ a token bucket model to limit capacity consumption. A token bucket may be seen conceptually as a collection of tokens, each of which represents a unit of work that the owner of the bucket is authorized to perform. Tokens may be added to a bucket at an accumulation rate, which may for example be based on a level of service. When work is performed, a number of tokens equivalent to the amount of work performed is withdrawn from the bucket. If no tokens are available, the work may not be performed. Using this approach, over time the amount of work that may be performed is limited by the rate at which tokens are added to the bucket.
[0033] In order to prevent near-term over-utilization of capacity, a limit may be imposed on the maximum number of tokens that may be added to the bucket. Any tokens to be added above that limit may be discarded.
[0034] In the context of a database operation, a token may be considered to represent a unit of cost related to performing the database operation. For example, the cost of a request to perform an operation on database 116 might correspond to the size of data returned when the operation is executed. The cost of performing the operation, as measured in tokens, may be determined by dividing the size of the data by a size-per-token value.
[0035] A requested operation may be deemed to cost at least one token, but the full cost may not be known until after the operation has actually been performed. In one embodiment, among many possible embodiments, a request may be admitted when at least one token is available. The request may then be processed and the total cost of the request determined based on one or more measured quantities.
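To make the size-to-token arithmetic described above concrete, here is a minimal sketch in Python; the 4 KB size-per-token value and the function name are illustrative assumptions, not values taken from the disclosure.

```python
import math

BYTES_PER_TOKEN = 4096  # assumed size-per-token value; not specified in the disclosure

def token_cost(response_size_bytes: int) -> int:
    """Estimate an operation's token cost from the size of the data it returned.

    The cost is the data size divided by the size-per-token value, rounded up,
    and never less than one token.
    """
    return max(1, math.ceil(response_size_bytes / BYTES_PER_TOKEN))

print(token_cost(10_000))  # -> 3, since ceil(10000 / 4096) = 3
print(token_cost(100))     # -> 1, the one-token minimum
```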
[0036] In FIG. 1, tokens may accumulate in a virtual container such as token bucket 112. The token bucket may be considered to represent an association between units of permitted capacity consumption, represented by tokens, and an entity such as a customer or service. For example, when a customer creates a table on database 116, token bucket 112 may be associated with all operations performed on that table. Other embodiments might associate a token bucket with a table partition, customer, computing node and so forth.
[0037] Token accumulation policy 114 may govern the addition of tokens into token bucket 112. In an embodiment, accumulation policy 114 comprises a rate of addition and a maximum token capacity. For example, a policy might indicate that a given bucket should accumulate tokens at a rate of twenty per second but that no more than one hundred tokens should be allowed to accumulate.
[0038] Tokens and token buckets may be represented by various structures. In embodiments, acceptance policy 108, token bucket 112 and token accumulation policy 114 are implemented by a module of functionality, such as a software library, executable program and so forth. The module may represent tokens and token accumulation by recording a current number of tokens, a maximum number of tokens, a rate of accumulation and a last token addition time. When determining whether or not to admit or reject the request, the module may first update the current number of tokens based on the rate of accumulation, the last time tokens were added and the current time. For example, when a data structure corresponding to a token bucket is examined to determine if capacity is available, the number of new tokens accumulated may be determined by multiplying the rate of accumulation by the amount of time that has elapsed since the last update to the count of available tokens. This value may be added to the count of currently available tokens, but not allowed to exceed the maximum number of tokens allowed in the bucket. Other techniques for maintaining the token bucket, such as those based on a periodically invoked routine, are also possible.
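The record-based bucket representation described in the preceding paragraph might be sketched as follows (Python). The class and field names are illustrative assumptions; the point is that only a current count, a maximum, an accumulation rate and a last-update time are stored, with tokens added lazily whenever the bucket is examined.

```python
import time

class TokenBucket:
    """Minimal sketch of the record described above: current tokens, a maximum,
    an accumulation rate, and the time tokens were last added."""

    def __init__(self, rate_per_second: float, max_tokens: float):
        self.rate = rate_per_second      # token accumulation rate
        self.max_tokens = max_tokens     # cap; tokens above this are discarded
        self.tokens = max_tokens         # current count (may go negative after settlement)
        self.last_update = time.monotonic()

    def refill(self) -> None:
        """Lazily add tokens: rate multiplied by elapsed time, capped at the maximum."""
        now = time.monotonic()
        self.tokens = min(self.max_tokens,
                          self.tokens + self.rate * (now - self.last_update))
        self.last_update = now
```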
[0039] FIG. 2 depicts an embodiment of applying acceptance and token accumulation policies. Although depicted as a sequence of operations starting at operation 200 and ending with operation 216, those of ordinary skill in the art will appreciate that the operations depicted are intended to be illustrative of an embodiment and that at least some of the depicted operations may be altered, omitted, reordered or performed in parallel.
[0040] At operation 202, a request to perform a service is received. As an example, the request might comprise a database query. The cost of the database query may be determined based upon the amount of data returned by performing the query, possibly measured by the number of bytes of data returned to the end user.
[0041] Operation 204 depicts updating the count of available tokens. In an embodiment, the number of available tokens may be adjusted based on a last update time, a token accumulation rate and the current time. The maximum number of available tokens may also be limited. When a request is admitted, a token is deducted from the current count of tokens. However, because various embodiments may allow the current token count to fall below zero, it may be the case that no tokens are available for deduction. Operation 206 depicts determining if a token is available for deduction. Some embodiments may consider one token to be sufficient to admit the request, while others may attempt to estimate the number of tokens, i.e. the capacity, that processing the request will consume. As used herein, the terms sufficient tokens or sufficient capacity may refer to one token, a fixed number of tokens, a number of tokens based on an estimate of capacity that will be utilized by processing a request and so forth. If insufficient tokens are available, the request is rejected as depicted by operation 208. A client application and/or the customer of the hosting service may be notified of the rejection.
[0042] Operation 210 depicts admitting a request when at least one token is available. The count of available tokens may be reduced by one and the request is processed, as depicted by operation 212. Once the request has been processed, the number of available tokens may be adjusted further downward, as depicted by operation 214, based on the actual cost of performing the request. In various embodiments, the actual cost may be measured based on metrics such as the amount of data returned, CPU utilization, memory consumption, network bandwidth consumption and so forth.
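A rough sketch of the admit-then-settle flow of operations 204-214, building on the TokenBucket sketch above (Python); the helper names are assumptions, and `execute_request` is assumed to perform the work and report its measured cost in tokens.

```python
def handle_request(bucket: "TokenBucket", execute_request) -> bool:
    """Admit a request if at least one token is available, then settle its actual cost.

    Returns True if the request was admitted and processed, False if it was rejected.
    """
    bucket.refill()                    # operation 204: update the available-token count
    if bucket.tokens < 1:              # operation 206: is a token available?
        return False                   # operation 208: reject; the client may be notified

    bucket.tokens -= 1                 # operation 210: deduct one token upon admittance
    actual_cost = execute_request()    # operation 212: process the request
    bucket.tokens -= actual_cost - 1   # operation 214: settle the remainder; may go negative
    return True
```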
[0043] The embodiment depicted by FIG. 2 may allow the count of tokens currently in a bucket to fall below zero. A negative token balance may correspond to a blackout period during which no requests that depend upon that token bucket may be admitted. The length of the blackout period can depend upon the current negative token count and the rate of token accumulation. For example, if the token accumulation rate is ten per second, and the number of available tokens is -100, the length of the blackout period may be ten seconds.
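The blackout-length arithmetic in the example above can be written out as a small helper (Python; the function name is an illustrative assumption).

```python
def blackout_seconds(token_count: float, accumulation_rate: float) -> float:
    """Length of the blackout period implied by a negative token count.

    With an accumulation rate of 10 tokens per second and a count of -100,
    no dependent request can be admitted for roughly 10 seconds.
    """
    return max(0.0, -token_count) / accumulation_rate

print(blackout_seconds(-100, 10))  # -> 10.0
```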
[0044] Certain types of requests, such as those involving maintenance, reporting, summarization and so forth, may be data intensive and treated as high-cost. These types of requests may therefore cause lengthy blackout periods, during which any request, including those of low cost but high importance, may be unable to run. FIG. 3A depicts an example of this type of situation. The total provisioned capacity allocated to a particular entity such as a table, partition, computing node and so forth may be represented by token inflow 308, the rate at which tokens are added to token bucket 302. For example, long-running maintenance task 306 may be a data-intensive task that causes a comparatively large amount of token outflow 310. It may be the case that each time the task is run, the number of available tokens in token bucket 302 drops to a negative number and a blackout period ensues. On the other hand, a query request 304 may require little data outflow and cause a comparatively small amount of token outflow 312. However, a previously executed maintenance task may have caused a blackout period, and the query requests may be rejected despite their low cost. It may be advantageous to avoid such situations.
[0045] FIG. 3B depicts dividing provisioned capacity into two separate token buckets 314 and 316. Token inflow is divided equally as depicted by token inflow 308a and token inflow 308b. The cost of performing the long-running maintenance tasks and the queries remains constant, and thus token outflow rates 310 and 312 remain unchanged. This arrangement prevents query requests from being blocked by executing a long-running maintenance task. However, it may be the case that the maintenance task is rarely called. If so, much of the capacity reserved to the long-running task may be wasted.
[0046] Request admittance may be determined based on more than one bucket, and tokens may be deductible from more than one bucket. For example, one embodiment might divide request types into high, medium and low priorities. A high priority request might be able to withdraw from any bucket, the medium request from two of the three buckets, and the low priority request from only one. Categories of similar request types may be described as classes. These classes may then be associated with an admittance policy. The admittance policy may comprise an indication of which token buckets should be used to determine whether or not a request should be admitted, and a technique or approach for withdrawing tokens once the total cost of the request is known.
[0047] FIG. 4 depicts an embodiment for allocating capacity based on request classes and admittance policies. Incoming requests may be classified as belonging to a class of requests, seen as class "A" requests 400, class "B" requests 402 and class "C" requests 404. An admittance policy may then be applied to determine which buckets are involved in admitting or rejecting the request and how tokens should be deducted from those buckets.
[0048] Each class of requests may be associated with an admittance policy. The policy may involve a variety of logical and procedural mechanisms related to the use of tokens. One aspect of an admittance policy involves determining whether or not a request should be admitted for processing. Using FIG. 4 as an example, a policy 406 might specify that class "A" requests 400 should be admitted if at least one token is available in bucket "Y" 414. A second policy 408 might specify that class "B" requests should be admitted if a token exists in either bucket "X" 412 or bucket "Y" 414. A third policy 410 for class "C" requests 404 might indicate that requests should be admitted based on bucket "X" 412 alone. These examples are illustrative and many other combinations are possible.
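One plausible way to represent such a class-to-bucket mapping, modeled loosely on the FIG. 4 example and reusing the TokenBucket sketch above, is shown below (Python); the policy table, rates and names are illustrative assumptions rather than values from the disclosure. Each class maps to an ordered list of buckets, and a request is admitted if any listed bucket has a token available.

```python
# Illustrative buckets and per-class admittance policies modeled on FIG. 4.
bucket_x = TokenBucket(rate_per_second=20, max_tokens=100)
bucket_y = TokenBucket(rate_per_second=20, max_tokens=100)

ADMITTANCE_POLICY = {
    "A": [bucket_y],            # class "A": admitted only on a token in bucket "Y"
    "B": [bucket_x, bucket_y],  # class "B": admitted on a token in either bucket
    "C": [bucket_x],            # class "C": admitted only on a token in bucket "X"
}

def admitting_bucket(request_class: str):
    """Return the first bucket in the class's policy with a token available, or None."""
    for bucket in ADMITTANCE_POLICY[request_class]:
        bucket.refill()
        if bucket.tokens >= 1:
            return bucket
    return None
```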
[0049] In an embodiment, a request may be admitted based on a variable or predefined token threshold. One example involves admitting requests only when a bucket has a number of tokens equal to a predicted number of tokens that might be consumed. Another example involves using a moving average of previous costs to set the minimum number of tokens. Numerous additional embodiments are possible.
[0050] Admittance policy may also involve determining how tokens are deducted. When a request is first admitted, one or more tokens may be deducted from the bucket upon which the admittance was based. However, the total cost of the request may not be known until the request has been at least partially processed. Therefore, at some time after a request has been admitted, a second token deduction may occur. The policy may specify one or more buckets as the target of the deduction, and may also specify a distribution between the buckets.
[0051] In an embodiment, tokens may be withdrawn from the same bucket that determined the admittance. For example, if a request were admitted based on the presence of a token in bucket "X" 412, depicted in FIG. 4, the full token cost could also be deducted from bucket "X" 412. One reason to employ this approach is that otherwise a bucket having negative available tokens could fall further behind, prolonging a blackout period for requests that rely exclusively on that bucket.
[0052] Another embodiment deducts first from the bucket that determined the admittance, and then from one or more subsequent buckets. For example, if a request was allowed based on an available token in bucket "X" 412, a portion of the total cost, once determined, could be deducted from bucket "X" 412 and a portion from bucket "Y" 414. The amount deducted from the first bucket, 412, may be determined to prevent the available token count from falling below a threshold value such as zero. The remainder may be deducted from the second bucket or the last bucket in a series of buckets, possibly causing that bucket's available token count to fall below zero. Thus, in FIG. 4, using this approach to process class "B" requests 402 could cause the available token count for bucket "Y" 414 to become negative. The count for bucket "X" 412, however, would not become negative due to class "B" requests 402. This approach might be advantageous because it could prevent blackout periods for class "C" requests 404 that could otherwise be caused by processing class "B" requests 402.
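A minimal sketch of this split deduction, again building on the TokenBucket sketch (Python; the function name is an illustrative assumption): every bucket except the last is drawn down only to zero, and the last bucket in the series absorbs the remainder and may go negative.

```python
def settle_cost(buckets, total_cost: float) -> None:
    """Deduct `total_cost` across an ordered series of buckets.

    Every bucket except the last is only drawn down to zero; the last bucket
    absorbs whatever remains and may end up with a negative count.
    """
    remaining = total_cost
    for bucket in buckets[:-1]:
        portion = min(remaining, max(bucket.tokens, 0.0))  # never push this bucket below zero
        bucket.tokens -= portion
        remaining -= portion
    buckets[-1].tokens -= remaining  # the last bucket may go negative
```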
[0053] FIG. 5 depicts an embodiment for obtaining and applying an admittance policy. Although depicted as a sequence of operations beginning with operation 500 and ending with operation 516, those of ordinary skill in the art will appreciate that the depicted operations are intended to be illustrative, and that at least some of the depicted operations may be altered, omitted, reordered or performed in parallel.
[0054] A process for applying an admittance policy may involve receiving a request as depicted by operation 502. The class to which the request belongs may then be determined. This may be done through a variety of means. In an embodiment, the API associated with a request invocation may allow for one or more parameters that identify the class. One example involves a textual parameter that names the class that the request corresponds to. However, it may be advantageous to use numerical values to identify a request class because of various performance considerations.
[0055] In some embodiments, requests may be classified based on their type. For example, write requests may be classified separately from read requests. Other embodiments might analyze requests to determine their potential costs and assign requests with similar cost levels to the same class.
[0056] Request class may be based on factors in addition to or instead of those specified in request parameters. For example, a given customer identifier or security role might be associated with a request class. The customer or role might be available from the context in which a request was invoked, or it might be specified as a parameter in the request. Other potential factors include the source internet protocol address of the request, the particular API being invoked and so forth. In addition, configuration or other mechanisms may be used to define classification rules. For example, a configuration associated with a customer, a web service, an API or other entity might be associated with a request. Various default rules, possibly specified in the configuration, might also apply. A default value might be applied when no other classification rule is applicable. Embodiments may allow default values to be overridden.
Embodiments may also allow for certain default values to be fixed, so that they cannot be overridden,
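The classification rules discussed above might be organized along the following lines (Python). The specific rules, class labels and defaults here are illustrative assumptions; the disclosure leaves them to the particular embodiment or to customer configuration.

```python
def classify_request(operation: str, params: dict, context: dict) -> str:
    """Assign a request class ("high", "medium" or "low") from parameters and context.

    Checks an explicit class parameter first, then a role-based rule, then a
    type-based rule, and finally falls back to a default class.
    """
    if "request_class" in params:              # class named explicitly by the caller
        return params["request_class"]
    if context.get("role") == "reporting":     # security role associated with a class
        return "low"
    if operation in ("put", "update", "delete"):
        return "high"                          # e.g., treat writes as high priority
    return "medium"                            # default when no other rule applies
```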
[0057] Once the class of the request has been determined, a corresponding admittance policy may be received, obtained or otherwise readied for application as depicted by operation 504. This may involve accessing a record that describes the admittance policy, such as a list of buckets from which tokens may be withdrawn. The record might, for example, be stored in a database, embedded in a code resource, configuration file and so forth. In some embodiments, structures representing the buckets may be part of the policy description, or in other words the buckets and the policy definition may comprise an integrated structure. Some embodiments may omit this step and apply policy by selecting an appropriate path of execution in the instructions that perform the various techniques described herein.
[0058] The request may be admitted or rejected by operation 506. The admittance policy may describe one or more buckets which will be checked for an available token. Some embodiments may require multiple tokens or base admittance on the token count being at least above a threshold level. Some embodiments may allow a request to be admitted when the token count is negative, and the policy description might indicate a threshold negative value below which requests should not be admitted.
[0059] Operation 506 may also involve deducting at least one token. The number of tokens deducted may be the same amount of tokens that was used to determine whether or not the request should be admitted. In this way, another request will not be admitted based on the same token or set of tokens. Embodiments may also synchronize access to the buckets in order to prevent multiple requests from being admitted based on the same token or set of tokens.
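The synchronization mentioned above could be sketched as a lock held across the check-and-deduct step, building on the earlier TokenBucket sketch (Python; the class and method names are illustrative assumptions). Holding the lock over both the check and the deduction prevents two requests from being admitted on the strength of the same token.

```python
import threading

class SynchronizedBucket(TokenBucket):
    """TokenBucket variant whose check-and-deduct step is atomic."""

    def __init__(self, rate_per_second: float, max_tokens: float):
        super().__init__(rate_per_second, max_tokens)
        self._lock = threading.Lock()

    def try_admit(self, tokens_needed: float = 1.0) -> bool:
        """Atomically check for sufficient tokens and deduct them if present."""
        with self._lock:
            self.refill()
            if self.tokens < tokens_needed:
                return False
            self.tokens -= tokens_needed
            return True
```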
[0060] After being admitted, the request may be processed as depicted by operation 508. The total cost of the request may be determined based at least in part on processing the request. In various embodiments, the size of data returned to the client of the service may be used. For example, if the request was a database query, a token cost could be derived from the total size of the data returned to the client after executing the query. In other embodiments, various performance metrics might be collected while the request is processed and used as a basis of a cost determination.
[0061] Various embodiments may deduct the cost of the operation once the request has been performed. This step may be performed in compliance with the admittance policy, as depicted by operation 514. The admittance policy may describe various approaches to deducting the token cost, such as distributing among multiple buckets or deducting the cost from the bucket that determined the admittance. Various other policies may be employed.
[0062] In some embodiments token buckets may have a hierarchical relationship. This approach may allow for convenient administration of the selected admittance policy because it allows a prioritization scheme to be defined with a one-to-one mapping between request classes and token buckets. The hierarchical token buckets may be conveniently defined by specifying parent-child relationships between token buckets along with respective maximum token capacities. FIG. 6A is illustrative of an embodiment employing an example of hierarchical token buckets. It depicts two token buckets. Bucket "X" 608 is a parent bucket of 610, and has a maximum capacity of thirty tokens. Bucket "Y" 610 is a child of parent "X" 608, and has a maximum capacity of five tokens. The tokens are shared between the buckets in hierarchical fashion, with a child bucket having access to all of its parent's tokens. Thus, when both token buckets 608 and 610 are at maximum capacity, there are 30 tokens to be shared between them.
[0063] In FIG. 6A, class "A" requests 600 are associated with Bucket "Y" 610 based on application of class "A" policy 604. Similarly, class "B" requests are associated with Bucket "X" 608 based on class "B" policy 606. Admittance policies may comprise a one-to-one mapping between a request class and a bucket, which may be used to determine whether or not to admit a request and from which token bucket, or buckets, the tokens are withdrawn. An admittance policy may also comprise additional elements such as the minimum number of available tokens needed to admit a request. Various methods and techniques described herein regarding non-hierarchical approaches may also be applied to hierarchical approaches.
[0064] A request may be admitted based on the availability of at least one token in the token bucket that is mapped to the request's class. For example, class "A" requests 600 may be admitted when Bucket "Y" 610 has an available token. Similarly, class "B" requests 602 may be admitted when Bucket "X" 608 has an available token. Some embodiments may require that more than one token be available, based for example on an estimated cost of processing the request.
[0065] The token or tokens required for admittance may be deducted when the request is admitted for processing. Embodiments may deduct one token upon admitting the request. The remaining cost, once known, may be deducted from the same bucket that determined admittance.
[0066] FIG. 6B is a diagram depicting an operation 650 deducting two tokens from Bucket "X" 608. The pre-deduction state of Buckets "X" 608 and "Y" 610 is depicted by figure element 651, and the post-deduction state by element 652. The tokens are deducted from Bucket "X" 608 to result in a token count of twenty-eight. The tokens available to "Y" 610 remain unchanged.
[0067] In FIG. 6C, operation 660 depicts deducting two tokens from Bucket "Y" 610. The state before deduction is depicted by figure element 661. The state after the deduction, as depicted by element 662, shows that the tokens available to both Bucket "X" 608 and "Y" 610 have been reduced by two. This approach reflects sharing available tokens between the two buckets in a hierarchical fashion. When a request is admitted or processed on the basis of tokens being available in a child bucket, the tokens may be withdrawn from the child bucket and each of its parents. In addition, various embodiments may require that, in order for a request to be admitted, a token must be available in both the child bucket and in each of its parents. In FIG. 6C, a request could be admitted on the basis of the state before deduction 661, because both Bucket "Y" 610 and its parent Bucket "X" 608 have at least one token. On the other hand, embodiments may permit a parent bucket, such as Bucket "X" 608, to process requests even if there are insufficient tokens in a child bucket.
[0068] Operation 670 in FIG. 6D depicts deducting from Bucket "X" 608 a quantity of tokens that is more than the number available. Before the deduction, as depicted by element 671, token Bucket "X" 608 has thirty tokens available to it. After deducting thirty-five tokens, as depicted by state after deduction 672, its count has been reduced to negative five. FIG. 6D depicts Bucket "Y" 610 as having five tokens available to it after the deduction. For requests whose admittance depends upon Bucket "Y" 610, embodiments may require at least one token to be present in parent Bucket "X" 608. Accordingly, in these embodiments no requests dependent on Bucket "Y" 610 would be admitted until the number of tokens available to both Bucket "X" 608 and Bucket "Y" 610 rises above zero. However, embodiments may not reduce the number of tokens in a child bucket when tokens are deducted from its parent. Embodiments may also require at least one token to be present in each of the child bucket's parents. However, preventing the child's token count from going negative may help to prevent blackout periods for services associated with the child bucket.
[0069] Factors that may determine the length of a blackout period include the degree to which token counts in a child bucket and its parents are negative, and the child and parent buckets' respective refill rates. For example, in FIG. 6D, a request dependent on Bucket "Y" 610 may not be admitted until at least one token is available in both Bucket "X" 608 and Bucket "Y" 610. The rate at which Bucket "X" 608 refills with tokens may therefore influence the length of blackout periods seen with requests dependent on Bucket "Y" 610. Bucket "X" 608 may, however, be assigned an accumulation rate that is proportionally higher than that assigned to Bucket "Y" 610.
[0070] FIG. 6E depicts operation 680 deducting more tokens from Bucket "Y" 610 than are available to it. Before the deduction, Bucket "Y" 610 has five tokens available to it, as depicted in the portion of the figure at element 681. After deducting thirty-five tokens, Bucket "Y" 610 has negative thirty tokens available to it, while Bucket "X" 608 is left with negative five tokens. This state, depicted by element 682, reflects deducting from child Bucket "Y" 610, upon which admittance is based, and from its parent Bucket "X" 608.
[0071] Embodiments of hierarchical token buckets may employ a variety of algorithms, data structures and so forth. In an embodiment, a record may be used to track the number of tokens available to a parent token bucket. The number of tokens available to its children may then be determined based on a set of rules, an algorithm and so forth. For example, the number of tokens available to a child token bucket may be determined based on the number of tokens available to the parent token bucket and the maximum number of tokens the child token bucket is allowed to accumulate.
[0072] A hierarchy of token buckets may accumulate new tokens as a group. Using the token buckets depicted in FIG. 6A as a point of reference, a token accumulation rate may be associated with the hierarchy that includes Buckets "X" 608 and "Y" 610. When tokens are added to the hierarchy, the number of tokens available to both Buckets "X" 608 and "Y" 610 may increase up to their respective maximums.
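A minimal sketch of hierarchical buckets consistent with FIGS. 6A-6E and the two preceding paragraphs (Python). The class and method names are illustrative assumptions, and the sketch stores a count per bucket rather than deriving a child's count from its parent's record, which is only one of the representations the text allows: deductions on behalf of a child are charged to the child and every ancestor, admission requires tokens in the child and each ancestor, and group accumulation tops up each bucket to its own maximum.

```python
class HierarchicalBucket:
    """Sketch of a token bucket that shares tokens with its parent, per FIGS. 6A-6E."""

    def __init__(self, max_tokens: float, parent: "HierarchicalBucket" = None):
        self.max_tokens = max_tokens
        self.tokens = max_tokens
        self.parent = parent
        self.children: list["HierarchicalBucket"] = []
        if parent is not None:
            parent.children.append(self)

    def accumulate(self, new_tokens: float) -> None:
        """Group accumulation: the whole hierarchy gains tokens, capped per bucket."""
        self.tokens = min(self.max_tokens, self.tokens + new_tokens)
        for child in self.children:
            child.accumulate(new_tokens)

    def can_admit(self, needed: float = 1.0) -> bool:
        """A request needs tokens in this bucket and in each of its ancestors."""
        bucket = self
        while bucket is not None:
            if bucket.tokens < needed:
                return False
            bucket = bucket.parent
        return True

    def deduct(self, cost: float) -> None:
        """Charge a cost to this bucket and every ancestor; counts may go negative."""
        bucket = self
        while bucket is not None:
            bucket.tokens -= cost
            bucket = bucket.parent

# The FIG. 6A example: parent "X" with a maximum of thirty tokens, child "Y" with five.
bucket_x = HierarchicalBucket(max_tokens=30)
bucket_y = HierarchicalBucket(max_tokens=5, parent=bucket_x)
```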
[0073] FIG. 6F depicts a hierarchical arrangement of token buckets in which two or more request classes share buckets having equal priority. Class "A" requests 6000 may be directed to Bucket "X" 6010, and class "B" requests 6002 may be directed to Bucket "Y" 6012. Parent Bucket "P" 6008 may have a total capacity of 100, corresponding to Bucket "X" 6010 receiving an allocation of 80 tokens per second and Bucket "Y" 6012 receiving an allocation of 20 tokens per second. The maximum capacity of the buckets may be the same as their respective rates of token allocation.
[0074] An admission policy may be defined for the arrangement depicted in FIG. 6F in which the child buckets share equal priority. The admission policy may proceed as follows: upon the arrival of a request corresponding to class "A" requests 6000, Bucket "Y" 6012 may be consulted for token availability. If at least one token is available, the request may be admitted. The count of available tokens in Bucket "Y" 6012 and Parent Bucket "P" 6008 may each be reduced upon admittance and again upon processing the request. If insufficient tokens are available in Bucket "Y" 6012, the request may be admitted if there is at least one token in Parent Bucket "P" 6008. A token may be deducted from Parent Bucket "P" 6008 upon admittance and again upon processing the request. Class "B" requests 6002 may be processed in a similar manner by defining class "B" policy 6006 in a manner similar to class "A" policy 6004, adjusting for differences in token allocation rates.
[0075] A consequence of this approach involves requests from each class having access to their provisioned capacity, but being able to draw on additional capacity if the sibling bucket is underutilizing the capacity provisioned to it. For example, if only class "A" requests 6000 are issued, there will be up to 100 tokens per second available for processing them. If the workload is mixed and class "A" requests 6000 consume ten tokens per second, then class "B" requests 6002 will be able to consume up to 90 tokens per second.
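The check-own-bucket-then-parent behavior described above can be sketched as follows, building on the HierarchicalBucket sketch (Python; the function name is an illustrative assumption). Because an idle sibling leaves its unused allocation in the shared parent, the other class can draw on it through the parent fallback.

```python
def admit_with_fallback(child: "HierarchicalBucket", needed: float = 1.0) -> bool:
    """Admit against the class's own bucket if possible, otherwise against the parent.

    Deductions on the child also reduce the parent (shared pool); deductions made
    directly against the parent leave the sibling's own count untouched.
    """
    if child.tokens >= needed:
        child.deduct(needed)          # charges the child and its parent
        return True
    parent = child.parent
    if parent is not None and parent.tokens >= needed:
        parent.deduct(needed)         # spill over into the shared parent capacity
        return True
    return False
```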
[0076] Aspects of the admittance policy may be defined by a customer of a hosting provider. The definition may be performed by the customer's interaction with a user interface provided by the service provider. For example, a web page might allow the user to define the classes and the respective buckets from which tokens may be withdrawn. The user might also set various additional parameters, such as a minimum token amount for requests in each class.
[0077] Various embodiments may provide means for administering the policies that govern input/output prioritization, which may for example include defining request classes, token buckets, admittance policies and so forth. In FIG. 7A, user interface 700 may comprise part of a sequence of user interface pages presented to a user during creation of a hosted database table. The previous step 701 user interface component may represent a button, hyperlink or similar element for navigating to a previous point in a user interface wizard for defining a hosted database table. The create table 702 user interface element may indicate that the user has finished supplying parameters to the table creation process and that a table should be created. The depicted user interface elements are intended to be generalized examples of an approach to providing such an interface, and should not be construed as limiting the scope of the present disclosure.
[0078] User interface 700 may contain one or more policy definitions 704a, 704b and 704c, which may be used to supply parameters indicative of the relationship between a class of request and one or more buckets from which tokens may be withdrawn, and possibly other various elements of an admittance policy. For example, policy definition element 704a may comprise a class indication 706a, primary bucket indication 708a, and secondary bucket indication 710a. The class indication 706a may comprise various parameters that describe a class, which may include a class name and one or more request types. In some embodiments, the request classes and request types are pre-defined by the hosting service provider. A number of additional class indications may be presented in user interface 700, such as policy definition 704b comprising class indication 706b, primary bucket indication 708b and secondary bucket indication 710b, and policy definition 704c comprising class indication 706c, primary bucket indication 708c and secondary bucket indication 710c.
[0079] Primary bucket element 708a and secondary bucket element 710a indicate the token buckets that comprise part of the admittance policy, as well as their respective priorities. For example, the token bucket indicated by 708a would be consulted first in applying an admittance decision for requests that fall into the class specified by class indication 706a. The token bucket specified by secondary token bucket 710a would be consulted second. Policy definitions 704b and 704c may refer to the same token buckets or to overlapping sets of token buckets.
[0080] FIG. 7B depicts an illustrative embodiment of a user interface for defining admittance policies employing hierarchical token buckets. User interface 750 may comprise part of a table definition process that employs previous step 751 and create table 752 user interface elements to navigate to other steps in the process and to indicate that the process should be completed.
[0081] User interface 750 may comprise one or more policy definitions 754a, 754b and 754c that allow the customer to supply parameters for creating admittance policies and hierarchical token buckets. For example, policy definition 754a might include a bucket name 756a user interface element indicative of the name of a token bucket. In some embodiments this may be a drop box or other user interface element containing predefined bucket names. Request classes 760a might comprise a combo box, list box or other user interface element allowing request classes to be assigned to the bucket indicated by bucket name 756a. The child token bucket 758a user interface element might be used to specify one or more child token buckets, such as Bucket "Y" 610 depicted in FIG. 6A. This may be a list box or other user interface element allowing one or more children token buckets to be selected. In some embodiments, the parent token bucket might be specified in place of one or more child token buckets. User interface 750 may allow for a number of additional policy definitions to be defined. For example, user interface 750 also contains user interface elements for defining or editing policy definition 754b, which comprises bucket name 756b, request classes 760b and child bucket 758b, and policy definition 754c, which comprises bucket name 756c, request classes 760c and child bucket 758c.
[0082] Both FIGS. 7A and 7B are depicted as providing user interface elements for specifying a token bucket, such as primary bucket 708a in FIG. 7A and bucket name 756a in FIG. 7B. However, user interfaces 700 and 750 may use alternative representations for directly or indirectly specifying a token bucket and corresponding request classes. For example, user interface 700 might present a choice of high, medium or low priority that could be associated with a request class. The relationships between buckets could be inferred from the priority level selected by a user. A similar approach could be employed in user interface 750.
[0083] Embodiments may employ user interfaces similar to those depicted in FIGS. 7A and 7B to allow customers to subsequently edit request classes, buckets, relationships between buckets and so forth. One example involves changing the definition of a table. A customer might make various modifications to the definition of a database table, for example by adding additional columns. The modified table might be associated with a different usage pattern. Accordingly, the customer might also specify a change in the capacity allocated to the table, the admittance policy, the number of buckets and so forth. Various user interface elements or APIs might be used to supply the relevant information.
[0084] The example user interfaces depicted in FIGS. 7A and 7B may be implemented using a wide variety of technologies, including thick-client, thin-client or other architectures. In an embodiment, a web server operating in a hosting provider's data center serves hypertext markup language ("HTML") forms to a customer's browser. Upon submitting the forms, a web service within a hosting provider data center receives and processes the information supplied by the customer.
[0085] FIG. 8 is a flowchart depicting a process for creating a database table with associated token buckets and admittance policies. Although depicted as a sequence of operations starting with operation 800 and ending with operation 814, it will be appreciated by those of ordinary skill in the art that at least part of the depicted operations may be altered, omitted, reordered or performed in parallel. For example, the information indicated by operations 802-808 may be received concurrently at the hosting provider data center.
[0086] Operation 802 depicts receiving information indicative of a table definition. Embodiments of the present disclosure may allocate capacity on a per-table basis, or on a per-partition basis if the defined table involves more than one partition. A partition may comprise a subdivision of a table, each of which may be maintained by a DBMS operating on one or more computing nodes. Because capacity may be allocated on a per-table or per-partition basis, admittance policies and token buckets may be defined on a similar basis.
[0087] Operation 804 depicts receiving information indicative of one or more classes of requests. These classes may be defined by the user, for example through a user interface such as those depicted in FIGS. 7A and 7B. In other embodiments, the hosting provider may predefine the request classes. In an embodiment, the request classes are labeled as "high," "medium" and "low."
[0088] Operation 806 depicts receiving information indicative of the buckets that should be created. In some embodiments the information may comprise a listing or count of the buckets to be created, while in others the information may be inferred. For example, a one-to-one correspondence between request classes and token buckets may be inferred. In an embodiment, three token buckets may be created to correspond to the "high," "medium" and "low" request classes.
[0089] At operation 808, information indicative of one or more admittance policies may be received. This information may comprise a mapping between request classes and buckets, and may include information indicative of the order in which buckets should be consulted to determine admittance, a method of deducting tokens and so forth. The information may be combined with other information referred to by operations 802-806. Certain aspects of the information may be determined inferentially, or be used to infer other aspects of the information received in operations 802-806. For example, in some embodiments a policy description that references a bucket may be used to infer that the referenced bucket should be created.
[0090] At operation 810, a partitioning scheme may be determined. The table or other service to be hosted may be divided among multiple computing nodes. Accordingly, embodiments may determine how many and which computing nodes to involve, as well as other aspects of partitioning the table, such as determining how to divide the data maintained by the table. For services not involving tables, this may involve determining how to divide the workloads handled by the respective partitions.
[0091] Based on the partitioning scheme, capacity may be allocated to the various partitions or computing nodes. Per-customer capacity may, for example, be divided evenly between the partitions, or it may be divided based on the amount of workload a partition or computing node is expected to handle. For example, if a first partition is expected to handle three-fourths of a table's workload, it may be allocated three-fourths of the capacity.
[0092] Allocating per-customer capacity to a partition may involve assigning a proportion of a total amount of token generation to a partition. For example, it may be determined based at least in part on a customer's service tier that the customer should be allocated a given quantity of tokens per second. Continuing the previous example, three-fourths of that capacity could be allocated to one partition or computing node and the remaining one-fourth to another. This is depicted by operation 810.
[0093] The total per-customer capacity assigned to a partition may be suballocated to the token buckets to be created on that partition, as depicted by operation 812. Continuing the previous example, if three-fourths of the total capacity corresponded to token generation at a rate of seventy-five tokens per second, then a total of seventy-five tokens per second could be allocated to the token buckets associated with that partition or computing node. If there were three token buckets for that partition, then each could be allocated twenty-five tokens per second.
[0094] Once determined, the per-bucket token allocation rate may be used to create, initialize or otherwise represent a token bucket. In various embodiments, creating a bucket may comprise initializing various data structures, such as a record comprising a maximum token capacity, a current capacity, a token allocation rate and a last addition time. Numerous other embodiments are possible. For example, in some embodiments there may not be a one-to-one correspondence between logically defined buckets and data structures stored in memory.
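The suballocation arithmetic in the example above, written out as a short helper (Python; the function name is an illustrative assumption, and the even split across buckets follows the example in the text).

```python
def per_bucket_rate(customer_rate: float, partition_share: float, num_buckets: int) -> float:
    """Suballocate a customer's token generation rate to the buckets of one partition.

    Example from the text: a 100 tokens/second customer rate, a partition handling
    three-fourths of the workload, and three buckets gives 25 tokens/second per bucket.
    """
    partition_rate = customer_rate * partition_share  # e.g. 100 * 0.75 = 75 tokens/s
    return partition_rate / num_buckets               # e.g. 75 / 3 = 25 tokens/s

print(per_bucket_rate(100.0, 0.75, 3))  # -> 25.0
```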
[0095] The operations depicted in FIG. 8 may also be adapted to allow for updates. For example, operation 802 could comprise receiving information indicative of a change to the definition of an existing table. In addition, information pertaining to request classes, bucket definitions and relationships, admittance policies, partitioning scheme and so forth can be received subsequent to their initial definition, and the corresponding entities and relationships updated accordingly.
[0096] FIG. 9 is a diagram depicting an example of a distributed computing environment in which aspects of the present invention may be practiced. Various users 900a may interact with various client applications, operating on any type of computing device 903a, to communicate over communications network 904 with processes executing on various computing nodes 910a, 910b and 910c within a data center 920. Alternatively, client applications 903b may communicate without user intervention. Communications network 904 may comprise any combination of communications technology, including the Internet, wired and wireless local area networks, fiber-optic networks, satellite communications and so forth. Any number of networking protocols may be employed.
[0097] Communication with processes executing on the computing nodes 910a, 910b and 910c, operating within data center 920, may be provided via gateway 906 and router 908. Numerous other network configurations may also be employed. Although not explicitly depicted in FIG. 9, various authentication mechanisms, web service layers, business objects or other intermediate layers may be provided to mediate communication with the processes executing on computing nodes 910a, 910b and 910c. Some of these intermediate layers may themselves comprise processes executing on one or more of the computing nodes. Computing nodes 910a, 910b and 910c, and processes executing thereon, may also communicate with each other via router 908. Alternatively, separate communication paths may be employed. In some embodiments, data center 920 may be configured to communicate with additional data centers, such that the computing nodes and processes executing thereon may communicate with computing nodes and processes operating within other data centers.
[0098] Computing node 910a is depicted as residing on physical hardware comprising one or more processors 916, one or more memories 918 and one or more storage devices 914. Processes on computing node 910a may execute in conjunction with an operating system, or may alternatively execute as a bare-metal process that directly interacts with physical resources such as processors 916, memories 918 or storage devices 914.
[0099] Computing nodes 910b and 910c are depicted as operating on virtual machine host 912, which may provide shared access to various physical resources such as physical processors, memory and storage devices. Any number of virtualization mechanisms might be employed to host the computing nodes.
[0100] The various computing nodes depicted in FIG. 9 may be configured to host web services, database management systems, business objects, monitoring and diagnostic facilities, and so forth. A computing node may refer to various types of computing resources, such as personal computers, servers, clustered computing devices and so forth. When implemented in hardware form, computing nodes are generally associated with one or more memories configured to store computer-readable instructions, and one or more processors configured to read and execute the instructions. A hardware-based computing node may also comprise one or more storage devices, network interfaces, communications buses, user interface devices and so forth. Computing nodes also encompass virtualized computing resources, such as virtual machines implemented with or without a hypervisor, virtualized bare-metal environments, and so forth. A virtualization-based computing node may have virtualized access to hardware resources, as well as non-virtualized access. The computing node may be configured to execute an operating system, as well as one or more application programs. In some embodiments, a computing node might also comprise bare-metal application programs.
[0101] Each of the processes, methods and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computers or computer processors. The code modules may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid state memory, optical disc and/or the like. The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of non-transitory computer storage such as, e.g., volatile or non-volatile storage.
[0102] The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from or rearranged compared to the disclosed example embodiments.
[0103] It will also be appreciated that various items are illustrated as being stored in memory or on storage while being used, and that these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc. Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network, or a portable media article to be read by an appropriate drive or via an appropriate connection. The systems, modules and data structures may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, the present invention may be practiced with other computer system configurations.
[0104] The foregoing may further be understood in view of the following clauses:
1. A system for prioritizing capacity consumption of a database management system, the system comprising:
one or more computing nodes configured to operate the database management system; and
one or more memories having stored thereon computer readable instructions that, upon execution, cause the system at least to:
receive a request to perform an operation on the database management system, the request comprising information indicative of a request class, the operation to be performed on behalf of a customer;
select a first token bucket from one or more data structures comprising a plurality of token buckets, the selection based at least in part on the request class, the first token bucket having an associated first capacity indicator;
determine that the first capacity indicator is indicative of a capacity to perform the operation on behalf of the customer;
perform the operation; and
update the first capacity indicator based at least in part on capacity utilized by performing the operation.
2. The system of clause 1, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of an association between the request class and the first token bucket.
3. The system of clause 1, comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
update a second capacity indicator associated with a second token bucket of the plurality of token buckets, based at least in part on the capacity utilized by performing the operation.
4. The system of clause 1, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
determine that a second capacity indicator associated with a second token bucket of the plurality of token buckets is indicative of a lack of capacity to perform the operation on behalf of the customer.
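The system clauses above trace a single admission decision: select a token bucket based on the request's class, check whether its capacity indicator admits the operation, perform the operation, and then deduct the capacity actually consumed. The following minimal sketch illustrates that flow; the class names, request classes and token counts are illustrative assumptions, not taken from the disclosure.

```python
class TokenBucket:
    """A capacity indicator: a count of capacity units available to a request class."""
    def __init__(self, tokens):
        self.tokens = tokens

    def has_capacity(self):
        return self.tokens > 0

    def deduct(self, cost):
        # Tokens may go negative; later replenishment repays the deficit.
        self.tokens -= cost


# One bucket per request class; selection is driven by the class named in the request.
buckets = {"interactive": TokenBucket(tokens=100),
           "batch": TokenBucket(tokens=25)}


def handle_request(request_class, perform_operation):
    bucket = buckets[request_class]          # select the first token bucket by request class
    if not bucket.has_capacity():            # indicator shows a lack of capacity: reject or defer
        raise RuntimeError("throttled: no capacity for class %r" % request_class)
    cost = perform_operation()               # returns the capacity units actually consumed
    bucket.deduct(cost)                      # update the indicator by the utilized capacity
    return cost
```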
5. A computer-implemented method for prioritizing capacity consumption comprising:
receiving a request to perform an operation on one or more computing nodes, the request comprising information indicative of a request class, the operation to be performed on behalf of a customer;
selecting, based at least in part on the request class, a first data structure from a plurality of data structures, wherein the first data structure comprises a first capacity indicator;
determining that the first capacity indicator is indicative of a capacity of the one or more computing nodes to perform the operation on behalf of the customer;
performing the operation; and
updating the first capacity indicator based at least in part on capacity utilized performing the operation.
6. The method of clause 5, wherein the operation is performed on one or more of a web service, web site, and database management system.
7. The method of clause 5, wherein the information indicative of the request class comprises a parameter.
8. The method of clause 5, wherein the information indicative of the request class comprises a configuration value.
9. The method of clause 5, further comprising one or more of a customer identifier, security role, and application programming interface.
10. The method of clause 5, further comprising:
receiving information indicative of a mapping between the request class and the first capacity indicator.
11. The method of clause 5, further comprising:
determining that a second capacity indicator from the plurality of data structures is indicative of a lack of capacity to perform the operation on behalf of the customer.
12. The method of clause 5, further comprising:
updating a second state of a second capacity indicator from the plurality of data structures based at least in part on the capacity utilized performing the operation.
13. The method of clause 5, wherein the first capacity indicator and one or more additional capacity indicators from the plurality of data structures share one or more memory locations in the plurality of data structures indicative of capacity to perform the operation on behalf of the customer.
14. The method of clause 5, wherein the first capacity indicator corresponds to a subset of a total capacity of the one or more computing nodes.
15. The method of clause 5, wherein the first capacity indicator comprises a count of units of capacity available for performing operations on behalf of the customer.
16. The method of clause 15, wherein the count is increased based at least in part on a rate of allocated capacity accumulation.
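Clauses 15 and 16 characterize the capacity indicator as a count of capacity units that grows at a rate of allocated capacity accumulation, up to some maximum of accumulated, unused capacity. One common way to realize such accumulation is lazy refill on access; the sketch below is an illustration under that assumption, not necessarily the embodiment intended here.

```python
import time


class RefillingBucket:
    def __init__(self, refill_rate, maximum):
        self.refill_rate = refill_rate       # allocated capacity units accumulated per second
        self.maximum = maximum               # cap on accumulated, unused capacity
        self.tokens = maximum
        self.last_refill = time.monotonic()

    def refill(self):
        # Increase the count based on the rate of allocated capacity accumulation.
        now = time.monotonic()
        self.tokens = min(self.maximum,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now

    def try_consume(self, cost):
        self.refill()
        if self.tokens <= 0:
            return False
        self.tokens -= cost
        return True
```

For example, a bucket created as RefillingBucket(refill_rate=100, maximum=300) can absorb a burst of roughly 300 capacity units and thereafter admits about 100 units per second.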
17. A non-transitory computer-readable storage medium having stored thereon instructions that, upon execution by a computing device, cause the computing device to at least:
receive a request to perform an operation on one or more computing nodes, the request comprising information indicative of a request class;
select, based at least in part on the request class, a first data structure from a plurality of data structures, wherein the first data structure comprises a first capacity indicator;
determine that the first capacity indicator is indicative of sufficient capacity to admit the operation for processing;
perform the operation; and
update the first capacity indicator based at least in part on capacity utilized to perform the operation.
18. The computer-readable storage medium of clause 17, wherein the information indicative of the request class comprises a parameter.
19. The computer-readable storage medium of clause 17, having stored thereon further instructions that, upon execution by the computing device, cause the computing device to at least:
determine that a second capacity indicator from the plurality of data structures is indicative of a lack of capacity to perform the operation.
20. The computer-readable storage medium of clause 17, having stored thereon further instructions that, upon execution by the computing device, cause the computing device to at least:
update a second capacity indicator from the plurality of data structures when the first capacity indicator is indicative of a lack of capacity to perform the operation, wherein the update is based at least in part on the capacity utilized to perform the operation.
21. The computer-readable storage medium of clause 17, wherein the first capacity indicator comprises a count of units of capacity available to perform the operation.
22. The computer-readable storage medium of clause 21, wherein the count is increased based at least in part on a rate of allocated capacity accumulation.
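Clauses 19, 20 and 22 introduce a second capacity indicator that is consulted, and charged, when the first indicates a lack of capacity. The sketch below reads that second indicator as a shared fallback pool; that interpretation, and the names used, are assumptions made only for illustration.

```python
class Bucket:
    def __init__(self, tokens):
        self.tokens = tokens

    def try_consume(self, cost):
        if self.tokens <= 0:
            return False
        self.tokens -= cost
        return True


def consume_with_fallback(class_bucket, shared_bucket, cost):
    """Charge the request class's own bucket when it has capacity; otherwise charge the shared bucket."""
    if class_bucket.try_consume(cost):
        return True                      # first capacity indicator admitted the operation
    if shared_bucket.try_consume(cost):  # first indicator showed a lack of capacity,
        return True                      # so the second indicator is updated instead
    return False                         # neither indicator admits the operation


# Example: a "batch" class with a small dedicated bucket, backed by a shared pool.
batch = Bucket(tokens=5)
shared = Bucket(tokens=50)
admitted = consume_with_fallback(batch, shared, cost=8)
```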
23. A system for prioritizing capacity consumption, the system comprising:
one or more computing nodes; and
one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of one or more request classes;
receive information indicative of a mapping between the one or more request classes and one or more data structures, wherein the data structures comprise a first capacity indicator;
allocate a subset of total capacity to perform an operation on one or more computing nodes to the first capacity indicator; and
perform the operation upon determining that the first capacity indicator is indicative of capacity available to perform the operation on the one or more computing nodes.
24. The system of clause 23, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of instructions to create a database table on behalf of a customer.
25. The system of clause 23, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of instructions to modify a database table; and
receive information indicative of instructions to modify capacity allocated to the database table.
26. The system of clause 24, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
determine the subset of total capacity based at least in part on a number of partitions of the database table.
27. The system of clause 23, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
send user interface instructions comprising instructions for accepting customer input, the customer input comprising the information indicative of the mapping.
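Clauses 23 through 27 cover configuration: receiving the request classes and the customer-supplied mapping from classes to buckets, and allocating to each bucket a subset of total capacity, which may be derived from the number of partitions of a database table. A sketch of such a configuration step follows; the function name, share fractions and capacity figures are hypothetical.

```python
def build_buckets(total_table_capacity, partition_count, class_shares):
    """Split a table's provisioned capacity across partitions, then across request classes.

    class_shares maps a request class name to the fraction of per-partition
    capacity allocated to that class's bucket.
    """
    per_partition = total_table_capacity / partition_count
    return {request_class: {"allocated": per_partition * share,
                            "tokens": per_partition * share}
            for request_class, share in class_shares.items()}


# Example mapping received as customer input: two request classes per partition.
mapping = {"interactive": 0.8, "batch": 0.2}
buckets = build_buckets(total_table_capacity=1000, partition_count=4, class_shares=mapping)
# -> {'interactive': {'allocated': 200.0, 'tokens': 200.0},
#     'batch': {'allocated': 50.0, 'tokens': 50.0}}
```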
[0105] Conditional language used herein, such as, among others, "can," "could," "might," "may," "e.g." and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having" and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some or all of the elements in the list.
[0106] While certain example embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions disclosed herein. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, module or block is necessary or indispensable. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions disclosed herein. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of certain of the inventions disclosed herein.

Claims

1. A system for prioritizing capacity consumption of a database management system, the system comprising:
one or more computing nodes configured to operate the database management system; and
one or more memories having stored thereon computer readable instructions that, upon execution, cause the system at least to:
receive a request to perform an operation on the database management system, the request comprising information indicative of a request class, the operation to be performed on behalf of a customer;
select a first token bucket from one or more data structures comprising a plurality of token buckets, the selection based at least in part on the request class, the first token bucket having an associated first capacity indicator;
determine that the first capacity indicator is indicative of a capacity to perform the operation on behalf of the customer;
perform the operation; and
update the first capacity indicator based at least in part on capacity utilized by performing the operation.
2. The system of claim 1, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of an association between the request class and the first token bucket.
3. The system of claim 1, comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
update a second capacity indicator associated with a second token bucket of the plurality of token buckets, based at least in part on the capacity utilized by performing the operation.
4. The system of claim 1, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
determine that a second capacity indicator associated with a second token bucket of the plurality of token buckets is indicative of a lack of capacity to perform the operation on behalf of the customer.
5. A computer-implemented method for prioritizing capacity consumption comprising:
receiving a request to perform an operation on one or more computing nodes, the request comprising information indicative of a request class, the operation to be performed on behalf of a customer;
selecting, based at least in part on the request class, a first data structure from a plurality of data structures, wherein the first data structure comprises a first capacity indicator;
determining that the first capacity indicator is indicative of a capacity of the one or more computing nodes to perform the operation on behalf of the customer;
performing the operation; and
updating the first capacity indicator based at least in part on capacity utilized performing the operation.
6. The method of claim 5, wherein the operation is performed on one or more of a web service, web site, and database management system.
7. The method of claim 5, further comprising:
determining that a second capacity indicator from the plurality of data structures is indicative of a lack of capacity to perform the operation on behalf of the customer.
8. The method of claim 5, further comprising:
updating a second state of a second capacity indicator from the plurality of data structures based at least in part on the capacity utilized performing the operation.
9. The method of claim 5, wherein the first capacity indicator and one or more additional capacity indicators from the plurality of data structures share one or more memory locations in the plurality of data structures indicative of capacity to perform the operation on behalf of the customer.
10. The method of claim 5, wherein the first capacity indicator comprises a count of units of capacity available for performing operations on behalf of the customer.
11. The method of claim 10, wherein the count is increased based at least in part on a rate of allocated capacity accumulation.
12. A system for prioritizing capacity consumption, the system comprising:
one or more computing nodes; and
one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of one or more request classes;
receive information indicative of a mapping between the one or more request classes and one or more data structures, wherein the data structures comprise a first capacity indicator;
allocate a subset of total capacity to perform an operation on one or more computing nodes to the first capacity indicator; and
perform the operation upon determining that the first capacity indicator is indicative of capacity available to perform the operation on the one or more computing nodes.
13. The system of claim 12, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
receive information indicative of instructions to modify a database table; and receive information indicative of instructions to modify capacity allocated to the database table.
14. The system of claim 13, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
determine the subset of total capacity based at least in part on a number of partitions of the database table.
15. The system of claim 12, further comprising one or more memories having stored thereon computer readable instructions that, upon execution by a computing device, cause the system at least to:
send user interface instructions comprising instructions for accepting customer input, the customer input comprising the information indicative of the mapping.
PCT/US2014/038477 2013-05-17 2014-05-16 Input-output prioritization for database workload WO2014186756A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201480034626.2A CN105431815B (en) 2013-05-17 2014-05-16 Input-output for data base workload is prioritized
JP2016514144A JP6584388B2 (en) 2013-05-17 2014-05-16 Prioritizing I / O to database workload
EP14798257.3A EP2997460A4 (en) 2013-05-17 2014-05-16 Input-output prioritization for database workload
CA2912691A CA2912691C (en) 2013-05-17 2014-05-16 Input-output prioritization for database workload

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/897,232 US9262505B2 (en) 2013-05-17 2013-05-17 Input-output prioritization for database workload
US13/897,232 2013-05-17

Publications (1)

Publication Number Publication Date
WO2014186756A1 true WO2014186756A1 (en) 2014-11-20

Family

ID=51896648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/038477 WO2014186756A1 (en) 2013-05-17 2014-05-16 Input-output prioritization for database workload

Country Status (6)

Country Link
US (1) US9262505B2 (en)
EP (1) EP2997460A4 (en)
JP (1) JP6584388B2 (en)
CN (1) CN105431815B (en)
CA (1) CA2912691C (en)
WO (1) WO2014186756A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9876853B2 (en) * 2015-08-19 2018-01-23 International Business Machines Corporation Storlet workflow optimization leveraging clustered file system placement optimization features
US10387578B1 (en) * 2015-09-28 2019-08-20 Amazon Technologies, Inc. Utilization limiting for nested object queries
US10963375B1 (en) * 2018-03-23 2021-03-30 Amazon Technologies, Inc. Managing maintenance operations for a distributed system
CN109299190B (en) * 2018-09-10 2020-11-17 华为技术有限公司 Method and device for processing metadata of object in distributed storage system
CN111835655B (en) * 2020-07-13 2022-06-28 北京轻网科技有限公司 Method, device and storage medium for limiting speed of shared bandwidth
CN115578080B (en) * 2022-12-08 2023-04-18 长沙软工信息科技有限公司 Cost reference library workload verification method based on informatization system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6094654A (en) * 1996-12-06 2000-07-25 International Business Machines Corporation Data management system for file and database management
WO2008121690A2 (en) * 2007-03-30 2008-10-09 Packeteer, Inc. Data and control plane architecture for network application traffic management device
US20100183304A1 (en) * 2009-01-20 2010-07-22 Pmc Sierra Ltd. Dynamic bandwidth allocation in a passive optical network in which different optical network units transmit at different rates
US20110246481A1 (en) * 2010-03-31 2011-10-06 Greenplum, Inc. Apparatus and Method for Query Prioritization in a Shared Nothing Distributed Database

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027394B2 (en) * 2000-09-22 2006-04-11 Narad Networks, Inc. Broadband system with traffic policing and transmission scheduling
US7421431B2 (en) * 2002-12-20 2008-09-02 Intel Corporation Providing access to system management information
US7406399B2 (en) * 2003-08-26 2008-07-29 Siemens Energy & Automation, Inc. System and method for distributed reporting of machine performance
US7689394B2 (en) * 2003-08-26 2010-03-30 Siemens Industry, Inc. System and method for remotely analyzing machine performance
CN101696659B (en) * 2003-09-02 2014-11-12 株式会社小松制作所 Engine control device
EP1927217B1 (en) * 2005-08-23 2009-12-09 TELEFONAKTIEBOLAGET LM ERICSSON (publ) Aggregated resource reservation for data flows
EP2014742B1 (en) * 2006-04-28 2018-08-08 JP Steel Plantech Co. Glowing coke delivering equipment and method of delivering the same
CN100502315C (en) * 2006-05-18 2009-06-17 华为技术有限公司 Business flow monitoring method and system
JP4717768B2 (en) * 2006-09-20 2011-07-06 富士通テレコムネットワークス株式会社 Token bucket method and router using the same
US7711789B1 (en) * 2007-12-07 2010-05-04 3 Leaf Systems, Inc. Quality of service in virtual computing environments
US8195832B2 (en) * 2007-12-12 2012-06-05 Alcatel Lucent Facilitating management of layer 2 hardware address table based on packet priority information
JP5107016B2 (en) * 2007-12-17 2012-12-26 Kddi株式会社 Buffer device and program using token bucket
JP5093043B2 (en) * 2008-10-14 2012-12-05 富士通株式会社 Rate monitoring device
US8527947B2 (en) * 2008-12-28 2013-09-03 International Business Machines Corporation Selective notifications according to merge distance for software version branches within a software configuration management system
US8713060B2 (en) * 2009-03-31 2014-04-29 Amazon Technologies, Inc. Control service for relational data management
US8335123B2 (en) * 2009-11-20 2012-12-18 Sandisk Technologies Inc. Power management of memory systems
US8190593B1 (en) * 2010-04-14 2012-05-29 A9.Com, Inc. Dynamic request throttling
US8201350B2 (en) * 2010-05-28 2012-06-19 Caterpillar Inc. Machine bucket
US8661120B2 (en) * 2010-09-21 2014-02-25 Amazon Technologies, Inc. Methods and systems for dynamically managing requests for computing capacity
US9069616B2 (en) * 2011-09-23 2015-06-30 Google Inc. Bandwidth throttling of virtual disks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6094654A (en) * 1996-12-06 2000-07-25 International Business Machines Corporation Data management system for file and database management
WO2008121690A2 (en) * 2007-03-30 2008-10-09 Packeteer, Inc. Data and control plane architecture for network application traffic management device
US20100183304A1 (en) * 2009-01-20 2010-07-22 Pmc Sierra Ltd. Dynamic bandwidth allocation in a passive optical network in which different optical network units transmit at different rates
US20110246481A1 (en) * 2010-03-31 2011-10-06 Greenplum, Inc. Apparatus and Method for Query Prioritization in a Shared Nothing Distributed Database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2997460A4 *

Also Published As

Publication number Publication date
US9262505B2 (en) 2016-02-16
JP2016528578A (en) 2016-09-15
CA2912691A1 (en) 2014-11-20
EP2997460A4 (en) 2017-01-25
US20140344312A1 (en) 2014-11-20
EP2997460A1 (en) 2016-03-23
CN105431815A (en) 2016-03-23
JP6584388B2 (en) 2019-10-02
CN105431815B (en) 2019-02-01
CA2912691C (en) 2020-06-16

Similar Documents

Publication Publication Date Title
WO2014186756A1 (en) Input-output prioritization for database workload
US9613037B2 (en) Resource allocation for migration within a multi-tiered system
Liu et al. A low-cost multi-failure resilient replication scheme for high-data availability in cloud storage
US8554993B2 (en) Distributed content storage and retrieval
JP5039925B2 (en) Apparatus and method for controlling operational risk of data processing
KR101865318B1 (en) Burst mode control
US9800575B1 (en) Assigning storage responsibility in a distributed data storage system with replication
US20180165109A1 (en) Predictive virtual server scheduling and optimization of dynamic consumable resources to achieve priority-based workload performance objectives
US10389794B2 (en) Managing redundancy among application bundles
CN108282501A (en) A kind of Cloud Server resource information synchronous method, device and system
EP3644185A1 (en) Method and system for intelligently load balancing database backup operations in information technology environments
Limam et al. Data replication strategy with satisfaction of availability, performance and tenant budget requirements
US20230327875A1 (en) Data flow control in distributed computing systems
JP2017138895A (en) Virtualization environment management system and virtualization environment management method
US7792966B2 (en) Zone control weights
CN106487854A (en) Storage resource distribution method, device and system
Hsu et al. A proactive, cost-aware, optimized data replication strategy in geo-distributed cloud datastores
US10523756B1 (en) Network service for identifying infrequently accessed data in a data stream
US9934268B2 (en) Providing consistent tenant experiences for multi-tenant databases
Gogouvitis et al. OPTIMIS and VISION cloud: how to manage data in clouds
Mariescu-Istodor et al. VRPDiv: A Divide and Conquer Framework for Large Vehicle Routing Problems
US20160034476A1 (en) File management method
Fan et al. High-reliability virtual network resource allocation algorithm based on Service Priority in 5G Network Slicing
Ali et al. Vigorous replication strategy with balanced quorum for minimizing the storage consumption and response time in cloud environments
Sajal et al. Kerveros: Efficient and Scalable Cloud Admission Control

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201480034626.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14798257

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2912691

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2016514144

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2014798257

Country of ref document: EP