QUERY DEPLOYMENT PLAN FOR A DISTRIBUTED SHARED STREAM
PROCESSING SYSTEM
CROSS-REFERENCE TO RELATED APPLICATION
[0000] The present application claims priority from provisional application Serial No. 61/024,300, filed January 29, 2008, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
[0001] Over the past few years, stream processing systems (SPSs) have gained considerable attention in a wide range of applications including planetary-scale sensor networks or "macroscopes", network performance and security monitoring, multi-player online games and feed-based information mash-ups. These SPSs are characterized by a large number of geographically dispersed entities, including data publishers that generate potentially large volumes of data streams and clients that register a large number of concurrent queries over these data streams. For example, the clients send queries to the data publishers to receive certain processing results.
[0002] SPSs should provide high network and workload scalability to be able to provide the clients with the requested data streams. The high network scalability refers to the ability to gracefully deal with an increasing geographical distribution of system components, whereas the workload scalability addresses a large number of simultaneous user queries. To achieve both types of scalability, an SPS should be able to scale out and distribute its processing across multiple nodes in the network.
[0003] Distributed versions of SPSs have been proposed, but deployment of these distributed SPSs can be difficult. The difficulties associated with deploying SPSs are further exacerbated when the deployment is for SPSs handling stream-based queries in shared processing environments, where applications share processing components. First, applications often express Quality-of-Service (QoS) specifications which describe the relationship between various characteristics of the output and its usefulness, e.g., utility, response delay, end-to-end loss rate or latency, etc. For example, in many real-time financial applications, query answers are only useful if they are timely received. When a data stream carrying the financial data is processed across multiple machines, the QoS of providing the data stream is affected by each of the multiple machines. Thus, if some of the machines are overloaded, these machines will have an impact on the QoS of providing the data stream. Moreover, stream processing applications are expected to operate over the public Internet, with a large number of unreliable nodes, some or all of which may contribute their resources only on a transient basis, as is the case in peer-to-peer settings. Furthermore, stream processing and delivery of data streams to clients may require multiple nodes working in a chain or tree to process and deliver the streams, where the output of one node is the input to another node. Thus, if processing is moved to a new node in the network, the downstream processing in the chain or tree and the QoS may be affected. For example, if processing is moved to a new node in a new geographic location, it may increase the end-to-end latency to a point that is unacceptable for a client.
BRIEF DESCRIPTION OF DRAWINGS
[0004] The embodiments of the invention will be described in detail in the following description with reference to the following figures.
[0005] Figure 1 illustrates a system, according to an embodiment;
[0006] Figure 2 illustrates data streams in the system shown in Figure 1, according to an embodiment;
[0007] Figure 3 illustrates overlay nodes in the system, examples of queries in the system, and examples of candidate hosts for operators, according to an embodiment;
[0008] Figure 4 illustrates a flowchart of a method for initial query placement, according to an embodiment;
[0009] Figure 5 illustrates a flowchart of a method for optimization, according to an embodiment;
[0010] Figure 6 illustrates a flowchart of a method for deployment plan generation, according to an embodiment;
[0011] Figure 7 illustrates a flowchart of a method for resolving conflicts, according to an embodiment; and
[0012] Figure 8 illustrates a block diagram of a computer system, according to an embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS
[0013] For simplicity and illustrative purposes, the principles of the embodiments are described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one of ordinary skill in the art, that the embodiments may be practiced without limitation to these specific details. In some instances, well-known methods and structures have not been described in detail so as not to unnecessarily obscure the embodiments.
[0014] According to an embodiment, a distributed SPS (DSPS) provides distributed stream processing across multiple overlay nodes in an overlay network. Nodes and overlay nodes are used interchangeably herein. The DSPS processes and delivers data streams to clients. A data stream comprises a feed of data. For example, a data stream may comprise an RSS feed or a stream of real-time financial data. The data stream may also include multi-media. A data stream may comprise a continuous or periodic transmission of data (such as real-time quotes or an RSS feed), or a data stream may include a set of data that is not necessarily continuously or periodically transmitted, such as results from a request for apartment listings. It should be noted that the stream processing performed by the DSPS includes shared stream processing, where an operator may be shared by multiple data streams as described below.
[0015] The DSPS includes an adaptive overlay-based framework that distributes stream processing queries across multiple available nodes. The nodes self-organize using a distributed resource directory service. The resource directory service is used for advertising and discovering available computer resources in the nodes.
[0016] The DSPS provides data stream deployments of multiple, shared, stream-processing queries while taking into consideration the resource constraints of the nodes and the QoS expectations of each application (e.g., data stream), while maintaining a low bandwidth consumption. According to an embodiment, the DSPS uses a proactive approach, where nodes periodically collaborate to pre-compute alternative deployment plans of data streams. Deployment plans are also referred to as plans herein. During run time, when a computer resource or QoS metric constraint violation occurs, the DSPS can react fast to changes and migrate to a feasible deployment plan by applying the most suitable of the pre-computed deployment plans. Moreover, even in the absence of any violations, the best of these plans can be applied to periodically improve the bandwidth consumption of the system.
[0017] Figure 1 illustrates a streams processing system 100, according to an embodiment. The system 100 includes an overlay network 110 comprised of overlay nodes 111, a resource directory 120 and a network monitoring service 130.
[0018] The overlay network 110 includes an underlying network infrastructure including computer systems, routers, etc., but the overlay network 110 provides additional functionality with respect to stream processing, including stream-based query processing services. For example, the overlay network 110 may be built on top of the Internet or other public or private computer networks. The overlay network 110 is comprised of the overlay nodes 111, which provide the stream processing functionality. The overlay nodes 111 are connected with each other via logical links forming overlay paths, and each logical link may include multiple hops in the underlying network.
[0019] According to an embodiment, the overlay nodes 111 are operable to provide stream-based query processing services. For example, the overlay nodes 111 include operators for queries. A query includes a plurality of operators hosted on nodes in the stream processing system. The query may be provided in response to receiving and registering a client query or request for information. An operator is a function for a query. An operator may include software running on a node that is operable to perform the particular operation on data streams. A portion of an overlay node's computer resources may be used to provide the operator for the query. The overlay node may perform other functions, and thus the load on the overlay node may be considered when selecting an overlay node to host an operator.
[0020] Examples of operators include join, aggregate, filter, etc. The operators may include operators typically used for queries in a conventional database; however, the operators in the system 100 operate on data streams. Operators may be shared by multiple queries, where each query may be represented by one or more data streams. Also, subqueries are created by operators. In one sense, any query consisting of multiple operators has multiple subqueries, one for each operator, even if the query is for a single client. In another sense, when a new query from another client can use the result of a previous query as a partial result, the previous query becomes a subquery of the new one. For example, regarding the situation where a previous query may be partially used for a new query, a filter operation may be executed by a node on a data stream representing the results of a previous request. For example, an original client query may request all the apartment listings in northern California, and a filter operation may be performed at a node to derive the listings only for Palo Alto.
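A minimal sketch of the apartment-listing filter example above, assuming a filter operator consumes the output stream of a previous query as a partial result. The function name, tuple shape, and sample data are illustrative, not from the application:

```python
# Hypothetical sketch: a filter operator reusing a previous query's
# output stream as a partial result.

def filter_operator(stream, predicate):
    """Yield only the tuples of an upstream result stream that match."""
    for item in stream:
        if predicate(item):
            yield item

# Upstream query result: all apartment listings in northern California.
norcal_listings = [
    {"city": "Palo Alto", "rent": 2500},
    {"city": "San Jose", "rent": 2100},
    {"city": "Palo Alto", "rent": 3000},
]

# New client query: only Palo Alto listings, derived by filtering the
# previous query's stream rather than re-querying the publisher.
palo_alto = list(filter_operator(norcal_listings,
                                 lambda l: l["city"] == "Palo Alto"))
```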
[0021] A join operation is a join of two tables in a conventional database, such as a join of addresses of employees and employee IDs. The same operation is applied to data streams, except that for data streams with continuous or periodically transmitted data, a sliding window is used to determine where to perform the join in the stream. For example, the join operator has a first stream that is one input and a second stream that is another input. The join is performed if data from the streams have timestamps within the sliding window. An example of a sliding window may be a 2-minute window, but other length windows may be used.
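The windowed join described above can be sketched as follows; the 2-minute window, field names, and sample tuples are assumptions made for illustration:

```python
# Illustrative sketch of a time-window join over two streams: tuples
# are joined only if their keys match and their timestamps fall
# within the sliding window.

WINDOW_SECONDS = 120  # e.g., a 2-minute sliding window

def window_join(stream_a, stream_b, key, window=WINDOW_SECONDS):
    """Join tuples from two streams whose timestamps are within the window."""
    results = []
    for a in stream_a:
        for b in stream_b:
            if a[key] == b[key] and abs(a["ts"] - b["ts"]) <= window:
                results.append({**a, **b})
    return results

quotes = [{"symbol": "HPQ", "price": 48.1, "ts": 100}]
news = [{"symbol": "HPQ", "headline": "Earnings beat", "ts": 150},
        {"symbol": "HPQ", "headline": "Old story", "ts": 400}]

joined = window_join(quotes, news, key="symbol")
```

Only the first news tuple joins with the quote, since the second tuple's timestamp falls outside the window.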
[0022] Operators may be assigned at different overlay nodes and may be reallocated over time as the distribution of queries across the network is optimized. Optimization may take into consideration several types of metrics. The types of metrics may include node-level metrics, such as CPU utilization, memory utilization, etc., as well as service provider metrics, such as bandwidth consumption, etc. Also, QoS metrics, such as latency, are considered. Optimization is described in further detail below.
[0023] Client queries for data may be submitted to the overlay network 110. The location of the operators for a query defines the deployment plan of the query, which is also described in further detail below. Depending on the resources available in the network and the query's requirements, each query could have multiple alternative pre-computed deployment plans. The operators of a query are interconnected by overlay links between the nodes 111 in the overlay network 110. Each operator forwards its output to the next processing operator in the query plan. Thus, query deployments create an overlay network with a topology consistent with the data flow of the registered queries. If an operator oi forwards its output to an operator oj, oi is referred to as the upstream operator of oj (or its publisher) and oj as the downstream operator of oi (or its subscriber). Operators could have multiple publishers (e.g., join, union operators) and, since they could be shared across queries, they could also have multiple subscribers. The set of subscribers of oi is denoted as subOi and its set of publishers as pubOi.
[0024] The system 100 also includes data sources 140 and clients 150. The data sources 140 publish the data streams while the clients subscribe with their data interests expressed as stream-oriented continuous queries. The system 100 streams data from publishers to clients via the operators deployed in the overlay nodes 111. Examples of published data streams may include RSS feeds, data from sensor networks, data from multi-player games played over the Internet, etc.
[0025] Creating deployment plans for queries includes identifying operators to be hosted on overlay nodes for deploying the queries. To discover potential overlay nodes for hosting the operators, a resource directory 120 is used. The resource directory 120 may be a distributed service provided across multiple overlay nodes. In one embodiment, the resource directory 120 is based on the NodeWiz system described in Basu et al., "NodeWiz: Peer-to-peer resource discovery for grids." The NodeWiz system is a scalable tree-based overlay infrastructure for resource discovery.
[0026] The overlay nodes 111 use the resource directory 120 to advertise the attributes of available computer resources of each node and efficiently perform multi-attribute queries to discover the advertised resources. For example, each overlay node sends its available computer resource capacity to the resource directory 120, and the resource directory 120 stores this information. Examples of capacity attributes include CPU capacity, memory capacity, I/O capacity, etc. Also, during optimization, an overlay node or some other entity may send queries to the resource directory 120 to identify an overlay node with predetermined available capacity that can be used to execute a relocated operator. The resource directory 120 can adapt the assignment of operators such that the load of distributing advertisements and performing queries is balanced across nodes.
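The advertise/discover contract of the resource directory might look like the following in-memory sketch. The real directory is a distributed tree overlay (NodeWiz); the class, method names, and attribute values here are illustrative assumptions:

```python
# Minimal sketch of the resource directory's advertise / multi-attribute
# query interface (a centralized stand-in for the distributed service).

class ResourceDirectory:
    def __init__(self):
        self.adverts = {}  # node id -> advertised capacity attributes

    def advertise(self, node_id, **capacities):
        """A node advertises its available resource capacities."""
        self.adverts[node_id] = capacities

    def query(self, **minimums):
        """Multi-attribute query: return nodes meeting every minimum."""
        return [n for n, caps in self.adverts.items()
                if all(caps.get(attr, 0) >= v for attr, v in minimums.items())]

rd = ResourceDirectory()
rd.advertise("n1", cpu=0.8, memory_mb=512)
rd.advertise("n2", cpu=0.2, memory_mb=2048)

# During optimization, look for a node able to host a relocated operator.
candidates = rd.query(cpu=0.5, memory_mb=256)
```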
[0027] A network monitoring service 130 collects statistics of the overlay links between the overlay nodes 111. One example of statistics includes latency statistics. The network monitoring service 130 may be based on S3 described in Yalagandula et al., "S3: A scalable sensing service for monitoring large networked systems." The network monitoring service 130 is a scalable sensing service for real-time and configurable monitoring of large networked systems. The infrastructure, which may include the overlay nodes 111, can be used to measure QoS, node-level, and service provider metrics, while it aggregates data in a scalable manner. Moreover, inference algorithms can be used to derive path properties of all pairs of nodes based on a small set of network paths. During optimization, the network monitoring service 130 can be queried to identify end-to-end overlay paths or overlay links between nodes that provide the pre-requisite QoS, e.g., a path that has a latency less than a threshold.
[0028] Figure 2 illustrates an example of deploying data streams. For example, the real-time financial publisher 140a generates a data stream with real- time stock quotes in response to one or more client queries. A financial news
publisher 140b also generates a data stream of financial news. The operators at nodes 111a-e function to provide subqueries by executing their respective operators to provide the clients with the desired data. For example, the clients 150a-c want stock quotes and corresponding financial news for different companies, and the clients 150b and 150c require a particular sorting of the data streams. The operators execute subqueries on the original data streams from the publishers to provide the desired data to the clients.
[0029] During optimization, it may be determined that the join operator needs to be moved from the node 111a because the node 111a is overloaded or there is a QoS metric constraint violation. The join operator may be moved to the node 111f, but the downstream operators will be affected. Optimization pre-computes feasible deployment plans that will not violate QoS metric constraints or computer resource capacities of nodes.
[0030] The system 100 implements an optimization protocol that facilitates the distribution of operators among nodes in the overlay network, such that QoS expectations for each query and respective resource constraints of the nodes are not violated. The optimization includes pre-computing alternative feasible deployment plans for all registered queries. Each node maintains information regarding the placement of its local operators and periodically collaborates with nodes in its "close neighborhood" to compose deployment plans that distribute the total set of operators. A deployment plan identifies operators and nodes to host operators, providing an end-to-end overlay path for a data stream from publisher to client.
[0031] Whenever a computer resource or QoS metric constraint violation occurs for an existing deployment plan, the system can react fast by applying the most suitable plan from the pre-computed set. Moreover, even in the absence of violations, the system can periodically improve its current state by applying a more efficient deployment than the current one.
[0032] The optimization process includes proactive, distributed operator placement, which is based on informing downstream operators/nodes about the feasible placements of their upstream operators. This way, the overlay nodes can make decisions regarding the placement of their local and upstream operators that will influence their shared queries in the best way possible. One main advantage of this approach is that nodes can make placement decisions on their own, which provides fast reaction to any QoS metric constraint violations.
[0033] Each operator periodically sends deployment plans to its subscribed downstream operators describing possible placements of its upstream operators. These plans are referred to as partial, since they only deploy a subset of a query's operators. When a node receives a partial plan from an upstream node, it extends the plan by adding the possible placements of its local operator. Partial plans that meet the QoS constraints of all queries sharing the operators in the plan are propagated to other nodes.
[0034] To identify feasible deployment plans, a k-ahead search is performed. The k-ahead search discovers the placement of k operators ahead of the local operator that, for example, incurs the lowest latency. Instead of latency, other QoS metrics may be used. Based on the minimum latency, partial plans that could violate a QoS bound (e.g., a latency greater than a threshold) are eliminated as early in the optimization process as possible. Also, every node finalizes its local partial plans. This may include each node evaluating its impact on the bandwidth consumption and the latency of all affected queries. Using the final plans, a node can make fast placement decisions at run time.
[0035] It should be noted that several types of metrics may be employed to select a deployment plan. For example, one or more QoS metrics provided by a client, such as end-to-end latency, and one or more node-level metrics, such as available capacity of computer resources, can be used to determine whether a path is a feasible path when selecting a set of alternative feasible deployment plans. Also, another type of metric, e.g., a service provider metric, such as minimum total bandwidth consumption, consolidation, etc., can be used to select one of the paths from the set of feasible deployment plans to deploy for the data stream.
[0036] The optimization process is now described in detail and symbol definitions in table 1 below are used to describe the optimization process.
[0037] Table 1: Symbol Definitions
[0038] Each overlay node periodically identifies a set of partial deployment plans for all its local operators. Assume an operator oi is shared by a set of queries qt ∈ Qoi. Let also Pqt be the set of upstream operators for oi in query qt. An example is shown in Figure 3. Queries q1 and q2 share operators o1 and o2, and Pq1 = Pq2 = {o1, o2}.
[0039] A partial deployment plan for oi assigns each operator in the set of upstream operators of oi, together with oi itself, to one of the overlay nodes in the network. Each partial plan p is associated with (a) a partial cost, pc_p, e.g., the bandwidth consumption it incurs, and (b) a partial latency, pl_p,qt, for each query qt ∈ Qoi it affects. For example, a partial plan for o2 will assign operators o1 and o2 to two nodes, evaluate the bandwidth consumed due to these placements, and the response latency up to operator o2 for each of the queries q1 and q2.
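The bookkeeping a partial plan carries, as described above, might be sketched with a simple record: the operator-to-node assignments, a partial cost (e.g., bandwidth), and a partial latency per affected query. Field names and values are illustrative assumptions:

```python
# Sketch of a partial deployment plan's state.
from dataclasses import dataclass, field

@dataclass
class PartialPlan:
    placements: dict = field(default_factory=dict)       # operator -> node
    partial_cost: float = 0.0                            # e.g., bandwidth consumed
    partial_latency: dict = field(default_factory=dict)  # query -> latency so far

# A partial plan for o2 assigns o1 and o2 to nodes and tracks the
# response latency up to o2 for each of the queries q1 and q2.
p = PartialPlan(placements={"o1": "n1", "o2": "n2"},
                partial_cost=3.5,
                partial_latency={"q1": 40.0, "q2": 55.0})
```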
[0040] Figure 3 also shows candidate nodes, candidate links and latencies for the links which are evaluated when determining whether the node links can be used as part of a feasible deployment plan. The evaluation of candidate nodes and QoS metrics (e.g., latency) for deployment plan generation is described in further detail below.
[0041] Figure 4 illustrates a method 400 for initial placement of a query, according to an embodiment. At step 401, a client registers a query. For example, the client 150a shown in figure 2 sends a client query to the publishers 140a and 140b requesting stock quotes and related financial news.
[0042] At step 402, any operators and data streams for the query that are currently deployed are identified. The resource directory 120 shown in figure 2 may be used to store information about deployed operators and streams.
[0043] At step 403, for any operators that do not exist, a node is identified with sufficient computer resource capacity that is closest to the publisher or their publisher operator to host the operator. Note that this is for the initial assignment of nodes/initial placement of a query. Other nodes that may not be closest to the publisher or their publisher operator may be selected during optimization.
[0044] At step 404, the query is deployed using the operators and data streams, if any, from step 402 and the operators, if any, from step 403. For example, the data stream for the query is sent to the client registering the query.
[0045] At step 405, the optimization process is started. The optimization process identifies deployment plans that may be better than the current deployment plan in terms of one or more metrics.
[0046] Figure 5 illustrates a method 500 for the optimization process, according to an embodiment. One or more of the steps of the method 500 may be performed at step 405 in the method 400.
[0047] At step 501, a plan generation process is periodically initiated. This process creates feasible deployment plans that reflect the most current node workload and network conditions. These pre-computed deployment plans are stored on the overlay nodes and may be used when a QoS violation is detected or if a determination is made as to whether bandwidth consumption or another metric may be improved by deploying one of the pre-computed plans. The plan generation process is described in further detail below with respect to the method 600.
[0048] At step 502, nodes determine whether a QoS metric constraint violation occurred. For example, a QoS metric, such as latency, is compared to a threshold, which is the constraint. If the threshold is exceeded, then a QoS violation occurred.
[0049] To detect these violations, every overlay node monitors, for every local operator, the latency to the location of its publishers. It also periodically receives the latency of all queries sharing its local operators, and it quantifies their "slack" from their QoS expectations, i.e., the increase of latency each query can tolerate. For example, assume an operator oi with a single publisher om, shared by a query qt with a response delay dqt and slack slackqt. If the latency of the overlay link between oi and om increases by more than slackqt, then the QoS of the query qt is violated and a different deployment should be applied immediately.
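The slack-based violation test described above can be sketched as follows; the latency values are invented for illustration, and the reconstruction assumes a violation occurs when a link's latency increase exceeds the query's remaining slack:

```python
# Sketch of the slack-based QoS violation check for a monitored
# overlay link between a local operator oi and its publisher om.

def qos_violated(old_link_latency, new_link_latency, slack):
    """True if the link's latency increase exceeds the query's slack."""
    return (new_link_latency - old_link_latency) > slack

# Query qt can tolerate 20 ms of additional latency; the link from om
# to oi degraded from 30 ms to 65 ms, so a new deployment is needed.
violated = qos_violated(30.0, 65.0, slack=20.0)
```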
[0050] At step 503, if a QoS violation occurred, determine whether one of the pre-computed plans can be used to improve the QoS. The plan should improve the QoS sufficiently to remove the QoS violation.
[0051] Across all final plans stored at the host of oi, a search is performed for a plan p that decreases qt's latency by at least the amount by which the latency increase exceeds qt's slack. Across all plans that satisfy this condition, any plan p is removed that migrates neither oi nor om (i.e., that still includes the bottleneck link).
[0052] If a pre-computed plan exists that can be used to improve the QoS, then the pre-computed plan is deployed at step 504. For example, as described above, any plan p that migrates neither oi nor om (i.e., that still includes the bottleneck link) is removed. From the remaining plans, the plan that most improves the bandwidth consumption is applied.
[0053] Otherwise, at step 505, a request is sent to other nodes for a feasible plan that can improve the QoS. For example, the request is propagated to the downstream subscriber/operator. That is, if a deployment that can meet qt's QoS cannot be discovered at the host of oi, the node sends a request for a suitable plan to its subscriber for the violated query qt. The request also includes metadata regarding the congested link (e.g., its new latency). Nodes that receive such requests attempt to discover a plan that can satisfy the QoS of the query qt. Since downstream nodes store plans that migrate more operators, they are more likely to discover a feasible deployment for qt. The propagation continues until the node hosting the last operator of the violated query is reached.
[0054] At step 506, a determination is made as to whether a plan can be identified in response to the request. If a plan cannot be identified, the query cannot be satisfied at step 507. The client may be notified that the query cannot be satisfied, and the client may register another query. Otherwise, a plan identified in response to the request that can improve the QoS sufficiently to remove the QoS violation is deployed.
[0055] It is important to note that identifying a new deployment plan has a small overhead. Essentially, nodes have to search for a plan that sufficiently reduces the latency of a query. Final plans can be indexed based on the queries they affect and sorted based on their impact on each query's latency. Thus, when a QoS violation occurs, the system can identify its "recovery" deployments very fast.
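The indexing and sorting of final plans described above might look like the following sketch; the index structure, plan identifiers, and latency-reduction figures are illustrative assumptions:

```python
# Sketch of indexing final plans by affected query, sorted by latency
# improvement, so a "recovery" deployment can be found quickly when a
# QoS violation occurs.
from collections import defaultdict

plans_by_query = defaultdict(list)  # query -> [(latency_reduction, plan_id)]

def index_plan(plan_id, impacts):
    """impacts: query -> latency reduction this plan would provide."""
    for query, reduction in impacts.items():
        plans_by_query[query].append((reduction, plan_id))
        plans_by_query[query].sort(reverse=True)  # biggest reduction first

def recovery_plan(query, needed_reduction):
    """Return the first indexed plan reducing latency enough, if any."""
    for reduction, plan_id in plans_by_query[query]:
        if reduction >= needed_reduction:
            return plan_id
    return None

index_plan("p1", {"q1": 10.0})
index_plan("p2", {"q1": 35.0, "q2": 5.0})
best = recovery_plan("q1", needed_reduction=30.0)
```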
[0056] At steps 502-507, a new plan may be deployed in response to a QoS violation. Many of these steps may also be performed when a QoS violation has not occurred, but a determination is made that a new plan can provide better QoS, or better node-level (e.g., computer resource capacity) or service provider metrics (e.g., bandwidth consumption) than an existing plan.
[0057] Figure 6 illustrates a method 600 for deployment plan generation, according to an embodiment. One or more of the steps of the method 600 may be performed at step 501 in the method 500 as the plan generation process.
[0058] A k-ahead search may be performed before the method 600 and is described in further detail below. The k-ahead search makes each node aware of candidate hosts for local operators that can be used for partial deployment plans.
[0059] At step 601, partial deployment plans are generated at the leaf nodes. Let oi be a leaf operator executed on a node ni. Node ni creates a set of partial plans, each one assigning oi to a different candidate host, and evaluates its partial cost and the partial latencies of all queries sharing oi. If Soi is the set of input sources for oi, and h(s), s ∈ Soi, is the node publishing data on behalf of source s, then the partial latency (i.e., the latency from the sources to ni) of a query qt is the maximum, over the sources s ∈ Soi, of the latency from h(s) to ni. Finally, since this plan assigns the first operator, its partial bandwidth consumption is zero.
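A sketch of the leaf-node plan generation above: one plan per candidate host, partial latency taken as the worst source-to-host latency, and zero partial bandwidth cost since only the first operator is placed. The latency table and node names are invented for the example:

```python
# Sketch of leaf-node partial-plan generation for a leaf operator.

def leaf_partial_plans(operator, candidate_hosts, source_hosts, latency):
    """latency[(a, b)]: measured overlay latency from node a to node b."""
    plans = []
    for host in candidate_hosts:
        plans.append({
            "placements": {operator: host},
            "partial_cost": 0.0,  # no upstream link yet
            "partial_latency": max(latency[(s, host)] for s in source_hosts),
        })
    return plans

latency = {("pub", "n1"): 12.0, ("pub", "n2"): 30.0}
plans = leaf_partial_plans("o1", ["n1", "n2"], ["pub"], latency)
```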
[0060] At step 602, infeasible partial deployment plans are eliminated. Once a partial plan is created, a decision is made as to whether the partial plan should be forwarded downstream and expanded by adding more operator migrations. A partial plan is propagated only if it could lead to a feasible deployment. The decision is based on the results of the k-ahead search. The k-ahead latency for a triplet (oi, ni, qt) represents the minimum latency overhead for a query qt across all possible placements of k operators ahead of oi, assuming oi is placed on ni. If the latency of the query up to operator oi plus the minimum latency for k operators ahead violates the QoS of the query, the partial plan could not lead to any feasible deployments. More specifically, a partial plan p that places operator oi on node ni is infeasible if there exists at least one query qt ∈ Qoi such that the partial latency of p for qt plus the k-ahead latency for (oi, ni, qt) exceeds qt's QoS latency bound.
[0061] Note that while the k-ahead latency does not eliminate feasible plans, it also does not identify all infeasible deployments. Thus, the propagated plans are "potentially" feasible plans, which may be proven infeasible in following steps.
[0062] Moreover, there is a tradeoff with respect to the parameter k. The more operators ahead that are searched, the higher the overhead of the k-ahead search; however, the earlier infeasible plans will be discovered.
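The k-ahead feasibility test described at step 602 can be sketched as a simple per-query check; all latency values and QoS bounds here are illustrative assumptions:

```python
# Sketch of the k-ahead pruning test: a partial plan placing oi on ni
# is pruned if, for any query, the latency accumulated so far plus the
# pre-computed minimum k-ahead latency already exceeds the QoS bound.

def is_infeasible(partial_latency, k_ahead_latency, qos_bound):
    """Each argument is a per-query dict of latencies / bounds."""
    return any(partial_latency[q] + k_ahead_latency[q] > qos_bound[q]
               for q in partial_latency)

# q1 still fits its bound (40 + 20 <= 100), but q2 cannot possibly
# meet its bound (80 + 50 > 120), so this partial plan is pruned early.
pruned = is_infeasible(
    partial_latency={"q1": 40.0, "q2": 80.0},
    k_ahead_latency={"q1": 20.0, "q2": 50.0},
    qos_bound={"q1": 100.0, "q2": 120.0},
)
```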
[0063] At step 603, partial plans that are not eliminated are forwarded downstream along with metadata for evaluating the impact of a new partial plan. These include the feasible partial deployment plans identified from step 602. The metadata may include partial latency and/or other metrics for determining plan feasibility.
[0064] Assume a node ni, processing an operator oi, receives a partial plan p from its publisher om ∈ pubOi. For purposes of illustration, assume a single publisher, but the equations below can be generalized to multiple publishers in a straightforward way. Note that each query sharing oi is also sharing its publishers. Thus, each received plan includes a partial latency for every query sharing oi. The optimization process expands each of these plans by adding migrations of the local operator oi to its candidate hosts.
[0065] For each candidate host n ∈ Aoi, the node ni validates the resource availability. For example, it parses the plan p to check if any upstream operators have also been assigned to n. To facilitate this, along with each plan, metadata is sent on the expected load requirements of each operator included in the plan. If the residual capacity of n is enough to process all assigned operators including oi, the impact of the new partial plan p' is estimated: its partial cost adds the bandwidth consumed on the link from the host of om in the partial plan p to n, and its partial latency for each affected query adds the latency of that link. For each new partial plan p', a check is also made as to whether it could lead to a feasible deployment, based on the k-ahead latency, and only feasible partial plans are propagated.
[0066] At step 604, intermediate upstream nodes receiving the partial plans forwarded at step 603 determine the partial plan feasibility, as described above. For example, the intermediate node receiving the plan is a candidate for an operator of the query. The intermediate node validates its computer resource availability to host the operator and determines the impact on QoS if the node were to host the operator. At step 605, feasible partial plans are selected based on impact on a service provider metric, such as bandwidth consumption.
[0067] At step 606, the selected feasible partial plans are stored in the overlay nodes. For example, partial plans created on a node are "finalized" and stored locally. To finalize a partial plan, its impact on the current bandwidth consumption and on the latency of the queries it affects is evaluated. To implement this process, statistics are maintained on the bandwidth consumed by the upstream operators of every local operator and the query latency up to this local operator. For example, in figure 3, if o1 is a leaf operator, n2 maintains statistics on the bandwidth consumption from o1 to o2 and the latency up to operator o2. For each plan, the difference of these metrics between the current deployment and the one suggested by the plan is evaluated and stored as metadata along with the corresponding final plan. Thus, every node stores a set of feasible deployments for its local and upstream operators, along with the effect of these deployments on the system cost and the latency of the queries. In figure 3, n2 stores plans that migrate operators {o1, o2}, while n4 will store plans that place {o1, o2, o4}.
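The finalization step stores, for each plan, the difference in bandwidth consumption and query latency relative to the current deployment. A minimal sketch of that metadata computation, with illustrative names and values:

```python
def finalize_plan(plan_bw, plan_latency, current_bw, current_latency):
    """Compute the metadata stored with a finalized plan: the change in
    bandwidth consumption and query latency versus the current
    deployment. Negative deltas mean the plan improves that metric."""
    return {"bw_delta": plan_bw - current_bw,
            "latency_delta": plan_latency - current_latency}

meta = finalize_plan(plan_bw=120.0, plan_latency=40.0,
                     current_bw=150.0, current_latency=55.0)
print(meta)  # {'bw_delta': -30.0, 'latency_delta': -15.0}
```

These deltas are what downstream nodes later compare when selecting among stored plans.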
[0068] Combining and expanding partial plans received from the upstream nodes may generate a large number of final plans. To deal with this problem, a number of elimination heuristics may be employed. For example, among final plans with similar impact on the query latencies, the ones with the minimum bandwidth consumption are kept, while among those with similar impact on the bandwidth, the ones that reduce the query latency the most are kept.
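The elimination heuristics above amount to keeping plans that are not dominated on both metrics. A sketch, assuming each plan is summarized by a (latency delta, bandwidth delta) pair; the representation is an illustrative assumption:

```python
def prune_plans(plans):
    """Pareto-style pruning sketch: drop a plan if another plan is at
    least as good on both latency and bandwidth deltas and is not the
    same plan. Each plan is a (latency_delta, bw_delta) tuple."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and a != b
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

plans = [(-10, 5), (-10, 8), (-4, 2), (-12, 9)]
print(prune_plans(plans))  # [(-10, 5), (-4, 2), (-12, 9)]
```

Here (-10, 8) is dropped because (-10, 5) matches its latency improvement at lower bandwidth cost.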
[0069] As described above, nodes perform a k-ahead search to identify candidate hosts for local operators. At step 601, the leaf nodes create partial plans. Partial plans may be created using a k-ahead search.
[0070] In the k-ahead search, every node nv runs the search for each local operator oi and each candidate host for that operator. If Aoi is the set of candidate hosts for oi, the search identifies the minimum-latency placement of the k operators ahead of oi for each of the queries sharing oi, assuming that oi is placed on the node nj. Intuitively, the search attempts to identify the minimum impact on the latency of each query q ∈ Qoi if migrating oi to node nj, making the best placement decision (e.g., with respect to latency) for the next k downstream operators of each query q. Below, the steps of the k-ahead search are described; the search initially evaluates the 1-ahead latency and then derives the k-ahead latency value for every (operator, candidate host, query) triple.
[0071] For each operator oi ∈ On, node nv executes the following steps:
[0072] 1. It identifies the candidate hosts of the local operator oi by querying the resource directory service. Assuming the constraint requirements of oi are a set C of pairs (cj, vj), where cj is the resource attribute and vj is the operator's requirement for that resource, the resource directory is queried for nodes with sufficient available resources for every pair in C.
[0073] 2. If om is the downstream operator of oi for the query q, the node sends a request to the host of om, asking for the set of candidate hosts of that operator. For each one of these candidate nodes, it queries the network monitoring service for the latency between the candidate host of oi and that node. The 1-ahead latency for the oi operator with respect to its candidate nj and the query q is the minimum of these latencies. In figure 3, n1 will request from n2 the candidate hosts for the operator o2 and will estimate the corresponding 1-ahead latencies; corresponding latency values are likewise assumed for o2.
[0074] 3. The search continues in rounds, where for each operator oi the node waits for its subscribers om in the query to complete the evaluation of the (k-1)-ahead latency before it proceeds with the estimation of the k-ahead latency. The k-ahead latency for the oi operator with respect to its candidate nj and the query is then derived from these (k-1)-ahead values.
[0075] The last step is described using the example in figure 3. In this case, min = 25ms. Thus, assuming migration of o1 to n5, the placement with the minimum latency of the next two operators will increase the partial response latency of q1 by 15ms and the partial latency of q2 by 25ms, where each partial latency increases as more operators are assigned to the query.
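The round-based evaluation above can be expressed as an equivalent top-down recursion over one query's operator chain. All data structures, names, and latency values below are illustrative assumptions, not the figure 3 values:

```python
def k_ahead_latency(op, host, k, downstream, candidates, link_latency):
    """k-ahead latency sketch for one query: the minimum total latency
    of the best placement of the next k operators downstream of `op`,
    assuming `op` is hosted on `host`. downstream[op] -> next operator
    (None at the query root); candidates[op] -> candidate hosts;
    link_latency[(a, b)] -> measured latency between nodes a and b."""
    nxt = downstream.get(op)
    if k == 0 or nxt is None:
        return 0.0  # no more operators ahead to place
    return min(link_latency[(host, h)]
               + k_ahead_latency(nxt, h, k - 1,
                                 downstream, candidates, link_latency)
               for h in candidates[nxt])

downstream = {"o1": "o2", "o2": None}
candidates = {"o2": ["n2", "n6"]}
link_latency = {("n5", "n2"): 10.0, ("n5", "n6"): 25.0}
print(k_ahead_latency("o1", "n5", 2, downstream, candidates, link_latency))  # 10.0
```

The recursion mirrors the rounds: the (k-1)-ahead values of the downstream operator must be available before the k-ahead value can be computed.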
[0076] Concurrent modifications of shared queries require special attention, as they could create conflicts with respect to the final latency of their affected queries. For example, in figure 3, assume that the QoS of both q1 and q2 is not met, and nodes n3 and n4 decide concurrently to apply a different deployment plan for each query. Parallel execution of these plans does not guarantee that their QoS expectations will be satisfied.
[0077] To address the problem, operators may be replicated. Deployment plans are implemented by replicating the operators whenever migrating them cannot satisfy the QoS metric constraints of all their dependent queries. However, replicating processing increases the bandwidth consumption as well as the processing load in the system. Hence, a process identifies if conflicts could be resolved by alternative candidate plans, and if none is available, it then applies replication. The process uses the metadata created during the plan generation phase to identify alternatives to replication. More specifically, it uses the existing deployment plans to (1) decide whether applying a plan by migration satisfies all concurrently violated queries; (2) allow multiple migrations whenever safe, i.e., allow for parallel migrations; and (3) build a non-conflicting plan when the existing ones cannot be used. In the next paragraphs, the process is described using the following definitions.
[0078] Definition for Direct Dependencies: Two queries qi and qj are directly dependent if they share an operator. Then, qi and qj are dependent queries of every operator they share. Note that the set of dependent queries of a query qi is denoted Dqi, the set of dependent queries of an operator ok is denoted Dok, and O(qi) is the set of operators in query qi.
[0079] Directly dependent queries do not have independent plans, and therefore concurrent modifications of their deployment plans require special handling to avoid any conflicts and violation of the delay constraints.
[0080] Definition for Indirect Dependencies: Two queries qi and qj are indirectly dependent if they do not share an operator but have common dependent queries.
[0081] Indirectly dependent queries have independent (non-overlapping) plans. Nevertheless, concurrent modifications on their deployment plans could affect their common dependent queries. Hence, the process addresses these conflicts as well, ensuring that the QoS expectations of the dependent queries are satisfied. To detect concurrent modifications, a lease-based approach is used. Once a node decides that a new deployment should be applied, all operators in the plan and their upstream operators are locked. Nodes trying to migrate already locked operators check whether their modification conflicts with the current one in progress. If a conflict exists, the node tries to identify an alternative non-conflicting deployment; if none can be found, it applies its initial plan by replicating the operators. The lease-based approach is described in the next paragraphs.
[0082] Assume a node has decided on the plan p to apply for a query q. It forwards a REQUEST_LOCK(q, p) message to its publishers and subscribers. In order to handle indirect dependencies, each node that receives the lock request will also send it to the subscribers of its local operator of the query q. This request informs nodes executing any query operators and their dependents about the new deployment plan and requests the lock of q and its dependents. Given that no query has the lock (which is always true for queries with no dependents), publishers/subscribers reply with a MIGR_LEASE(q) grant once they receive a MIGR_LEASE(q) request from their own publisher/subscriber of that query. Nodes that have granted a migration lease are not allowed to grant another migration lease until the lease has been released (or expired, based on some expiration threshold).
[0083] Once node n receives its migration lease from all its publishers and subscribers of q, it applies the plan p for that query. It will parse the deployment plan and, for every operator o to be migrated to a node n as specified by the plan, send a MIGRATE(o, n) message to the current host of o. Migration is applied in a top-down direction of the query plan, i.e., the most upstream nodes migrate their operators (if required by the plan), and once this process is completed the immediate downstream operators are informed about the change and subscribe to the new location of the operators. As nodes update their connections, they also apply any local migration specified by the plan. Once the whole plan is deployed, a RELEASE_LOCK(q) request is forwarded to the old locations of the operators and their dependents, which release the lock for the query.
[0084] A lock request is sent across all nodes hosting operators included in the plan and all queries sharing operators of the plan. Once the lock has been granted, any following lock requests will be satisfied either by a replication or a migration lease. A migration lease allows the deployment plan to be applied by migrating its operators. However, if such a lease cannot be granted due to concurrent modifications on the query network, a replication lease can be granted, allowing the node to apply the deployment plan of that query by replicating the involved operators. This way, only this specific query will be affected.
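The lease behavior described above can be sketched as a small per-query state machine. The class, method names, and string lease tokens are illustrative assumptions:

```python
class OperatorLock:
    """Sketch of per-operator lease state: the first lock request gets a
    migration lease; later requests, while that lease is held, get a
    replication lease instead."""
    def __init__(self):
        self.migration_held_by = None  # query currently holding the lease

    def request_lease(self, query):
        if self.migration_held_by is None:
            self.migration_held_by = query
            return "MIGR_LEASE"
        return "REPL_LEASE"  # concurrent request: replicate instead

    def release(self, query):
        if self.migration_held_by == query:
            self.migration_held_by = None

lock = OperatorLock()
print(lock.request_lease("q1"))  # MIGR_LEASE
print(lock.request_lease("q2"))  # REPL_LEASE
lock.release("q1")
print(lock.request_lease("q2"))  # MIGR_LEASE
```

An expiration threshold on held leases, as mentioned above, would be added to avoid deadlock on node failure.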
[0085] One property that should be noted is that if an operator oi is shared by a set of queries Doi, then the sub-plan rooted at oi is also shared by the same set of queries. Now assume two dependent queries qi and qj that both have their QoS metric constraints violated. Query qi sends the REQUEST_LOCK(qi, pi) requests to its downstream operators, and similarly for the query qj. Moreover, shared operators that are aware of the dependencies forward the same request to their subscribers to also inform the dependent queries of the requested lock. Since the queries share some operators, at least one operator will receive both lock requests. Upon receipt of the first request, it applies the procedure described below, i.e., identifying conflicts and resolving them based on the metadata of the two plans. However, when the second request for a lock arrives, the first shared node to receive it does not forward it to any publishers, as a migration lease for this query has already been granted.
[0086] The next paragraphs describe different cases encountered when trying to resolve conflicts for direct and indirect dependencies. For direct dependencies, concurrent modifications on directly dependent plans are considered.
[0087] Regarding parallel migrations, concurrent modifications are not always conflicting. If two deployment plans do not affect the same set of queries, then both plans can be applied in parallel. For example, in figure 3, if n3 and n4 decide to migrate only o3 and o4 respectively, both changes can be applied; in this case, the two plans decided by n3 and n4 show no impact on the queries q2 and q1, respectively. The deployment plans include all the necessary information (operators to be migrated, new hosts, effect on the queries) to identify these cases efficiently, and thus grant migration leases to multiple non-conflicting plans.
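The parallel-migration test reduces to checking that the two plans' affected-query sets are disjoint. A minimal sketch, with illustrative names:

```python
def can_migrate_in_parallel(affected_by_plan_a, affected_by_plan_b):
    """Sketch of the non-conflict test: two deployment plans may hold
    migration leases simultaneously only if the sets of queries they
    affect do not overlap."""
    return set(affected_by_plan_a).isdisjoint(affected_by_plan_b)

print(can_migrate_in_parallel({"q1"}, {"q2"}))        # True
print(can_migrate_in_parallel({"q1", "q2"}, {"q2"}))  # False
```

The affected-query sets come directly from the impact metadata that each finalized plan carries.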
[0088] Regarding redundant migrations, multiple migrations defined by concurrent deployment of multiple plans may often not be necessary in order to guarantee the QoS expectations of the queries. Very often, nodes might identify QoS violations in parallel and attempt to address them by applying their own locally stored deployment plans. In this case, it is quite possible that either one of the plans will be sufficient to reconfigure the current deployment. However, every plan includes an evaluation of the impact on all affected queries. Thus, if two plans p1 and p2 both affect the same set of queries, then applying either one will still provide a feasible deployment of the queries. Therefore, the plan that first acquires the migration lease is applied while the second plan is ignored.
[0089] Regarding alternative migration plans, deployment plans that relocate shared operators cannot be applied in parallel. In this case, the first plan to request the lock migrates the operators, while an attempt is made to identify a new alternative non-conflicting deployment plan to meet any unsatisfied QoS expectations. Since the first plan is migrating a shared operator, the hosts of downstream operators are searched for any plans that were built on top of this migration. For example, in figure 3, if the first plan migrates operator o1, but the QoS of q2 is still not met, the node n4 is searched for any plans that include the same migration for o1 and can further reduce q2's response delay by migrating o4 as well.
[0090] Regarding indirect dependencies, queries may not share operators, but still share dependents. Thus, if an attempt is made to modify the deployment of indirectly dependent queries, the impact on their shared dependents is considered. In this case, a migration lease is granted to the first lock request and a replication lease to any following requests, if the plans to be applied affect overlapping sets of dependent queries. However, in the case where they do not affect the QoS of the same queries, these plans can be applied in parallel.
[0091] Figure 7 illustrates a method 700 for concurrent modifications of shared queries. At step 701, a node determines that a new deployment plan should be applied, for example, due to a QoS metric constraint violation.
[0092] At step 702, all operators in the plan are locked unless the operators are already locked. If any operators are locked, a determination is made as to whether a conflict exists at step 703.
[0093] At step 704, if a conflict exists, the node tries to identify an alternative non-conflicting deployment.
[0094] At step 705, if no alternative non-conflicting deployment exists, the node applies its initial plan by replicating the operators.
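The flow of method 700 can be sketched as a single decision function. The plan representation and the `find_alternative` callback are hypothetical stand-ins for the stored-plan lookup described earlier:

```python
def handle_qos_violation(plan, locked_ops, find_alternative):
    """Sketch of method 700: apply `plan` by migration when its
    operators are unlocked; on conflict, try an alternative plan;
    fall back to replication when no alternative exists."""
    if not (set(plan["operators"]) & locked_ops):
        return ("migrate", plan)        # steps 701-702: lock and migrate
    alt = find_alternative(plan)        # steps 703-704: resolve conflict
    if alt is not None:
        return ("migrate", alt)
    return ("replicate", plan)          # step 705: replicate operators

print(handle_qos_violation({"operators": ["o1"]}, set(), lambda p: None))
# ('migrate', {'operators': ['o1']})
```

In the deployed system the lock check and migration would be distributed message exchanges rather than local set operations.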
[0095] Figure 8 illustrates an exemplary block diagram of a computer system 800 that may be used as a node (i.e., an overlay node) in the system 100 shown in figure 1. The computer system 800 includes one or more processors, such as processor 802, providing an execution platform for executing software.
[0096] Commands and data from the processor 802 are communicated over a communication bus 805. The computer system 800 also includes a main memory 804, such as a Random Access Memory (RAM), where software may be resident during runtime, and data storage 806. The data storage 806 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy diskette drive, a magnetic tape drive, a compact disk drive, etc., or a nonvolatile memory where a copy of the software may be stored. The data storage 806 may also include ROM (read only memory), EPROM (erasable, programmable ROM), or EEPROM (electrically erasable, programmable ROM). In addition to software for routing and other steps described herein, routing tables, network metrics, and other data may be stored in the main memory 804 and/or the data storage 806.
[0097] A user interfaces with the computer system 800 with one or more I/O devices 807, such as a keyboard, a mouse, a stylus, a display, and the like. A network interface 808 is provided for communicating with other nodes and computer systems.
[0098] One or more of the steps of the methods described herein and other steps described herein may be implemented as software embedded on a computer readable medium, such as the memory 804 and/or data storage 806, and executed on the computer system 800, for example, by the processor 802. The steps may be embodied by a computer program, which may exist in a variety of forms, both active and inactive. For example, they may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats for performing some of the steps. Any of the above may be embodied on a computer readable medium, which includes storage devices and signals, in compressed or uncompressed form. Examples of suitable computer readable storage devices include conventional computer system RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. Examples of computer readable signals, whether modulated using a carrier or not, are signals that a computer system hosting or running the computer program may be configured to access, including signals downloaded through the Internet or other networks. Concrete examples of the foregoing include distribution of the programs on a CD-ROM or via Internet download. In a sense, the Internet itself, as an abstract entity, is a computer readable medium. The same is true of computer networks in general. It is therefore to be understood that those functions enumerated below may be performed by any electronic device capable of executing the above-described functions.
[0099] While the embodiments have been described with reference to examples, those skilled in the art will be able to make various modifications to the described embodiments without departing from the scope of the claimed embodiments.