CN104580322B - Distributed data stream processing method and apparatus - Google Patents

Distributed data stream processing method and apparatus

Info

Publication number
CN104580322B
CN104580322B
Authority
CN
China
Prior art keywords
node
data flow
key value
load
hop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310513394.6A
Other languages
Chinese (zh)
Other versions
CN104580322A (en)
Inventor
何诚
李柏晴
黄群
刘勤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201310513394.6A priority Critical patent/CN104580322B/en
Priority to PCT/CN2014/078654 priority patent/WO2015058525A1/en
Publication of CN104580322A publication Critical patent/CN104580322A/en
Application granted granted Critical
Publication of CN104580322B publication Critical patent/CN104580322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to the field of data processing, and in particular to a distributed data stream processing method and apparatus, which address the problem that distributed stream processing techniques cannot load-balance data flows that share the same key value. The method of the embodiments of the present invention includes: a first node determines, according to the key value of a data flow that needs to be distributed to a next-hop working node, that a second node is the next-hop working node that processes the data flow corresponding to the key value; after determining that the second node meets a set load migration condition, the first node migrates the data flow that needs to be distributed to a next-hop working node from the second node to a third node for processing, and instructs the second node and the third node to synchronize the state information of the data flow corresponding to the key value. With this method, after determining that the downstream second node meets the set load migration condition, the first node can migrate the data flow corresponding to a key value handled by the second node to a third node whose current cumulative load is smaller, so that load balancing can be performed even for data flows with the same key value.

Description

Distributed data stream processing method and apparatus
Technical field
The present invention relates to the field of data processing, and in particular to a distributed data stream processing method and apparatus.
Background art
Data stream processing technology is widely used in many fields, for example financial management, network monitoring, communication data management, Web applications and sensor network data processing. These applications share one typical characteristic: the volume of data to be processed is very large and highly bursty, and when data arrives faster than the system can process it, the system becomes overloaded and its performance degrades. Load management has therefore become a focus of research in data stream processing.
Data stream processing technology includes centralized stream processing and distributed stream processing. In centralized stream processing, when the system detects an overload it selectively drops some data tuples to keep the system running; clearly, sacrificing data tuples in this way harms system performance. Because both the data sources and the applications themselves are distributed, distributed stream processing has become the focus of stream processing research: with distributed stream processing, load can be spread over the processing nodes and kept balanced among them.
In distributed stream processing, stateless operators mainly assign data tuples to processing nodes in a round-robin fashion; processing a stateless operator is a one-shot operation that keeps no state information, so the processing is fairly simple. For stateful operators, data flows with the same key value (Key) are usually assigned to the same node for processing.
In real networks the traffic distribution is highly skewed, and the data flows corresponding to a single key value may contribute a considerable amount of traffic. Data flows with the same key value therefore also need load balancing, which the distributed stream processing technique described above clearly cannot provide.
Summary of the invention
Embodiments of the present invention provide a distributed data stream processing method and apparatus, so as to solve the problem that distributed stream processing techniques cannot load-balance data flows with the same key value.
According to a first aspect, a distributed data stream processing method is provided, including:
determining, by a first node according to the key value of a data flow that needs to be distributed to a next-hop working node, that a second node is the next-hop working node that processes the data flow corresponding to the key value;
after determining that the second node meets a set load migration condition, migrating, by the first node, the data flow that needs to be distributed to a next-hop working node from the second node to a third node for processing, and instructing the second node and the third node to synchronize the state information of the data flow corresponding to the key value;
where the second node and the third node belong to the set of next-hop working nodes of the first node, and the cumulative load of the third node is smaller than the cumulative load of the second node.
With reference to the first aspect, in a first possible implementation, the third node is the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation, the set load migration condition includes:
the cumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the set load migration condition further includes:
the interval between the current time and the time at which the first node last performed data flow migration is not smaller than a set threshold.
With reference to the first aspect or any one of the first to third possible implementations of the first aspect, in a fourth possible implementation, the first node determines the cumulative load W of any next-hop working node of the first node within a set period according to the following formula:
W = c × W' + (1 - c) × y;
where y is the load that the first node distributes to the next-hop working node within the set period, W' is the cumulative load of the next-hop working node at the end of the period preceding the set period, and c is a constant with 0 < c < 1.
With reference to the first aspect or any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation, the instructing, by the first node, the second node and the third node to synchronize the state information of the data flow corresponding to the key value includes:
sending, by the first node, a status-information move-out instruction containing the key value to the second node, and sending a status-information move-in instruction containing the key value to the third node;
where the status-information move-out instruction instructs the second node to send the locally generated first state information of the data flow corresponding to the key value to a coordinator that manages the working nodes, and the status-information move-in instruction instructs the third node to obtain the first state information of the data flow corresponding to the key value from the coordinator that manages the working nodes and to merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
With reference to the first aspect or any one of the first to fifth possible implementations of the first aspect, in a sixth possible implementation, before the first node determines that the second node meets the set load migration condition, the method further includes:
adjusting, by the first node, the set load migration condition after receiving, from the coordinator that manages the working nodes, information indicating that the second node is overloaded.
With reference to the first aspect or any one of the first to sixth possible implementations of the first aspect, in a seventh possible implementation, the method further includes:
when the first node determines, according to the key value of a data flow that needs to be distributed to a next-hop working node, that there is currently no next-hop working node processing the data flow corresponding to the key value, distributing the data flow that needs to be distributed to a next-hop working node to the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node for processing.
According to a second aspect, a distributed data stream processing method is provided, including:
receiving, by a second node, a status-information move-out instruction containing a set key value sent by a first node;
determining, by the second node according to the status-information move-out instruction, the locally generated first state information of the data flow corresponding to the key value, and sending the determined first state information to a coordinator that manages the working nodes.
According to a third aspect, a distributed data stream processing method is provided, including:
receiving, by a third node, a status-information move-in instruction containing a set key value sent by a first node;
obtaining, by the third node according to the status-information move-in instruction, from a coordinator that manages the working nodes, the first state information of the data flow corresponding to the key value that the second node sent to the coordinator, and merging the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
According to a fourth aspect, a distributed data stream processing apparatus is provided, the apparatus belonging to a first node and including:
a determining module, configured to determine, according to the key value of a data flow that needs to be distributed to a next-hop working node of the first node, that a second node is the next-hop working node that processes the data flow corresponding to the key value;
a migration module, configured to, after determining that the second node meets a set load migration condition, migrate the data flow that needs to be distributed to a next-hop working node from the second node to a third node for processing, and instruct the second node and the third node to synchronize the state information of the data flow corresponding to the key value;
where the second node and the third node belong to the set of next-hop working nodes of the first node, and the cumulative load of the third node is smaller than the cumulative load of the second node.
With reference to the fourth aspect, in a first possible implementation, the third node is the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node.
With reference to the fourth aspect or the first possible implementation of the fourth aspect, in a second possible implementation, the set load migration condition includes:
the cumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold.
With reference to the second possible implementation of the fourth aspect, in a third possible implementation, the set load migration condition further includes:
the interval between the current time and the time at which the apparatus last performed data flow migration is not smaller than a set threshold.
With reference to the fourth aspect or any one of the first to third possible implementations of the fourth aspect, in a fourth possible implementation, the migration module is specifically configured to determine the cumulative load W of any next-hop working node of the first node within a set period according to the following formula:
W = c × W' + (1 - c) × y;
where y is the load that the apparatus distributes to the next-hop working node within the set period, W' is the cumulative load of the next-hop working node at the end of the period preceding the set period, and c is a constant with 0 < c < 1.
With reference to the fourth aspect or any one of the first to fourth possible implementations of the fourth aspect, in a fifth possible implementation, the migration module is specifically configured to send a status-information move-out instruction containing the key value to the second node, and send a status-information move-in instruction containing the key value to the third node;
where the status-information move-out instruction instructs the second node to send the locally generated first state information of the data flow corresponding to the key value to a coordinator that manages the working nodes, and the status-information move-in instruction instructs the third node to obtain the first state information of the data flow corresponding to the key value from the coordinator that manages the working nodes and to merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
With reference to the fourth aspect or any one of the first to fifth possible implementations of the fourth aspect, in a sixth possible implementation, the migration module is further configured to adjust the set load migration condition if, before it is determined that the second node meets the set load migration condition, information indicating that the second node is overloaded is received from the coordinator that manages the working nodes.
With reference to the fourth aspect or any one of the first to sixth possible implementations of the fourth aspect, in a seventh possible implementation, the migration module is further configured to: when it is determined, according to the key value of a data flow that needs to be distributed to a next-hop working node, that there is currently no next-hop working node processing the data flow corresponding to the key value, distribute the data flow that needs to be distributed to a next-hop working node to the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node for processing.
According to a fifth aspect, a distributed data stream processing apparatus is provided, including:
a receiving module, configured to receive a status-information move-out instruction containing a set key value sent by a first node, and forward the status-information move-out instruction to a sending module;
the sending module, configured to determine, according to the status-information move-out instruction received by the receiving module, the locally generated first state information of the data flow corresponding to the key value, and send the determined first state information to a coordinator that manages the working nodes.
According to a sixth aspect, a distributed data stream processing apparatus is provided, including:
a receiving module, configured to receive a status-information move-in instruction containing a set key value sent by a first node, and forward the status-information move-in instruction to an obtaining module;
the obtaining module, configured to obtain, according to the status-information move-in instruction received by the receiving module, from a coordinator that manages the working nodes, the first state information of the data flow corresponding to the key value that the second node sent to the coordinator, and merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
With the distributed data stream processing method provided in the first aspect above, after determining that a downstream second node meets the set load migration condition, the first node can migrate the data flow corresponding to a key value handled by the second node to a third node whose current cumulative load is smaller, and instruct the second node and the third node to synchronize their state information, so that load balancing can be performed even for data flows with the same key value.
Brief description of the drawings
Fig. 1 is a flowchart of a distributed data stream processing method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a distributed data stream processing method according to Embodiment 2 of the present invention;
Fig. 3 is a flowchart of a distributed data stream processing method according to Embodiment 3 of the present invention;
Fig. 4 is a schematic diagram of the working nodes in the distributed system of the embodiments of the present invention and the functional units deployed on each working node;
Fig. 5 is a flowchart of a first node performing load migration in an embodiment of the present invention;
Fig. 6 is a schematic diagram of state information synchronization between working nodes in an embodiment of the present invention;
Fig. 7 is a flowchart, corresponding to Fig. 6, of state information synchronization in an embodiment of the present invention;
Fig. 8 is a schematic diagram of overload feedback by a working node in the distributed system of an embodiment of the present invention;
Fig. 9 is a flowchart, corresponding to Fig. 8, of overload feedback by a working node in the distributed system of an embodiment of the present invention;
Fig. 10 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 1 of the present invention;
Fig. 11 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 2 of the present invention;
Fig. 12 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 3 of the present invention;
Fig. 13 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 4 of the present invention;
Fig. 14 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 5 of the present invention;
Fig. 15 is a schematic diagram of a distributed data stream processing apparatus according to Embodiment 6 of the present invention.
Detailed description of embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the distributed data stream processing method according to Embodiment 1 of the present invention includes the following steps:
S101: a first node determines, according to the key value of a data flow that needs to be distributed to a next-hop working node, that a second node is the next-hop working node that processes the data flow corresponding to the key value;
S102: after determining that the second node meets a set load migration condition, the first node migrates the data flow that needs to be distributed to a next-hop working node from the second node to a third node for processing, and instructs the second node and the third node to synchronize the state information of the data flow corresponding to the key value;
where the second node and the third node belong to the set of next-hop working nodes of the first node, and the cumulative load of the third node is smaller than the cumulative load of the second node.
With the above method, after determining that the downstream second node meets the set load migration condition, the first node can migrate the data flow corresponding to a key value handled by the second node to a third node whose current cumulative load is smaller, and instruct the second node and the third node to synchronize their state information, so that load balancing can be performed even for data flows with the same key value.
Optionally, the third node is the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node.
Optionally, the method further includes:
when the first node determines, according to the key value of a data flow that needs to be distributed to a next-hop working node, that there is currently no next-hop working node processing the data flow corresponding to the key value, distributing the data flow that needs to be distributed to a next-hop working node to the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node for processing.
In a specific implementation, the data flow that the first node distributes to a next-hop working node may be an original data flow generated locally, or a data flow generated by another working node and forwarded by the first node. According to the key value of the data flow that needs to be distributed to a next-hop working node, the first node judges whether the data flow corresponding to the key value has already been assigned to some next-hop working node. If it determines that the data flow corresponding to the key value has been assigned to the second node, it then judges whether the second node meets the load migration condition; if so, it migrates the data flow corresponding to the key value to a third node whose current cumulative load is smaller than that of the first node's other next-hop working nodes, where the third node may specifically be the working node with the smallest cumulative load among all next-hop working nodes of the current first node; if not, it continues to distribute the data flow corresponding to the key value to the second node for processing. If the data flow corresponding to the key value has not yet been assigned to any next-hop working node, the data flow is distributed to the working node that currently has the smallest cumulative load for processing. The data flow corresponding to each key value may be called a data item.
Optionally, the set load migration condition includes one or more of the following conditions:
Condition 1: the cumulative load of the second node exceeds a set threshold;
Condition 2: the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold.
In addition to one or both of the above two conditions, the load migration condition may further include:
Condition 3: the interval between the current time and the time at which the first node last performed data flow migration is not smaller than a set threshold.
In the embodiments of the present invention, when Condition 1 and/or Condition 2 are met, the data flow corresponding to the key value may be migrated from the second node to the third node that currently has the smallest cumulative load for processing. In a specific implementation, in order to bound the migration overhead, when Condition 1 and/or Condition 2 are met it can further be checked whether the interval between the current time and the time at which the first node last performed data flow migration is smaller than the set threshold; only if that interval reaches or exceeds the set threshold is the data flow corresponding to the key value migrated from the second node to the third node that currently has the smallest cumulative load for processing.
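As an illustration only, the following minimal Python sketch shows one way the three conditions above could be combined into a single migration check; the function name, the threshold values and the argument names are assumptions made for this sketch and are not part of the claimed method.

    import time

    # Hypothetical thresholds; the concrete values are left to the implementation.
    LOAD_THRESHOLD = 1000.0       # Condition 1: absolute cumulative load of the second node
    RATIO_THRESHOLD = 2.0         # Condition 2: load(second) / load(third)
    DIFF_THRESHOLD = 500.0        # Condition 2: load(second) - load(third)
    MIN_MIGRATION_INTERVAL = 5.0  # Condition 3: seconds since the last migration

    def should_migrate(second_load, third_load, last_migration_time):
        """Return True if the second node meets the set load migration condition."""
        overloaded = second_load > LOAD_THRESHOLD                                # Condition 1
        skewed = ((third_load > 0 and second_load / third_load > RATIO_THRESHOLD)
                  or (second_load - third_load > DIFF_THRESHOLD))                # Condition 2
        too_soon = time.time() - last_migration_time < MIN_MIGRATION_INTERVAL    # Condition 3
        return (overloaded or skewed) and not too_soon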
The cumulative loads of the second node and the third node above may be calculated according to the following formula.
Optionally, the first node determines the cumulative load W of any next-hop working node of the first node within a set period according to the following formula:
W = c × W' + (1 - c) × y;
where y is the load that the first node distributes to the next-hop working node within the set period, W' is the cumulative load of the next-hop working node at the end of the period preceding the set period, and c is a constant with 0 < c < 1.
For example, for the second node above, y in the formula is the load that the first node distributes to the second node within the set period; this load may consist of the data flow corresponding to one key value or the data flows corresponding to several key values. For any one of these key values, that is, for any one data item, assume that the load of the data item at the start of the set period is w' and that the newly added load of the data item within the set period is v; then at the end of the set period the load of the data item is updated to w = c × w' + (1 - c) × v.
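A minimal sketch of this exponentially weighted load accounting, assuming hypothetical data structures (a per-node total and a per-key-value, i.e. per-data-item, entry); the value of the decay constant c is chosen here only for illustration.

    C = 0.7  # the constant c, 0 < c < 1 (value chosen only for illustration)

    class LoadTracker:
        """Tracks the cumulative load of one next-hop working node."""
        def __init__(self):
            self.node_total = 0.0      # W' for the node as a whole
            self.item_loads = {}       # w' per key value, i.e. per data item
            self.new_node_load = 0.0   # y: load distributed in the current set period
            self.new_item_loads = {}   # v: newly added load per data item

        def record(self, key, amount):
            """Account load sent to this node for the data item identified by key."""
            self.new_node_load += amount
            self.new_item_loads[key] = self.new_item_loads.get(key, 0.0) + amount

        def end_of_period(self):
            """Apply W = c * W' + (1 - c) * y at the end of the set period."""
            self.node_total = C * self.node_total + (1 - C) * self.new_node_load
            for key in set(self.item_loads) | set(self.new_item_loads):
                w_prev = self.item_loads.get(key, 0.0)
                v = self.new_item_loads.get(key, 0.0)
                self.item_loads[key] = C * w_prev + (1 - C) * v   # w = c*w' + (1-c)*v
            self.new_node_load = 0.0
            self.new_item_loads = {}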
Optionally, the instructing, by the first node, the second node and the third node to synchronize the state information of the data flow corresponding to the key value includes:
sending, by the first node, a status-information move-out instruction containing the key value to the second node, and sending a status-information move-in instruction containing the key value to the third node;
where the status-information move-out instruction instructs the second node to send the locally generated first state information of the data flow corresponding to the key value to a coordinator that manages the working nodes, and the status-information move-in instruction instructs the third node to obtain the first state information of the data flow corresponding to the key value from the coordinator that manages the working nodes and to merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
In a specific implementation, when the first node migrates the data flow corresponding to a key value currently processed by the second node, that is, a data item, to the third node for processing, the state information for that data item needs to be synchronized between the second node and the third node. Specifically, in the embodiments of the present invention the coordinator that manages the working nodes is used as the relay for state synchronization between the two nodes: the second node sends the previously stored first state information of the data item to the coordinator, so that the third node can obtain the first state information from the coordinator; after obtaining it, the third node merges the first state information with the second state information newly generated for the data item at the third node, which completes the synchronization of the state information of the migrated data item. In a specific implementation, the first node may send the status-information move-out instruction and the status-information move-in instruction to the second node and the third node respectively. The instructions may tell the second node and the third node explicitly to synchronize their state information, or they may do so implicitly. For example, the first node may directly notify the second node to move out the first state information of the data flow corresponding to the set key value and send it to the coordinator, or it may notify the second node that the data flow corresponding to the key value has been migrated from the second node to another working node for processing, in which case the second node, after receiving the notification and following the agreement among the working nodes, sends the locally generated first state information of the data flow corresponding to the key value to the coordinator. Correspondingly, the first node may directly notify the third node to move in the first state information of the data flow corresponding to the set key value from the coordinator and merge it with the second state information newly generated at the third node, or it may notify the third node that the data flow corresponding to the key value has been migrated from another working node to the third node for processing, in which case the third node, after receiving the notification and following the agreement among the working nodes, obtains the first state information from the coordinator and merges it with the locally generated second state information of the data flow corresponding to the key value.
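The following sketch illustrates, under assumed interfaces, how this coordinator-relayed state handover could look; the class and function names (Coordinator, move_out, move_in) are hypothetical, and the merge is shown as a simple per-field counter sum purely as an example of combining first and second state information.

    class Coordinator:
        """Coordinator that manages the working nodes and relays migrated state."""
        def __init__(self):
            self._parked_state = {}

        def deposit(self, key, state):
            self._parked_state[key] = state        # first state information, from the second node

        def withdraw(self, key):
            return self._parked_state.pop(key, {})

    def move_out(second_node_state, key, coordinator):
        """Second node: extract the locally generated first state information and send it."""
        first_state = second_node_state.pop(key, {})
        coordinator.deposit(key, first_state)

    def move_in(third_node_state, key, coordinator):
        """Third node: fetch the first state information and merge it with the local second state."""
        first_state = coordinator.withdraw(key)
        second_state = third_node_state.get(key, {})
        merged = dict(first_state)
        for field, value in second_state.items():  # example merge: sum the per-field counters
            merged[field] = merged.get(field, 0) + value
        third_node_state[key] = merged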
Optionally, before the first node determines that the second node meets the set load migration condition, the method further includes:
adjusting, by the first node, the set load migration condition after receiving, from the coordinator that manages the working nodes, information indicating that the second node is overloaded.
In a specific implementation, in addition to each upstream working node monitoring the load of its downstream working nodes, the coordinator that manages the working nodes can also monitor all the working nodes globally. Specifically, each working node monitors its own data receiving rate; when its data receiving rate exceeds a set threshold, the node considers itself overloaded, starts the overload feedback mechanism and reports its overload information to the coordinator. The coordinator performs global scheduling and notifies all or some of the upstream working nodes of the overloaded working node, and these upstream working nodes can adjust the load migration condition above according to the overload information. For example, if the first node receives, before determining that the second node meets the set load migration condition, information sent by the coordinator that manages the working nodes indicating that the second node is overloaded, the first node adjusts the set load migration condition so that the second node meets it; specifically, it may lower the set thresholds of the three conditions in the load migration condition above, or adjust the value of c in the cumulative load formula above, so that the overloaded working node meets the set load migration condition.
The way the second node and the third node respond to the first node, and the corresponding data stream processing flows, are described separately below.
As shown in Fig. 2, the distributed data stream processing method according to Embodiment 2 of the present invention includes:
S201: a second node receives a status-information move-out instruction containing a set key value sent by a first node;
S202: the second node determines, according to the status-information move-out instruction, the locally generated first state information of the data flow corresponding to the key value, and sends the determined first state information to a coordinator that manages the working nodes.
Optionally, the third node is the working node with the smallest cumulative load among all next-hop working nodes of the current first node.
In a specific implementation, according to the key value contained in the status-information move-out instruction sent by the first node, the second node extracts the first state information of the data flow corresponding to the key value from the locally stored state information and sends the extracted first state information to the coordinator that manages the working nodes; the third node then extracts the first state information from the coordinator and merges it with the second state information newly generated at the third node, completing the merge of the state information for the migrated data item.
As shown in Fig. 3, the distributed data stream processing method according to Embodiment 3 of the present invention includes:
S301: a third node receives a status-information move-in instruction containing a set key value sent by a first node;
S302: the third node obtains, according to the status-information move-in instruction, from a coordinator that manages the working nodes, the first state information of the data flow corresponding to the key value that the second node sent to the coordinator, and merges the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
Optionally, the third node is the working node with the smallest cumulative load among all next-hop working nodes of the current first node.
To further illustrate the data stream processing methods of the embodiments of the present invention, some more specific embodiments are described in detail below.
As shown in Fig. 4, which is a schematic diagram of the working nodes in the distributed system of the embodiments of the present invention and the functional units deployed on each working node, the distributed system in the embodiments of the present invention includes one coordinator and multiple working nodes, where the coordinator is responsible for connecting the working nodes and managing them. Each working node contains multiple functional units, specifically one container, one dispatcher, one collector, and one or more analyzers, where data channels exist between the dispatcher and the analyzers and between the analyzers and the collector, and control channels exist between the container and the other functional units. The functional units act as follows: the dispatcher receives externally sent data flows, decodes the received data flows, and distributes the decoded data flows to the different analyzers, where a received data flow may be an original data flow generated by this working node or a data flow generated by another working node; each analyzer performs analysis operations on the data it receives and forwards the analysed data to the collector; the collector aggregates and outputs the results produced by the analyzers, and may also generate new output data flows and distribute them to next-hop working nodes; the container manages the dispatcher, the analyzers and the collector, and is also the interface through which the working node connects to the coordinator.
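A minimal structural sketch of one such working node, with hypothetical class and method names; it only mirrors the decomposition into dispatcher, analyzers, collector and container described above, and the decoding and analyzer-selection logic are placeholders.

    class Analyzer:
        """Performs the analysis operation on decoded records (application-defined)."""
        def analyze(self, record):
            return record  # placeholder analysis

    class Collector:
        """Aggregates analyzer output; may also emit new output flows to next-hop nodes."""
        def __init__(self):
            self.results = []
        def collect(self, result):
            self.results.append(result)

    class Dispatcher:
        """Receives and decodes external data flows and hands them to an analyzer."""
        def __init__(self, analyzers):
            self.analyzers = analyzers
        def dispatch(self, raw, key):
            record = raw  # decoding omitted in this sketch
            analyzer = self.analyzers[hash(key) % len(self.analyzers)]  # data channel
            return analyzer.analyze(record)

    class Container:
        """Manages the other functional units; also the node's interface to the coordinator."""
        def __init__(self, dispatcher, analyzers, collector, coordinator=None):
            self.dispatcher = dispatcher
            self.analyzers = analyzers
            self.collector = collector
            self.coordinator = coordinator  # control channel to the coordinator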
To ensure the reliability of the coordinator, in the embodiments of the present invention the coordinator may be built on an open-source distributed coordination system such as a ZooKeeper cluster. Although the coordinator is a stateless object and may crash or fail in other ways, as long as the ZooKeeper cluster is healthy the coordinator can be restarted quickly. Data transmission between working nodes may use an open-source high-performance communication library such as ZeroMQ. Alternatively, the coordinator in the embodiments of the present invention may use a single-node architecture, although this may reduce the coordinator's fault tolerance; in that case the working nodes may transmit data directly over TCP (Transmission Control Protocol) or UDP (User Datagram Protocol).
As shown in Fig. 5, the flow with which the first node performs load migration in an embodiment of the present invention includes the following steps (an illustrative sketch of this flow is given after the steps):
S501: the first node determines the key value of a data flow that needs to be distributed to a next-hop working node;
S502: the first node judges, according to the determined key value, whether there is currently a second node processing the data flow corresponding to the key value; if so, step S503 is performed; otherwise, step S507 is performed;
S503: the first node judges, according to the set load migration condition, whether load migration is needed; if so, step S504 is performed; otherwise, step S505 is performed;
The load migration condition includes: the cumulative load of the second node exceeds a set threshold; the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold; the interval between the current time and the time at which the first node last performed data flow migration is not smaller than a set threshold. The formula for the cumulative load is described above with reference to Fig. 1 and is not repeated here.
S504: the first node migrates the data flow corresponding to the key value to the third node that currently has the smallest cumulative load for processing, instructs the second node and the third node to synchronize the state information for the key value, and proceeds to step S508;
S505: the first node distributes the data flow corresponding to the key value to the second node to continue processing, and proceeds to step S506;
S506: the first node updates the stored cumulative load of the second node;
S507: the first node distributes the data flow corresponding to the key value to the third node that currently has the smallest cumulative load for processing, and proceeds to step S508;
S508: the first node updates the stored cumulative load of the third node.
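A compact sketch of the S501-S508 flow, under the same assumptions as the earlier snippets: should_migrate and LoadTracker are the hypothetical helpers introduced above, and the attribute and method names on FirstNode (key_owner, notify_move_out, send, and so on) are invented for this illustration only.

    import time

    class FirstNode:
        """Illustrative first-node routing state (all names are assumptions, not the patent's)."""
        def __init__(self, next_hops):
            self.next_hops = next_hops                   # downstream working node ids
            self.loads = {n: LoadTracker() for n in next_hops}
            self.key_owner = {}                          # key value -> assigned next-hop node
            self.last_migration = 0.0

        def route(self, key, data_flow):
            third = min(self.next_hops, key=lambda n: self.loads[n].node_total)  # min-load node
            second = self.key_owner.get(key)             # S502: is the key already assigned?
            if second is None:
                target = third                           # S507: assign to the min-load node
            elif should_migrate(self.loads[second].node_total,
                                self.loads[third].node_total,
                                self.last_migration):    # S503: set load migration condition
                self.notify_move_out(second, key)        # S504: state sync, see Fig. 6 and Fig. 7
                self.notify_move_in(third, key)
                self.last_migration = time.time()
                target = third
            else:
                target = second                          # S505: keep using the second node
            self.key_owner[key] = target
            self.send(target, data_flow)
            self.loads[target].record(key, len(data_flow))  # S506/S508: update cumulative load

        def notify_move_out(self, node, key): pass       # stubs standing in for real messaging
        def notify_move_in(self, node, key): pass
        def send(self, node, data_flow): pass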
As shown in Fig. 6, state information is synchronized between working nodes in an embodiment of the present invention; the corresponding flow, shown in Fig. 7, includes:
S701: the collector of the first node determines that the data flow corresponding to a set key value, currently processed by the second node, needs to be migrated to the third node for processing; the flow then proceeds to step S702 and step S706 respectively;
S702: the collector of the first node sends a status-information move-out instruction for the key value to the dispatcher of the second node;
S703: the dispatcher of the second node finds the analyzer responsible for processing the data flow corresponding to the key value and forwards the received status-information move-out instruction to that analyzer;
S704: the analyzer of the second node extracts the locally stored first state information corresponding to the key value and sends the first state information to the container of the second node;
S705: the container of the second node sends the first state information to the coordinator;
S706: the collector of the first node sends a status-information move-in instruction for the key value to the dispatcher of the third node;
S707: the dispatcher of the third node finds the analyzer responsible for processing the data flow corresponding to the key value and forwards the status-information move-in instruction to that analyzer;
S708: the analyzer of the third node sends a status-information move-in request for the key value to the container of the third node;
In a specific implementation, after sending the status-information move-in request to the container, the analyzer of the third node continues to process data flows, including the subsequent data flow corresponding to the key value and the data flows corresponding to other key values;
S709: the container of the third node obtains the first state information corresponding to the key value from the coordinator;
S710: the container of the third node sends the moved-in first state information to the analyzer that processes the data flow corresponding to the key value, and that analyzer merges the first state information corresponding to the key value with the second state information newly generated on the third node.
As shown in Fig. 8, a working node in the distributed system of an embodiment of the present invention performs overload feedback; the corresponding flow, shown in Fig. 9, can be used to assist the load migration process described above in connection with Fig. 4, so as to monitor and adjust the load of every working node in the system globally (an illustrative sketch follows the steps below). The flow includes:
S901: the dispatcher of the second node sends overload information to the container of the second node;
In a specific implementation, the dispatcher of a working node A may decide whether the node is overloaded according to the utilization of the data connections between the dispatcher and the analyzers; for example, working node A may be determined to be overloaded when a data channel between the dispatcher and an analyzer stays busy for longer than a set threshold. Specifically, the dispatcher of working node A may measure in real time the occupancy of the ring buffers used to send data to the analyzers, and determine that working node A is overloaded if all the ring buffers remain full throughout a set period;
S902: the container of the second node reports the information that the second node is overloaded to the coordinator;
S903: the coordinator notifies all or some of the upstream working nodes of the second node of the information that the second node is overloaded, where these upstream working nodes include the first node;
S904: after obtaining the information that the second node is overloaded from the coordinator, the container of the first node notifies the collector of the first node of that information;
S905: the collector of the first node adjusts the load migration condition so that the second node meets the set load migration condition.
Based on the same inventive concept, the embodiments of the present invention further provide data stream processing apparatuses corresponding to the data stream processing methods. Since the principle with which these apparatuses solve the problem is similar to that of the data stream processing methods in the embodiments of the present invention, the implementation of the apparatuses can refer to the implementation of the methods, and repeated descriptions are omitted.
As shown in Fig. 10, the distributed data stream processing apparatus provided by Embodiment 1 of the present invention belongs to a first node and includes:
a determining module 101, configured to determine, according to the key value of a data flow that needs to be distributed to a next-hop working node of the first node, that a second node is the next-hop working node that processes the data flow corresponding to the key value;
a migration module 102, configured to, after determining that the second node meets a set load migration condition, migrate the data flow that needs to be distributed to a next-hop working node from the second node to a third node for processing, and instruct the second node and the third node to synchronize the state information of the data flow corresponding to the key value;
where the second node and the third node belong to the set of next-hop working nodes of the first node, and the cumulative load of the third node is smaller than the cumulative load of the second node.
Optionally, the third node is the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node.
Optionally, the set load migration condition includes:
the cumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold.
Optionally, the set load migration condition further includes: the interval between the current time and the time at which the apparatus last performed data flow migration is not smaller than a set threshold.
Optionally, the migration module 102 is specifically configured to determine the cumulative load W of any next-hop working node of the first node within a set period according to the following formula:
W = c × W' + (1 - c) × y;
where y is the load that the apparatus distributes to the next-hop working node within the set period, W' is the cumulative load of the next-hop working node at the end of the period preceding the set period, and c is a constant with 0 < c < 1.
Optionally, the migration module 102 is specifically configured to send a status-information move-out instruction containing the key value to the second node, and send a status-information move-in instruction containing the key value to the third node;
where the status-information move-out instruction instructs the second node to send the locally generated first state information of the data flow corresponding to the key value to a coordinator that manages the working nodes, and the status-information move-in instruction instructs the third node to obtain the first state information of the data flow corresponding to the key value from the coordinator that manages the working nodes and to merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
Optionally, the migration module 102 is further configured to adjust the set load migration condition if, before it is determined that the second node meets the set load migration condition, information indicating that the second node is overloaded is received from the coordinator that manages the working nodes.
Optionally, the determining module 101 is further configured to determine, according to the key value of a data flow that needs to be distributed to a next-hop working node, that there is currently no next-hop working node processing the data flow corresponding to the key value; and the migration module 102 is further configured to distribute the data flow that needs to be distributed to a next-hop working node to the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node for processing.
As shown in Fig. 11, the distributed data stream processing apparatus provided by Embodiment 2 of the present invention belongs to the second node described above and includes:
a receiving module 111, configured to receive a status-information move-out instruction containing a set key value sent by a first node, and forward the status-information move-out instruction to a sending module 112;
the sending module 112, configured to determine, according to the status-information move-out instruction received by the receiving module 111, the locally generated first state information of the data flow corresponding to the key value, and send the determined first state information to a coordinator that manages the working nodes.
As shown in Fig. 12, the distributed data stream processing apparatus provided by Embodiment 3 of the present invention belongs to the third node described above and includes:
a receiving module 121, configured to receive a status-information move-in instruction containing a set key value sent by a first node, and forward the status-information move-in instruction to an obtaining module 122;
the obtaining module 122, configured to obtain, according to the status-information move-in instruction received by the receiving module 121, from a coordinator that manages the working nodes, the first state information of the data flow corresponding to the key value that the second node sent to the coordinator, and merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
As shown in Fig. 13, the distributed data stream processing apparatus provided by Embodiment 4 of the present invention belongs to a first node and includes:
a processor 131, configured to determine, according to the key value of a data flow that needs to be distributed to a next-hop working node of the first node, that a second node is the next-hop working node that processes the data flow corresponding to the key value; after determining that the second node meets a set load migration condition, determine that the data flow that needs to be distributed to a next-hop working node is to be migrated from the second node to a third node for processing; determine indication information that instructs the second node and the third node to synchronize the state information of the data flow corresponding to the key value; and pass the determined indication information to a transceiver 132; where the second node and the third node belong to the set of next-hop working nodes of the first node, and the cumulative load of the third node is smaller than the cumulative load of the second node;
the transceiver 132, configured to send the indication information to the second node and the third node.
Optionally, the third node is the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node.
Optionally, the set load migration condition includes:
the cumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the cumulative load of the second node and the cumulative load of the third node exceeds a set threshold.
Optionally, the set load migration condition further includes: the interval between the current time and the time at which the apparatus last performed data flow migration is not smaller than a set threshold.
Optionally, the processor 131 is specifically configured to determine the cumulative load W of any next-hop working node of the first node within a set period according to the following formula:
W = c × W' + (1 - c) × y;
where y is the load that the apparatus distributes to the next-hop working node within the set period, W' is the cumulative load of the next-hop working node at the end of the period preceding the set period, and c is a constant with 0 < c < 1.
Optionally, the transceiver 132 is specifically configured to send a status-information move-out instruction containing the key value to the second node, and send a status-information move-in instruction containing the key value to the third node;
where the status-information move-out instruction instructs the second node to send the locally generated first state information of the data flow corresponding to the key value to a coordinator that manages the working nodes, and the status-information move-in instruction instructs the third node to obtain the first state information of the data flow corresponding to the key value from the coordinator that manages the working nodes and to merge the obtained first state information with the locally generated second state information of the data flow corresponding to the key value.
Optionally, the processor 131 is further configured to adjust the set load migration condition if, before it is determined that the second node meets the set load migration condition, information indicating that the second node is overloaded is received through the transceiver 132 from the coordinator that manages the working nodes.
Optionally, the processor 131 is further configured to: when it is determined, according to the key value of a data flow that needs to be distributed to a next-hop working node, that there is currently no next-hop working node processing the data flow corresponding to the key value, distribute the data flow that needs to be distributed to a next-hop working node to the working node with the smallest cumulative load in the current set of next-hop working nodes of the first node for processing.
As shown in Fig. 14, the distributed data stream processing apparatus provided by Embodiment 5 of the present invention belongs to the second node described above and includes:
a transceiver 141, configured to receive a status-information move-out instruction containing a set key value sent by a first node, and forward the status-information move-out instruction to a processor 142;
the processor 142, configured to determine, according to the status-information move-out instruction received by the transceiver 141, the locally generated first state information of the data flow corresponding to the key value, and send the determined first state information through the transceiver 141 to a coordinator that manages the working nodes.
As shown in Figure 15, which is a schematic diagram of the distributed data flow processing apparatus provided by Embodiment 6 of the present invention, the apparatus belongs to the above-mentioned third node and includes:
Transceiver 151, configured to receive the status-information move-in instruction containing the set key value sent by the first node, and to forward the status-information move-in instruction to processor 152;
Processor 152, configured to: according to the status-information move-in instruction received by transceiver 151, obtain, through transceiver 151 and from the coordinator that manages each working node, the first status information of the data flow corresponding to the key value that the second node has sent to the coordinator, and merge the obtained first status information with the locally generated second status information of the data flow corresponding to the key value.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) that contain computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the apparatus (system), and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device therefore provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, persons skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, persons skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of the claims of the present invention and their equivalent technologies.

Claims (26)

1. A distributed data flow processing method, characterized in that the method comprises:
a first node, according to a key value of a data flow that needs to be distributed to a next-hop working node, determining that a second node is the next-hop working node that processes the data flow corresponding to the key value;
after determining that the second node meets a set load migration condition, the first node migrating the data flow that needs to be distributed to the next-hop working node from the second node to a third node for processing, and instructing the second node and the third node to synchronize status information of the data flow corresponding to the key value;
wherein the second node and the third node belong to a next-hop working node set of the first node, and an accumulative load of the third node is less than an accumulative load of the second node;
before determining that the second node meets the set load migration condition, the method further comprises:
after receiving information that is sent by a coordinator managing each working node and that indicates that the second node is overloaded, the first node adjusting the set load migration condition so that the second node meets the load migration condition.
2. The method according to claim 1, characterized in that the third node is the working node with the minimum accumulative load in the current next-hop working node set of the first node.
3. The method according to claim 1, characterized in that the set load migration condition comprises:
the accumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the accumulative load of the second node and the accumulative load of the third node exceeds a set threshold.
4. The method according to claim 2, characterized in that the set load migration condition comprises:
the accumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the accumulative load of the second node and the accumulative load of the third node exceeds a set threshold.
5. The method according to claim 3, characterized in that the set load migration condition further comprises:
the time interval between the current time and the time at which the first node last performed data flow migration processing is not less than a set threshold.
6. The method according to claim 4, characterized in that the set load migration condition further comprises:
the time interval between the current time and the time at which the first node last performed data flow migration processing is not less than a set threshold.
7. The method according to any one of claims 1 to 6, characterized in that the first node determines, according to the following formula, the accumulative load W of any one next-hop working node of the first node within a set period of time:
W = c × W' + (1 − c) × y;
wherein y is the load that the first node distributes to the next-hop working node within the set period of time, W' is the accumulative load of the next-hop working node at the end of the period preceding the set period of time, and c is a constant with 0 < c < 1.
8. The method according to any one of claims 1 to 6, characterized in that the first node instructing the second node and the third node to synchronize the status information of the data flow corresponding to the key value comprises:
the first node sending a status-information move-out instruction containing the key value to the second node, and sending a status-information move-in instruction containing the key value to the third node;
wherein the status-information move-out instruction instructs the second node to send first status information of the locally generated data flow corresponding to the key value to the coordinator managing each working node; the status-information move-in instruction instructs the third node to obtain the first status information of the data flow corresponding to the key value from the coordinator managing each working node, and to merge the obtained first status information with locally generated second status information of the data flow corresponding to the key value.
9. The method according to claim 7, characterized in that the first node instructing the second node and the third node to synchronize the status information of the data flow corresponding to the key value comprises:
the first node sending a status-information move-out instruction containing the key value to the second node, and sending a status-information move-in instruction containing the key value to the third node;
wherein the status-information move-out instruction instructs the second node to send first status information of the locally generated data flow corresponding to the key value to the coordinator managing each working node; the status-information move-in instruction instructs the third node to obtain the first status information of the data flow corresponding to the key value from the coordinator managing each working node, and to merge the obtained first status information with locally generated second status information of the data flow corresponding to the key value.
10. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
the first node, according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distributing the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
11. The method according to claim 7, characterized in that the method further comprises:
the first node, according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distributing the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
12. The method according to claim 8, characterized in that the method further comprises:
the first node, according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distributing the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
13. The method according to claim 9, characterized in that the method further comprises:
the first node, according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distributing the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
14. A distributed data flow processing apparatus, characterized in that the apparatus belongs to a first node and comprises:
a determining module, configured to determine, according to a key value of a data flow that needs to be distributed to a next-hop working node of the first node, that a second node is the next-hop working node that processes the data flow corresponding to the key value;
a transferring module, configured to: after determining that the second node meets a set load migration condition, migrate the data flow that needs to be distributed to the next-hop working node from the second node to a third node for processing, and instruct the second node and the third node to synchronize status information of the data flow corresponding to the key value;
wherein the second node and the third node belong to a next-hop working node set of the first node, and an accumulative load of the third node is less than an accumulative load of the second node;
the transferring module is further configured to: before determining that the second node meets the set load migration condition, adjust the set load migration condition if information that is sent by a coordinator managing each working node and that indicates that the second node is overloaded is received, so that the second node meets the load migration condition.
15. The apparatus according to claim 14, characterized in that the third node is the working node with the minimum accumulative load in the current next-hop working node set of the first node.
16. The apparatus according to claim 14, characterized in that the set load migration condition comprises:
the accumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the accumulative load of the second node and the accumulative load of the third node exceeds a set threshold.
17. The apparatus according to claim 15, characterized in that the set load migration condition comprises:
the accumulative load of the second node exceeds a set threshold; and/or
the ratio and/or the difference between the accumulative load of the second node and the accumulative load of the third node exceeds a set threshold.
18. The apparatus according to claim 16, characterized in that the set load migration condition further comprises: the time interval between the current time and the time at which the apparatus last performed data flow migration processing is not less than a set threshold.
19. The apparatus according to claim 17, characterized in that the set load migration condition further comprises: the time interval between the current time and the time at which the apparatus last performed data flow migration processing is not less than a set threshold.
20. The apparatus according to any one of claims 14 to 19, characterized in that the transferring module is specifically configured to determine, according to the following formula, the accumulative load W of any one next-hop working node of the first node within a set period of time:
W = c × W' + (1 − c) × y;
wherein y is the load that the apparatus distributes to the next-hop working node within the set period of time, W' is the accumulative load of the next-hop working node at the end of the period preceding the set period of time, and c is a constant with 0 < c < 1.
21. The apparatus according to any one of claims 14 to 19, characterized in that the transferring module is specifically configured to send a status-information move-out instruction containing the key value to the second node, and to send a status-information move-in instruction containing the key value to the third node;
wherein the status-information move-out instruction instructs the second node to send first status information of the locally generated data flow corresponding to the key value to the coordinator managing each working node; the status-information move-in instruction instructs the third node to obtain the first status information of the data flow corresponding to the key value from the coordinator managing each working node, and to merge the obtained first status information with locally generated second status information of the data flow corresponding to the key value.
22. The apparatus according to claim 20, characterized in that the transferring module is specifically configured to send a status-information move-out instruction containing the key value to the second node, and to send a status-information move-in instruction containing the key value to the third node;
wherein the status-information move-out instruction instructs the second node to send first status information of the locally generated data flow corresponding to the key value to the coordinator managing each working node; the status-information move-in instruction instructs the third node to obtain the first status information of the data flow corresponding to the key value from the coordinator managing each working node, and to merge the obtained first status information with locally generated second status information of the data flow corresponding to the key value.
23. The apparatus according to any one of claims 14 to 19, characterized in that the transferring module is further configured to: according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distribute the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
24. The apparatus according to claim 20, characterized in that the transferring module is further configured to: according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distribute the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
25. The apparatus according to claim 21, characterized in that the transferring module is further configured to: according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distribute the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
26. The apparatus according to claim 22, characterized in that the transferring module is further configured to: according to the key value of the data flow that needs to be distributed to a next-hop working node, when determining that no next-hop working node currently processes the data flow corresponding to the key value, distribute the data flow that needs to be distributed to the next-hop working node to the working node with the minimum accumulative load in the current next-hop working node set of the first node for processing.
CN201310513394.6A 2013-10-25 2013-10-25 A kind of distributed traffic processing method and processing device Active CN104580322B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310513394.6A CN104580322B (en) 2013-10-25 2013-10-25 A kind of distributed traffic processing method and processing device
PCT/CN2014/078654 WO2015058525A1 (en) 2013-10-25 2014-05-28 Distributed method and device for processing data stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310513394.6A CN104580322B (en) 2013-10-25 2013-10-25 A kind of distributed traffic processing method and processing device

Publications (2)

Publication Number Publication Date
CN104580322A CN104580322A (en) 2015-04-29
CN104580322B true CN104580322B (en) 2019-02-12

Family

ID=52992197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310513394.6A Active CN104580322B (en) 2013-10-25 2013-10-25 A kind of distributed traffic processing method and processing device

Country Status (2)

Country Link
CN (1) CN104580322B (en)
WO (1) WO2015058525A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293892B (en) * 2015-06-26 2019-03-19 阿里巴巴集团控股有限公司 Distributed stream computing system, method and apparatus
CN108632144A (en) * 2017-03-17 2018-10-09 华为数字技术(苏州)有限公司 The method and apparatus for transmitting flow
CN107454013A (en) * 2017-06-08 2017-12-08 国家计算机网络与信息安全管理中心 A kind of method and apparatus of flow data processing system data partition
CN108063731B (en) * 2018-01-03 2021-03-19 烟台大学 Load balancing distribution method based on data distribution in distributed data stream
CN110839086A (en) * 2019-12-23 2020-02-25 吉林省民航机场集团公司 High-concurrency load balancing processing method
CN112527841A (en) * 2020-12-17 2021-03-19 上海数依数据科技有限公司 Stream data merging processing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101442435A (en) * 2008-12-25 2009-05-27 华为技术有限公司 Method and apparatus for managing business data of distributed system and distributed system
CN101697526A (en) * 2009-10-10 2010-04-21 中国科学技术大学 Method and system for load balancing of metadata management in distributed file system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8144587B2 (en) * 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101442435A (en) * 2008-12-25 2009-05-27 华为技术有限公司 Method and apparatus for managing business data of distributed system and distributed system
CN101697526A (en) * 2009-10-10 2010-04-21 中国科学技术大学 Method and system for load balancing of metadata management in distributed file system

Also Published As

Publication number Publication date
CN104580322A (en) 2015-04-29
WO2015058525A1 (en) 2015-04-30

Similar Documents

Publication Publication Date Title
CN104580322B (en) A kind of distributed traffic processing method and processing device
Hussein et al. Efficient task offloading for IoT-based applications in fog computing using ant colony optimization
CN111198764B (en) SDN-based load balancing realization system and method
Qiu et al. A packet buffer evaluation method exploiting queueing theory for wireless sensor networks
CN102281290B (en) Emulation system and method for a PaaS (Platform-as-a-service) cloud platform
CN104580524A (en) Resource scaling method and cloud platform with same
CN104202254A (en) An intelligent load balancing method based on a cloud calculation platform server
CN103179570B (en) Resource allocation method and system applied in distributed time division multiplexing system
CN108809848A (en) Load-balancing method, device, electronic equipment and storage medium
CN109728981A (en) A kind of cloud platform fault monitoring method and device
WO2016197458A1 (en) Traffic control method and apparatus
CN105049485B (en) A kind of Load-aware cloud computing system towards real time video processing
CN103346978A (en) Method for guaranteeing fairness and stability of virtual machine network bandwidth
Wang et al. Distributed join-the-idle-queue for low latency cloud services
CN109783573A (en) The method of data synchronization and terminal of multichannel push
WO2024012065A1 (en) Data transmission control method and apparatus, computer-readable storage medium, computer device, and computer program product
CN105791144A (en) Method and apparatus for sharing link traffic
Kathuria et al. Reliable packet transmission in WBAN with dynamic and optimized QoS using multi-objective lion cooperative hunt optimizer
CN108123891A (en) The dynamic load balancing method realized in SDN network using distributed domain controller
Ruan et al. FSQCN: Fast and simple quantized congestion notification in data center Ethernet
CN103701721B (en) Message transmitting method and device
Liang et al. Queue‐based congestion detection and multistage rate control in event‐driven wireless sensor networks
US20170070561A1 (en) Mechanism and Method for Constraint Based Fine-Grained Cloud Resource Controls
CN106997310A (en) The apparatus and method of load balancing
CN110380825A (en) A kind of control method and device of transmission rate

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant