CN107743099A - Data flow processing method, device and storage medium - Google Patents
- Publication number
- CN107743099A CN107743099A CN201710776018.4A CN201710776018A CN107743099A CN 107743099 A CN107743099 A CN 107743099A CN 201710776018 A CN201710776018 A CN 201710776018A CN 107743099 A CN107743099 A CN 107743099A
- Authority
- CN
- China
- Prior art keywords
- data flow
- sub
- buckets
- data
- bucket
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/215—Flow control; Congestion control using token-bucket
Abstract
The application provides a data stream processing method, device and storage medium. The method includes: evenly dividing a pending data stream into N sub-data streams, where N is the number of clock cycles a computing device needs to process the pending data stream; distributing the N sub-data streams to N sub-data-stream processors, where sub-data streams correspond one-to-one with sub-data-stream processors, each sub-data-stream processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the computing device processed the pending data stream as a whole, and Y being the token-adding rate of that token bucket; and controlling the N sub-data-stream processors to each process only one distinct sub-data stream within the N clock cycles, thereby improving the real-time performance and precision of data stream processing.
Description
Technical field
The application relates to computer technology, and in particular to a data stream processing method, device and storage medium.
Background technology
With the development of communication and electronic technology, computing devices such as computers must handle increasingly complex services. When processing a complex service, a computer needs a longer time to complete the processing. Moreover, when the data stream of the complex service is a high-speed data stream, the interval between messages is narrow; to guarantee real-time stream processing, the processing window left for each message is very limited. Because the computational complexity of a complex service is high, one message needs multiple clock cycles to be processed. For example, in the committed access rate (Committed Access Rate, CAR) mechanism, token calculation and token-bucket maintenance require wide-bit-width operations such as multiplication and addition, and need at least 3 clock cycles to complete. Assume the clock period of the computing device is 5 ns. If each message in the high-speed data stream needs N clock cycles, that is, N*5 ns, to process, then the rate at which each data stream can be processed is 1/(N*5 ns) messages per second, expressed in operations per second (ops). This limits the rate at which a data stream may flow into the processing device: it must not exceed the rate at which the computing device processes the stream, otherwise newly arriving data cannot be processed in time and data congestion results. Therefore, the rate at which each data stream flows into the processing device must be less than 1/(N*5 ns). For convenience of description, the rate at which a data stream flows into the processing device is defined as the processing performance of the data stream. Assuming the clock period of the computing device is CAR_CLK seconds, the processing performance of each data stream is limited to at most 1/(N*CAR_CLK); that is, owing to the hardware-resource limitation of the computing device, the transmission rate of each data stream cannot exceed 1/(N*CAR_CLK). As communication port rates rise rapidly, the processing performance of each data stream must rise with them; it is desirable that the per-stream processing performance match the clock period of the processing device, reaching 1/CAR_CLK or even higher. Therefore, improving the processing performance of each data stream becomes extremely important.
In the related art, the processing performance of each data stream is improved as follows: the messages of the same data stream within N clock cycles are merged into one aggregate message, where N is the number of clock cycles needed to process one message. For example, in a CAR scenario, the messages of the same stream within 3 clock cycles are merged into one aggregate message by adding up the message-length values of the same stream over those 3 clock cycles; the length value of the merged message is this total length value. The merged aggregate message is then processed within 3 clock cycles, and the processing result is taken as the result for the messages of that stream in those 3 clock cycles.

However, in the above process, data dependencies between messages must be considered during merging, which makes the implementation complex. Moreover, because messages must first be merged, a message received in the first clock cycle can only be merged and processed after the N-th clock cycle, so the processing delay is long, real-time performance is poor, and processing precision also degrades.
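As a minimal sketch (not code from the patent), the related-art merging step described above amounts to summing the length values of same-stream messages received over N cycles; the helper name is illustrative.

```python
def merge_messages(lengths):
    """Related-art merging: the messages of one stream received in N
    consecutive clock cycles are merged into a single aggregate message
    whose length value is the sum of the individual length values."""
    return sum(lengths)

# three cycles' worth of messages from the same stream (example byte lengths)
total_length = merge_messages([64, 128, 256])
```

The drawback noted in the text is visible here: the aggregate can only be formed once the N-th message has arrived, which delays handling of the first message by up to N cycles.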
Summary of the invention
The application provides a data stream processing method, device and storage medium, so as to reduce the processing complexity of message processing and improve processing precision and real-time performance.
In a first aspect, the application provides a data stream processing method applied to CAR algorithms. The method includes: when a data stream needs to be processed, evenly dividing the pending data stream into N sub-data streams, where N is the number of clock cycles the computing device needs to process the pending data stream and N is greater than or equal to 2; distributing the N sub-data streams to N sub-data-stream processors, where sub-data streams correspond one-to-one with sub-data-stream processors, each sub-data-stream processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the computing device processed the pending data stream as a whole and Y being the token-adding rate of that token bucket; and controlling the N sub-data-stream processors to each process only one distinct sub-data stream within the N clock cycles. On the one hand, the processing performance of one sub-data stream is 1/(N*CLK), so the total processing performance of the N sub-data streams is N*1/(N*CLK) = 1/CLK; that is, the processing performance of the pending data stream reaches 1/CLK. Compared with the related-art approach of merging the messages of N clock cycles for processing, in the data stream processing method provided by the application each sub-data-stream processor processes only one sub-data stream within the N clock cycles, so no complex data-dependency handling is needed, the processing complexity is low, and the data flowing in at each clock cycle can be processed in real time, reducing delay. On the other hand, the bucket depth of the token bucket maintained by each sub-data-stream processor is 1/N of the bucket depth of the token bucket that would be maintained when processing the whole pending data stream, and the token-adding rate of that token bucket is 1/N of the corresponding whole-stream token-adding rate, so that the sub-data stream processed by each sub-data-stream processor matches the bucket depth and token-adding rate of the token bucket it maintains; the precision loss that would otherwise arise from dividing the pending data stream into N sub-data streams for processing does not occur. The real-time performance and precision of data stream processing are thereby improved.
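A minimal Python sketch of the per-sub-stream token bucket the first aspect describes, with depth X/N and fill rate Y/N; the class, method names and parameter values are illustrative, not part of the patent.

```python
class SubStreamTokenBucket:
    """Token bucket for one sub-data stream: depth X/N and fill rate Y/N,
    where X and Y are the depth and rate the device would use for the
    undivided stream and N is the number of sub-streams."""

    def __init__(self, X, Y, N):
        self.depth = X / N        # bucket depth for one sub-stream
        self.rate = Y / N         # tokens added per cycle for one sub-stream
        self.tokens = self.depth  # start full

    def tick(self):
        # add Y/N tokens each cycle, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + self.rate)

    def consume(self, n):
        # spend n tokens if available; True means the message may pass
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

b = SubStreamTokenBucket(X=3000, Y=300, N=3)
```

With X = 3000, Y = 300 and N = 3, each of the three sub-stream buckets holds at most 1000 tokens and gains 100 per cycle, so the three buckets together preserve the whole-stream depth X and rate Y.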
In a possible design of the first aspect, when the CAR algorithm is the SrTCM algorithm or the BT_PCN algorithm, each sub-data-stream processor maintains a C bucket and an E bucket, X includes CBS and EBS, and Y includes CIR; the bucket depth of the C bucket maintained by a sub-data-stream processor is CBS/N, the bucket depth of the E bucket is EBS/N, and the token-adding rate of the C bucket is CIR/N. When the CAR algorithm is the TrTCM algorithm, each sub-data-stream processor maintains a C bucket and a P bucket, X includes CBS and PBS, and Y includes CIR and PIR; the bucket depth of the C bucket is CBS/N, the bucket depth of the P bucket is PBS/N, the token-adding rate of the C bucket is CIR/N, and the token-adding rate of the P bucket is PIR/N. When the CAR algorithm is the DS TrTCM algorithm or the MEF10.2 bandwidth-configuration algorithm, each sub-data-stream processor maintains a C bucket and an E bucket, X includes CBS and EBS, and Y includes CIR and EIR; the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, the token-adding rate of the C bucket is CIR/N, and the token-adding rate of the E bucket is EIR/N. This implementation defines, for different CAR algorithms, how the bucket depths and token-adding rates of the sub-data-stream processors are adjusted, so that sub-data-stream processing precision can be improved for each of these CAR algorithms.
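The per-algorithm scalings listed above can be tabulated in a small helper. The parameter names (CBS, EBS, PBS, CIR, EIR, PIR) follow the text; the function, algorithm strings and dictionary keys are illustrative assumptions of this sketch.

```python
def scaled_bucket_params(algorithm, N, **params):
    """Return the per-sub-stream bucket depths and token-adding rates
    (all divided by N) for the CAR algorithm variants named in the text."""
    if algorithm in ("SrTCM", "BT_PCN"):
        return {"C_depth": params["CBS"] / N, "E_depth": params["EBS"] / N,
                "C_rate": params["CIR"] / N}
    if algorithm == "TrTCM":
        return {"C_depth": params["CBS"] / N, "P_depth": params["PBS"] / N,
                "C_rate": params["CIR"] / N, "P_rate": params["PIR"] / N}
    if algorithm in ("DS TrTCM", "MEF10.2"):
        return {"C_depth": params["CBS"] / N, "E_depth": params["EBS"] / N,
                "C_rate": params["CIR"] / N, "E_rate": params["EIR"] / N}
    raise ValueError(f"unknown CAR algorithm: {algorithm}")

p = scaled_bucket_params("TrTCM", 3, CBS=3000, PBS=6000, CIR=300, PIR=600)
```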
In a possible design of the first aspect, distributing the N sub-data streams to the N sub-data-stream processors includes: randomly distributing the N sub-data streams to the N sub-data-stream processors according to a random algorithm.

In a possible design of the first aspect, distributing the N sub-data streams to the N sub-data-stream processors includes: distributing the N sub-data streams to the N sub-data-stream processors according to a mapping between the identifiers of the sub-data streams and the identifiers of the sub-data-stream processors.

The above two implementations define how the N sub-data streams are distributed to the N sub-data-stream processors, and both distribution methods consume few hardware resources.
In a possible design of the first aspect, distributing the N sub-data streams to the N sub-data-stream processors includes: according to the message lengths of the sub-data streams already processed by the N sub-data-stream processors, distributing the sub-data stream with the shortest message length among the N sub-data streams to the sub-data-stream processor whose processed message length is the longest, distributing the sub-data stream with the second-shortest message length to the sub-data-stream processor whose processed message length is the second-longest, and so on, until all N sub-data streams are distributed. This implementation balances the load of the sub-data-stream processors, so that each sub-data-stream processor handles its sub-data stream with good performance.
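The load-balancing rule above (shortest remaining sub-stream to the most-loaded processor, and so on) can be sketched as follows; the function and its argument layout are illustrative, not from the patent.

```python
def balanced_assignment(substream_lengths, processor_loads):
    """Pair sub-streams with processors: the shortest sub-stream goes to the
    processor whose processed message length is longest, the second-shortest
    to the second-most-loaded, and so on. Returns {sub-stream: processor}."""
    subs = sorted(range(len(substream_lengths)),
                  key=lambda i: substream_lengths[i])           # ascending length
    procs = sorted(range(len(processor_loads)),
                   key=lambda j: processor_loads[j], reverse=True)  # descending load
    return dict(zip(subs, procs))

# three sub-streams of lengths 300/100/200 bytes; processors with loads 50/10/30
m = balanced_assignment([300, 100, 200], [50, 10, 30])
```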
In a possible design of the first aspect, the method further includes: obtaining the required processing performance of the pending data stream; and, when the required processing performance is greater than 1/(N*CLK), determining to perform the step of evenly dividing the pending data stream into N sub-data streams, where CLK is the clock period of the computing device. This implementation performs the data stream processing method provided by the application only when the required processing performance of the pending data stream exceeds 1/(N*CLK), saving storage resources.
In a possible design of the first aspect, if the data in the pending data stream are ordered, then after the N sub-data-stream processors process the N sub-data streams in parallel, the method further includes: sending the N processing results obtained from the parallel processing to the receiving device in that order, to ensure the normal operation of the service.
In a second aspect, the application provides a data stream processing device for CAR algorithms, including: a dividing module, configured to evenly divide a pending data stream into N sub-data streams, where N is the number of clock cycles the data stream processing device needs to process the pending data stream and N is greater than or equal to 2; a distribution module, configured to distribute the N sub-data streams to N sub-data-stream processors, where sub-data streams correspond one-to-one with sub-data-stream processors, each sub-data-stream processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the data stream processing device processed the pending data stream as a whole and Y being the token-adding rate of that token bucket; and a control module, configured to control the N sub-data-stream processors to each process only one distinct sub-data stream within the N clock cycles.
In a possible design of the second aspect, when the CAR algorithm is the SrTCM algorithm or the BT_PCN algorithm, each sub-data-stream processor maintains a C bucket and an E bucket, the X includes CBS and EBS, the Y includes CIR, the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, and the token-adding rate of the C bucket is CIR/N; when the CAR algorithm is the TrTCM algorithm, each sub-data-stream processor maintains a C bucket and a P bucket, the X includes CBS and PBS, the Y includes CIR and PIR, the bucket depth of the C bucket is CBS/N, the bucket depth of the P bucket is PBS/N, the token-adding rate of the C bucket is CIR/N, and the token-adding rate of the P bucket is PIR/N; when the CAR algorithm is the DS TrTCM algorithm or the MEF10.2 bandwidth-configuration algorithm, each sub-data-stream processor maintains a C bucket and an E bucket, the X includes CBS and EBS, the Y includes CIR and EIR, the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, the token-adding rate of the C bucket is CIR/N, and the token-adding rate of the E bucket is EIR/N.
In a possible design of the second aspect, the distribution module is specifically configured to: randomly distribute the N sub-data streams to the N sub-data-stream processors according to a random algorithm.

In a possible design of the second aspect, the distribution module is specifically configured to: distribute the N sub-data streams to the N sub-data-stream processors according to a mapping between the identifiers of the sub-data streams and the identifiers of the sub-data-stream processors.

In a possible design of the second aspect, the distribution module is specifically configured to: according to the message lengths of the sub-data streams already processed by the N sub-data-stream processors, distribute the sub-data stream with the shortest message length among the N sub-data streams to the sub-data-stream processor whose processed message length is the longest, distribute the sub-data stream with the second-shortest message length to the sub-data-stream processor whose processed message length is the second-longest, and so on, until all N sub-data streams are distributed.
In a possible design of the second aspect, the device further includes: an acquisition module, configured to obtain the required processing performance of the pending data stream; and a determining module, configured to determine, when the required processing performance is greater than 1/(N*CLK), to perform the step of evenly dividing the pending data stream into N sub-data streams, where CLK is the clock period of the data stream processing device.

In a possible design of the second aspect, if the data in the pending data stream are ordered, then after the N sub-data-stream processors process the N sub-data streams in parallel, the device further includes: a sending module, configured to send the N processing results obtained from the parallel processing to the receiving device in that order.
In a third aspect, the application provides a data stream processing device for CAR algorithms. The device includes: a transceiver; a memory for storing instructions; and a processor, connected to the memory and the transceiver and configured to execute the instructions, so as to perform the following steps when executing them: evenly dividing a pending data stream into N sub-data streams, where N is the number of clock cycles the data stream processing device needs to process the pending data stream and N is greater than or equal to 2; distributing the N sub-data streams to N sub-data-stream processors, where sub-data streams correspond one-to-one with sub-data-stream processors, each sub-data-stream processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the data stream processing device processed the pending data stream as a whole and Y being the token-adding rate of that token bucket; and controlling the N sub-data-stream processors to each process only one distinct sub-data stream within the N clock cycles.
In a fourth aspect, the application provides a computer-readable storage medium storing computer-readable instructions; when a computer reads and executes the computer-readable instructions, the computer performs the method provided in the first aspect or any possible design of the first aspect.
Brief description of the drawings
Fig. 1 is an architecture diagram of a concrete application scenario of the data stream processing method provided by the application;
Fig. 2 is a schematic flowchart of an embodiment of the data stream processing method provided by the application;
Fig. 3A is a schematic flowchart of the SrTCM algorithm;
Fig. 3B is a schematic flowchart of the TrTCM algorithm;
Fig. 3C is a schematic flowchart of the DS TrTCM algorithm;
Fig. 3D is a schematic flowchart of the MEF10.2 bandwidth-configuration algorithm;
Fig. 3E is a schematic flowchart of the BT_PCN algorithm;
Fig. 4 is a schematic diagram of a specific implementation of the data stream processing method provided by the application;
Fig. 5 is a schematic structural diagram of embodiment one of the data stream processing device provided by the application;
Fig. 6 is a schematic structural diagram of embodiment two of the data stream processing device provided by the application;
Fig. 7 is a schematic structural diagram of embodiment three of the data stream processing device provided by the application.
Detailed description of embodiments
The data stream processing method provided by the application can be applied to scenarios in which the high-speed data stream of a complex service is processed. The complex service may be CAR, or a service scenario such as complex traffic analysis or threshold counting. Because processing a message in the high-speed data stream needs N clock cycles, N being greater than or equal to 2, the processing performance of each data stream cannot exceed 1/(N*CLK), where CLK is the clock period of the computing device that handles the service. Fig. 1 is an architecture diagram of a concrete application scenario of the data stream processing method provided by the application. A CAR mechanism can run on the server of a network operator to control users' traffic. As shown in Fig. 1, CAR is based on token buckets: tokens are added to two token buckets at a given rate, and according to whether the tokens in the buckets exceed the tokens that sending a message would consume, the message is marked green, yellow (not shown in the figure) or red. Green messages may continue to be sent; red messages are dropped. Token buckets are of different types depending on the CAR algorithm. For example, in the single-rate three-color marker (Single Rate Three Color Marker, SrTCM) algorithm, the token buckets include a committed (Committed, C) bucket and an extended (Expand, E) bucket; in the two-rate three-color marker (Two Rate Three Color Marker, TrTCM) algorithm, the token buckets include a C bucket and a peak (Peak, P) bucket. Token calculation and token-bucket maintenance in the CAR mechanism require wide-bit-width operations such as multiplication and addition; they refer to adding tokens to the token buckets maintained by the processor and deducting tokens from the token buckets according to the number of messages in the pending data stream. In a CAR scenario, a message in a data stream needs at least 3 clock cycles to be processed, which limits the processing performance of each data stream to at most 1/(3*CLK). In the application, processing the messages in a data stream refers to marking a message with a color according to the number of tokens in the token buckets and the number of messages in the data stream, so as to determine whether the message continues to be sent or is dropped.
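The color-marking decision just described can be sketched as follows. This is a simplified SrTCM-style illustration with token and message quantities in bytes; the function and argument names are hypothetical.

```python
def mark_packet(length, c_tokens, e_tokens):
    """Mark a message green, yellow or red by comparing its length (bytes)
    with the tokens available in the C and E buckets, SrTCM-style."""
    if c_tokens >= length:
        return "green"    # covered by committed tokens: continue sending
    if e_tokens >= length:
        return "yellow"
    return "red"          # no bucket covers it: the message is dropped
```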
In order to raise the processing performance of the data stream to 1/CLK, that is, to raise the transmission rate of the data stream from 1/(N*CLK) to 1/CLK, the data stream processing method provided by the application evenly divides the pending data stream into N sub-data streams, where N is the number of clock cycles the computing device needs to process the pending data stream and N is greater than or equal to 2, and distributes the N sub-data streams to N sub-data-stream processors according to a preset rule, where sub-data streams correspond one-to-one with sub-data-stream processors, each sub-data-stream processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the computing device processed the pending data stream as a whole and Y being the token-adding rate of that token bucket; the N sub-data-stream processors are controlled to each process only one distinct sub-data stream within the N clock cycles. On the one hand, the processing performance of one sub-data stream is 1/(N*CLK), so the total processing performance of the N sub-data streams is N*1/(N*CLK) = 1/CLK; that is, the processing performance of the pending data stream reaches 1/CLK. Compared with the related-art approach of merging the messages of N clock cycles for processing, in the data stream processing method provided by the application each sub-data-stream processor processes only one sub-data stream within the N clock cycles, so no complex data-dependency handling is needed, the processing complexity is low, and the data stream flowing in at each clock cycle can be processed in real time, reducing delay. On the other hand, the bucket depth of the token bucket maintained by each sub-data-stream processor is 1/N of the bucket depth of the token bucket that would be maintained when processing the whole pending data stream, and the token-adding rate of that token bucket is 1/N of the corresponding whole-stream token-adding rate, so that each sub-data stream matches the bucket depth and token-adding rate of the token bucket maintained for it; the precision loss that would otherwise arise from dividing the pending data stream into N sub-data streams for processing does not occur. The real-time performance and precision of data stream processing are thereby improved.
The technical solution of the application is described in detail below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an embodiment of the data stream processing method provided by the application. As shown in Fig. 2, the method comprises the following steps:
S201: Evenly divide the pending data stream into N sub-data streams.
Here, N is the number of clock cycles the computing device needs to process the pending data stream, and N is greater than or equal to 2.
Specifically, the data stream processing method provided by the application may be performed by the central processing unit (Central Processing Unit, CPU) of the computing device. Optionally, when the data stream processing method provided by the application is applied to the CAR mechanism, that is, applied to CAR algorithms, the computing device here is the server of a network operator.

The pending data stream may be sent to the computing device by another device outside the computing device, or sent to the CPU of the computing device by another module inside the computing device. The application does not limit this.
In one implementation, the pending data stream involved in the application contains data that require calculation. For example, the data in the pending data stream need operations such as multiplication and addition, so the computing device must perform corresponding calculation processing on the data in the pending data stream. In another implementation, the pending data stream involved in the application is a data stream requested by a user; because the user's data stream must be policed to realize traffic control, the data in the pending data stream must be color-marked according to the number of tokens in the token buckets, to determine whether the data in the data stream continue to be sent or are dropped.
In the application, the pending data stream is evenly divided into N sub-data streams. During division, it must be ensured that the N sub-data streams after division are mutually independent, with no coupling relation. Here, N is the number of clock cycles the computing device needs when processing the pending data stream; in other words, N is the number of clock cycles the computing device needs to process a message in the pending data stream. The N clock cycles may also be called the processing window of the data stream. Different computing devices have different clock periods; N differs for the pending data streams of different services, and N differs for pending data streams of different lengths.
In the application, the pending data stream consists of messages, and each message has an encapsulation format following preset rules, so as to distinguish the beginning and end of each message. In the application, the beginning and end of a message can be determined according to the encapsulation format of the message, so the pending data stream can be divided into N sub-data streams in which the number of messages in each sub-data stream is the same, thereby realizing the even division of the pending data stream into N sub-data streams.
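One way to realize the even division just described, assuming message boundaries have already been recovered from the encapsulation format, is a round-robin split; the helper below is an illustrative sketch, not the patent's mandated method.

```python
def split_stream(messages, N):
    """Evenly divide a sequence of messages into N mutually independent
    sub-streams with (near-)equal message counts, round-robin by arrival order."""
    sub_streams = [[] for _ in range(N)]
    for i, message in enumerate(messages):
        sub_streams[i % N].append(message)
    return sub_streams

# six messages split into N = 3 sub-streams of two messages each
sub_streams = split_stream(["m0", "m1", "m2", "m3", "m4", "m5"], 3)
```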
It should be noted that "messages in the pending data stream" and "data in the pending data stream" as described herein denote the same concept. The number of tokens described herein may be expressed as a byte count of tokens, and the number of messages may be expressed as a byte count of messages.
Optionally, when the pending data stream is a data stream in the CAR mechanism, N is 3.
S202: Distribute the N sub-data streams to the N sub-data-stream processors.
Here, sub-data streams correspond one-to-one with sub-data-stream processors. Each sub-data-stream processor maintains at least one token bucket; the bucket depth of the token bucket is X/N, and the token-adding rate of the token bucket is Y/N, X being the bucket depth of the token bucket that would be maintained if the computing device processed the pending data stream as a whole, and Y being the token-adding rate of that token bucket.
Specifically, the computing device in the application is configured with N sub-data-stream processors. After the pending data stream is evenly divided into N sub-data streams, the N sub-data streams are distributed one-to-one to the N sub-data-stream processors, so that the N sub-data-stream processors process the N sub-data streams in parallel within the processing window of N clock cycles.
Optionally, in the application, the N sub-data streams may be distributed to the N sub-data-stream processors according to a preset rule. The preset rule here is any rule that realizes the one-to-one distribution of the N sub-data streams to the N sub-data-stream processors. The application does not limit it.
Three ways of distributing the N sub-data streams to the N sub-data flow processors according to a preset rule are illustrated below.
In a first possible implementation, the N sub-data streams are distributed to the N sub-data flow processors at random according to a random algorithm.
The random algorithm can be implemented in many ways. For example, the N sub-data streams and the N sub-data flow processors can each be numbered, a number drawn at random for each sub-data flow processor, and the sub-data stream bearing the drawn number distributed to that sub-data flow processor, thereby distributing the N sub-data streams to the N sub-data flow processors at random.
In a second possible implementation, the N sub-data streams are distributed to the N sub-data flow processors according to a mapping between the identifiers of the sub-data streams and the identifiers of the sub-data flow processors.
Optionally, the identifier here can be a number: for example, the N sub-data streams and the N sub-data flow processors are numbered separately, and each sub-data stream is distributed to the sub-data flow processor bearing the same number.
In a third possible implementation, the sub-data streams are distributed according to the message lengths of the sub-data streams the N sub-data flow processors have already processed: the sub-data stream with the shortest message length among the N sub-data streams is distributed to the sub-data flow processor whose processed message length is the longest, the sub-data stream with the second-shortest message length is distributed to the sub-data flow processor whose processed message length is the second-longest, and so on, until all N sub-data streams have been distributed.
The specific process is as follows. First, the message length of the sub-data streams each sub-data flow processor has already processed, and the message length of each sub-data stream to be distributed, are counted. Then the sub-data flow processor with the longest processed message length and the sub-data stream with the shortest message length are determined, and that sub-data stream is distributed to that sub-data flow processor. Next, the sub-data flow processor with the second-longest processed message length and the sub-data stream with the second-shortest message length are determined, and that sub-data stream is distributed to that sub-data flow processor; then the sub-data flow processor with the third-longest processed message length and the sub-data stream with the third-shortest message length are determined and paired likewise. This process is repeated until all N sub-data streams have been distributed to the N sub-data flow processors.
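The distribution procedure described above can be sketched as follows (a hedged illustration; the helper name, the byte-length inputs, and the dictionary result are our own choices, not the patent's):

```python
def distribute_by_load(sub_streams, processed_bytes):
    """Pair sub-data streams with processors: the sub-data stream with
    the shortest total message length goes to the processor whose
    processed message length so far is the longest, and so on."""
    # Sub-data stream indices, shortest total message length first.
    by_len = sorted(range(len(sub_streams)),
                    key=lambda i: sum(sub_streams[i]))
    # Processor indices, longest processed message length first.
    by_load = sorted(range(len(processed_bytes)),
                     key=lambda p: -processed_bytes[p])
    # assignment[processor index] = sub-data stream index
    return {p: s for s, p in zip(by_len, by_load)}

# Three sub-data streams given as lists of message byte lengths;
# processor 0 has processed the most bytes, so it receives the
# shortest sub-data stream.
assignment = distribute_by_load([[100, 50], [10], [500]], [1000, 10, 400])
```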
This implementation balances the load across the sub-data flow processors so that each processes its sub-data stream under good performance. That is, if a certain sub-data flow processor has persistently been handling data streams with long messages, then in the current distribution the sub-data stream with the shortest message length among the N sub-data streams is assigned to it, reducing its load, letting its performance recover, and thereby improving operational precision.
Among the three implementations above, the operational precision of the third is greater than that of the first, which is greater than that of the second; the implementation cost of the third is greater than that of the second, which is greater than that of the first.
In this application, the number of sub-data streams and the number of sub-data flow processors need to be equal. If the number of sub-data streams exceeded the number of sub-data flow processors, processing errors could occur and the processing precision could fail to meet the requirements of complex services; if it were smaller, some sub-data flow processors would sit idle, wasting storage and processing resources.
It should be noted that the N sub-data flow processors are processors located in the computing device, and can be implemented in software, in hardware, or in a combination of both.
S203: Control the N sub-data flow processors so that, within the N clock cycles, each processes only one, different, sub-data stream.
After the N sub-data streams have been distributed to the N sub-data flow processors, each sub-data flow processor has been assigned one sub-data stream, and the N sub-data flow processors can begin processing their sub-data streams within the processing window, that is, within the N clock cycles.
The computing device controls the N sub-data flow processors so that each processes only one, different, sub-data stream within the N clock cycles, which avoids complex data-dependency handling and reduces computational complexity.
After processing completes, N partial results are obtained. Depending on the actual service, the computing device may send the N partial results to a receiving device, or store them. The receiving device involved in this application can be another device, or another module in the computing device, that sends data streams to the computing device; it can also be another device, or another module in the computing device, that needs to receive the processing results.
In this application, each sub-data flow processor maintains at least one token bucket. The names and number of the token buckets a sub-data flow processor maintains differ with the CAR algorithm. Suppose that, without splitting the pending data stream, the token bucket the computing device maintains while processing the pending data stream has bucket depth X and token adding rate Y; this application then sets the bucket depth of the token bucket maintained by a sub-data flow processor to X/N, and its token adding rate to Y/N. This adjustment improves the precision with which the sub-data flow processors process the sub-data streams. Why this adjustment improves the processing precision is described in detail below in connection with the different CAR algorithms.
Fig. 3A is a schematic flowchart of the SrTCM algorithm. As shown in Fig. 3A, the SrTCM algorithm uses two token buckets, a C bucket and an E bucket. Suppose that, without splitting the pending data stream, the computing device maintains token buckets for the pending data stream in which the bucket depth of the C bucket is the Committed Burst Size (CBS) and the bucket depth of the E bucket is the Excess Burst Size (EBS); that is, X includes CBS and EBS. In the SrTCM algorithm, tokens are added only to the C bucket; once the C bucket holds its maximum number of tokens, further tokens spill over into the E bucket. Suppose likewise that the token adding rate in those token buckets is the Committed Information Rate (CIR); that is, Y includes CIR. The actual bucket depth of the C bucket is denoted Tc and the actual bucket depth of the E bucket is denoted Te; the bucket depth denotes the maximum number of tokens a bucket can hold. The token-adding process in the token buckets is described in pseudocode as follows:
For every 1/CIR seconds,
    If (Tc < CBS)
        Tc = Tc + 1;
    Else if (Te < EBS)
        Te = Te + 1;
    Else
        ;
The tokens in the C bucket are updated CIR times per second. Every 1/CIR seconds, it is judged whether the actual bucket depth Tc of the C bucket is less than the C bucket's depth CBS; when Tc is less than CBS, Tc is increased by 1. When Tc is greater than or equal to CBS, it is judged whether the actual bucket depth Te of the E bucket is less than the E bucket's depth EBS; when Te is less than EBS, Te is increased by 1. When Te is greater than or equal to EBS, neither Tc nor Te is increased. It should be noted that the 1 here denotes 1 byte (Byte).
The token-consuming process in the token buckets is described in pseudocode as follows:
    If (Tc - B ≥ 0)
        Tc = Tc - B;
        Color = Green;
    Else if (Te - B ≥ 0)
        Te = Te - B;
        Color = Yellow;
    Else
        Color = Red;
B here denotes the size of the message, i.e. its byte count. While a message is being processed, it is judged whether the actual bucket depth Tc of the C bucket minus the message size is greater than or equal to 0, i.e. Tc is compared with the message's byte count. When Tc is greater than or equal to the message's byte count, Tc is reduced by B and the message is marked green. When Tc is less than the message's byte count, the actual bucket depth Te of the E bucket is compared with the message's byte count: when Te is greater than or equal to the message's byte count, Te is reduced by B and the message is marked yellow; when Te is less than the message's byte count, the message is marked red.
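The SrTCM pseudocode above can be rendered as a minimal runnable sketch (Python for illustration only; the class name is ours, and the buckets are assumed to start full):

```python
class SrTcmMeter:
    """Single Rate Three Color Marker: tokens arrive at CIR into the C
    bucket and spill into the E bucket when the C bucket is full."""

    def __init__(self, cbs, ebs):
        self.cbs, self.ebs = cbs, ebs  # bucket depths, in bytes
        self.tc, self.te = cbs, ebs    # actual bucket depths (start full)

    def add_token(self):
        # Called once every 1/CIR seconds: one byte of tokens arrives.
        if self.tc < self.cbs:
            self.tc += 1
        elif self.te < self.ebs:
            self.te += 1  # C bucket full: token spills into the E bucket

    def mark(self, b):
        # b is the message size in bytes; returns the message's color.
        if self.tc - b >= 0:
            self.tc -= b
            return "green"
        if self.te - b >= 0:
            self.te -= b
            return "yellow"
        return "red"

meter = SrTcmMeter(cbs=100, ebs=50)
colors = [meter.mark(80), meter.mark(30), meter.mark(30)]
```

Here the 80-byte message drains most of the C bucket, the next message falls through to the E bucket and is marked yellow, and the third finds both buckets short of tokens and is marked red.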
Fig. 3B is a schematic flowchart of the TrTCM algorithm. As shown in Fig. 3B, the TrTCM algorithm uses two token buckets, a C bucket and a P bucket. Suppose that, without splitting the pending data stream, the computing device maintains token buckets for the pending data stream in which the bucket depth of the C bucket is CBS and the bucket depth of the P bucket is the Peak Burst Size (PBS); that is, X includes CBS and PBS. In the TrTCM algorithm, tokens are added to both the C bucket and the P bucket. Suppose likewise that the token adding rate of the C bucket is CIR and that of the P bucket is the Peak Information Rate (PIR); that is, Y includes CIR and PIR. The actual bucket depth of the C bucket is denoted Tc and that of the P bucket is denoted Tp. In the TrTCM algorithm, tokens in the C bucket do not spill over into the P bucket.
In the TrTCM algorithm, the token-adding process in the token buckets is: every 1/CIR seconds, judge whether the actual bucket depth Tc of the C bucket is less than the C bucket's depth CBS; when Tc is less than CBS, increase Tc by 1; when Tc is greater than or equal to CBS, Tc is not increased. Every 1/PIR seconds, judge whether the actual bucket depth Tp of the P bucket is less than the P bucket's depth PBS; when Tp is less than PBS, increase Tp by 1; when Tp is greater than or equal to PBS, Tp is not increased.
The token-consuming process in the token buckets is: while a message is being processed, judge whether the actual bucket depth Tp of the P bucket minus the message size is greater than or equal to 0, i.e. compare Tp with the message's byte count B. When Tp is less than B, the message is marked red. When Tp is greater than or equal to B, compare the actual bucket depth Tc of the C bucket with B: when Tc is less than B, Tp is reduced by B and the message is marked yellow; when Tc is greater than or equal to B, both Tp and Tc are reduced by B and the message is marked green.
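For comparison with SrTCM, the TrTCM marking rules can be sketched as follows (illustrative Python; consistent with the standard two rate three color marker, a yellow message consumes P-bucket tokens only, and the buckets are assumed to start full):

```python
class TrTcmMeter:
    """Two Rate Three Color Marker: the P bucket (peak rate) is checked
    first; tokens never spill between the C and P buckets."""

    def __init__(self, cbs, pbs):
        self.cbs, self.pbs = cbs, pbs  # bucket depths, in bytes
        self.tc, self.tp = cbs, pbs    # actual bucket depths (start full)

    def mark(self, b):
        # b is the message size in bytes; returns the message's color.
        if self.tp - b < 0:
            return "red"      # exceeds even the peak rate
        if self.tc - b < 0:
            self.tp -= b
            return "yellow"   # within peak rate, beyond committed rate
        self.tc -= b
        self.tp -= b
        return "green"        # within the committed rate

meter = TrTcmMeter(cbs=100, pbs=200)
colors = [meter.mark(150), meter.mark(60), meter.mark(40)]
```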
Fig. 3C is a schematic flowchart of the DS TrTCM algorithm. As shown in Fig. 3C, the Differentiated Services Two Rate Three Color Marker (DS TrTCM) algorithm uses two token buckets, a C bucket and an E bucket. Suppose that, without splitting the pending data stream, the computing device maintains token buckets for the pending data stream in which the bucket depth of the C bucket is CBS and the bucket depth of the E bucket is EBS; that is, X includes CBS and EBS. In the DS TrTCM algorithm, tokens are added to both the C bucket and the E bucket. Suppose likewise that the token adding rate of the C bucket is CIR and that of the E bucket is the Excess Information Rate (EIR); that is, Y includes CIR and EIR. The actual bucket depth of the C bucket is denoted Tc and that of the E bucket is denoted Te. In the DS TrTCM algorithm, tokens in the C bucket do not spill over into the E bucket.
In the DS TrTCM algorithm, the token-adding process in the token buckets is: every 1/CIR seconds, judge whether the actual bucket depth Tc of the C bucket is less than the C bucket's depth CBS; when Tc is less than CBS, increase Tc by 1; when Tc is greater than or equal to CBS, Tc is not increased. Every 1/EIR seconds, judge whether the actual bucket depth Te of the E bucket is less than the E bucket's depth EBS; when Te is less than EBS, increase Te by 1; when Te is greater than or equal to EBS, Te is not increased.
The token-consuming process in the token buckets is: while a message is being processed, compare the message size with the tokens in the C bucket. When the actual bucket depth Tc of the C bucket is greater than or equal to the message's byte count B, Tc is reduced by B and the message is marked green. When Tc is less than B, compare the actual bucket depth Te of the E bucket with B: when Te is greater than or equal to B, Te is reduced by B and the message is marked yellow; when Te is less than B, the message is marked red.
Fig. 3D is a schematic flowchart of the MEF 10.2 bandwidth profile algorithms. As shown in Fig. 3D, it illustrates two Metro Ethernet Forum (MEF) 10.2 bandwidth profile algorithms. The MEF 10.2 bandwidth profile algorithms use two token buckets, a C bucket and an E bucket. Suppose that, without splitting the pending data stream, the computing device maintains token buckets for the pending data stream in which the bucket depth of the C bucket is CBS and the bucket depth of the E bucket is EBS; that is, X includes CBS and EBS. In these algorithms, tokens are added to both the C bucket and the E bucket. Suppose likewise that the token adding rate of the C bucket is CIR and that of the E bucket is EIR; that is, Y includes CIR and EIR. The actual bucket depth of the C bucket is denoted Tc and that of the E bucket is denoted Te. In the algorithm shown at a in Fig. 3D, tokens in the C bucket do not spill over into the E bucket; in the algorithm shown at b in Fig. 3D, tokens in the C bucket spill over into the E bucket.
In the algorithm shown at a, the token-adding process in the token buckets is: every 1/CIR seconds, judge whether the actual bucket depth Tc of the C bucket is less than the C bucket's depth CBS; when Tc is less than CBS, increase Tc by 1; when Tc is greater than or equal to CBS, Tc is not increased. Every 1/EIR seconds, judge whether the actual bucket depth Te of the E bucket is less than the E bucket's depth EBS; when Te is less than EBS, increase Te by 1; when Te is greater than or equal to EBS, Te is not increased.
In the algorithm shown at b, the token-adding process in the token buckets is: every 1/CIR seconds, judge whether the actual bucket depth Tc of the C bucket is less than CBS; when Tc is less than CBS, increase Tc by 1; when Tc is greater than or equal to CBS, judge whether the actual bucket depth Te of the E bucket is less than EBS, and when Te is less than EBS, increase Te by 1; when Te is greater than or equal to EBS, neither Tc nor Te is increased. Every 1/EIR seconds, judge whether Te is less than the E bucket's depth EBS; when Te is less than EBS, increase Te by 1; when Te is greater than or equal to EBS, Te is not increased.
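The difference between the two refill variants can be captured in a small sketch (illustrative Python; one tick adds one byte of tokens, and the function names are ours):

```python
def cir_tick_a(tc, te, cbs, ebs):
    # Variant a (Fig. 3D, a): a CIR token is dropped when the C bucket is full.
    if tc < cbs:
        tc += 1
    return tc, te

def cir_tick_b(tc, te, cbs, ebs):
    # Variant b (Fig. 3D, b): a CIR token spills into the E bucket
    # when the C bucket is full.
    if tc < cbs:
        tc += 1
    elif te < ebs:
        te += 1
    return tc, te

def eir_tick(tc, te, cbs, ebs):
    # Both variants: an EIR token fills the E bucket only.
    if te < ebs:
        te += 1
    return tc, te

# With a full C bucket, only variant b forwards the token to the E bucket.
a_result = cir_tick_a(5, 0, 5, 3)
b_result = cir_tick_b(5, 0, 5, 3)
```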
In the MEF 10.2 bandwidth profile algorithms, the token-consuming process in the token buckets is: while a message is being processed, compare the message size with the tokens in the C bucket. When the actual bucket depth Tc of the C bucket is greater than or equal to the message's byte count B, Tc is reduced by B and the message is marked green. When Tc is less than B, compare the actual bucket depth Te of the E bucket with B: when Te is greater than or equal to B, Te is reduced by B and the message is marked yellow; when Te is less than B, the message is marked red.
Fig. 3E is a schematic flowchart of the BT_PCN algorithm. As shown in Fig. 3E, the BT Pre-Congestion Notification (BT_PCN) algorithm uses two token buckets, a C bucket and an E bucket. Suppose that, without splitting the pending data stream, the computing device maintains token buckets for the pending data stream in which the bucket depth of the C bucket is CBS and the bucket depth of the E bucket is EBS; that is, X includes CBS and EBS. In the BT_PCN algorithm, tokens are added to the C bucket. Suppose likewise that the token adding rate of the C bucket is CIR; that is, Y includes CIR. The actual bucket depth of the C bucket is denoted Tc and that of the E bucket is denoted Te. In the BT_PCN algorithm, tokens in the C bucket spill over into the E bucket.
In the BT_PCN algorithm, the token-adding process in the token buckets is: every 1/CIR seconds, judge whether the actual bucket depth Tc of the C bucket is less than the C bucket's depth CBS; when Tc is less than CBS, increase Tc by 1; when Tc is greater than or equal to CBS, judge whether the actual bucket depth Te of the E bucket is less than the E bucket's depth EBS; when Te is less than EBS, increase Te by 1; when Te is greater than or equal to EBS, neither Tc nor Te is increased.
The token-consuming process in the token buckets is: while a message is being processed, judge whether the actual bucket depth Te of the E bucket minus the message size B is greater than or equal to 0, i.e. compare Te with the message's byte count. When Te is greater than or equal to the message's byte count, Te is reduced by B and the message is marked green. When Te is less than the message's byte count, compare the actual bucket depth Tc of the C bucket with B: when Tc is greater than or equal to B, Tc is reduced by B and the message is marked yellow; when Tc is less than B, the message is marked red.
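Note that BT_PCN inverts the consumption order of SrTCM: the E bucket is consulted first and serves the green messages. A minimal sketch (illustrative Python; the class name is ours, and the buckets are assumed to start full):

```python
class BtPcnMeter:
    """BT_PCN marking: green messages are served from the E bucket,
    yellow messages from the C bucket; refill goes CIR -> C bucket,
    spilling into the E bucket when the C bucket is full."""

    def __init__(self, cbs, ebs):
        self.cbs, self.ebs = cbs, ebs  # bucket depths, in bytes
        self.tc, self.te = cbs, ebs    # actual bucket depths (start full)

    def add_token(self):
        # One 1/CIR tick: the C bucket fills first, overflow goes to E.
        if self.tc < self.cbs:
            self.tc += 1
        elif self.te < self.ebs:
            self.te += 1

    def mark(self, b):
        # b is the message size in bytes; returns the message's color.
        if self.te - b >= 0:
            self.te -= b
            return "green"
        if self.tc - b >= 0:
            self.tc -= b
            return "yellow"
        return "red"

meter = BtPcnMeter(cbs=100, ebs=50)
colors = [meter.mark(40), meter.mark(40), meter.mark(70)]
```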
In CAR services, on the basis of token computation and token bucket maintenance, the messages that obtain tokens must be successfully sent out, while the messages that fail to obtain tokens are marked as red messages and discarded, so that the actual traffic is controlled to a preset threshold, i.e. precise traffic control is achieved. It can be understood that the key to precise traffic control is the result of the messages' color calibration. During the processing of a data stream, if a message that should be discarded is calibrated green and successfully sent out, the actual traffic will exceed the preset threshold; if a message that should be successfully sent is calibrated red and discarded, the actual traffic will fall below the preset threshold. Both situations reduce the processing precision.
From the descriptions of the CAR algorithms shown in Fig. 3A to Fig. 3E, it can be seen that accurate traffic management and control, i.e. accurate color calibration of the messages in the data stream, depends on three factors: first, the bucket depth configuration of the token buckets; second, the token adding rate of the token buckets; and third, the token reduction rate of the token buckets.
After the pending data stream is divided into N sub-data streams, if the parameters of the first and second factors are not adjusted, i.e. if each sub-data flow processor still uses the bucket depth and token adding rate that the computing device maintains when processing the pending data stream (the original data stream not yet divided into sub-data streams), then over the long term the messages' colors will be calibrated incorrectly, which affects the processing precision.
If the parameter of the first factor is configured as 1/N of the bucket depth of the token bucket that the computing device would maintain when processing the pending data stream, and the parameter of the second factor is configured as 1/N of the token adding rate of that token bucket, then the parameters of the token buckets maintained by the sub-data flow processors match the sub-data streams. In other words, each sub-data stream and each sub-data flow processor are scaled down in equal proportion from the pending data stream and from the processor that handles the pending data stream, which guarantees that the precision with which a sub-data flow processor processes its sub-data stream equals the precision with which the computing device processes the pending data stream.
When the CAR algorithm is the SrTCM algorithm or the BT_PCN algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes CBS and EBS, and Y includes CIR; the bucket depth of the C bucket maintained by a sub-data flow processor is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N. When the CAR algorithm is the TrTCM algorithm, each sub-data flow processor maintains a C bucket and a P bucket, X includes CBS and PBS, and Y includes CIR and PIR; the bucket depth of the C bucket maintained by a sub-data flow processor is CBS/N, the bucket depth of the P bucket is PBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the P bucket is PIR/N. When the CAR algorithm is the DS TrTCM algorithm or an MEF 10.2 bandwidth profile algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes CBS and EBS, and Y includes CIR and EIR; the bucket depth of the C bucket maintained by a sub-data flow processor is CBS/N, the bucket depth of the E bucket is EBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the E bucket is EIR/N. That is, the bucket depth of each token bucket maintained by a sub-data flow processor is adjusted to 1/N of the bucket depth, and its token adding rate to 1/N of the token adding rate, of the token bucket that the computing device would maintain when processing the pending data stream.
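The scaling rule above amounts to dividing every configured depth and rate by N, which can be expressed in a one-line helper (illustrative Python; the parameter names follow the text, the numeric values are examples):

```python
def scale_car_params(params, n):
    """Scale token bucket parameters for one sub-data flow processor:
    every bucket depth (X) and token adding rate (Y) becomes 1/N of the
    value maintained for the unsplit pending data stream."""
    return {name: value / n for name, value in params.items()}

# SrTCM with N = 3: each CAR processor maintains CBS/3, EBS/3 and CIR/3.
per_processor = scale_car_params({"CBS": 9000, "EBS": 3000, "CIR": 600}, 3)
```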
In the data flow processing method provided in this application, because the N sub-data flow processors each process only one, different, sub-data stream within the N clock cycles, and because the bucket depths and token adding rates maintained by the sub-data flow processors are adjusted, the processing precision can be improved.
Optionally, the data flow processing method provided in this application further includes: obtaining the demanded processing performance of the pending data stream; when the demanded processing performance is greater than 1/(N×CLK), determining to perform the step of dividing the pending data stream into N sub-data streams, where CLK is the clock cycle of the computing device. This step determines the timing of executing S201: the data flow processing method provided in this application is performed only when the demanded processing performance of the pending data stream is greater than 1/(N×CLK). If the processing result of the data stream needs to be stored, then, assuming the storage space for the result of the original pending data stream is x bytes, after S203 is performed the storage space for the results of the N sub-data flow processors is N×x bytes; that is, extra storage resources are needed to store the results of the N sub-data flow processors. Therefore, by determining the timing of executing S201, the step is performed only when executing S201 is necessary; when S201 need not be performed, no extra storage space is needed for the results of S203, which saves storage resources. The demanded processing performance of the pending data stream can be determined by counting the length of the pending data stream. It should be noted that the demanded processing performance involved in this application refers to the processing performance required by the pending data stream, i.e. the number of operations per second the pending data stream needs. Because the length of a data stream is positively correlated with the processing performance it requires, the demanded processing performance of the pending data stream can be determined by counting its length.
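The decision of whether to perform S201 can be sketched as follows. The threshold 1/(N × CLK) is a reconstruction from context (an unsplit computing device finishes one message every N clock cycles), since the formula image is not reproduced in this text; the function name is ours:

```python
def should_split(demand_ops_per_second, n, clk_seconds):
    """Perform S201 (divide into N sub-data streams) only when the
    demanded processing performance exceeds what the unsplit device can
    deliver, assumed here to be 1/(N * CLK) operations per second."""
    return demand_ops_per_second > 1.0 / (n * clk_seconds)

# A 1 GHz device (CLK = 1 ns) with N = 3 delivers about 3.3e8 ops/s
# unsplit, so a 5e8 ops/s demand triggers the split and 1e8 does not.
split_needed = should_split(5e8, 3, 1e-9)
```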
In some service scenarios with ordering requirements, the data in the pending data stream has an order. In that case, after the N sub-data flow processors have processed the N sub-data streams in parallel, the data flow processing method provided in this application further includes: sending the N partial results obtained from the parallel processing of the N sub-data streams to the receiving device in order.
Different services have different requirements, and some services impose an ordering requirement on the data in the pending data stream. For example, the data in the pending data stream includes data carrying the timestamp 11:59:50 am and data carrying the timestamp 11:59:51 am. The timestamp here denotes the moment the data was generated and can be used to represent the order of the data: the earlier the carried timestamp, the earlier the data's position in the order. When the pending data stream flows into the computing device, the data with timestamp 11:59:50 am flows in before the data with timestamp 11:59:51 am; accordingly, when the processed results are sent to the receiving device, the data carrying timestamp 11:59:50 am also needs to be sent before the data carrying timestamp 11:59:51 am. In this scenario, after the N sub-data flow processors have processed the N sub-data streams and obtained N partial results, the N partial results must be sent to the receiving device in the order carried in the pending data stream, to ensure that the service proceeds normally.
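Restoring the original order before transmission can be sketched as follows (illustrative Python; each result is modeled as a (timestamp, payload) pair, which is our own convention):

```python
def merge_in_order(partial_results):
    """Merge the N partial result lists into timestamp order before
    sending them to the receiving device."""
    merged = []
    for part in partial_results:
        merged.extend(part)
    merged.sort(key=lambda result: result[0])  # earlier timestamp first
    return merged

# Results from 3 CAR processors, re-ordered by generation time.
ordered = merge_in_order([[(2, "b")], [(1, "a")], [(3, "c")]])
```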
The above process is illustrated below with a specific example, in which the data flow processing method provided in this application is applied to a CAR-mechanism scenario. Fig. 4 is a schematic diagram of a specific implementation of the data flow processing method provided in this application. The CAR algorithm in Fig. 4 is the SrTCM algorithm. As shown in Fig. 4, according to the size of the computing device's message processing window, data of length L in the data stream is taken as the pending data stream; the remaining data is processed in the next processing window. Suppose N is 3; the pending data stream is then divided into 3 sub-data streams 0-2. A flow distributor distributes the 3 sub-data streams to 3 sub-data flow processors, which in this example are CAR processors. As can be seen in Fig. 4, sub-data stream 0 is distributed to CAR processor 21, sub-data stream 1 to CAR processor 22, and sub-data stream 2 to CAR processor 23. The bucket depth of the C bucket maintained by each CAR processor is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N. In this example, after the 3 CAR processors have processed the 3 sub-data streams in parallel, i.e. after the messages in the 3 sub-data streams have been color-calibrated, the result of CAR processor 21 is stored in TK_C_0/TK_X_0, the result of CAR processor 22 in TK_C_0/TK_X_1, and the result of CAR processor 23 in TK_C_0/TK_X_2. By repeating the above process for the next segment of data of length L, a data stream processing performance of 1/CLK can be achieved, along with real-time processing.
In the data flow processing method provided by the present application, the pending data stream is evenly divided into N sub-data flows, where N is the number of clock cycles the computing device needs to process the pending data stream, and N is greater than or equal to 2. The N sub-data flows are distributed to N sub-data flow processors according to a preset rule, where the sub-data flows correspond one-to-one with the sub-data flow processors. Each sub-data flow processor maintains at least one token bucket; the bucket depth of the token bucket is X/N and the token adding rate of the token bucket is Y/N, where X is the bucket depth of the token bucket that would be maintained were the computing device to process the pending data stream, and Y is the corresponding token adding rate. The N sub-data flow processors are controlled to each process only one different sub-data flow within the N clock cycles. On the one hand, the processing performance of one sub-data flow is 1/(N×CLK), where CLK is the clock cycle of the computing device, and the total processing performance of the N sub-data flows is N×1/(N×CLK); the processing performance of the pending data stream is therefore 1/CLK. Compared with the related-art approach of merging the packets of N clock cycles for processing, in the data flow processing method provided by the present application each sub-data flow processor processes only one sub-data flow within the N clock cycles; no complicated data-dependence handling is needed, the processing complexity is low, the data flowing in at each clock cycle can be processed in real time, and the delay is reduced. On the other hand, the bucket depth of the token bucket maintained by each sub-data flow processor is 1/N of the bucket depth of the token bucket that would be maintained were the computing device to process the pending data stream, and its token adding rate is 1/N of the corresponding token adding rate, so that the sub-data flow processed by each sub-data flow processor matches the bucket depth and token adding rate it maintains. The drop in processing precision that would otherwise occur when the pending data stream is divided into N sub-data flows for processing does not arise, thereby improving the real-time performance and precision of data stream processing.
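The scaled token-bucket behaviour described above can be illustrated with a minimal single-bucket model. This is our own sketch, not the patented hardware: each sub-flow processor adds Y/N tokens per clock cycle into a bucket capped at depth X/N, and marks a packet conforming when enough tokens are available.

```python
class SubFlowTokenBucket:
    """Token bucket of one sub-data flow processor: depth X/N, rate Y/N."""

    def __init__(self, depth_x, rate_y, n):
        self.depth = depth_x / n
        self.rate = rate_y / n
        self.tokens = self.depth  # start full

    def tick(self):
        """Add Y/N tokens at each clock cycle, capped at the bucket depth."""
        self.tokens = min(self.depth, self.tokens + self.rate)

    def conforms(self, packet_len):
        """Consume tokens if the packet conforms; otherwise refuse it."""
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False
```

With X = 3000, Y = 300 and N = 3, each processor's bucket has depth 1000 and gains 100 tokens per cycle, i.e. exactly one third of the whole-stream profile, which is why the N parallel buckets together enforce the same rate as the original single bucket.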
Fig. 5 is a schematic structural diagram of embodiment one of the data stream processing device provided by the present application. The data stream processing device provided by the present application is used to process services under CAR algorithms. As shown in Fig. 5, the data stream processing device provided by the present application includes: a division module 51, a distribution module 52 and a control module 53.
The division module 51 is configured to evenly divide the pending data stream into N sub-data flows.
Here, N is the number of clock cycles the data stream processing device needs to process the pending data stream, and N is greater than or equal to 2.
The distribution module 52 is configured to distribute the N sub-data flows to N sub-data flow processors.
Here, the sub-data flows correspond one-to-one with the sub-data flow processors. Each sub-data flow processor maintains at least one token bucket; the bucket depth of the token bucket is X/N, and the token adding rate of the token bucket is Y/N, where X is the bucket depth of the token bucket that would be maintained were the data stream processing device to process the pending data stream, and Y is the corresponding token adding rate.
Optionally, in the present application, the bucket depths and token adding rates of the token buckets maintained by the sub-data flow processors for different CAR algorithms are as follows:
When the CAR algorithm is the SrTCM algorithm or the BT_PCN algorithm, each sub-data flow processor maintains a C bucket and an E bucket; X includes CBS and EBS, and Y includes CIR; the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N. When the CAR algorithm is the TrTCM algorithm, each sub-data flow processor maintains a C bucket and a P bucket; X includes CBS and PBS, and Y includes CIR and PIR; the bucket depth of the C bucket is CBS/N, the bucket depth of the P bucket is PBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the P bucket is PIR/N. When the CAR algorithm is the DS TrTCM algorithm or the MEF10.2 bandwidth configuration algorithm, each sub-data flow processor maintains a C bucket and an E bucket; X includes CBS and EBS, and Y includes CIR and EIR; the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the E bucket is EIR/N.
In one implementation, the distribution module 52 is specifically configured to: randomly distribute the N sub-data flows to the N sub-data flow processors according to a random algorithm.
In another implementation, the distribution module 52 is specifically configured to: distribute the N sub-data flows to the N sub-data flow processors according to mapping relationships between the identifiers of the sub-data flows and the identifiers of the sub-data flow processors.
In yet another implementation, the distribution module 52 is specifically configured to: according to the packet lengths of the sub-data flows already processed by the N sub-data flow processors, distribute the sub-data flow with the shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the longest packet length, distribute the sub-data flow with the second-shortest packet length to the sub-data flow processor whose processed sub-data flows have the second-longest packet length, and so on, until the distribution of the N sub-data flows is completed.
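The third, length-balancing rule can be sketched as follows. This is a minimal illustration with assumed names: `sub_flow_lengths[i]` is the total packet length of sub-data flow i, and `processed_lengths[p]` is the total packet length already processed by processor p.

```python
def distribute_by_length(sub_flow_lengths, processed_lengths):
    """Map sub-flow index -> processor index: the shortest sub-flow goes to
    the processor with the longest processed length, the second-shortest to
    the second-longest, and so on."""
    flows_shortest_first = sorted(range(len(sub_flow_lengths)),
                                  key=lambda i: sub_flow_lengths[i])
    procs_longest_first = sorted(range(len(processed_lengths)),
                                 key=lambda p: processed_lengths[p],
                                 reverse=True)
    return dict(zip(flows_shortest_first, procs_longest_first))
```

With sub-flow lengths [100, 300, 200] and processed lengths [500, 100, 300], sub-flow 0 (shortest) is paired with processor 0 (longest history), sub-flow 2 with processor 2, and sub-flow 1 with processor 1, which tends to even out the cumulative load across processors.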
The control module 53 is configured to control the N sub-data flow processors to each process only one, mutually different, sub-data flow within the N clock cycles.
The data stream processing device provided by the present application is used to perform the method in the embodiment shown in Fig. 2; its implementation process and technical principle are similar and are not repeated here.
In the data stream processing device provided by the present application, the division module evenly divides the pending data stream into N sub-data flows, the distribution module distributes the N sub-data flows to N sub-data flow processors, and the control module controls the N sub-data flow processors to each process only one different sub-data flow within the N clock cycles. On the one hand, the processing performance of one sub-data flow is 1/(N×CLK) and the total processing performance of the N sub-data flows is N×1/(N×CLK), so the processing performance of the pending data stream is 1/CLK. Compared with the related-art approach of merging the packets of N clock cycles for processing, in the data flow processing method provided by the present application each sub-data flow processor processes only one sub-data flow within the N clock cycles; no complicated data-dependence handling is needed, the processing complexity is low, the data flowing in at each clock cycle can be processed in real time, and the delay is reduced. On the other hand, the bucket depth of the token bucket maintained by each sub-data flow processor is 1/N of the bucket depth of the token bucket that would be maintained were the data stream processing device to process the pending data stream, and its token adding rate is 1/N of the corresponding token adding rate, so that the sub-data flow processed by each sub-data flow processor matches the bucket depth and token adding rate it maintains. The drop in processing precision that would otherwise occur when the pending data stream is divided into N sub-data flows for processing does not arise, thereby improving the real-time performance and precision of data stream processing.
Fig. 6 is a schematic structural diagram of embodiment two of the data stream processing device provided by the present application. On the basis of the embodiment shown in Fig. 5, the other modules included in the data stream processing device are described in detail here. As shown in Fig. 6, the data stream processing device provided by the present application further includes: an acquisition module 61, a determining module 62 and a sending module 63.
The acquisition module 61 is configured to obtain the demanded processing performance of the pending data stream.
The determining module 62 is configured to determine, when the demanded processing performance is greater than 1/(N×CLK), to perform the step of evenly dividing the pending data stream into N sub-data flows.
Here, CLK is the clock cycle of the data stream processing device.
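The decision logic of the acquisition and determining modules can be sketched as below. Note that the threshold formula is an image in the source document; 1/(N×CLK), i.e. the throughput achievable without dividing the stream, is our reading from the surrounding definitions.

```python
def should_divide(demand_throughput, n, clk):
    """Perform the dividing step only when the demanded processing
    performance exceeds 1 / (n * clk), the performance achievable when the
    device processes the whole stream over n clock cycles."""
    return demand_throughput > 1.0 / (n * clk)
```

Only when this check passes is the stream divided, so no extra storage is spent on division results that would never be used.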
The sending module 63 is configured to send the N partial results obtained after the N sub-data flows are processed in parallel to the receiving device in sequence.
It should be noted that the data stream processing device provided by the present application includes the sending module 63 only when the data in the pending data stream have an order.
In a first implementation, the data stream processing device provided by the present application includes: the division module 51, the distribution module 52, the control module 53, the acquisition module 61 and the determining module 62. In a second implementation, it includes: the division module 51, the distribution module 52, the control module 53 and the sending module 63. In a third implementation, it includes: the division module 51, the distribution module 52, the control module 53, the acquisition module 61, the determining module 62 and the sending module 63. The data stream processing device shown in Fig. 6 is the one in the third implementation.
The data stream processing device provided by the present application is used to perform the method in the embodiment shown in Fig. 2; its implementation process and technical principle are similar and are not repeated here.
In the data stream processing device provided by the present application, the acquisition module obtains the demanded processing performance of the pending data stream, and the determining module determines, when the demanded processing performance is greater than 1/(N×CLK), to perform the step of evenly dividing the pending data stream into N sub-data flows, where CLK is the clock cycle of the data stream processing device. The acquisition module and the determining module thus decide when the division of the pending data stream into sub-data flows is executed: the dividing step is performed only when it is necessary, so that when the step need not be performed, no extra storage space is needed to store the division result, saving storage resources. The sending module ensures that services with ordering requirements proceed normally.
Fig. 7 is a schematic structural diagram of embodiment three of the data stream processing device provided by the present application. The data stream processing device is used to process CAR algorithms. As shown in Fig. 7, the data stream processing device provided by the present application includes:
a transceiver 71; a memory 72, configured to store instructions; and a processor 73, connected to the memory 72 and the transceiver 71 respectively and configured to execute the instructions, so as to perform the following steps when executing the instructions: evenly dividing the pending data stream into N sub-data flows, where N is the number of clock cycles the data stream processing device needs to process the pending data stream, and N is greater than or equal to 2; distributing the N sub-data flows to N sub-data flow processors, where the sub-data flows correspond one-to-one with the sub-data flow processors, each sub-data flow processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, the token adding rate of the token bucket is Y/N, X is the bucket depth of the token bucket that would be maintained were the data stream processing device to process the pending data stream, and Y is the corresponding token adding rate; and controlling the N sub-data flow processors to each process only one different sub-data flow within the N clock cycles.
Optionally, when the CAR algorithm is the SrTCM algorithm or the BT_PCN algorithm, each sub-data flow processor maintains a C bucket and an E bucket; X includes CBS and EBS, and Y includes CIR; the bucket depth of the C bucket maintained by the sub-data flow processor is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N. When the CAR algorithm is the TrTCM algorithm, each sub-data flow processor maintains a C bucket and a P bucket; X includes CBS and PBS, and Y includes CIR and PIR; the bucket depth of the C bucket maintained by the sub-data flow processor is CBS/N, the bucket depth of the P bucket is PBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the P bucket is PIR/N. When the CAR algorithm is the DS TrTCM algorithm or the MEF10.2 bandwidth configuration algorithm, each sub-data flow processor maintains a C bucket and an E bucket; X includes CBS and EBS, and Y includes CIR and EIR; the bucket depth of the C bucket maintained by the sub-data flow processor is CBS/N, the bucket depth of the E bucket is EBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the E bucket is EIR/N. That is, the bucket depth of the token bucket maintained by each sub-data flow processor is adjusted to 1/N of the bucket depth of the token bucket that would be maintained were the data stream processing device to process the pending data stream, and the token adding rate of the token bucket maintained by each sub-data flow processor is adjusted to 1/N of the corresponding token adding rate.
In one implementation, in distributing the N sub-data flows to the N sub-data flow processors, the processor 73 is configured to: randomly distribute the N sub-data flows to the N sub-data flow processors according to a random algorithm.
In another implementation, in distributing the N sub-data flows to the N sub-data flow processors, the processor 73 is configured to: distribute the N sub-data flows to the N sub-data flow processors according to mapping relationships between the identifiers of the sub-data flows and the identifiers of the sub-data flow processors.
In yet another implementation, in distributing the N sub-data flows to the N sub-data flow processors, the processor 73 is configured to: according to the packet lengths of the sub-data flows already processed by the N sub-data flow processors, distribute the sub-data flow with the shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the longest packet length, distribute the sub-data flow with the second-shortest packet length to the sub-data flow processor whose processed sub-data flows have the second-longest packet length, and so on, until the distribution of the N sub-data flows is completed.
Optionally, the processor 73 is further configured to: obtain the demanded processing performance of the pending data stream, and when the demanded processing performance is greater than 1/(N×CLK), determine to perform the step of evenly dividing the pending data stream into N sub-data flows, where CLK is the clock cycle of the data stream processing device.
Optionally, if the data in the pending data stream have an order, the processor 73 is further configured to: send the N partial results obtained after the N sub-data flows are processed in parallel to the receiving device in sequence.
The data stream processing device provided by the present application is used to perform the method in the embodiment shown in Fig. 2; its implementation process, technical principle and technical effects are similar and are not repeated here.
The present application also provides a computer-readable storage medium in which computer-readable instructions are stored; when a computer reads and executes the computer-readable instructions, the computer performs the data flow processing method shown in Fig. 2.
One of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Claims (15)
- 1. A data flow processing method, applied in committed access rate (CAR) algorithms, characterized in that the method comprises: evenly dividing a pending data stream into N sub-data flows, wherein N is the number of clock cycles a computing device needs to process the pending data stream, and N is greater than or equal to 2; distributing the N sub-data flows to N sub-data flow processors, wherein the sub-data flows correspond one-to-one with the sub-data flow processors, each sub-data flow processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, the token adding rate of the token bucket is Y/N, X is the bucket depth of the token bucket that would be maintained were the computing device to process the pending data stream, and Y is the token adding rate of the token bucket that would be maintained were the computing device to process the pending data stream; and controlling the N sub-data flow processors to each process only one, mutually different, sub-data flow within the N clock cycles.
- 2. The method according to claim 1, characterized in that: when the CAR algorithm is the single-rate three-color marker (SrTCM) algorithm or the congestion pre-notification (BT_PCN) algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes the committed burst size (CBS) and the excess burst size (EBS), Y includes the committed information rate (CIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N; when the CAR algorithm is the two-rate three-color marker (TrTCM) algorithm, each sub-data flow processor maintains a C bucket and a P bucket, X includes CBS and the peak burst size (PBS), Y includes CIR and the peak information rate (PIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the P bucket is PBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the P bucket is PIR/N; and when the CAR algorithm is the differentiated services two-rate three-color marker (DS TrTCM) algorithm or the Metro Ethernet Forum MEF10.2 bandwidth configuration algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes CBS and EBS, Y includes CIR and the excess information rate (EIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the E bucket is EIR/N.
- 3. The method according to claim 1 or 2, characterized in that distributing the N sub-data flows to the N sub-data flow processors comprises: randomly distributing the N sub-data flows to the N sub-data flow processors according to a random algorithm.
- 4. The method according to claim 1 or 2, characterized in that distributing the N sub-data flows to the N sub-data flow processors comprises: distributing the N sub-data flows to the N sub-data flow processors according to mapping relationships between identifiers of the sub-data flows and identifiers of the sub-data flow processors.
- 5. The method according to claim 1 or 2, characterized in that distributing the N sub-data flows to the N sub-data flow processors comprises: according to the packet lengths of the sub-data flows already processed by the N sub-data flow processors, distributing the sub-data flow with the shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the longest packet length, distributing the sub-data flow with the second-shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the second-longest packet length, and so on, until the distribution of the N sub-data flows is completed.
- 6. The method according to any one of claims 1-5, characterized in that the method further comprises: obtaining the demanded processing performance of the pending data stream; and when the demanded processing performance is greater than 1/(N×CLK), determining to perform the step of evenly dividing the pending data stream into N sub-data flows, wherein CLK is the clock cycle of the computing device.
- 7. The method according to any one of claims 1-6, characterized in that, if the data in the pending data stream have an order, after the N sub-data flow processors process the N sub-data flows in parallel, the method further comprises: sending the N partial results of the parallel processing of the N sub-data flows to a receiving device according to the order.
- 8. A data stream processing device, the device being used to process committed access rate (CAR) algorithms, characterized in that the device comprises: a division module, configured to evenly divide a pending data stream into N sub-data flows, wherein N is the number of clock cycles the data stream processing device needs to process the pending data stream, and N is greater than or equal to 2; a distribution module, configured to distribute the N sub-data flows to N sub-data flow processors, wherein the sub-data flows correspond one-to-one with the sub-data flow processors, each sub-data flow processor maintains at least one token bucket, the bucket depth of the token bucket is X/N, the token adding rate of the token bucket is Y/N, X is the bucket depth of the token bucket that would be maintained were the data stream processing device to process the pending data stream, and Y is the token adding rate of the token bucket that would be maintained were the data stream processing device to process the pending data stream; and a control module, configured to control the N sub-data flow processors to each process only one, mutually different, sub-data flow within the N clock cycles.
- 9. The device according to claim 8, characterized in that: when the CAR algorithm is the single-rate three-color marker (SrTCM) algorithm or the congestion pre-notification (BT_PCN) algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes the committed burst size (CBS) and the excess burst size (EBS), Y includes the committed information rate (CIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, and the token adding rate of the C bucket is CIR/N; when the CAR algorithm is the two-rate three-color marker (TrTCM) algorithm, each sub-data flow processor maintains a C bucket and a P bucket, X includes CBS and the peak burst size (PBS), Y includes CIR and the peak information rate (PIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the P bucket is PBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the P bucket is PIR/N; and when the CAR algorithm is the differentiated services two-rate three-color marker (DS TrTCM) algorithm or the Metro Ethernet Forum MEF10.2 bandwidth configuration algorithm, each sub-data flow processor maintains a C bucket and an E bucket, X includes CBS and EBS, Y includes CIR and the excess information rate (EIR), the bucket depth of the C bucket is CBS/N, the bucket depth of the E bucket is EBS/N, the token adding rate of the C bucket is CIR/N, and the token adding rate of the E bucket is EIR/N.
- 10. The device according to claim 8 or 9, characterized in that the distribution module is specifically configured to: randomly distribute the N sub-data flows to the N sub-data flow processors according to a random algorithm.
- 11. The device according to claim 8 or 9, characterized in that the distribution module is specifically configured to: distribute the N sub-data flows to the N sub-data flow processors according to mapping relationships between identifiers of the sub-data flows and identifiers of the sub-data flow processors.
- 12. The device according to claim 8 or 9, characterized in that the distribution module is specifically configured to: according to the packet lengths of the sub-data flows already processed by the N sub-data flow processors, distribute the sub-data flow with the shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the longest packet length, distribute the sub-data flow with the second-shortest packet length among the N sub-data flows to the sub-data flow processor whose processed sub-data flows have the second-longest packet length, and so on, until the distribution of the N sub-data flows is completed.
- 13. The device according to any one of claims 8-12, characterized in that the device further comprises: an acquisition module, configured to obtain the demanded processing performance of the pending data stream; and a determining module, configured to determine, when the demanded processing performance is greater than 1/(N×CLK), to perform the step of evenly dividing the pending data stream into N sub-data flows, wherein CLK is the clock cycle of the data stream processing device.
- 14. The device according to any one of claims 8-13, characterized in that, if the data in the pending data stream have an order, the device further comprises: a sending module, configured to send the N partial results of the parallel processing of the N sub-data flows to a receiving device according to the order.
- 15. A computer-readable storage medium, characterized in that computer-readable instructions are stored in the computer-readable storage medium, and when a computer reads and executes the computer-readable instructions, the computer is caused to perform the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710776018.4A CN107743099B (en) | 2017-08-31 | 2017-08-31 | Data stream processing method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710776018.4A CN107743099B (en) | 2017-08-31 | 2017-08-31 | Data stream processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107743099A true CN107743099A (en) | 2018-02-27 |
CN107743099B CN107743099B (en) | 2021-08-03 |
Family
ID=61235171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710776018.4A Active CN107743099B (en) | 2017-08-31 | 2017-08-31 | Data stream processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107743099B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111726300A (en) * | 2020-06-15 | 2020-09-29 | 哈工大机器人(合肥)国际创新研究院 | Data sending method and device |
CN113132262A (en) * | 2020-01-15 | 2021-07-16 | 阿里巴巴集团控股有限公司 | Data stream processing and classifying method, device and system |
CN113949668A (en) * | 2021-08-31 | 2022-01-18 | 北京达佳互联信息技术有限公司 | Data transmission control method, device, server and storage medium |
CN114520789A (en) * | 2022-02-21 | 2022-05-20 | 北京浩瀚深度信息技术股份有限公司 | Token bucket-based shared cache message processing method, device, equipment and medium |
CN114765585A (en) * | 2020-12-30 | 2022-07-19 | 北京华为数字技术有限公司 | Service quality detection method, message processing method and device |
CN115102908A (en) * | 2022-08-25 | 2022-09-23 | 珠海星云智联科技有限公司 | Method for generating network message based on bandwidth control and related device |
CN116708310A (en) * | 2023-08-08 | 2023-09-05 | 北京傲星科技有限公司 | Flow control method and device, storage medium and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050068798A1 (en) * | 2003-09-30 | 2005-03-31 | Intel Corporation | Committed access rate (CAR) system architecture |
CN101159675A (en) * | 2007-11-06 | 2008-04-09 | 中兴通讯股份有限公司 | Method of implementing improvement of user service quality in IP multimedia subsystem |
CN101778043A (en) * | 2010-01-19 | 2010-07-14 | 华为技术有限公司 | Method and device for dividing filling rate interval based on token bucket algorithm |
CN101820385A (en) * | 2010-02-10 | 2010-09-01 | 中国电子科技集团公司第三十研究所 | Method for controlling flow of IP data stream |
CN101883050A (en) * | 2010-06-30 | 2010-11-10 | 中兴通讯股份有限公司 | System and method for implementing service rate limiting |
CN103763217A (en) * | 2014-02-07 | 2014-04-30 | 清华大学 | Packet scheduling method and device for multi-path TCP |
- 2017-08-31: CN application CN201710776018.4A granted as patent CN107743099B (status: Active)
Non-Patent Citations (4)
Title |
---|
刘云燕, 李斌, 胡绍海: "Research on traffic policing technology in IP QoS" (IP QoS中流量监管技术的研究), 《山东科技大学学报》 (Journal of Shandong University of Science and Technology) * |
李博伦, 王海栋, 钱高冉, 唐翔, 高秀敏: "Research on CAR technology for network traffic policing" (网络流量监管CAR技术研究), 《无线互联科技》 (Wireless Internet Technology) * |
林南晖, 索女中: "A hierarchical dynamic traffic shaping algorithm based on token buckets" (基于令牌桶的分级动态流量整形算法), 《现代计算机》 (Modern Computer) * |
魏小曼, 董喜明: "An improved single-rate three-color token bucket algorithm and its implementation" (一种改进的单速三色令牌桶算法及其实现), 《光通信研究》 (Study on Optical Communication) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113132262A (en) * | 2020-01-15 | 2021-07-16 | 阿里巴巴集团控股有限公司 | Data stream processing and classifying method, device and system |
CN111726300A (en) * | 2020-06-15 | 2020-09-29 | 哈工大机器人(合肥)国际创新研究院 | Data sending method and device |
CN114765585A (en) * | 2020-12-30 | 2022-07-19 | 北京华为数字技术有限公司 | Service quality detection method, message processing method and device |
CN114765585B (en) * | 2020-12-30 | 2024-03-01 | 北京华为数字技术有限公司 | Service quality detection method, message processing method and device |
CN113949668A (en) * | 2021-08-31 | 2022-01-18 | 北京达佳互联信息技术有限公司 | Data transmission control method, device, server and storage medium |
CN113949668B (en) * | 2021-08-31 | 2023-12-19 | 北京达佳互联信息技术有限公司 | Data transmission control method, device, server and storage medium |
CN114520789A (en) * | 2022-02-21 | 2022-05-20 | 北京浩瀚深度信息技术股份有限公司 | Token bucket-based shared cache message processing method, device, equipment and medium |
CN114520789B (en) * | 2022-02-21 | 2023-11-21 | 北京浩瀚深度信息技术股份有限公司 | Method, device, equipment and medium for processing shared cache message based on token bucket |
CN115102908A (en) * | 2022-08-25 | 2022-09-23 | 珠海星云智联科技有限公司 | Method for generating network message based on bandwidth control and related device |
CN116708310A (en) * | 2023-08-08 | 2023-09-05 | 北京傲星科技有限公司 | Flow control method and device, storage medium and electronic equipment |
CN116708310B (en) * | 2023-08-08 | 2023-09-26 | 北京傲星科技有限公司 | Flow control method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107743099B (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107743099A (en) | Data flow processing method, device and storage medium | |
CN103763208B (en) | Data traffic limiting method and device | |
CN103281252B (en) | Message flow control method and device based on multi-path transmission | |
CN104301248B (en) | Message rate-limiting method and device | |
CN106371546A (en) | Method and device for limiting power dissipation of whole cabinet | |
CN106775936A (en) | Virtual machine management method and device | |
US7609633B2 (en) | Bandwidth policer with compact data structure | |
WO2016086542A1 (en) | Message transmission method and device, and computer storage medium | |
CN106874120A (en) | Processor resource optimization method for a compute node, compute node and server cluster | |
CN102904823A (en) | Memory-based accurate flow control method for multi-user multi-service | |
US7779155B2 (en) | Method and systems for resource bundling in a communications network | |
CN102664807B (en) | Method and device for controlling flow | |
CN113890842A (en) | Information transmission delay upper bound calculation method, system, equipment and storage medium | |
CN105323053B (en) | Method and device for transparent transmission of a service clock | |
CN107204930A (en) | Token adding method and device | |
US9071554B2 (en) | Timestamp estimation and jitter correction using downstream FIFO occupancy | |
CN106850456A (en) | Token bucket flow limiters | |
CN107547446A (en) | Bandwidth adjustment method, device and network device | |
CN111064676B (en) | Flow monitoring method, equipment, device and computer storage medium | |
CN107517166A (en) | Flow control method, device and access device | |
CN107577530A (en) | Board, and method and system for balancing board memory usage | |
CN102970246B (en) | Ethernet message flow control method | |
CN111181875A (en) | Bandwidth adjusting method and device | |
EP3748499A1 (en) | System and method for managing shared computer resources | |
US7830873B1 (en) | Implementation of distributed traffic rate limiters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||