CN101834787A - Method and system for dispatching data - Google Patents


Info

Publication number
CN101834787A
Authority
CN
China
Prior art keywords
delay jitter
data stream
queue
port
cut-through
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201010147455A
Other languages
Chinese (zh)
Inventor
翟勇
张海峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201010147455A
Publication of CN101834787A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and a system for dispatching data, in the Ethernet field. In the method, a cut-through queue for buffering data sensitive to delay jitter is provided and is connected directly to a classifier and an output port. The classifier classifies a received data stream, identifying data streams sensitive to delay jitter and data streams insensitive to delay jitter; it sends the jitter-sensitive streams to the cut-through queue, and sends the jitter-insensitive streams to a port queue via a service scheduler, a user scheduler and a user-group scheduler. After a port scheduler schedules the jitter-sensitive and jitter-insensitive streams, both are output through the egress port. This technical scheme keeps delay jitter small, which benefits the rollout of such services.

Description

Method and system for dispatching data
Technical field
The present invention relates to the Ethernet field, and in particular to a method and system for dispatching data.
Background technology
Ethernet is now the most widely deployed network technology. It has become the dominant technology for standalone local area networks, and many Ethernet-based LANs also form part of the Internet. As Ethernet continues to develop, Ethernet access is becoming one of the main ways ordinary users reach the Internet. Realizing an end-to-end Quality of Service (QoS) solution across the whole network inevitably requires addressing QoS guarantees for services carried over Ethernet. Ethernet switching equipment must therefore apply Ethernet QoS techniques to provide different grades of QoS guarantee to different types of service flow, and in particular to support service flows with strict delay and jitter requirements.
Delay arises mainly from the time a packet spends in a buffer before it is scheduled for forwarding; reducing this component reduces the total packet delay accordingly. Jitter arises mainly because successive packets of a service flow wait in queues for different lengths of time, and it is the impairment with the greatest impact on service quality. Some service types, in particular real-time services such as voice and video, tolerate jitter extremely poorly.
When the network is congested, multiple packets compete simultaneously for the same resources, and queue scheduling is normally used to resolve the contention. Queue scheduling algorithms commonly used on switches include the Strict-Priority (SP) algorithm, the Weighted Round Robin (WRR) algorithm and the Weighted Fair Queuing (WFQ) algorithm. On a switch, a port generally has 8 priority queues, which can be scheduled using SP, WRR, WFQ or similar modes. These queues and their scheduler form a single level with no hierarchy. The advantage of this common QoS queue design is that scheduling is fast and delay jitter is small; the drawback is that it is difficult to control multiple services of multiple users at the same time.
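As an illustrative sketch (not part of the patent text), strict-priority dequeueing over a set of priority queues can be written as follows; the list layout and names are assumptions for illustration:

```python
from collections import deque

def sp_dequeue(queues):
    """Strict priority: always serve the highest-priority non-empty queue.

    `queues` is a list of deques ordered from highest to lowest priority
    (a real switch would typically use 8 hardware queues per port).
    """
    for q in queues:
        if q:
            return q.popleft()
    return None  # all queues empty, nothing to schedule

# Example: "a" waits in a low-priority queue, "b" in a higher one.
queues = [deque(), deque(["b"]), deque(["a"])]
print(sp_dequeue(queues))  # serves "b" before "a"
```

Because the scan always restarts from the top, a busy high-priority queue can starve lower queues, which is why WRR/WFQ exist as alternatives.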
To address this, Hierarchical QoS (H-QoS) was proposed. H-QoS is a QoS technique that can control each user's traffic while simultaneously scheduling the priorities of each user's services. It uses multi-level scheduling, with queues and schedulers arranged in a hierarchy, giving the equipment a control strategy over its internal resources: it can provide quality guarantees for premium users' services while reducing overall network construction cost. H-QoS has a drawback, however: because it has more than two levels of queues and schedulers, packets are switched through multiple queues and multiple schedulers, which correspondingly adds delay and jitter, so H-QoS performs relatively poorly in jitter-sensitive environments.
Summary of the invention
The purpose of the embodiments of the invention is to provide a method and system for dispatching data, to overcome the prior-art shortcoming that delay jitter remains large when users and services are controlled.
To overcome the above problem, the embodiments of the invention provide a method and system for dispatching data, with the following technical scheme:
A method for dispatching data comprises:
providing a cut-through queue for buffering data sensitive to delay jitter, the cut-through queue being connected directly to a classifier and to an egress port;
the classifier classifying a received data stream and identifying data streams sensitive to delay jitter and data streams insensitive to delay jitter;
the classifier sending the jitter-sensitive data streams to the cut-through queue, and sending the jitter-insensitive data streams to a port queue via a service scheduler, a user scheduler and a user-group scheduler;
a port scheduler scheduling the jitter-sensitive data and the jitter-insensitive data, which are then output through the egress port.
Further, the port scheduler schedules the jitter-sensitive and jitter-insensitive data streams using fragment-free forwarding before they are output through the egress port.
Further, when the cut-through queue has the highest priority, the port scheduler schedules the jitter-sensitive and jitter-insensitive data streams using the strict-priority algorithm or the weighted-fair-queuing algorithm before they are output through the egress port.
Further, when the cut-through queue does not have the highest priority, the port scheduler uses the weighted-fair-queuing algorithm to schedule the jitter-sensitive data streams first and the jitter-insensitive data streams afterwards, wherein the weight of the cut-through queue is greater than the weight of the port queue.
Further, the method also comprises: a step of setting the bandwidth of the cut-through queue.
Further, the method also comprises: detecting whether the cut-through queue becomes congested and, if so, discarding data from the data stream.
A system for dispatching data comprises:
a cut-through queue module, connected directly to a classifier and to an egress port, for buffering data sensitive to delay jitter;
the classifier, for classifying data streams received from an ingress port, identifying data streams sensitive to delay jitter and data streams insensitive to delay jitter, sending the jitter-sensitive data streams to the cut-through queue, and sending the jitter-insensitive data streams to a port queue via a service scheduler, a user scheduler and a user-group scheduler;
a port scheduler, for scheduling the jitter-sensitive data and the jitter-insensitive data, which are then output through the egress port.
Further, the port scheduler is specifically configured to schedule the jitter-sensitive and jitter-insensitive data streams using fragment-free forwarding before they are output through the egress port.
Further, the port scheduler is also configured, when the cut-through queue has the highest priority, to schedule the jitter-sensitive and jitter-insensitive data streams using the strict-priority algorithm or the weighted-fair-queuing algorithm before they are output through the egress port.
Further, the port scheduler is also configured, when the cut-through queue does not have the highest priority, to use the weighted-fair-queuing algorithm to schedule the jitter-sensitive data streams first and the jitter-insensitive data streams afterwards, wherein the weight of the cut-through queue is greater than the weight of the port queue.
In embodiments of the invention, a cut-through queue is added to buffer data relatively sensitive to delay jitter. The classifier classifies the jitter-sensitive streams into the cut-through queue, from which they are scheduled directly. This retains the H-QoS advantage of being able to distinguish users and services while also retaining the flat-QoS advantage of fast scheduling with small jitter; it reduces the delay and jitter of data streams in the packet network and thus benefits the rollout of such services.
Description of drawings
Fig. 1 is a flowchart of a method for dispatching data provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the commonly used H-QoS hierarchy model;
Fig. 3 is a schematic diagram of the H-QoS hierarchy model with the cut-through queue added;
Fig. 4 is a schematic diagram of store-and-forward;
Fig. 5 is a schematic diagram of cut-through forwarding;
Fig. 6 is a schematic diagram of fragment-free forwarding;
Fig. 7 is a schematic diagram of data stream handling in the improved H-QoS model;
Fig. 8 is a schematic diagram of a system for dispatching data provided by an embodiment of the invention.
Embodiment
The core idea of the invention is to add a cut-through queue with the highest priority, modify the responsibilities of the port scheduler, and have the queue forward in fragment-free mode. The classifier classifies jitter-sensitive data streams into the cut-through queue, from which they are scheduled directly, reducing the delay and jitter of data streams in the packet network. The technical scheme of the embodiments thus retains both the H-QoS advantage of distinguishing users and services and the flat-QoS advantage of fast scheduling with small delay jitter, which benefits the rollout of such services.
As shown in Fig. 1, an embodiment of the invention provides a method for dispatching data, comprising:
Step 101: provide a cut-through queue for buffering data sensitive to delay jitter, the cut-through queue being connected directly to the classifier and the egress port;
Step 102: the classifier classifies the received data stream, identifying data streams sensitive to delay jitter and data streams insensitive to delay jitter;
Step 103: the classifier sends the jitter-sensitive data streams to the cut-through queue, and sends the jitter-insensitive data streams to the port queue via the service scheduler, user scheduler and user-group scheduler;
Step 104: the port scheduler schedules the jitter-sensitive and jitter-insensitive data, which are then output through the egress port.
The port scheduler schedules the jitter-sensitive and jitter-insensitive data using fragment-free forwarding before output through the egress port.
When the cut-through queue has the highest priority, the port scheduler schedules the jitter-sensitive and jitter-insensitive data streams using the strict-priority algorithm or the weighted-fair-queuing algorithm before output through the egress port.
When the cut-through queue does not have the highest priority, the port scheduler uses the weighted-fair-queuing algorithm to schedule the jitter-sensitive data streams first and the jitter-insensitive data streams afterwards, with the weight of the cut-through queue greater than the weight of the port queue.
The method also comprises a step of setting the bandwidth of the cut-through queue.
The method also comprises detecting whether the cut-through queue becomes congested and, if so, discarding data from the data stream.
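The classification and enqueueing of steps 102 and 103 can be sketched as follows; the `jitter_sensitive` flag and queue names are assumptions for illustration, and real classification would inspect packet headers (e.g. VoIP, 1588 PTP):

```python
def dispatch(packet, cut_through_q, hierarchy_q):
    """Steps 102-103 (sketch): route a classified packet to the right path."""
    if packet["jitter_sensitive"]:
        cut_through_q.append(packet)   # straight to the cut-through queue
    else:
        hierarchy_q.append(packet)     # via service/user/user-group schedulers

q2, q1 = [], []
dispatch({"jitter_sensitive": True, "id": 1}, q2, q1)
dispatch({"jitter_sensitive": False, "id": 2}, q2, q1)
print(len(q2), len(q1))  # 1 1
```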
The technical scheme of the embodiments of the invention is described in detail below with reference to the accompanying drawings, as follows:
Normally, the H-QoS hierarchical scheduling model, shown in Fig. 2, consists of four levels: a port layer, a user-group layer, a user layer and a user-service layer. Each level has its own queues and scheduler. If a service flow such as a VoIP stream or a 1588 PTP protocol message arrives, it must traverse four levels of queues and four levels of scheduling; queueing and scheduling time grows unpredictably, so delay and jitter increase.
In the model shown in Fig. 3, a cut-through queue Q2 is added, designed to have the highest priority of all queues. This cut-through queue links the classifier directly to the port, bypassing the service layer, user-group layer and user layer in between.
In the model shown in Fig. 3, the roles of some schedulers must now be modified: the service-layer scheduler, user-layer scheduler and user scheduler do not schedule the data stream in the newly added cut-through queue Q2.
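The bypass described above amounts to two forwarding paths of different depth; the stage names below are assumptions for illustration:

```python
# Sketch of the two paths in the modified model of Fig. 3.
HIERARCHY = ["service scheduler", "user scheduler", "user-group scheduler",
             "port queue Q1", "port scheduler"]
CUT_THROUGH = ["cut-through queue Q2", "port scheduler"]

def path(jitter_sensitive: bool):
    """Stages a classified flow traverses before the egress port."""
    return CUT_THROUGH if jitter_sensitive else HIERARCHY

print(path(True))  # the cut-through path skips three scheduler levels
```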
Jitter-sensitive data streams (also called service flows) pass through the classifier directly into the new cut-through queue Q2, which can be understood as an additional, higher-priority queue opened on the physical port. The port scheduler must add Q2 to the queues it schedules on the port. All data inside Q2 is delay- and jitter-sensitive, so to ensure its buffered data is scheduled promptly and with priority, the port scheduler can schedule port queue Q1 and cut-through queue Q2 with the strict-priority algorithm. Alternatively, to ensure that assured-forwarding-class data in Q1 is also scheduled in time, Q1 and Q2 can be scheduled with the WFQ algorithm, serving the data in Q2 first and the data in Q1 afterwards, with Q2 given the larger weight. The port scheduler's mode should therefore be chosen according to the stream sizes and traffic volumes. For example, with a total bandwidth of 100M and real-time services occupying perhaps 5M, the real-time share of the total bandwidth is very small (5/100 = 1/20); WFQ is then a poor fit, because Q1 holds relatively many packets and, even with Q2's larger weight, some packets in Q2 may still not be scheduled and forwarded in time. In that case scheduling the queues with SP (Q2 at higher priority) is more appropriate. If instead the total bandwidth is 100M and real-time services occupy 50M, the real-time share is large (50/100 = 1/2) and WFQ can essentially guarantee that the data in Q2 is forwarded in real time, whereas SP is then less suitable. Preferably, the scheduling mode can be set dynamically.
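The choice between SP and WFQ described above can be sketched as a threshold on the real-time traffic share. The 0.25 cutoff is an assumption purely for illustration; the text only gives 5/100 (prefer SP) and 50/100 (prefer WFQ) as example points:

```python
def pick_port_scheduler(realtime_bw_mbps, total_bw_mbps, threshold=0.25):
    """Pick SP when real-time traffic is a small share of port bandwidth.

    Hypothetical helper: the threshold is not specified in the patent,
    which only says the mode should follow stream size and traffic volume.
    """
    share = realtime_bw_mbps / total_bw_mbps
    return "SP" if share < threshold else "WFQ"

print(pick_port_scheduler(5, 100))   # SP  (share 1/20, as in the example)
print(pick_port_scheduler(50, 100))  # WFQ (share 1/2, as in the example)
```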
In the model shown in Fig. 3, the bandwidth-control process also needs revision. The H-QoS model of Fig. 3 can control per-user and per-user-group bandwidth, but once the cut-through queue Q2 is added, its data stream goes directly to the port, so the bandwidth of the stream on Q2 cannot be controlled there. After the user signs a Service-Level Agreement (SLA), the actual Committed Information Rate (CIR) and Peak Information Rate (PIR) applied to the other data streams must deduct the bandwidth allocated to Q2. The bandwidth allocation on Q2 should be configurable by command, and its size must be set according to the application: for example, if the maximum burst of the corresponding real-time service is 10M/s, it is preferable to reserve more than 10M of bandwidth for Q2 (if service data is not to be dropped due to congestion).
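The CIR/PIR deduction described above can be sketched as follows; the function name and Mbps units are illustrative assumptions:

```python
def effective_rates(cir_mbps, pir_mbps, q2_reserved_mbps):
    """Shaping rates for the hierarchy path once Q2 bandwidth is carved out.

    Per the description, the bandwidth reserved for the cut-through queue
    is deducted from the SLA's CIR and PIR applied to the other streams.
    """
    return (max(cir_mbps - q2_reserved_mbps, 0),
            max(pir_mbps - q2_reserved_mbps, 0))

# e.g. reserving 10M for a real-time burst on Q2
print(effective_rates(cir_mbps=50, pir_mbps=100, q2_reserved_mbps=10))  # (40, 90)
```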
In the model shown in Fig. 3, the utilization of the cut-through queue Q2 must be monitored. When the queue holds too much data and tends toward congestion, the network overload can be relieved by discarding data; usable methods include Tail-Drop and Weighted Random Early Detection (WRED). The size of Q2 must be designed appropriately for the application environment; it depends on factors such as the packet sizes in the service flows, the efficiency of the scheduler and the traffic volume of the queue. For example, if the maximum packet size is 1000 (a value chosen purely for illustration), the maximum delay in Q2 is 1 us (likewise illustrative; the delay in Q2 is numerically almost fixed), and the maximum traffic of the corresponding real-time service is 10M/s, then to reduce the chance that packets are dropped in Q2, its size is preferably (1/1000) x (10*1024*1024) x 1000.
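Of the two discard methods named above, Tail-Drop is the simpler and can be sketched directly; the capacity value is an assumption for illustration:

```python
def enqueue_tail_drop(queue, packet, capacity):
    """Tail-Drop: discard new arrivals once the cut-through queue is full."""
    if len(queue) >= capacity:
        return False  # packet dropped to relieve congestion
    queue.append(packet)
    return True

q = []
print(enqueue_tail_drop(q, "p1", capacity=1))  # True: accepted
print(enqueue_tail_drop(q, "p2", capacity=1))  # False: queue full, dropped
```

WRED would instead drop probabilistically before the queue fills, based on a weighted average queue depth, which avoids the burst losses Tail-Drop can cause.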
To reduce delay further, when service flows (data streams) are delivered from the classifier to the queue, the port scheduler schedules the cut-through queue Q2 using fragment-free forwarding. This is a compromise between cut-through and store-and-forward: store-and-forward is shown schematically in Fig. 4, cut-through in Fig. 5 and fragment-free forwarding in Fig. 6. Before forwarding, it first checks whether the packet (including protocol messages) has reached 64 bytes (512 bits) in length. If it is shorter than 64 bytes, it is a runt (a "false packet") and is discarded; otherwise the packet is forwarded based on the information in its first 64 bytes. Because this mode always analyzes only the first 64 bytes of a packet, it processes data faster than store-and-forward but slower than cut-through.
With the port scheduler using fragment-free forwarding, a packet can be scheduled for forwarding as soon as its first 64 bytes enter the queue. Compared with store-and-forward, which must buffer the whole packet and verify its correctness before scheduling, this greatly improves forwarding efficiency and reduces the packet's delay in the buffer queue. For a 1500-byte packet, for example, the minimum buffering delay under fragment-free forwarding is roughly one twentieth (64/1500) of the store-and-forward delay; and because only 64 bytes are needed before scheduling, the delay is also relatively fixed. The drawback is that forwarding after analyzing only the first 64 bytes skips the Frame Check Sequence (FCS) check on the whole packet, so corrupted packets may also be forwarded.
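The runt check at the heart of fragment-free forwarding can be sketched as follows (an illustrative model, not switch firmware):

```python
MIN_FRAME_BYTES = 64  # 512 bits: minimum valid Ethernet frame length

def fragment_free_forward(frame: bytes):
    """Fragment-free: drop runts, start forwarding on the first 64 bytes.

    Only the first 64 bytes are inspected before forwarding begins, so
    errors later in the frame (e.g. a bad FCS) are not caught here --
    the trade-off noted in the text above.
    """
    if len(frame) < MIN_FRAME_BYTES:
        return None                     # runt ("false packet"): discard
    return frame[:MIN_FRAME_BYTES]      # header bytes used to begin forwarding

print(fragment_free_forward(b"\x00" * 63))        # None: runt discarded
print(len(fragment_free_forward(b"\x00" * 1500)))  # 64
```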
When different data streams arrive at the ingress port (for example, ordinary data streams 1 and 2 and a 1588 protocol-message stream 3), the handling flow of the system, with reference to Fig. 7, comprises:
Step 100: the classifier (Classifier) identifies data stream 1 as assured-forwarding-class data and delivers it to the queue q1 corresponding to that service; it identifies data stream 2 as best-effort-class data and delivers it to the queue q2 corresponding to that service; it identifies data stream 3 as expedited-forwarding-class data and, because this stream is sensitive to real-time behavior and jitter, delivers it directly to the cut-through queue Q2 corresponding to that service.
Step 200: the service-level scheduler schedules the data in queues q1 and q2 (usually by SP or WFQ; with SP, stream 1 in q1 is scheduled with priority over stream 2) and sends it to the corresponding queue of the next-level scheduler; the user-level scheduler schedules the data in its queues to the corresponding queue of the user-group-level scheduler; finally the data is sent into the port queue Q1 of the port scheduler.
Step 300: the port scheduler schedules according to the weighted-fair-queuing algorithm (WFQ/CBWFQ). When scheduling to port 1, it serves the data in cut-through queue Q2 first and the data in port queue Q1 afterwards, with Q2 given the larger weight.
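Step 300 can be sketched as a toy weighted interleave between Q1 and Q2; the weights w1 and w2 are assumed values, and real WFQ/CBWFQ would track per-queue virtual finish times rather than fixed batches:

```python
def wfq_order(q1_len, q2_len, w1=1, w2=3):
    """Toy service order when the port scheduler weights Q2 above Q1.

    Per round, up to w2 packets from Q2 are served before w1 from Q1,
    mirroring the "Q2 first, larger weight" rule in Step 300.
    """
    order = []
    while q1_len or q2_len:
        served_q2 = min(w2, q2_len)
        order.extend(["Q2"] * served_q2)
        q2_len -= served_q2
        served_q1 = min(w1, q1_len)
        order.extend(["Q1"] * served_q1)
        q1_len -= served_q1
    return order

print(wfq_order(q1_len=2, q2_len=3))  # Q2 served first and more often
```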
The technical scheme of the invention applies wherever data packets in a network are sensitive to delay jitter, including services such as VoIP and 1588 PTP.
Data streams of the guaranteed-forwarding class (generally data relatively sensitive to delay jitter, such as VoIP services, synchronous Ethernet services and 1588 protocol messages) are identified by the classifier and sent to the cut-through queue. Other packets are sent to the corresponding service-level queues according to their classification rules, and the data in each service-level queue is finally placed into the port queue via the service scheduler, user scheduler and user-group scheduler in turn. As this process shows, the cut-through queue minimizes the time packets spend waiting in queues, that is, it minimizes the delay and jitter of data streams in the packet-switched network, and improves the performance of H-QoS in handling delay-sensitive services.
An embodiment of the invention provides a system for dispatching data, as shown in Fig. 8, comprising:
a cut-through queue module, connected directly to the classifier and the egress port, for buffering data sensitive to delay jitter;
the classifier, for classifying data streams received from the ingress port, identifying data streams sensitive to delay jitter and data streams insensitive to delay jitter, sending the jitter-sensitive data streams to the cut-through queue, and sending the jitter-insensitive data streams to the port queue via the service scheduler, user scheduler and user-group scheduler;
a port scheduler, for scheduling the jitter-sensitive and jitter-insensitive data, which are then output through the egress port.
Further, the port scheduler is specifically configured to schedule the jitter-sensitive and jitter-insensitive data streams using fragment-free forwarding before they are output through the egress port.
Further, the port scheduler is also configured, when the cut-through queue has the highest priority, to schedule the jitter-sensitive and jitter-insensitive data streams using the strict-priority algorithm or the weighted-fair-queuing algorithm before they are output through the egress port.
Further, the port scheduler is also configured, when the cut-through queue does not have the highest priority, to use the weighted-fair-queuing algorithm to schedule the jitter-sensitive data streams first and the jitter-insensitive data streams afterwards, with the weight of the cut-through queue greater than the weight of the port queue.
Compared with the prior art, the embodiments of the invention add a highest-priority cut-through queue for data relatively sensitive to delay jitter and modify the responsibilities of the port scheduler, with the queue forwarding in fragment-free mode. The classifier classifies jitter-sensitive data streams into the cut-through queue, from which they are scheduled directly, reducing the delay and jitter of data streams in the packet network. The technical scheme thus retains both the H-QoS advantage of distinguishing users and services and the QoS advantage of fast scheduling with small delay jitter, which benefits the rollout of such services.
Finally, it should be noted that the above embodiments only illustrate, and do not restrict, the technical scheme of the invention. Although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the invention may be modified, changed or equivalently substituted without departing from the spirit and scope of the invention and its claims.

Claims (10)

1. the method for a data dispatching is characterized in that, comprising:
A straight-through formation that is used for buffer memory to the data of delay variation sensitivity is set, and this straight-through formation directly links to each other with outbound port with grader;
This grader is classified to the data flow that receives, and identification is to the data flow of delay variation sensitivity with to the insensitive data flow of delay variation;
This grader will be sent to this straight-through formation to the data flow of delay variation sensitivity, and this is sent to port queue to the insensitive data flow of delay variation by service dispatcher, user's scheduler and user's group scheduling device;
The Port Scheduling device is dispatched the back by this outbound port output to the data and the insensitive data of delay variation of this delay variation sensitivity.
2. The method of claim 1, characterized in that the port scheduler schedules the jitter-sensitive and jitter-insensitive data streams using fragment-free forwarding before they are output through the egress port.
3. The method of claim 1, characterized in that, when the cut-through queue has the highest priority, the port scheduler schedules the jitter-sensitive and jitter-insensitive data streams using the strict-priority algorithm or the weighted-fair-queuing algorithm before they are output through the egress port.
4. The method of claim 1, characterized in that, when the cut-through queue does not have the highest priority, the port scheduler uses the weighted-fair-queuing algorithm to schedule the jitter-sensitive data streams first and the jitter-insensitive data streams afterwards, wherein the weight of the cut-through queue is greater than the weight of the port queue.
5. the method for claim 1 is characterized in that, also comprises: the step that this straight-through formation bandwidth is set.
6. the method for claim 1 is characterized in that, also comprises: it is congested whether this straight-through formation of detection occurs, if then data stream is carried out discard processing.
7. A system for dispatching data, characterized by comprising:
a cut-through queue module, connected directly to a classifier and to an egress port, configured to buffer data sensitive to delay jitter;
the classifier, configured to classify data streams received from an ingress port, identify data streams sensitive to delay jitter and data streams insensitive to delay jitter, send the jitter-sensitive data streams to the cut-through queue, and send the jitter-insensitive data streams to a port queue via a service scheduler, a user scheduler and a user-group scheduler; and
a port scheduler, configured to schedule the jitter-sensitive data and the jitter-insensitive data for output through the egress port.
8. system as claimed in claim 7 is characterized in that, this Port Scheduling device, and the mode that specifically is used to adopt debris exclusion to transmit is dispatched the back by this outbound port output to the data flow and the insensitive data flow of delay variation of this delay variation sensitivity.
9. The system of claim 7, wherein the port scheduler is further configured to, when the priority of the cut-through queue is the highest, schedule the delay-jitter-sensitive data stream and the delay-jitter-insensitive data stream using a strict-priority algorithm or a weighted fair queuing algorithm before outputting them through the egress port.
10. The system of claim 7, wherein the port scheduler is further configured to, when the priority of the cut-through queue is not the highest, use a weighted fair queuing algorithm to schedule the delay-jitter-sensitive data stream first and the delay-jitter-insensitive data stream afterwards, wherein the weight of the cut-through queue is greater than the weight of the port queue.
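As an illustration only (not from the patent), the two port-scheduler behaviors in claims 3-4 and 9-10 can be sketched as a single pass over the cut-through queue and the port queue. The function name, the frame labels, and the use of weighted round-robin as a stand-in for weighted fair queuing are all assumptions:

```python
from collections import deque

def schedule(cut_through, port_queue, mode="strict", weights=(3, 1)):
    """Illustrative port-scheduler pass over two queues.
    'strict' always drains the cut-through queue first (claims 3, 9);
    'wfq' approximates weighted fair queuing by serving each queue up
    to its weight per round, with the cut-through queue given the
    larger weight (claims 4, 10)."""
    out = []
    if mode == "strict":
        while cut_through or port_queue:
            src = cut_through if cut_through else port_queue
            out.append(src.popleft())
    else:  # weighted round-robin as a simple WFQ stand-in
        w_ct, w_pq = weights
        while cut_through or port_queue:
            for _ in range(w_ct):
                if cut_through:
                    out.append(cut_through.popleft())
            for _ in range(w_pq):
                if port_queue:
                    out.append(port_queue.popleft())
    return out

ct = deque(["j1", "j2"])        # delay-jitter-sensitive frames
pq = deque(["d1", "d2", "d3"])  # jitter-insensitive frames
print(schedule(ct, pq, mode="strict"))
# ['j1', 'j2', 'd1', 'd2', 'd3']
```

Under strict priority the jitter-sensitive frames always leave first; under the weighted mode they still dominate each round (weight 3 vs. 1) but the port queue cannot be starved, which matches the claimed condition that the cut-through queue's weight exceeds the port queue's.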
CN201010147455A 2010-04-12 2010-04-12 Method and system for dispatching data Pending CN101834787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010147455A CN101834787A (en) 2010-04-12 2010-04-12 Method and system for dispatching data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010147455A CN101834787A (en) 2010-04-12 2010-04-12 Method and system for dispatching data

Publications (1)

Publication Number Publication Date
CN101834787A true CN101834787A (en) 2010-09-15

Family

ID=42718719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010147455A Pending CN101834787A (en) 2010-04-12 2010-04-12 Method and system for dispatching data

Country Status (1)

Country Link
CN (1) CN101834787A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102148815A (en) * 2010-10-26 2011-08-10 华为技术有限公司 Video stream dispatching method and network node
CN103067474A (en) * 2012-12-21 2013-04-24 曙光信息产业(北京)有限公司 Processing method and device for communication in distributed file system
CN103326962A (en) * 2013-06-19 2013-09-25 中国人民解放军信息工程大学 Diversified service exchange method
CN103457881A (en) * 2012-06-01 2013-12-18 美国博通公司 System for performing Data Cut-Through
CN107172107A (en) * 2017-07-24 2017-09-15 中国人民解放军信息工程大学 The transparent management-control method and equipment of a kind of differentiated service stream early stage passback
CN107181698A (en) * 2016-03-10 2017-09-19 谷歌公司 The system and method for single queue multi-stream service shaping
CN108632162A (en) * 2017-03-22 2018-10-09 华为技术有限公司 A kind of array dispatching method and forwarding unit
CN112399470A (en) * 2019-08-16 2021-02-23 深圳长城开发科技股份有限公司 LoRa communication method, LoRa gateway, LoRa system and computer readable storage medium
CN112929217A (en) * 2021-02-05 2021-06-08 吉林化工学院 Halter strap theory-based differentiated network traffic bandwidth demand estimation method
CN114567679A (en) * 2022-03-25 2022-05-31 阿里巴巴(中国)有限公司 Data transmission method and device
CN116032859A (en) * 2023-02-16 2023-04-28 之江实验室 Fusion type rapid data exchange device and method
CN116800684A (en) * 2023-06-27 2023-09-22 中科驭数(北京)科技有限公司 Performance isolation method of RDMA network card transmission queue and RDMA network card
CN116800684B (en) * 2023-06-27 2024-06-07 中科驭数(北京)科技有限公司 Performance isolation method of RDMA network card transmission queue and RDMA network card

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6785235B1 (en) * 1997-08-18 2004-08-31 Telefonaktiebolaget Lm Ericsson (Publ) Priority control of queued data frames in frame delay multiplexing
CN1658611A (en) * 2005-03-22 2005-08-24 中国科学院计算技术研究所 Method for guarantee service quality of radio local network
CN1972239A (en) * 2005-11-24 2007-05-30 武汉烽火网络有限责任公司 Ethernet cache exchanging and scheduling method and apparatus
CN101114955A (en) * 2007-09-14 2008-01-30 武汉烽火网络有限责任公司 Jitter detection based congestion control method in city domain Ethernet

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6785235B1 (en) * 1997-08-18 2004-08-31 Telefonaktiebolaget Lm Ericsson (Publ) Priority control of queued data frames in frame delay multiplexing
CN1658611A (en) * 2005-03-22 2005-08-24 中国科学院计算技术研究所 Method for guarantee service quality of radio local network
CN1972239A (en) * 2005-11-24 2007-05-30 武汉烽火网络有限责任公司 Ethernet cache exchanging and scheduling method and apparatus
CN101114955A (en) * 2007-09-14 2008-01-30 武汉烽火网络有限责任公司 Jitter detection based congestion control method in city domain Ethernet

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Duan Wei: "Ethernet Switch Technology", China Cable Television *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102148815A (en) * 2010-10-26 2011-08-10 华为技术有限公司 Video stream dispatching method and network node
CN103457881A (en) * 2012-06-01 2013-12-18 美国博通公司 System for performing Data Cut-Through
CN103457881B (en) * 2012-06-01 2017-03-01 美国博通公司 Execution data leads directly to the system of forwarding
CN103067474A (en) * 2012-12-21 2013-04-24 曙光信息产业(北京)有限公司 Processing method and device for communication in distributed file system
CN103326962A (en) * 2013-06-19 2013-09-25 中国人民解放军信息工程大学 Diversified service exchange method
CN107181698A (en) * 2016-03-10 2017-09-19 谷歌公司 The system and method for single queue multi-stream service shaping
CN108632162A (en) * 2017-03-22 2018-10-09 华为技术有限公司 A kind of array dispatching method and forwarding unit
CN107172107B (en) * 2017-07-24 2019-08-13 中国人民解放军信息工程大学 A kind of transparent management-control method and equipment of the passback of differentiated service stream early stage
CN107172107A (en) * 2017-07-24 2017-09-15 中国人民解放军信息工程大学 The transparent management-control method and equipment of a kind of differentiated service stream early stage passback
CN112399470A (en) * 2019-08-16 2021-02-23 深圳长城开发科技股份有限公司 LoRa communication method, LoRa gateway, LoRa system and computer readable storage medium
CN112399470B (en) * 2019-08-16 2022-11-08 深圳长城开发科技股份有限公司 LoRa communication method, loRa gateway, loRa system and computer readable storage medium
CN112929217A (en) * 2021-02-05 2021-06-08 吉林化工学院 Halter strap theory-based differentiated network traffic bandwidth demand estimation method
CN112929217B (en) * 2021-02-05 2022-07-01 吉林化工学院 Halter strap theory-based differentiated network traffic bandwidth demand estimation method
CN114567679A (en) * 2022-03-25 2022-05-31 阿里巴巴(中国)有限公司 Data transmission method and device
CN114567679B (en) * 2022-03-25 2024-04-02 阿里巴巴(中国)有限公司 Data transmission method and device
CN116032859A (en) * 2023-02-16 2023-04-28 之江实验室 Fusion type rapid data exchange device and method
CN116800684A (en) * 2023-06-27 2023-09-22 中科驭数(北京)科技有限公司 Performance isolation method of RDMA network card transmission queue and RDMA network card
CN116800684B (en) * 2023-06-27 2024-06-07 中科驭数(北京)科技有限公司 Performance isolation method of RDMA network card transmission queue and RDMA network card

Similar Documents

Publication Publication Date Title
CN101834787A (en) Method and system for dispatching data
CN101834790B (en) Multicore processor based flow control method and multicore processor
KR100823785B1 (en) Method and system for open-loop congestion control in a system fabric
WO2017157274A1 (en) Network-traffic control method and network device thereof
CN100463451C (en) Multidimensional queue dispatching and managing system for network data stream
US8259738B2 (en) Channel service manager with priority queuing
CN100550852C (en) A kind of method and device thereof of realizing mass port backpressure
CN101453400B (en) Method and forwarding device for ensuring quality of service of virtual private network service
JP3306705B2 (en) Packet transfer control device and scheduling method thereof
CN101692648B (en) Method and system for queue scheduling
WO2012065477A1 (en) Method and system for avoiding message congestion
CN107431667A (en) Packet is dispatched in the network device
CN102082765A (en) User and service based QoS (quality of service) system in Linux environment
CN105915468B (en) A kind of dispatching method and device of business
CN101924781A (en) Terminal equipment and QoS implementation method and flow classifier
CN100466593C (en) Method of implementing integrated queue scheduling for supporting multi service
Szilágyi et al. A review of congestion management algorithms on cisco routers
Yaghmaee et al. A model for differentiated service support in wireless multimedia sensor networks
WO2001039467A1 (en) Method and system for controlling transmission of packets in computer networks
Cisco Policing and Shaping Overview
Cisco QC: Quality of Service Overview
Matthew et al. Modeling and simulation of queuing scheduling disciplines on packet delivery for next generation internet streaming applications
Musa et al. A Comparative Study of Different Queuing Scheduling Disciplines
CN103107955A (en) Method and device of scheduling packet transport network queues
Dike et al. Design and Analysis of Voice and Critical Data Priority Queue (VCDPQ) Scheduler for Constrained-Bandwidth VoIP Networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20100915