CN108347389A - Method and device for realizing flow equalization in a data forwarding network - Google Patents
Method and device for realizing flow equalization in a data forwarding network
- Publication number
- CN108347389A (application CN201710047636.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- buffer queue
- queue
- caching
- loss
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a method and a device for realizing flow equalization in a data forwarding network, relating to the field of data forwarding networks. The method includes: while multiple data flows enter the same buffer queue, monitoring the cache occupancy of the buffer queue; determining, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and, if it is determined that the data flows waiting to enter the buffer queue are at risk of data loss, clearing the data of the buffer queue according to a preset proportion. By aging the data already in the queue when two or more data flows enter the same buffer queue, the embodiments of the present invention prevent part or all of a data flow from being lost when network data congestion occurs in a network forwarding device.
Description
Technical field
The present invention relates to the field of data forwarding networks, and in particular to a method and a device for realizing flow equalization in a data forwarding network.
Background art
In most network applications, data forwarding uses multiple queues to meet quality of service (QoS) requirements.
A queue is a store-and-forward buffer in which packets are handled first in, first out. Bandwidth-based scheduling strictly controls the volume of egress service traffic: when congestion occurs on a data interface, that is, when the egress service traffic exceeds the maximum bandwidth, the traffic of certain low-priority packets is limited.
In the upstream direction of a network data forwarding device, data services can be classified by class of service (CoS) and assigned to different queues. After the data flows have been classified into their respective queues, they are dequeued according to the queue scheduling mode. In the upstream direction, data is scheduled based on the allocated uplink bandwidth: once the transmitted data reaches the bandwidth limit, transmission stops and the corresponding transmit queue begins to buffer data; when the buffer threshold is reached, data can no longer be enqueued and starts to be lost.
Suppose the upstream bandwidth of a network device is 10 Mbps, and data flow A and data flow B each arrive at 10 Mbps with the same packet length and enter queue 0. Because a queue handles data flows serially, flow A and flow B enter the queue in arrival order. Fig. 1 is a schematic diagram of the ideal transmission order and enqueue situation of flow A and flow B: when flow A and flow B enter the queue at the same rate, each flow is enqueued in order, as shown in Fig. 1.
However, because the enqueue order is uncertain and the allocated bandwidth is limited, one data flow may be scheduled out of the queue while the other data flow is being dropped, so it is easy for flow A or flow B to be lost entirely or mostly. In other words, when the upstream bandwidth is smaller than the data traffic, data congestion occurs: the traffic scheduled out of the queue is less than the traffic being enqueued, and under the first-in-first-out mechanism of the queue, packets are dropped at the tail of the queue. Fig. 2 is a schematic diagram of such data loss: while flow B is being scheduled, flow A is dropped; only when flow B has been fully scheduled out does the queue release a buffer, and subsequent packets of flow B can then be enqueued normally. Fig. 3 is a schematic diagram of the queue after the data loss. Analyzing the dequeue, loss, and enqueue situation of Fig. 2, only flow B remains in the queue; flow A is dropped completely, so flow A is blocked in the network.
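To make the loss pattern concrete, the following is a minimal, self-contained C sketch (not taken from the patent; the queue depth of 8 packets, the per-tick model, and the fixed arrival order are illustrative assumptions) that reproduces the behaviour of Fig. 2 and Fig. 3: once the buffer is full, flow B keeps taking the single slot freed in each tick while every packet of flow A is tail-dropped.

```c
#include <stdio.h>

#define QUEUE_DEPTH 8   /* assumed buffer size, in packets */

int main(void)
{
    int queued = 0;                       /* packets currently buffered */
    int sent = 0, dropped_a = 0, dropped_b = 0;

    for (int tick = 0; tick < 100; tick++) {
        /* Both flows offer one packet per tick; B happens to arrive first. */
        if (queued < QUEUE_DEPTH) queued++; else dropped_b++;   /* flow B */
        if (queued < QUEUE_DEPTH) queued++; else dropped_a++;   /* flow A */

        /* The uplink bandwidth allows only one packet out per tick. */
        if (queued > 0) { queued--; sent++; }
    }

    printf("sent=%d, dropped A=%d, dropped B=%d\n", sent, dropped_a, dropped_b);
    return 0;
}
```

Running the sketch shows that every packet of flow A offered after the buffer fills is dropped while flow B loses nothing, which is exactly the blocked-flow situation described above.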
Summary of the invention
Embodiments of the present invention provide a method and a device for realizing flow equalization in a data forwarding network, which solve the problem that part or all of a data flow is lost when network data congestion occurs.
A method for realizing flow equalization in a data forwarding network provided according to an embodiment of the present invention includes:
while multiple data flows enter the same buffer queue, monitoring the cache occupancy of the buffer queue;
determining, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and
if it is determined that the data flows waiting to enter the buffer queue are at risk of data loss, clearing the data of the buffer queue according to a preset proportion.
Preferably, before the cache occupancy of the buffer queue is monitored, the method further includes:
determining, from the total cache capacity of the buffer queue and the network packet sending rate, the time required for the cache space of the buffer queue to become fully occupied, and starting the monitoring of the buffer queue according to that time.
Preferably, monitoring the cache occupancy of the buffer queue includes:
scanning the buffer queue to obtain the cache occupancy value of the buffer queue.
Preferably, determining, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss includes:
comparing the cache occupancy value of the buffer queue with a preset threshold, and if the cache occupancy value of the buffer queue exceeds the preset threshold, determining that the data flows waiting to enter the buffer queue are at risk of data loss.
Preferably, clearing the data of the buffer queue according to the preset proportion includes:
gradually removing, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion.
Preferably, the preset proportion is any proportion from 50% to 100%.
A storage medium provided according to an embodiment of the present invention stores a program for carrying out the above method for realizing flow equalization in a data forwarding network.
A device for realizing flow equalization in a data forwarding network provided according to an embodiment of the present invention includes:
a monitoring module, configured to monitor the cache occupancy of a buffer queue while multiple data flows enter the same buffer queue;
a judgment module, configured to determine, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and
an aging module, configured to clear the data of the buffer queue according to a preset proportion when it is determined that the data flows waiting to enter the buffer queue are at risk of data loss.
Preferably, the device further includes:
a starting module, configured to, before the cache occupancy of the buffer queue is monitored, determine from the total cache capacity of the buffer queue and the network packet sending rate the time required for the cache space of the buffer queue to become fully occupied, and start the monitoring of the buffer queue according to that time.
Preferably, the judgment module compares the monitored cache occupancy value of the buffer queue with a preset threshold, and if the cache occupancy value of the buffer queue exceeds the preset threshold, determines that the data flows waiting to enter the buffer queue are at risk of data loss.
Preferably, the aging module gradually removes, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion.
Preferably, the preset proportion is any proportion from 50% to 100%.
The technical solutions provided by the embodiments of the present invention have the following beneficial effect:
by aging the data already in the queue, the embodiments of the present invention prevent a data flow in a network data forwarding device from being lost entirely or mostly.
Description of the drawings
Fig. 1 is a schematic diagram of the ideal transmission order and enqueue situation of data flow A and data flow B;
Fig. 2 is a schematic diagram of data loss;
Fig. 3 is a schematic diagram of the queue situation after data loss;
Fig. 4 is a block diagram of the method for realizing flow equalization in a data forwarding network provided by an embodiment of the present invention;
Fig. 5 is a block diagram of the device for realizing flow equalization in a data forwarding network provided by an embodiment of the present invention;
Fig. 6 and Fig. 7 are schematic diagrams of cache occupancy after queue aging;
Fig. 8 is a working diagram of how software resolves data loss.
Detailed description of the embodiments
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are intended only to illustrate and explain the present invention, not to limit it.
Embodiment 1
Fig. 4 is a block diagram of the method for realizing flow equalization in a data forwarding network provided by an embodiment of the present invention. As shown in Fig. 4, the steps include:
Step S401: While multiple data flows enter the same buffer queue, monitor the cache occupancy of the buffer queue.
Before step S401, the method further includes a step of judging whether the buffer queue should be monitored. Specifically, the time required for the cache space of the buffer queue to become fully occupied is determined from the total cache capacity of the buffer queue and the network packet sending rate, and the monitoring of the buffer queue is started according to that time. For example, if the time required for the cache space of the buffer queue to become fully occupied is less than a preset time value (for example 1 s), the network packet sending rate is high and network data congestion is likely to block a data flow, so a queue cache scan thread is started to begin monitoring the buffer queue.
Step S401 further includes: using the started queue cache scan thread, scanning the buffer queue to obtain the cache occupancy value of the buffer queue.
Step S402: According to the monitoring result, determine whether the data flows waiting to enter the buffer queue are at risk of data loss.
Step S402 further includes: comparing the cache occupancy value of the buffer queue with a preset threshold, and if the cache occupancy value of the buffer queue exceeds the preset threshold, determining that the data flows waiting to enter the buffer queue are at risk of data loss.
Step S403: If it is determined that the data flows waiting to enter the buffer queue are at risk of data loss, clear the data of the buffer queue according to a preset proportion, for example empty the data of the buffer queue, so as to keep the traffic of the multiple data flows of the buffer queue balanced.
Step S403 further includes: gradually removing, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion. Through step S403, the data flows of the multiple services that subsequently enter the queue can be enqueued normally in order; furthermore, data retransmitted after the aging can enter the queue and be scheduled, which avoids the situation of Fig. 2 and Fig. 3 in which a data flow is blocked because part or all of its data is discarded.
The preset proportion of this embodiment is any proportion from 50% to 100%. Specifically, for a buffer queue at risk of data loss, 50% or more of the data already in the buffer queue can be removed. The preset proportion may be 50%, 60%, 70%, 80%, 90%, or 100%; 100% is preferred, i.e. all the data already in the buffer queue is emptied.
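A minimal sketch of this tail-to-head removal is given below, assuming a simple singly linked representation of the buffer queue; the struct layout and function name are illustrative, not the patent's data structures. With a proportion of 100% the whole queue is emptied.

```c
#include <stdlib.h>

struct pkt {
    struct pkt *next;          /* next packet towards the queue tail */
    /* ... packet descriptor ... */
};

struct buffer_queue {
    struct pkt *head;          /* oldest packet, scheduled first     */
    unsigned    depth;         /* number of packets currently queued */
};

/* Remove (depth * percent / 100) packets, starting from the tail. */
static void age_queue_tail_first(struct buffer_queue *q, unsigned percent)
{
    unsigned to_remove = q->depth * percent / 100;
    unsigned keep = q->depth - to_remove;

    struct pkt **link = &q->head;
    for (unsigned i = 0; i < keep; i++)   /* skip the packets to keep     */
        link = &(*link)->next;

    while (*link) {                       /* free everything behind them  */
        struct pkt *victim = *link;
        *link = victim->next;
        free(victim);
        q->depth--;
    }
}
```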
The embodiment of the present invention prevents a data flow from being blocked in the upstream direction of a network data forwarding device when network data congestion occurs.
Those skilled in the art will appreciate that the method of the above embodiment can be implemented by instructing the relevant hardware through a program. The program can be stored in a computer-readable storage medium and, when executed, performs steps S401 to S403. The storage medium can be a ROM/RAM, a magnetic disk, an optical disc, or the like.
Embodiment 2
Fig. 5 is a block diagram of the device for realizing flow equalization in a data forwarding network provided by an embodiment of the present invention. As shown in Fig. 5, the device includes:
a monitoring module 10, configured to monitor the cache occupancy of a buffer queue while multiple data flows enter the same buffer queue; specifically, the monitoring module 10 scans the buffer queue to obtain the cache occupancy value of the buffer queue;
a judgment module 20, configured to determine, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and
an aging module 30, configured to clear the data of the buffer queue according to a preset proportion, for example empty the data of the buffer queue, when it is determined that the data flows waiting to enter the buffer queue are at risk of data loss.
Further, the device includes:
a starting module 40, configured to, before the cache occupancy of the buffer queue is monitored, determine from the total cache capacity of the buffer queue and the network packet sending rate the time required for the cache space of the buffer queue to become fully occupied, and start the monitoring of the buffer queue according to that time.
The device works as follows. When the starting module 40 judges, from the total cache capacity of the buffer queue and the network packet sending rate, that the queue is about to become congested, it starts the monitoring of the buffer queue. After monitoring is started, the monitoring module 10 scans the buffer queue to obtain its cache occupancy value, and the judgment module 20 compares the cache occupancy value of the buffer queue with a preset threshold; if the value exceeds the preset threshold, it determines that the data flows waiting to enter the buffer queue are at risk of data loss. The aging module 30 then gradually removes, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion, releasing the cache space of the queue so that subsequent data (including retransmitted data) is enqueued in normal order, thereby avoiding the partial or total loss of a data flow. The preset proportion is any proportion from 50% to 100%; for example, 55%, 65%, 75%, 85%, 95%, or 100% of the data already in the buffer queue may be removed.
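The workflow above can be summarised by the following sketch, in which the four modules of Fig. 5 are modelled as callbacks; every name below is an illustrative assumption rather than the patent's actual interface.

```c
struct buffer_queue;                       /* owned by the forwarding device */

struct flow_balancer {
    int      (*should_monitor)(const struct buffer_queue *q); /* starting module 40   */
    unsigned (*scan_occupancy)(const struct buffer_queue *q); /* monitoring module 10 */
    unsigned threshold;                                       /* preset threshold     */
    unsigned percent;                                         /* preset proportion    */
    void     (*age)(struct buffer_queue *q, unsigned percent);/* aging module 30      */
};

/* Judgment module 20 together with the overall per-queue control flow. */
static void balance_queue(struct flow_balancer *fb, struct buffer_queue *q)
{
    if (!fb->should_monitor(q))            /* congestion not imminent      */
        return;
    unsigned occupancy = fb->scan_occupancy(q);
    if (occupancy > fb->threshold)         /* data-loss risk detected      */
        fb->age(q, fb->percent);           /* free cache from tail to head */
}
```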
Embodiment 3
In the upstream direction of a network data forwarding device, because the bandwidth is limited, when two or more data flows enter the same queue, one flow may be lost entirely or mostly. In the extreme case, flow A is dropped exactly while flow B is scheduled out of the queue, and afterwards only flow B keeps entering the queue, so flow A is blocked. To prevent flow A from being blocked, the device of the embodiment of the present invention comprises the following modules: a queue-based scheduling module, a queue-based cache module, a queue-cache-based control module, and a queue-cache-based monitoring module.
The queue scheduling module (the queue-based scheduling module) is responsible for scheduling queued data; its scheduling algorithms include strict priority (SP) scheduling and weighted round robin (WRR). The upstream direction of the network device is scheduled based on bandwidth, and the scheduling rate is determined by the bandwidth.
The queue cache module (the queue-based cache module) buffers the packets in the network device: when a packet arrives at a forwarding port, this module caches the packet so that it can be scheduled in the next bandwidth cycle.
The queue control module (the queue-cache-based control module) is responsible for controlling the aging of the queue cache and the enqueuing and dequeuing of packets.
The queue cache monitoring module (the queue-cache-based monitoring module, which realizes the functions of the device of Fig. 5) is responsible for monitoring the occupancy of the queue cache and controlling the aging of the queue cache according to the monitored occupancy.
In a network data forwarding device, the queue scheduling, queue cache, and queue control modules can only ensure that packets are transmitted within the allocated bandwidth and perform tail drop on the data flows according to the queue cache size; they cannot solve the problem of the network data being blocked because some data flows are lost entirely or mostly. The queue cache monitoring module added by the device of the embodiment of the present invention, however, can effectively prevent a data flow from being blocked when part of the data is lost entirely or mostly due to network congestion.
The queue cache monitoring module is implemented in the following steps:
Step 1: Assume that the total cache size of the queue is X, i.e. the queue can cache at most X data packets, that the network packet sending rate is V Mbps, and that the packet length is L bytes. The time needed for the cache to become fully occupied is then T = (L × X × 8) / (V × 10⁶) seconds.
Step 2: When T is less than 1 s, open a queue cache scan thread in the system kernel state. Each scan period is (T × 1000 × 90/100) ms, i.e. after one pass over all queue caches the thread sleeps for (T × 1000 × 90/100) ms; when this period is less than 10 ms, the queue scan period is taken as 10 ms.
Step 3: Within a queue cache scan period, read the cache occupancy value Y of the queue. If the cache occupancy exceeds 80% of the total queue cache, i.e. Y > X × 80%, the queue is considered at risk of data loss, and the data packets already in the queue are aged.
Because the chip's queue aging process and the kernel-state thread run asynchronously, the microsecond delay function (udelay) is used here to wait 100 µs, i.e. udelay is called to delay for 100 µs.
It should be noted that the sleeping microsecond function (usleep) is not used here, so as to avoid the delay of waking the thread up again.
Step 4: Continue to read the cache situation of the next queue, until all queues at risk have been aged.
Step 5: After all queue caches have been checked, the thread goes to sleep and releases its thread resources; the sleep time is one scan period.
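The calculations of steps 1 to 3 can be written out as the following helpers, using the quantities defined above (X packets, L bytes, V Mbps); this is a sketch with assumed integer units, not the module's actual code.

```c
/* Step 1: time to fill the cache, in milliseconds:
 * T = (L * X * 8) / (V * 10^6) seconds = (L * X * 8) / (V * 1000) ms. */
static unsigned long long fill_time_ms(unsigned long long l_bytes,
                                       unsigned long long x_packets,
                                       unsigned long long v_mbps)
{
    return (l_bytes * x_packets * 8ULL) / (v_mbps * 1000ULL);
}

/* Step 2: the scan period is 90% of the fill time, floored at 10 ms. */
static unsigned long long scan_period_ms(unsigned long long t_ms)
{
    unsigned long long period = t_ms * 90ULL / 100ULL;
    return period < 10ULL ? 10ULL : period;
}

/* Step 3: a queue is at risk once its occupancy Y exceeds 80% of X. */
static int queue_at_risk(unsigned long long y_used, unsigned long long x_total)
{
    return y_used * 100ULL > x_total * 80ULL;
}
```

For example, with the assumed values X = 1024 packets, L = 1500 bytes and V = 100 Mbps, fill_time_ms gives 122 ms and scan_period_ms about 109 ms.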
Fig. 6 and Fig. 7 are schematic diagrams of cache occupancy after queue aging. As shown in Fig. 6 and Fig. 7, if the data stored in a queue exceeds 80% of the total queue cache, the software ages the queued data, i.e. removes data from the queue cache starting at the queue tail, so that the subsequent data in the network can be enqueued again in normal network data order, which solves the problem of the loss of data flow A.
Embodiment 4
Fig. 8 is a working diagram of how software resolves data loss; the data-loss problem of a data flow can be avoided by software processing. As shown in Fig. 8, the steps include:
Step S801: The software opens a thread in kernel state.
After the kernel module is started, a thread is created and started to monitor the queue cache situation and to age the data of the queues that are at risk of data loss.
Step S802: Once per period, scan the cache occupancy of each queue, for example read the cache occupancy of queue 0.
Step S803: Judge whether the cache occupancy of the scanned queue exceeds 80%; if so, the queue is considered at risk of data loss, and step S804 is executed.
Step S804: For a queue at risk of data loss, execute the cache aging mechanism and age the cached data starting from the queue tail.
The data already in the queue is aged from the queue tail towards the queue head until all the data in the queue has been aged; that is, for a queue at risk of data loss, all the cached data is removed, so as to keep balanced the traffic of the multiple data flows of the multiple services that subsequently enter the queue.
Step S805: After a queue has been aged, the software calls udelay for 100 µs.
Because the execution of the software and the hardware aging are not synchronous, the software should, as far as possible, ensure that the hardware has finished aging the cached data of the queue.
Step S806: After all queues have been aged, the thread sleeps for one period; after the thread is woken up again, the next monitoring period starts and step S802 is executed.
Step S807: After the data of one queue has been aged, the software continues to age the subsequent queues until all queues have been aged.
Through the above flow, the situation in which a data flow is lost entirely or mostly can be avoided.
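A kernel-state sketch of the flow of Fig. 8 (steps S801 to S807) is shown below, assuming a Linux-style kthread API; the per-queue accessors queue_count(), queue_cache_used(), queue_cache_total() and queue_age_from_tail() are hypothetical driver hooks, not a real interface.

```c
#include <linux/kthread.h>
#include <linux/delay.h>

/* Hypothetical driver hooks (assumptions, not a real API). */
extern int      queue_count(void);
extern unsigned queue_cache_used(int q);
extern unsigned queue_cache_total(int q);
extern void     queue_age_from_tail(int q);   /* ages the whole queue */

static unsigned int scan_period_ms = 100;     /* one scan period (example value) */

static int queue_scan_thread(void *unused)    /* S801: thread in kernel state */
{
    int q;

    while (!kthread_should_stop()) {
        for (q = 0; q < queue_count(); q++) {              /* S802, S807          */
            if (queue_cache_used(q) * 100 >
                queue_cache_total(q) * 80) {               /* S803: > 80%         */
                queue_age_from_tail(q);                    /* S804: age the queue */
                udelay(100);        /* S805: let the hardware aging finish        */
            }
        }
        msleep(scan_period_ms);                            /* S806: sleep one period */
    }
    return 0;
}

/* The thread would be started with something like:
 *     kthread_run(queue_scan_thread, NULL, "queue_scan");
 */
```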
In conclusion the embodiment of the present invention has the following technical effects:
The embodiment of the present invention is directed to data forwarding network, realizes that various businesses data flow traffic is balanced based on queue management and control,
It can effectively prevent the situation that data flow caused by subnetwork data all or most loss is obstructed in product.This hair
Bright embodiment is applicable to all network data forwarding units for meeting QoS standards and defining, including passive optical network
The equipment such as (Passive Optical Network, PON), router, interchanger.
Although the present invention has been described in detail above, the invention is not limited thereto, and those skilled in the art can make various modifications according to the principle of the present invention. Therefore, all modifications made according to the principle of the present invention should be understood as falling within the protection scope of the present invention.
Claims (11)
1. A method for realizing flow equalization in a data forwarding network, comprising:
while multiple data flows enter the same buffer queue, monitoring the cache occupancy of the buffer queue;
determining, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and
if it is determined that the data flows waiting to enter the buffer queue are at risk of data loss, clearing the data of the buffer queue according to a preset proportion.
2. The method according to claim 1, further comprising, before the cache occupancy of the buffer queue is monitored:
determining, from the total cache capacity of the buffer queue and the network packet sending rate, the time required for the cache space of the buffer queue to become fully occupied, and starting the monitoring of the buffer queue according to that time.
3. The method according to claim 1 or 2, wherein monitoring the cache occupancy of the buffer queue comprises:
scanning the buffer queue to obtain the cache occupancy value of the buffer queue.
4. The method according to claim 3, wherein determining, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss comprises:
comparing the cache occupancy value of the buffer queue with a preset threshold, and if the cache occupancy value of the buffer queue exceeds the preset threshold, determining that the data flows waiting to enter the buffer queue are at risk of data loss.
5. The method according to claim 1 or 2, wherein clearing the data of the buffer queue according to the preset proportion comprises:
gradually removing, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion.
6. The method according to claim 5, wherein the preset proportion is any proportion from 50% to 100%.
7. A device for realizing flow equalization in a data forwarding network, comprising:
a monitoring module, configured to monitor the cache occupancy of a buffer queue while multiple data flows enter the same buffer queue;
a judgment module, configured to determine, according to the monitoring result, whether the data flows waiting to enter the buffer queue are at risk of data loss; and
an aging module, configured to clear the data of the buffer queue according to a preset proportion when it is determined that the data flows waiting to enter the buffer queue are at risk of data loss.
8. The device according to claim 7, further comprising:
a starting module, configured to, before the cache occupancy of the buffer queue is monitored, determine from the total cache capacity of the buffer queue and the network packet sending rate the time required for the cache space of the buffer queue to become fully occupied, and start the monitoring of the buffer queue according to that time.
9. The device according to claim 7 or 8, wherein the judgment module compares the monitored cache occupancy value of the buffer queue with a preset threshold, and if the cache occupancy value of the buffer queue exceeds the preset threshold, determines that the data flows waiting to enter the buffer queue are at risk of data loss.
10. The device according to claim 7 or 8, wherein the aging module gradually removes, from the tail to the head of the buffer queue, the data already stored in the buffer queue that is at risk of data loss, according to the preset proportion.
11. The device according to claim 10, wherein the preset proportion is any proportion from 50% to 100%.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710047636.5A CN108347389A (en) | 2017-01-22 | 2017-01-22 | A kind of method and device for realizing flow equalization in data forwarding network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710047636.5A CN108347389A (en) | 2017-01-22 | 2017-01-22 | A kind of method and device for realizing flow equalization in data forwarding network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108347389A true CN108347389A (en) | 2018-07-31 |
Family
ID=62974735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710047636.5A Pending CN108347389A (en) | 2017-01-22 | 2017-01-22 | A kind of method and device for realizing flow equalization in data forwarding network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108347389A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101360058A (en) * | 2008-09-08 | 2009-02-04 | 华为技术有限公司 | Method and apparatus for cache overflow control |
CN101753440A (en) * | 2009-12-18 | 2010-06-23 | 华为技术有限公司 | Method, device and wireless network controller for active queue management |
CN102082735A (en) * | 2011-03-07 | 2011-06-01 | 江苏科技大学 | Method for managing passive queue by abandoning head for N times |
CN102088395A (en) * | 2009-12-02 | 2011-06-08 | 杭州华三通信技术有限公司 | Method and device for adjusting media data cache |
WO2015169048A1 (en) * | 2014-05-05 | 2015-11-12 | 中兴通讯股份有限公司 | Queue management method and device |
WO2017000657A1 (en) * | 2015-06-30 | 2017-01-05 | 深圳市中兴微电子技术有限公司 | Cache management method and device, and computer storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109274736A (en) * | 2018-09-12 | 2019-01-25 | 北京奇安信科技有限公司 | Data flow method for releasing and device |
CN109274736B (en) * | 2018-09-12 | 2021-08-03 | 奇安信科技集团股份有限公司 | Data stream release method and device |
CN111061545A (en) * | 2018-10-17 | 2020-04-24 | 财团法人工业技术研究院 | Server and resource regulation and control method thereof |
CN110519176A (en) * | 2019-08-20 | 2019-11-29 | 华能四川水电有限公司 | Dynamic equilibrium intelligent flow data forwarding device and method suitable for industry control environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017054566A1 (en) | Method of preventing cpu packet congestion and device utilizing same | |
JP4995101B2 (en) | Method and system for controlling access to shared resources | |
TWI477109B (en) | A traffic manager and a method for a traffic manager | |
JP3732989B2 (en) | Packet switch device and scheduling control method | |
US7535835B2 (en) | Prioritizing data with flow control | |
US7970888B2 (en) | Allocating priority levels in a data flow | |
US8151067B2 (en) | Memory sharing mechanism based on priority elevation | |
CN100550852C (en) | A kind of method and device thereof of realizing mass port backpressure | |
CA2355473A1 (en) | Buffer management for support of quality-of-service guarantees and data flow control in data switching | |
JPH08274793A (en) | Delay minimization system provided with guaranteed bandwidthdelivery for real time traffic | |
Keslassy et al. | Providing performance guarantees in multipass network processors | |
CN105873233B (en) | IEEE802.11ax based on layering scheduling accesses Enhancement Method | |
US8174985B2 (en) | Data flow control | |
CN108347389A (en) | A kind of method and device for realizing flow equalization in data forwarding network | |
US20130343398A1 (en) | Packet-based communication system with traffic prioritization | |
CN113132265B (en) | Multi-stage scheduling method and device for multi-path Ethernet | |
AU2002339349B2 (en) | Distributed transmission of traffic flows in communication networks | |
US20030225739A1 (en) | Flexible scheduling architecture | |
CN102594669A (en) | Data message processing method, device and equipment | |
US20040022246A1 (en) | Packet sequence control | |
CN114500336A (en) | Time-sensitive network gating scheduling and per-flow filtering management test method | |
US7933283B1 (en) | Shared memory management | |
EP3340548B1 (en) | Scheduling method and customer premises equipment | |
US7499400B2 (en) | Information flow control in a packet network based on variable conceptual packet lengths | |
CN111638986A (en) | QoS queue scheduling method, device, system and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180731 |