CN104125168A - A scheduling method and system for shared resources - Google Patents


Info

Publication number
CN104125168A
CN104125168A
Authority
CN
China
Prior art keywords
queue
sequence number
priority
linked list
pointer field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310152617.0A
Other languages
Chinese (zh)
Inventor
高继伟 (Gao Jiwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201310152617.0A priority Critical patent/CN104125168A/en
Priority to PCT/CN2013/090940 priority patent/WO2014173166A1/en
Publication of CN104125168A publication Critical patent/CN104125168A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 — Traffic control in data switching networks
    • H04L47/50 — Queue scheduling

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a scheduling method and system for shared resources. The method comprises the following steps: placing queues to be scheduled into a shared buffer; recording the sequence number information of the queues placed into the shared buffer in the linked lists corresponding to the priorities to which the queues belong, the linked lists being pre-configured, with one linked list configured for each priority of each port; and scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority. The scheduling method and system improve the utilization of buffer storage space and save resources.

Description

Scheduling method and system for shared resources
Technical field
The present invention relates to data communication technology, and in particular to a scheduling method and system for shared resources.
Background technology
At present, a data communication chip scheduling system usually has multiple physical ports (Ports), as shown in Fig. 1. There is no priority distinction among the physical ports, so fair Round-Robin (RR) scheduling is generally applied across them. Each physical port has multiple priorities, and Strict Priority (SP) scheduling is generally applied among the priorities of a port. Each priority has its own independent buffer for storing queues.
When a packet arrives, it is first classified according to its attributes, the packets are divided into queues according to the classification result, and each queue is mapped, according to the user's configuration, to a certain priority of its corresponding port and stored in the independent buffer corresponding to that priority.
When a packet is output, a port is scheduled first, then a priority, and finally the queues in the independent buffer corresponding to that priority are scheduled, so that the packet is output.
In general, because the number of queues that may be attached to each priority of each port is not fixed, and in order to prevent the independent buffer of a priority from overflowing when it holds too many queues, the buffer of every priority is currently given a sufficiently large storage space, so the required buffer overhead is considerable.
However, since any queue can belong only to one specific port and priority, and cannot be on different ports or on different priorities of the same port at the same time, the total buffer space needed to store the queues is fixed as long as the total number of queues is fixed. As a result, the actual utilization of the storage space of each buffer is very low, which wastes resources. Moreover, when the scheduling system is upgraded, the numbers of ports and of priorities per port must be increased, and as they grow the buffer storage space must grow accordingly, so the utilization of each buffer's storage space becomes lower and lower.
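The resource argument above can be made concrete with a small back-of-the-envelope calculation; the port, priority and queue counts below are illustrative assumptions, not figures from the invention:

```python
# Hypothetical sizing example: dedicated per-priority buffers vs. one shared buffer.
# All numbers are illustrative assumptions, not values from the patent.
ports = 4            # physical ports
priorities = 8       # priorities per port
max_queues = 16      # worst-case queues any single priority may hold
total_queues = 64    # total queues that can exist system-wide

# Dedicated buffers: every (port, priority) pair is provisioned for the worst case.
dedicated = ports * priorities * max_queues   # 4 * 8 * 16 = 512 queue slots

# Shared buffer: only the system-wide total must be provisioned.
shared = total_queues                         # 64 queue slots

print(dedicated, shared)   # 512 64
```

Under these assumptions the dedicated scheme provisions eight times as many queue slots as can ever be occupied, which is the waste the shared buffer eliminates.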
Summary of the invention
In view of this, the main purpose of the present invention is to provide a scheduling method and system for shared resources that can improve the utilization of buffer storage space and save resources.
To achieve the above purpose, the technical solution of the present invention is implemented as follows:
The present invention provides a scheduling method for shared resources, the method comprising:
placing queues to be scheduled into a shared buffer;
recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
Preferably, the sequence number information comprises a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty.
Accordingly, recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs is:
when the status flag is empty, updating the head pointer field and the tail pointer field respectively with the sequence number of the queue, and changing the status flag to non-empty;
when the status flag is non-empty, updating the tail pointer field with the sequence number of the queue and appending the sequence number to the next-hop field.
Scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority is:
according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, scheduling the queue with that sequence number out of the shared buffer, extracting the sequence number at the head of the next-hop field, and updating the head pointer field with that sequence number.
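The field updates described above can be sketched in software as follows. The class and method names (`PriorityList`, `enqueue`, `dequeue`) are assumptions of this illustration; a real chip would hold these fields in registers or on-chip memory rather than Python objects:

```python
class PriorityList:
    """Per-(port, priority) record of queue sequence numbers held in the shared buffer."""

    def __init__(self):
        self.head = None     # head pointer field: sequence number of head-of-chain queue
        self.tail = None     # tail pointer field: sequence number of tail-of-chain queue
        self.next_hop = []   # next-hop field: sequence numbers after the head, in enqueue order
        self.empty = True    # status flag

    def enqueue(self, seq):
        if self.empty:
            # Empty list: the new queue is both head and tail of the chain.
            self.head = seq
            self.tail = seq
            self.empty = False
        else:
            # Non-empty list: update the tail and record the link in the next-hop field.
            self.tail = seq
            self.next_hop.append(seq)

    def dequeue(self):
        # Schedule the queue named by the head pointer out of the shared buffer.
        seq = self.head
        if self.head == self.tail:
            # Head equals tail: the list becomes empty after this dequeue.
            self.head = self.tail = None
            self.empty = True
        else:
            # Promote the first next-hop entry to be the new head.
            self.head = self.next_hop.pop(0)
        return seq
```

Enqueuing sequence numbers 0, 2 and 7, for example, leaves the head pointer at 0, the tail pointer at 7 and the next-hop field as [2, 7]; a subsequent dequeue returns 0 and promotes 2 to the head.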
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
judging whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, changing the status flag to empty after the queue in the shared buffer is scheduled.
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
selecting a dequeue port from among the ports whose linked lists' status flags are not all empty.
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
selecting, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
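Choosing the highest non-empty priority as the dequeue priority is a Strict Priority pick over the status flags of a port's linked lists. A minimal sketch, with the convention (assumed here) that a lower index means a higher priority:

```python
def select_dequeue_priority(status_flags):
    """Return the highest priority whose linked list is non-empty, or None.

    status_flags[p] is True when the linked list for priority p is non-empty;
    lower index = higher priority (an assumption of this sketch).
    """
    for priority, non_empty in enumerate(status_flags):
        if non_empty:
            return priority
    return None   # every linked list of this port is empty

print(select_dequeue_priority([False, True, False, True]))  # 1
```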
The present invention provides a scheduling system for shared resources, the system comprising:
a queue entry unit, configured to place queues to be scheduled into a shared buffer;
a sequence number information recording unit, configured to record the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
a queue scheduling unit, configured to schedule the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
Preferably, the sequence number information comprises a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty;
the sequence number information recording unit is configured to: when the status flag is empty, update the head pointer field and the tail pointer field respectively with the sequence number of the queue and change the status flag to non-empty; and when the status flag is non-empty, update the tail pointer field with the sequence number of the queue and append the sequence number to the next-hop field.
The queue scheduling unit is specifically configured to: according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, schedule the queue with that sequence number out of the shared buffer, extract the sequence number at the head of the next-hop field, and update the head pointer field with that sequence number.
Preferably, the queue scheduling unit is further configured to: before the queues in the shared buffer are scheduled, judge whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, change the status flag to empty after the queue in the shared buffer is scheduled.
Preferably, the queue scheduling unit is further configured to select a dequeue port from among the ports whose linked lists' status flags are not all empty.
Preferably, the queue scheduling unit is further configured to select, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
In summary, the technical solution of the present invention comprises: placing queues to be scheduled into a shared buffer; recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port; and scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority. Thus, the present invention only needs to provide a single shared buffer of sufficiently large storage space, instead of multiple independent buffers each of sufficiently large storage space, thereby improving the utilization of buffer storage space and saving resources.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of an existing data communication chip scheduling system;
Fig. 2 is a schematic flowchart of the implementation of the scheduling method for shared resources of the present invention;
Fig. 3 is a schematic structural diagram of the scheduling system for shared resources of the present invention;
Fig. 4 is a schematic diagram of the principle of the scheduling system for shared resources of the present invention;
Fig. 5 is a schematic flowchart of the implementation of the first embodiment of the scheduling method for shared resources of the present invention.
Detailed Description of the Embodiments
The present invention provides a scheduling method for shared resources. As shown in Fig. 2, the method comprises:
Step 201: placing queues to be scheduled into a shared buffer;
Step 202: recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
Step 203: scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
Preferably, the sequence number information may comprise a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty.
Accordingly, recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs is:
when the status flag is empty, updating the head pointer field and the tail pointer field respectively with the sequence number of the queue, and changing the status flag to non-empty;
when the status flag is non-empty, updating the tail pointer field with the sequence number of the queue and appending the sequence number to the next-hop field.
Scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority is:
according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, scheduling the queue with that sequence number out of the shared buffer, extracting the sequence number at the head of the next-hop field, and updating the head pointer field with that sequence number.
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
judging whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, changing the status flag to empty after the queue in the shared buffer is scheduled.
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
selecting a dequeue port from among the ports whose linked lists' status flags are not all empty.
Preferably, before the scheduling of the queues in the shared buffer, the method further comprises:
selecting, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
Corresponding to the method shown in Fig. 2, the present invention provides a scheduling system for shared resources. As shown in Fig. 3, the system comprises:
a queue entry unit, configured to place queues to be scheduled into a shared buffer;
a sequence number information recording unit, configured to record the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
a queue scheduling unit, configured to schedule the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
Preferably, the sequence number information comprises a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty.
Accordingly, the sequence number information recording unit is configured to: when the status flag is empty, update the head pointer field and the tail pointer field respectively with the sequence number of the queue and change the status flag to non-empty; and when the status flag is non-empty, update the tail pointer field with the sequence number of the queue and append the sequence number to the next-hop field.
The queue scheduling unit is specifically configured to: according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, schedule the queue with that sequence number out of the shared buffer, extract the sequence number at the head of the next-hop field, and update the head pointer field with that sequence number.
Preferably, the queue scheduling unit is further configured to: before the queues in the shared buffer are scheduled, judge whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, change the status flag to empty after the queue in the shared buffer is scheduled.
Preferably, the queue scheduling unit is further configured to select a dequeue port from among the ports whose linked lists' status flags are not all empty.
Preferably, the queue scheduling unit is further configured to select, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
Fig. 4 is a schematic diagram of the principle of the scheduling system for shared resources of the present invention. Comparing it with Fig. 1, it can be seen that the present invention only needs to provide a single shared buffer of sufficiently large storage space, instead of multiple independent buffers each of sufficiently large storage space, thereby improving the utilization of buffer storage space and saving resources.
The first embodiment of the scheduling method for shared resources of the present invention is described in detail below with reference to Fig. 5.
A corresponding linked list is configured in advance for each priority of each port. Queues to be scheduled are continuously placed into the shared buffer, the sequence number information of each queue placed into the shared buffer is recorded in the linked list corresponding to the priority to which the queue belongs, and the linked lists are continuously updated.
Specifically, the sequence number information may comprise a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty.
Here, recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs may be:
when the status flag is empty, updating the head pointer field and the tail pointer field respectively with the sequence number of the queue, and changing the status flag to non-empty;
when the status flag is non-empty, updating the tail pointer field with the sequence number of the queue and appending the sequence number to the next-hop field.
Step 501: selecting a dequeue port, by Deficit Weighted Round-Robin (DWRR) scheduling, from among the ports whose linked lists' status flags are not all empty.
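The DWRR port selection of step 501 can be illustrated with a short simulation. Unit dequeue cost and the specific weights below are assumptions of this sketch, not values from the embodiment:

```python
def dwrr_order(backlog, quantum, rounds):
    """Return the service order produced by Deficit Weighted Round-Robin.

    backlog : dict port -> number of pending dequeues, each of unit cost
              (unit cost is an assumption of this sketch; mutated in place)
    quantum : dict port -> quantum credited per round (the port's weight)
    rounds  : number of DWRR rounds to simulate
    """
    deficit = {p: 0 for p in backlog}
    order = []
    for _ in range(rounds):
        for p in backlog:                     # visit the ports round-robin
            if backlog[p] == 0:
                deficit[p] = 0                # an idle port forfeits its deficit
                continue
            deficit[p] += quantum[p]          # credit this round's quantum
            while backlog[p] and deficit[p] >= 1:
                deficit[p] -= 1               # pay one unit of cost per dequeue
                backlog[p] -= 1
                order.append(p)
    return order

print(dwrr_order({"A": 4, "B": 4}, {"A": 2, "B": 1}, 2))
# ['A', 'A', 'B', 'A', 'A', 'B']
```

With weights 2:1, port A is served twice for every dequeue granted to port B, which is the weighted fairness DWRR is meant to provide.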
Step 502: selecting, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
Step 503: scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
Specifically, according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, the queue with that sequence number is scheduled out of the shared buffer, the sequence number at the head of the next-hop field is extracted, the head pointer field is updated with that sequence number, and the next-hop field is updated. During scheduling, the linked lists are continuously updated.
In practical applications, before the queues in the shared buffer are scheduled, the method may further comprise:
judging whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, changing the status flag to empty after the queue in the shared buffer is scheduled.
To introduce the present invention more clearly, the generation of the linked list contents is described below taking as an example queues with sequence numbers 0 to 15 that map in turn to two ports, A and B. In this example each port has 7 priorities, the ports start in the initial state with the status flags of the linked lists of all priorities empty, and the port and priority to which each queue belongs can be looked up from the user's configuration (i.e. the mapping between queues and ports/priorities).
Queue 0 is found to belong to priority 0 of port A; the head and tail pointer fields of the linked list for priority 0 of port A are updated to 0 and 0 respectively, and the status flag is changed to non-empty.
Queue 1 is found to belong to priority 6 of port A; the head and tail pointer fields of the linked list for priority 6 of port A are updated to 1 and 1 respectively, and the status flag is changed to non-empty.
Queue 2 is found to belong to priority 0 of port A and forms a link after queue 0; the tail pointer field of the linked list for priority 0 of port A is updated to 2, and the next-hop field is updated to 2, indicating that the next hop after queue 0 is queue 2.
Queue 3 is found to belong to priority 1 of port B; the head and tail pointer fields of the linked list for priority 1 of port B are updated to 3 and 3 respectively, and the status flag is changed to non-empty.
Queue 4 is found to belong to priority 6 of port A and forms a link after queue 1; the tail pointer field of the linked list for priority 6 of port A is updated to 4, and the next-hop field is updated to 4.
Queue 5 is found to belong to priority 6 of port A and forms a link after queue 4; the tail pointer field of the linked list for priority 6 of port A is updated to 5, and the next-hop field is updated to 4, 5.
Queue 6 is found to belong to priority 1 of port B and forms a link after queue 3; the tail pointer field of the linked list for priority 1 of port B is updated to 6, and the next-hop field is updated to 6.
Queue 7 is found to belong to priority 0 of port A and forms a link after queue 2; the tail pointer field of the linked list for priority 0 of port A is updated to 7, and the next-hop field is updated to 2, 7.
Queue 8 is found to belong to priority 3 of port B; the head and tail pointer fields of the linked list for priority 3 of port B are updated to 8 and 8 respectively, and the status flag is changed to non-empty.
Queue 9 is found to belong to priority 1 of port B and forms a link after queue 6; the tail pointer field of the linked list for priority 1 of port B is updated to 9, and the next-hop field is updated to 6, 9.
Queue 10 is found to belong to priority 0 of port A and forms a link after queue 7; the tail pointer field of the linked list for priority 0 of port A is updated to 10, and the next-hop field is updated to 2, 7, 10.
Queue 11 is found to belong to priority 3 of port B and forms a link after queue 8; the tail pointer field of the linked list for priority 3 of port B is updated to 11, and the next-hop field is updated to 11.
Queue 12 is found to belong to priority 1 of port B and forms a link after queue 9; the tail pointer field of the linked list for priority 1 of port B is updated to 12, and the next-hop field is updated to 6, 9, 12.
Queue 13 is found to belong to priority 0 of port A and forms a link after queue 10; the tail pointer field of the linked list for priority 0 of port A is updated to 13, and the next-hop field is updated to 2, 7, 10, 13.
Queue 14 is found to belong to priority 3 of port B and forms a link after queue 11; the tail pointer field of the linked list for priority 3 of port B is updated to 14, and the next-hop field is updated to 11, 14.
Queue 15 is found to belong to priority 1 of port B and forms a link after queue 12; the tail pointer field of the linked list for priority 1 of port B is updated to 15, and the next-hop field is updated to 6, 9, 12, 15.
After the above chaining process, the linked list for priority 0 of port A records queue sequence numbers 0, 2, 7, 10, 13; the linked list for priority 6 of port A records 1, 4, 5; the linked list for priority 1 of port B records 3, 6, 9, 12, 15; and the linked list for priority 3 of port B records 8, 11, 14.
Subsequent queues entering these linked lists are handled in the same way, maintaining the head and tail pointer fields and the next-hop field of each linked list. As can be seen, in this mode of operation chaining in and chaining out can proceed simultaneously, and no matter how the user configures the mapping between queues and ports/priorities, the linked lists can be maintained through the linking relationships and the head and tail pointer fields, thereby achieving the sharing of storage space resources.
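The chaining walk-through above can be reproduced with a short simulation. The queue-to-(port, priority) mapping is copied from the example, and each record mirrors the head pointer, tail pointer, next-hop and status fields described earlier:

```python
# Reproduce the worked example: queues 0-15 enter the per-(port, priority) linked lists.
# The queue -> (port, priority) mapping is taken from the example text above.
mapping = {
    0: ("A", 0), 1: ("A", 6), 2: ("A", 0), 3: ("B", 1),
    4: ("A", 6), 5: ("A", 6), 6: ("B", 1), 7: ("A", 0),
    8: ("B", 3), 9: ("B", 1), 10: ("A", 0), 11: ("B", 3),
    12: ("B", 1), 13: ("A", 0), 14: ("B", 3), 15: ("B", 1),
}

lists = {}   # (port, priority) -> linked-list record
for seq in range(16):
    rec = lists.setdefault(mapping[seq],
                           {"head": None, "tail": None, "next_hop": [], "empty": True})
    if rec["empty"]:
        # Empty list: the new queue becomes both head and tail.
        rec["head"] = rec["tail"] = seq
        rec["empty"] = False
    else:
        # Non-empty list: update the tail and append to the next-hop field.
        rec["tail"] = seq
        rec["next_hop"].append(seq)

def chain(port, prio):
    """Full queue order of one linked list: the head followed by the next-hop field."""
    rec = lists[(port, prio)]
    return [rec["head"]] + rec["next_hop"]

print(chain("A", 0))   # [0, 2, 7, 10, 13]
print(chain("A", 6))   # [1, 4, 5]
print(chain("B", 1))   # [3, 6, 9, 12, 15]
print(chain("B", 3))   # [8, 11, 14]
```

The printed chains match the linked list contents stated at the end of the walk-through.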
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (10)

1. A scheduling method for shared resources, characterized in that the method comprises:
placing queues to be scheduled into a shared buffer;
recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
2. The method according to claim 1, characterized in that the sequence number information comprises a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the queue at the head of the chain, the tail pointer field stores the sequence number of the queue at the tail of the chain, the next-hop field stores, in enqueue order, the sequence numbers of all queues other than the head-of-chain queue, and the status flag takes one of two values, empty and non-empty, indicating whether the linked list is empty;
accordingly, recording the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs is:
when the status flag is empty, updating the head pointer field and the tail pointer field respectively with the sequence number of the queue, and changing the status flag to non-empty;
when the status flag is non-empty, updating the tail pointer field with the sequence number of the queue and appending the sequence number to the next-hop field;
scheduling the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority is:
according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, scheduling the queue with that sequence number out of the shared buffer, extracting the sequence number at the head of the next-hop field, and updating the head pointer field with that sequence number.
3. The method according to claim 2, characterized in that before the scheduling of the queues in the shared buffer, the method further comprises:
judging whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field, and when they are identical, changing the status flag to empty after the queue in the shared buffer is scheduled.
4. The method according to claim 2, characterized in that before the scheduling of the queues in the shared buffer, the method further comprises:
selecting a dequeue port from among the ports whose linked lists' status flags are not all empty.
5. The method according to claim 2, characterized in that before the scheduling of the queues in the shared buffer, the method further comprises:
selecting, as the dequeue priority, the highest priority whose corresponding linked list's status flag is non-empty.
6. A scheduling system for shared resources, characterized in that the system comprises:
a queue entry unit, configured to place queues to be scheduled into a shared buffer;
a sequence number information recording unit, configured to record the sequence number information of a queue placed into the shared buffer in the linked list corresponding to the priority to which the queue belongs, the linked lists being pre-configured, with one linked list configured for each priority of each port;
a queue scheduling unit, configured to schedule the queues in the shared buffer according to the sequence number information in the linked list corresponding to the selected priority.
7. The system according to claim 6, wherein the sequence number information comprises a head pointer field, a tail pointer field, a next-hop field and a status flag, wherein:
the head pointer field stores the sequence number of the chain-head queue; the tail pointer field stores the sequence number of the chain-tail queue; the next-hop field stores, in enqueue order, the sequence numbers of the queues other than the chain-head queue; and the status flag takes one of two values, empty or non-empty, indicating whether the linked list is empty;
the sequence number recording unit is configured to, when the status flag is empty, update the head pointer field and the tail pointer field with the sequence number of the queue and set the status flag to non-empty, and, when the status flag is non-empty, update the tail pointer field with the sequence number of the queue and store that sequence number in the next-hop field; and
the queue scheduling unit is specifically configured to schedule, according to the sequence number in the head pointer field of the linked list corresponding to the selected priority, the queue corresponding to that sequence number out of the shared buffer, extract the sequence number at the head of the next-hop field, and update the head pointer field with the extracted sequence number.
8. The system according to claim 7, wherein the queue scheduling unit is further configured to, before scheduling the queue in the shared buffer, judge whether the sequence number in the head pointer field is identical to the sequence number in the tail pointer field and, when they are identical, set the status flag to empty after the queue in the shared buffer has been scheduled.
9. The system according to claim 7, wherein the queue scheduling unit is further configured to select a dequeue port from among the ports whose associated linked lists do not all have an empty status flag.
10. The system according to claim 7, wherein the queue scheduling unit is further configured to select, as the dequeue priority, the highest priority whose corresponding linked list has a non-empty status flag.
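Claims 4-5 and 9-10 describe the port and priority selection that precedes each dequeue: first pick a port with at least one non-empty linked list, then pick that port's highest non-empty priority. A minimal sketch, assuming each port exposes one non-empty flag per priority ordered from highest to lowest; the claims do not fix how ties between eligible ports are broken, so first-match order is an assumption here:

```python
def select_dequeue_target(port_status):
    """port_status maps each port name to a list of per-priority flags,
    ordered from highest to lowest priority; True means that priority's
    linked list is non-empty. Returns (port, priority index) or None.
    Names and the first-match tie-break are illustrative assumptions."""
    for port, flags in port_status.items():
        if any(flags):                       # claims 4/9: port with some non-empty list
            return port, flags.index(True)   # claims 5/10: highest non-empty priority
    return None
```

For example, a port whose only backlog sits at its second-highest priority is selected with priority index 1, while a port with no backlog at all is skipped.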
CN201310152617.0A 2013-04-27 2013-04-27 A scheduling method and system for shared resources Pending CN104125168A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310152617.0A CN104125168A (en) 2013-04-27 2013-04-27 A scheduling method and system for shared resources
PCT/CN2013/090940 WO2014173166A1 (en) 2013-04-27 2013-12-30 Shared resource scheduling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310152617.0A CN104125168A (en) 2013-04-27 2013-04-27 A scheduling method and system for shared resources

Publications (1)

Publication Number Publication Date
CN104125168A true CN104125168A (en) 2014-10-29

Family

ID=51770436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310152617.0A Pending CN104125168A (en) 2013-04-27 2013-04-27 A scheduling method and system for shared resources

Country Status (2)

Country Link
CN (1) CN104125168A (en)
WO (1) WO2014173166A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105490963A (en) * 2015-12-11 2016-04-13 中国航空工业集团公司西安航空计算技术研究所 Sending scheduling method of network node multi-service data
CN105912273A (en) * 2016-04-15 2016-08-31 成都欧飞凌通讯技术有限公司 FPGA-based message shared storage management implementation method
WO2016197822A1 (en) * 2016-01-05 2016-12-15 中兴通讯股份有限公司 Packet sending method and device
CN107347039A (en) * 2016-05-05 2017-11-14 深圳市中兴微电子技术有限公司 A kind of management method and device in shared buffer memory space
CN109102691A (en) * 2018-07-24 2018-12-28 宁波三星医疗电气股份有限公司 A kind of electric energy meter active report of event processing method based on chained list
CN109344091A (en) * 2018-09-29 2019-02-15 武汉斗鱼网络科技有限公司 A kind of regular method, apparatus of buffering array, terminal and readable medium
CN111522643A (en) * 2020-04-22 2020-08-11 杭州迪普科技股份有限公司 Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025184B (en) * 2016-02-01 2021-03-16 深圳市中兴微电子技术有限公司 Data management method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1411211A (en) * 2002-04-17 2003-04-16 华为技术有限公司 Ethernet exchange chip output queue management and dispatching method and device
US20090100075A1 (en) * 2007-10-10 2009-04-16 Tobias Karlsson System and method of mirroring a database to a plurality of subscribers
CN102130833A (en) * 2011-03-11 2011-07-20 中兴通讯股份有限公司 Memory management method and system of traffic management chip chain tables of high-speed router
CN102447610A (en) * 2010-10-14 2012-05-09 中兴通讯股份有限公司 Method and device for realizing message buffer resource sharing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043426B (en) * 2006-03-25 2011-01-05 中兴通讯股份有限公司 Packet multiplexing method in wireless communication system
CN102298539A (en) * 2011-06-07 2011-12-28 华东师范大学 Method and system for scheduling shared resources subjected to distributed parallel treatment


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105490963A (en) * 2015-12-11 2016-04-13 中国航空工业集团公司西安航空计算技术研究所 Sending scheduling method of network node multi-service data
CN105490963B (en) * 2015-12-11 2018-12-28 中国航空工业集团公司西安航空计算技术研究所 A kind of the transmission dispatching method and system of network node multi-service data
WO2016197822A1 (en) * 2016-01-05 2016-12-15 中兴通讯股份有限公司 Packet sending method and device
CN105912273A (en) * 2016-04-15 2016-08-31 成都欧飞凌通讯技术有限公司 FPGA-based message shared storage management implementation method
CN105912273B (en) * 2016-04-15 2019-05-24 成都欧飞凌通讯技术有限公司 A kind of message shares the FPGA implementation method of storage management
CN107347039A (en) * 2016-05-05 2017-11-14 深圳市中兴微电子技术有限公司 A kind of management method and device in shared buffer memory space
CN107347039B (en) * 2016-05-05 2020-02-21 深圳市中兴微电子技术有限公司 Management method and device for shared cache space
CN109102691A (en) * 2018-07-24 2018-12-28 宁波三星医疗电气股份有限公司 A kind of electric energy meter active report of event processing method based on chained list
CN109102691B (en) * 2018-07-24 2020-10-27 宁波三星医疗电气股份有限公司 Active reporting processing method for electric energy meter event based on linked list
CN109344091A (en) * 2018-09-29 2019-02-15 武汉斗鱼网络科技有限公司 A kind of regular method, apparatus of buffering array, terminal and readable medium
CN109344091B (en) * 2018-09-29 2021-03-12 武汉斗鱼网络科技有限公司 Buffer array regulation method, device, terminal and readable medium
CN111522643A (en) * 2020-04-22 2020-08-11 杭州迪普科技股份有限公司 Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2014173166A1 (en) 2014-10-30

Similar Documents

Publication Publication Date Title
CN104125168A (en) A scheduling method and system for shared resources
US20230046107A1 (en) Configurable logic platform with reconfigurable processing circuitry
CN103348640B (en) Relay
CN101594299B (en) Method for queue buffer management in linked list-based switched network
CN102045258B (en) Data caching management method and device
CN101604264B (en) Task scheduling method and system for supercomputer
CN101834786B (en) Queue scheduling method and device
CN104572106A (en) Parallel program development method for processing large-scale data based on small memory
CN101741751B (en) Traffic shaping dispatching method, traffic shaping dispatcher and routing device
CN102665284B (en) Uplink service transmission scheduling method and terminal
CN102447610A (en) Method and device for realizing message buffer resource sharing
CN103077183A (en) Data importing method and system for distributed sequence list
CN102981973B (en) Perform the method for request within the storage system
CN103701934A (en) Resource optimal scheduling method and virtual machine host machine optimal selection method
CN105656807A (en) Network chip multi-channel data transmission method and transmission device
CN102025639A (en) Queue scheduling method and system
CN103353851A (en) Method and equipment for managing tasks
CN106095569A (en) A kind of cloud workflow engine scheduling of resource based on SLA and control method
CN102253898A (en) Memory management method and memory management device of image data
US9063841B1 (en) External memory management in a network device
CN104615684A (en) Mass data communication concurrent processing method and system
CN103366022A (en) Information processing system and processing method for use therewith
CN106686735A (en) Physical downlink shared channel resource mapping method
CN101594201B (en) Method for integrally filtering error data in linked queue management structure
CN104050193A (en) Message generating method and data processing system for realizing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141029