CN101419562B - Hardware PRI queue implementing method for balancing load and performance - Google Patents

Hardware PRI queue implementing method for balancing load and performance

Info

Publication number
CN101419562B
CN101419562B (application CN2008101629034A)
Authority
CN
China
Prior art keywords
priority
entity
directory entry
processing unit
queue
Prior art date
Legal status
Expired - Fee Related
Application number
CN2008101629034A
Other languages
Chinese (zh)
Other versions
CN101419562A (en)
Inventor
陈天洲
王罡
胡威
陈度
冯德贵
吴斌斌
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2008101629034A priority Critical patent/CN101419562B/en
Publication of CN101419562A publication Critical patent/CN101419562A/en
Application granted granted Critical
Publication of CN101419562B publication Critical patent/CN101419562B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for implementing a hardware priority queue that balances load and performance. The method combines the two currently prevalent hardware priority-queue designs, the parallel shift (transpose) queue and the item-by-item transfer queue, resolving the conflict between load and performance: it exploits the performance advantage of hardware parallelism while limiting the load growth that parallelism brings. The modular design gives the priority queue good extensibility: whether priorities or directory entries need to be added, the modules remain relatively independent and the structure of the whole queue is unchanged. The parallel shift queue inside each lower-layer processing unit improves the parallelism and response speed of the priority queue, and the item-by-item transfer organization of the upper-layer queues means an inserted entity need only be compared with the directory entries of the first upper-layer queue, reducing the load.

Description

Hardware priority queue implementation method for balancing load and performance
Technical field
The present invention relates to the field of hardware priority-queue design, and in particular to a method for implementing a hardware priority queue that balances load and performance.
Background technology
In today's world, computer technology has pushed human civilization to a new height: beyond science and engineering, ever more computing devices have entered daily life. Priority queues are used very widely in embedded devices and network equipment. Because of the speed advantage of hardware, implementing the priority queue in hardware has become the choice of many systems that require fast queue operations. Many embedded real-time systems, such as speech players, need fast entity-queue response; many low-latency, high-quality network systems, such as real-time voice exchanges, need fast queueing of transmitted and received network packets. For such systems, which must schedule an entity or service quickly and simply, priority scheduling is one of the best options, and a directly hardware-implemented priority queue is undoubtedly one way to raise system performance. How to design a reasonable and effective hardware structure that implements a priority queue has therefore become a research topic in the engineering field.
Many embedded devices and network devices on the market already contain hardware priority queues. The designs fall roughly into three kinds: the FIFO queue, the parallel shift (transpose) queue, and the item-by-item transfer queue.
The FIFO queue is the simplest to realize. Directory entries are allocated per priority level; entities of equal priority are placed in the same directory entry, and a directory entry closer to the front represents a higher priority. On dequeue, entities leave the frontmost non-empty directory entry in first-in-first-out order. The advantage of this design is simplicity, but because the number of directory entries and the number of entities each entry can store are fixed, extensibility is poor, which hinders later functional expansion.
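The bucket-per-priority behaviour can be sketched in software (a minimal Python model for illustration only; the class and method names are ours, not the patent's):

```python
from collections import deque

class FifoPriorityQueue:
    """Sketch of the FIFO-style hardware queue: one directory entry
    (bucket) per priority level, FIFO order inside a bucket."""
    def __init__(self, num_priorities):
        # bucket 0 is the frontmost position, i.e. the highest priority
        self.buckets = [deque() for _ in range(num_priorities)]

    def insert(self, priority, entity):
        self.buckets[priority].append(entity)

    def dequeue(self):
        # entities leave the frontmost non-empty bucket, first-in-first-out
        for bucket in self.buckets:
            if bucket:
                return bucket.popleft()
        return None
```

The fixed number of buckets and their fixed capacity in hardware are exactly the extensibility limit the paragraph above describes.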
The parallel shift (transpose) queue makes full use of the strong parallel capability of hardware: independent hardware cells simultaneously compare their stored priority with that of the new entity and then shift according to a fixed rule, realizing the priority queue. This design is fast and has good hardware parallelism and extensibility, but its drawback is equally clear: because the new entity's information must be delivered to all hardware modules at once, VLSI analysis shows a large hardware load, which in turn limits the queue's expansion and operating speed.
The item-by-item transfer priority queue uses item-by-item comparison: a new entity's priority is compared only with the head of the queue and is then passed down one entry at a time until it reaches its correct position. This queue removes the excessive load of the parallel shift queue, but the item-by-item comparison degrades performance, and each hardware cell needs a temporary holding module for the comparison, which increases hardware cost.
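The hand-down behaviour of the transfer queue can be modeled as follows (a sketch under our assumptions: slot 0 is the head, a smaller integer means higher priority, and each displacement in the loop stands for one hardware cycle):

```python
class TransferQueue:
    """Sketch of the item-by-item transfer queue: an arriving item is
    compared with only one slot at a time as it is handed down the chain."""
    def __init__(self, size):
        self.slots = [None] * size  # index 0 = head (highest priority)

    def insert(self, priority, entity):
        item = (priority, entity)
        for i in range(len(self.slots)):       # one hand-down per cycle
            if self.slots[i] is None:
                self.slots[i] = item           # empty slot: item settles here
                return True
            if item[0] < self.slots[i][0]:     # item outranks the occupant:
                self.slots[i], item = item, self.slots[i]  # displace, pass on
        return False                           # fell off the tail: queue full
```

The loop's length-proportional cost is precisely the performance penalty, and the `item` variable is the per-cell temporary holding module, that the paragraph above attributes to this design.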
Each of these three implementations has its merits and drawbacks. Designing a hardware queue that both exploits the parallelism and speed of hardware to improve performance and reduces the hardware load to save hardware resources has become a focus of hardware priority-queue design.
Summary of the invention
The object of the present invention is to provide a method for implementing a hardware priority queue that balances load and performance.
The technical scheme adopted by the present invention to solve the technical problem is:
1. A method for implementing a hardware priority queue that balances load and performance, characterized in that:
1) A lower-layer processing unit is composed of several directory entries:
Each lower-layer processing unit is composed of several identical directory entries. Each directory entry is a hardware cell with access ports, a control signal, and entity-information input/output ports. Adjacent directory entries are connected by a data path of the same width as the entity data and by a one-bit-wide compare-result signal. Each directory entry records entity information, including the entity's priority and other information;
2) Insertion of a new entity into a lower-layer processing unit follows the parallel shift (transpose) queueing scheme and is realized by parallel comparison:
When a new entity arrives, its entity information is delivered simultaneously to the entity input port of every directory entry, and the control signal informs every directory entry that a new entity has arrived. On detecting the arrival, each directory entry immediately compares the priority of its currently recorded entity with that of the new entity. If the new entity's priority is higher than the recorded entity's, and the compare result output by the preceding directory entry says the new entity's priority is also higher than that entry's record, then this directory entry reads in the preceding entry's recorded entity information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result on the compare-result signal as the input of the following directory entry. If the new entity's priority is higher than the recorded entity's, but the preceding entry's compare result says the new entity's priority is lower than that entry's record, then this directory entry reads in the new entity's information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result as the input of the following directory entry. If the new entity's priority is lower than the recorded entity's, the directory entry keeps its original entity information unchanged and outputs its compare result as the input of the following directory entry. One insertion operation takes 3 clock beats in total: the first beat inputs the new entity, the second produces the compare results, and on the third the whole priority queue shifts according to the rules above, maintaining the serial ordering by priority;
3) The upper-layer queues are realized by the transfer-of-priority model; each upper-layer queue contains a lower-layer processing unit, and the upper-layer queues are arranged in series by priority:
The upper-layer queues follow the transfer-of-priority model: successive levels are connected by data paths and control signals and arranged in series by priority; inside each is a lower-layer processing unit;
4) Entity transfer between upper-layer queues occurs when the directory entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted:
When a new entity must be inserted and the lower-layer processing unit at the insertion position still has free directory entries, the insertion follows the lower-layer insertion method above, and no movement between upper-layer queues is needed. If the directory entries at the insertion position are full, entity information must be transferred between upper-layer queues: the new entity is inserted where it belongs, and the lowest-priority directory entry, which overflows the queue, moves down into the next upper-layer queue. The entity information and control signal of the transferred entity are output by the current upper-layer queue; the next upper-layer queue receives them, and its lower-layer processing unit begins inserting the transferred entity. If that processing unit has a free directory entry, the insertion succeeds and the operation stops; if its directory entries are also full, a further entity transfer to the next upper-layer queue is needed, and so on until the operation stops;
5) The whole priority queue is composed of one or more upper-layer queues:
The whole priority queue is formed by connecting one or more upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains several directory entries;
6) Constant-time dequeue of the highest-priority entity:
The highest-priority entity's information is always stored in the highest directory entry of the lower-layer processing unit in the highest upper-layer queue. When this entity dequeues, all subsequent directory entries shift one position toward the head, and the highest-priority directory entry of the next upper-layer queue moves up into the preceding upper-layer queue; the time needed to complete this process is constant;
7) Priority-queue insertion that balances load and performance:
When a new entity is inserted, its information need only be delivered to the directory entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue at once is reduced. At the same time, the insertion combines local parallel comparison with the transfer-of-priority scheme, saving both the time and the temporary-storage hardware that a purely item-by-item compare-and-transfer queue requires, so performance is improved.
Compared with the background art, the present invention has the following beneficial effects:
This design combines the hardware-parallelism advantage of the parallel shift (transpose) queue with the low-load advantage of the item-by-item transfer queue; by joining parallel shifting with item-by-item comparison, it achieves a hardware priority queue that balances load and performance.
(1) The modular design gives the priority queue good extensibility: whether priorities or directory entries must be added, the modules remain relatively independent, and the structure of the whole queue need not change.
(2) The parallel shift queue inside each lower-layer processing unit improves the parallelism and response speed of the priority queue.
(3) The item-by-item transfer organization of the upper-layer queues means a new entity need only be compared with the directory entries of the first upper-layer queue, reducing the load.
Description of drawings
Fig. 1 is a schematic diagram of the upper-layer queues.
Fig. 2 is a diagram of the lower-layer processing unit.
Specific implementation method
Symbols used in the method:
Compare_Self: the result of comparing the entity at the input port with the current record. A value of 0 (Low) means the input entity's priority is higher than the currently recorded priority; a value of 1 (High) means the input entity's priority is less than or equal to the priority of the entity in the current directory entry.
Compare Left: the comparative result of directory entry upper level directory entry, its value is the priority that the current input entity priority of 1 (High) expression is higher than entity in the upper level directory entry, or the current input entity priority of 0 (Low) expression is less than or equal to the entity priority in the upper level directory entry.
A method for implementing a hardware priority queue that balances load and performance: the overall structure consists of several upper-layer queues connected in series in priority order. Each upper-layer queue contains a lower-layer processing unit, which is composed of several directory entries organized as a parallel shift (transpose) queue. The block diagram of the load-and-performance-balanced queue is shown in Fig. 1; Fig. 2 is the internal block diagram of the lower-layer processing unit. The numbers of directory entries and of lower-layer processing units are decided by the available hardware resources and the concrete usage requirements. The key operation steps are as follows:
1) A lower-layer processing unit is composed of several directory entries:
Each lower-layer processing unit is composed of several identical directory entries. Each directory entry is a hardware cell with access ports, a control signal, and entity-information input/output ports. Adjacent directory entries are connected by a data path of the same width as the entity data and by a one-bit-wide compare-result signal. Each directory entry records entity information, including the entity's priority and other information;
2) Insertion of a new entity into a lower-layer processing unit follows the parallel shift (transpose) queueing scheme and is realized by parallel comparison:
Initialization of the upper-layer queues and lower-layer processing units: the upper-layer queues are connected in series, with the highest upper-layer queue at the head. Every directory entry in every lower-layer processing unit is placed in the invalid state, marking it empty; an empty directory entry is special, and its priority comparison with an input entity always yields Compare_Self = 0. When a new entity arrives, its entity information is delivered simultaneously to the entity input port of every directory entry, and the control signal informs every directory entry of the arrival. Each directory entry immediately compares the priority of its currently recorded entity with that of the new entity. If the new entity's priority is higher than the recorded entity's, and the compare result output by the preceding directory entry says the new entity's priority is also higher than that entry's record, then this directory entry reads in the preceding entry's recorded entity information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result on the compare-result signal as the input of the following directory entry. If the new entity's priority is higher than the recorded entity's, but the preceding entry's compare result says the new entity's priority is lower than that entry's record, then this directory entry reads in the new entity's information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result as the input of the following directory entry. If the new entity's priority is lower than the recorded entity's, the directory entry keeps its original entity information unchanged and outputs its compare result as the input of the following directory entry. One insertion operation takes 3 clock beats in total: the first beat inputs the new entity, the second produces the compare results, and on the third the whole priority queue shifts according to the rules above, maintaining the serial ordering by priority.
Concretely, the entity information and priority are delivered to the highest upper-layer queue; its lower-layer processing unit feeds them to its internal directory entries, which compare in parallel. Each directory entry independently compares the priority of its recorded entity with that of the input entity to obtain Compare_Self, combines it with the Compare_Left value received from the preceding directory entry, and shifts by the following rules. When Compare_Self = 0 and Compare_Left = 0, the input entity inserts somewhere before the preceding directory entry, so the current entry reads in the preceding entry's recorded entity information as its own and outputs its original entity information on the data path to the following entry. When Compare_Self = 0 and Compare_Left = 1, the new entity belongs exactly at the current position, so the current entry records the new entity's information and outputs its original entity information on the data path to the following entry. When Compare_Self = 1, the new entity belongs after the current directory entry, so the current entry remains unchanged. All directory entries operate in parallel.
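The Compare_Self/Compare_Left shift rules can be modeled in software. A minimal Python sketch under our assumptions: priorities are integers with smaller meaning higher, an empty slot is `None` and always reports Compare_Self = 0, and the head slot is fed Compare_Left = 1 so insertion at the head is permitted (the patent does not spell out the head-slot convention):

```python
def parallel_insert(entries, new):
    """One 3-beat insertion into the parallel shift queue.
    entries: sorted list of (priority, entity) tuples or None, index 0 = head.
    Beat 1 broadcasts `new`; beat 2 computes every Compare_Self at once;
    beat 3 shifts every slot according to (Compare_Self, Compare_Left)."""
    def compare_self(slot):
        # 0 (Low): the new entity outranks this slot's record; empty slots give 0
        return 0 if slot is None or new[0] < slot[0] else 1

    self_bits = [compare_self(s) for s in entries]   # beat 2, all in parallel
    left_bits = [1] + self_bits[:-1]                 # each slot's Compare_Left
    out = list(entries)
    for i in range(len(entries)):                    # beat 3, all in parallel
        if self_bits[i] == 0 and left_bits[i] == 1:
            out[i] = new                             # exact insertion point
        elif self_bits[i] == 0 and left_bits[i] == 0:
            out[i] = entries[i - 1]                  # take the record from above
        # Compare_Self == 1: keep own record unchanged
    return out
```

Because `left_bits` is just the neighbour's `self_bits`, at most one slot sees the (0, 1) pattern, so exactly one slot absorbs the new entity while everything below it shifts in the same beat.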
3) The upper-layer queues are realized by the transfer-of-priority model; each upper-layer queue contains a lower-layer processing unit, and the upper-layer queues are arranged in series by priority:
The upper-layer queues follow the transfer-of-priority model: successive levels are connected by data paths and control signals and arranged in series by priority. Inside each is a lower-layer processing unit;
4) Entity transfer between upper-layer queues occurs when the directory entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted:
When a new entity must be inserted and the lower-layer processing unit at the insertion position still has free directory entries, the insertion follows the lower-layer insertion method above, and no movement between upper-layer queues is needed. If the directory entries at the insertion position are full, entity information must be transferred between upper-layer queues: the new entity is inserted where it belongs, and the lowest-priority directory entry, which overflows the queue, moves down into the next upper-layer queue. The entity information and control signal of the transferred entity are output by the current upper-layer queue; the next upper-layer queue receives them, and its lower-layer processing unit begins inserting the transferred entity. If that processing unit has a free directory entry, the insertion succeeds and the operation stops; if its directory entries are also full, a further entity transfer to the next upper-layer queue is needed, and so on until the operation stops;
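The overflow cascade of step 4 can be modeled with sorted lists standing in for upper-layer queues (a sketch under our assumptions: entities are reduced to integer priorities with smaller meaning higher, `cap` is the per-queue directory-entry count, and `bisect.insort` stands in for the lower-layer parallel insert):

```python
import bisect

def hierarchical_insert(upper_queues, priority, cap):
    """upper_queues: list of sorted priority lists, index 0 = the highest
    upper-layer queue, each holding at most `cap` entries.  A full queue
    overflows its lowest-priority entry into the next queue, cascading
    until some lower-layer processing unit has a free directory entry."""
    item = priority
    for q in upper_queues:
        if q and len(q) == cap and item >= q[-1]:
            continue                  # belongs further down; keep walking
        bisect.insort(q, item)        # models the lower-layer parallel insert
        if len(q) <= cap:
            return True               # free entry found; the cascade stops
        item = q.pop()                # overflow the lowest-priority entry
    return False                      # every upper-layer queue is full
```

Note that when the target queue has room, only that one queue is touched; the cascade runs only as far as consecutive full queues extend, which is the load saving the step describes.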
5) The whole priority queue is composed of several upper-layer queues:
The whole priority queue is formed by connecting several upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains several directory entries.
6) Constant-time dequeue of the highest-priority entity:
The highest-priority entity's information is always stored in the highest directory entry of the lower-layer processing unit in the highest upper-layer queue. When this entity dequeues, all subsequent directory entries shift one position toward the head, and the highest-priority directory entry of the next upper-layer queue moves up into the preceding upper-layer queue; the time needed to complete this process is constant. Concretely, on dequeue only the highest directory entry in the highest upper-layer queue outputs, yielding the highest-priority entity's information; at the same time a control signal is issued to all upper-layer queues, causing every lower-layer processing unit to tell all its directory entries to shift one entry toward the high-priority direction, while between upper-layer queues entities likewise move from the low-priority toward the high-priority direction.
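The constant-time dequeue can be illustrated on a flattened view of the structure (a sketch: `entries` is the concatenation of all directory entries in priority order, which hides the per-queue boundaries that the hardware shifts across in the same single beat):

```python
def dequeue_highest(entries):
    """Emit the head entry; every remaining slot copies from its
    lower-priority neighbour in one global shift beat, so the cost does
    not depend on the queue length."""
    head = entries[0]
    shifted = entries[1:] + [None]   # all slots shift toward the head at once
    return head, shifted
```

In software this slice is O(n), but the point of the model is the hardware behaviour: every slot performs its copy simultaneously on the broadcast control signal, so the wall-clock cost is one beat regardless of n.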
7) Priority-queue insertion that balances load and performance:
When a new entity is inserted, its information need only be delivered to the directory entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue at once is reduced. At the same time, the insertion combines local parallel comparison with the transfer-of-priority scheme, saving both the time and the temporary-storage hardware that a purely item-by-item compare-and-transfer queue requires, so performance is improved.

Claims (1)

1. A method for implementing a hardware priority queue that balances load and performance, characterized in that:
1) A lower-layer processing unit is composed of several directory entries:
Each lower-layer processing unit is composed of several identical directory entries. Each directory entry is a hardware cell with access ports, a control signal, and entity-information input/output ports. Adjacent directory entries are connected by a data path of the same width as the entity data and by a one-bit-wide compare-result signal. Each directory entry records entity information, including the entity's priority and other information;
2) Insertion of a new entity into a lower-layer processing unit follows the parallel shift (transpose) queueing scheme and is realized by parallel comparison:
When a new entity arrives, its entity information is delivered simultaneously to the entity input port of every directory entry, and the control signal informs every directory entry that a new entity has arrived. On detecting the arrival, each directory entry immediately compares the priority of its currently recorded entity with that of the new entity. If the new entity's priority is higher than the recorded entity's, and the compare result output by the preceding directory entry says the new entity's priority is also higher than that entry's record, then this directory entry reads in the preceding entry's recorded entity information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result on the compare-result signal as the input of the following directory entry. If the new entity's priority is higher than the recorded entity's, but the preceding entry's compare result says the new entity's priority is lower than that entry's record, then this directory entry reads in the new entity's information as its current record, outputs its original entity information on the data path to the following directory entry, and outputs its own compare result as the input of the following directory entry. If the new entity's priority is lower than the recorded entity's, the directory entry keeps its original entity information unchanged and outputs its compare result as the input of the following directory entry. One insertion operation takes 3 clock beats in total: the first beat inputs the new entity, the second produces the compare results, and on the third the whole priority queue shifts according to the rules above, maintaining the serial ordering by priority;
3) The upper-layer queues are realized by the transfer-of-priority model; each upper-layer queue contains a lower-layer processing unit, and the upper-layer queues are arranged in series by priority:
The upper-layer queues follow the transfer-of-priority model: successive levels are connected by data paths and control signals and arranged in series by priority; inside each is a lower-layer processing unit;
4) Entity transfer between upper-layer queues occurs when the directory entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted:
When a new entity must be inserted and the lower-layer processing unit at the insertion position still has free directory entries, the insertion follows the lower-layer insertion method above, and no movement between upper-layer queues is needed. If the directory entries at the insertion position are full, entity information must be transferred between upper-layer queues: the new entity is inserted where it belongs, and the lowest-priority directory entry, which overflows the queue, moves down into the next upper-layer queue. The entity information and control signal of the transferred entity are output by the current upper-layer queue; the next upper-layer queue receives them, and its lower-layer processing unit begins inserting the transferred entity. If that processing unit has a free directory entry, the insertion succeeds and the operation stops; if its directory entries are also full, a further entity transfer to the next upper-layer queue is needed, and so on until the operation stops;
5) The whole priority queue is composed of one or more upper-layer queues:
The whole priority queue is formed by connecting one or more upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains several directory entries;
6) Constant-time dequeue of the highest-priority entity:
The highest-priority entity's information is always stored in the highest directory entry of the lower-layer processing unit in the highest upper-layer queue. When this entity dequeues, all subsequent directory entries shift one position toward the head, and the highest-priority directory entry of the next upper-layer queue moves up into the preceding upper-layer queue; the time needed to complete this process is constant;
7) Priority-queue insertion that balances load and performance:
When a new entity is inserted, its information need only be delivered to the directory entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue at once is reduced. At the same time, the insertion combines local parallel comparison with the transfer-of-priority scheme, saving both the time and the temporary-storage hardware that a purely item-by-item compare-and-transfer queue requires, so performance is improved.
CN2008101629034A 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance Expired - Fee Related CN101419562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101629034A CN101419562B (en) 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance


Publications (2)

Publication Number Publication Date
CN101419562A CN101419562A (en) 2009-04-29
CN101419562B true CN101419562B (en) 2010-06-02


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100417114C (en) * 2005-03-01 2008-09-03 华为技术有限公司 Method for realizing load balance between access devices in WLAN




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20111204