CN101419562A - Hardware PRI queue implementing method for balancing load and performance - Google Patents


Info

Publication number
CN101419562A
CN101419562A · CN200810162903A · CNA2008101629034A
Authority
CN
China
Prior art keywords
priority
entity
directory entry
processing unit
queue
Prior art date
Legal status
Granted
Application number
CNA2008101629034A
Other languages
Chinese (zh)
Other versions
CN101419562B (en)
Inventor
陈天洲 (Chen Tianzhou)
王罡 (Wang Gang)
胡威 (Hu Wei)
陈度 (Chen Du)
冯德贵 (Feng Degui)
吴斌斌 (Wu Binbin)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN2008101629034A
Publication of CN101419562A
Application granted
Publication of CN101419562B
Expired - Fee Related
Anticipated expiration

Classifications

  • Information Retrieval; Database Structures and File System Structures Therefor

Abstract

The invention discloses an implementation method for a hardware priority queue that balances load and performance. The method combines the two currently prevalent hardware priority queue designs, the parallel shift (transpose) hardware priority queue and the item-by-item forwarding hardware priority queue, resolving the conflict between load and performance: it exploits the performance advantage of hardware parallelism while limiting the load increase that parallelism causes. Through a modular design, the priority queue gains good extensibility; whether priorities or entries need to be added, the parts remain relatively independent and the structure of the whole queue need not change. The parallel shift queue inside each lower-layer processing unit improves the parallelism and response speed of the priority queue, while the item-by-item forwarding organization of the upper-layer queues means an inserted entity need only be compared with the entries of the first upper-layer queue, which reduces the load.

Description

Implementation method for a hardware priority queue balancing load and performance
Technical field
The present invention relates to the field of hardware priority queue design, and in particular to an implementation method for a hardware priority queue that balances load and performance.
Background art
Computer technology has pushed human civilization to a new height: not only in science and engineering but also in daily life, ever more devices contain computers. In many embedded devices and network devices, priority queues are used very widely. Because of the speed advantage of hardware, implementing the priority queue in hardware has become the choice of many systems that require fast queue operations. Many embedded real-time systems, such as speech players, need fast entity-queue response; many low-latency, high-quality network systems, such as real-time voice exchange systems, need fast queuing of transmitted and received network packets. For such demanding real-time requirements, where a single entity or service must be scheduled quickly and simply, priority scheduling is one of the best options, and a priority queue implemented directly in hardware is undoubtedly one way to raise system performance. How to design a reasonable and effective hardware structure that implements a priority queue has therefore become a research problem in the engineering field.
Many embedded devices and network devices on the market already contain hardware priority queues. The designs fall roughly into three kinds: FIFO queues, parallel shift (transpose) queues, and item-by-item forwarding queues.
The FIFO queue is fairly simple to implement: entries are allocated per priority level, entities of equal priority are placed in the same entry, and an entry closer to the front represents a higher priority. On dequeue, entities leave from the frontmost entries in first-in-first-out order. The advantage of this design is simplicity; the drawback is that, because the number of entries and the number of entities each entry can store are fixed, extensibility is poor, which hinders later functional expansion.
The parallel shift (transpose) queue makes full use of the strong parallelism of hardware: independent hardware cells all compare their stored priority with that of the new entity at the same time and then shift according to a fixed rule, realizing the priority queue. The advantages of this design are speed, hardware parallelism, and good extensibility; the drawback is also obvious: because the new entity's information must be delivered to all hardware cells simultaneously, the hardware load is large from a VLSI point of view, which in turn limits how far the queue can be extended and how fast it can operate.
The item-by-item forwarding priority queue compares entry by entry: the new entity's priority is compared only with the first cell of the queue and is then forwarded cell by cell until the correct position is found. This queue nicely solves the excessive load of the parallel shift queue, but because comparisons proceed item by item, queue performance drops; moreover, item-by-item comparison requires a staging module in every hardware cell, increasing hardware resource cost. A minimal sketch of this baseline appears below.
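As an illustration only, the following Python sketch models the item-by-item forwarding insert described above; it is not from the patent. It assumes entities are (priority, payload) tuples with larger numbers meaning higher priority, cells are ordered from highest to lowest priority, empty cells hold None, and an entity that falls off the end is simply returned as overflow.

```python
from typing import List, Optional, Tuple

Entity = Tuple[int, object]  # (priority, payload); larger priority = higher

def forwarding_insert(cells: List[Optional[Entity]],
                      new: Entity) -> Optional[Entity]:
    """Insert by item-by-item forwarding: the new entity is compared with
    only one cell at a time, so electrical fan-out stays small, but a full
    insert can take as many steps as there are cells."""
    carry: Optional[Entity] = new
    for i, cur in enumerate(cells):
        # Each cell either keeps its entity or latches the carry and
        # forwards its displaced entity to the next cell.
        if cur is None or carry[0] > cur[0]:
            cells[i], carry = carry, cur
        if carry is None:            # carry absorbed by an empty cell
            return None
    return carry                     # displaced off the end (queue was full)
```

Each loop iteration corresponds to one forwarding step in hardware, which is exactly the trade this design makes: low load per step at the cost of latency.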
Each of the three hardware priority queue implementations above has its merits and drawbacks. How to design a hardware queue that exploits the parallelism and speed of hardware to improve performance while also reducing hardware load and saving hardware resources has become a focus of hardware priority queue design.
Summary of the invention
The object of the present invention is to provide an implementation method for a hardware priority queue that balances load and performance.
The technical solution adopted by the present invention is as follows:
1. An implementation method for a hardware priority queue balancing load and performance, characterized in that:
1) The lower-layer processing unit is composed of several entries:
Each lower-layer processing unit is composed of several identical entries. Each entry is a hardware cell with control-signal ports and entity-information input/output ports. Adjacent entries are connected by a data path as wide as the entity data and by a one-bit comparison-result signal. Each entry records entity information, including the entity's priority and other data;
2) Insertion of a new entity into the lower-layer processing unit follows the parallel shift (transpose) queue scheme and is realized by parallel comparison:
When a new entity arrives, its information is delivered simultaneously to the entity input port of every entry, and at the same time a control signal notifies every entry of the arrival. On seeing the arrival, each entry immediately compares the priority of its currently recorded entity with that of the new entity. If the new entity's priority is higher than the recorded entity's, and the comparison result output by the upper-level entry indicates the new entity also outranks that entry's recorded entity, then this entry reads in the upper-level entry's recorded entity information as its current entity information, outputs its original entity information onto the data path connected to the next-level entry, and drives its own comparison result onto the comparison signal as input to the next-level entry. If the new entity's priority is higher than the recorded entity's, but the upper-level entry's comparison result indicates the new entity does not outrank that entry's recorded entity, then this entry reads in the new entity's information as its current entity information, outputs its original entity information onto the data path to the next-level entry, and drives its comparison result onto the comparison signal as input to the next-level entry. If the new entity's priority is not higher than the recorded entity's, the entry keeps its original entity information unchanged and drives its comparison result onto the comparison signal as input to the next-level entry. One insert operation takes 3 clock cycles in total: the first cycle inputs the new entity, the second produces the comparison results, and in the third the whole priority queue shifts according to the rules above, so that the entries remain arranged in priority order;
3) The upper-layer queues are realized on the forwarding-priority model; each upper-layer queue contains one lower-layer processing unit, and the upper-layer queues are arranged in series in priority order:
The upper-layer queues are realized according to the forwarding-priority model; the levels are connected by data paths and control signals and arranged in series by priority. Inside each is a lower-layer processing unit;
4) Entity transfer between upper-layer queues occurs when the entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted:
When a new entity must be inserted, if the lower-layer processing unit at the insertion position still has a free entry, the insertion proceeds by the lower-layer insertion method above and nothing moves between upper-layer queues. If the entries of the lower-layer processing unit at the insertion position are full, entity information must be forwarded between upper-layer queues: the new entity is inserted at the required position, and at the same time the lowest-priority entry, which overflows the queue, moves into the next upper-layer queue. The current upper-layer queue outputs the entity information and a control signal; the next upper-layer queue receives them, and its internal lower-layer processing unit begins inserting the entity. If that unit has a free entry, the insertion succeeds and the operation stops; if its entries are also full, the transfer continues to the following upper-layer queue, and so on until the operation stops;
5) The whole priority queue is composed of one or more upper-layer queues:
The whole priority queue is formed by connecting one or more upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains multiple entries;
6) Constant-time dequeue of the highest-priority entity:
The highest-priority entity's information always resides in the highest entry of the lower-layer processing unit in the highest upper-layer queue. When this entity is dequeued, all subsequent entries shift one position toward the head, and between upper-layer queues the highest-priority entry of each following upper-layer queue moves into the upper-layer queue above it; the time needed to complete this process is constant;
7) Entry insertion balancing load and performance:
When a new entity is inserted, it needs to be delivered only to the entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue simultaneously is reduced; at the same time, because insertion uses locally parallel comparison combined with forwarding by priority, the time and the per-cell staging hardware that a purely item-by-item comparison would require are saved, improving performance.
Compared with the background art, the present invention has the following beneficial effects:
This design combines the hardware-parallelism advantage of the parallel shift queue with the low-load advantage of the item-by-item forwarding queue; by joining parallel shifting with item-by-item comparison, it produces a hardware priority queue that balances load and performance.
(1) The modular design gives the priority queue good extensibility: whether priorities or entries need to be added, the parts are relatively independent and the structure of the whole queue need not change.
(2) The parallel shift queue inside the lower-layer processing unit improves the parallelism and response speed of the priority queue.
(3) The item-by-item forwarding organization of the upper-layer queues means a new entity need only be compared with the entries of the first upper-layer queue, which reduces load.
Description of drawings
Fig. 1 is a schematic diagram of the upper-layer queue.
Fig. 2 is a diagram of the lower-layer processing unit.
Specific embodiments
Symbols used in the method:
Compare_Self: the result of comparing the entity at the input port with the currently recorded entity. The value 0 (Low) indicates that the input entity's priority is higher than the currently recorded priority; 1 (High) indicates that the input entity's priority is less than or equal to the priority of the entity in the current entry.
Compare_Left: the comparison result received from the upper-level entry, i.e., that entry's Compare_Self. The value 0 (Low) indicates that the input entity's priority is higher than the priority of the entity recorded in the upper-level entry; 1 (High) indicates that it is less than or equal to it.
An implementation method for a hardware priority queue balancing load and performance: its overall structure consists of several upper-layer queues connected in series in priority order. Each upper-layer queue contains a lower-layer processing unit composed of several entries, and the entries are realized as a parallel shift queue. Fig. 1 shows the block diagram of the load- and performance-balanced queue; Fig. 2 shows the internal block diagram of the lower-layer processing unit. The numbers of entries and of lower-layer processing units are decided by the available hardware resources and the concrete usage requirements; a schematic sketch of the structure follows, and the key operational steps are then as follows:
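For orientation before the step-by-step walkthrough, here is a minimal Python sketch of this structure. It is an illustration under stated assumptions, not the patent's circuit: entities are modeled as (priority, payload) tuples, larger numbers are assumed to mean higher priority, None marks an empty (invalid) entry, and all class and function names are invented.

```python
from typing import List, Optional, Tuple

# An entity is sketched as a (priority, payload) tuple; larger numbers are
# assumed to mean higher priority (the patent only requires an ordering).
Entity = Tuple[int, object]

class LowerLayerUnit:
    """One lower-layer processing unit: a fixed number of entries kept in
    descending priority order; None marks an empty (invalid) entry."""
    def __init__(self, size: int):
        # Initialization per the description: every entry starts invalid.
        self.entries: List[Optional[Entity]] = [None] * size

class PriorityQueueChain:
    """The whole priority queue: one or more upper-layer queues connected
    in series in priority order, each wrapping one lower-layer unit."""
    def __init__(self, n_upper: int, unit_size: int):
        self.units = [LowerLayerUnit(unit_size) for _ in range(n_upper)]
```

A chain might be built as chain = PriorityQueueChain(n_upper=4, unit_size=8); the patent leaves both counts to the hardware budget.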
1) The lower-layer processing unit is composed of several entries:
Each lower-layer processing unit is composed of several identical entries. Each entry is a hardware cell with control-signal ports and entity-information input/output ports. Adjacent entries are connected by a data path as wide as the entity data and by a one-bit comparison-result signal. Each entry records entity information, including the entity's priority and other data;
2) Insertion of a new entity into the lower-layer processing unit follows the parallel shift queue scheme and is realized by parallel comparison:
Initialization of the upper-layer queues and lower-layer processing units: first, the upper-layer queues are connected in series, with the highest upper-layer queue at the head of the chain. In each lower-layer processing unit, every entry is placed in the invalid state, meaning the entry is empty; an empty entry is a special entry whose comparison result Compare_Self against any entity priority is 0. When a new entity arrives, its information is delivered simultaneously to the entity input port of every entry, and a control signal notifies every entry of the arrival; each entry immediately compares the priority of its recorded entity with that of the new entity, and one insert takes 3 clock cycles (input, compare, shift) as described in the summary above. Concretely, the new entity's information and priority are sent to the highest upper-layer queue, whose lower-layer processing unit feeds them in parallel to its internal entries for parallel comparison. Each entry independently compares the priority of its own recorded entity with the priority of the input entity, obtaining Compare_Self, combines it with the Compare_Left value received from the upper-level entry, and shifts by the following rules. When Compare_Self = 0 and Compare_Left = 0, the input entity inserts at or above the upper-level entry, so the current entry reads in the entity information recorded by the upper-level entry as its own, while outputting its original entity information onto the data path connected to the next-level entry. When Compare_Self = 0 and Compare_Left = 1, the new entity should insert at the current position, so the current entry records the new entity's information, while outputting its original entity information onto the data path to the next-level entry. When Compare_Self = 1, the new entity belongs after the current entry, so the current entry remains unchanged. All entries operate in parallel; a sketch of this insert step follows.
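The shifting rules above can be made concrete with a small Python sketch (names invented; priorities modeled as integers where larger means higher; None for an empty entry). One assumption beyond the text: the head entry's Compare_Left input is tied to 1, since the patent does not state the boundary value and this choice lets the head itself latch a new entity that outranks everything stored.

```python
from typing import List, Optional, Tuple

Entity = Tuple[int, object]      # (priority, payload); larger = higher priority

def parallel_insert(entries: List[Optional[Entity]],
                    new: Entity) -> Optional[Entity]:
    """One 3-phase insert into a lower-layer unit (descending order,
    None = empty). Returns the entity displaced past the last entry, the
    new entity itself if it ranks below everything in a full unit, or
    None if nothing overflows."""
    n = len(entries)
    # Phase 2: every entry computes Compare_Self in parallel.
    # Empty entries are special and always report 0 (input outranks them).
    cs = [0 if e is None or new[0] > e[0] else 1 for e in entries]
    # Compare_Left of entry i is entry i-1's Compare_Self; the head entry's
    # Compare_Left is tied to 1 (assumption) so the head can latch the new.
    cl = [1] + cs[:-1]
    old = list(entries)          # all entries shift simultaneously in hardware
    # Phase 3: shift according to the rules.
    for i in range(n):
        if cs[i] == 0 and cl[i] == 0:
            entries[i] = old[i - 1]  # take the upstream entry's old entity
        elif cs[i] == 0 and cl[i] == 1:
            entries[i] = new         # insertion point: latch the new entity
        # cs[i] == 1: keep the current entity unchanged
    if cs[-1] == 1:              # new ranks below every stored entity here,
        return new               # so (the unit being full) forward it on
    return old[-1]               # displaced tail entity, or None if tail was empty
```

For entries holding priorities [9, 7, 5, 3] and a new entity of priority 6, cs = [1, 1, 0, 0] and cl = [1, 1, 1, 0]: the third entry latches the new entity, the fourth takes the displaced 5, and the 3 overflows toward the next upper-layer queue. The three phases map onto the three clock cycles of one insert.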
3) The upper-layer queues are realized on the forwarding-priority model; each upper-layer queue contains one lower-layer processing unit, and the upper-layer queues are arranged in series in priority order:
The upper-layer queues are realized according to the forwarding-priority model; the levels are connected by data paths and control signals and arranged in series by priority, and inside each is a lower-layer processing unit;
4) Entity transfer between upper-layer queues occurs when the entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted:
When a new entity must be inserted, if the lower-layer processing unit at the insertion position still has a free entry, the insertion proceeds by the lower-layer insertion method described above and nothing moves between upper-layer queues. If the entries of the lower-layer processing unit at the insertion position are full, entity information must be forwarded between upper-layer queues: the new entity is inserted at the required position, and the lowest-priority entry, which overflows the queue, moves into the next upper-layer queue. The current upper-layer queue outputs the entity information and a control signal; the next upper-layer queue receives them, and its internal lower-layer processing unit begins inserting the entity. If that unit has a free entry, the insertion succeeds and the operation stops; if its entries are also full, the transfer continues to the following upper-layer queue, and so on until the operation stops (see the sketch below).
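Continuing the sketch (illustrative only; it reuses parallel_insert from the sketch after step 2), the ripple of overflow between upper-layer queues might look like this, with each upper-layer queue represented by its entry list:

```python
from typing import List, Optional, Tuple

Entity = Tuple[int, object]

def chain_insert(units: List[List[Optional[Entity]]],
                 new: Entity) -> Optional[Entity]:
    """Insert into the whole queue: only the first upper-layer queue sees
    the new entity; each full unit forwards its overflow (or the entity
    itself, if it ranks below everything stored) to the next unit, and
    forwarding stops as soon as a unit absorbs the carry."""
    carry: Optional[Entity] = new
    for entries in units:
        carry = parallel_insert(entries, carry)  # sketch after step 2
        if carry is None:
            return None          # absorbed; no further queues disturbed
    return carry                 # non-None only if every unit was full
```

Called as chain_insert([u.entries for u in chain.units], (6, "pkt")), the loop stops at the first queue that absorbs the carry, mirroring how the hardware disturbs only as many upper-layer queues as the overflow actually reaches.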
5) The whole priority queue is composed of several upper-layer queues:
The whole priority queue is formed by connecting several upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains several entries;
6) Constant-time dequeue of the highest-priority entity:
The highest-priority entity's information always resides in the highest entry of the lower-layer processing unit in the highest upper-layer queue. When this entity is dequeued, all subsequent entries shift one position toward the head, and between upper-layer queues the highest-priority entry of each following upper-layer queue moves into the upper-layer queue above it; the time needed to complete this process is constant. Concretely, only the highest entry in the highest upper-layer queue needs to be output to obtain the highest-priority entity's information; at the same time a control signal is issued to all upper-layer queues, making every lower-layer processing unit shift all its entries one entry toward the high-priority direction, while between upper-layer queues entries move from the low-priority toward the high-priority direction (see the sketch below).
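A matching Python sketch of the dequeue (again illustrative; units is the list of entry lists, ordered from the highest to the lowest upper-layer queue):

```python
from typing import List, Optional, Tuple

Entity = Tuple[int, object]

def dequeue_highest(units: List[List[Optional[Entity]]]) -> Optional[Entity]:
    """Dequeue the highest-priority entity: within each unit all entries
    shift one place toward the head, and each unit's tail takes the head
    of the following unit (or None for the last unit)."""
    head = units[0][0]
    for u, entries in enumerate(units):
        # Read the next queue's head before that queue itself shifts.
        nxt = units[u + 1][0] if u + 1 < len(units) else None
        entries.pop(0)        # shift this unit one entry toward the head
        entries.append(nxt)   # pull in the next queue's highest entry
    return head
```

The Python loop visits every unit, but in hardware all of these shifts are triggered by one broadcast control signal and happen in the same step, which is why the dequeue time is constant regardless of queue length.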
7) Entry insertion balancing load and performance:
When a new entity is inserted, it needs to be delivered only to the entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue simultaneously is reduced; at the same time, because insertion uses locally parallel comparison combined with forwarding by priority, the time and the per-cell staging hardware that a purely item-by-item comparison would require are saved, improving performance.

Claims (1)

1. An implementation method for a hardware priority queue balancing load and performance, characterized in that:
1) the lower-layer processing unit is composed of several entries: each lower-layer processing unit is composed of several identical entries; each entry is a hardware cell with control-signal ports and entity-information input/output ports; adjacent entries are connected by a data path as wide as the entity data and by a one-bit comparison-result signal; each entry records entity information, including the entity's priority and other data;
2) insertion of a new entity into the lower-layer processing unit follows the parallel shift queue scheme and is realized by parallel comparison: when a new entity arrives, its information is delivered simultaneously to the entity input port of every entry, and a control signal notifies every entry of the arrival; each entry immediately compares the priority of its currently recorded entity with that of the new entity; if the new entity's priority is higher than the recorded entity's and the upper-level entry's comparison output indicates the new entity also outranks that entry's recorded entity, this entry reads in the upper-level entry's recorded entity information as its current entity information, outputs its original entity information onto the data path to the next-level entry, and drives its own comparison result onto the comparison signal as input to the next-level entry; if the new entity's priority is higher than the recorded entity's but the upper-level entry's comparison output indicates the new entity does not outrank that entry's recorded entity, this entry reads in the new entity's information as its current entity information, outputs its original entity information onto the data path to the next-level entry, and drives its comparison result onto the comparison signal; if the new entity's priority is not higher than the recorded entity's, the entry keeps its original entity information unchanged and drives its comparison result onto the comparison signal; one insert operation takes 3 clock cycles in total: the first cycle inputs the new entity, the second produces the comparison results, and in the third the whole priority queue shifts according to the rules above so that the entries remain arranged in priority order;
3) the upper-layer queues are realized on the forwarding-priority model; each upper-layer queue contains one lower-layer processing unit, and the upper-layer queues are arranged in series in priority order: the levels are connected by data paths and control signals, and inside each is a lower-layer processing unit;
4) entity transfer between upper-layer queues occurs when the entries of the lower-layer processing unit inside the preceding upper-layer queue are exhausted: when a new entity must be inserted, if the lower-layer processing unit at the insertion position still has a free entry, the insertion proceeds by the lower-layer insertion method above and nothing moves between upper-layer queues; if the entries of the lower-layer processing unit at the insertion position are full, entity information must be forwarded between upper-layer queues: the new entity is inserted at the required position while the lowest-priority entry, which overflows the queue, moves into the next upper-layer queue; the current upper-layer queue outputs the entity information and a control signal, the next upper-layer queue receives them, and its internal lower-layer processing unit begins inserting the entity; if that unit has a free entry, the insertion succeeds and the operation stops; if its entries are also full, the transfer continues to the following upper-layer queue, and so on until the operation stops;
5) the whole priority queue is composed of one or more upper-layer queues: the whole priority queue is formed by connecting one or more upper-layer queues in series in priority order; each upper-layer queue contains a lower-layer processing unit, and each lower-layer processing unit contains multiple entries;
6) constant-time dequeue of the highest-priority entity: the highest-priority entity's information always resides in the highest entry of the lower-layer processing unit in the highest upper-layer queue; when this entity is dequeued, all subsequent entries shift one position toward the head, and between upper-layer queues the highest-priority entry of each following upper-layer queue moves into the upper-layer queue above it; the time needed to complete this process is constant;
7) entry insertion balancing load and performance: when a new entity is inserted, it needs to be delivered only to the entries of the lower-layer processing unit in the first upper-layer queue, so the load of driving the whole queue simultaneously is reduced; at the same time, because insertion uses locally parallel comparison combined with forwarding by priority, the time and the per-cell staging hardware that a purely item-by-item comparison would require are saved, improving performance.
CN2008101629034A 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance Expired - Fee Related CN101419562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101629034A CN101419562B (en) 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101629034A CN101419562B (en) 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance

Publications (2)

Publication Number Publication Date
CN101419562A true CN101419562A (en) 2009-04-29
CN101419562B CN101419562B (en) 2010-06-02

Family

ID=40630358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101629034A Expired - Fee Related CN101419562B (en) 2008-12-04 2008-12-04 Hardware PRI queue implementing method for balancing load and performance

Country Status (1)

Country Link
CN (1) CN101419562B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246653A * 2012-02-03 2013-08-14 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and device
CN106685854A * 2016-12-09 2017-05-17 Zhejiang Dahua Technology Co., Ltd. Data transmission method and system
CN107728953A * 2017-11-03 2018-02-23 Ramaxel Technology (Shenzhen) Co., Ltd. Method for improving mixed read-write performance of solid state disks
US11012366B2 (en) 2016-12-09 2021-05-18 Zhejiang Dahua Technology Co., Ltd. Methods and systems for data transmission

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100417114C * 2005-03-01 2008-09-03 Huawei Technologies Co., Ltd. Method for realizing load balance between access devices in WLAN

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246653A * 2012-02-03 2013-08-14 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and device
CN103246653B * 2012-02-03 2017-07-28 Tencent Technology (Shenzhen) Co., Ltd. Data processing method and device
CN106685854A * 2016-12-09 2017-05-17 Zhejiang Dahua Technology Co., Ltd. Data transmission method and system
CN106685854B * 2016-12-09 2020-02-14 Zhejiang Dahua Technology Co., Ltd. Data sending method and system
US11012366B2 (en) 2016-12-09 2021-05-18 Zhejiang Dahua Technology Co., Ltd. Methods and systems for data transmission
US11570120B2 (en) 2016-12-09 2023-01-31 Zhejiang Dahua Technology Co., Ltd. Methods and systems for data transmission
CN107728953A * 2017-11-03 2018-02-23 Ramaxel Technology (Shenzhen) Co., Ltd. Method for improving mixed read-write performance of solid state disks
CN107728953B (en) * 2017-11-03 2021-03-02 记忆科技(深圳)有限公司 Method for improving mixed read-write performance of solid state disk

Also Published As

Publication number Publication date
CN101419562B (en) 2010-06-02

Similar Documents

Publication Publication Date Title
US10412021B2 (en) Optimizing placement of virtual machines
CN102882864B (en) A kind of virtualization system based on InfiniBand system for cloud computing
CN102741775B (en) For the methods, devices and systems changed the system power states of computer platform
CN110262754B (en) NVMe and RDMA-oriented distributed storage system and lightweight synchronous communication method
CN100517236C (en) Intelligent card embedded operation system and its control method
US11436258B2 (en) Prometheus: processing-in-memory heterogenous architecture design from a multi-layer network theoretic strategy
CN101419562B (en) Hardware PRI queue implementing method for balancing load and performance
Yechiali Analysis and control of polling systems
CN101667144A (en) Virtual machine communication method based on shared memory
CN1752894A (en) Dynamic power consumption management method in information safety SoC based on door control clock
US7593336B2 (en) Logical ports in trunking
US20140053164A1 (en) Region-Weighted Accounting of Multi-Threaded Processor Core According to Dispatch State
CN111813506A (en) Resource sensing calculation migration method, device and medium based on particle swarm algorithm
CN101467117A (en) Power saving in circuit functions through multiple power buses
CN101902390B (en) Unicast and multicast integrated scheduling device, exchange system and method
US11936758B1 (en) Efficient parallelization and deployment method of multi-objective service function chain based on CPU + DPU platform
WO2022037265A1 (en) Edge computing center integrated server
CN104754063B (en) Local cloud computing resource scheduling method
CN104519106B (en) A kind of task immigration method and network controller
CN113821317A (en) Edge cloud collaborative micro-service scheduling method, device and equipment
US20090144469A1 (en) Usb key emulation system to multiplex information
CN100562864C (en) A kind of implementation method of chip-on communication of built-in isomerization multicore architecture
CN101853185B (en) Blade server and service dispatching method thereof
US7443799B2 (en) Load balancing in core-edge configurations
e Silva et al. Energy Efficient Ethernet on MapReduce clusters: packet coalescing to improve 10GbE links

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20111204