CN101129033A - A method of and a system for controlling access to a shared resource

A method of and a system for controlling access to a shared resource

Info

Publication number
CN101129033A
Authority
CN
China
Prior art keywords
data item
priority
wait
flit
data
Prior art date
Legal status
Granted
Application number
CNA2006800061634A
Other languages
Chinese (zh)
Other versions
CN101129033B (en)
Inventor
Tobias Bjerregaard (托比亚斯·比杰莱加尔德)
Current Assignee
Teklatech AS
Original Assignee
Teklatech AS
Priority date
Filing date
Publication date
Application filed by Teklatech AS
Priority claimed from PCT/DK2006/000119 (WO 2006/089560 A1)
Publication of CN101129033A
Application granted
Publication of CN101129033B
Legal status: Expired - Fee Related

Classifications

    • H04L 47/2433: Allocation of priorities to traffic types (traffic characterised by specific attributes, e.g. priority or QoS; flow control / congestion control in data switching networks)
    • H04L 45/06: Deflection routing, e.g. hot-potato routing (routing or path finding of packets in data switching networks)
    • H04L 47/15: Flow control; congestion control in relation to multipoint traffic
    • H04L 47/50: Queue scheduling (traffic control in data switching networks)
    • H04L 47/70: Admission control; resource allocation (traffic control in data switching networks)

Abstract

A method of and a system for controlling access of data items to a shared resource, wherein each data item is assigned one of a plurality of priorities, and wherein, when a predetermined number of data items of a priority have been transmitted to the shared resource, that priority becomes waiting, i.e. no further data items are transmitted with that priority, until all lower, non-waiting priorities have had one or more data items transmitted to the shared resource. In this manner, guaranteed services may be obtained for all priorities.

Description

Method of and system for controlling access to a shared resource
Technical field
The present invention relates to controlling access to a shared resource and in particular to controlling the transfer of data to a shared resource in a manner providing predetermined guarantees.
Background art
Such technology may be seen in: US 2004/001502; Felicijan T. et al., "An asynchronous on-chip network router with quality-of-service (QoS) support", Proceedings of the IEEE International SOC Conference, Santa Clara, CA, USA, Sept. 12-15, 2004, Piscataway, NJ, USA, IEEE, pp. 274-277; Felicijan T. et al., "An asynchronous low latency arbiter for quality of service (QoS) applications", Proceedings of the 15th International Conference on Microelectronics (ICM 2003), Cairo, Egypt, Dec. 9-11, 2003, Piscataway, NJ, USA, IEEE, pp. 123-126; Bjerregaard T. et al., "Virtual channel designs for guaranteeing bandwidth in asynchronous network-on-chip", Proceedings of the Norchip Conference 2004, Oslo, Norway, 8-9 Nov. 2004, Piscataway, NJ, USA, IEEE, pp. 269-272; and Zhang H. et al., "Rate-controlled static-priority queueing", Networking: Foundation for the Future, Proceedings of the Annual Joint Conference of the Computer and Communications Societies (INFOCOM), San Francisco, March 28 - April 1, 1993, Los Alamitos, IEEE Comp. Soc. Press, US, vol. 2, conf. 12, pp. 227-236.
A problem seen in many data transmission applications is that an amount of data is to be transferred to a shared resource, such as a link or a memory, and not all data can be transmitted at the same time. It must therefore be decided which data to transmit first and which data must wait. Naturally, this decision affects the transmission of the data to the shared resource, e.g. the latency thereof, and thereby the performance of the resource and of the applications involved. The present invention provides a new manner of scheduling and directing data to a shared resource in which guarantees relating to latency and bandwidth may be obtained.
Summary of the invention
In a first aspect, the invention relates to a method of controlling access to a shared resource, the method comprising:
- receiving information relating to one or more data items to be transmitted, each data item being assigned one of a plurality of different predetermined priorities; and
- repeatedly providing data items to the resource by:
  o transmitting to the resource a data item to be transmitted having the highest non-waiting priority, and
  o subsequently having that priority wait for the transmission of a data item from each non-waiting lower priority for which information relating to a data item to be transmitted has been received.
In general, the data items may be data items of any type, such as Ethernet packets, flits (flow control units) or parts thereof. Each data item may be handled and prioritized individually, or larger amounts of data may be prioritized and subsequently broken down into data items.
The priority of the data may be predetermined, or it may be determined from the contents of the data item or of the larger amount of data from which the data item is derived. Alternatively, the priority may be determined from the source or the receiver of the data.
In the present context, a priority may be represented in any desired manner. As the invention will normally be implemented in circuitry, priorities are commonly represented by numbers, such as integers; even for circuitry, however, this is not required.
In general, the priorities are ordered so that it can be determined, for any pair of priorities, which one is the higher and which one is the lower. This is independent of the actual manner of representing the priorities. In the present context, the priorities may be represented in any manner, and their order (one being higher than another) may be determined in any manner.
The receiving step may receive the data themselves (and then determine the priority thereof), receive only the priority, or receive both the data and the assigned priority. Different aspects of these manners of receiving the information are described further below.
In the present context, the shared resource may be any type of resource adapted to receive data, such as a memory, a link, a processor and/or a crossbar.
The invention relates to the manner in which the transmission of data items to the shared resource is controlled; it is not concerned with the individual data items as such but controls the flow of data items on the basis of their priorities.
A priority may be waiting or non-waiting. Waiting means that no data having that priority are, at that point in time, transmitted to the shared resource. A priority may be waiting even if no data item having that priority is ready to be transmitted to the shared resource (that is, even if no information indicating that such a data item is ready has been received).
Preferably, the non-waiting lower priorities for which a given priority must wait are those priorities which were non-waiting at the point in time of transmission of the data item having the priority in question.
At any point in time, the next data item to be transmitted to the resource is a data item which:
- is ready to be transmitted (i.e. information relating to it has been received),
- has a non-waiting priority, and
- has the highest priority of the non-waiting priorities for which data items are ready.
Once a data item has been transmitted, its priority will wait until all lower priorities which are non-waiting and have data ready for transmission have had the opportunity to transmit a data item to the shared resource.
It should be noted that, while a priority is waiting, data items relating to other priorities lower than the waiting priority may become ready for transmission. The waiting priority need not also wait for transmission from those priorities.
It may be desired to keep, for each priority, a record of which lower priorities must transmit before the priority in question may transmit again.
Note that, preferably, even the highest priority is only allowed to transmit once before waiting until all lower non-waiting priorities (that is, all other priorities) have transmitted a data item; in this manner, guarantees relating to obtainable bandwidth and latency can be obtained for all priorities. In addition, the transmission of data items to the shared resource is preferably independent of which priority the actually transmitted data item has (e.g. the full capacity of the link may be used). A given priority can only be found waiting if data items assigned other priorities are ready for transmission; if only a single priority has data items to transmit, that priority does not wait, and its data items may be transmitted to the shared resource one after the other.
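To make the above rule concrete, the following is a minimal behavioural sketch in Python. It is an illustration only, not the claimed implementation; the class name AlgScheduler, the convention that priority 0 is the highest, and the ready/send interface are assumptions introduced for the example.
```python
class AlgScheduler:
    """Illustrative sketch of the scheduling rule (priority 0 = highest)."""

    def __init__(self, num_priorities):
        self.num_priorities = num_priorities
        # For each priority: the set of lower priorities it must still wait
        # for before it may transmit again (empty set == non-waiting).
        self.wait_for = [set() for _ in range(num_priorities)]

    def select_and_send(self, ready, send):
        """ready: set of priorities that have a data item ready to transmit.
        send(p): callback that forwards one data item of priority p."""
        # Highest non-waiting priority with a ready data item.
        candidates = [p for p in sorted(ready) if not self.wait_for[p]]
        if not candidates:
            return None
        p = candidates[0]
        send(p)
        # The transmitting priority now waits for every lower, currently
        # non-waiting priority for which a data item is ready.
        self.wait_for[p] = {q for q in ready if q > p and not self.wait_for[q]}
        # The transmission satisfies one pending obligation of every other priority.
        for q in range(self.num_priorities):
            self.wait_for[q].discard(p)
        return p
```
The wait_for sets play the role of the per-priority record mentioned above: a priority is non-waiting exactly when its set is empty, and if only one priority ever has ready items its set stays empty, so its items are sent back to back.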
One manner of providing the data items further comprises the step of providing a plurality of queues, each queue being associated with one of the priorities, and wherein:
- the receiving step comprises receiving each data item in the queue associated with the priority assigned to the data item,
- the transmitting step comprises transmitting a data item from the non-waiting queue having the highest priority, and
- the waiting step comprises subsequently having that queue wait for the transmission of a data item from each non-empty, non-waiting queue having a lower priority.
The data items are thus received and provided in queues associated with the respective priorities.
In this respect, the waiting step may comprise forwarding the next data item from a queue to a memory when a data item from each lower-priority, non-empty, non-waiting queue has been transmitted from the queues to the shared resource, the transmitting step comprising transmitting the data item in the memory having the highest priority.
This simplifies the separation of the waiting state from the process of determining which data item to transmit. In this embodiment, the memory simply holds a data item from each non-waiting, non-empty queue, so that determining which data item to transmit becomes simple.
In fact, the same memory structure need not be used for the actual data items but may be used for information relating to the data items, so that it can be seen which priorities are waiting and so that the next priority suitable for transmission can easily be determined.
Waiting queues may be prevented from forwarding data items to the memory, so that a non-empty queue having no data item in the memory will be a waiting queue.
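The queue-based embodiment can be sketched by extending the illustrative AlgScheduler above with one FIFO per priority and a one-slot 'head memory' per priority; only non-waiting, non-empty queues forward an item to the memory, and the highest-priority occupied slot is transmitted. All names remain assumptions of the example.
```python
from collections import deque

class QueuedAlgScheduler(AlgScheduler):
    def __init__(self, num_priorities):
        super().__init__(num_priorities)
        self.queues = [deque() for _ in range(num_priorities)]
        self.head_memory = [None] * num_priorities  # one slot per priority

    def receive(self, priority, item):
        # Receiving step: the item enters the queue of its assigned priority.
        self.queues[priority].append(item)

    def step(self, send_to_resource):
        # Waiting queues are prevented from forwarding items to the memory,
        # so an occupied slot always belongs to a non-waiting, non-empty queue.
        for p, q in enumerate(self.queues):
            if self.head_memory[p] is None and q and not self.wait_for[p]:
                self.head_memory[p] = q.popleft()
        ready = {p for p, slot in enumerate(self.head_memory) if slot is not None}
        chosen = self.select_and_send(
            ready, lambda p: send_to_resource(self.head_memory[p]))
        if chosen is not None:
            self.head_memory[chosen] = None
        return chosen
```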
In another embodiment, the method further comprises the step of providing a plurality of data providers, each of which is adapted to:
- provide data items each being assigned one of the priorities, and
- provide information relating to one or more data items ready to be transmitted from the data provider to the shared resource,
and wherein:
- the transmitting step comprises a data provider transmitting the data item to the resource, and
- the waiting step comprises preventing the transmission of ready data items having a waiting priority from any data provider, until a data item of each non-waiting lower priority has been transmitted from a data provider to the shared resource.
In this situation, the method, or a system carrying out the method, need not hold or receive the data itself but may be adapted to indicate to the data providers when to transmit a data item (preferably together with an identification of its priority, so that the provider can determine which of a number of data items to transmit).
In one manner, the waiting step actively prevents, for example by transmitting a prevention signal, the data providers from transmitting data items of waiting priorities. Alternatively, the providers are adapted not to transmit a data item before being instructed to do so. This may be the case when the data providers are further adapted to forward data when instructed accordingly, and wherein:
- the transmitting step comprises instructing a data provider having a ready data item with the highest non-waiting priority to forward that data item to the shared resource, and
- the waiting step comprises not instructing data providers having ready data items of waiting priorities to transmit any data item of a waiting priority.
In general, the receiving step preferably comprises receiving the data items and providing the data items to a storage or memory, the transmitting step comprising transmitting the data items from the storage or memory.
In one embodiment, the waiting step comprises, for a predetermined priority, waiting for the transmission of data items only when a plurality of data items of that predetermined priority have been transmitted. In this manner, the predetermined priority is provided with a larger bandwidth towards the resource than if only a single data item were transmitted each time before the priority enters the waiting phase.
In that situation, any memory holding the data items to be transmitted to the resource for the non-waiting queues may reserve space for a plurality of data items for the predetermined priority and space for fewer data items for the other priorities.
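A sketch of this variant, again building on the illustrative AlgScheduler above: a per-priority burst count lets a predetermined priority transmit several items before it enters the waiting phase. The burst values are parameters of the example, not values taken from the patent.
```python
class BurstyAlgScheduler(AlgScheduler):
    def __init__(self, num_priorities, burst):
        super().__init__(num_priorities)
        self.burst = list(burst)          # allowed items per priority before waiting
        self.sent = [0] * num_priorities  # items sent in the current burst

    def select_and_send(self, ready, send):
        candidates = [p for p in sorted(ready) if not self.wait_for[p]]
        if not candidates:
            return None
        p = candidates[0]
        send(p)
        self.sent[p] += 1
        if self.sent[p] >= self.burst[p]:
            # Burst exhausted: wait for all lower, non-waiting, ready priorities.
            self.wait_for[p] = {q for q in ready if q > p and not self.wait_for[q]}
            self.sent[p] = 0
        for q in range(self.num_priorities):
            self.wait_for[q].discard(p)
        return p
```
For instance, with burst = [1, 1, 4, 1] the third priority may send up to four items per round while the others send one before entering the waiting phase.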
In another aspect, the invention relates to a method of controlling access to a shared resource, the method comprising:
- receiving information relating to one or more data items to be transmitted, each data item being assigned one of a plurality of different predetermined priorities;
- defining one or more transmission processes for each priority, each piece of information relating to a data item being assigned to a transmission process of the pertaining priority; and
- repeatedly providing data items to the resource by:
  o transmitting to the resource a data item the information relating to which has been assigned to a non-waiting transmission process defined for the highest priority, and
  o subsequently having that transmission process wait for the transmission of a data item from each lower priority to which information relating to a data item to be transmitted has been assigned and of which at least one transmission process is non-waiting.
This aspect is closely related to the first aspect, but in this aspect several transmission processes, each obeying the transmission characteristics of the first aspect, may be defined for one or more of the priorities. In this manner, such a priority may obtain a larger bandwidth while defined and actual service guarantees are still provided for the other priorities.
Furthermore, different behaviours may be obtained by controlling to which transmission process received information is assigned and from which transmission process data items are selected for transmission. In addition, the point in time at which received information is assigned to a transmission process will determine the behaviour of the system. Preferably, information is assigned to the transmission process from which the pertaining data item will be transmitted the earliest.
In one embodiment, the transmission process then actually waits for the transmission of a data item from each non-waiting transmission process of a lower priority to which information relating to a data item to be transmitted has been assigned.
In general, in the first and second aspects, a priority, queue or transmission process preferably waits for, and transmits only, data items for which information has been received at the point in time of transmission, among the data items to be transmitted at that point in time. Thus, in these aspects, the determination of which queues, transmission processes or priorities are non-waiting and non-empty is preferably performed at the point in time of transmission of a data item. In one manner, a memory is allocated to each priority, and information on the non-empty, non-waiting lower priorities that are to transmit a data item before the priority in question becomes non-waiting again is provided in that memory. This memory is then updated each time a data item is transmitted.
A further aspect of the invention relates to a system for controlling access to a shared resource, the system comprising:
- receiving means adapted to receive information relating to one or more data items to be transmitted, each data item having been assigned one of a plurality of different predetermined priorities; and
- transmitting means adapted to repeatedly provide data items to the resource by:
  o transmitting to the resource a data item to be transmitted having the highest non-waiting priority, and
  o subsequently having that priority wait with the transmission of data items until a data item has been transmitted by each non-waiting lower priority for which information relating to a data item to be transmitted has been received.
As mentioned above, the receiving means may receive the actual data items and itself determine the pertaining priority, may receive both the data items and the priorities, or may receive only the priorities.
In one general embodiment, the system further comprises a plurality of queues each being associated with one of the priorities, wherein:
- the receiving means are adapted to provide each data item to the queue associated with the priority assigned to the data item, and
- the transmitting means are adapted to transmit a data item from the non-waiting queue having the highest priority and to subsequently have that queue wait for the transmission of data items from all non-waiting queues associated with lower priorities.
The transmitting means may then be adapted to forward the next data item from a queue to a memory when a data item from each non-waiting lower priority has been transmitted from the queues to the shared resource, and to transmit to the resource the data item in the memory having the highest priority.
As mentioned above, a memory of that type may additionally or alternatively be used for the received information, so that only information relating to non-waiting priorities is held in the memory, whereby the next priority to transmit a data item may be determined quickly.
In the same or another embodiment, the system may further comprise a plurality of data providers, each of which is adapted to:
- provide data items each being assigned one of the priorities, and
- provide information relating to one or more data items ready to be transmitted from the data provider to the shared resource,
wherein the transmitting means are adapted to instruct a data provider to transmit a data item having the highest non-waiting priority to the resource, and to subsequently prevent the transmission of ready data items of waiting priorities from any data provider until a data item of each non-waiting lower priority has been transmitted from a data provider to the shared resource.
In this respect, the data providers may be adapted to forward data items when instructed accordingly, the transmitting means being adapted to instruct a data provider having a ready data item with the highest non-waiting priority to forward that data item to the shared resource, and to subsequently not instruct data providers having ready data items of waiting priorities to transmit any data item of a waiting priority until data providers of each non-waiting lower priority have been instructed to transmit a data item.
In general, the receiving means are preferably adapted to receive the data items and provide the data items to a storage or memory, the transmitting means being adapted to transmit the data items from the storage or memory.
In the present context, the shared resource may be any type of resource or receiver adapted to receive data items, such as a link, a memory, a processor, an integrated circuit or a crossbar.
As mentioned above, a larger bandwidth may be provided to a predetermined priority when the transmitting means are adapted to only have a priority wait after a plurality of data items of that priority have been transmitted to the resource.
In a fourth aspect, the invention relates to a system for controlling access to a shared resource, the system comprising:
- receiving means adapted to receive information relating to one or more data items to be transmitted, each data item having been assigned one of a plurality of different predetermined priorities;
- for each priority, one or more transmission processes, each piece of information relating to a data item being assigned to a transmission process of the pertaining priority; and
- transmitting means adapted to repeatedly provide data items to the resource by:
  o transmitting to the resource a data item to be transmitted the information relating to which has been assigned to a non-waiting transmission process of the highest priority, and
  o subsequently having that transmission process wait for the transmission of a data item from each lower priority to which information relating to a data item to be transmitted has been assigned and of which at least one transmission process is non-waiting.
As mentioned above, this provides an additional manner of controlling the flow of data items and of providing determinable and controllable transmission guarantees to the priorities.
In one embodiment, the transmitting means are adapted to subsequently have that transmission process wait for the transmission of a data item from each non-waiting transmission process of a lower priority to which information relating to a data item to be transmitted has been assigned.
Furthermore, in the third and fourth aspects, the transmitting means are preferably adapted to have a priority, queue or transmission process wait for, and transmit only, data items for which information has been received at the point in time of transmission, among the data items to be transmitted at that point in time. This makes the process easier to control, since the point in time of the determination is fixed.
Description of drawings
In the following, preferred embodiments of the invention are described with reference to the drawings, in which:
Fig. 1 illustrates how an asynchronous network connecting independently timed cores facilitates modularity in large-scale SoC design;
Fig. 2 illustrates a basic link on which virtual channels A to D share a physical link;
Fig. 3 illustrates a complete link according to a preferred embodiment, in which a static priority queue (SPQ) prioritizes access to the link in order to provide latency guarantees, admission control ensures the conditions required for the flit flow to obey the SPQ guarantees, and VC control ensures non-blocking behaviour;
Fig. 4 illustrates an example of the operation of the link of Fig. 3;
Fig. 5 is a model of a part of a connection through a sequence of reserved VCs;
Fig. 6 illustrates share-based VC control;
Fig. 7 illustrates a schematic of the share and unshare boxes;
Fig. 8 illustrates a schematic of the admission control;
Fig. 9 illustrates flit latency distributions versus network load: (a) slow path, (b) fast path; and
Fig. 10 illustrates a preferred asynchronous implementation of the SPQ arbiter and the accompanying merged data path.
Detailed description of embodiments
The present embodiment shows how the invention (in the following referred to as ALG, Asynchronous Latency Guarantee) may be used as the link scheduler in an asynchronous network-on-chip (NoC). The guaranteed services (GS) provided by ALG are not mutually dependent; ALG thereby overcomes the limitations of bandwidth (BW) allocation schemes based on time-division multiplexing (TDM) and supports a wide range of traffic types characterized by different GS requirements. At opposite ends of the GS spectrum, ALG supports not only latency-critical, low-BW traffic such as interrupts, but also streaming data which has no strict latency requirements but requires GS in terms of BW. Furthermore, ALG works in a fully asynchronous environment. We demonstrate this with an implementation using 0.12 μm CMOS standard cells.
The remainder of the description is organized as follows. Section 1 reviews the background of GS, presents current solutions and states requirements for an optimal solution in NoC. Section 2 explains the concept of ALG scheduling and provides a proof of its functionality, and Section 3 generalizes the proof given in Section 2 to also take buffer limitations into account. Section 4 deals with bandwidth allocation under the scheme. Section 5 presents the on-chip implementation of the ALG link, and Section 6 provides simulation results.
1. Guaranteed services
In the following, network performance parameters are first discussed and a classification is established. We then discuss the need for connection-oriented routing and the GS schemes used in current NoCs and in macro-networks, and finally we propose a set of requirements for GS in NoC.
1.1. Performance parameters
Service guarantees are quantified by one or more performance parameters. For a service bound to be properly specified, both BW and latency must be indicated. A latency guarantee is useless if the sustainable throughput is too small; likewise, a BW guarantee is useless without a bound on the latency it incurs.
While the BW bound of a flit stream is determined by the bottleneck on its path, the total latency of a flit is characterized by the sum of the latencies encountered through the network. These latencies comprise the network admission latency tadmit of gaining access to the network connection, and a number of hop latencies, a hop being the movement of a flit from the buffer in one routing node, across a link, to a buffer in the neighbouring routing node. The hop latency comprises the access latency taccess, i.e. the time it takes for the flit to be granted access to the shared routing resource (e.g. the link), plus the link latency tlink, i.e. the time it takes, once access has been granted, to transmit the flit to the buffer in the next routing node. The total latency of a flit traversing a path of X hops is thus ttotal = tadmit + taccess1 + tlink1 + ... + taccessX + tlinkX.
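Written out, the latency decomposition just described is:
\[
t_{\mathrm{total}} \;=\; t_{\mathrm{admit}} \;+\; \sum_{i=1}^{X}\bigl(t_{\mathrm{access},i} + t_{\mathrm{link},i}\bigr)
\]
with all terms as defined above.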
1.2. Connection-oriented GS
In order to provide hard service guarantees, connection-oriented routing is indispensable. In connection-less routing, all data travel on the same logical network, and any transmission potentially delays any other transmission. GS traffic must be logically independent of other traffic in the network. From a system-level perspective, hard bounds on the service guarantees are useful because they promote a modular design flow; without such guarantees, a change in the system may require extensive top-level re-verification. GS in NoC may therefore shorten the design cycle of large-scale SoCs. Moreover, GS makes formal verification of performance possible, as is desirable in critical real-time systems, whereas such verification is often impossible in best-effort (BE) routed networks.
1.3. GS schemes
The basic solution for providing BW guarantees is based on fair fluid queuing (FFQ). FFQ is a generalization of head-of-line processor sharing (HOL-PS), in which a separate queue is implemented for each connection. The queue heads are serviced in such a way that each connection is provided a fair share of access to the shared medium (e.g. a link).
In an asynchronous NoC, FFQ-type access schemes have the unpleasant drawback of a very long worst-case latency. A tree arbiter can approximate FFQ but, since the inputs have no timing relation to one another, a packet arriving on a given channel may have to wait for all other inputs before being serviced. The worst-case latency accumulated across a sequence of links in an asynchronous NoC implementing such a link access scheme is therefore very long. Furthermore, the access time is inversely proportional to the BW reservation: in order to obtain a short latency, a large fraction of the BW must be reserved. In a globally asynchronous NoC, the latency through the network can be guaranteed to be one cycle per hop, but the latency of gaining access to the connection at the source is still inversely proportional to the reserved BW. Moreover, since the connections have no individual buffers, an explicit end-to-end flow control mechanism is needed in order to realize such short per-hop latencies. In order to provide latency bounds which are decoupled from the BW guarantee, a different scheme is needed.
Since clock-level synchronization between network nodes obviously cannot be realized in a wide-area network, macro-networks are of a globally asynchronous nature, which makes them somewhat similar to asynchronous NoCs. The asynchronous-NoC latency problem described above is a well-known drawback of FFQ-type solutions applied to GS. To overcome these drawbacks, rate-controlled static-priority (RCSP) scheduling is often used. In RCSP, an admission controller assigns an appropriate transmission time to every incoming packet; when that time arrives, the packet is queued in a static priority queue (SPQ). In this manner, not only mutually independent BW guarantees but also latency guarantees are provided. However, the admission control requires the node to have a local notion of time, which makes it unsuitable for implementation in an asynchronous NoC. Another drawback of the method is that it is not work-conserving, meaning that the router may be idle even though packets awaiting their assigned transmission times are queued on the channels. This reduces the efficiency with which the available network resources are used and, latency bounds aside, also degrades the average connection latency and the link utilization.
1.4. Requirements for GS in NoC
Our proposed requirements for a GS solution in NoC are: (i) simplicity, for high operating speed and low hardware overhead; (ii) work-conserving reservations, so that network resources are used efficiently; and (iii) latency and BW bounds which are decoupled, or at least not inversely related. In addition, a requirement for a GS solution in an asynchronous NoC is: (iv) no notion of time is required, neither local nor global. ALG meets all of these requirements and is thus an effective solution for providing GS in synchronous as well as asynchronous systems.
In Section 2 we explain the ALG scheduling discipline, demonstrating its use in providing latency and BW guarantees on a shared link. Note, however, that ALG-based access may be applied to any shared medium. Furthermore, although our implementation is based on asynchronous circuits, ALG is not restricted to such circuits; the fact that ALG requires no notion of time, however, makes it particularly suitable for asynchronous systems.
2. Basic ALG scheduling
In this section we explain the basic ALG scheduling discipline. We first give an intuitive understanding of how it works and then prove it formally. All times indicated in the following are quantified in units of the flit-time, which is defined as the time it takes to complete one handshake cycle on the physical link. The VC control measures ensure that no flit ever stalls on the link, so the duration of such a handshake is well defined. Since the circuits are asynchronous, the natural flit-time is not constant across the network; we assume, however, that the flit-time is fairly uniform.
Fig. 3 shows a complete ALG link. The ALG scheduler is implemented by the ALG admission control and the static priority queue (SPQ); the VC control wraps around them. In the following it is explained how these three subsystems work together to provide latency and BW guarantees across a number of VCs sharing the physical link. The principle of ALG scheduling is best understood from the inside out: the SPQ prioritizes the VCs and thereby provides the latency guarantees, but only under certain conditions; the admission control ensures that these conditions are met; and the VC control mechanism ensures that a flit is only transmitted on the shared link when free buffer space exists at the receiving end, thereby preventing flits from stalling on the link and invalidating the latency and BW guarantees.
2.1. Prioritizing the channels
In order to provide latency guarantees, a bound on the link access time must be provided. Consider Fig. 2 and imagine flits arriving randomly, but with large spacing, on channels A to D. Now consider servicing the channels according to priority, A having the highest priority. A flit arriving on A is always serviced immediately, so its maximum link access time is guaranteed to be one flit-time, i.e. the time it may take to complete an ongoing transmission. Since we have made, for now, the simplifying assumption of large spacing between the flits arriving on A, a flit arriving on B waits for at most one flit on A before being serviced. Flits on B are thus delayed by at most two flit-times, since they wait at most for the completion of an ongoing transmission plus one flit on A. Likewise, C waits at most three flit-times, and so forth. As a result, the maximum link access time is proportional to the priority of the channel. This is the function implemented by the SPQ in Fig. 3.
2.2. Admission control
The discipline described above requires large flit spacing. This cannot generally be guaranteed, in particular not in an asynchronous network with distributed routing control. Even if a given flit spacing is provided at the source, the network may introduce jitter into the data stream, so that the spacing requirement fails somewhere in the network. This makes an admission control stage, which regulates admission to the SPQ, essential. In Fig. 3, the ALG admission control is shown as the box in front of the SPQ. This is somewhat similar to the RCSP used in macro-networks, which also implements an admission control stage and an SPQ. In RCSP, however, admission is based on local timing of the channels (an appropriate transmission time being scheduled for each priority); in a completely asynchronous system, in which all track of time is lost, this is not possible.
The condition under which the latency bounds of the SPQ hold, enforced by the ALG admission control, is that a flit on a given (higher-priority) VC may delay a flit on another (lower-priority) VC at most once. This is achieved by inspecting which flits are waiting in the SPQ when a flit contends for access on a given VC. In order not to invalidate the latency guarantees of the flits on lower-priority VCs, all flits which were waiting in the SPQ when the previous flit on the given VC was prioritized must be serviced before a new flit is admitted. This is ensured by sampling the occupancy of the SPQ at the moment a flit is being transmitted on the link. Once all flits waiting in the SPQ at that moment have been serviced, a new flit on the same VC may be admitted. A flit on a lower-priority VC is thereby delayed at most one flit-time by the flits on each higher-priority VC. Note that when a given flit is granted access to the link, only flits on lower-priority VCs can be waiting, since, by definition of the SPQ function, all flits on higher-priority VCs have already been serviced.
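The interplay of the SPQ and the admission control described above can be summarized by the following behavioural sketch (illustrative only; VC 0 is taken as the highest priority, and names such as AlgLink, spq and blocked_by are assumptions of the example):
```python
class AlgLink:
    def __init__(self, num_vcs):
        self.num_vcs = num_vcs
        self.spq = [False] * num_vcs                        # flit waiting in SPQ per VC
        self.blocked_by = [set() for _ in range(num_vcs)]   # admission-control state

    def admit(self, vc):
        """A flit arrives on `vc`; it enters the SPQ only if admission is open
        and no flit of this VC is already waiting (one slot per VC here)."""
        if self.blocked_by[vc] or self.spq[vc]:
            return False
        self.spq[vc] = True
        return True

    def transmit(self):
        """Grant link access to the highest-priority waiting flit and sample
        the SPQ occupancy at that moment."""
        waiting = [vc for vc, w in enumerate(self.spq) if w]
        if not waiting:
            return None
        vc = waiting[0]                  # highest priority = lowest index
        self.spq[vc] = False
        # The granted VC may not be re-admitted until every flit that was
        # waiting in the SPQ at this moment has been serviced.
        self.blocked_by[vc] = set(waiting[1:])
        # This transmission services one pending obligation of the other VCs.
        for q in range(self.num_vcs):
            self.blocked_by[q].discard(vc)
        return vc
```
The blocked_by snapshot corresponds to sampling the SPQ occupancy when a flit is granted link access; a new flit of the same VC is admitted only once the snapshot is empty again, which bounds the delay it can impose on each lower-priority VC to one flit-time.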
Fig. 4 illustrates ALG by way of an example. It can be seen how the latency guarantees of the B and C queues are met. Too many flits come in on the A queue, and admission is therefore denied. Since the A flits were transmitted very quickly on the previous link (faster than the guaranteed latency bound), the reason for the burst on A is to be found earlier in the network.
2.3. ALG latency and bandwidth guarantees
In this section we state the latency and bandwidth guarantees provided by a basic ALG connection; these results are derived in Section 2.4. The service guarantees of ALG are characterized by the priority of each VC reserved for the connection and by the total number of VCs on each link. Consider a connection which has reserved VCs of priorities Q1, Q2, ..., QX on a sequence of ALG links 1, 2, ..., X, each link implementing N VCs. These links provide bounds on the link access time of Q1, Q2, ..., QX flit-times, respectively, under the condition that the flit spacing at the source is tinterval >= N + Qmax - 1 flit-times, Qmax being the maximum Q value across the sequence of VCs. This is the so-called interval condition, derived in Section 2.4. The interval condition also constitutes the guaranteed access rate of the connection; thus, as a fraction of the total link capacity, the BW guarantee of the connection is characterized by BWmin = BWmin[Qmax] = BWlink/(N + Qmax - 1).
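In formula form, the guarantees just stated are:
\[
t_{\mathrm{interval}} \;\ge\; N + Q_{\max} - 1 \quad\text{(interval condition)},
\qquad
BW_{\min} \;=\; \frac{BW_{\mathrm{link}}}{N + Q_{\max} - 1}.
\]
As a purely illustrative numerical example (the figures are not taken from the patent), with N = 8 VCs per link and Qmax = 3, the source must keep a flit spacing of at least 10 flit-times, the connection is guaranteed BWlink/10, and the access time on each hop is bounded by its reserved priority Qi flit-times.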
It is important to note that the latency guarantees provided by ALG are decoupled from the BW guarantees. As N (the number of VCs on the link) increases, the BW guarantee can be made arbitrarily small while the link access time is kept as small as one flit-time. Latency-critical connections with low BW needs (e.g. interrupts) are thus supported without over-allocating BW.
Although existing (asynchronous) GS disciplines for NoC based on TDM-type BW allocation achieve a forwarding latency of one flit-time per hop, the initial connection access latency still makes the total end-to-end latency inversely proportional to the BW guarantee. ALG, in contrast, provides instant access for GS connections as long as the interval condition is satisfied. Furthermore, it may be noted that the per-stage forwarding latency in an asynchronous network can be made very small, much smaller than the clock period of a comparable synchronous circuit. Within the latency bounds guaranteed by ALG, an asynchronous NoC may therefore also exhibit a much shorter minimum latency. Herein lies a major advantage of implementing the NoC with asynchronous rather than synchronous circuits.
2.4. Proof
In the following we prove that, if the interval condition is observed at the source, an end-to-end bound on the connection latency can be stated. The admission control may hold a flit back, but only when the flit is ahead of its global schedule, the locally observed flit spacing having been compressed. The proof consists of two parts. In the first part it is proven that the ALG discipline works across a single link: it is first proven that the first flit transmitted on a connection meets its latency requirement, i.e. meets its deadline, and it is then proven that any flit which follows a flit that met its deadline and which observes the interval condition also meets its deadline; the value of the interval follows from this. By induction, all flits observing the interval condition meet their deadlines. In the second part of the proof it is proven that, for a sequence of ALG links, flits meet their deadlines on every link if the interval condition is observed at the source. Hence, regardless of the interval condition failing locally inside the network, the end-to-end latency of the connection is bounded by the sum of the latency guarantees of the individual links.
Single link theorem: On an ALG link implementing N VCs, and under the condition of a flit spacing of tinterval >= N + Q - 1 flit-times, all flits on VC Q are guaranteed a maximum link access time of Q flit-times.
Proof: Take a given link implementing N VCs corresponding to priorities 1, 2, 3, ..., N in the SPQ. The first flit arriving on a given VC Q ∈ {1, ..., N} is admitted to the SPQ immediately. In the SPQ it waits at most Q flit-times before being granted access to the link. It thus meets its deadline, which is bounded by a maximum link access time of Q flit-times.
Now consider, on Q, a flit A which was admitted to the SPQ immediately and thus met its deadline, and the flit B following flit A on Q. Flit B arrives tinterval flit-times later than flit A. Flit A waited 0 flit-times for access to the SPQ and at most Q flit-times in the SPQ. When flit A is granted access to the link, at most N - Q flits (i.e. the number of VCs with a priority lower than Q) are waiting in the SPQ. According to the ALG discipline, all of these must be transmitted before the next flit on Q is admitted to the SPQ. In the worst case, at most Q - 1 flits (i.e. the number of VCs with a priority higher than Q) can take priority over the N - Q flits which must be transmitted before the admission control admits flit B of Q to the SPQ. The sum of these partial delays indicates that at most Q + (N - Q) + (Q - 1) = N + Q - 1 flit-times can pass between a flit on Q being admitted to the SPQ and the next flit being admissible. This means that, if tinterval >= N + Q - 1 flit-times, flit B must be admitted to the SPQ immediately, and since it waits at most Q flit-times in the SPQ, it too meets its deadline. Thus, under the interval condition, any flit following a flit that met its deadline will itself meet its deadline, and since the first flit meets its deadline, by induction all flits meet their deadlines.
We now prove that, for a sequence of ALG links, even if the interval condition fails locally because jitter has been introduced in the network, ALG still guarantees that all flits meet their deadlines on every link. The end-to-end latency is thus bounded by the sum of the latency bounds of the individual links. At this point we still assume that sufficient buffer space always exists in the nodes; in Section 3 we strengthen this proof with buffering requirements.
Link sequence corollary: Under the assumption that sufficient buffer space is always available, for a given connection which has reserved VCs Q1, Q2, ..., QX on a sequence of (X) ALG links each implementing N VCs, and under the condition that a flit spacing of tinterval >= N + Qmax - 1 flit-times is observed at the source, the latency bound is the sum of the latency bounds of the individual links. Here, Qmax is the maximum of {Q1, Q2, ..., QX}.
Proof: Consider a link on which a VC Q ∈ {Q1, Q2, ..., QX} has been reserved for the connection in question, and a flit A of the connection which met its deadline on that link. Since flit A met its deadline, the admission control will, according to the single link theorem above, admit the flit B following flit A on the same VC to the SPQ at most N + Q - 1 flit-times after flit A was granted access to the SPQ. Since flit A met its deadline, it is either on time or ahead of its schedule. If flit B is further ahead of its schedule than flit A, it will arrive less than N + Q - 1 flit-times after flit A was granted access to the SPQ, and the admission control will not admit it immediately; at the latest, flit B must be admitted N + Q - 1 flit-times after flit A was admitted. Their spacing, which was N + Qmax - 1 flit-times at the source, is then this value or smaller, so flit B is at least as far ahead of its schedule as flit A. Hence it too meets its deadline. If, on the other hand, flit B is less ahead of its schedule than flit A, due to congestion at an earlier stage of the network, it will arrive more than N + Qmax - 1 flit-times after flit A was granted access to the SPQ. It will thus be admitted immediately and meets its deadline.
The first flit transmitted on the connection meets its deadline, since it is not held back by the admission control of any link. Since any flit following a flit that meets its deadline itself meets its deadline, by induction all flits of the connection meet their deadlines at all links.
The sustainable BW follows from this:
Minimum bandwidth corollary: For a given connection which has reserved VCs Q1, Q2, ..., QX on a sequence of (X) ALG links each providing a total bandwidth BWlink and each implementing N VCs, the minimum sustainable bandwidth is BWmin = BWlink/(N + Qmax - 1). Here, Qmax is the maximum of {Q1, Q2, ..., QX}.
Proof: According to the link sequence corollary, all flits of an ALG connection observing the interval condition tinterval >= N + Qmax - 1 flit-times have a bounded latency. A flit stream may thus be transmitted at a flit rate of at least 1/(N + Qmax - 1) of the total flit rate supported by the link without causing congestion. From this it follows directly that the sustainable bandwidth is at least BWmin = BWlink/(N + Qmax - 1).
3. Buffers
In the preceding section we assumed that flits flow freely in the network, constrained only by the ALG link access scheduling discipline. Since this work targets lossless networks in which flits are never dropped, each link must also implement back-pressure flow control, ensuring that a flit is only transmitted on a VC when the receiving end has free buffer space. This introduces an additional layer of admission control, namely the VC control shown in Fig. 3. The VC control wraps around the ALG admission control and the SPQ and only lets a flit pass when the receiving VC buffer has indicated that it has free space. A flit must only be presented to the ALG admission control if it can travel freely to the receiving end of the link; otherwise the latency guarantees provided by the ALG discipline may be invalidated by flits stalling on the link. On the other hand, a flit must not be delayed so much by the VC control that it misses its deadline, as this would likewise invalidate the ALG latency guarantees.
In this work we have used share-based VC control. The scheme, illustrated in Fig. 6, uses a single wire per VC to implement non-blocking access to the shared medium (e.g. the link). After admitting a flit, the share box locks, allowing no further flits to pass. The flit travels across the medium to the unshare box at the far side, which implements a latch that accepts the flit. When the flit leaves the unshare box again, a transition is made on the unlock control wire. This unlocks the share box, admitting another flit onto the medium. As long as the medium is not locked, no flits stall on it.
As shown in Fig. 5, we model the sequence of ALG links with direct wires between the inputs and the VC buffers reserved for the connection. This assumption is valid for router architectures implementing non-blocking switching. The VC buffers in the figure implement an unshare box and a share box on their input and output, respectively. The latencies involved are the link access latency taccess, being the time it takes for a flit to be granted access to the link; the link latency tlink, being the time it takes a granted flit to traverse the link, pass through the router and enter the next VC buffer; and the unlock latency tunlock, being the time it takes the unlock signal to travel back to the share box in the previous VC buffer, indicating that another flit may be granted access to the link. When no congestion occurs, all of these latencies except the link access latency are constant. The end-to-end latency bound of a connection across a sequence of (X) ALG links each implementing N VCs is similar to the ttotal introduced in Section 1.1: tend2end = taccess1 + tlink1 + taccess2 + tlink2 + ... + taccessX + tlinkX. For simplicity, N is here taken to be the same on all links. The link access times are determined by the priorities Q1, Q2, ..., QX reserved on the individual links: taccess1 = Q1 flit-times, taccess2 = Q2 flit-times, and so on. According to Section 2.3, the maximum Q in the connection, Qmax, determines the BW guarantee of the connection, since it constitutes the bottleneck on the path: BWmin = BWlink/(N + Qmax - 1).
We now need to determine the requirement for a flit always to be on or ahead of its schedule at the moment it gains access to the SPQ, i.e. at the moment the share box is unlocked. If this holds, the flit will face the ALG admission control and, according to the ALG discipline, will meet its deadline. In the following we prove that single-flit VC buffers are sufficient for the ALG scheduling discipline to work properly, under the flit interval condition and under the link cycle condition tlink + tunlock < N - 1 flit-times.
Single buffer theorem: Under the flit interval condition tinterval >= N + Qmax - 1 flit-times and the link cycle condition tlink + tunlock < N - 1 flit-times, a single-flit buffer per VC in each node is sufficient to guarantee the validity of the link sequence corollary.
Proof: As shown in Fig. 5, consider a part of a connection which has reserved VCs (..., Qi, Qj, ...) on a sequence of ALG links each implementing N VCs. Each of the VC buffers VCbufi and VCbufj has buffer space for one flit. They are empty at reset, so the first flit transmitted on the connection is not restricted by the VC control and will meet its deadline according to the ALG discipline. Now consider the flit B following a flit A which met its deadline. Since flit A met its deadline, it accesses SPQj ahead of its schedule by 0 or more; call the corresponding scheduled time 0. At this moment it has signalled to VCbufi that VCbufj is ready to receive another flit. VCbufi will therefore, a time tunlock later, open its output to the next flit, i.e. flit B. Flit A must have left SPQi no later than time 0 - tlink, so admi will admit flit B to SPQi no later than 0 - tlink + N - 1 = N - 1 - tlink flit-times. If VCbufi lets flit B through before this time, VCbufi is not the limiting agent of the flow. The requirement for the VC control not to be the limiting agent in the system is thus N - 1 - tlink > tunlock, i.e. tlink + tunlock < N - 1 flit-times. This constitutes the link cycle condition. If the link cycle condition holds, flit B will arrive at VCbufj at the time Qi + tlink + N - 1 - tlink = Qi + N - 1 flit-times, which is less than or equal to the required flit spacing of Qmax + N - 1 flit-times. Flit B is therefore on schedule when arriving at admj.
Under the condition of a minimum flit spacing of Qmax + N - 1 flit-times at the source, and under the link cycle condition tlink + tunlock < N - 1 flit-times, any flit following a flit that met its deadline will thus also meet its deadline. Since the first flit meets its deadline, by induction all flits of the connection meet their deadlines.
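The two conditions under which single-flit buffering suffices may be summarized as (both in flit-times):
\[
t_{\mathrm{interval}} \;\ge\; N + Q_{\max} - 1 \quad\text{(interval condition)},
\qquad
t_{\mathrm{link}} + t_{\mathrm{unlock}} \;<\; N - 1 \quad\text{(link cycle condition)}.
\]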
4. Bandwidth allocation
In its basic form, as described so far, an ALG link provides flexible latency guarantees: a VC may be chosen whose priority corresponds to the latency requirement of the connection to be established. The bandwidth guarantee, however, is fixed by the priority of the chosen VC, and the BW does not differ much between priorities, ranging from 1/N for the highest priority to 1/(2N - 1) for the lowest. In the following we describe three methods of obtaining flexible allocation of the BW of a connection while still meeting the flexible latency guarantees of the basic ALG configuration.
The first method (the multi-channel method) is straightforward: several VCs on each link are allocated to a single connection. This effectively creates a connection consisting of several 'parallel' connections and thus increases the data capacity. Although the allocation can be administered locally on each link, it can also be administered end-to-end at the network adapters. The advantage of this manner of allocating BW is that it is simple to understand and to implement. Moreover, the flits arrive in bursts (with small spacing): up to X flits can be sent back-to-back on the sequence of individually buffered VCs, X being the number of VCs allocated to the connection on each link. By choosing high-priority VCs, the latency of all of these flits can therefore be made very short. This is a major advantage for small packets, since the whole packet can be guaranteed a very low total latency. In networks with bus-type (memory-mapped) access sockets such as the OCP interface, most packets carrying simple OCP commands consist of 2-3 flits in a 32-bit network. By allocating a number of VCs corresponding to the number of flits in such a packet or burst, the forwarding latency of these commands is reduced to the ALG latency of the slowest VC allocated. The drawback of the method is that the link area grows more or less linearly with the BW granularity. The area is dominated by the flit buffers, and since each VC requires one flit buffer, the BW granularity is roughly inversely proportional to the number of VCs. In addition, a wider SPQ is needed; this reduces the performance of the SPQ and, since the SPQ is the bottleneck of the link in the present embodiment, increases the flit-time.
Second and the third method around by basic ALG permission controlling schemes-need not to realize more cushion spaces, allow to develop than the basic conception that normally allows to visit a priority more.Therefore, allow a VC to visit SPQ more continually, and therefore, the throughput of this VC will increase.But because flit can not independently begin, total stand-by period of the many flit groupings in the connection is longer than first method.They use identical buffer series in whole network, therefore since flit must wait for that previous flit leaves connection carry out buffer in the node, they will have the spacing of can not ignore on connecting.
According to second method (many permission control methods), each VC has many permission controls.If any one of these permission controls is non-wait, then can permit flit.Then, in case send the flit of permission, the permission control of permission flit (if more permissions control is non-wait, can select any one ' that ' as permission flit of these permission controls) will be according to take a sample the taking of lower priority of SPQ of the method that the front is described at basic ALG scheduling scheme.As can be seen, for each the permission controlled stage that realizes (or movable each), each VC can stop up the transmission of a lower priority VC potentially, and therefore, the stand-by period that correspondingly shortens these VC guarantees.Therefore stand-by period of VC guarantees with the quantity of the VC of higher priority irrelevant, but relevant with the sum of the activity permission controlled stage of higher priority VC.This method visits corresponding to two or more different priorities in the basic ALG link of flit buffer utilization, only selects then permitting open that.But because priority will never be used (owing to having only a flit buffer) simultaneously, the stand-by period that they will be equated guarantees that according to the limit priority of two priority, they will never stop up mutually.
On each link, a flit can be sent only when the flit buffer in the next node is free. That is, a flit can be sent on a link only after the previous flit has been sent on the proceeding link. This gives us the separation condition. Assuming that the flit is not blocked by the admission control in the next node, i.e. that the flit is not ahead of its schedule: t_separation = t_link + Q_proceeding_link + t_unlock. Note that, unlike the interval condition presented earlier, the separation condition is independent of N. The interval condition still holds for each admission control level, but it is independent of the separation condition, which applies to the flit stream of the connection. Thus, if a VC implements two admission control levels and t_interval >= 2*t_separation, we obtain twice the BW for the given VC. This corresponds to the link cycle condition of Section 5. If t_interval < 2*t_separation, the guaranteed BW is less than doubled, because the flits are limited by the separation condition, which is then stricter than the interval condition. Hence, the maximum obtainable bandwidth is determined by the link cycle condition, and the interval condition 'balances' multiples of this condition.
Note that the Q referred to here is not equal to the number of priorities of the SPQ, but is a Q calculated from the total number of active admission control levels of the higher-priority VCs plus 1.
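A small numeric check of how the two conditions interact; the 1.4 ns total matches the flit-time reported in the results, while the split into individual terms and the interval value are assumptions chosen only for the example:

    # Illustrative check of the separation and interval conditions (values in ns).
    # The 1.4 ns total matches the flit-time of the results section; the split
    # into terms and the interval value are assumptions chosen for the example.
    t_link, q_proceeding_link, t_unlock = 0.7, 0.5, 0.2
    t_separation = t_link + q_proceeding_link + t_unlock    # 1.4 ns

    levels = 2           # admission control levels implemented on the VC
    t_interval = 3.0     # interval condition of one level (assumed)

    if t_interval >= levels * t_separation:
        print("BW guarantee scales by", levels)   # stream not separation-limited
    else:
        print("BW limited by the separation condition")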
According to the third method (the counting admission control method), the admission control level does not sample the occupancy every time a flit is sent at its priority (closing access if any lower-priority VC is waiting), but samples the occupancy only every X-th time. In effect, a burst of flits can be sent whenever the admission control opens (becomes non-awaiting). After the last flit of the burst, the admission control closes, the occupancy of the lower-priority VCs is sampled, and those occupied VCs are awaited until they have sent a flit. Each flit of a lower-priority VC therefore potentially has to wait for this particular (higher-priority) VC X times, and its latency guarantee is shortened correspondingly. As with the multi-admission-control method, the increase in bandwidth made available to a VC is obtained at the cost of the latency of the lower-priority VCs.
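A behavioural sketch of the counting admission control method, under the same modelling assumptions as above (X, the VC names and the class interface are illustrative):

    # Behavioural sketch of the counting admission control method: the occupancy
    # of lower-priority VCs is sampled only on every X-th flit, so bursts of up
    # to X flits can be sent while the control is open.  Names/values assumed.

    class CountingAdmission:
        def __init__(self, x):
            self.x = x                 # sampling period: flits per burst
            self.sent_in_burst = 0
            self.blocked_on = set()

        def may_admit(self):
            return not self.blocked_on

        def flit_sent(self, spq_occupancy_snapshot):
            self.sent_in_burst += 1
            if self.sent_in_burst == self.x:          # last flit of the burst
                self.blocked_on = set(spq_occupancy_snapshot)
                self.sent_in_burst = 0

        def lower_priority_sent(self, vc):
            self.blocked_on.discard(vc)

    ac = CountingAdmission(x=3)
    for _ in range(3):
        ac.flit_sent({"vc4"})          # burst of 3 flits, then the control closes
    print(ac.may_admit())              # False until vc4 has sent a flit
    ac.lower_priority_sent("vc4")
    print(ac.may_admit())              # True again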
5. Implementation
As shown in Fig. 3, the basic ALG link consists of three basic subsystems: VC control, admission control and the SPQ.
5.1. VC control
The VC control scheme used is described functionally in Section 4. Fig. 7 shows a schematic of the implementation of the shared box and the unshared box of one VC. The single-wire unlock signal acts as a 2-phase acknowledge. The pulse generated by pulse_gen must be long enough to reset the C-element c_clock. The output_decouple circuit at the output of the shared box decouples the shared medium from the VC. Thereby, free flow of flits is guaranteed irrespective of individual slow VCs.
5.2. Admission control
The novelty of the ALG scheduling lies in the admission control stage, which controls the flow of flits so that the SPQ can provide the appropriate latency bounds.
The admission control of each channel implements one state bit register for each lower-priority channel. While one or more state bits of a given channel are set, the admission control stalls the admission of flits to the SPQ on that channel. When a flit on the channel is admitted to access the link, the state bits are set according to a snapshot of the SPQ occupancy. The occupancy indicates which channels had flits waiting in the SPQ when the given channel was granted access to the link (by the arbitration). The state bits are subsequently reset as these waiting flits are granted access to the link. When they have all been sent, all state bits are cleared, and the admission control admits another flit on the given channel.
Fig. 8 shows a schematic of the admission control of channel n. The state bit registers [n-1...0], one for each lower-priority channel, are implemented as RS latches. Channel n is regarded as the highest-priority channel contending for access to the link at a given time. The SPQ generates an occupancy vector whose value is stable while acknowledge n is high. The set inputs of the state bit registers are the logical AND of acknowledge n and the occupancy vector. Thus, when the channel is granted access to the link (indicated by its acknowledge going high), the appropriate state bits are set according to the occupancy of the SPQ. The reset input of each state bit register is simply connected to the acknowledge signal of the corresponding channel. When a channel is granted access to the link, its acknowledge goes high, and so the state bit corresponding to that channel is reset in the admission control of each higher-priority channel. When all state bits are low, the input request is allowed to propagate to the output. Since the state bits are set by the local acknowledge, a C-element, rather than an AND gate, is needed in the request path. This guarantees that the output request does not go low until the input request goes low.
Note that the set and reset of the state bit registers are mutually exclusive, since only one channel can access the link at a given time.
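A functional model of this set/reset behaviour may clarify the logic; the circuit itself uses RS latches and a C-element, whereas the sketch below (with assumed signal encodings) only mirrors the register-level rules:

    # Functional model of the per-channel state bits of Fig. 8: one RS bit per
    # lower-priority channel, set by (ack_n AND occupancy[i]), reset by ack_i,
    # and the input request may pass only when all bits are low.  This mirrors
    # the logic only; the signal encodings below are assumptions of the sketch.

    def admission_step(state, ack, occupancy, n):
        """state: state bits for channels 0..n-1; ack: acknowledge per channel;
        occupancy: SPQ occupancy per channel (stable while ack[n] is high)."""
        for i in range(n):
            if ack[n] and occupancy[i]:    # set: channel n granted while i waits
                state[i] = True
            if ack[i]:                     # reset: channel i has been granted
                state[i] = False
        return not any(state.values())     # request allowed to propagate?

    state = {0: False, 1: False}
    # Channel n=2 is granted while channel 1 has a flit waiting in the SPQ:
    print(admission_step(state, {0: False, 1: False, 2: True},
                         {0: False, 1: True}, 2))   # -> False (stalled)
    # Later, channel 1 is granted the link, which clears the state bit:
    print(admission_step(state, {0: False, 1: True, 2: False},
                         {0: False, 1: False}, 2))  # -> True (admits again)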
5.3. Static priority queue
The asynchronous implementation of the SPQ arbiter and the accompanying merged data path is shown in Figure 10. The key control signal is 'enable'. On reset, 'enable' is low. The asymmetric C-elements on the inputs guarantee that the internal request signals do not go low while 'enable' is high, since this indicates the operating phase of the arbiter. The mutexes implement so-called lock registers. When one or more input requests propagate to the outputs of these lock registers, this is detected by the OR gate, which generates 'any_req' and makes 'enable' go high. This relocks the lock registers, and newly arriving input requests are blocked for as long as 'enable' is high. Hereafter, exactly one output C-element fires, acknowledging the appropriate channel and latching the data of that particular channel in the output latch. 'enable' now goes low, indicating that the prioritization stage has finished. As long as 'enable' is low, no output C-element can fire, but once the acknowledged channel has withdrawn its request and the link has acknowledged the output request, the corresponding C-element is reset, causing 'enable' to start another prioritization stage.
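A behavioural model of one arbitration cycle, abstracting away the asynchronous handshake details of Figure 10 (the dictionary encoding and the lowest-index-wins convention are assumptions of the sketch):

    # Behavioural model of one SPQ arbitration cycle: pending requests are
    # locked, the highest-priority one is acknowledged and its data forwarded,
    # and the remaining locked requests form the occupancy vector used by the
    # admission control.  Dictionary encoding and lowest-index-wins are assumed.

    def spq_cycle(pending):
        """pending: {channel_index: flit}, lower index = higher priority.
        Returns (granted_channel, flit, occupancy_vector)."""
        if not pending:
            return None, None, []
        locked = dict(pending)                 # lock registers: request snapshot
        winner = min(locked)                   # static priority selection
        occupancy = [ch for ch in sorted(locked) if ch != winner]
        return winner, locked[winner], occupancy

    print(spq_cycle({3: "flit_c", 1: "flit_a", 6: "flit_b"}))
    # -> (1, 'flit_a', [3, 6]): channel 1 wins; channels 3 and 6 are the occupancy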
A fast return-to-zero (RTZ) phase can be obtained by assuming that the 'g1' output of the mutexes is not excessively slower than the 'g2' output. All that is required for RTZ is that the 'r2' input of the prioritized channel propagates through 'g2' to hold the output C-element. The assumption is safe as long as 'g1' goes low before the falling transition of 'g2' has propagated back to the input and restarted 'enable' through the output C-element; both RTZ timing paths, the one through 'g1' and the one through 'g2', start from 'enable' going low. When the number of VCs is doubled, the cycle time of the SPQ increases by only three 2-input gate depths (about 30-40 ps in 0.12 micron technology): one in the asymmetric C-elements and two in the completion-detecting OR gate.
The occupancy vector used by the admission control to set its state bits is formed as the logical AND of 'enable' and the locked input requests. As pointed out above, this vector is required to be stable while the acknowledge is high. By ANDing the locked request vector with 'enable', it is guaranteed that, while the acknowledge is high, each bit of the occupancy vector is either stable or low. Since the state bit registers are set only by high occupancy bits, this is sufficient.
6. Results
A 16-bit, 8-VC ALG link was implemented using commercial 0.12 μm CMOS standard cells. Using typical timing parameters, the design simulates at a speed of 702 MDI/s, corresponding to a flit-time of 1.4 ns. The shared physical link implements a 3-stage pipeline. The cell area of the entire link, i.e. pre-layout, is 0.012 mm², of which the core of the ALG scheduler (admission control and SPQ) takes only 0.004 mm². This shows that the benefits of ALG are not at all costly in terms of area.
A test bench emulating connections across a series of three ALG links was simulated. Two connections were observed: a fast connection reserving high-priority VCs, and a slow connection reserving low-priority VCs. Random background traffic was applied on all other VCs while the latency of the flits was recorded. Fig. 9 shows the distribution of the flit latencies, recorded over more than 10000 flits, for different network loads. It can be seen that even at 100% offered load, the flits in the connections meet their deadlines. As the load increases, the latency distribution is pushed towards the latency bound, but never crosses it. Forwarding latency bounds are obtainable starting from 3.6 ns per hop and increasing upwards in steps of 1.4 ns (the flit-time). This includes the ALG admission latency and the constant forwarding latency across the link (shared box, merge, pipeline, split and unshared box, about 2.2 ns). The BW guarantee on the fast connection is 1/8*702 MDI/s = 88 MDI/s, and on the slow connection it is 1/15*702 MDI/s = 47 MDI/s.
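The quoted bandwidth figures follow directly from the flit rate and the reserved shares; a quick check using the numbers from the text:

    # Quick check of the bandwidth guarantees quoted above (702 MDI/s flit rate,
    # shares of 1/8 for the fast and 1/15 for the slow connection).
    rate = 702                     # MDI/s
    print(round(rate / 8))         # fast connection:  88 MDI/s
    print(round(rate / 15))        # slow connection:  47 MDI/s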
Table 1 contrasts the guarantees of ALG with those of existing scheduling schemes used in NoCs. In the table, N is the number of VCs on each link, and h is the number of hops spanned by a given connection. TDM is used in synchronous NoCs and, when some type of end-to-end flow control is implemented, provides a connection latency bound as low as N+h. If not, the latency bound degrades to the level of asynchronous fair sharing, i.e. (N+1)*h. The table shows that ALG provides much better latency bounds and is, in general, more flexible with regard to the types of connections that can be instantiated.
              Synchronous  Asynchronous
              TDM          Fair sharing  ALG fast path  ALG slow path
t_admit       N            0             0              0
t_access      1            N             1              N
t_link        1            1             1              1
Latency       N+h          (N+1)*h       h              (N+1)*h
Bandwidth     1/N          1/N           1/N            1/(2N-1)

Table 1. Latency and bandwidth guarantees of the different GS schemes
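To relate the table to the experiment of Section 6, the formulas can be evaluated for the simulated configuration, assuming N = 8 VCs per link and h = 3 hops; note that the slow-path share 1/(2N-1) = 1/15 matches the 47 MDI/s figure quoted above:

    # Table 1 evaluated for the simulated setup (assuming N = 8 VCs per link and
    # h = 3 hops), latencies in flit-times and bandwidth as a share of the link.
    N, h = 8, 3
    print("TDM latency bound:          ", N + h)         # 11
    print("Fair-sharing latency bound: ", (N + 1) * h)   # 27
    print("ALG fast-path latency bound:", h)             # 3
    print("ALG slow-path latency bound:", (N + 1) * h)   # 27
    print("ALG slow-path BW share: 1 /", 2 * N - 1)      # 1/15, i.e. 47 MDI/s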

Claims (20)

1. A method of controlling access to a shared resource, the method comprising:
- receiving information relating to one or more data items to be transmitted, each data item having been assigned one of a plurality of different predetermined priorities; and
- repeatedly providing data items to the resource by:
o transmitting to the resource a data item to be transmitted which has the highest non-awaiting priority, and
o subsequently having that priority await the transmission of a data item from each non-awaiting lower priority for which information relating to a data item to be transmitted has been received.
2. A method according to claim 1, further comprising the step of providing a plurality of queues, each relating to one of the priorities, and wherein:
- the receiving step comprises receiving each data item in the queue relating to the priority assigned to the data item,
- the transmitting step comprises transmitting a data item from the non-awaiting queue having the highest priority, and
- the awaiting step comprises subsequently having that queue await the transmission of a data item from all non-empty, non-awaiting queues having a lower priority.
3. A method according to claim 2, wherein the awaiting step comprises forwarding a next data item from a queue to a memory when a data item from each non-awaiting lower priority has been transmitted from a data provider to the shared resource, and wherein the transmitting step comprises transmitting that data item in the memory which has the highest priority.
4. A method according to any one of claims 1-3, further comprising the step of providing a plurality of data providers, each of the plurality of data providers being adapted to:
- provide data items, each having been assigned one of the priorities, and
- provide information relating to one or more data items ready to be transmitted from the data provider to the shared resource,
and wherein:
- the transmitting step comprises a data provider transmitting the data item to the resource, and
- the awaiting step comprises preventing transmission, from any data provider, of ready data items having an awaiting priority, until a data item from each non-awaiting lower priority has been transmitted from a data provider to the shared resource.
5. A method according to claim 4, wherein the data providers are adapted to transmit a data item when instructed to do so, and wherein:
- the transmitting step comprises instructing a data provider having a ready data item with the highest non-awaiting priority to transmit that data item to the shared resource, and
- the awaiting step comprises not instructing data providers having ready data items of an awaiting priority to transmit any data items of the awaiting priority.
6. A method according to any one of the preceding claims, wherein the receiving step comprises receiving the data items and providing the data items in a storage or memory, and wherein the transmitting step comprises transmitting data items from the storage or memory.
7. A method according to any one of the preceding claims, wherein the shared resource is a link, a memory, a processor, an integrated circuit or a crossbar switch.
8. A method according to any one of the preceding claims, wherein the awaiting step of a predetermined priority comprises awaiting the transmission of data items only when a plurality of data items have been transmitted at the predetermined priority.
9. A method of controlling access to a shared resource, the method comprising:
- receiving information relating to one or more data items to be transmitted, each data item being assigned one of a plurality of different predetermined priorities;
- defining one or more transmission processes for each priority, each piece of information relating to a data item being assigned to a transmission process of the pertaining priority; and
- repeatedly providing data items to the resource by:
o transmitting to the resource a data item the information relating to which has been assigned to a non-awaiting transmission process defined for the highest priority, and
o subsequently having that transmission process await the transmission of a data item from each lower priority which has been assigned information relating to a data item to be transmitted and of which at least one transmission process is non-awaiting.
10. A method according to any one of the preceding claims, wherein the priority/queue/transmission process awaits only the transmission of those data items for which information had been received at the point in time of the transmission of the transmitted data item.
11. A system for controlling access to a shared resource, the system comprising:
- receiving means adapted to receive information relating to one or more data items to be transmitted, each data item being assigned one of a plurality of different predetermined priorities; and
- transmitting means adapted to repeatedly provide data items to the resource by:
o transmitting to the resource a data item which is to be transmitted and which has the highest non-awaiting priority, and
o subsequently having that priority await the transmission of data items until a data item has been transmitted by each non-awaiting lower priority for which information relating to a data item to be transmitted has been received.
12. A system according to claim 11, further comprising a plurality of queues, each relating to one of the priorities, and wherein:
- the receiving means is adapted to provide each data item in the queue relating to the priority assigned to the data item,
- the transmitting means is adapted to transmit a data item from the non-awaiting queue having the highest priority, and to subsequently have that queue await the transmission of a data item from all non-awaiting queues relating to a lower priority.
13. A system according to claim 12, wherein the transmitting means is adapted to forward a next data item from a queue to a memory when a data item from each non-awaiting lower priority has been transmitted from the queues to the shared resource, and to transmit to the resource that data item in the memory which has the highest priority.
14. A system according to any one of claims 11-13, further comprising a plurality of data providers, each of the plurality of data providers being adapted to:
- provide data items, each having been assigned one of the priorities, and
- provide information relating to one or more data items ready to be transmitted from the data provider to the shared resource,
wherein the transmitting means is adapted to instruct a data provider to transmit a data item having the highest non-awaiting priority to the resource, and to subsequently prevent transmission, from any data provider, of ready data items having an awaiting priority, until a data item from each non-awaiting lower priority has been transmitted from a data provider to the shared resource.
15. A system according to claim 14, wherein the data providers are adapted to transmit a data item when instructed to do so, and wherein the transmitting means is adapted to instruct a data provider having a ready data item with the highest non-awaiting priority to transmit that data item to the shared resource, and to subsequently not instruct data providers having ready data items of an awaiting priority to transmit any data items of the awaiting priority, until data providers have been instructed to transmit a data item from each non-awaiting lower priority.
16. A system according to any one of claims 11-15, wherein the receiving means is adapted to receive the data items and provide the data items to a storage or memory, and wherein the transmitting means is adapted to transmit data items from the storage or memory.
17. A system according to any one of claims 10-16, wherein the shared resource is a link, a memory, a processor, an integrated circuit or a crossbar switch.
18. A system according to any one of claims 11-17, wherein the transmitting means is adapted to have a priority await only after a plurality of data items of that priority have been transmitted to the resource.
19. A system for controlling access to a shared resource, the system comprising:
- receiving means adapted to receive information relating to one or more data items to be transmitted, each data item having been assigned one of a plurality of different predetermined priorities;
- for each priority, one or more transmission processes, each piece of information relating to a data item being assigned to a transmission process of the pertaining priority; and
- transmitting means adapted to repeatedly provide data items to the resource by:
o transmitting to the resource a data item to be transmitted the information relating to which has been assigned to a non-awaiting transmission process of the highest priority, and
o subsequently having that transmission process await the transmission of a data item from each lower priority which has been assigned information relating to a data item to be transmitted and of which at least one transmission process is non-awaiting.
20. A system according to any one of claims 11-19, wherein the transmitting means is adapted to have the priority/queue/transmission process await only the transmission of those data items for which information had been received at the point in time of the transmission of the transmitted data item.
CN2006800061634A 2005-02-28 2006-02-28 A method of and a system for controlling access to a shared resource Expired - Fee Related CN101129033B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US65637705P 2005-02-28 2005-02-28
DKPA200500304 2005-02-28
US60/656,377 2005-02-28
DKPA200500304 2005-02-28
PCT/DK2006/000119 WO2006089560A1 (en) 2005-02-28 2006-02-28 A method of and a system for controlling access to a shared resource

Publications (2)

Publication Number Publication Date
CN101129033A true CN101129033A (en) 2008-02-20
CN101129033B CN101129033B (en) 2012-10-10

Family

ID=35057004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2006800061634A Expired - Fee Related CN101129033B (en) 2005-02-28 2006-02-28 A method of and a system for controlling access to a shared resource

Country Status (2)

Country Link
CN (1) CN101129033B (en)
AT (1) ATE510385T1 (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1181438C (en) * 2001-01-18 2004-12-22 深圳市中兴集成电路设计有限责任公司 Method for controlling access of asynchronous clock devices to shared storage device
JP2005018224A (en) * 2003-06-24 2005-01-20 Matsushita Electric Ind Co Ltd Conflict controller

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102347891A (en) * 2010-08-06 2012-02-08 高通创锐讯通讯科技(上海)有限公司 Method for using shared cache
CN103348640A (en) * 2011-07-22 2013-10-09 松下电器产业株式会社 Relay device
CN103348640B (en) * 2011-07-22 2016-11-23 松下知识产权经营株式会社 Relay
CN112540841A (en) * 2020-12-28 2021-03-23 智慧神州(北京)科技有限公司 Task scheduling method and device, processor and electronic equipment
CN112540841B (en) * 2020-12-28 2021-11-12 智慧神州(北京)科技有限公司 Task scheduling method and device, processor and electronic equipment

Also Published As

Publication number Publication date
CN101129033B (en) 2012-10-10
ATE510385T1 (en) 2011-06-15

Similar Documents

Publication Publication Date Title
US7827338B2 (en) Method of and a system for controlling access to a shared resource
US8223650B2 (en) Express virtual channels in a packet switched on-chip interconnection network
Feliciian et al. An asynchronous on-chip network router with quality-of-service (QoS) support
Bauer et al. Applying Trajectory approach with static priority queuing for improving the use of available AFDX resources
DE602006000516T2 (en) Architecture of a communication node in a globally asynchronous network on-chip system
US20030163593A1 (en) Method and system for implementing a fair, high-performance protocol for resilient packet ring networks
CN103238302A (en) Repeater, method for controlling repeater, and program
CN110166380A (en) Method, first network equipment and the computer readable storage medium of schedules message
Zhou et al. Insight into the IEEE 802.1 Qcr asynchronous traffic shaping in time sensitive network
CN1614956B (en) Method and apparatus for scheduling prior packet level
CN109639560A (en) Improve the method for the availability of real-time computer networks
Fan et al. Guaranteed real-time communication in packet-switched networks with FCFS queuing
CN101129033B (en) A method of and a system for controlling access to a shared resource
Kranich et al. NoC switch with credit based guaranteed service support qualified for GALS systems
Lee Real-time wormhole channels
US20130286825A1 (en) Feed-forward arbitration
Kostrzewa et al. Supervised sharing of virtual channels in Networks-on-Chip
CN102611924A (en) Flow control method and system of video cloud platform
Liu et al. Highway in tdm nocs
Cobb et al. A theory of multi‐channel schedulers for quality of service
US7839774B2 (en) Data processing circuit wherein data processing units communicate via a network
Berejuck et al. Adding mechanisms for QoS to a network-on-chip
CUI et al. A hybrid service scheduling strategy of satellite data based on TSN
Jiang Per-domain packet scale rate guarantee for Expedited Forwarding
Nguyen et al. A novel priority-driven arbiter for the router in reconfigurable Network-on-Chips

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121010

Termination date: 20150228

EXPY Termination of patent right or utility model