CN100508502C - Stream queue-based extensible device for CAM-based broadband network service stream - Google Patents

Stream queue-based extensible device for CAM-based broadband network service stream

Info

Publication number
CN100508502C
CN100508502C (grant) · CN101009645A (application publication) · application number CNB2006101655896A / CN200610165589A
Authority
CN
China
Prior art keywords
ram
module
stream
cam
pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006101655896A
Other languages
Chinese (zh)
Other versions
CN101009645A (en)
Inventor
胡成臣 (Hu Chengchen)
刘斌 (Liu Bin)
陈雪飞 (Chen Xuefei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNB2006101655896A priority Critical patent/CN100508502C/en
Publication of CN101009645A publication Critical patent/CN101009645A/en
Application granted granted Critical
Publication of CN100508502C publication Critical patent/CN100508502C/en


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

This invention belongs to the field of network service-flow management. It uses a physical storage device (RAM) to buffer data packets, a CAM to store and look up the mapping between active service flows and physical queues, and an FPGA/ASIC part comprising a write module, a RAM management module, a scheduler module and a lookup module to implement dynamic sharing of the physical queues. The invention reduces the number of required queues to less than 1% of the total number of coexisting flows, and reduces cost and power consumption to less than 6.5% of those of the conventional method.

Description

CAM-based scalable per-flow queuing device for broadband network service flows
Technical field
The present invention is a CAM-based device that realizes scalable per-flow queuing at line speed. It can be applied in computer-network forwarding equipment such as routers and switches to provide quality-of-service (QoS) guarantees, and it belongs to the field of network service-flow management.
Background technology
Performing buffer management by per-flow queuing in forwarding equipment such as routers and switches makes it possible to strictly guarantee quality of service (QoS) between different service flows. Traditionally, per-flow queuing requires physically isolated queues for different flows, that is, a separate physical queue is maintained for each flow. Network measurements show that in a high-speed broadband network (for example, a 2.5 Gbps or 10 Gbps link) more than one million active service flows can coexist simultaneously, so a per-flow queuing scheme would have to maintain up to a million physical queues. The management unit would then occupy too many resources and spend too much processing time, making the scheme impractical. Therefore, although per-flow queuing can provide QoS guarantees, it has traditionally been considered non-scalable and unrealizable in high-speed broadband networks.
Although more than one million active service flows can coexist on a high-speed broadband link, simulations on real network traces show that the number of queues that are non-empty at any given instant does not exceed the order of hundreds or a few thousand. The intuitive explanation is that each packet stays in forwarding equipment such as a router for only a very short time (milliseconds or even nanoseconds), and whenever the interval between packets of the same flow is larger than this, the flow's queue becomes temporarily empty. Based on this observation, the present invention proposes a CAM-based scalable device that physically maintains only a small number of queues (for example, fewer than 1k). By dynamically sharing these queues it guarantees that, at any moment, each active service flow is assigned its own queue in the forwarding equipment, so that no queue holds packets from multiple flows, thereby realizing per-flow queuing and its service-quality guarantee.
Summary of the invention
In a per-flow queuing storage-management system, to determine how many resources must be reserved, the number of flows transmitted on a link is measured as follows: 1) packets are classified by the 5-tuple in the packet header, and packets with different 5-tuples belong to different flows; 2) whether a flow is in transmission is judged as follows: when the first packet of a flow arrives, the flow is considered to have started transmission; to detect whether a flow has finished, a timeout τ is set, and if no packet of the flow arrives within τ, the transmission of the flow is considered to be over. In the measurements τ was set to 60 seconds or longer. The results show that the number of flows passing through a node can reach hundreds of thousands or even millions. Based on this result, if a per-flow queuing system maintained one queue per flow, the number of queues would also reach hundreds of thousands or millions, and per-flow storage management would become unrealizable in a high-speed router. In practice, however, during the transmission of a flow there are moments at which the flow's queue holds no packets; although the flow has not finished at these moments, the packet-storage system does not need to keep a queue for it. Experiments and modeling show that the number of queues actually occupied is far smaller than the number of flows transmitted simultaneously, so the packet-storage system only needs to implement a small number of queues.
The present invention proposes a CAM-based device that realizes scalable per-flow queuing at line speed in high-speed broadband networks, to be used in computer-network forwarding equipment such as routers and switches to provide quality-of-service (QoS) guarantees. The device reduces the required number of queues to below 1% of that of the traditional approach and, at the same time, reduces cost and power consumption to below 6.5% of the traditional approach. The invention mainly solves the following three problems so that a small number of physical queues can be dynamically shared by a large number of flows, thereby providing the function of per-flow queuing:
1) because a large number of flows share a small number of physical queues, the invention maintains a table in the CAM that records the correspondence between active flows and physical queues;
2) when a packet arrives, the invention looks up whether the packet's flow is in the table, i.e., whether the flow already occupies a physical queue; if so, the packet is appended to the tail of that physical queue; if not, a new physical queue is allocated first and the packet is then enqueued;
3) the physical queues are maintained and managed with a linked-list structure that separates the packet data from the physical storage.
In a high-speed router, the maintenance and lookup of this table must be completed in a very short time, which makes the implementation challenging.
The invention is characterized in that it contains: a physical storage device RAM; a CAM used to maintain the mapping from active flows to physical queues; and an FPGA/ASIC part that implements the dynamic-sharing logic, wherein:
the RAM is connected to the FPGA/ASIC part implementing the dynamic-sharing logic, the CAM is connected to the FPGA/ASIC part implementing the dynamic-sharing logic, and the dynamic-sharing logic is connected to both the CAM and the RAM;
the physical storage device RAM is used to buffer data packets and may be SRAM, DRAM, or any other physical storage device;
the content-addressable memory CAM is used to store and look up the mapping between active service flows and physical queues and may be a BCAM, a TCAM, or any other content-addressable memory;
The dynamic-sharing logic contains a write module, a RAM management module, a scheduler module and a lookup module; its workflow is as follows (an illustrative software sketch of this workflow is given after the connection description below):
a) a data packet first arrives at the write module, which sends the packet's flow number to the lookup module to request the corresponding physical queue;
b) the lookup module searches the CAM for the entry corresponding to the flow number; if it exists, the physical queue number is returned to the write module; if not, a free queue number is requested from the RAM management module, returned to the write module, and the corresponding entry is added to the CAM;
c) after the write module obtains the physical queue number, it obtains the address of a free block from the RAM management module and writes the packet into the RAM; the RAM management module updates the tail pointer of that physical queue and the free-block stack-top pointer;
d) the scheduler module schedules the physical queues: it queries the RAM management module for the head pointer of the currently scheduled queue and reads the data from the RAM; at the same time the RAM management module returns the data block to the free-block stack, updates the free-block stack-top pointer, and updates the head pointer of the corresponding physical queue;
e) when the currently scheduled queue becomes empty, the scheduler module notifies the lookup module to delete the corresponding entry from the CAM;
The write module is connected to the RAM management module, the lookup module and the RAM; the RAM management module is connected to the write module, the scheduler module and the lookup module; the scheduler module is connected to the lookup module, the RAM management module and the RAM; the lookup module is connected to the CAM.
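The following C sketch illustrates the control flow of steps a) through e) in software. It is only an interface-level analogue of the hardware modules, not the implementation of the device; all function names (lookup_cam, alloc_physical_queue, alloc_free_block and so on) are hypothetical stand-ins for the write, lookup, RAM-management and scheduler modules described above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical interfaces standing in for the lookup module, the RAM
 * management module and the scheduler module; bodies are omitted. */
int  lookup_cam(uint32_t flow_id);            /* physical queue number, or -1 on miss        */
int  alloc_physical_queue(uint32_t flow_id);  /* take a free queue and add the CAM entry     */
int  alloc_free_block(void);                  /* pop a free block off the free-block stack   */
void write_block(int block, const void *du);  /* write one data unit into RAM                */
void append_block(int queue, int block);      /* update the queue's tail pointer             */
int  pop_head_block(int queue);               /* read and unlink the block at the head       */
void recycle_block(int block);                /* push the block back on the free-block stack */
bool queue_empty(int queue);
void delete_cam_entry(int queue);             /* invalidate the flow-to-queue mapping        */

/* Enqueue path: steps a) to c). */
void on_packet_arrival(uint32_t flow_id, const void *du)
{
    int q = lookup_cam(flow_id);              /* b) search the CAM for the flow number       */
    if (q < 0)
        q = alloc_physical_queue(flow_id);    /*    miss: allocate a queue, add a CAM entry  */
    int blk = alloc_free_block();             /* c) get a free block address                 */
    write_block(blk, du);                     /*    store the data unit in RAM               */
    append_block(q, blk);                     /*    tail pointer of the queue moves          */
}

/* Dequeue path: steps d) and e), driven by the scheduler. */
void on_queue_scheduled(int q, void (*emit)(int block))
{
    int blk = pop_head_block(q);              /* d) read the block at the head pointer       */
    emit(blk);                                /*    forward the data unit                    */
    recycle_block(blk);                       /*    block goes back to the free-block stack  */
    if (queue_empty(q))
        delete_cam_entry(q);                  /* e) queue drained: drop its CAM entry        */
}
```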
In the described CAM-based per-flow queuing device for broadband network service flows, the RAM management module manages the data structure in which packets are stored in the RAM, provides the mapping between physical queues and addresses to the write module and the scheduler module, and, through the lookup module, adds to and deletes from the CAM the entries mapping virtual queues (i.e., active flows) to physical queues.
In the described device, the RAM management module uses a linked list to allocate and reclaim free blocks: it maintains a head pointer pointing to the head of the free-block list, a tail pointer pointing to the tail of the free-block list, and, for each free block, a next-hop pointer pointing to the next free block.
In the described device, the RAM management module uses a linked list to manage the physical queues: for each physical queue it maintains a head pointer pointing to the head of the queue, a tail pointer pointing to the tail of the queue, and, for each data block of the queue, a next-hop pointer pointing to the next data block.
In the described device, the RAM management module uses on-chip SRAM inside the FPGA/ASIC to store the free-block list and the next-hop pointers and head/tail pointers of each physical queue; the next-hop pointers and head/tail pointers correspond one-to-one to the memory blocks into which the off-chip RAM is divided.
In the described device, the RAM management module divides the RAM storage space into fixed-size memory blocks whose size equals that of one data transmission unit; a data transmission unit is a fixed-size unit into which a variable-length packet transmitted over the network is cut after entering the network forwarding equipment, and the data transmission units are reassembled into the original variable-length packet when leaving the network forwarding equipment.
In the described device, the scheduler module is responsible for packet scheduling: it obtains the address of the head element of a physical queue from the RAM management module and reads it from the RAM.
In the described device, the lookup module uses the flow number provided by the write module and the information stored in the CAM to judge whether the corresponding flow currently occupies a physical queue, i.e., whether it is active.
The scalable device proposed by the present invention realizes dynamic sharing of physical queues based on a CAM device, so that per-flow queuing of a large number of service flows is achieved with a limited number of physical queues. While providing the same per-flow queuing performance as the conventional method, the device reduces the required number of queues to below 1% of the total number of coexisting flows, and its cost and power consumption are reduced to below 6.5% of those of the static method.
Experimental and modeling verification
In the structure of Fig. 1, if the packet storage space is divided into A memory blocks, the next-pointer storage occupies A·log2(A)/8 bytes; the space occupied by the head and tail pointers is determined by the number of queues, and if the number of queues is Q, the head and tail pointers occupy 2·Q·log2(A)/8 bytes. Assume the packet memory capacity is S = 2 Mbytes and the memory-block size is B = 64 bytes; then A = 32000. If the device of the present invention is not adopted and one queue is allocated for each flow transmitted on the link, then for a flow count N = 1,000,000 the number of queues is Q = 1,000,000, and the space required by the RAM management (including the next pointers and the head/tail pointers) reaches 3.8 Mbytes, which is difficult to integrate on-chip; even with off-chip SRAM, a large-capacity SRAM device is needed to meet the speed requirement (cheap large-capacity DRAM cannot meet it), which increases the cost. If the method of the present invention is adopted, the number of physical queues implemented in hardware is reduced, for example to Q = 1000; the complexity of the logical data structure decreases, and the space required by the RAM management drops to 62 Kbytes.
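The figures above can be checked with the following short C program, which reproduces the arithmetic under the stated assumptions (2 Mbyte packet memory, 64-byte blocks, A = 32000 as in the text, 15-bit block addresses); the constants are taken from the paragraph above, and the program is only a sanity check, not part of the device.

```c
#include <stdio.h>

int main(void)
{
    double A = 32000.0;                   /* number of memory blocks (2 MB / 64 B)    */
    double addr_bits = 15.0;              /* width of a block address, about log2(A)  */

    /* next pointers: one block address per memory block */
    double next_bytes = A * addr_bits / 8.0;

    /* head and tail pointers: two block addresses per queue */
    double q_static  = 1000000.0;         /* static method: one queue per flow        */
    double q_dynamic = 1000.0;            /* this device: shared physical queues      */
    double static_total  = next_bytes + 2.0 * q_static  * addr_bits / 8.0;
    double dynamic_total = next_bytes + 2.0 * q_dynamic * addr_bits / 8.0;

    printf("static  (Q = 1,000,000): %.0f bytes  (~3.8 MB)\n", static_total);
    printf("dynamic (Q = 1,000):     %.0f bytes  (~62 KB)\n", dynamic_total);
    return 0;
}
```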
To verify the proposed device, experiments were carried out with two real network traffic traces from NLANR (http://pma.nlanr.net/Special/): 1) real traffic 1: an OC-48 link with 2.5 Gbps capacity, a 10-minute trace recorded from 10:00 a.m. on August 14, 2002, on the link from Cleveland to Indianapolis; 2) real traffic 2: an OC-192 link, a 10-minute trace recorded from 8:00 p.m. on June 1, 2004, on the link from Chicago to Indianapolis. Both traces are 10 minutes long; their traffic over time is shown in Figure 2. Statistically, the average rate of real traffic 1 over the 10 minutes is 430 Mbps, and that of real traffic 2 is 730 Mbps. The number of simultaneously transmitted flows is shown in Figure 3: the flow count of real traffic 1 fluctuates around 300,000, and that of real traffic 2 around 360,000.
The simulation enqueues packets into different queues according to their arrival times; the scheduler distributes bandwidth fairly among the occupied physical queues using round-robin: each time a physical queue is scheduled, it transmits a fixed length of data (1500 bytes), and the scheduler then polls the next physical queue. The performance parameters collected in the experiments are as follows:
1) Average rate: the ratio of the amount of data passing through the node to the length of a given time period, denoted S;
2) Number of coexisting flows: with a timeout τ, the number of flows in transmission, denoted N_s(τ);
3) Number of active flows: the number of flows occupying a queue at a given moment, i.e., the number of occupied queues, denoted N_a.
During the statistics, N_a and N_s(τ) are sampled every 25 ns, and from the samples Max{N_a} and Max{N_s(τ)} over a given period (for example 1 second or 10 minutes) are obtained. The load L of the system is defined as the ratio of the average rate of the actual service traffic to the output bandwidth; since the average rate of the traffic itself cannot be changed, the simulation adjusts L by changing the output bandwidth, so that the performance parameters under different loads can be collected.
Figure 4 shows how the number of occupied physical queues changes over time when the system load L is set to 0.97; the solid line is the curve of real traffic 1 and the dotted line is the curve of real traffic 2. As can be seen from the figure, the maximum number of occupied queues for real traffic 1 is only 89, and for real traffic 2 only 406.
Comparing Fig. 4 with the number of coexisting flows in Fig. 3 shows that the device of the present invention greatly reduces the number of physical queues that need to be provisioned.
The system load is another factor that influences queue occupancy. To analyze queue occupancy under different loads, the experiments change the system load by changing the output bandwidth; Figure 5 and Figure 6 show the physical-queue occupancy of real traffic 1 and real traffic 2 under loads of 0.5, 0.75 and 0.97, respectively. As the figures show, as the load decreases, the number of occupied physical queues decreases as well. Even under heavy load, the queue occupancy is far smaller than the number of flows passing through the node. According to the simulation results, the device only needs to reserve a small number of physical queues to satisfy the per-flow queuing requirement under different loads, even near full load.
Although the invention greatly reduces the number of physical queues in the packet-storage system, it requires a CAM device, whose cost and power consumption are relatively high. Analysis shows, however, that introducing the CAM allows the capacity of the original memory to be reduced considerably, so the device in fact achieves a lower overall cost and power consumption. The quantitative analysis is as follows.
The sizes of the storage device and of the next-pointer SRAM are determined mainly by the memory-capacity estimate; the following therefore discusses only the cost and power consumption of the head/tail pointers, the CAM and the stack structure. Typically, the cost and power consumption of a CAM are about 10 times those of an SRAM of the same size; let the cost per bit of SRAM be C_SRAM, so that the cost per bit of CAM is 10·C_SRAM. Because power consumption and cost are essentially linear in capacity, only cost is described here; the description for power consumption is similar.
For comparison, if the device of the present invention is not used, per-flow queuing requires classifying packets according to a preset rule and reserving queues at the granularity of that classification; to satisfy per-flow queuing, the number of reserved queues usually approaches the number of flows actually transmitted. To distinguish it from the device of the present invention, this approach is called the static method. In the static method the RAM management unit needs only head and tail pointers, without a CAM device or a stack structure; if the number of queues is Q_s, its storage occupies 2·Q_s·log2(A) bits, where A is the number of memory blocks, for a cost of C_s = 2·C_SRAM·Q_s·log2(A).
For the method of the present invention, if the number of queues is Q_d, the head/tail-pointer SRAM occupies 2·Q_d·log2(A) bits and costs 2·C_SRAM·Q_d·log2(A), determined mainly by the number of queues Q_d. The size of the CAM is Q_d·L_f, where L_f is the length of the flow number, and its cost is 10·C_SRAM·Q_d·L_f. The stack structure occupies Q_d·log2(Q_d) bits, at a cost of C_SRAM·Q_d·log2(Q_d). The cost of the whole system is therefore
C_d = C_SRAM·Q_d·(2·log2(A) + 10·L_f + log2(Q_d))
To simplify the comparison, assume that log2(A), log2(Q_d) and L_f are all equal, 4 bytes (32 bits) each; then C_d = 13·C_SRAM·Q_d·log2(Q_d).
In summary, when Q_d < 2·Q_s/13, the cost and power consumption of the device of the present invention are lower than those of the static method, whose number of queues approaches the number of flows actually transmitted.
In summary, while guaranteeing per-flow queuing performance, the device of the present invention reduces the required number of queues to below 1% of the total number of coexisting flows, and at the same time its cost and power consumption are reduced to below 6.5% of those of the static method.
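A minimal numeric check of the cost model, under the same simplifying assumption as above (log2 A, log2 Q_d and L_f all equal to one word width w); the ratio C_d/C_s then depends only on Q_d/Q_s, and reducing the queue count to 1% of the number of coexisting flows yields the 6.5% figure quoted in the text.

```c
#include <stdio.h>

int main(void)
{
    /* Cost model from the text, with log2(A) = log2(Qd) = Lf = w bits:
     *   static method: Cs = 2 * Csram * Qs * w
     *   this device:   Cd = Csram * Qd * (2w + 10w + w) = 13 * Csram * Qd * w
     * so Cd/Cs = 13 * Qd / (2 * Qs), independent of Csram and w.        */
    double qs = 1000000.0;               /* static method: one queue per coexisting flow */
    double qd = 0.01 * qs;               /* this device: 1% of the coexisting flows      */

    double ratio = 13.0 * qd / (2.0 * qs);
    printf("Cd/Cs = %.3f  (%.1f%% of the static method)\n", ratio, 100.0 * ratio);

    /* break-even: Cd < Cs whenever Qd < 2*Qs/13 */
    printf("break-even queue count: %.0f\n", 2.0 * qs / 13.0);
    return 0;
}
```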
Description of drawings
Fig. 1 Implementation of the queue structure under the block storage scheme.
Fig. 2 Real traffic volume over time; the solid line is real traffic 1 and the dotted line is real traffic 2.
Fig. 3 Number of coexisting flows over time; the solid line is real traffic 1 and the dotted line is real traffic 2.
Fig. 4 Number of occupied physical queues over time; the solid line is real traffic 1 and the dotted line is real traffic 2.
Fig. 5 Number of physical queues occupied by real traffic 1 over time under different loads.
Fig. 6 Number of physical queues occupied by real traffic 2 over time under different loads.
Fig. 7 System architecture diagram.
Fig. 8 Hardware implementation of the device of the present invention.
Embodiment
First, two flow states are defined: 1) active state: when a flow has packets stored in a physical queue, the flow is considered active; 2) silent state: when all packets of a flow stored in the forwarding equipment have been forwarded and no new packet of the flow has arrived, the flow is considered silent. According to these definitions, a flow occupies a physical queue only while it is active; a mapping VQ_n → PQ_q can then be established between its virtual queue VQ_n (i.e., the currently active flow) and its physical queue PQ_q, and the set {VQ_n → PQ_q} is called the active-flow table. When a flow changes from active to silent, the mapping from its virtual queue to its physical queue must be deleted from the active-flow table and the corresponding physical queue released; when a flow changes from silent to active, a new physical queue must be allocated for it and the mapping from its virtual queue to that physical queue added to the active-flow table. The pseudocode of the processing flow is as follows:
On arrival of packet p:
1. obtain the flow number n of p;
2. look up PQ_q through VQ_n;
3. if PQ_q does not exist { allocate PQ_q; add the mapping VQ_n → PQ_q };
4. store packet p into PQ_q.
On departure of packet p:
5. read packet p from PQ_q;
6. if PQ_q is empty { release PQ_q; delete the mapping VQ_n → PQ_q }.
When a packet arrives, it must first be determined whether the packet's flow is active. The lookup procedure searches the active-flow table for VQ_n. If it exists, the physical queue PQ_q is found through the mapping VQ_n → PQ_q; if not, a physical queue PQ_q is allocated for the flow and the mapping is established. The packet is then stored in the corresponding physical queue. Because scheduling is performed among the physical queues, when a packet leaves, the scheduled physical queue is read directly and no lookup is needed; if the physical queue becomes empty at this point, the queue is immediately reclaimed and the corresponding mapping VQ_n → PQ_q is deleted.
The system architecture of the device of the present invention is shown in Figure 7. The RAM is the physical storage device that stores the packets, and the CAM stores and maintains the mappings VQ_n → PQ_q. The shaded FPGA/ASIC part comprises the write module, the RAM management module, the scheduler module and the lookup module, wherein:
the write module requests a free address from the RAM management module according to the result returned by the lookup module, and writes the packet into the RAM;
the RAM management module manages the data structure in which packets are stored in the RAM, provides the mapping between physical queues and addresses to the write module and the scheduler module, and through the lookup module adds to and deletes from the CAM the entries relevant to VQ_n → PQ_q. The concrete mechanism is described in detail in the section "Implementation of the physical queues" below;
the scheduler module is responsible for packet scheduling: it obtains the address of the head element of a physical queue from the RAM management module and reads it from the RAM;
the lookup module uses the flow number VQ_n provided by the write module and the information stored in the CAM to judge whether the flow currently occupies a physical queue; if so, it returns the corresponding physical queue number PQ_q; otherwise it requests a free queue number from the RAM management module, returns it to the write module, and adds the corresponding entry to the CAM. The concrete mechanism is described in detail in the section "Dynamic queue sharing" below.
1) Implementation of the physical queues
For convenience of management, the RAM management unit usually divides the packet storage space into fixed-size memory blocks. What is transmitted over the network, however, are variable-length packets, so a packet must be cut into fixed-size units, called data units (DU for short); these DUs are reassembled into the original variable-length packet before leaving the forwarding equipment. The size of the memory blocks into which the RAM is divided is the same as that of a DU, so one DU occupies exactly one memory block.
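As an illustration of the segmentation into data units, the following sketch computes how many fixed-size DUs a variable-length packet occupies; the 64-byte block size is the value used in the experiments above, and the helper name du_count is hypothetical.

```c
#include <stdio.h>

#define DU_SIZE 64   /* memory block size = data unit size, as in the experiments */

/* Number of fixed-size data units needed for a packet of 'len' bytes.
 * The last DU is padded; the original length is kept so that the packet
 * can be reassembled when it leaves the forwarding equipment.           */
static unsigned du_count(unsigned len)
{
    return (len + DU_SIZE - 1) / DU_SIZE;
}

int main(void)
{
    unsigned lengths[] = { 40, 64, 576, 1500 };   /* typical IP packet sizes */
    for (int i = 0; i < 4; i++)
        printf("%4u-byte packet -> %2u data units\n", lengths[i], du_count(lengths[i]));
    return 0;
}
```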
In the packet memory, an unoccupied memory block is called a free block. When a packet arrives, the RAM management unit allocates free blocks and stores the packet into them; when a packet leaves, the memory blocks it occupied become free blocks again and are reclaimed by the RAM management unit.
Free blocks are allocated and reclaimed with a linked list: a head pointer points to the head of the free-block list, a tail pointer points to its tail, and each free block keeps a next pointer pointing to the next free block. The queues are implemented in the same way, so if there are Q queues the RAM management unit maintains Q+1 linked lists. The present invention stores the next pointers and the head/tail pointers in SRAM inside the FPGA/ASIC, while the packets themselves are stored in the external RAM device; the next pointers in the on-chip SRAM correspond one-to-one to the memory blocks into which the off-chip RAM is divided, as shown in Figure 1. Because the head and tail pointers of the free-block queue are updated frequently and are few in number, they are kept in registers to improve the efficiency of the RAM management unit.
When a packet arrives or leaves, the pointers must be updated; two kinds of update are involved. 1) Update of the next pointers, performed per DU. When a DU arrives, an enqueue operation is performed: a free block is taken from the free-block queue and the data are written into it, and the next pointer of the block holding the previous DU is set to point to the block holding the current DU. When a DU leaves, a dequeue operation is performed: the block is appended to the tail of the free-block queue, and the next pointer of the previous free block is updated to point to the reclaimed block. 2) Update of the head and tail pointers, performed per packet. When a new packet arrives and is stored, the tail of its queue changes and the tail pointer is updated to the new position; when a packet leaves, the head of the queue changes and the head pointer is updated to the new position. The free-block queue goes through the opposite process: after a packet leaves and its blocks are given back, the tail of the free-block queue changes and its tail pointer must be updated; when a packet arrives and free blocks are allocated, the head of the free-block queue changes and its head pointer must be updated.
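The pointer updates described above can be illustrated with the following C sketch, which keeps the free-block list and one physical queue as linked lists over the same array of next pointers (one entry per memory block, mirroring the one-to-one correspondence with the off-chip RAM blocks). Sizes and names are illustrative only, and error handling (e.g., an exhausted free list) is omitted.

```c
#include <stdio.h>

#define NBLOCKS 16             /* illustrative; the real device has A blocks      */
#define NIL     (-1)

static int next_ptr[NBLOCKS];  /* on-chip SRAM: one next pointer per memory block */
static int free_head, free_tail;   /* head/tail pointers of the free-block list   */
static int q_head, q_tail;         /* head/tail pointers of one physical queue    */

static void init(void)
{
    for (int i = 0; i < NBLOCKS - 1; i++) next_ptr[i] = i + 1;
    next_ptr[NBLOCKS - 1] = NIL;
    free_head = 0; free_tail = NBLOCKS - 1;
    q_head = q_tail = NIL;
}

/* DU arrival: take a block from the free-list head and append it to the queue tail. */
static int enqueue_block(void)
{
    int blk = free_head;
    free_head = next_ptr[blk];
    next_ptr[blk] = NIL;
    if (q_head == NIL) q_head = blk;           /* first block of the queue  */
    else               next_ptr[q_tail] = blk; /* link from the previous DU */
    q_tail = blk;                              /* tail pointer moves        */
    return blk;
}

/* DU departure: pop the queue head and give the block back to the free-list tail. */
static int dequeue_block(void)
{
    int blk = q_head;
    q_head = next_ptr[blk];                    /* head pointer moves        */
    if (q_head == NIL) q_tail = NIL;
    next_ptr[blk] = NIL;
    next_ptr[free_tail] = blk;                 /* recycle the block         */
    free_tail = blk;
    return blk;
}

int main(void)
{
    init();
    int a = enqueue_block(), b = enqueue_block();
    printf("two DUs stored in blocks %d and %d\n", a, b);
    printf("blocks %d and %d released back to the free list\n",
           dequeue_block(), dequeue_block());
    return 0;
}
```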
2) Dynamic queue sharing
To share the Q queues, two kinds of queues are defined: 1) the Q queues implemented in hardware are called physical queues, denoted PQ_q, 0 ≤ q ≤ Q−1; 2) in the actual queuing process each active flow logically has its own queue, called a virtual queue, denoted VQ_n and identified by the flow number n, 0 ≤ n ≤ N−1, where the flow number n is obtained from the 5 fields of the packet header. The active-flow table can hold up to Q entries, so one lookup would require a large number of comparisons to determine whether a virtual queue VQ_n is in the table. As network interface speeds increase, the inter-arrival time of packets becomes smaller and smaller; the lookup over Q entries, and the update when an entry is added, must be completed within this time, which conventional storage and lookup methods cannot achieve (computed for 40-byte packets, the arrival interval is 32 ns at OC-192 speed and 8 ns at OC-768 speed). The present invention therefore uses a CAM (here a binary content-addressable memory): the CAM has Q storage entries, each holding either a flow number or an invalid flow number. If a CAM entry stores the flow number n, the address of that entry corresponds to the storage location of the tail pointer of the physical queue PQ_q, which realizes the mapping VQ_n → PQ_q. Because a CAM lookup completes in a single operation, the device can meet the lookup-time requirement.
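For clarity, the following sketch emulates the active-flow table in software: the CAM is modeled as an array of Q entries searched associatively, where a hit at entry q means the flow maps to physical queue PQ_q. In hardware all Q comparisons happen in parallel in a single CAM access; the loop below only stands in for that parallel compare, and INVALID_FLOW marks an entry holding no flow number. Names and sizes are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define Q            8                 /* illustrative; the device uses on the order of 1k queues */
#define INVALID_FLOW 0xFFFFFFFFu       /* marks a CAM entry that stores an invalid flow number    */

static uint32_t cam[Q];                /* entry q holds the flow number mapped to PQ_q */

/* Software stand-in for the single-operation parallel compare of the CAM:
 * returns the matching physical queue number, or -1 on a miss. */
static int cam_lookup(uint32_t flow)
{
    for (int q = 0; q < Q; q++)
        if (cam[q] == flow) return q;
    return -1;
}

int main(void)
{
    for (int q = 0; q < Q; q++) cam[q] = INVALID_FLOW;

    cam[3] = 42;                                         /* flow 42 is active and owns PQ_3 */
    printf("flow 42 -> PQ_%d\n", cam_lookup(42));        /* hit                             */
    printf("flow  7 -> %d (miss: allocate a new queue)\n", cam_lookup(7));

    cam[3] = INVALID_FLOW;                               /* PQ_3 drained: entry invalidated */
    printf("flow 42 -> %d (flow is now silent)\n", cam_lookup(42));
    return 0;
}
```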
Figure 8 shows the hardware implementation structure of the device. When a packet arrives, its flow number is sent to the CAM; after the lookup completes, the resulting address is sent to the SRAM that works with the CAM. If the lookup hits, this address is the storage location of the tail pointer of the physical queue in the SRAM; otherwise a new physical queue must be allocated. At this point the allocation of a free block is completed first and the packet is stored, while the allocation of the storage location of the head/tail pointers is completed in parallel; the head and tail pointers can be updated immediately after the packet has been stored.
The allocation, reclamation and lookup/update of physical queues are executed in parallel and are implemented with a stack. The stack-top pointer is kept in a register; on allocation a storage location is popped from the stack top, and on release the freed location is pushed onto the stack top. If a queue must be allocated and reclaimed at the same time, the stack need not be touched: the reclaimed physical queue is assigned directly to the corresponding flow. The CAM is operated as follows when allocating and reclaiming physical queues: when a physical queue is allocated to a flow, the flow number n is written into the corresponding CAM entry; when a physical queue becomes empty and the queue occupied by the flow is reclaimed, an invalid flow number is written into the corresponding entry; if a physical queue is reclaimed and allocated at the same time, it suffices to update the flow number at the corresponding position to the new flow number.
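The queue allocation and recycling just described can be sketched as a stack of free physical-queue numbers, with the stack-top pointer kept in a register in the hardware version; the shortcut for a simultaneous release-and-allocate (the reclaimed queue is handed straight to the new flow without touching the stack) is also shown. Names are illustrative.

```c
#include <stdio.h>

#define Q 8                        /* illustrative number of physical queues         */

static int free_stack[Q];
static int top;                    /* stack-top pointer (a register in the hardware) */

static void init(void)             /* initially every physical queue is free         */
{
    for (int q = 0; q < Q; q++) free_stack[q] = q;
    top = Q;
}

static int  alloc_queue(void)    { return free_stack[--top]; }   /* pop from the stack top  */
static void release_queue(int q) { free_stack[top++] = q; }      /* push onto the stack top */

/* Shortcut: a queue drains while another flow turns active in the same cycle,
 * so the reclaimed queue is reassigned directly and the stack is untouched.  */
static int swap_queue(int released_q) { return released_q; }

int main(void)
{
    init();
    int q1 = alloc_queue();
    printf("new active flow gets PQ_%d\n", q1);

    int q2 = swap_queue(q1);       /* simultaneous release and allocation */
    printf("reclaimed queue reassigned as PQ_%d without a stack access\n", q2);

    release_queue(q2);             /* later the flow goes silent          */
    printf("PQ_%d pushed back; %d queues free\n", q2, top);
    return 0;
}
```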

Claims (10)

1. A CAM-based scalable per-flow queuing device for broadband network service flows, characterized in that it contains: a physical storage device RAM; a CAM used to maintain the mapping from active flows to physical queues; and an FPGA/ASIC part that implements the dynamic-sharing logic, wherein:
the RAM is connected to the FPGA/ASIC part implementing the dynamic-sharing logic, the CAM is connected to the FPGA/ASIC part implementing the dynamic-sharing logic, and the FPGA/ASIC part implementing the dynamic-sharing logic is connected to both the CAM and the RAM; the physical storage device RAM is used to buffer data packets;
the content-addressable memory CAM is used to store and look up the mapping between active service flows and physical queues;
the FPGA/ASIC part implementing the dynamic-sharing logic contains: a write module, a RAM management module, a scheduler module and a lookup module; the write module is connected to the RAM management module, the lookup module and the RAM; the RAM management module is connected to the write module, the scheduler module and the lookup module; the scheduler module is connected to the lookup module, the RAM management module and the RAM; the lookup module is connected to the CAM;
the workflow of the CAM-based scalable per-flow queuing device for broadband network service flows is:
1) a data packet first arrives at the write module, which sends the packet's flow number to the lookup module to request the corresponding physical queue;
2) the lookup module searches the CAM for the entry corresponding to the flow number; if it exists, the physical queue number is returned to the write module; if not, a free queue number is requested from the RAM management module, returned to the write module, and the corresponding entry is added to the CAM;
3) after the write module obtains the physical queue number, it obtains the address of a free block from the RAM management module and writes the packet into the RAM, and the RAM management module updates the tail pointer of that physical queue and the free-block stack-top pointer;
4) the scheduler module schedules the physical queues: it queries the RAM management module for the head pointer of the currently scheduled queue and reads from the RAM the data pointed to by the head pointer; at the same time the RAM management module returns the data block to the free-block stack, updates the free-block stack-top pointer, and updates the head pointer of the corresponding physical queue;
5) when the currently scheduled queue becomes empty, the scheduler module notifies the lookup module to delete the corresponding entry from the CAM.
2. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the RAM management module manages the data structure in which packets are stored in the RAM, provides the mapping between physical queues and addresses to the write module and the scheduler module, and, through the lookup module, adds to and deletes from the CAM the entries mapping active flows to physical queues.
3. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the RAM management module uses a linked list to allocate and reclaim free blocks: it maintains a head pointer pointing to the head of the free-block list and a tail pointer pointing to the tail of the free-block list, and it also maintains, for each free block, a next-hop pointer pointing to the next free block.
4. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the RAM management module uses a linked list to manage the physical queues: for each physical queue it maintains a head pointer pointing to the head of the physical queue and a tail pointer pointing to the tail of the physical queue, and it also maintains, for each data block of the physical queue, a next-hop pointer pointing to the next data block.
5. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, 3 or 4, characterized in that the RAM management module uses on-chip SRAM inside the FPGA/ASIC to store the free-block list and the next-hop pointers and head/tail pointers of each physical queue, each next-hop pointer and head/tail pointer pointing to a corresponding memory block into which the off-chip RAM is divided.
6. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the RAM management module divides the RAM storage space into fixed-size memory blocks whose size equals that of one data transmission unit; a data transmission unit is a fixed-size unit into which a variable-length packet transmitted over the network is cut after entering the network forwarding equipment, and the data transmission units can be reassembled into the original variable-length packet when leaving the network forwarding equipment.
7. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the scheduler module is responsible for packet scheduling: it obtains the address of the head element of a physical queue from the RAM management module and reads the head element from the RAM.
8. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, characterized in that the lookup module uses the flow number provided by the write module and the information stored in the CAM to judge whether the service flow corresponding to that flow number is active.
9. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, wherein the physical storage device RAM may be SRAM or DRAM.
10. The CAM-based scalable per-flow queuing device for broadband network service flows according to claim 1, wherein the content-addressable memory CAM may be a TCAM or a BCAM.
CNB2006101655896A 2006-12-22 2006-12-22 Stream queue-based extensible device for CAM-based broadband network service stream Expired - Fee Related CN100508502C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101655896A CN100508502C (en) 2006-12-22 2006-12-22 Stream queue-based extensible device for CAM-based broadband network service stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101655896A CN100508502C (en) 2006-12-22 2006-12-22 Stream queue-based extensible device for CAM-based broadband network service stream

Publications (2)

Publication Number Publication Date
CN101009645A CN101009645A (en) 2007-08-01
CN100508502C true CN100508502C (en) 2009-07-01

Family

ID=38697788

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101655896A Expired - Fee Related CN100508502C (en) 2006-12-22 2006-12-22 Stream queue-based extensible device for CAM-based broadband network service stream

Country Status (1)

Country Link
CN (1) CN100508502C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824879A (en) * 2015-12-17 2016-08-03 深圳市华讯方舟软件技术有限公司 Migration method based on PostgreSQL block storage equipment

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101692209B (en) * 2009-11-09 2011-11-30 盛科网络(苏州)有限公司 Circuit design method and device for simulating TCAM by using embedded SRAM of FPGA
CN102045258B (en) * 2010-12-22 2012-12-12 北京星网锐捷网络技术有限公司 Data caching management method and device
CN102999434A (en) * 2011-09-15 2013-03-27 阿里巴巴集团控股有限公司 Memory management method and device
CN102437937B (en) * 2011-12-29 2014-04-09 北京锐安科技有限公司 Deep packet inspection method
CN103095595B (en) * 2012-12-30 2017-07-18 大连环宇移动科技有限公司 A kind of network data management method and system based on unidirectional parallel multilinked list
CN103795621B (en) * 2013-12-12 2017-02-15 华为技术有限公司 Virtual machine data exchange method and device, and physical host
CN105159837A (en) * 2015-08-20 2015-12-16 广东睿江科技有限公司 Memory management method
CN105630879B (en) * 2015-12-17 2019-03-26 深圳市华讯方舟软件技术有限公司 A kind of PostgreSQL block storage equipment module for reading and writing
CN107544819B (en) * 2016-06-29 2022-04-19 中兴通讯股份有限公司 Service implementation method and device for programmable device and communication terminal
CN106453141B (en) * 2016-10-12 2019-11-15 中国联合网络通信集团有限公司 Global Queue's method of adjustment, traffic stream queues method of adjustment and network system
CN108650189A (en) * 2018-04-03 2018-10-12 郑州云海信息技术有限公司 A kind of flow-balance controlling method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jason Podaima and Glenn Gulak, "A Self-Timed, Fully-Parallel Content Addressable Queue for Switching Applications," Proceedings of the IEEE 1999 Custom Integrated Circuits Conference, 1999. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824879A (en) * 2015-12-17 2016-08-03 深圳市华讯方舟软件技术有限公司 Migration method based on PostgreSQL block storage equipment
CN105824879B (en) * 2015-12-17 2019-06-28 深圳市华讯方舟软件技术有限公司 A kind of moving method based on PostgreSQL block storage equipment

Also Published As

Publication number Publication date
CN101009645A (en) 2007-08-01

Similar Documents

Publication Publication Date Title
CN100508502C (en) Stream queue-based extensible device for CAM-based broadband network service stream
CN100521655C (en) Dynamic sharing device of physical queue based on the stream queue
US6882642B1 (en) Method and apparatus for input rate regulation associated with a packet processing pipeline
US6757249B1 (en) Method and apparatus for output rate regulation and control associated with a packet pipeline
US6934250B1 (en) Method and apparatus for an output packet organizer
Zheng et al. An ultra high throughput and power efficient TCAM-based IP lookup engine
CN102045258B (en) Data caching management method and device
Iyer et al. Designing packet buffers for router linecards
CN1736068B (en) Flow management structure system
US7653072B2 (en) Overcoming access latency inefficiency in memories for packet switched networks
US7529224B2 (en) Scheduler, network processor, and methods for weighted best effort scheduling
CN101083622A (en) System and method for managing forwarding database resources in a switching environment
CN101714947B (en) Extensible full-flow priority dispatching method and system
US20080063004A1 (en) Buffer allocation method for multi-class traffic with dynamic spare buffering
US9769092B2 (en) Packet buffer comprising a data section and a data description section
CN101499956B (en) Hierarchical buffer zone management system and method
JP2000236344A (en) Atm switch and dynamic threshold setting method
CN100440854C (en) A data packet receiving interface component of network processor and storage management method thereof
WO2011085934A1 (en) A packet buffer comprising a data section and a data description section
Shah et al. Analysis of a statistics counter architecture
Lin et al. Route table partitioning and load balancing for parallel searching with TCAMs
US7474662B2 (en) Systems and methods for rate-limited weighted best effort scheduling
Wang et al. Block-based packet buffer with deterministic packet departures
CN100499563C (en) Increasing memory access efficiency for packet applications
Kabra et al. Fast buffer memory with deterministic packet departures

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090701

Termination date: 20161222