CN102857446A - Cache management method and cache management apparatus for Ethernet switching chip - Google Patents


Info

Publication number
CN102857446A
CN102857446A (application CN201110181173.4A)
Authority
CN
China
Prior art keywords
cell data
multicast
storage area
unicast
Prior art date
Legal status
Granted
Application number
CN201110181173.4A
Other languages
Chinese (zh)
Other versions
CN102857446B (en)
Inventor
赵培培
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201110181173.4A priority Critical patent/CN102857446B/en
Publication of CN102857446A publication Critical patent/CN102857446A/en
Application granted granted Critical
Publication of CN102857446B publication Critical patent/CN102857446B/en
Current legal status: Active
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a cache management method and a cache management apparatus for an Ethernet switching chip. The method comprises: collecting traffic statistics for unicast cell data and multicast cell data; dividing the cache space of each buffer into a unicast storage region and a multicast storage region according to the traffic ratio between the unicast cell data and the multicast cell data; and storing the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region. The technical scheme of the invention solves the prior-art problem that inflexible allocation of unicast and multicast cache space, together with poor scalability, increases the packet-loss probability, and thereby improves the utilization of both the unicast and multicast caches while allowing flexible expansion.

Description

Cache management method and apparatus for an Ethernet switching chip
Technical field
The present invention relates to the communications field, and in particular to a cache management method and apparatus for an Ethernet switching chip.
Background art
The switching network is the core of bandwidth switching equipment such as switches and routers. Mainstream commercial switching equipment currently adopts a shared-memory architecture, so caching is one of its most important key technologies.
Existing caching methods mainly fall into the following two kinds:
The first reserves a "reserved pool" of storage for each output port to hold unicast cells. A "shared pool" of cache resources also exists, one region of which is shared by all output ports and used only for storing unicast cells, while another region is used only for storing multicast cells. A unicast cell destined for a given port is preferentially placed in that port's exclusive reserved pool; if the reserved pool is full, it is placed in the unicast region of the shared pool.
The second method places the unicast cache and the multicast cache in separate memories, so the two caches are fully independent and do not affect each other.
Both methods are widely adopted storage schemes. The switching capacity and port counts of commercial switching chips are currently growing rapidly, but when the port count is expanded and the shared cache is built from multiple buffers, the prior art offers no method for solving the load-imbalance problem among the buffers; as a result, cache utilization is low, a large amount of storage is wasted, and cost increases.
Moreover, in both methods the unicast and multicast storage regions are fixed: once the cache space has been allocated it never changes, and cache space cannot be allocated dynamically according to the unicast and multicast traffic. For example, when the unicast traffic is much larger than the multicast traffic for some period, the occupancy of the multicast storage region is extremely low, while the unicast cache, configured relatively small at setup time, is overwhelmed by the traffic; unicast cells are then dropped for lack of cache space even though the multicast cache region sits idle and wasted. Thus, inflexible allocation of unicast/multicast cache space and poor scalability increase the packet-loss probability.
No effective solution to the above problems has yet been proposed.
Summary of the invention
A main purpose of the present invention is to provide a cache management method and apparatus for an Ethernet switching chip, so as to solve at least one of the above problems.
According to one aspect of the present invention, a cache management method for an Ethernet switching chip is provided, comprising: collecting traffic statistics for unicast cell data and multicast cell data; dividing the cache space of each buffer into a unicast storage region and a multicast storage region according to the traffic ratio of unicast to multicast cell data; and storing the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region.
Storing the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region comprises: sorting the unicast storage regions and multicast storage regions of all buffers by occupancy; allocating for the unicast cell data a free address in the least-occupied unicast storage region and storing the unicast cell data there; and allocating for the multicast cell data a free address in the least-occupied multicast storage region and storing the multicast cell data there.
When the least-occupied unicast storage region and the least-occupied multicast storage region are located in the same buffer and that buffer has a single write port, a free address in the least-occupied unicast storage region is allocated for the unicast cell data and the unicast cell data is stored there, while a free address in the next-least-occupied multicast storage region is allocated for the multicast cell data and the multicast cell data is stored there.
After dividing the cache space of each buffer into a unicast storage region and a multicast storage region according to the ratio of unicast to multicast traffic, the method further comprises: parsing the header of the unicast cell data, performing a route lookup, and determining the output-port information of the unicast cell data.
After storing the unicast cell data in the unicast storage region, the method further comprises: maintaining, for each output port, as many VIQs (Virtual Input Queues) as there are input ports, and enqueuing according to the storage address of the unicast cell data and the corresponding input/output-port information; scheduling according to the head-of-queue information of all VIQs; determining the dequeue cell address from the scheduling result and updating the head-of-queue information of the dequeued VIQ; reading the cell data at the dequeue cell address and sending it to the corresponding output port; and having the output port insert a check word into the received cell data and send it onto the data link.
After dividing the cache space of each buffer into a unicast storage region and a multicast storage region according to the ratio of unicast to multicast traffic, the method further comprises: parsing the header of the multicast cell data, performing a route lookup, and determining the bitmap information of the multicast cell data.
After storing the multicast cell data in the multicast storage region, the method further comprises: maintaining, for each output port, as many VIQs as there are input ports, and enqueuing according to the storage address and the corresponding bitmap information of the multicast cell data; scheduling according to the head-of-queue information of all VIQs; determining the dequeue cell address from the scheduling result and updating the head-of-queue information of the dequeued VIQ; reading the cell data at the dequeue cell address and sending it to the corresponding output port; and having the output port insert a check word into the received cell data and send it onto the data link.
Before collecting the traffic statistics for unicast and multicast cell data, the method further comprises: receiving the cell data stream and verifying it; parsing the headers of the cell data that passed verification, distinguishing unicast cell data from multicast cell data, sending the unicast cell data to the unicast channel, and sending the multicast cell data to the multicast channel.
According to another aspect of the present invention, a cache management apparatus for an Ethernet switching chip is provided, comprising: a traffic detection module, configured to collect traffic statistics for unicast cell data and multicast cell data; a storage-region division module, configured to divide the cache space of each buffer into a unicast storage region and a multicast storage region according to the ratio of unicast to multicast traffic; and a free-address management module, configured to store the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region.
The apparatus further comprises a cache-depth sorting module, configured to sort the unicast and multicast storage regions of all buffers by occupancy.
The free-address management module is further configured to allocate for the unicast cell data a free address in the least-occupied unicast storage region and store the unicast cell data there, and to allocate for the multicast cell data a free address in the least-occupied multicast storage region and store the multicast cell data there.
The free-address management module is further configured so that, when the least-occupied unicast storage region and the least-occupied multicast storage region are in the same buffer and that buffer has a single write port, it allocates for the unicast cell data a free address in the least-occupied unicast storage region and stores the unicast cell data there, while allocating for the multicast cell data a free address in the next-least-occupied multicast storage region and storing the multicast cell data there.
The apparatus further comprises: a unicast routing module, configured to parse the header of the unicast cell data, perform a route lookup, and determine the output-port information of the unicast cell data; a queue management module, configured to maintain, for each output port, as many VIQs as there are input ports, enqueue according to the storage address of the unicast cell data and the corresponding input/output-port information, determine the dequeue cell address from the scheduling result, and update the head-of-queue information of the dequeued VIQ; an output scheduling module, configured to schedule according to the head-of-queue information of all VIQs to generate the scheduling result, and to send the received dequeued cell data to the output-port module; and an output-port module, configured to insert a check word into the received cell data and send it onto the data link through the corresponding output port.
The apparatus further comprises a multicast routing module, configured to parse the header of the multicast cell data, perform a route lookup, and determine the bitmap information of the multicast cell data; the queue management module is further configured to maintain, for each output port, as many VIQs as there are input ports, enqueue according to the storage address and the corresponding bitmap information of the multicast cell data, determine the dequeue cell address from the scheduling result, and update the head-of-queue information of the dequeued VIQ.
The free-address management module is further configured to read the cell data at the dequeue cell address and send it to the output scheduling module.
The apparatus further comprises: an input interface module, configured to receive the cell data stream and verify it; and a cell classification module, configured to parse the headers of the cell data that passed verification, distinguish unicast cell data from multicast cell data, send the unicast cell data to the unicast channel, and send the multicast cell data to the multicast channel.
With the present invention, the storage space of each buffer is dynamically divided into a unicast storage region and a multicast storage region according to the traffic information of the unicast and multicast cell data, and the unicast and multicast cells are stored accordingly. This solves the prior-art problem that inflexible allocation of unicast/multicast cache space and poor scalability increase the packet-loss probability, and thereby improves the utilization of both the unicast and multicast caches while allowing flexible expansion.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their description are used to explain the invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of the cache management method of an Ethernet switching chip according to an embodiment of the present invention;
Fig. 2 is a structural diagram of a switching system that, according to an example of the present invention, internally receives at most 4 unicast cells and 4 multicast cells simultaneously;
Fig. 3 is a schematic diagram of the shift-register sorting method according to an example of the present invention;
Fig. 4 is a flow diagram of unicast-cell priority writing according to an example of the present invention;
Fig. 5 is a structural block diagram of the cache management apparatus of an Ethernet switching chip according to an embodiment of the present invention;
Fig. 6 is a structural block diagram of the cache management apparatus of an Ethernet switching chip according to a preferred embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, where no conflict arises, the embodiments in this application and the features within the embodiments may be combined with each other.
Fig. 1 is a flowchart of the cache management method of an Ethernet switching chip according to an embodiment of the present invention. As shown in Fig. 1, the method comprises:
Step S102: collect traffic statistics for unicast cell data and multicast cell data;
Step S104: divide the cache space of each buffer into a unicast storage region and a multicast storage region according to the traffic ratio of unicast to multicast cell data;
Step S106: store the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region.
Because the switching network carries both unicast and multicast cells, the two kinds of cells must be stored in separate partitions of the shared cache. Unlike the prior art, which strictly separates the unicast and multicast caches, the above method places them together. Suppose at most M unicast cells and M multicast cells are written simultaneously; a shared cache can then be built from 2M memories, with the cache space of each buffer divided into a unicast storage region and a multicast storage region. A unicast cell is stored in one of the 2M unicast storage regions, and a multicast cell in one of the 2M multicast storage regions. The region split adapts dynamically to the unicast/multicast traffic: when the unicast traffic exceeds the multicast traffic, the unicast storage space is larger than the multicast storage space, and when the unicast traffic is smaller than the multicast traffic, the unicast storage space is smaller. With the same number of buffers, this greatly improves the occupancy of both the unicast and multicast storage space.
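As a rough illustration (not part of the patent text), the proportional split of one buffer's cell slots can be sketched in Python; `partition_buffer`, its parameters, and the minimum-region clamp are all hypothetical:

```python
def partition_buffer(total_cells, unicast_flow, multicast_flow, min_cells=1):
    """Split one buffer's cell slots between a unicast and a multicast
    region in proportion to the measured traffic (hypothetical policy)."""
    total_flow = unicast_flow + multicast_flow
    if total_flow == 0:
        uni = total_cells // 2          # no traffic observed yet: even split
    else:
        uni = round(total_cells * unicast_flow / total_flow)
    # keep at least min_cells slots for each traffic class
    uni = max(min_cells, min(total_cells - min_cells, uni))
    return uni, total_cells - uni
```

With a 3:1 unicast-to-multicast traffic ratio, a 1024-cell buffer would be split 768/256 under this sketch.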
Preferably, step S106 may further include the following processing:
(1) sort the unicast and multicast storage regions of all buffers by occupancy;
(2) allocate for the unicast cell data a free address in the least-occupied unicast storage region and store the unicast cell data there; allocate for the multicast cell data a free address in the least-occupied multicast storage region and store the multicast cell data there.
The above processing in fact provides a load-balancing function. A shared cache is usually built from multiple buffers, and all cells entering from the N input ports are dispersed and stored across them. Suppose the shared cache has M buffers; the above processing distributes cells from different input ports evenly over the M buffers, avoiding the situation where some buffers sit idle while others are nearly full, which causes overflow or triggers flow control. Flow control issued in that situation is inaccurate, because not all buffers are actually full even though overflow is being avoided, and it leads to low buffer utilization.
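The occupancy-based selection can be sketched as follows; `shallowest` is a hypothetical helper that simply orders buffer indices by occupancy, which is what the sorting step above produces:

```python
def shallowest(occupancy, k):
    """Return the indices of the k least-occupied buffers, shallowest
    first (ties broken by the lower buffer index)."""
    return sorted(range(len(occupancy)), key=lambda i: (occupancy[i], i))[:k]
```

Incoming cells would then be assigned to `shallowest(occupancy, n_cells)`, spreading the load over the emptiest buffers.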
Preferably, when the least-occupied unicast storage region and the least-occupied multicast storage region are located in the same buffer and that buffer has a single write port, a free address in the least-occupied unicast storage region is allocated for the unicast cell data and the unicast cell data is stored there, while a free address in the next-least-occupied multicast storage region is allocated for the multicast cell data and the multicast cell data is stored there.
When storing cell data, storage of the unicast cell data is guaranteed first: if the least-occupied unicast storage region and the least-occupied multicast storage region are in the same buffer and that buffer is a simple 1-read/1-write memory, the unicast cell data is served first and the multicast cell data is stored in the next-least-occupied multicast storage region. When the buffer has two or more write ports, the unicast cell data and the multicast cell data can both be written into that buffer even when the two least-occupied regions coincide.
Preferably, the following processing may further be included after step S104:
parsing the header of the unicast cell data, performing a route lookup, and determining the output-port information (including the output port number, i.e. the destination port number) of the unicast cell data.
The following processing may further be included after step S106:
(1) maintain, for each output port, as many VIQs as there are input ports, and enqueue according to the storage address of the unicast cell data and the corresponding input/output-port information;
(2) schedule according to the head-of-queue information of all VIQs;
(3) determine the dequeue cell address from the scheduling result and update the head-of-queue information of the dequeued VIQ;
(4) read the cell data at the dequeue cell address and send it to the corresponding output port;
(5) the output port inserts a check word into the received cell data and sends it onto the data link.
The above processing provides a unicast queue management method that can manage the stored unicast cell data accurately and effectively and output it in order.
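The enqueue/dequeue steps above can be modelled loosely in Python; `ViqTable` and its first-non-empty service policy are simplifications, since the patent does not fix a particular scheduling algorithm:

```python
from collections import deque

class ViqTable:
    """One virtual input queue per (output port, input port) pair; each
    entry is the storage address of a queued cell (simplified model)."""
    def __init__(self, n_outputs, n_inputs):
        self.q = [[deque() for _ in range(n_inputs)] for _ in range(n_outputs)]

    def enqueue(self, out_port, in_port, cell_addr):
        self.q[out_port][in_port].append(cell_addr)

    def schedule(self, out_port):
        """Serve the first non-empty VIQ of an output port; return
        (in_port, dequeued cell address), or None if all are empty."""
        for in_port, viq in enumerate(self.q[out_port]):
            if viq:
                return in_port, viq.popleft()
        return None
```

The dequeued address would then be handed to the free-address management to read the cell out of the shared cache.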
Preferably, the following processing may further be included after step S104:
parsing the header of the multicast cell data, performing a route lookup, and determining the bitmap information of the multicast cell data.
The following processing may further be included after step S106:
(1) maintain, for each output port, as many VIQs as there are input ports, and enqueue according to the storage address and the corresponding bitmap information of the multicast cell data;
(2) schedule according to the head-of-queue information of all VIQs;
(3) determine the dequeue cell address from the scheduling result and update the head-of-queue information of the dequeued VIQ;
(4) read the cell data at the dequeue cell address and send it to the corresponding output port;
(5) the output port inserts a check word into the received cell data and sends it onto the data link.
The above processing provides a multicast queue management method that can manage the stored multicast cell data accurately and effectively and output it in order.
Preferably, the following processing may further be included before step S102:
(1) receive the cell data stream and verify it;
(2) parse the headers of the cell data that passed verification, distinguish unicast cell data from multicast cell data, send the unicast cell data to the unicast channel, and send the multicast cell data to the multicast channel.
The above processing lays the foundation for collecting the traffic statistics of unicast and multicast cell data, and provides input-port information for the subsequent queue management.
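A toy model of this ingress verification and classification step; the dict fields `check_ok` and `is_multicast` are placeholders standing in for the real check word and header bit:

```python
def verify_and_classify(cells):
    """Drop cells that fail verification and split the rest by type.

    Returns (unicast_channel, multicast_channel) as two lists.
    """
    unicast, multicast = [], []
    for cell in cells:
        if not cell['check_ok']:
            continue                      # failed verification: discard
        (multicast if cell['is_multicast'] else unicast).append(cell)
    return unicast, multicast
```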
The above preferred embodiments are described in detail below with an example.
Fig. 2 is a structural diagram of a switching system that, according to an example of the present invention, internally receives at most 4 unicast cells and 4 multicast cells simultaneously. In the switching system shown in Fig. 2, at most 4 unicast cells and 4 multicast cells are processed internally at the same moment. The shared-cache module is built from 8 simple 1-read/1-write dual-port RAMs, i.e. it has 8 write ports, so 8 cells can be deposited into the cache simultaneously. The free-address management module maintains 8 unicast free-address linked lists and 8 multicast free-address linked lists, managing the unicast free addresses and the multicast free addresses of the 8 RAMs respectively.
First, the unicast and multicast free linked lists are adjusted according to the unicast/multicast traffic ratio (the statistics interval can be set as needed), so that the ratio between the address range managed by the unicast free lists and the address range managed by the multicast free lists tracks the unicast/multicast traffic ratio.
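One way to picture this free-list adjustment, under the simplifying assumption that only currently free addresses are moved between the two pools (occupied addresses would migrate as their cells drain):

```python
def rebalance(uni_free, multi_free, uni_flow, multi_flow):
    """Move free addresses between the unicast and multicast pools so
    the pool sizes track the measured traffic ratio (illustrative only)."""
    total = len(uni_free) + len(multi_free)
    target = round(total * uni_flow / (uni_flow + multi_flow))
    while len(uni_free) > target:          # shrink the unicast pool
        multi_free.append(uni_free.pop())
    while len(uni_free) < target and multi_free:  # grow the unicast pool
        uni_free.append(multi_free.pop())
```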
The cache depths are compared in real time according to the lengths of the 8 unicast free linked lists; in a pipelined implementation one sorted group can be emitted per clock cycle. A denotes the number of the shallowest of the 8 unicast regions, B the second-shallowest, C the third-shallowest, and D the fourth-shallowest. As shown in Fig. 3, the sorting can be done with shift registers.
For example, A=000, B=111, C=001, D=101 means the shallowest unicast region is on RAM #0, the second-shallowest on RAM #7, the third on RAM #1, and the fourth on RAM #5. If 4 unicast cells arrive, they are stored in RAM #0, #7, #1 and #5 respectively. If 3 unicast cells arrive, they are stored in RAM #0, #7 and #1; if only 2 arrive, in RAM #0 and #7; if only 1 arrives, in RAM #0.
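The worked example above can be checked mechanically; `decode_order` and `assign_unicast` are illustrative helpers, not part of the patent:

```python
def decode_order(codes):
    """Decode 3-bit RAM numbers (given as bit strings) into a list."""
    return [int(code, 2) for code in codes]

def assign_unicast(shallow_order, n_cells):
    """Write n_cells incoming unicast cells to the shallowest RAMs first."""
    return shallow_order[:n_cells]

# A, B, C, D from the example
order = decode_order(['000', '111', '001', '101'])
```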
Multicast is handled in much the same way as unicast, except that when a conflict with unicast cell data arises, storage of the unicast cell data is guaranteed first, as shown in Fig. 4.
The cache depths are likewise compared in real time according to the lengths of the 8 multicast free linked lists, with a pipelined implementation emitting one sorted group per clock cycle: a denotes the shallowest of the 8 multicast regions, b the second-shallowest, c the third-shallowest, d the fourth-shallowest, and so on through e, f and g up to h, the deepest multicast region.
For example, a=000, b=100, c=001, d=011, e=110, f=010, g=111, h=101 means the shallowest multicast region is on RAM #0, the second-shallowest on RAM #4, the third on RAM #1, the fourth on RAM #3, the fifth on RAM #6, the sixth on RAM #2, the seventh on RAM #7, and the deepest on RAM #5. Whether abcd conflicts with ABCD is then checked: here a and A are both RAM #0, and c also conflicts with C. Suppose only 1 multicast cell arrives; whether any unicast cell arrives is checked, and if not, the multicast cell is written into the multicast region corresponding to a, otherwise into b. If 2 multicast cells arrive, the number of simultaneously arriving unicast cells is checked: with 1 unicast cell, the multicast cells are written to b and c; with 2 unicast cells, still to b and c; with 3 or 4 unicast cells, to b and d. The principle throughout is that unicast is preferentially written into its shallowest RAMs, while multicast avoids conflicting with unicast and writes its cells into the shallower multicast regions among the RAMs whose write ports are still free.
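The selection rule in this example reduces to filtering the multicast ordering against the RAMs already claimed by unicast, as the following sketch (with the orderings from the example hard-coded) shows:

```python
def place_multicast(multi_order, uni_targets, n_cells):
    """Write multicast cells to the shallowest multicast regions whose
    RAM write port is not already claimed by a unicast cell."""
    free = [ram for ram in multi_order if ram not in uni_targets]
    return free[:n_cells]

# a..h and A..D from the example, decoded to RAM numbers
multi_order = [0, 4, 1, 3, 6, 2, 7, 5]
uni_order = [0, 7, 1, 5]
```

Filtering reproduces every case in the text: with 1 or 2 unicast cells, 2 multicast cells land on b and c (#4, #1); with 3 unicast cells, on b and d (#4, #3).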
Fig. 5 is a structural block diagram of the cache management apparatus of an Ethernet switching chip according to an embodiment of the present invention. As shown in Fig. 5, the apparatus comprises:
a traffic detection module 502, configured to collect traffic statistics for unicast cell data and multicast cell data;
a storage-region division module 504, connected to the traffic detection module 502, configured to divide the cache space of each buffer into a unicast storage region and a multicast storage region according to the ratio of unicast to multicast traffic;
a free-address management module 506, connected to the storage-region division module 504, configured to store the unicast cell data in the unicast storage region and the multicast cell data in the multicast storage region.
The above apparatus can dynamically divide the storage space of each buffer into a unicast storage region and a multicast storage region according to the traffic information of the unicast and multicast cell data, and store the cells accordingly, thereby solving the prior-art problem that inflexible unicast/multicast cache allocation and poor scalability increase packet loss, and making the allocation of cache space adaptive to the unicast/multicast traffic.
Preferably, as shown in Fig. 6, the cache management apparatus of an Ethernet switching chip according to the preferred embodiment of the invention may further comprise:
a cache-depth sorting module 508, connected to the storage-region division module 504 and the free-address management module 506, configured to sort the unicast and multicast storage regions of all buffers by occupancy;
the free-address management module 506, further configured to allocate for the unicast cell data a free address in the least-occupied unicast storage region and store the unicast cell data there, and to allocate for the multicast cell data a free address in the least-occupied multicast storage region and store the multicast cell data there.
Buffer memory depth ordering module 508 provides load-balancing function, buffer memory depth ordering module 508 is after sorting to each buffer clean culture storage area and multicast storage area according to the degree that takies of each buffer clean culture storage area and multicast storage area, the numbering that can will take the storage area of degree minimum sends to idle address administration module 506, as the foundation of idle address administration module 506 storage cell datas.In specific implementation process, a storage control module that is specifically designed to the control store operation can be set, with the selection to storage policy that realizes, when adopting load balancing, this storage control module can be used for receiving the clean culture buffer memory depth sequencing information that buffer memory depth ordering module 508 sends, depth sequencing information according to the clean culture storage area is judged, unicast cell is preferentially deposited in the most shallow clean culture storage area, and the multicast caching depth sequencing information that receives 508 transmissions of buffer memory depth ordering module, depth sequencing information according to the multicast storage area is judged, the multicast cell is preferentially deposited in the most shallow multicast storage area, and the storage area numbering that in the end cell data and these data is deposited in sends to idle address administration module 506.
The free address management module 506 is mainly responsible for managing the free addresses of all buffer regions in the shared cache. When this module receives cell data together with its storage region number, it allocates a free address belonging to that storage region and writes the cell data to the corresponding address in the corresponding region of the shared cache.
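The occupancy sorting and free-address allocation behaviour described for modules 508 and 506 can be sketched roughly as follows. This is an illustrative software model only, not the patented hardware: the class names, the per-region free-address pools, and the use of a simple `min()` over occupancy are all assumptions made for the example.

```python
class BufferRegion:
    """One storage region (unicast or multicast) of one buffer."""
    def __init__(self, region_id, capacity):
        self.region_id = region_id
        self.free = list(range(capacity))  # pool of free cell addresses
        self.used = {}                     # address -> stored cell data

    def occupancy(self):
        return len(self.used)

class FreeAddressManager:
    """Sketch of module 506 fed by the depth sorting of module 508."""
    def __init__(self, regions):
        self.regions = regions

    def store(self, cell):
        # "cache depth sorting": pick the least-occupied region
        region = min(self.regions, key=lambda r: r.occupancy())
        addr = region.free.pop()           # allocate a free address there
        region.used[addr] = cell           # write the cell into the region
        return region.region_id, addr

regions = [BufferRegion(i, capacity=4) for i in range(3)]
mgr = FreeAddressManager(regions)
placements = [mgr.store(f"cell{i}")[0] for i in range(6)]
# successive cells rotate over the three regions, balancing the load
```

Under this policy six stored cells land two per region, which is the load-balancing effect the text attributes to the depth sorting.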
Preferably, the free address management module 506 is also configured so that, when the least-occupied unicast storage region and the least-occupied multicast storage region are located on the same buffer and that buffer is a simple dual-port RAM, it allocates, for the unicast cell data, a free address in the least-occupied unicast storage region and stores the unicast cell data in that least-occupied unicast storage region, while for the multicast cell data it allocates a free address in the second-least-occupied multicast storage region and stores the multicast cell data in that second-least-occupied multicast storage region.
When the shallowest unicast region and the shallowest multicast region fall on the same memory, there are two ways to resolve the conflict: (1) if the memory is a simple dual-port RAM with one read port and one write port, the unicast cell is preferentially deposited into the chosen unicast storage region and the multicast cell is deposited into the second-shallowest multicast storage region; (2) if the memory is a true dual-port RAM, the unicast cell and the multicast cell can be written simultaneously, each into its own shallowest storage region.
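The two conflict-resolution policies can be sketched as a small decision function. The `Region` class and its fields are invented here for illustration; only the policy itself (divert the multicast cell to the second-shallowest region on a simple dual-port RAM, write both simultaneously on a true dual-port RAM) is taken from the text.

```python
from dataclasses import dataclass

@dataclass
class Region:
    chip: int    # which physical buffer/RAM the region lives on
    depth: int   # current occupancy (number of stored cells)

    def occupancy(self):
        return self.depth

def resolve_write_conflict(ucast_regions, mcast_regions, true_dual_port):
    """Pick the target regions for one unicast and one multicast write."""
    u_best = min(ucast_regions, key=Region.occupancy)
    m_sorted = sorted(mcast_regions, key=Region.occupancy)
    m_best = m_sorted[0]
    if u_best.chip == m_best.chip and not true_dual_port:
        # simple dual-port RAM (one read + one write port): only one
        # write per cycle, so the multicast cell falls back to the
        # second-shallowest multicast region
        m_best = m_sorted[1]
    return u_best, m_best

ucast = [Region(chip=0, depth=1), Region(chip=1, depth=5)]
mcast = [Region(chip=0, depth=2), Region(chip=1, depth=3)]
u, m = resolve_write_conflict(ucast, mcast, true_dual_port=False)
# simple RAM: unicast keeps chip 0, multicast is diverted to chip 1
```

With `true_dual_port=True` the same inputs keep both writes on chip 0, matching case (2).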
Preferably, as shown in Fig. 6, the cache management apparatus of the Ethernet switching chip according to the preferred embodiment of the invention may further comprise:
A unicast routing module 510, connected to the flow detection module 502, configured to parse the header information of the unicast cell data, perform a route lookup, and determine the output port information (including the output port number) of the unicast cell data;
A queue management module 512, connected to the free address management module 506, configured to maintain, for each output port, as many virtual input queues (VIQs) as there are input ports, to enqueue according to the storage address of the unicast cell data and the corresponding input/output port information, to determine the dequeued cell address according to the scheduling result, and to update the queue-head information of the dequeued VIQ;
An output scheduling module 514, connected to the queue management module 512 and the free address management module 506, configured to schedule according to the queue-head information of all VIQs to produce a scheduling result, and to send the received dequeued cell data to the output port module;
The free address management module 506 is also configured to read out the cell data according to the dequeued cell address and send it to the output scheduling module 514;
An output port module 516, connected to the output scheduling module 514, configured to insert a check word into the received cell data and send it to the data link through the corresponding output port.
Preferably, as shown in Fig. 6, the cache management apparatus of the Ethernet switching chip according to the preferred embodiment of the invention may further comprise:
A multicast routing module 518, connected to the flow detection module 502, configured to parse the header information of the multicast cell data, perform a route lookup, and determine the bitmap information of the multicast cell data;
The queue management module 512 may also be configured to maintain, for each output port, as many VIQs as there are input ports, to enqueue according to the storage address and the corresponding bitmap information of the multicast cell data, to determine the dequeued cell address according to the scheduling result, and to update the queue-head information of the dequeued VIQ.
The above modules realize an accurate and effective queue management function.
The unicast routing module 510 contains a unicast routing table used to perform route lookups for unicast cells, so as to determine the output port information of the unicast cell data (including the output port number, i.e., the destination port number) and provide it to the free address management module 506. The multicast routing module 518 contains a multicast routing table used to perform route lookups for multicast cells, so as to determine the bitmap information of the multicast cell data and provide it to the free address management module 506. When a storage control module is provided, the unicast routing module 510 and/or the multicast routing module 518 may send the unicast cell data with its output port information, and the multicast cell data with its bitmap information, to the storage control module, which forwards them to the free address management module 506.
After the free address management module 506 has stored the cell data, it passes to the queue management module 512 the routing information of the unicast cell (including the input port information, i.e., the input port number, and the address in the shared cache) together with the output port information determined by the unicast routing module 510, and/or the routing information of the multicast cell together with the bitmap information determined by the multicast routing module 518. The queue management module 512 can enqueue according to this information and wait for scheduling by the output scheduling module 514.
The output scheduling module 514 performs output scheduling according to the per-output-port queue information provided by the queue management module 512, returns the scheduling result to the queue management module 512, and indicates to the queue management module 512 the number of the VIQ to dequeue. The queue management module 512 deletes from the queue linked list the cell information of the queue-head cell (i.e., the dequeued cell) of that VIQ and sends the dequeued cell address to the free address management module 506; the free address management module 506 reads the cell data out of the shared cache according to the address and sends it to the output scheduling module 514, which delivers it to the output port module 516 for output through the corresponding output port.
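The enqueue/dequeue interplay between the queue management module and the output scheduling module can be sketched as follows. The per-(output port, input port) VIQ layout follows the text; the first-non-empty-queue scheduling policy is an assumption for the example, since the text does not fix a scheduling algorithm, and the class names are invented.

```python
from collections import deque

class QueueManager:
    """One VIQ per (output port, input port) pair, as the text describes."""
    def __init__(self, num_out, num_in):
        self.viq = {(o, i): deque()
                    for o in range(num_out) for i in range(num_in)}

    def enqueue(self, out_port, in_port, cell_addr):
        self.viq[(out_port, in_port)].append(cell_addr)

    def dequeue_head(self, key):
        # remove and return the queue-head (dequeued) cell address
        return self.viq[key].popleft()

class OutputScheduler:
    def __init__(self, qm):
        self.qm = qm

    def schedule_once(self):
        # assumed policy: pick the first non-empty VIQ in dict order
        for key, q in self.qm.viq.items():
            if q:
                return key, self.qm.dequeue_head(key)
        return None  # nothing to dequeue

qm = QueueManager(num_out=2, num_in=2)
qm.enqueue(0, 1, 0x10)     # cell at shared-cache address 0x10
qm.enqueue(1, 0, 0x20)
sched = OutputScheduler(qm)
first = sched.schedule_once()   # dequeues ((0, 1), 0x10)
```

In the real apparatus the returned address would then be handed to the free address management module to read the cell out of the shared cache.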
Preferably, as shown in Fig. 6, the cache management apparatus of the Ethernet switching chip according to the preferred embodiment of the invention may further comprise:
An input interface module 520, configured to receive the cell data stream and check the received cell data stream;
A cell classification module 522, connected to the flow detection module 502 and the input interface module 520, configured to parse the header information of the cell data that passed the check, distinguish unicast cell data from multicast cell data, send the unicast cell data to the unicast channel, and send the multicast cell data to the multicast channel.
The input interface module 520 receives cells from the data link, checks them, and sends only the cells that pass the check to the cell classification module 522. The cell classification module 522 classifies cells by reading the header information, sending unicast cells to the unicast channel and multicast cells to the multicast channel, so that the flow detection module 502 can conveniently collect flow statistics. It should be noted that the input port information of each cell can be recorded by the input interface module 520 when the cell is received.
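The classification step can be sketched as below, assuming a made-up header format in which a single bit marks a multicast cell; the real header layout is chip-specific and not given in the text, so both the flag position and the function name are hypothetical.

```python
def classify(cell_header: int) -> str:
    """Return 'multicast' if the (assumed) multicast flag bit is set."""
    MULTICAST_FLAG = 0x80  # hypothetical bit position in the header
    return "multicast" if cell_header & MULTICAST_FLAG else "unicast"

# route each checked cell to its channel, as module 522 does
unicast_channel, multicast_channel = [], []
for header in (0x01, 0x81, 0x02):
    if classify(header) == "multicast":
        multicast_channel.append(header)
    else:
        unicast_channel.append(header)
# unicast_channel holds 0x01 and 0x02; multicast_channel holds 0x81
```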
The above preferred embodiments are elaborated below with an example.
The cache management apparatus of the Ethernet switching chip according to the above preferred embodiments performs cache management through the following steps:
Step 1: the input interface module receives the cell data stream and sends the cells that pass the check to the cell classification module.
Step 2: the cell classification module parses the header information, sends unicast cells to the unicast channel and multicast cells to the multicast channel, and sends the unicast/multicast cell data to the flow detection module.
Step 3: the flow detection module monitors the flow information of the unicast data and the multicast data and sends the flow information to the free address management module.
Step 4: the storage region dividing module divides the cache space of each buffer into a unicast storage region and a multicast storage region according to the flow proportion of the unicast cell data and the multicast cell data, and then sends the unicast cell data and the multicast cell data to the unicast/multicast routing modules.
Step 5: the unicast/multicast routing modules parse the respective header information and perform route lookups. The unicast routing table sends the unicast cell data and the output port information to the storage control module, and the multicast routing table sends the multicast cell data and the bitmap information to the storage control module.
Step 6: the storage control module decides, according to the occupancy sorting information of the unicast buffer regions sent by the cache depth sorting module, to deposit the received unicast cell into the least-occupied unicast storage region. Likewise, according to the occupancy sorting information of the multicast buffer regions sent by the cache depth sorting module, it decides to deposit the received multicast cell into the least-occupied multicast storage region, and finally sends the decision result to the free address management module.
Step 7: the free address management module allocates a free address in the corresponding unicast storage region according to the unicast storage decision provided by the storage control module and writes the unicast cell data to that address; multicast is processed in the same way. It then sends the unicast cell storage address and the corresponding input/output port information to the queue management module, and sends the multicast cell storage address together with the input port information and the bitmap information to the queue management module.
Step 8: the queue management module enqueues according to the unicast cell storage address and the corresponding input/output port information; this module maintains, for each output port, as many VIQs as there are input ports, and cells enter the corresponding VIQ in order. The queue-head information of all VIQs is sent to the output scheduling module.
Step 9: the output scheduling module schedules according to the received VIQ queue-head information and returns the scheduling result to the queue management module.
Step 10: the queue management module determines the dequeued cell address according to the scheduling result, sends the dequeued cell address to the free address management module, and updates the queue-head information of the dequeued VIQ.
Step 11: the free address management module reads the cell data out of the shared cache according to the dequeued cell address and sends the cell data to the output scheduling module.
Step 12: the output scheduling module sends the received cell data to the output port module.
Step 13: the output port module inserts a check word into the received cell data and sends it onto the data link through the corresponding output port.
As can be seen from the above description, the technical solution provided by the invention solves the load-balancing problem of the shared cache as well as the problem of dynamically allocating the unicast/multicast caches, so that the allocation of cache space adapts to the unicast/multicast service flows, and cache overflow or unreasonable flow control caused by unbalanced occupancy of the shared cache is avoided, improving the utilization of the cache. Compared with the prior art, progress is made in improving the utilization of the shared cache: not only is the storage-space utilization of unicast improved, but by dynamically adjusting the cache capacity between the unicast and multicast caches, the utilization of both the unicast and multicast caches is improved at the same time, with flexible scalability; by improving buffer utilization, buffer overhead is saved and cost is reduced.
Obviously, those skilled in the art should understand that the above modules or steps of the invention can be realized by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be realized by program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device; in some cases, the steps shown or described can be executed in an order different from that herein, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the invention is not restricted to any specific combination of hardware and software.
The above are only the preferred embodiments of the invention and are not intended to limit the invention; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the invention shall be included within the protection scope of the invention.

Claims (12)

1. A cache management method for an Ethernet switching chip, characterized by comprising:
counting the flow information of unicast cell data and multicast cell data;
dividing the cache space of each buffer into a unicast storage region and a multicast storage region according to the flow proportion of the unicast cell data and the multicast cell data;
storing the unicast cell data into the unicast storage region, and storing the multicast cell data into the multicast storage region.
2. The method according to claim 1, characterized in that storing the unicast cell data into the unicast storage region and storing the multicast cell data into the multicast storage region comprises:
sorting the unicast storage regions and multicast storage regions of all buffers according to the occupancy of each buffer's unicast storage region and multicast storage region;
allocating, for the unicast cell data, a free address in the least-occupied unicast storage region and storing the unicast cell data in that least-occupied unicast storage region; and allocating, for the multicast cell data, a free address in the least-occupied multicast storage region and storing the multicast cell data in that least-occupied multicast storage region.
3. The method according to claim 2, characterized in that allocating, for the unicast cell data, a free address in the least-occupied unicast storage region and storing the unicast cell data in that least-occupied unicast storage region, and allocating, for the multicast cell data, a free address in the least-occupied multicast storage region and storing the multicast cell data in that least-occupied multicast storage region comprises:
when the least-occupied unicast storage region and the least-occupied multicast storage region are located on the same buffer and the buffer is a simple dual-port RAM, allocating, for the unicast cell data, a free address in the least-occupied unicast storage region and storing the unicast cell data in that least-occupied unicast storage region, and allocating, for the multicast cell data, a free address in the second-least-occupied multicast storage region and storing the multicast cell data in that second-least-occupied multicast storage region.
4. The method according to claim 3, characterized in that,
after dividing the cache space of each buffer into the unicast storage region and the multicast storage region according to the proportion of the unicast cell data flow and the multicast cell data flow, the method further comprises:
parsing the header information of the unicast cell data, performing a route lookup, and determining the output port information of the unicast cell data;
after storing the unicast cell data into the unicast storage region, the method further comprises:
maintaining, for each output port, as many virtual input queues (VIQs) as there are input ports, and enqueuing according to the storage address of the unicast cell data and the corresponding input/output port information;
scheduling according to the queue-head information of all VIQs;
determining the dequeued cell address according to the scheduling result, and updating the queue-head information of the dequeued VIQ;
reading the cell data according to the dequeued cell address and sending it to the corresponding output port;
the output port inserting a check word into the received cell data and sending it onto the data link.
5. The method according to claim 3, characterized in that,
after dividing the cache space of each buffer into the unicast storage region and the multicast storage region according to the proportion of the unicast cell data flow and the multicast cell data flow, the method further comprises:
parsing the header information of the multicast cell data, performing a route lookup, and determining the bitmap information of the multicast cell data;
after storing the multicast cell data into the multicast storage region, the method further comprises:
maintaining, for each output port, as many VIQs as there are input ports, and enqueuing according to the storage address and the corresponding bitmap information of the multicast cell data;
scheduling according to the queue-head information of all VIQs;
determining the dequeued cell address according to the scheduling result, and updating the queue-head information of the dequeued VIQ;
reading the cell data according to the dequeued cell address and sending it to the corresponding output port;
the output port inserting a check word into the received cell data and sending it onto the data link.
6. The method according to any one of claims 1 to 5, characterized in that, before counting the flow information of the unicast cell data and the multicast cell data, the method further comprises:
receiving the cell data stream and checking the received cell data stream;
parsing the header information of the cell data that passed the check, distinguishing unicast cell data from multicast cell data, sending the unicast cell data to a unicast channel, and sending the multicast cell data to a multicast channel.
7. A cache management apparatus for an Ethernet switching chip, characterized by comprising:
a flow detection module, configured to count the flow information of unicast cell data and multicast cell data;
a storage region dividing module, configured to divide the cache space of each buffer into a unicast storage region and a multicast storage region according to the proportion of the unicast cell data flow and the multicast cell data flow;
a free address management module, configured to store the unicast cell data into the unicast storage region and store the multicast cell data into the multicast storage region.
8. The apparatus according to claim 7, characterized by further comprising:
a cache depth sorting module, configured to sort the unicast storage regions and multicast storage regions of all buffers according to the occupancy of each buffer's unicast storage region and multicast storage region;
the free address management module is also configured to allocate, for the unicast cell data, a free address in the least-occupied unicast storage region and store the unicast cell data in that least-occupied unicast storage region, and to allocate, for the multicast cell data, a free address in the least-occupied multicast storage region and store the multicast cell data in that least-occupied multicast storage region.
9. The apparatus according to claim 8, characterized in that the free address management module is also configured to, when the least-occupied unicast storage region and the least-occupied multicast storage region are located on the same buffer and the buffer is a simple dual-port RAM, allocate, for the unicast cell data, a free address in the least-occupied unicast storage region and store the unicast cell data in that least-occupied unicast storage region, and allocate, for the multicast cell data, a free address in the second-least-occupied multicast storage region and store the multicast cell data in that second-least-occupied multicast storage region.
10. The apparatus according to claim 9, characterized by further comprising:
a unicast routing module, configured to parse the header information of the unicast cell data, perform a route lookup, and determine the output port information of the unicast cell data;
a queue management module, configured to maintain, for each output port, as many virtual input queues (VIQs) as there are input ports, to enqueue according to the storage address of the unicast cell data and the corresponding input/output port information, to determine the dequeued cell address according to the scheduling result, and to update the queue-head information of the dequeued VIQ;
an output scheduling module, configured to schedule according to the queue-head information of all VIQs to produce a scheduling result, and to send the received dequeued cell data to an output port module;
the free address management module is also configured to read the cell data according to the dequeued cell address and send it to the output scheduling module;
the output port module is configured to insert a check word into the received cell data and send it to the data link through the corresponding output port.
11. The apparatus according to claim 10, characterized by further comprising:
a multicast routing module, configured to parse the header information of the multicast cell data, perform a route lookup, and determine the bitmap information of the multicast cell data;
the queue management module is also configured to maintain, for each output port, as many virtual input queues (VIQs) as there are input ports, to enqueue according to the storage address and the corresponding bitmap information of the multicast cell data, to determine the dequeued cell address according to the scheduling result, and to update the queue-head information of the dequeued VIQ.
12. The apparatus according to any one of claims 7 to 11, characterized by further comprising:
an input interface module, configured to receive the cell data stream and check the received cell data stream;
a cell classification module, configured to parse the header information of the cell data that passed the check, distinguish unicast cell data from multicast cell data, send the unicast cell data to a unicast channel, and send the multicast cell data to a multicast channel.
CN201110181173.4A 2011-06-30 2011-06-30 Cache management method and cache management apparatus for Ethernet switching chip Active CN102857446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110181173.4A CN102857446B (en) 2011-06-30 2011-06-30 Cache management method and cache management apparatus for Ethernet switching chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110181173.4A CN102857446B (en) 2011-06-30 2011-06-30 Cache management method and cache management apparatus for Ethernet switching chip

Publications (2)

Publication Number Publication Date
CN102857446A true CN102857446A (en) 2013-01-02
CN102857446B CN102857446B (en) 2017-09-29

Family

ID=47403648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110181173.4A Active CN102857446B (en) 2011-06-30 2011-06-30 Cache management method and cache management apparatus for Ethernet switching chip

Country Status (1)

Country Link
CN (1) CN102857446B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103701718A (en) * 2013-12-19 2014-04-02 华南理工大学 Dynamic buffer allocation method for transformer substation communication network switches
WO2015165398A1 (en) * 2014-04-30 2015-11-05 华为技术有限公司 Data processing device and terminal
CN105657031A (en) * 2016-01-29 2016-06-08 盛科网络(苏州)有限公司 Service-aware chip cache resource management method
CN106850440A (en) * 2017-01-16 2017-06-13 北京中科睿芯科技有限公司 A kind of router, method for routing and its chip wrapped towards multiaddress shared data route
WO2018090573A1 (en) * 2016-11-18 2018-05-24 深圳市中兴微电子技术有限公司 Buffer space management method and device, electronic apparatus, and storage medium
CN110557432A (en) * 2019-07-26 2019-12-10 苏州浪潮智能科技有限公司 cache pool balance optimization method, system, terminal and storage medium
CN111131089A (en) * 2019-12-24 2020-05-08 西安电子科技大学 Queue management method for improving multicast service HOL blocking

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1984042B (en) * 2006-05-23 2010-10-27 华为技术有限公司 Method and device for managing buffer address
CN101902390B (en) * 2009-05-27 2013-04-17 华为技术有限公司 Unicast and multicast integrated scheduling device, exchange system and method
CN102111327B (en) * 2009-12-29 2014-11-05 中兴通讯股份有限公司 Method and system for cell dispatching

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103701718A (en) * 2013-12-19 2014-04-02 华南理工大学 Dynamic buffer allocation method for transformer substation communication network switches
CN103701718B (en) * 2013-12-19 2017-02-01 华南理工大学 Dynamic buffer allocation method for transformer substation communication network switches
WO2015165398A1 (en) * 2014-04-30 2015-11-05 华为技术有限公司 Data processing device and terminal
US10169280B2 (en) 2014-04-30 2019-01-01 Huawei Technologies Co., Ltd. Data processing apparatus and terminal
CN103955436B (en) * 2014-04-30 2018-01-16 华为技术有限公司 A kind of data processing equipment and terminal
CN105657031A (en) * 2016-01-29 2016-06-08 盛科网络(苏州)有限公司 Service-aware chip cache resource management method
WO2018090573A1 (en) * 2016-11-18 2018-05-24 深圳市中兴微电子技术有限公司 Buffer space management method and device, electronic apparatus, and storage medium
CN108076020A (en) * 2016-11-18 2018-05-25 深圳市中兴微电子技术有限公司 The management method and device of a kind of spatial cache
CN108076020B (en) * 2016-11-18 2020-09-08 深圳市中兴微电子技术有限公司 Cache space management method and device
CN106850440A (en) * 2017-01-16 2017-06-13 北京中科睿芯科技有限公司 A kind of router, method for routing and its chip wrapped towards multiaddress shared data route
CN110557432A (en) * 2019-07-26 2019-12-10 苏州浪潮智能科技有限公司 cache pool balance optimization method, system, terminal and storage medium
CN110557432B (en) * 2019-07-26 2022-04-26 苏州浪潮智能科技有限公司 Cache pool balance optimization method, system, terminal and storage medium
CN111131089A (en) * 2019-12-24 2020-05-08 西安电子科技大学 Queue management method for improving multicast service HOL blocking
CN111131089B (en) * 2019-12-24 2021-07-27 西安电子科技大学 Queue management method for improving multicast service HOL blocking

Also Published As

Publication number Publication date
CN102857446B (en) 2017-09-29

Similar Documents

Publication Publication Date Title
CN102857446A (en) Cache management method and cache management apparatus for Ethernet switching chip
CN103069757B (en) Packet reassembly and resequence method, apparatus and system
CN101478483B (en) Method for implementing packet scheduling in switch equipment and switch equipment
US20090168782A1 (en) High-Speed Scheduling Apparatus for a Switching Node
CN105337883A (en) Multi-business supporting network switching device and implementation method therefor
EP3131017B1 (en) Data processing device and terminal
CN102594660A (en) Virtual interface exchange method, device and system
CN105721354B (en) Network-on-chip interconnected method and device
US20030095558A1 (en) High efficiency data buffering in a computer network device
CN102387222A (en) Address distribution method, apparatus and system thereof
CN102088412A (en) Exchange unit chip, router and transmission method of cell information
US7289443B1 (en) Slow-start packet scheduling particularly applicable to systems including a non-blocking switching fabric and homogeneous or heterogeneous line card interfaces
CN102111327A (en) Method and system for cell dispatching
CN104683242A (en) Two-dimensional network-on-chip topological structure and routing method
CN105243078B (en) A kind of distribution method of file resource, system and device
CN101950279B (en) Method and bus system for balancing data information flow and decoder
CN101931585B (en) Cell order maintaining method and device
CN101541045B (en) Multicast channel resource allocating method
CN102868636A (en) Method and system for stream-based order preservation of multi-core network equipment packet
CN102594654B (en) A kind of method and apparatus of queue scheduling
CN103780507B (en) The management method of cache resources and device
CN107241156B (en) A kind of cell order maintaining method and device
CN101237417B (en) Queue index method, device and traffic shaping method and device
CN102487303A (en) Time slot distribution management method and apparatus thereof
CN101742412B (en) Subframe selecting method of multimedia broadcast multicast service transmitted by single cell

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20130102

Assignee: SANECHIPS TECHNOLOGY Co.,Ltd.

Assignor: ZTE Corp.

Contract record no.: 2015440020319

Denomination of invention: Cache management method and cache management apparatus for Ethernet switching chip

License type: Common License

Record date: 20151123

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221102

Address after: 518055 Zhongxing Industrial Park, Liuxian Avenue, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: SANECHIPS TECHNOLOGY Co.,Ltd.

Address before: 518057 No. 55 South Science and technology road, Shenzhen, Guangdong, Nanshan District

Patentee before: ZTE Corp.