CN106911740A - Method and apparatus for cache management - Google Patents
- Publication number: CN106911740A
- Application number: CN201510979057.5A
- Authority
- CN
- China
- Prior art keywords
- message
- queue
- reception message
- reception
- buffer memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
Abstract
The embodiments of the invention disclose a method and apparatus for cache management. The method includes: obtaining the packet granularity information corresponding to the descriptor of a received packet; and determining, through the packet-drop mechanism corresponding to the packet granularity information, the drop-processing operation for the received packet. By combining multiple cache management mechanisms according to processing granularity, the method ensures high cache utilization efficiency under a variety of application scenarios, so that the cache is managed efficiently.
Description
Technical field
The present invention relates to data communication technology, and more particularly to a method and apparatus for cache management.
Background
As network bandwidth grows explosively, the traditional double data rate synchronous dynamic random access memory (DDR, Double Data Rate) architecture used for the shared buffer causes the DDR chip pin count to rise sharply as bandwidth increases, which considerably increases packaging and board-making difficulty.
At present, a grouped-parallel architecture of high-density enhanced dynamic random access memory (eDRAM, enhanced Dynamic Random Access Memory) is generally used as the shared-buffer architecture, so that the chip pin count need not increase when the buffer access bandwidth grows.
However, when on-chip eDRAM is used as the shared-buffer architecture, the real buffer capacity is smaller than that of a traditional DDR shared-buffer architecture. Moreover, in current shared-buffer management, when some flows become congested, the congested flows occupy more of the shared buffer; once the shared buffer occupied by a congested flow reaches a certain threshold, packets of non-congested flows are dropped. Therefore, a buffer management method is needed that, under reduced real buffer capacity, can still manage the buffer efficiently when bursty data congestion occurs.
Summary of the invention
To solve the above technical problems, embodiments of the present invention are expected to provide a method and apparatus for cache management capable of managing the buffer efficiently.
The technical solution of the invention is realized as follows:
In a first aspect, an embodiment of the invention provides a method of cache management, the method being applied to a cache management apparatus and including:
obtaining the packet granularity information corresponding to the descriptor of a received packet; and
determining, through the packet-drop mechanism corresponding to the packet granularity information, the drop-processing operation for the received packet.
In the above scheme, when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet.
Correspondingly, determining through the corresponding packet-drop mechanism the drop-processing operation for the received packet specifically includes:
dropping the received packet when the length of the received packet plus the depth, in the private buffer, of the queue where the received packet resides exceeds the drop threshold corresponding to that queue; wherein the drop threshold corresponding to the queue is the sum of the queue's tail-drop (TD) threshold and the queue's dynamic threshold, the dynamic threshold being used to absorb bursty data of the queue.
In the above scheme, when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet, and the buffer includes a private buffer and a hierarchical shared buffer.
Correspondingly, determining through the corresponding packet-drop mechanism the drop-processing operation for the received packet specifically includes:
in the hierarchical shared buffer, determining, according to the TC priority of the received packet, the shared-buffer level and shared-buffer threshold corresponding to the queue where the received packet resides; wherein the differences between the shared-buffer thresholds of queues of different TC priorities are used to determine the priorities of packets of different TCs, so that packets of a higher TC priority are dropped later than packets of a lower TC priority, and each such difference is just large enough to accommodate the burst traffic of the queue of the higher-TC-priority packets; and
determining the drop-processing operation for the received packet according to the private-buffer threshold and the shared-buffer threshold corresponding to the queue where the received packet resides.
In the above scheme, when the received packet is a data packet and its descriptor includes the queue number of the received packet, the packet granularity information characterizes the received packet as a flow-level packet.
Correspondingly, determining through the corresponding packet-drop mechanism the drop-processing operation for the received packet specifically includes:
looking up the correspondence between queue numbers and congestion indications according to the queue number of the received packet, to obtain the congestion indication of the queue where the received packet resides;
determining the drop threshold of that queue according to the congestion indication; and
dropping the received packet when the depth of the queue in the buffer plus the length of the received packet exceeds the drop threshold of the queue.
In the above scheme, before looking up the correspondence between queue numbers and congestion indications to obtain the congestion indication of the queue where the received packet resides, the method further includes:
updating the correspondence between queue numbers and congestion indications according to queue congestion information fed back by a downstream node;
or setting the correspondence between queue numbers and congestion indications according to a preset configuration rule;
or updating the correspondence between queue numbers and congestion indications according to the variation of the queue depth, or of the average queue depth, in the buffer.
In the above scheme, when the descriptor of the received packet is the identifier of the target chip to which the received packet is to be transmitted, the packet granularity information characterizes the received packet as an intermediate-level packet.
Correspondingly, determining through the corresponding packet-drop mechanism the drop-processing operation for the received packet specifically includes:
obtaining, according to the target chip identifier, whether the target chip is reachable;
dropping the received packet when the target chip is unreachable; and
when the target chip is reachable, determining the drop-processing operation for the received packet according to the buffer status of the queue corresponding to the target chip.
In a second aspect, an embodiment of the invention provides a cache management apparatus, the apparatus including an obtaining unit and a determining unit; wherein
the obtaining unit is configured to obtain the packet granularity information corresponding to the descriptor of a received packet; and
the determining unit is configured to determine, through the packet-drop mechanism corresponding to the packet granularity information, the drop-processing operation for the received packet.
In the above scheme, when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet.
Correspondingly, the determining unit is specifically configured to drop the received packet when the length of the received packet plus the depth, in the private buffer, of the queue where the received packet resides exceeds the drop threshold corresponding to that queue; wherein the drop threshold corresponding to the queue is the sum of the queue's tail-drop (TD) threshold and the queue's dynamic threshold, the dynamic threshold being used to absorb bursty data of the queue.
In the above scheme, when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet, and the buffer includes a private buffer and a hierarchical shared buffer.
Correspondingly, the determining unit is specifically configured to:
in the hierarchical shared buffer, determine, according to the TC priority of the received packet, the shared-buffer level and shared-buffer threshold corresponding to the queue where the received packet resides; wherein the differences between the shared-buffer thresholds of queues of different TC priorities determine the priorities of packets of different TCs, so that packets of a higher TC priority are dropped later than packets of a lower TC priority, and each such difference is just large enough to accommodate the burst traffic of the queue of the higher-TC-priority packets; and
determine the drop-processing operation for the received packet according to the private-buffer threshold and the shared-buffer threshold corresponding to the queue where the received packet resides.
In the above scheme, when the received packet is a data packet and its descriptor includes the queue number of the received packet, the packet granularity information characterizes the received packet as a flow-level packet.
Correspondingly, the determining unit is specifically configured to:
look up the correspondence between queue numbers and congestion indications according to the queue number of the received packet, to obtain the congestion indication of the queue where the received packet resides;
determine the drop threshold of that queue according to the congestion indication; and
drop the received packet when the depth of the queue in the buffer plus the length of the received packet exceeds the drop threshold of the queue.
In the above scheme, the apparatus further includes an update configuration unit configured to:
update the correspondence between queue numbers and congestion indications according to queue congestion information fed back by a downstream node;
or set the correspondence between queue numbers and congestion indications according to a preset configuration rule;
or update the correspondence between queue numbers and congestion indications according to the variation of the queue depth, or of the average queue depth, in the buffer.
In the above scheme, when the descriptor of the received packet is the identifier of the target chip to which the received packet is to be transmitted, the packet granularity information characterizes the received packet as an intermediate-level packet.
Correspondingly, the determining unit is specifically configured to:
obtain, according to the target chip identifier, whether the target chip is reachable;
drop the received packet when the target chip is unreachable; and
when the target chip is reachable, determine the drop-processing operation for the received packet according to the buffer status of the queue corresponding to the target chip.
Embodiments of the invention provide a method and apparatus for cache management that combine multiple cache management mechanisms according to processing granularity, ensuring high cache utilization efficiency under a variety of application scenarios and thereby managing the buffer efficiently.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a method of cache management provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a packet drop operation provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another packet drop operation provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of yet another packet drop operation provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of yet another packet drop operation provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a cache management apparatus provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another cache management apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
Embodiment one
Referring to Fig. 1, it illustrates the flow of a method of cache management provided by an embodiment of the present invention. The method may be applied in a cache management apparatus and may include:
S101: obtaining the packet granularity information corresponding to the descriptor of a received packet.
It should be noted that the descriptor of the received packet may be obtained by parsing the received packet, and may specifically include: the traffic class (TC, Traffic Class) of the received packet, the queue number of the queue where the received packet resides, the identifier of the target chip to which the received packet is to be transmitted, and so on; the embodiments of the present invention do not elaborate on this. Understandably, different descriptors can represent the granularity information corresponding to the received packet. For example, when the descriptor of the received packet is its traffic class TC, the corresponding granularity information characterizes the received packet as a flow-level or intermediate-level packet; when the descriptor is the queue number of the queue where the received packet resides, the granularity information characterizes it as a flow-level packet; and when the descriptor is the identifier of the target chip to which the received packet is to be transmitted, the granularity information characterizes it as an intermediate-level packet.
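The descriptor-to-granularity mapping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the descriptor field names (`tc`, `queue_id`, `target_chip`) are assumed for the example.

```python
from enum import Enum, auto

class Granularity(Enum):
    FLOW_LEVEL = auto()            # per-queue handling
    INTERMEDIATE_LEVEL = auto()    # aggregate handling, e.g. per target chip
    FLOW_OR_INTERMEDIATE = auto()  # a TC descriptor may characterize either

def granularity_from_descriptor(descriptor: dict) -> Granularity:
    """Map a received packet's descriptor to its granularity (step S101)."""
    if "queue_id" in descriptor:      # queue number -> flow-level packet
        return Granularity.FLOW_LEVEL
    if "target_chip" in descriptor:   # target chip ID -> intermediate-level packet
        return Granularity.INTERMEDIATE_LEVEL
    if "tc" in descriptor:            # traffic class -> flow- or intermediate-level
        return Granularity.FLOW_OR_INTERMEDIATE
    raise ValueError("unrecognized descriptor")
```

The granularity selected here is what step S102 uses to dispatch to the corresponding packet-drop mechanism.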
S102: determining, through the packet-drop mechanism corresponding to the packet granularity information, the drop-processing operation for the received packet.
It should be noted that, according to the foregoing correspondence between descriptors and packet granularity information, the drop-processing operation for the received packet can be determined by the packet-drop mechanism corresponding to the respective granularity information, and the corresponding drop processing is then performed. The specific manners are as follows.
Preferably, when the descriptor of the received packet is its traffic class TC, determining the drop-processing operation through the corresponding packet-drop mechanism specifically includes:
dropping the received packet when the length of the received packet plus the depth, in the private buffer, of the queue where it resides exceeds the drop threshold corresponding to that queue;
wherein that drop threshold is the sum of the queue's tail-drop (TD, Tail-Drop) threshold and the queue's dynamic threshold. Understandably, the dynamic threshold can be used to absorb bursty data of the queue.
Specifically, the private buffer only needs to be configured to guarantee the buffer occupancy of each packet queue; bursts related to each queue are absorbed by the corresponding dynamic threshold. Referring to Fig. 2, when dynamic thresholding is enabled, the TD threshold of the private buffer is combined with the corresponding dynamic threshold to generate the final drop threshold. When the length of the received packet plus the depth of its queue in the private buffer exceeds this final drop threshold, the received packet is dropped.
Preferably, referring to Fig. 3, when the descriptor of the received packet is its traffic class TC, determining the drop-processing operation through the corresponding packet-drop mechanism may also specifically include:
S301: in the hierarchical shared buffer, determining, according to the TC priority of the received packet, the shared-buffer level and shared-buffer threshold corresponding to the queue where the received packet resides.
It should be noted that the differences between the shared-buffer thresholds of queues of different TC priorities determine the priorities of packets of different TCs, so that packets of a higher TC priority are dropped later than those of a lower TC priority; each such difference is just large enough to accommodate the burst traffic of the queue of the higher-TC-priority packets.
S302: determining the drop-processing operation for the received packet according to the private-buffer threshold and the shared-buffer threshold corresponding to the queue where the received packet resides.
Specifically, in the hierarchical shared buffer involved in this embodiment, the first-level shared buffer can be used to guarantee drop priority; the private buffer need only be configured to guarantee a minimum buffer occupancy; and the shared-buffer levels after the first level are used to supplement the private buffer. After the private-buffer threshold is reached, whether a packet is dropped is judged with reference to the shared-buffer levels after the first level, and a packet is dropped only after both the private buffer of its queue and the shared buffer corresponding to its queue have reached their thresholds.
It should be noted that, taking the second-level shared buffer as an example, once the depth of a certain type of packet exceeds the private-buffer threshold, subsequently enqueued packets are counted against the depth of the second-level shared buffer; and only when packets exceeding the private-buffer threshold are dequeued is the second-level shared buffer released.
In detail, taking the TC of the received packet as the descriptor, packets of TC0, TC1, TC2 and TC3 may jointly use the second-level shared buffer, packets of TC5 and TC6 may use the third-level shared buffer, and packets of TC7 may use both the second-level and third-level shared buffers; the use of multiple shared buffers is flexibly configurable.
First, the drop thresholds of TC6 and TC7 configured in the first-level shared buffer are greater than those of TC0 to TC5, and the difference between the two drop thresholds need only accommodate the burst traffic of TC6 and TC7 packets. A private-buffer threshold is configured for each of TC0 to TC7, guaranteeing each TC's minimum buffer usage and preventing bursts or congestion of other TCs from affecting normal TC packet flows. When the packet flow of each TC has bursts, each TC is allowed to occupy the shared buffer to absorb the burst, according to the usage of the hierarchical shared buffer.
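The admission decision sketched below illustrates this scheme under stated assumptions: the TC-to-level mapping mirrors the example above, while the function name, data layout, and all limits are invented for the illustration.

```python
# Illustrative mapping of traffic classes to shared-buffer levels, mirroring
# the example: TC0-TC3 share level 2, TC5/TC6 use level 3, TC7 uses both.
TC_TO_SHARED_LEVELS = {
    0: [2], 1: [2], 2: [2], 3: [2],
    5: [3], 6: [3],
    7: [2, 3],
}

def admit(tc: int, pkt_len: int,
          private_used: int, private_limit: int,
          shared_used: dict, shared_limit: dict) -> bool:
    """Decide whether a packet of class `tc` may be buffered.

    A packet first consumes its queue's private buffer; once the private
    limit is reached it must fit in one of the shared-buffer levels
    assigned to its TC, otherwise it is dropped.
    """
    if private_used + pkt_len <= private_limit:
        return True  # fits in the queue's private buffer
    for level in TC_TO_SHARED_LEVELS.get(tc, []):
        if shared_used[level] + pkt_len <= shared_limit[level]:
            return True  # absorbed by a shared level assigned to this TC
    return False  # private buffer and all assigned shared levels exhausted
```

Because TC7 is mapped to two levels, its packets survive longer under pressure than TC0-TC3 packets, realizing the later-drop behavior of higher TC priorities.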
Preferably, referring to Fig. 4, when the received packet is a data packet and its descriptor includes its queue number, determining the drop-processing operation through the corresponding packet-drop mechanism specifically includes:
S401: looking up the correspondence between queue numbers and congestion indications according to the queue number of the received packet, to obtain the congestion indication of the queue where the received packet resides;
S402: determining the drop threshold of that queue according to the congestion indication;
S403: dropping the received packet when the depth of the queue in the buffer plus the length of the received packet exceeds the drop threshold of the queue.
Further, before step S401, this embodiment may also include a process of obtaining the correspondence between queue numbers and congestion indications, which may specifically include:
updating the correspondence according to queue congestion information fed back by a downstream node; for example, in a network architecture, congestion marks fed back by a downstream subsystem (or configured by a CPU) may be written into a queue congestion mark table that characterizes the correspondence between queue numbers and congestion indications;
or setting the correspondence according to a preset configuration rule;
or updating the correspondence according to the variation of the queue depth, or of the average queue depth, in the buffer. For example, the variation can be perceived through the slope of the queue depth or of the average queue depth, and a queue can be considered congested when the slope exceeds a preset threshold. Since the instantaneous queue depth changes quickly while the average queue depth changes relatively smoothly, it is preferable to use the slope of the average queue depth to update the correspondence between queue numbers and congestion indications.
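The preferred slope-based update can be sketched as follows, using an exponentially weighted moving average for the average queue depth. The class name, EWMA weight, and slope threshold are illustrative assumptions, not values from the patent.

```python
class CongestionTracker:
    """Maintain per-queue congestion indications from the slope of the
    average queue depth, which varies more smoothly than the
    instantaneous depth."""

    def __init__(self, slope_threshold: float, ewma_weight: float = 0.25):
        self.slope_threshold = slope_threshold
        self.w = ewma_weight
        self.avg = {}        # queue number -> EWMA of queue depth
        self.congested = {}  # queue number -> congestion indication

    def observe(self, qid: int, depth: int) -> bool:
        prev = self.avg.get(qid, float(depth))
        avg = (1 - self.w) * prev + self.w * depth
        self.avg[qid] = avg
        # Slope of the averaged depth between two observations; a steep
        # positive slope marks the queue as congested.
        self.congested[qid] = (avg - prev) > self.slope_threshold
        return self.congested[qid]
```

The resulting `congested` map plays the role of the queue congestion mark table consulted in step S401.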
Preferably, referring to Fig. 5, when the descriptor of the received packet is the identifier of the target chip to which the received packet is to be transmitted, determining the drop-processing operation through the corresponding packet-drop mechanism may specifically include:
S501: obtaining, according to the target chip identifier, whether the target chip is reachable;
S502: dropping the received packet when the target chip is unreachable;
S503: when the target chip is reachable, determining the drop-processing operation for the received packet according to the buffer status of the queue corresponding to the target chip.
It should be noted that when receive message correspondence transmission objective chip up to when, according to objective chip pair
The buffer status of the queue answered determine the discard processing operation for receiving message, specifically include:
It is compared according to the corresponding queue length of objective chip and TD threshold values, when queue length is more than TD
During threshold value, packet loss will be received;When queue length is not above TD threshold values, message will be received and joined the team.
It is to be appreciated that whether objective chip by the exchange of chip chamber feedback up to that can be known, originally
Inventive embodiments are not repeated this.
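The target-chip branch of Fig. 5 can be sketched as follows. Representing the inter-chip reachability feedback as a plain dict, and the function and parameter names, are illustrative assumptions.

```python
def handle_cross_chip(target_chip: int, reachable: dict,
                      queue_len: dict, td_threshold: dict) -> str:
    """Drop decision for an intermediate-level packet bound for another chip."""
    if not reachable.get(target_chip, False):
        return "drop"     # S502: target chip unreachable
    if queue_len[target_chip] > td_threshold[target_chip]:
        return "drop"     # S503: queue length exceeds the TD threshold
    return "enqueue"      # S503: buffer status permits enqueueing
```

Treating an unknown chip as unreachable is a conservative default chosen for the sketch.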
It should be noted that those skilled in the art may combine the above preferred technical schemes according to actual application needs; for example, in the above schemes, the private-buffer threshold corresponding to a packet queue may be the queue's corresponding dynamic threshold, and so on; this embodiment does not elaborate on this.
This embodiment provides a method of cache management that combines multiple cache management mechanisms according to processing granularity, ensuring high cache utilization efficiency under a variety of application scenarios and thereby managing the buffer efficiently.
Embodiment two
Based on the same technical concept as the preceding embodiment, referring to Fig. 6, an embodiment of the present invention provides a cache management apparatus 60, which may include an obtaining unit 601 and a determining unit 602; wherein
the obtaining unit 601 is configured to obtain the packet granularity information corresponding to the descriptor of a received packet; and
the determining unit 602 is configured to determine, through the packet-drop mechanism corresponding to the packet granularity information, the drop-processing operation for the received packet.
Exemplarily, when the descriptor of the received packet is its traffic class TC, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet.
Correspondingly, the determining unit 602 is specifically configured to drop the received packet when the length of the received packet plus the depth, in the private buffer, of the queue where it resides exceeds the drop threshold corresponding to that queue; wherein that drop threshold is the sum of the queue's tail-drop (TD) threshold and the queue's dynamic threshold, the dynamic threshold being used to absorb bursty data of the queue.
Exemplarily, when the descriptor of the received packet is its traffic class TC, the packet granularity information characterizes the received packet as a flow-level or intermediate-level packet, and the buffer includes a private buffer and a hierarchical shared buffer.
Correspondingly, the determining unit 602 is specifically configured to:
in the hierarchical shared buffer, determine, according to the TC priority of the received packet, the shared-buffer level and shared-buffer threshold corresponding to the queue where the received packet resides; wherein the differences between the shared-buffer thresholds of queues of different TC priorities determine the priorities of packets of different TCs, so that packets of a higher TC priority are dropped later than those of a lower TC priority, and each such difference is just large enough to accommodate the burst traffic of the queue of the higher-TC-priority packets; and
determine the drop-processing operation for the received packet according to the private-buffer threshold and the shared-buffer threshold corresponding to the queue where the received packet resides.
Exemplarily, when the received packet is a data packet and the descriptor of the received packet includes the queue number of the received packet, the packet granularity information indicates that the received packet is a flow-level packet.
Correspondingly, the determining unit 602 is specifically configured to:
query the correspondence between queue numbers and congestion indications with the queue number of the received packet, to obtain the congestion indication of the queue holding the received packet;
determine the drop threshold of the queue holding the received packet according to the congestion indication;
and drop the received packet when the sum of the depth, in the cache, of the queue holding the received packet and the length of the received packet exceeds the drop threshold of that queue.
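A minimal sketch of this flow-level check, with an assumed two-level congestion indication and made-up thresholds:

```python
# Drop thresholds selected by the congestion indication (values assumed):
# 0 = not congested, 1 = congested.
DROP_THRESHOLD = {0: 4096, 1: 1024}

def should_drop(congestion_map, queue_no, queue_depth, pkt_len):
    # Look up the queue's congestion indication by queue number, pick the
    # drop threshold that indication selects, and drop when the queue
    # depth plus the packet length exceeds that threshold.
    indication = congestion_map.get(queue_no, 0)
    return queue_depth + pkt_len > DROP_THRESHOLD[indication]
```

The same occupancy thus triggers a drop on a congested queue but not on an uncongested one.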
Further, referring to Fig. 7, the device 60 further includes an update configuration unit 603, configured to:
update the correspondence between queue numbers and congestion indications according to queue congestion information fed back by a downstream node;
or set the correspondence between queue numbers and congestion indications according to a preset configuration rule;
or update the correspondence between queue numbers and congestion indications according to changes of the queue depth or the average queue depth in the cache.
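The update paths for the queue-number-to-congestion-indication correspondence might look like this in outline; the binary indication and the hysteresis marks are assumptions added for illustration.

```python
def update_from_downstream(congestion_map, feedback):
    # Merge per-queue congestion information fed back by a downstream node
    # into the queue-number -> congestion-indication correspondence.
    congestion_map.update(feedback)

def update_from_depth(congestion_map, queue_no, avg_depth, high_mark, low_mark):
    # Re-derive the indication from the (average) queue depth in the cache,
    # with hysteresis so the indication does not flap around one threshold.
    if avg_depth > high_mark:
        congestion_map[queue_no] = 1   # mark congested
    elif avg_depth < low_mark:
        congestion_map[queue_no] = 0   # mark not congested
```

A preset configuration rule would simply seed `congestion_map` with static entries before either update path runs.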
Exemplarily, when the descriptor of the received packet is the identifier of the destination chip to which the received packet is to be transmitted, the packet granularity information indicates that the received packet is an intermediate-level packet.
Correspondingly, the determining unit 602 is specifically configured to:
determine, from the destination-chip identifier of the received packet, whether the destination chip of the received packet is reachable;
drop the received packet when the destination chip of the received packet is unreachable;
and, when the destination chip of the received packet is reachable, determine the drop decision for the received packet according to the cache status of the queue corresponding to the destination chip.
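The intermediate-level decision therefore reduces to a reachability gate followed by a per-queue cache check; a hypothetical sketch, with all names assumed:

```python
def drop_decision(dest_chip, reachable_chips, queue_depth, queue_limit, pkt_len):
    # An unreachable destination chip means the packet can never be
    # delivered, so it is dropped unconditionally.
    if dest_chip not in reachable_chips:
        return True
    # Reachable: fall back to the cache status of the chip's queue.
    return queue_depth + pkt_len > queue_limit
```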
This embodiment provides a cache management device 60 that combines multiple cache management mechanisms according to processing granularity, which ensures high cache utilization efficiency under a variety of application scenarios and thereby manages the cache efficiently.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular way, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are merely preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention.
Claims (12)
1. A cache management method, wherein the method is applied to a cache management device and comprises:
obtaining packet granularity information corresponding to a descriptor of a received packet;
determining, by a packet drop mechanism corresponding to the packet granularity information, a drop decision for the received packet.
2. The method according to claim 1, wherein when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information indicates that the received packet is a flow-level or intermediate-level packet;
correspondingly, determining, by the packet drop mechanism corresponding to the packet granularity information, the drop decision for the received packet specifically comprises:
dropping the received packet when the sum of the length of the received packet and the depth, in an exclusive cache, of the queue holding the received packet exceeds the dynamic limit corresponding to that queue; wherein the dynamic limit corresponding to the queue holding the received packet is the sum of the tail-drop (TD) threshold corresponding to that queue and the dynamic threshold corresponding to that queue; the dynamic threshold corresponding to the queue holding the received packet is used to accommodate the burst data of that queue.
3. The method according to claim 1, wherein when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information indicates that the received packet is a flow-level or intermediate-level packet; the cache comprises an exclusive cache and a graded shared cache;
correspondingly, determining, by the packet drop mechanism corresponding to the packet granularity information, the drop decision for the received packet specifically comprises:
in the graded shared cache, determining, according to the TC priority of the received packet, the shared-cache grade and the shared-cache threshold corresponding to the queue holding the received packet; wherein the difference between the shared-cache thresholds corresponding to queues holding packets of different TC priorities is used to determine the precedence of packets of different TCs, and causes packets of higher TC priority to be dropped later than packets of lower TC priority; the difference between the shared-cache thresholds corresponding to queues holding packets of different TC priorities is just large enough to accommodate the burst traffic of the queue holding the packets of higher TC priority;
determining the drop decision for the received packet according to the exclusive-cache threshold corresponding to the queue holding the received packet and the shared-cache threshold corresponding to that queue.
4. The method according to claim 1, wherein when the received packet is a data packet and the descriptor of the received packet includes the queue number of the received packet, the packet granularity information indicates that the received packet is a flow-level packet;
correspondingly, determining, by the packet drop mechanism corresponding to the packet granularity information, the drop decision for the received packet specifically comprises:
querying a correspondence between queue numbers and congestion indications with the queue number of the received packet, to obtain the congestion indication of the queue holding the received packet;
determining the drop threshold of the queue holding the received packet according to the congestion indication;
dropping the received packet when the sum of the depth, in the cache, of the queue holding the received packet and the length of the received packet exceeds the drop threshold of that queue.
5. The method according to claim 4, wherein before querying the correspondence between queue numbers and congestion indications with the queue number of the received packet to obtain the congestion indication of the queue holding the received packet, the method further comprises:
updating the correspondence between queue numbers and congestion indications according to queue congestion information fed back by a downstream node;
or setting the correspondence between queue numbers and congestion indications according to a preset configuration rule;
or updating the correspondence between queue numbers and congestion indications according to changes of the queue depth or the average queue depth in the cache.
6. The method according to claim 1, wherein when the descriptor of the received packet is the identifier of the destination chip to which the received packet is to be transmitted, the packet granularity information indicates that the received packet is an intermediate-level packet;
correspondingly, determining, by the packet drop mechanism corresponding to the packet granularity information, the drop decision for the received packet specifically comprises:
determining, from the destination-chip identifier of the received packet, whether the destination chip of the received packet is reachable;
dropping the received packet when the destination chip of the received packet is unreachable;
when the destination chip of the received packet is reachable, determining the drop decision for the received packet according to the cache status of the queue corresponding to the destination chip.
7. A cache management device, wherein the device comprises an acquiring unit and a determining unit; wherein:
the acquiring unit is configured to obtain packet granularity information corresponding to a descriptor of a received packet; and
the determining unit is configured to determine, by a packet drop mechanism corresponding to the packet granularity information, a drop decision for the received packet.
8. The device according to claim 7, wherein when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information indicates that the received packet is a flow-level or intermediate-level packet;
correspondingly, the determining unit is specifically configured to drop the received packet when the sum of the length of the received packet and the depth, in an exclusive cache, of the queue holding the received packet exceeds the dynamic limit corresponding to that queue; wherein the dynamic limit corresponding to the queue holding the received packet is the sum of the tail-drop (TD) threshold corresponding to that queue and the dynamic threshold corresponding to that queue; the dynamic threshold corresponding to the queue holding the received packet is used to accommodate the burst data of that queue.
9. The device according to claim 7, wherein when the descriptor of the received packet is the traffic class (TC) of the received packet, the packet granularity information indicates that the received packet is a flow-level or intermediate-level packet; the cache comprises an exclusive cache and a graded shared cache;
correspondingly, the determining unit is specifically configured to:
in the graded shared cache, determine, according to the TC priority of the received packet, the shared-cache grade and the shared-cache threshold corresponding to the queue holding the received packet; wherein the difference between the shared-cache thresholds corresponding to queues holding packets of different TC priorities determines the precedence of packets of different TCs, and causes packets of higher TC priority to be dropped later than packets of lower TC priority; the difference between the shared-cache thresholds corresponding to queues holding packets of different TC priorities is just large enough to accommodate the burst traffic of the queue holding the packets of higher TC priority;
and determine the drop decision for the received packet according to the exclusive-cache threshold corresponding to the queue holding the received packet and the shared-cache threshold corresponding to that queue.
10. The device according to claim 7, wherein when the received packet is a data packet and the descriptor of the received packet includes the queue number of the received packet, the packet granularity information indicates that the received packet is a flow-level packet;
correspondingly, the determining unit is specifically configured to:
query the correspondence between queue numbers and congestion indications with the queue number of the received packet, to obtain the congestion indication of the queue holding the received packet;
determine the drop threshold of the queue holding the received packet according to the congestion indication;
and drop the received packet when the sum of the depth, in the cache, of the queue holding the received packet and the length of the received packet exceeds the drop threshold of that queue.
11. The device according to claim 10, wherein the device further comprises an update configuration unit configured to:
update the correspondence between queue numbers and congestion indications according to queue congestion information fed back by a downstream node;
or set the correspondence between queue numbers and congestion indications according to a preset configuration rule;
or update the correspondence between queue numbers and congestion indications according to changes of the queue depth or the average queue depth in the cache.
12. The device according to claim 7, wherein when the descriptor of the received packet is the identifier of the destination chip to which the received packet is to be transmitted, the packet granularity information indicates that the received packet is an intermediate-level packet;
correspondingly, the determining unit is specifically configured to:
determine, from the destination-chip identifier of the received packet, whether the destination chip of the received packet is reachable;
drop the received packet when the destination chip of the received packet is unreachable;
and, when the destination chip of the received packet is reachable, determine the drop decision for the received packet according to the cache status of the queue corresponding to the destination chip.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510979057.5A CN106911740A (en) | 2015-12-22 | 2015-12-22 | A kind of method and apparatus of cache management |
PCT/CN2016/081614 WO2017107363A1 (en) | 2015-12-22 | 2016-05-10 | Cache management method and device, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510979057.5A CN106911740A (en) | 2015-12-22 | 2015-12-22 | A kind of method and apparatus of cache management |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106911740A true CN106911740A (en) | 2017-06-30 |
Family
ID=59088912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510979057.5A Withdrawn CN106911740A (en) | 2015-12-22 | 2015-12-22 | A kind of method and apparatus of cache management |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106911740A (en) |
WO (1) | WO2017107363A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113904997A (en) * | 2021-10-21 | 2022-01-07 | 烽火通信科技股份有限公司 | Method and device for caching and scheduling multi-priority service at receiving end of switching chip |
CN114024923A (en) * | 2021-10-30 | 2022-02-08 | 江苏信而泰智能装备有限公司 | Multithreading message capturing method, electronic equipment and computer storage medium |
CN117424864A (en) * | 2023-12-18 | 2024-01-19 | 南京奕泰微电子技术有限公司 | Queue data management system and method for switch |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112804156A (en) * | 2019-11-13 | 2021-05-14 | 深圳市中兴微电子技术有限公司 | Congestion avoidance method and device and computer readable storage medium |
CN114006731B (en) * | 2021-09-30 | 2023-12-26 | 新华三信息安全技术有限公司 | Network attack processing method, device, equipment and machine-readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7383349B2 (en) * | 2002-06-04 | 2008-06-03 | Lucent Technologies Inc. | Controlling the flow of packets within a network node utilizing random early detection |
CN102413063A (en) * | 2012-01-12 | 2012-04-11 | 盛科网络(苏州)有限公司 | Method and system for dynamically adjusting allocation threshold value of output port resources |
CN103685062A (en) * | 2013-12-02 | 2014-03-26 | 华为技术有限公司 | Cache management method and device |
CN104426796A (en) * | 2013-08-21 | 2015-03-18 | 中兴通讯股份有限公司 | Congestion avoiding method and apparatus of router |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113904997A (en) * | 2021-10-21 | 2022-01-07 | 烽火通信科技股份有限公司 | Method and device for caching and scheduling multi-priority service at receiving end of switching chip |
CN113904997B (en) * | 2021-10-21 | 2024-02-23 | 烽火通信科技股份有限公司 | Method and device for caching and scheduling multi-priority service of receiving end of switching chip |
CN114024923A (en) * | 2021-10-30 | 2022-02-08 | 江苏信而泰智能装备有限公司 | Multithreading message capturing method, electronic equipment and computer storage medium |
CN117424864A (en) * | 2023-12-18 | 2024-01-19 | 南京奕泰微电子技术有限公司 | Queue data management system and method for switch |
CN117424864B (en) * | 2023-12-18 | 2024-02-27 | 南京奕泰微电子技术有限公司 | Queue data management system and method for switch |
Also Published As
Publication number | Publication date |
---|---|
WO2017107363A1 (en) | 2017-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106911740A (en) | A kind of method and apparatus of cache management | |
US9185047B2 (en) | Hierarchical profiled scheduling and shaping | |
CN105391567B (en) | Traffic management implementation method, device and the network equipment | |
CN104980367A (en) | Token bucket limiting speed method and apparatus | |
CN108476177A (en) | Data plane for processing function scalability | |
CN107547418B (en) | A kind of jamming control method and device | |
CN105763478A (en) | Token bucket algorithm-based satellite data ground transmission network flow control system | |
CN106059951A (en) | Transmission control method for DCN (Data Center Network) based on multilevel congestion feedback | |
CN101714947A (en) | Extensible full-flow priority dispatching method | |
CN108156628A (en) | A kind of method, apparatus and system of resource allocation | |
CN106464581A (en) | Data transmission method and system and data receiving device | |
CN104065588B (en) | A kind of device and method of data packet dispatching and caching | |
CN103685060B (en) | data packet sending method and device | |
CN104052676B (en) | A kind of data processing method of transmission path device and transmission path | |
CN109660468A (en) | A kind of port congestion management method, device and equipment | |
CN107733813A (en) | Message forwarding method and device | |
CN101707789B (en) | Method and system for controlling flow | |
CN108235382A (en) | A kind of method, node device and the server of transmission rate adjustment | |
CN111526169B (en) | Method, medium, server and computer device for transmitting data through network | |
CN105763375B (en) | A kind of data packet sending method, method of reseptance and microwave station | |
CN109995608B (en) | Network rate calculation method and device | |
CN111740922B (en) | Data transmission method, device, electronic equipment and medium | |
CN105340318B (en) | Transmit the determination method and device of congestion | |
CN109995667A (en) | The method and sending device of transmitting message | |
CN104243333B (en) | A kind of flow control methods of address analysis protocol message |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20170630 |