CN109495401B - Cache management method and device

Info

Publication number: CN109495401B
Application number: CN201811518672.6A
Authority: CN (China)
Prior art keywords: cache pool, index value, position index, pool, shared
Legal status: Active (granted)
Original language: Chinese (zh)
Other versions: CN109495401A
Inventor: 孙琳洋
Assignee (original and current): Maipu Communication Technology Co Ltd
Priority application: CN201811518672.6A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9005: Buffering arrangements using dynamic buffer space allocation
    • H04L 49/9047: Buffering arrangements including multiple buffers, e.g. buffer pools

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache management method and device, relating to the field of data communication. The method divides the cache pool for a specified message type into a local cache pool and a shared cache pool: the local cache pool is used to receive and send messages of the specified type, while the shared cache pool provides shared buffers for other messages when no free buffer can be obtained from their corresponding cache pools, ensuring normal processing under burst traffic. The method can also dynamically adjust the size of the shared cache pool, improving cache pool utilization and the reliability of device operation.

Description

Cache management method and device
Technical Field
The present invention relates to the field of data communications, and in particular, to a method and an apparatus for managing a cache.
Background
Network devices are key components of a data communication network; their data transceiving performance and operational stability directly determine the reliability of data communication. A network device forwards different types of messages, such as Ethernet messages and wide-area-network messages. A dedicated cache pool can be allocated for each message type at device initialization, and each type of message can only obtain buffers from its corresponding cache pool to carry the message. A network device usually allocates a larger cache pool for a specified message type and smaller cache pools for the other, non-specified message types it must forward.
When forwarding other message types such as multicast or mirrored traffic, the buffers in the corresponding cache pool are easily exhausted; once no free buffer is available in that pool, the related service functions fail. Meanwhile, a large number of free buffers may still exist in the cache pool of the specified message type, so the utilization of the cache pools becomes unbalanced and memory is wasted unnecessarily.
Disclosure of Invention
The invention provides a cache management method and device, which solve the problems of low memory utilization, message loss, and service interruption caused by unbalanced cache pool allocation or burst traffic.
In a first aspect, the present invention provides a method for managing a cache, including the following steps:
creating a corresponding cache pool for each type of message, and using part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool, where the number of buffers in the cache pool corresponding to the specified message type is larger than the number of buffers in the cache pools corresponding to other messages;
and when no free buffer can be obtained from the cache pool corresponding to another message, applying for a buffer from the shared cache pool.
Wherein, using part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool includes: presetting a first position index value; using all buffers whose position index values are less than or equal to the first position index value as a local cache pool for receiving and sending messages of the specified type; and using all buffers whose position index values are greater than the first position index value as the shared cache pool.
Using part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool further includes: presetting a second position index value, which indicates the minimum position index value reserved for shared buffers in the shared cache pool, where the second position index value is greater than the first position index value; and initializing a water line value to the second position index value, where the water line value indicates the minimum position index value of the buffers currently available for sharing in the shared cache pool.
The applying for the buffer area from the shared cache pool includes: obtaining a cache pool lock of the shared cache pool; acquiring an idle buffer area with a position index value greater than or equal to the water line value in the shared cache pool, and setting the idle buffer area to be in an occupied state; releasing the cache pool lock; wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
After initializing the water line value, the method further comprises: shrinking and/or expanding the shared cache pool;
Shrinking the shared cache pool specifically includes: obtaining the current maximum position index value of the local cache pool and assigning it to a third position index value; increasing the current maximum position index value of the local cache pool to obtain the latest maximum position index value of the local cache pool; if the latest maximum position index value of the local cache pool is smaller than the water line value, setting all buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value as part of the local cache pool; if the latest maximum position index value of the local cache pool is greater than or equal to the water line value, obtaining the cache pool lock of the shared cache pool, setting the water line value to the latest maximum position index value of the local cache pool plus one, setting all buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value as part of the local cache pool, and releasing the cache pool lock; where the latest maximum position index value of the local cache pool must be smaller than the maximum position index value of the cache pool corresponding to the specified message type;
Expanding the shared cache pool specifically includes: obtaining the current maximum position index value of the local cache pool and assigning it to a third position index value; decreasing the current maximum position index value of the local cache pool to obtain the latest maximum position index value of the local cache pool, which must be greater than or equal to the first position index value; obtaining the cache pool lock; setting all buffers whose position index values are greater than the latest maximum position index value of the local cache pool and less than or equal to the third position index value as part of the shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, setting the water line value to the second position index value, otherwise setting the water line value to the latest maximum position index value of the local cache pool plus one; and releasing the cache pool lock; where the cache pool lock is used for mutual exclusion on the shared cache pool.
In a second aspect, the present invention provides a cache management apparatus, which specifically includes:
the cache pool creating module is used for creating a corresponding cache pool for each type of message; the number of buffers in the cache pool corresponding to the specified message type is larger than the number of buffers in the cache pools corresponding to other messages;
the cache pool management module is used for taking part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool;
and the buffer application module is used for applying for a buffer from the shared cache pool when no free buffer can be obtained from the cache pools corresponding to other messages.
The cache pool management module is specifically used for presetting a first position index value; using all buffers whose position index values are less than or equal to the first position index value as a local cache pool for receiving and sending messages of the specified type; and using all buffers whose position index values are greater than the first position index value as the shared cache pool.
The cache pool management module is also used for presetting a second position index value, which indicates the minimum position index value reserved for shared buffers in the shared cache pool and is greater than the first position index value; and for initializing a water line value to the second position index value, where the water line value indicates the minimum position index value of the buffers currently available for sharing in the shared cache pool.
The buffer area application module is specifically configured to acquire a cache pool lock of the shared cache pool; acquiring an idle buffer area with a position index value greater than or equal to the water line value in the shared cache pool, and setting the idle buffer area to be in an occupied state; releasing the cache pool lock; wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
After the water line value is initialized, the cache pool management module is further specifically configured to shrink and/or expand the shared cache pool;
the method for reducing the shared cache pool comprises the following steps: acquiring a current maximum position index value of the local cache pool, and assigning the current maximum position index value to a third position index value; increasing the current maximum position index value of the local cache pool to obtain the latest maximum position index value of the local cache pool; if the latest maximum position index value of the local cache pool is smaller than the waterline value, setting all the buffer areas within the range from the maximum position index value larger than the third position index value to the maximum position index value smaller than or equal to the latest maximum position index value of the local cache pool as the local cache pool; if the latest maximum position index value of the local cache pool is greater than or equal to the water line value, acquiring a cache pool lock of the shared cache pool; setting the water line value as the latest maximum position index value of the local cache pool plus one; setting all the buffer areas from the third position index value to the maximum position index value which is less than or equal to the latest range of the local cache pool as a local cache pool; releasing the cache pool lock; the latest maximum position index value of the local cache pool is required to be smaller than the maximum position index value of the cache pool corresponding to the specified type message;
the method for expanding the shared cache pool comprises the following steps: acquiring the current maximum position index value of the local cache pool, and assigning the current maximum position index value to a third position index value; narrowing the current maximum position index value of the local cache pool to obtain the latest maximum position index value of the local cache pool; the latest maximum position index value of the local cache pool is required to be greater than or equal to the first position index value; acquiring the cache pool lock; setting all the buffer areas from the latest maximum position index value of the local cache pool to the third position index value of the local cache pool as a shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, setting the water line value as the second position index value; otherwise, setting the water line value as the latest maximum position index value of the local cache pool plus one; releasing the cache pool lock; wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
In summary, the present invention divides the cache pool of the specified message type into a local cache pool and a shared cache pool. The local cache pool is used for receiving and sending messages of the specified type, while the shared cache pool provides shared buffers for other messages when no free buffer can be obtained from their corresponding cache pools, ensuring normal processing of burst traffic. The method can also dynamically adjust the size of the shared cache pool, improving the utilization efficiency of the cache pools and the reliability of device operation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flowchart illustrating a cache management method according to an embodiment of the present invention;
Fig. 2 shows a cache pool corresponding to a specified message type according to an embodiment of the present invention;
Fig. 3 shows another cache pool corresponding to a specified message type according to an embodiment of the present invention;
Fig. 4 is a schematic diagram illustrating shrinking of the shared cache pool according to an embodiment of the present invention;
Fig. 5 is a schematic diagram illustrating another shrinking of the shared cache pool according to an embodiment of the present invention;
Fig. 6 is a schematic diagram illustrating expansion of the shared cache pool according to an embodiment of the present invention;
Fig. 7 is a schematic diagram illustrating another expansion of the shared cache pool according to an embodiment of the present invention;
Fig. 8 is a schematic diagram illustrating a cache management apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The following are detailed descriptions of the respective embodiments.
Example one
An embodiment of the present invention provides a cache management method, whose processing flow is shown in Fig. 1. The method includes the following steps:
Step S101: create a corresponding cache pool for each type of message, and use part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool.
The number of buffers in the cache pool corresponding to the specified message type is larger than the number of buffers in the cache pools corresponding to other messages.
In this embodiment, a buffer with a set number of bytes is used as the unit, and a corresponding cache pool is created for each type of message; the number of buffers in each cache pool may differ. The buffer size, that is, the set number of bytes, may be a power of 2. Each buffer stores a buffer descriptor and a message. The buffer descriptor includes the buffer's position index value, buffer attribute, and usage status. The position index value indicates the index of the buffer within its cache pool; it is a natural number, and position index values increase from zero starting at the first buffer of the pool. The buffer attribute indicates whether the buffer is currently available for sharing: when the attribute is local, the buffer is currently used only for processing messages of the specified type; when the attribute is shared, the buffer may currently be used for processing other messages. The usage status indicates whether the buffer is currently free: when the status is free, the buffer is not in use; when the status is occupied, the buffer is in use. To ensure the processing efficiency of the different message types, the cache pool for each message type may be allocated as physically contiguous memory.
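The buffer descriptor just described can be sketched as a small C structure. The patent publishes no code, so every name below, and the 2048-byte buffer size, is an illustrative assumption:

```c
#include <stdint.h>

/* Illustrative sketch of the buffer descriptor described above.
 * All names and the size constant are assumptions, not from the patent. */

#define BUF_SIZE 2048u              /* buffer size: a power of 2 */

enum buf_attr  { BUF_LOCAL = 0, BUF_SHARED = 1 };  /* buffer attribute */
enum buf_state { BUF_FREE  = 0, BUF_USED   = 1 };  /* usage status     */

struct buf_desc {
    uint32_t       index;  /* position index value within the cache pool      */
    enum buf_attr  attr;   /* local: usable only for the specified type       */
    enum buf_state state;  /* free or occupied                                */
};
```

A real implementation would place this descriptor at the head of each physically contiguous buffer, followed by the message payload.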
As a preferred embodiment of the present invention, the specified message type may be the Ethernet type. The cache pool corresponding to the specified message type is divided into a local cache pool and a shared cache pool according to a preset first position index value. As shown in Fig. 2, the cache pool corresponding to the specified message type has N+1 buffers, where 0, 1, 2 … N are the position index values of the buffers, N is a natural number, and the preset first position index value is K. All buffers with position index values less than or equal to K, that is, the buffers with position index values 0 to K, are used as the local cache pool for receiving and sending messages of the specified type. All buffers with position index values greater than K, that is, the buffers with position index values K+1 to N, are used as the shared cache pool. All buffers in the local cache pool have the local attribute, and all buffers in the shared cache pool have the shared attribute. If the cache pool corresponding to the specified message type is physically contiguous, the local cache pool may be allocated and released by a hardware FPA (Free Pool Allocator) unit.
After the cache pool corresponding to the specified message type is divided into a local cache pool and a shared cache pool by the preset first position index value, a second position index value may be preset. The second position index value indicates the minimum position index value reserved for shared buffers in the shared cache pool and is greater than the first position index value. A water line value is initialized to the second position index value; the water line value indicates the minimum position index value of the buffers currently available for sharing in the shared cache pool and may be greater than or equal to the second position index value. As shown in Fig. 3, if the preset second position index value is M and the water line value is initialized to M, the buffers with position index values M to N serve as the shared cache pool that provides shared buffers for other messages. The buffers with position index values greater than K and smaller than M form an elastic region between the local cache pool and the shared cache pool: although their attribute is shared, under normal conditions they are not provided as shared buffers for other messages.
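The division into local pool (indices 0 to K), elastic region (K+1 to M-1), and shareable range (M to N) can be captured in a few bookkeeping fields. `struct cache_pool` and `pool_init` are hypothetical names used only for this sketch:

```c
#include <stdint.h>

/* Hypothetical pool bookkeeping for the layout of Figs. 2 and 3.
 * The description requires K < M <= N. */
struct cache_pool {
    uint32_t n;          /* largest position index value, N              */
    uint32_t k;          /* first position index value, K (local: 0..K)  */
    uint32_t m;          /* second position index value, M               */
    uint32_t local_max;  /* current maximum index of the local pool      */
    uint32_t waterline;  /* minimum index currently usable for sharing   */
};

static int pool_init(struct cache_pool *p, uint32_t n, uint32_t k, uint32_t m)
{
    if (!(k < m && m <= n))
        return -1;                /* invalid division of the pool         */
    p->n = n;
    p->k = k;
    p->m = m;
    p->local_max = k;             /* buffers 0..K start as the local pool */
    p->waterline = m;             /* water line initialised to M          */
    return 0;
}
```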
Step S102: when no free buffer can be obtained from the cache pool corresponding to another message, apply for a buffer from the shared cache pool.
When no free buffer can be obtained from the cache pool corresponding to another message, the cache pool lock of the shared cache pool is obtained; a free buffer with a position index value greater than or equal to the water line value is acquired from the shared cache pool and set to the occupied state; and the cache pool lock is released. The cache pool lock is used for mutual exclusion on the shared cache pool. As shown in Fig. 3, when applying for a buffer from the shared cache pool, a free buffer is searched for among the buffers with position index values M to N. Conversely, when an applied buffer is no longer needed and must be released, its usage status is set back to free.
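Applying for and releasing a shared buffer under the cache pool lock might look as follows. The pthread mutex, the array-based pool, and all names are assumptions made for illustration:

```c
#include <pthread.h>
#include <stdint.h>

#define POOL_SIZE 8u                 /* tiny pool for illustration       */

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static uint8_t  used[POOL_SIZE];     /* usage status of each buffer      */
static uint32_t waterline = 4u;      /* minimum index usable for sharing */

/* Scan indices >= waterline for a free buffer, mark it occupied, and
 * return its position index value; return -1 if none is free. */
static int shared_buf_alloc(void)
{
    int idx = -1;
    pthread_mutex_lock(&pool_lock);      /* mutual exclusion on the pool */
    for (uint32_t i = waterline; i < POOL_SIZE; i++) {
        if (!used[i]) {
            used[i] = 1;
            idx = (int)i;
            break;
        }
    }
    pthread_mutex_unlock(&pool_lock);
    return idx;
}

/* Releasing a buffer only flips its usage status back to free. */
static void shared_buf_free(uint32_t idx)
{
    pthread_mutex_lock(&pool_lock);
    used[idx] = 0;
    pthread_mutex_unlock(&pool_lock);
}
```

Holding the lock only for the short scan-and-mark keeps contention low even when several message types fall back to the shared pool at once.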
As an embodiment of the present invention, according to the traffic received for each message type, the cache management method further includes shrinking the shared cache pool. The current maximum position index value of the local cache pool is obtained and assigned to a third position index value, and the current maximum position index value of the local cache pool is then increased to obtain the latest maximum position index value. Fig. 4 shows the local cache pool before and after shrinking the shared cache pool when the latest maximum position index value of the local cache pool is smaller than the water line value: before shrinking, the third position index value equals the current maximum position index value K of the local cache pool; the maximum position index value is adjusted to J, and since J is smaller than the water line value M, the latest maximum position index value of the local cache pool is J. All buffers with position index values greater than K and less than or equal to J are added to the local cache pool. The shared cache pool shrinks to the buffers with position index values J+1 to N, but the water line value is unchanged, and the range of shared buffers provided to other messages is still M to N.
As another embodiment of shrinking the shared cache pool, Fig. 5 shows the local cache pool before and after shrinking when the latest maximum position index value of the local cache pool is greater than or equal to the water line value: the third position index value equals the current maximum position index value K of the local cache pool; the maximum position index value is adjusted to L, and since L is greater than the water line value M, the latest maximum position index value of the local cache pool is L. The cache pool lock of the shared cache pool is obtained, the water line value is set to L+1, all buffers with position index values greater than K and less than or equal to L are added to the local cache pool, and the cache pool lock is released. The range of shared buffers provided to other messages is updated to L+1 to N, where L is smaller than N. When increasing the current maximum position index value of the local cache pool, the increase may be a preset number of buffers or half the current number of buffers in the shared cache pool.
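The two shrink cases (Fig. 4, where the new maximum stays below the water line, and Fig. 5, where it reaches or crosses it) reduce to a small update rule. The sketch below uses hypothetical names and omits the cache pool lock acquisition that the Fig. 5 path requires:

```c
#include <stdint.h>

/* Hypothetical pool state; see the description of Figs. 2 to 5. */
struct pool_state {
    uint32_t n, k, m;    /* N, first (K) and second (M) position index values */
    uint32_t local_max;  /* current maximum index of the local pool           */
    uint32_t waterline;  /* minimum index usable for sharing                  */
};

/* Shrink the shared pool by growing the local pool up to new_max.
 * Returns -1 if new_max does not grow the local pool or would reach N
 * (the latest maximum must stay below N). */
static int pool_shrink_shared(struct pool_state *p, uint32_t new_max)
{
    if (new_max <= p->local_max || new_max >= p->n)
        return -1;
    if (new_max < p->waterline) {
        /* Fig. 4 case: growth stays inside the elastic region, so the
         * water line, and the shareable range, are unchanged. */
        p->local_max = new_max;
    } else {
        /* Fig. 5 case: the local pool crosses the water line, so the
         * water line moves to new_max + 1 (done under the pool lock). */
        p->local_max = new_max;
        p->waterline = new_max + 1u;
    }
    return 0;
}
```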
As an embodiment of the present invention, according to the traffic received for each message type, the cache management method further includes expanding the shared cache pool. The current maximum position index value of the local cache pool is obtained and assigned to a third position index value, and the current maximum position index value of the local cache pool is then decreased to obtain the latest maximum position index value, which must be greater than or equal to the first position index value. The cache pool lock is obtained; all buffers with position index values greater than the latest maximum position index value of the local cache pool and less than or equal to the third position index value are set as part of the shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, the water line value is set to the second position index value, otherwise it is set to the latest maximum position index value of the local cache pool plus one; and the cache pool lock is released. The cache pool lock is used for mutual exclusion on the shared cache pool. Fig. 6 shows the local cache pool before and after expanding the shared cache pool: before expanding, the third position index value equals the current maximum position index value L of the local cache pool; the maximum position index value is adjusted to T, and since T is smaller than the second position index value M, the latest maximum position index value of the local cache pool is T.
The cache pool lock is obtained, all buffers with position index values greater than T and less than or equal to L are set as part of the shared cache pool, the water line value is set to M, and the cache pool lock is released. The range of shared buffers provided to other messages is updated to M to N.
As another embodiment of expanding the shared cache pool, Fig. 7 shows the local cache pool before and after expanding when the latest maximum position index value of the local cache pool is greater than or equal to the second position index value: before expanding, the third position index value equals the current maximum position index value L of the local cache pool. The maximum position index value of the local cache pool is decreased to S, and since S is greater than the second position index value M, the latest maximum position index value of the local cache pool is S. The cache pool lock is obtained, all buffers with position index values greater than S and less than or equal to L are set as part of the shared cache pool, the water line value is set to S+1, and the cache pool lock is released. The range of shared buffers provided to other messages is updated to S+1 to N. When decreasing the current maximum position index value of the local cache pool, the decrease may be a preset number of buffers or the current number of buffers in the shared cache pool. In this embodiment, K, M, L, J, S, and T are all natural numbers smaller than N.
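The expand cases of Figs. 6 and 7 are symmetric to the shrink cases: the local pool's maximum index decreases, and the water line falls back either to M or to the new maximum plus one. Again, names are hypothetical and the lock handling shown in the text is elided:

```c
#include <stdint.h>

/* Hypothetical pool state; see the description of Figs. 2 to 7. */
struct pool_state {
    uint32_t n, k, m;    /* N, first (K) and second (M) position index values */
    uint32_t local_max;  /* current maximum index of the local pool           */
    uint32_t waterline;  /* minimum index usable for sharing                  */
};

/* Expand the shared pool by shrinking the local pool down to new_max.
 * Returns -1 if new_max does not shrink the local pool or would fall
 * below K (the latest maximum must stay >= the first index value). */
static int pool_expand_shared(struct pool_state *p, uint32_t new_max)
{
    if (new_max >= p->local_max || new_max < p->k)
        return -1;
    p->local_max = new_max;
    /* Fig. 6 case (new_max < M): water line returns to M.
     * Fig. 7 case (new_max >= M): water line becomes new_max + 1. */
    p->waterline = (new_max < p->m) ? p->m : new_max + 1u;
    return 0;
}
```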
In the above method, part of the buffers in the cache pool corresponding to the specified message type is used as a shared cache pool that supplements the cache pools of other messages when they have no free buffers, improving the reliability of other messages under burst traffic without increasing memory consumption. In addition, dynamically adjusting the shared cache pool improves the utilization efficiency of the cache pools. The method of this embodiment is not tied to a specific network processing chip; it can be applied to network devices based on different network processors and therefore has good practicability.
Example two
An embodiment of the present invention provides a cache management apparatus 80, a schematic diagram of which is shown in Fig. 8, including:
a cache pool creating module 801, configured to create a corresponding cache pool for each type of message, where the number of buffers in the cache pool corresponding to the specified message type is larger than the number of buffers in the cache pools corresponding to other messages;
a cache pool management module 802, configured to take part of the buffers in the cache pool corresponding to the specified message type as a shared cache pool;
a buffer application module 803, configured to apply for a buffer from the shared cache pool when no free buffer can be obtained from the cache pool corresponding to another message.
The cache pool creating module 801 is specifically configured to create a corresponding cache pool for each type of message, using a buffer with a set number of bytes as the unit; the number of buffers in each cache pool may differ. The buffer size, that is, the set number of bytes, may be a power of 2. Each buffer stores a buffer descriptor and a message. The buffer descriptor includes the buffer's position index value, buffer attribute, and usage status. The position index value indicates the index of the buffer within its cache pool; it is a natural number, and position index values increase from zero starting at the first buffer of the pool. The buffer attribute indicates whether the buffer is currently available for sharing: when the attribute is local, the buffer is currently used only for processing messages of the specified type; when the attribute is shared, the buffer may currently be used for processing other messages. The usage status indicates whether the buffer is currently free: when the status is free, the buffer is not in use; when the status is occupied, the buffer is in use. To ensure the processing efficiency of the different message types, the cache pool for each message type may be allocated as physically contiguous memory.
The cache pool management module 802 is specifically configured to preset a first position index value, use all buffers whose position index values are less than or equal to the first position index value as a local cache pool for receiving and sending the specified type of packet, and use all buffers whose position index values are greater than the first position index value as the shared cache pool.
All buffers in the local cache pool have the local attribute, and all buffers in the shared cache pool have the shared attribute. If the cache pool corresponding to the specified type of packet occupies contiguous physical addresses, the local cache pool can be allocated and freed by the hardware FPA (Free Pool Allocator).
After dividing the cache pool corresponding to the specified type of packet into a local cache pool and a shared cache pool according to the preset first position index value, the cache pool management module 802 is further configured to preset a second position index value, which indicates the minimum position index value initially set for the shared buffers in the shared cache pool; the second position index value is greater than the first position index value. The module then initializes a water line value to the second position index value; the water line value indicates the minimum position index value of the buffers currently available for sharing in the shared cache pool, and may subsequently be greater than or equal to the second position index value.
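One way to read the partitioning above: indices up to the first value form the local pool, the water line starts at the second value, and the indices between them form a reserve that later adjustment can draw on. A minimal sketch; the function and variable names are illustrative assumptions:

```python
def partition_pool(num_buffers, first_idx, second_idx):
    # Buffers with index <= first_idx form the local pool; the rest carry the
    # shared attribute. The water line starts at second_idx, so indices in
    # (first_idx, second_idx) are shared in attribute but not yet handed out,
    # leaving room to grow or shrink the shared pool later.
    assert 0 <= first_idx < second_idx < num_buffers
    attrs = ["local" if i <= first_idx else "shared" for i in range(num_buffers)]
    waterline = second_idx  # minimum index currently available for sharing
    return attrs, waterline
```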
The buffer application module 803 is specifically configured to: obtain the cache pool lock of the shared cache pool; acquire a free buffer in the shared cache pool whose position index value is greater than or equal to the water line value and set that buffer to the occupied state; and release the cache pool lock. The cache pool lock provides mutual exclusion on the shared cache pool. In this embodiment, when a buffer applied for by another packet type has been used and needs to be released, the buffer application module 803 is further configured to set the status of that buffer back to free.
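A sketch of the apply/release path under the cache pool lock, modeling usage states as a Python list guarded by `threading.Lock` (an assumption standing in for whatever mutual-exclusion primitive the target platform actually provides):

```python
import threading

def apply_shared_buffer(status, waterline, lock):
    """Return the index of a free buffer at or above the water line, or None."""
    with lock:  # cache pool lock: mutual exclusion on the shared cache pool
        for idx in range(waterline, len(status)):
            if status[idx] == "free":
                status[idx] = "occupied"
                return idx
    return None  # no free shared buffer: the application fails

def release_shared_buffer(status, idx, lock):
    # When another packet type is done with a borrowed buffer,
    # it is simply marked free again.
    with lock:
        status[idx] = "free"
```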
As an embodiment of the present invention, after the water line value is initialized, the cache pool management module 802 may be further configured to narrow the shared cache pool as follows. Obtain the current maximum position index value of the local cache pool and assign it to a third position index value; then increase the maximum position index value of the local cache pool to obtain its latest maximum position index value. If the latest maximum position index value of the local cache pool is smaller than the water line value, set all buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value as part of the local cache pool. If the latest maximum position index value of the local cache pool is greater than or equal to the water line value, obtain the cache pool lock of the shared cache pool, set the water line value to the latest maximum position index value of the local cache pool plus one, set all buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value as part of the local cache pool, and release the cache pool lock. The latest maximum position index value of the local cache pool must remain smaller than the maximum position index value of the cache pool corresponding to the specified type of packet. When the maximum position index value of the local cache pool is increased, the step may be a preset number of buffers or half of the current number of buffers in the shared cache pool.
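The narrowing procedure reduces to index arithmetic, and the key point is that the lock is only needed when the local pool grows past the water line into buffers that are currently sharable. A sketch under that reading; `attrs`, `shrink_shared_pool`, and the parameter names are illustrative assumptions:

```python
import threading

def shrink_shared_pool(local_max, waterline, pool_max, step, lock, attrs):
    third = local_max            # third position index value
    new_max = local_max + step   # step: a preset count, or half the shared pool size
    assert new_max < pool_max    # must stay inside the specified type's cache pool
    if new_max < waterline:
        # The reclaimed buffers lie below the water line, so no buffer that is
        # currently sharable is touched and the lock can be skipped.
        for i in range(third + 1, new_max + 1):
            attrs[i] = "local"
    else:
        with lock:               # reclaiming sharable buffers needs mutual exclusion
            waterline = new_max + 1
            for i in range(third + 1, new_max + 1):
                attrs[i] = "local"
    return new_max, waterline
```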
As another embodiment of the present invention, after the water line value is initialized, the cache pool management module 802 may be further configured to expand the shared cache pool as follows. Obtain the current maximum position index value of the local cache pool and assign it to a third position index value; then decrease the maximum position index value of the local cache pool to obtain its latest maximum position index value, which must remain greater than or equal to the first position index value. Obtain the cache pool lock; set all buffers whose position index values are greater than the latest maximum position index value of the local cache pool and less than or equal to the third position index value as part of the shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, set the water line value to the second position index value, otherwise set the water line value to the latest maximum position index value of the local cache pool plus one; and release the cache pool lock. The cache pool lock provides mutual exclusion on the shared cache pool. When the maximum position index value of the local cache pool is decreased, the step may be a preset number of buffers or the current number of buffers in the shared cache pool.
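Expansion is symmetric: buffers are returned from the local pool to the shared pool under the lock, and the water line never falls below the second position index value. A sketch with illustrative names (`expand_shared_pool` and its parameters are assumptions, not the patent's own identifiers):

```python
import threading

def expand_shared_pool(local_max, first_idx, second_idx, step, lock, attrs):
    third = local_max            # third position index value
    new_max = local_max - step   # step: a preset count of buffers
    assert new_max >= first_idx  # the local pool never shrinks below first_idx
    with lock:                   # returned buffers become sharable: take the lock
        for i in range(new_max + 1, third + 1):
            attrs[i] = "shared"
        # Indices below second_idx are not made immediately sharable,
        # so the water line is clamped at second_idx.
        waterline = second_idx if new_max < second_idx else new_max + 1
    return new_max, waterline
```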
In this method, part of the cache pool corresponding to the specified type of packet is set aside as a shared cache pool that serves as a supplement when the cache pools of other packet types have no free buffers, which improves the reliability of those packet types under bursty traffic without increasing memory consumption. In addition, dynamically adjusting the shared cache pool improves the utilization efficiency of the cache pool. The method of this embodiment is not tied to a specific network processing chip, can be applied to network devices based on different network processors, and therefore has good practicability.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any changes or substitutions that would readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for managing a cache, the method comprising:
respectively creating corresponding cache pools for different types of messages, and dividing the cache pool corresponding to a specified type message into a local cache pool, an elastic region, and a shared cache pool; wherein the local cache pool is used for receiving and sending the specified type message, the elastic region is used for dynamically adjusting the size of the shared cache pool, and the shared cache pool is used for providing shared buffers for other messages; and the number of the buffers in the cache pool corresponding to the specified type message is greater than the number of the buffers in the cache pools corresponding to the other messages;
and when no free buffer can be applied for in the cache pools corresponding to the other messages, applying for a buffer from the shared cache pool.
2. The method according to claim 1, wherein the dividing the cache pool corresponding to the specified type message into a local cache pool, an elastic region, and a shared cache pool comprises:
presetting a first position index value, and taking all the buffers whose position index values are less than or equal to the first position index value as the local cache pool; presetting a second position index value, the second position index value indicating the minimum position index value set for the shared buffers in the shared cache pool, wherein the second position index value is greater than the first position index value; and initializing all the buffers whose position index values are greater than the first position index value and smaller than the second position index value as the elastic region.
3. The method of claim 2, further comprising:
initializing a water line value to the second position index value, wherein the water line value is used for indicating the minimum position index value of the buffers in the shared cache pool that can be used for sharing.
4. The method of claim 3, wherein said applying for a buffer from said shared cache pool comprises:
obtaining a cache pool lock of the shared cache pool; acquiring a free buffer in the shared cache pool whose position index value is greater than or equal to the water line value, and setting the free buffer to an occupied state; and releasing the cache pool lock; wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
5. The method of claim 3, wherein the elastic region being used to dynamically adjust the size of the shared cache pool comprises: narrowing and/or expanding the shared cache pool;
wherein the narrowing the shared cache pool specifically comprises:
acquiring the current maximum position index value of the local cache pool, and assigning it to a third position index value; increasing the current maximum position index value of the local cache pool to obtain a latest maximum position index value of the local cache pool;
if the latest maximum position index value of the local cache pool is smaller than the water line value, setting all the buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value of the local cache pool as the local cache pool;
if the latest maximum position index value of the local cache pool is greater than or equal to the water line value, acquiring a cache pool lock of the shared cache pool; setting the water line value to the latest maximum position index value of the local cache pool plus one; setting all the buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value of the local cache pool as the local cache pool; and releasing the cache pool lock; wherein the latest maximum position index value of the local cache pool is required to be smaller than the maximum position index value of the cache pool corresponding to the specified type message;
wherein the expanding the shared cache pool specifically comprises:
acquiring the current maximum position index value of the local cache pool, and assigning it to a third position index value; decreasing the current maximum position index value of the local cache pool to obtain a latest maximum position index value of the local cache pool; wherein the latest maximum position index value of the local cache pool is required to be greater than or equal to the first position index value;
acquiring the cache pool lock; setting all the buffers whose position index values are greater than the latest maximum position index value of the local cache pool and less than or equal to the third position index value as the shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, setting the water line value to the second position index value, otherwise setting the water line value to the latest maximum position index value of the local cache pool plus one; and releasing the cache pool lock;
wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
6. An apparatus for managing a cache, the apparatus comprising:
the cache pool creating module is used for respectively creating corresponding cache pools for different types of messages; the number of the buffer areas in the cache pool corresponding to the specified type message is larger than the number of the buffer areas in the cache pools corresponding to other messages;
the cache pool management module is used for dividing the cache pool corresponding to the specified type message into a local cache pool, an elastic region, and a shared cache pool; the local cache pool is used for receiving and sending the specified type message; the elastic region is used for dynamically adjusting the size of the shared cache pool; the shared cache pool is used for providing shared buffers for other messages;
and the buffer application module is used for applying for a buffer from the shared cache pool when no free buffer can be applied for in the cache pools corresponding to the other messages.
7. The apparatus of claim 6, wherein the cache pool management module is specifically configured to: preset a first position index value; take all the buffers whose position index values are less than or equal to the first position index value as the local cache pool; preset a second position index value, the second position index value indicating the minimum position index value set for the shared buffers in the shared cache pool, wherein the second position index value is greater than the first position index value; and initialize all the buffers whose position index values are greater than the first position index value and smaller than the second position index value as the elastic region.
8. The apparatus of claim 7, wherein the cache pool management module is further configured to initialize a water line value to the second position index value, the water line value indicating the minimum position index value of the buffers in the shared cache pool that can be used for sharing.
9. The apparatus of claim 8, wherein the buffer application module is specifically configured to: obtain a cache pool lock of the shared cache pool; acquire a free buffer in the shared cache pool whose position index value is greater than or equal to the water line value, and set the free buffer to an occupied state; and release the cache pool lock; wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
10. The apparatus according to claim 8, wherein the cache pool management module is specifically configured to narrow and/or expand the shared cache pool;
wherein narrowing the shared cache pool comprises:
acquiring the current maximum position index value of the local cache pool, and assigning it to a third position index value; increasing the current maximum position index value of the local cache pool to obtain a latest maximum position index value of the local cache pool;
if the latest maximum position index value of the local cache pool is smaller than the water line value, setting all the buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value of the local cache pool as the local cache pool;
if the latest maximum position index value of the local cache pool is greater than or equal to the water line value, acquiring a cache pool lock of the shared cache pool; setting the water line value to the latest maximum position index value of the local cache pool plus one; setting all the buffers whose position index values are greater than the third position index value and less than or equal to the latest maximum position index value of the local cache pool as the local cache pool; and releasing the cache pool lock; wherein the latest maximum position index value of the local cache pool is required to be smaller than the maximum position index value of the cache pool corresponding to the specified type message;
the method for expanding the shared cache pool comprises the following steps:
acquiring a current maximum position index value of the local cache pool, and assigning the current maximum position index value to a third position index value; narrowing the current maximum position index value of the local cache pool to obtain the latest maximum position index value of the local cache pool; the latest maximum position index value of the local cache pool is required to be greater than or equal to the first position index value;
acquiring the cache pool lock; setting all the buffer areas from the latest maximum position index value of the local cache pool to the third position index value of the local cache pool as a shared cache pool; if the latest maximum position index value of the local cache pool is smaller than the second position index value, setting the water line value as the second position index value; otherwise, setting the water line value as the latest maximum position index value of the local cache pool plus one; releasing the cache pool lock;
wherein the cache pool lock is used for mutual exclusion processing of the shared cache pool.
CN201811518672.6A 2018-12-13 2018-12-13 Cache management method and device Active CN109495401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811518672.6A CN109495401B (en) 2018-12-13 2018-12-13 Cache management method and device


Publications (2)

Publication Number Publication Date
CN109495401A CN109495401A (en) 2019-03-19
CN109495401B true CN109495401B (en) 2022-06-24

Family

ID=65709978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811518672.6A Active CN109495401B (en) 2018-12-13 2018-12-13 Cache management method and device

Country Status (1)

Country Link
CN (1) CN109495401B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112286679B (en) * 2020-10-20 2022-10-21 烽火通信科技股份有限公司 DPDK-based inter-multi-core buffer dynamic migration method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1798094A (en) * 2004-12-23 2006-07-05 华为技术有限公司 Method of using buffer area
CN102088395A (en) * 2009-12-02 2011-06-08 杭州华三通信技术有限公司 Method and device for adjusting media data cache
CN102263701A (en) * 2011-08-19 2011-11-30 中兴通讯股份有限公司 Queue regulation method and device
CN103164278A (en) * 2011-12-09 2013-06-19 沈阳高精数控技术有限公司 Real-time dynamic memory manager achieving method for multi-core processor
US8930627B2 (en) * 2012-06-14 2015-01-06 International Business Machines Corporation Mitigating conflicts for shared cache lines
CN104394096A (en) * 2014-12-11 2015-03-04 福建星网锐捷网络有限公司 Multi-core processor based message processing method and multi-core processor
CN105610729A (en) * 2014-11-19 2016-05-25 中兴通讯股份有限公司 Buffer allocation method, buffer allocation device and network processor
CN106330770A (en) * 2015-06-29 2017-01-11 深圳市中兴微电子技术有限公司 Shared cache distribution method and device
CN107347039A (en) * 2016-05-05 2017-11-14 深圳市中兴微电子技术有限公司 A kind of management method and device in shared buffer memory space
CN107515785A (en) * 2016-06-16 2017-12-26 大唐移动通信设备有限公司 A kind of EMS memory management process and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701019A (en) * 2014-11-25 2016-06-22 阿里巴巴集团控股有限公司 Memory management method and memory management device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 610041 nine Xing Xing Road 16, hi tech Zone, Sichuan, Chengdu

Patentee after: MAIPU COMMUNICATION TECHNOLOGY Co.,Ltd.

Address before: 610041 15-24 floor, 1 1 Tianfu street, Chengdu high tech Zone, Sichuan

Patentee before: MAIPU COMMUNICATION TECHNOLOGY Co.,Ltd.
