CN107315622B - Cache management method and device - Google Patents


Info

Publication number
CN107315622B
CN107315622B (application CN201710464931.0A)
Authority
CN
China
Prior art keywords
socket
queue
cpu
cache
software queue
Prior art date
Legal status
Active
Application number
CN201710464931.0A
Other languages
Chinese (zh)
Other versions
CN107315622A (en)
Inventor
胡军
任红军
秦正
Current Assignee
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd filed Critical Hangzhou DPTech Technologies Co Ltd
Priority to CN201710464931.0A
Publication of CN107315622A
Application granted
Publication of CN107315622B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G06F 9/5038: Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/5044: Allocation of resources to service a request, considering hardware capabilities
    • G06F 9/5055: Allocation of resources to service a request, considering software capabilities, i.e. software resources associated or available to the machine
    • G06F 8/434: Compilation; checking; contextual analysis; dependency analysis; data or control flow analysis; pointers; aliasing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a cache management method and apparatus, applied to a network device running a Linux system. The method comprises the following steps: after processing the data packet in the memory pointed to by the pointer of a socket cache, a target CPU recycles the socket cache; it reads an identification field of the socket cache, which carries the identifier of the CPU to which the socket cache belongs, and determines whether the socket cache corresponds to the target CPU; if not, the socket cache is added to the second software queue of the corresponding CPU based on the CPU identifier in the identification field; if so, the socket cache is added to the first software queue corresponding to the target CPU. This technical scheme greatly reduces the system overhead the network device incurs in locking and unlocking software queues, and thereby improves the device's packet-processing performance.

Description

Cache management method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for cache management.
Background
In a network device running a Linux system, the socket cache (Linux's sk_buff structure) is the core data structure: the encapsulation and decapsulation of every data packet in the Linux network stack is performed on top of it. While processing a data packet, the Linux system passes the packet's information around through a pointer to the socket cache.
In a network device with a multi-core CPU running a Linux system, each CPU is pre-allocated a software queue, and each software queue stores a number of socket caches. The network card of the network device corresponds to a plurality of hardware queues, each of which corresponds to one software queue.
When the network card receives a data packet, it calculates the packet's five-tuple with a preset algorithm, determines the hardware queue that will store the packet, stores the packet in that hardware queue, and stores the packet in the memory pointed to by the pointer of a socket cache that the CPU allocated from the software queue corresponding to that hardware queue. The CPU corresponding to the software queue may then process the packet.
If the CPU successfully processes the packet, the CPU may recycle the socket cache and add the socket cache to the software queue.
If the CPU fails to process the packet, it calculates the packet's five-tuple with a preset algorithm, determines the designated CPU that should process the packet, and then hands the socket cache to that designated CPU in the network device for processing. After the designated CPU processes the data packet in the memory pointed to by the pointer of the socket cache, it recycles the socket cache and adds the socket cache to the software queue.
While a designated CPU is adding the socket cache to the software queue, the CPU corresponding to that software queue may be operating on the queue at the same time, and concurrent operations by multiple CPUs on the same software queue may produce errors.
In the prior art, the errors that concurrent operations may produce are avoided by introducing a locking mechanism: when any CPU operates on a software queue, it locks the queue to prevent other CPUs from operating on it concurrently, and unlocks the queue after the operation completes.
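The locked scheme just described can be sketched in user-space C. The names (`sw_queue`, `sw_queue_put`), the fixed queue depth, and the use of a POSIX mutex in place of a kernel lock are all assumptions made for illustration; the point is that every CPU returning a recycled socket cache pays the lock and unlock cost, even for a queue it owns:

```c
#include <pthread.h>
#include <stddef.h>

#define QUEUE_DEPTH 256   /* illustrative depth */

/* One software queue shared by several CPUs: every enqueue of a
 * recycled socket cache must take the lock. */
struct sw_queue {
    pthread_mutex_t lock;
    void *slots[QUEUE_DEPTH];   /* recycled socket-cache pointers */
    size_t count;
};

#define SW_QUEUE_INIT { PTHREAD_MUTEX_INITIALIZER, {0}, 0 }

/* Returns 1 on success, 0 if the queue is full. */
int sw_queue_put(struct sw_queue *q, void *skb)
{
    int ok = 0;
    pthread_mutex_lock(&q->lock);     /* system overhead on every put */
    if (q->count < QUEUE_DEPTH) {
        q->slots[q->count++] = skb;
        ok = 1;
    }
    pthread_mutex_unlock(&q->lock);
    return ok;
}
```

Under contention, the time spent acquiring and releasing this lock is exactly the overhead the application sets out to eliminate for the common case.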
However, in practical applications, the locking mechanism increases the system overhead of the network device, and seriously affects the processing performance of the network device on the data packet.
Disclosure of Invention
In view of this, the present application provides a cache management method and apparatus, so as to solve the problem that, although a locking mechanism avoids the errors that concurrent operations may produce, it increases the system overhead of the network device and seriously degrades the device's packet-processing performance.
Specifically, the method is realized through the following technical scheme:
A cache management method is applied to a network device running a Linux system. The network device has a multi-core CPU and a network card, and each CPU is pre-allocated a corresponding first software queue and a corresponding second software queue, the first software queue storing a number of socket caches. The network card corresponds to a plurality of hardware queues, each of which corresponds to one CPU. The method comprises the following steps:
after processing the data packet in the memory pointed by the pointer of the socket cache, the target CPU recovers the socket cache;
reading an identification field of the socket cache, and determining whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs;
if not, based on the CPU identification in the identification field, adding the socket cache into the second software queue of the corresponding CPU;
and if so, adding the socket cache into the first software queue corresponding to the target CPU.
In the method of cache management, the method further comprises:
before adding the socket cache to the first software queue or the second software queue, determining whether a software queue is the first software queue or the second software queue according to the queue identifier of that software queue.
In the cache management method, the adding the socket cache to the second software queue of the corresponding CPU includes:
determining whether the second software queue is locked;
if the second software queue is not locked, locking the second software queue, and adding the socket cache into the second software queue of the corresponding CPU;
and after the socket cache is added into the second software queue, unlocking the second software queue.
In the method of cache management, the method further comprises:
and taking out a specified number of socket caches from the first software queue in advance and storing them in the corresponding hardware queue, so that after the network card stores a data packet in the hardware queue corresponding to the target CPU, the data packet is stored in the memory pointed to by the pointer of one of the socket caches.
In the method of cache management, the method further comprises:
periodically checking the number of the socket caches available in the hardware queue;
when the number of the socket caches available in the hardware queue is smaller than the specified number, taking out a plurality of socket caches from the first software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
In the method of cache management, the method further comprises:
when the socket caches are not available in the first software queue, taking out a plurality of socket caches from the second software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
A cache management apparatus is applied to a network device running a Linux system. The network device has a multi-core CPU and a network card, and each CPU is pre-allocated a corresponding first software queue and a corresponding second software queue, the first software queue storing a number of socket caches. The network card corresponds to a plurality of hardware queues, each of which corresponds to one CPU. The apparatus comprises:
a recovery unit, configured to recover the socket cache after the target CPU processes the data packet in the memory pointed by the pointer of the socket cache;
a determining unit, configured to read an identifier field of the socket cache, and determine whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs;
a joining unit, configured to: if the socket cache does not correspond to the target CPU, add the socket cache to the second software queue of the corresponding CPU based on the CPU identifier in the identification field; and if it does, add the socket cache to the first software queue corresponding to the target CPU.
In the apparatus for cache management, the determining unit is further configured to:
before adding the socket cache into the first software queue or the second software queue, determining that the software queue is the first software queue or the second software queue according to a queue identifier of the software queue.
In the apparatus for cache management, the joining unit is further configured to:
determining whether the second software queue is locked;
if the second software queue is not locked, locking the second software queue, and adding the socket cache into the second software queue of the corresponding CPU;
and after the socket cache is added into the second software queue, unlocking the second software queue.
In the apparatus for cache management, the apparatus further comprises:
and a storage unit, configured to take out a specified number of socket caches from the first software queue in advance and store them in the corresponding hardware queue, so that after the network card stores a data packet in the hardware queue corresponding to the target CPU, the data packet is stored in the memory pointed to by the pointer of one of the socket caches.
In the apparatus for cache management, the apparatus further comprises:
a checking unit, configured to periodically check the number of available socket caches in the hardware queue;
the storing unit is further configured to, when the number of available socket caches in the hardware queue is smaller than the specified number, take out a plurality of socket caches from the first software queue and store the socket caches in the hardware queue, so that the number of available socket caches in the hardware queue is equal to the specified number.
In the apparatus for cache management, the storage unit is further configured to:
when the socket caches are not available in the first software queue, taking out a plurality of socket caches from the second software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
In the embodiments of the present application, after a target CPU of a network device running a Linux system processes the data packet in the memory pointed to by the pointer of a socket cache, it recycles the socket cache. It can then read the identification field of the socket cache, which carries the identifier of the CPU to which the socket cache belongs, and determine whether the socket cache corresponds to the target CPU. If not, the socket cache is added to the second software queue of the corresponding CPU based on the CPU identifier in the identification field; if so, the socket cache is added to the first software queue corresponding to the target CPU.
In this application, each CPU of the network device's multi-core CPU is pre-allocated two software queues. After a target CPU recycles a socket cache, it determines from the CPU identifier carried by the socket cache whether the cache corresponds to itself, and then, according to the result, adds the cache either to its own first software queue or to the second software queue of another CPU. The first software queue of each CPU is therefore never operated by other CPUs and needs no locking mechanism; and because most of the socket caches a CPU recycles after processing packets are its own, the network device greatly reduces the system overhead generated in locking and unlocking software queues and improves its packet-processing performance.
Drawings
FIG. 1 is a prior art apparatus architecture diagram;
FIG. 2 is an interactive schematic diagram of a packet processing process of the prior art;
FIG. 3 is a diagram of an apparatus architecture shown in the present application;
FIG. 4 is an interactive schematic diagram of a packet processing process shown in the present application;
FIG. 5 is a flow chart illustrating a method of cache management according to the present application;
FIG. 6 is a block diagram illustrating an embodiment of an apparatus for cache management;
fig. 7 is a hardware configuration diagram of a cache management apparatus according to the present application.
Detailed Description
To make the technical solutions in the embodiments of the present invention better understood, and to make the above objects, features and advantages more comprehensible, the prior art and the technical solutions in the embodiments are described below with reference to the accompanying drawings.
A network device running a Linux system passes packet information around through the socket cache data structure. After the network card of the device receives a data packet, it stores the packet in the device's memory and points the pointer of a socket cache at that memory address; subsequent packet processing obtains the packet from memory through the pointer of the socket cache.
Referring to fig. 1, which is a device architecture diagram in the prior art, as shown in fig. 1, for a network device with a multi-core CPU of a Linux system, a software queue is pre-allocated to each CPU, where each software queue may store a plurality of socket caches. The network card of the network device corresponds to a plurality of hardware queues, wherein each hardware queue corresponds to a software queue. Each CPU can take out a specified number of socket caches from the software queue in advance and store the socket caches in the hardware queue corresponding to the software queue.
When the network card receives a data packet, it may calculate a five-tuple of the data packet based on a preset algorithm (e.g., a hash algorithm), determine a hardware queue in which the data packet is stored, then store the data packet into the hardware queue, and store the data packet into a memory to which a pointer of the socket buffer points based on a socket buffer allocated by the CPU from a software queue corresponding to the hardware queue. The CPU corresponding to the software queue may subsequently obtain the packet from the memory pointed to by the pointer of the socket cache, and process the packet.
On one hand, if the CPU successfully processes the packet, the CPU may recycle the socket cache and rejoin the recycled socket cache in the software queue.
On the other hand, each CPU of the multi-core CPU may be assigned packets of different services, so a CPU may fail to process a given packet. If it does, it calculates the packet's five-tuple with a preset algorithm (such as a hash algorithm), determines the designated CPU that should process the packet, and hands the socket cache over to that designated CPU of the network device for processing; the designated CPU is a pre-configured CPU capable of processing the packet. After the designated CPU finishes processing the packet, it recycles the socket cache and adds the socket cache to the software queue.
Referring to fig. 2, which is an interactive schematic diagram of a packet processing process in the prior art: as shown in fig. 2, after CPU M processes a packet in the memory pointed to by the pointer of a socket cache, if processing succeeds, CPU M directly recycles the socket cache and adds it to the software queue Ring M corresponding to CPU M;
if processing fails, the socket cache is handed over to CPU N for processing, and after CPU N successfully processes the packet, it recycles the socket cache and adds it to the software queue Ring M corresponding to CPU M.
When CPU N adds the socket cache to software queue Ring M, CPU M may be operating on Ring M at the same time, and concurrent operations by multiple CPUs on the same software queue may produce errors.
The prior art avoids these errors by introducing a locking mechanism: when any CPU operates on a software queue, it locks the queue so that no other CPU can operate on it at the same time; after the operation completes, the CPU unlocks the queue, and the unlocked queue can again be operated by other CPUs.
However, in practical applications, the locking mechanism increases the system overhead of the network device, and seriously affects the processing performance of the network device on the data packet.
To solve the above problems, the technical solution of the embodiments of the present application pre-allocates two software queues to each CPU of the network device's multi-core CPU: one queue is operated only by its corresponding CPU, while the other may be operated by other CPUs. The queue owned exclusively by each CPU needs no locking mechanism, and because each CPU usually processes most of its own packets successfully, removing locking and unlocking from that queue greatly reduces the system overhead the network device spends on software-queue locking.
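A minimal sketch of this per-CPU layout, with all names, the core count, and the queue depth assumed purely for illustration:

```c
#include <pthread.h>
#include <stddef.h>

#define NR_CPUS    4     /* illustrative core count */
#define RING_DEPTH 256   /* illustrative queue depth */

struct ring {
    void *slots[RING_DEPTH];   /* recycled socket-cache pointers */
    size_t count;
};

/* Two queues per CPU: `first` is touched only by its owner CPU, so it
 * needs no lock; `second` may be pushed into by any CPU and keeps one. */
struct per_cpu_queues {
    struct ring     first;
    struct ring     second;
    pthread_mutex_t second_lock;
};

static struct per_cpu_queues cpu_q[NR_CPUS];

void queues_init(void)
{
    for (int i = 0; i < NR_CPUS; i++) {
        cpu_q[i].first.count = 0;
        cpu_q[i].second.count = 0;
        pthread_mutex_init(&cpu_q[i].second_lock, NULL);
    }
}
```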
Referring to fig. 3, which is a device architecture diagram of the present application: as shown in fig. 3, two software queues are pre-allocated to each CPU of a network device with a multi-core CPU running a Linux system. In the initial state, one software queue Ring stores a number of socket caches while the other is empty. The network card of the device corresponds to a plurality of hardware queues, each of which corresponds to one CPU. Each CPU may take a specified number of socket caches out of its software queue Ring in advance and store them in the corresponding hardware queue.
When the network card receives a data packet, the five-tuple of the data packet can be calculated based on a preset algorithm, a hardware queue for storing the data packet is determined, then the data packet is stored in the hardware queue, and the data packet is stored in a memory pointed by a pointer of a socket cache based on the socket cache stored in advance by a CPU. The CPU corresponding to the hardware queue may subsequently obtain the packet from the memory pointed to by the pointer of the socket cache and process the packet.
Please continue to refer to fig. 4, which is an interactive schematic diagram of a packet processing process shown in the present application: as shown in fig. 4, after CPU M processes a packet in the memory pointed to by the pointer of a socket cache, if processing succeeds, CPU M directly recycles the socket cache and adds it to the software queue Ring M corresponding to CPU M;
if processing fails, the socket cache is handed over to CPU N for processing, and after CPU N successfully processes the packet, it recycles the socket cache and adds it to the software queue ring M corresponding to CPU M.
Because the software queue Ring into which each CPU adds the socket caches it recycles after successfully processing packets is never operated by other CPUs, that queue Ring needs no locking mechanism; and because most of the socket caches each CPU recycles after processing packets are its own, the network device greatly reduces the system overhead each CPU generates in locking and unlocking software queues.
The technical solution of the present application is explained below from the perspective of a target CPU.
Referring to fig. 5, which is a flowchart of the cache management method shown in the present application: the method is applied to a network device running a Linux system. The network device has a multi-core CPU and a network card, and each CPU is pre-allocated a corresponding first software queue and a corresponding second software queue, the first software queue storing a number of socket caches. The network card corresponds to a plurality of hardware queues, each of which corresponds to one CPU. The method comprises the following steps:
step 501: and after processing the data packet in the memory pointed by the pointer of the socket cache, the target CPU recovers the socket cache.
Step 502: reading an identification field of the socket cache, and determining whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs.
Step 503: and if not, adding the socket cache into the second software queue of the corresponding CPU based on the CPU identification in the identification field.
Step 504: and if so, adding the socket cache into a first software queue corresponding to the target CPU.
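Steps 501 to 504 can be sketched as follows. The `owner_cpu` field stands in for the identification field of the socket cache; all other names, the depths, and the POSIX mutex protecting the second queue are assumptions made for illustration:

```c
#include <pthread.h>
#include <stddef.h>

#define NR_CPUS    4
#define RING_DEPTH 256

/* The patent's "identification field" is modeled as owner_cpu. */
struct skb { int owner_cpu; };

struct ring { struct skb *slots[RING_DEPTH]; size_t count; };

static struct ring first_q[NR_CPUS];    /* step 504 target: lock-free */
static struct ring second_q[NR_CPUS];   /* step 503 target: locked */
static pthread_mutex_t second_lock[NR_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Recycle a socket cache on target_cpu; returns the CPU whose queue
 * received the buffer. */
int recycle_skb(int target_cpu, struct skb *skb)
{
    int owner = skb->owner_cpu;          /* step 502: read the field */
    if (owner == target_cpu) {
        /* step 504: own cache goes to the first queue, no lock */
        if (first_q[owner].count < RING_DEPTH)
            first_q[owner].slots[first_q[owner].count++] = skb;
    } else {
        /* step 503: foreign cache goes to the owner's second queue,
         * under that queue's lock */
        pthread_mutex_lock(&second_lock[owner]);
        if (second_q[owner].count < RING_DEPTH)
            second_q[owner].slots[second_q[owner].count++] = skb;
        pthread_mutex_unlock(&second_lock[owner]);
    }
    return owner;
}
```

The common case, a CPU recycling its own cache, takes the lock-free branch; only the minority case of a cache recycled on behalf of another CPU touches a lock.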
The first software queue and the second software queue each carry a corresponding queue identifier, by which the target CPU distinguishes the first software queue from the second software queue.
In this embodiment of the present application, each CPU may take out a specified number of socket caches from its corresponding first software queue in advance, and store the socket caches in the corresponding hardware queue. Wherein, the identification field of each socket cache carries the identification of the CPU to which the socket cache belongs.
Upon receiving a data packet, the network card calculates the packet's five-tuple with a preset algorithm, determines the hardware queue that will store the packet, stores the packet in that hardware queue, and stores the packet in the memory pointed to by the pointer of a socket cache previously stored in that hardware queue.
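The application only requires "a preset algorithm", such as a hash, over the five-tuple; the sketch below uses FNV-1a as one arbitrary concrete choice, so that packets of the same flow always map to the same hardware queue and hence to the same CPU:

```c
#include <stdint.h>

struct five_tuple {
    uint32_t saddr, daddr;   /* source / destination IP address */
    uint16_t sport, dport;   /* source / destination port */
    uint8_t  proto;          /* transport protocol */
};

/* Fold one 32-bit word into an FNV-1a hash, byte by byte. */
static uint32_t fnv1a_word(uint32_t h, uint32_t v)
{
    for (int i = 0; i < 4; i++) {
        h ^= (v >> (8 * i)) & 0xffu;
        h *= 16777619u;               /* FNV prime */
    }
    return h;
}

/* Map a flow to one of nr_queues hardware queues: packets of the same
 * flow always land in the same queue. */
unsigned pick_hw_queue(const struct five_tuple *ft, unsigned nr_queues)
{
    uint32_t h = 2166136261u;         /* FNV offset basis */
    h = fnv1a_word(h, ft->saddr);
    h = fnv1a_word(h, ft->daddr);
    h = fnv1a_word(h, ((uint32_t)ft->sport << 16) | ft->dport);
    h = fnv1a_word(h, ft->proto);
    return h % nr_queues;
}
```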
In this embodiment, after the network card stores the data packet in the memory pointed by the pointer of the socket cache, the target CPU may obtain the data packet from the memory based on the pointer of the socket cache and process the data packet.
The target CPU may recycle the socket cache after processing the data packet in the memory pointed to by the pointer of the socket cache.
It should be noted that the socket cache may have been obtained by the target CPU directly from its corresponding hardware queue, or may have been handed over to the target CPU by another CPU. In these two cases, the target CPU will not store the recycled socket cache into the same software queue.
In this embodiment of the present application, after the target CPU recovers the socket cache, the identification field of the socket cache may be read, and whether the socket cache is the socket cache corresponding to the target CPU is determined based on the identification of the CPU carried in the identification field.
In this embodiment of the present application, if the target CPU determines that the socket cache is a socket cache corresponding to the target CPU, the socket cache may be added to the first software queue corresponding to the target CPU.
Specifically, after the target CPU determines that the socket cache is the socket cache corresponding thereto, a first software queue corresponding to the target CPU is determined according to a queue identifier of the software queue, and then the socket cache is added to the first software queue.
In this embodiment of the present application, if the target CPU determines that the socket cache is not the socket cache corresponding to the target CPU, the socket cache may be added to the second software queue of the corresponding CPU based on the identifier of the CPU in the identifier field.
Specifically, after the target CPU determines the CPU corresponding to the socket cache, a second software queue corresponding to the CPU is determined based on a queue identifier of a software queue of the CPU, and then the socket cache is added to the second software queue.
In one embodiment, because the second software queue corresponding to each CPU may be operated on by other CPUs, it can be accessed concurrently. The second software queue therefore retains a locking mechanism to avoid the errors that concurrent operations could otherwise produce.
Further, after determining the second software queue, the target CPU may determine whether the second software queue is locked before adding the socket cache to the second software queue.
If the second software queue is not locked, the target CPU may lock it to prevent other CPUs from operating on it, and then add the socket cache to it. After the socket cache has been added, the target CPU may unlock the second software queue, making it available to other CPUs again.
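The lock, enqueue, unlock sequence above can be sketched as follows; a pthread mutex stands in for whatever kernel locking primitive (e.g. a spinlock) a real implementation would use, and all names are illustrative:

```c
#include <pthread.h>
#include <stddef.h>

struct skb2 { struct skb2 *next; };

/* Second software queue: shared across CPUs, so it keeps a lock. */
struct locked_queue {
    pthread_mutex_t lock;
    struct skb2 *head;
    int len;
};

void second_queue_push(struct locked_queue *q, struct skb2 *skb)
{
    pthread_mutex_lock(&q->lock);    /* bar other CPUs from the queue */
    skb->next = q->head;
    q->head = skb;
    q->len++;
    pthread_mutex_unlock(&q->lock);  /* make the queue available again */
}
```

This is the slow path: it is taken only when the recycled socket cache belongs to another CPU, so the locking cost is paid rarely.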
By retaining the lock mechanism in the second software queue, errors due to concurrent operation of the second software queue by multiple CPUs can be effectively avoided.
In this embodiment of the present application, the target CPU may periodically check the number of available socket caches in the corresponding hardware queue, and when the number of available socket caches in the hardware queue is smaller than the specified number, the target CPU may take out a plurality of socket caches from the corresponding first software queue to store in the hardware queue, so that the number of available socket caches in the hardware queue is equal to the specified number.
Through this measure, the target CPU periodically replenishes the socket caches of its corresponding hardware queue, preventing the hardware queue from being unable to store packets distributed by the network card into memory for lack of socket caches.
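The periodic top-up can be modeled with simple counters — the real code would move actual socket-cache pointers between queues; here `target` plays the role of the "specified number":

```c
/* Refill sketch: top the hardware queue back up to `target`
 * available socket caches, drawing from the first software queue.
 * Returns the number of caches moved. */
int refill_from_first(int *hw_avail, int *first_len, int target)
{
    int need = target - *hw_avail;
    if (need <= 0)
        return 0;               /* already at the specified number */
    if (need > *first_len)
        need = *first_len;      /* cannot take more than we have */
    *first_len -= need;
    *hw_avail  += need;
    return need;
}
```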
If packets arrive that the target CPU cannot process, the socket caches recycled after other CPUs process those packets are added to the second software queue corresponding to the target CPU. In the extreme case, if enough such packets accumulate, all of the socket caches corresponding to the target CPU end up in its second software queue. The first software queue corresponding to the target CPU may then have no available socket cache, in which case the target CPU cannot take socket caches from the first software queue to place into the hardware queue.
Therefore, in one embodiment shown, when the first software queue has no socket cache, the target CPU may take out several socket caches from the corresponding second software queue and store them in the hardware queue, so that the number of available socket caches in the hardware queue is equal to the specified number.
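Extending the refill with the second-queue fallback described above, again as a counter-only sketch (locking on the second queue is elided for brevity):

```c
/* Refill with fallback: prefer the lock-free first software queue,
 * and draw from the second software queue only once the first is
 * empty. Returns the number of socket caches moved. */
int refill_with_fallback(int *hw_avail, int *first_len,
                         int *second_len, int target)
{
    int moved = 0;
    while (*hw_avail < target && (*first_len > 0 || *second_len > 0)) {
        if (*first_len > 0)
            (*first_len)--;      /* fast path: own first queue */
        else
            (*second_len)--;     /* fallback: locked second queue */
        (*hw_avail)++;
        moved++;
    }
    return moved;
}
```

With this fallback, even a sustained stream of packets handed off to other CPUs cannot starve the hardware queue, matching the guarantee the patent states next.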
Through this measure, even if a large number of packets that the target CPU cannot process arrive, the target CPU can still periodically replenish the socket caches of its corresponding hardware queue, preventing the hardware queue from being unable to store packets distributed by the network card into memory for lack of socket caches.
To sum up, in this embodiment of the present application, after processing the data packet in the memory pointed to by the pointer of a socket cache, the target CPU recycles the socket cache and reads its identification field to determine whether it corresponds to the target CPU. If not, the target CPU adds the socket cache to the second software queue of the CPU identified in that field; if so, it adds the socket cache to its own first software queue.

Because each CPU of the multi-core CPU is pre-allocated two software queues, the target CPU only ever operates on its own first software queue or on the second software queues of other CPUs. Since the first software queue of each CPU is never operated on by other CPUs, it needs no locking mechanism, and most of the socket caches a CPU recycles while processing packets are its own. Eliminating the lock and unlock operations when a CPU adds socket caches to its first software queue therefore greatly reduces the system overhead of the network device, leaving more resources for processing packets and improving packet-processing performance.
Corresponding to the foregoing embodiments of the method for cache management, the present application further provides embodiments of an apparatus for cache management.
Referring to fig. 6, a block diagram of an embodiment of a cache management apparatus shown in the present application is shown:
as shown in fig. 6, the apparatus 60 for cache management includes:
a recycling unit 610, configured to recycle the socket cache after the target CPU processes the data packet in the memory pointed by the pointer of the socket cache.
A determining unit 620, configured to read an identifier field of the socket cache, and determine whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs.
A joining unit 630, configured to, if not, add the socket cache to the second software queue of the corresponding CPU based on the identifier of the CPU in the identifier field; and, if so, add the socket cache to the first software queue corresponding to the target CPU.
In this example, the determining unit 620 is further configured to:
before adding the socket cache into the first software queue or the second software queue, determining that the software queue is the first software queue or the second software queue according to a queue identifier of the software queue.
In this example, the joining unit 630 is further configured to:
determining whether the second software queue is locked;
if the second software queue is not locked, locking the second software queue, and adding the socket cache into the second software queue of the corresponding CPU;
and after the socket cache is added into the second software queue, unlocking the second software queue.
In this example, the apparatus further comprises:
a storing unit 640, configured to take out a specified number of socket caches from the first software queue in advance, store the socket caches in corresponding hardware queues, so that after the network card stores the data packet in the hardware queue corresponding to the target CPU, store the data packet in a memory pointed by a pointer of the socket cache.
In this example, the apparatus further comprises:
a checking unit 650 for periodically checking the number of available socket caches in the hardware queue.
The storing unit 640 is further configured to, when the number of available socket caches in the hardware queue is smaller than the specified number, take a plurality of socket caches out of the first software queue and store them in the hardware queue, so that the number of available socket caches in the hardware queue equals the specified number.
In this example, the storage unit 640 is further configured to:
when no socket caches are available in the first software queue, take a plurality of socket caches out of the second software queue and store them in the hardware queue, so that the number of available socket caches in the hardware queue equals the specified number.
The embodiments of the cache management apparatus can be applied to a network device running a Linux system. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus in a logical sense is formed by the processor of the Linux network device reading the corresponding computer program instructions from nonvolatile memory into internal memory and running them. In terms of hardware, fig. 7 shows the hardware structure of the Linux network device in which the cache management apparatus resides. Besides the processor, memory, network interface, and nonvolatile memory shown in fig. 7, the network device may further include other hardware according to the actual functions of the cache management apparatus, which is not described again here.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A cache management method, applied to a network device running a Linux system, the network device carrying a multi-core CPU and a network card, each CPU being pre-allocated a corresponding first software queue and a corresponding second software queue, wherein a plurality of socket caches are stored in the first software queue; the network card corresponds to multiple hardware queues, each hardware queue corresponding to one CPU respectively; characterized in that the method comprises:
after processing the data packet in the memory pointed by the pointer of the socket cache, the target CPU recovers the socket cache;
reading an identification field of the socket cache, and determining whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs;
if not, based on the CPU identification in the identification field, adding the socket cache into the second software queue of the corresponding CPU;
if yes, adding the socket cache into the first software queue corresponding to the target CPU;
the first software queue is operated only by the corresponding CPU, and a locking mechanism is not needed; the second software queue is operated by a corresponding CPU or other CPUs, and a locking mechanism is reserved;
the adding the socket cache into the second software queue of the corresponding CPU includes:
determining whether the second software queue is locked;
if the second software queue is not locked, locking the second software queue, and adding the socket cache into the second software queue of the corresponding CPU;
and after the socket cache is added into the second software queue, unlocking the second software queue.
2. The method of claim 1, further comprising:
before adding the socket cache into the first software queue or the second software queue, determining that the software queue is the first software queue or the second software queue according to a queue identifier of the software queue.
3. The method of claim 1, further comprising:
taking a specified number of socket caches out of the first software queue in advance and storing them in the corresponding hardware queue, so that after the network card stores a data packet in the hardware queue corresponding to the target CPU, the data packet is stored in the memory pointed to by the pointer of a socket cache.
4. The method of claim 3, further comprising:
periodically checking the number of the socket caches available in the hardware queue;
when the number of the socket caches available in the hardware queue is smaller than the specified number, taking out a plurality of socket caches from the first software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
5. The method of claim 4, further comprising:
when the socket caches are not available in the first software queue, taking out a plurality of socket caches from the second software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
6. A cache management apparatus, applied to a network device running a Linux system, the network device carrying a multi-core CPU and a network card, each CPU being pre-allocated a corresponding first software queue and a corresponding second software queue, wherein a plurality of socket caches are stored in the first software queue; the network card corresponds to multiple hardware queues, each hardware queue corresponding to one CPU respectively; characterized in that the apparatus comprises:
a recovery unit, configured to recover the socket cache after the target CPU processes the data packet in the memory pointed by the pointer of the socket cache;
a determining unit, configured to read an identifier field of the socket cache, and determine whether the socket cache is a socket cache corresponding to the target CPU; wherein, the identification field carries the identification of the CPU to which the socket cache belongs;
a joining unit, configured to, if the socket cache does not correspond to the target CPU, add the socket cache to the second software queue of the corresponding CPU based on the identifier of the CPU in the identifier field; and, if it does, add the socket cache to the first software queue corresponding to the target CPU;
the first software queue is operated only by the corresponding CPU, and a locking mechanism is not needed; the second software queue is operated by a corresponding CPU or other CPUs, and a locking mechanism is reserved;
the adding unit is further used for:
determining whether the second software queue is locked;
if the second software queue is not locked, locking the second software queue, and adding the socket cache into the second software queue of the corresponding CPU;
and after the socket cache is added into the second software queue, unlocking the second software queue.
7. The apparatus of claim 6, wherein the determining unit is further configured to:
before adding the socket cache into the first software queue or the second software queue, determining that the software queue is the first software queue or the second software queue according to a queue identifier of the software queue.
8. The apparatus of claim 6, further comprising:
a storing unit, configured to take a specified number of socket caches out of the first software queue in advance and store them in the corresponding hardware queue, so that after the network card stores a data packet in the hardware queue corresponding to the target CPU, the data packet is stored in the memory pointed to by the pointer of a socket cache.
9. The apparatus of claim 8, further comprising:
a checking unit, configured to periodically check the number of available socket caches in the hardware queue;
the storing unit is further configured to, when the number of available socket caches in the hardware queue is smaller than the specified number, take out a plurality of socket caches from the first software queue and store the socket caches in the hardware queue, so that the number of available socket caches in the hardware queue is equal to the specified number.
10. The apparatus of claim 9, wherein the storage unit is further configured to:
when the socket caches are not available in the first software queue, taking out a plurality of socket caches from the second software queue to be stored in the hardware queue, so that the number of the socket caches available in the hardware queue is equal to the specified number.
CN201710464931.0A 2017-06-19 2017-06-19 Cache management method and device Active CN107315622B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710464931.0A CN107315622B (en) 2017-06-19 2017-06-19 Cache management method and device


Publications (2)

Publication Number Publication Date
CN107315622A CN107315622A (en) 2017-11-03
CN107315622B true CN107315622B (en) 2020-05-12

Family

ID=60181865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710464931.0A Active CN107315622B (en) 2017-06-19 2017-06-19 Cache management method and device

Country Status (1)

Country Link
CN (1) CN107315622B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347619A (en) * 2019-07-01 2019-10-18 北京天融信网络安全技术有限公司 Data transmission method and device between a kind of network interface card and cpu
CN113626181B (en) * 2021-06-30 2023-07-25 苏州浪潮智能科技有限公司 Memory cleaning method, device, equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040080769A (en) * 2003-03-13 2004-09-20 삼성전자주식회사 Apparatus for reducing amount of data copyed for forwarding packet in network device and method thereof
CN101789959A (en) * 2009-12-30 2010-07-28 北京天融信科技有限公司 SKB reusing method and device in multinuclear system
CN102916905A (en) * 2012-10-18 2013-02-06 曙光信息产业(北京)有限公司 Gigabit network card multi-path shunting method and system based on hash algorithm
CN104158764A (en) * 2014-08-12 2014-11-19 杭州华三通信技术有限公司 Message processing method and device
CN105939293A (en) * 2016-01-22 2016-09-14 杭州迪普科技有限公司 SKB (Struct sk_buff) recycling method and device


Also Published As

Publication number Publication date
CN107315622A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
US8639890B2 (en) Data segment version numbers in distributed shared memory
KR102011949B1 (en) System and method for providing and managing message queues for multinode applications in a middleware machine environment
US9672351B2 (en) Authenticated control stacks
US9251162B2 (en) Secure storage management system and method
US9767019B2 (en) Pauseless garbage collector write barrier
CN110188110B (en) Method and device for constructing distributed lock
US20090217270A1 (en) Negating initiative for select entries from a shared, strictly fifo initiative queue
CN108491252A (en) distributed transaction processing method and distributed system
US9176857B2 (en) Method and apparatus for managing video memory in embedded device
US10970172B2 (en) Method to recover metadata in a content aware storage system
CN107315622B (en) Cache management method and device
CN113239098B (en) Data management method, computer and readable storage medium
CN111444147A (en) Data page creating method and device, terminal equipment and storage medium
CN111125040A (en) Method, apparatus and storage medium for managing redo log
CN103246548B (en) A kind of event scheduling method and device of fault-tolerant order-preserving
CN112039970A (en) Distributed business lock service method, server, system and storage medium
CN109831394B (en) Data processing method, terminal and computer storage medium
CN109542922B (en) Processing method for real-time service data and related system
CN107391539B (en) Transaction processing method, server and storage medium
CN107967265B (en) File access method, data server and file access system
CN106572036A (en) SKB management method and apparatus
CN110677465B (en) Control method and device of distributed lock
CN110221911B (en) Ethernet data protection method and device
CN113051081B (en) Event state management method, system and storage medium based on memory pool
CN110580232B (en) Lock management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant