CN111831403A - Service processing method and device - Google Patents

Service processing method and device

Info

Publication number
CN111831403A
CN111831403A
Authority
CN
China
Prior art keywords
thread
cache
message
cache region
threads
Prior art date
Legal status
Pending
Application number
CN201910327734.3A
Other languages
Chinese (zh)
Inventor
张乐 (Zhang Le)
徐杨 (Xu Yang)
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201910327734.3A priority Critical patent/CN111831403A/en
Publication of CN111831403A publication Critical patent/CN111831403A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/466Transaction processing
    • G06F9/467Transactional memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiments of the invention disclose a service processing method and device. The method comprises: for each received message, when it is determined that a first cache region has an idle cache, caching the message into the idle cache in the first cache region, where the first cache region is a software-managed cache region; and performing service processing on the message in the first cache region through a second thread. The embodiments cache messages through the software-managed first cache region. Because a software-managed cache region does not generate a flow control signal when full, the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated when the service processing capability of the second thread is insufficient.

Description

Service processing method and device
Technical Field
The present invention relates to the field of network communication devices, and in particular, to a method and an apparatus for service processing.
Background
In network applications, when the length of a message exceeds the Maximum Transmission Unit (MTU), the message must be fragmented. Reassembly is the inverse of fragmentation: several fragment packets belonging to the same message are restored into the original message. Whether fragment packets belong to the same original message is determined by whether the source IP address, destination IP address, message Identifier (ID), and protocol number in the Internet Protocol (IP) header are the same. Reassembly is generally performed on the device that terminates the message (the destination node); an intermediate node does not need to reassemble packets. However, the service requirements of Carrier-Grade Network Address Translation (CGN) are different: if Port Address Translation (PAT) is performed during NAT44 translation, a fragment message carries no port information and would be translated incorrectly, so the packet must be reassembled before translation. CGN services therefore generally include fragment reassembly functions.
However, a large number of fragment messages to be reassembled often appear in a live network. When the processing capability of the fragment reassembly thread is insufficient, this triggers CPU flow control and directly affects the normal processing of other services on the same CPU, such as CGN messages.
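The fragment-matching rule in the background above (same source IP address, destination IP address, message ID, and protocol number) can be sketched as follows. This is an illustrative Python sketch, not code from the patent; dicts stand in for parsed IP headers and all field names are hypothetical:

```python
# Group IPv4 fragments by the 4-tuple that identifies the original message.
from collections import defaultdict

def fragment_key(pkt):
    """pkt is a dict standing in for a parsed IPv4 header."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["ip_id"], pkt["protocol"])

groups = defaultdict(list)
for pkt in [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "ip_id": 7, "protocol": 17, "offset": 0},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "ip_id": 7, "protocol": 17, "offset": 185},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "ip_id": 8, "protocol": 17, "offset": 0},
]:
    # Fragments sharing the same 4-tuple belong to the same original message.
    groups[fragment_key(pkt)].append(pkt)
```

Here the first two packets share ID 7 and are grouped for reassembly, while the packet with ID 8 belongs to a different original message.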
Disclosure of Invention
Embodiments of the present invention provide a service processing method and apparatus, which can reduce or even eliminate the impact on the service processing of other threads on the same CPU as a thread when the service processing capability of that thread is insufficient.
The embodiment of the invention provides a service processing method, which comprises the following steps:
for each received message, when determining that a first cache region has an idle cache, caching the message into the idle cache in the first cache region; the first cache region is a software-managed cache region;
and performing service processing on the message in the first cache region through a second thread.
An embodiment of the present invention provides a service processing apparatus, including:
a message caching module, configured to, for each received message, cache the message into an idle cache in a first cache region when it is determined that the first cache region has an idle cache, where the first cache region is a software-managed cache region;
and the message processing module is used for performing service processing on the message in the first cache region through the second thread.
The embodiment of the invention provides a service processing device, which comprises a processor and a computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, any one of the service processing methods is realized.
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the service processing methods described above.
The embodiment of the invention comprises the following steps: for each received message, when it is determined that a first cache region has an idle cache, caching the message into the idle cache in the first cache region, where the first cache region is a software-managed cache region; and performing service processing on the message in the first cache region through a second thread. The embodiment caches messages through the software-managed first cache region. Because a software-managed cache region does not generate a flow control signal when full, the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated when the service processing capability of the second thread is insufficient.
In another embodiment, when there is no idle cache in the first cache region, the method further comprises: discarding the message. This releases caches in the first cache region as soon as possible when the service processing capability is insufficient, further reducing or even eliminating the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In another embodiment, one first thread of N first threads determines that there is an idle cache in the first cache region corresponding to the message type or to that first thread, where N is an integer greater than or equal to 1 and a first thread is a thread with better service processing capability; the message is then cached, through that first thread, into the idle cache in the first cache region corresponding to the message type or to that first thread. Caching messages through threads with better processing capability shares the load of the second thread, further reducing or even eliminating the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In another embodiment, before one first thread of the N first threads determines that there is an idle cache in the first cache region corresponding to the message type or to that first thread, the method further includes: caching the message into a second cache region corresponding to that first thread, where the second cache region is a hardware-managed cache region. After the message is cached into the idle cache in the first cache region, the method further includes: deleting the message from the second cache region through that first thread. Because the second cache region is hardware-managed, it generates a flow control signal once it is full; the message in the second cache region must therefore be deleted after the message has been cached into the first cache region, so that the second cache region is not affected. This further reduces or even eliminates the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments of the invention. The objectives and other advantages of the embodiments of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification. They illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments without limiting them.
Fig. 1 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 2(a) is a first schematic diagram of a service processing procedure according to an embodiment of the present invention;
fig. 2(b) is a second schematic diagram of a service processing procedure according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a service processing apparatus according to another embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments of the present invention may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Referring to fig. 1, an embodiment of the present invention provides a service processing method, including:
Step 100: for each received message, when it is determined that a first cache region has an idle cache, the message is cached into the idle cache in the first cache region; the first cache region is a software-managed cache region.
In the embodiment of the present invention, whether there is an idle cache in the first cache region may be determined by applying for a cache from the cache manager. The cache manager manages the caches in the first cache region at the software level.
In this embodiment of the present invention, the first cache region may be any one of: a circular queue, a sequential queue, etc. Of course, the first cache region may also be another data structure, which is not limited in the embodiment of the present invention, and the specific data structure form is also not used to limit the protection scope of the embodiment of the present invention.
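As a rough illustration of such a software-managed cache region, the following Python sketch models a fixed-size ring that simply reports "no idle cache" when full, instead of asserting flow control toward the sender. This is a minimal sketch; the class and method names are hypothetical and not from the patent:

```python
from collections import deque

class SoftwareRing:
    """A software-managed first cache region: a fixed-size queue that
    reports failure when full rather than generating a flow control signal."""

    def __init__(self, size):
        self.size = size
        self.q = deque()

    def try_enqueue(self, msg):
        if len(self.q) >= self.size:
            return False          # no idle cache: the caller drops the message
        self.q.append(msg)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

ring = SoftwareRing(2)
results = [ring.try_enqueue(m) for m in ("a", "b", "c")]  # third enqueue fails
```

The key property for this design is that a full ring is an ordinary return value handled in software, so the sender's CPU never sees hardware back-pressure.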
Step 101: performing service processing on the message in the first cache region through the second thread.
In this embodiment of the present invention, the second thread is a thread whose service processing capability is insufficient. For example, the second thread may include at least one of: a fragment reassembly thread, a CGN thread, an Internet Protocol Security (IPSec) thread, and a thread for processing the same service. Of course, the second thread is not limited to these examples; any thread with insufficient service processing capability may serve as the second thread.
In this embodiment of the present invention, when the second thread is a fragment reassembly thread, before it is determined that there is an idle cache in the first cache region corresponding to the message type of the message, the method further includes: determining that the message is a message that needs fragment reassembly.
In another embodiment of the present invention, when the packet is not a packet requiring fragment reassembly, the method further includes: and performing service processing on the message according to other processing flows.
In the embodiment of the invention, whether a message needs fragment reassembly can be judged by parsing the message type.
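For IPv4, whether a message is a fragment (and thus needs reassembly) can be read from the More-Fragments (MF) flag and the 13-bit fragment offset in the flags/offset field of the header. The sketch below is an illustrative check under that standard IPv4 layout; the function name is hypothetical:

```python
def needs_reassembly(flags_and_offset):
    """flags_and_offset is the 16-bit IPv4 flags + fragment-offset field.
    A packet is a fragment if MF (mask 0x2000) is set or the offset is non-zero."""
    MF = 0x2000
    OFFSET_MASK = 0x1FFF
    return bool(flags_and_offset & MF) or bool(flags_and_offset & OFFSET_MASK)
```

For example, a first fragment has MF set and offset 0, a last fragment has MF clear and a non-zero offset, and an unfragmented packet has both clear (even if the DF bit, mask 0x4000, is set).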
The embodiment of the invention caches messages through the software-managed first cache region. Because a software-managed cache region does not generate a flow control signal when full, the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated when the service processing capability of the second thread is insufficient.
In an embodiment of the present invention, step 100 may be performed by one first thread of N first threads. That is, determining that there is an idle cache in the first cache region includes:
determining, through one first thread of the N first threads, that there is an idle cache in the first cache region corresponding to the message type of the message, where N is an integer greater than or equal to 1 and a first thread is a thread with better service processing capability;
or, determining, through the first thread, that there is an idle cache in the first cache region corresponding to the first thread;
and caching the message into the idle cache in the first cache region includes:
caching the message, through one of the N first threads, into the idle cache in the first cache region corresponding to the message type;
or, caching the message, through the first thread, into the idle cache in the first cache region corresponding to the first thread.
When step 100 is executed by one of the N first threads, messages are cached by a thread with better service processing capability, which shares the load of the second thread and further reduces or even eliminates the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In another embodiment of the present invention, before one first thread of the N first threads determines that there is an idle cache in the first cache region corresponding to the message type or to that first thread, the method further includes:
caching the message into a second cache region corresponding to that first thread, where the second cache region is a hardware-managed cache region;
and after the message is cached, through that first thread, into the idle cache in the first cache region corresponding to the message type or to that first thread, the method further includes: deleting the message from the second cache region through that first thread.
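The hand-off described above — apply for an idle cache in the software-managed first cache region, then delete the message from the hardware-managed second cache region (or discard the message if the ring is full) — can be sketched as follows. Plain Python lists stand in for the two cache regions and all names are hypothetical; this is not the patent's implementation:

```python
def hand_off(hw_cache, sw_ring, ring_size, msg):
    """Move msg from the hardware-managed second cache region (hw_cache)
    into the software-managed first cache region (sw_ring), freeing the
    hardware buffer either way so it never fills and raises flow control."""
    if len(sw_ring) < ring_size:
        sw_ring.append(msg)      # message now lives in the first cache region
        hw_cache.remove(msg)     # delete it from the second cache region
        return True
    hw_cache.remove(msg)         # no idle cache: discard the message
    return False

hw = ["m1", "m2", "m3"]          # hardware-managed region of one first thread
ring = []                        # software-managed first cache region
ok1 = hand_off(hw, ring, 2, "m1")
ok2 = hand_off(hw, ring, 2, "m2")
ok3 = hand_off(hw, ring, 2, "m3")  # ring full: message dropped
```

Note that the hardware buffer is released in both branches; that is exactly what keeps the second cache region from filling up when the second thread falls behind.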
In this embodiment of the present invention, different messages may be cached in second cache regions corresponding to different first threads of the N first threads; and/or different messages can be cached in a second cache region corresponding to the same first thread in the N first threads.
In an embodiment of the present invention, the N first threads include at least one of: a service thread of multi-thread processing and a service thread of single-thread processing;
wherein the service thread comprises at least one of: a main service thread and an auxiliary service thread.
That is, the first thread may be any other thread with better traffic handling capabilities than the second thread, such as a NAT thread.
In the embodiment of the present invention, a message is first cached in the second cache region of a first thread. Because the second cache region is hardware-managed, it generates a flow control signal once it is full; the message in the second cache region must therefore be deleted after the message has been cached into an idle cache in the first cache region, so that the second cache region is not affected. This further reduces or even eliminates the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In the embodiment of the present invention, the first cache region satisfies at least one of the following conditions:
the N first threads correspond to one first cache region;
M second threads correspond to one first cache region; in this case, the M second threads may be used to process services of the same service type (or message type);
the N first threads correspond to N first cache regions;
M second threads correspond to M first cache regions;
one first thread corresponds to one first cache region; in this case, one first cache region is created between each first thread and the second thread, so there are N first cache regions in total;
one second thread corresponds to one first cache region; in this case, one first cache region is created between each second thread and the first thread, so there are M first cache regions in total, and different second threads may be used to process services of different service types (or message types).
In the embodiment of the present invention, when the N first threads correspond to N first cache regions, or when one second thread corresponds to one first cache region, a message may, at caching time, be cached into the N second cache regions corresponding to the N first threads by polling or another method.
When the N first threads correspond to N first cache regions, or when one first thread corresponds to one first cache region, the second thread may perform service processing on the messages in the N first cache regions by polling or another method.
In the embodiment of the present invention, the cache size of the first cache region may be configured according to the actual processing capability of the second thread.
In another embodiment of the present invention, when there is no idle cache in the first cache region, the method further includes: discarding the message. That is, after the caches in the first cache region are exhausted, the message is discarded directly rather than sent to the second thread, so that the virtual CPU (VCPU) of the second thread is prevented from generating a flow control signal, and the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated.
For example, as shown in fig. 2(a), in a CGN service process, because fragment reassembly traffic is normally light, 40 NAT threads and 1 fragment reassembly thread are allocated. A ring queue (ring) is created between each NAT thread and the fragment reassembly thread, giving 40 ring queues in total, and each ring queue contains 128 buffers (BUFFERs) for caching messages.
After receiving a message, the packet receiving and distributing engine fcm distributes it, packet by packet by polling, to the cache region (i.e., the second cache region) of the VCPU of a NAT thread (the CGN VCPU in fig. 2(a)). The NAT thread parses the message type; if the message needs fragment reassembly, it applies for a BUFFER from the ring manager (the POPQ in fig. 2). If there is an idle BUFFER, the BUFFER holding the message in the NAT thread's VCPU cache is directly swapped with the applied BUFFER (that is, the message held in the VCPU cache of the NAT thread is cached into the applied BUFFER, and the copy in the VCPU cache is deleted). If the ring is full and no idle BUFFER can be obtained, the message is discarded directly.
The fragment reassembly thread performs service processing on the messages in the rings by polling, reading at most 10 BUFFERs from each ring per round, to ensure the reassembly of the messages.
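The reassembly thread's polling loop with its per-ring batch limit can be sketched as follows. Python lists stand in for the 40 ring queues, and the function name and batch value are taken from the example above; everything else is hypothetical:

```python
def poll_rings(rings, batch=10):
    """One polling round: visit every ring in turn and drain at most
    `batch` buffers from each, so no single NAT thread's ring can
    starve the others."""
    drained = []
    for ring in rings:
        for _ in range(batch):
            if not ring:
                break             # this ring is empty; move to the next
            drained.append(ring.pop(0))
    return drained

# Three illustrative rings with 15, 3, and 0 queued buffers.
rings = [list(range(15)), list(range(3)), []]
out = poll_rings(rings)           # drains 10 + 3 + 0 = 13 buffers
```

The batch cap (10 here) trades per-ring latency for fairness: a backlogged ring keeps its excess buffers queued until the next round instead of monopolizing the reassembly thread.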
For another example, as shown in fig. 2(b), there are multiple service threads that affect one another (including a fragment reassembly thread, a NAT thread, and an IPSec thread), and one service distribution thread distributes the messages of the different services. A RING BUFFER sits between the VCPU of each service and the service distribution VCPU; the service distribution thread caches each message into the corresponding RING BUFFER according to its type, and each service thread processes the messages in its RING BUFFER.
The size of each RING BUFFER can be configured according to the actual processing capability of the corresponding service thread, so that the processing capability of each service thread does not affect other VCPUs.
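The distribution scheme of fig. 2(b) — one RING BUFFER per service, sized to that service's processing capability, with drops instead of back-pressure when a ring is full — can be sketched as follows. The service names and capacities are illustrative, not from the patent:

```python
# Per-service ring capacities, configured to each thread's processing ability.
RING_SIZE = {"frag": 2, "nat": 4, "ipsec": 4}
rings = {svc: [] for svc in RING_SIZE}

def distribute(msg_type, payload):
    """The distribution thread routes a message to the ring matching its
    type; a full ring means the message is dropped, so a slow service
    never back-pressures the distribution VCPU or other services."""
    ring = rings.get(msg_type)
    if ring is None or len(ring) >= RING_SIZE[msg_type]:
        return False              # unknown type or ring full: drop
    ring.append(payload)
    return True

oks = [distribute("frag", i) for i in range(3)]  # third drops: frag ring holds 2
```

Because each service has its own ring, a burst of fragment traffic fills only the "frag" ring; "nat" and "ipsec" messages continue to be delivered normally.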
Referring to fig. 3, another embodiment of the present invention provides a service processing apparatus, including:
a message caching module 301, configured to, for each received message, cache the message into an idle cache in a first cache region when it is determined that the first cache region has an idle cache, where the first cache region is a software-managed cache region;
a message processing module 302, configured to perform service processing on the message in the first cache region through the second thread.
In this embodiment of the present invention, the message caching module 301 may determine whether there is an idle cache in the first cache region by applying for a cache from the cache manager. The cache manager manages the caches in the first cache region at the software level.
In this embodiment of the present invention, the first cache region may be any one of: a circular queue, a sequential queue, etc. Of course, the first cache region may also be another data structure, which is not limited in the embodiment of the present invention, and the specific data structure form is also not used to limit the protection scope of the embodiment of the present invention.
In this embodiment of the present invention, the second thread is a thread whose service processing capability is insufficient. For example, the second thread may include at least one of: a fragment reassembly thread, a CGN thread, an Internet Protocol Security (IPSec) thread, and a thread for processing the same service. Of course, the second thread is not limited to these examples; any thread with insufficient service processing capability may serve as the second thread.
In this embodiment of the present invention, when the second thread is a fragment reassembly thread, the message caching module 301 is further configured to: determine that the message is a message that needs fragment reassembly.
In another embodiment of the present invention, the message caching module 301 is further configured to: and when the message is not the message needing fragmentation and reassembly, performing service processing on the message according to other processing flows.
In the embodiment of the present invention, the message caching module 301 may determine whether the message needs to be fragmented and reassembled by analyzing the message type.
The embodiment of the invention caches messages through the software-managed first cache region. Because a software-managed cache region does not generate a flow control signal when full, the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated when the service processing capability of the second thread is insufficient.
In this embodiment of the present invention, the message caching module 301 may execute step 100 through one first thread of N first threads. That is, the message caching module 301 is specifically configured to:
determine, through one first thread of the N first threads, that there is an idle cache in the first cache region corresponding to the message type of the message, where N is an integer greater than or equal to 1 and a first thread is a thread with better service processing capability, and cache the message, through that first thread, into the idle cache in the first cache region corresponding to the message type;
or, determine, through the first thread, that there is an idle cache in the first cache region corresponding to the first thread, and cache the message, through the first thread, into the idle cache in the first cache region corresponding to the first thread.
When step 100 is executed by one of the N first threads, messages are cached by a thread with better service processing capability, which shares the load of the second thread and further reduces or even eliminates the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In another embodiment of the present invention, the message caching module 301 is further configured to:
cache the message into a second cache region corresponding to one first thread of the N first threads, where the second cache region is a hardware-managed cache region;
and, after the message is cached into an idle cache in the first cache region through one of the N first threads, delete the message from the second cache region through that first thread.
In this embodiment of the present invention, different messages may be cached in second cache regions corresponding to different first threads of the N first threads; and/or different messages can be cached in a second cache region corresponding to the same first thread in the N first threads.
In an embodiment of the present invention, the N first threads include at least one of: a service thread of multi-thread processing and a service thread of single-thread processing;
wherein the service thread comprises at least one of: a main service thread and an auxiliary service thread.
That is, the first thread may be any other thread with better traffic handling capabilities than the second thread, such as a NAT thread.
In the embodiment of the present invention, a message is first cached in the second cache region of a first thread. Because the second cache region is hardware-managed, it generates a flow control signal once it is full; the message in the second cache region must therefore be deleted after the message has been cached into an idle cache in the first cache region, so that the second cache region is not affected. This further reduces or even eliminates the impact on the service processing of other threads on the same CPU as the second thread when the service processing capability of the second thread is insufficient.
In the embodiment of the present invention, the first cache region satisfies at least one of the following conditions:
the N first threads correspond to one first cache region;
M second threads correspond to one first cache region; in this case, the M second threads may be used to process services of the same service type (or message type);
the N first threads correspond to N first cache regions;
M second threads correspond to M first cache regions;
one first thread corresponds to one first cache region; in this case, one first cache region is created between each first thread and the second thread, so there are N first cache regions in total;
one second thread corresponds to one first cache region; in this case, one first cache region is created between each second thread and the first thread, so there are M first cache regions in total, and different second threads may be used to process services of different service types (or message types).
In the embodiment of the present invention, when the N first threads correspond to N first cache regions, or when one second thread corresponds to one first cache region, a message may, at caching time, be cached into the N second cache regions corresponding to the N first threads by polling or another method.
When the N first threads correspond to N first cache regions, or when one first thread corresponds to one first cache region, the second thread may perform service processing on the messages in the N first cache regions by polling or another method.
In the embodiment of the present invention, the cache size of the first cache region may be configured according to the actual processing capability of the second thread.
In another embodiment of the present invention, the message caching module 301 is further configured to: discard the message when there is no idle cache in the first cache region. That is, after the caches in the first cache region are exhausted, the message is discarded directly rather than sent to the second thread, so that the virtual CPU (VCPU) of the second thread is prevented from generating a flow control signal, and the impact on the service processing of other threads on the same CPU as the second thread is reduced or even eliminated.
Another embodiment of the present invention provides a service processing apparatus, including a processor and a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, the service processing apparatus implements any one of the service processing methods described above.
Another embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the service processing methods described above.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media, as is known to those skilled in the art.
Although the embodiments of the present invention have been described above, the descriptions are only used for understanding the embodiments of the present invention, and are not intended to limit the embodiments of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the embodiments of the invention as defined by the appended claims.

Claims (14)

1. A service processing method, comprising:
for each received message, when determining that a first cache region has an idle cache, caching the message into the idle cache in the first cache region; the first cache region is a software-managed cache region;
and performing service processing on the message in the first cache region through the second thread.
2. The method of claim 1, wherein when there is no idle cache in the first cache region, the method further comprises: discarding the message.
3. The method of claim 1 or 2, wherein the determining that there is an idle cache in the first cache region comprises:
determining, through one first thread of N first threads, that an idle cache exists in the first cache region corresponding to the message type of the message; wherein N is an integer greater than or equal to 1, and the first thread is a thread with relatively strong service processing capability;
or, determining that there is an idle cache in the first cache region corresponding to the first thread through the first thread;
the caching the packet into an idle cache in the first cache region comprises:
caching the message into an idle cache in a first cache region corresponding to the message type through one of N first threads;
or, caching the message, through the first thread, into an idle cache in the first cache region corresponding to the first thread.
4. The method of claim 3, wherein before determining, through one first thread of the N first threads, that there is an idle cache in the first cache region corresponding to the message type or the first thread, the method further comprises:
caching the message into a second cache region corresponding to one first thread in the N first threads; the second cache region is a hardware management cache region;
after the message is cached, through one first thread of the N first threads, into an idle cache in the first cache region corresponding to the message type or to the first thread, the method further comprises: deleting the message in the second cache region through the one first thread of the N first threads.
5. The method of claim 3, wherein the first cache region satisfies at least one of:
the N first threads correspond to one first cache region;
M second threads correspond to one first cache region;
the N first threads correspond to N first cache regions;
M second threads correspond to M first cache regions;
one first thread corresponds to one first cache region;
one of the second threads corresponds to one of the first cache regions.
6. The method according to claim 3, wherein different messages are cached into second cache regions corresponding to different first threads of the N first threads;
and/or, different messages are cached into a second cache region corresponding to the same first thread of the N first threads.
7. The method of claim 3, wherein the N first threads comprise at least one of: a service thread of multi-thread processing and a service thread of single-thread processing;
wherein the business thread comprises at least one of: a main service thread and an auxiliary service thread.
8. The method according to claim 1 or 2, wherein the second thread is a thread with relatively weak service processing capability.
9. The method of claim 8, wherein the second thread comprises at least one of: the system comprises a fragmentation reorganization thread, a Network Address Translation (NAT) thread, an internet protocol security (IPSec) thread and a thread for processing the same service.
10. The method according to claim 9, wherein when the second thread is a fragment reassembly thread, before determining that there is an idle cache in the first cache region corresponding to the message type of the message, the method further comprises: determining that the message is a message requiring fragment reassembly.
11. The method of claim 1 or 2, wherein the first cache region comprises any one of: a circular queue, a sequential queue.
12. A traffic processing apparatus, comprising:
the message caching module is configured to: for each received message, when it is determined that there is an idle cache in a first cache region, cache the message into the idle cache in the first cache region; the first cache region is a software-managed cache region;
and the message processing module is used for performing service processing on the message in the first cache region through the second thread.
13. A service processing apparatus, comprising a processor and a computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by the processor, implement the service processing method according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the service processing method according to any one of claims 1 to 11.
CN201910327734.3A 2019-04-23 2019-04-23 Service processing method and device Pending CN111831403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910327734.3A CN111831403A (en) 2019-04-23 2019-04-23 Service processing method and device

Publications (1)

Publication Number Publication Date
CN111831403A true CN111831403A (en) 2020-10-27

Family

ID=72911483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910327734.3A Pending CN111831403A (en) 2019-04-23 2019-04-23 Service processing method and device

Country Status (1)

Country Link
CN (1) CN111831403A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217919A (en) * 2020-12-11 2021-01-12 广东省新一代通信与网络创新研究院 Method and system for realizing network address conversion
CN112217919B (en) * 2020-12-11 2021-03-23 广东省新一代通信与网络创新研究院 Method and system for realizing network address conversion
CN112653639A (en) * 2020-12-21 2021-04-13 北京华环电子股份有限公司 IPv6 message fragment recombination method based on multi-thread interactive processing
CN112653639B (en) * 2020-12-21 2022-10-14 北京华环电子股份有限公司 IPv6 message fragment recombination method based on multi-thread interactive processing

Similar Documents

Publication Publication Date Title
US11895154B2 (en) Method and system for virtual machine aware policy management
US8625431B2 (en) Notifying network applications of receive overflow conditions
CN109783250B (en) Message forwarding method and network equipment
KR100875739B1 (en) Apparatus and method for packet buffer management in IP network system
US9571417B2 (en) Processing resource access request in network
US10545896B2 (en) Service acceleration method and apparatus
CN109246036B (en) Method and device for processing fragment message
US10178033B2 (en) System and method for efficient traffic shaping and quota enforcement in a cluster environment
CN111831403A (en) Service processing method and device
JP5951888B2 (en) COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMMUNICATION PROGRAM
CN113037879A (en) ARP learning method and node equipment
US10623195B2 (en) Protecting a network from a unicast flood
CN107483637B (en) NFS-based client link management method and device
US9917764B2 (en) Selective network address storage within network device forwarding table
CN112291310B (en) Method and device for counting connection number
US20070297432A1 (en) Host-Controlled Network Interface Filtering Based on Active Services, Active Connections and Active Protocols
EP4075741A1 (en) Method and apparatus for acquiring forwarding information
US20230336503A1 (en) Receiving packet data
CN112272210B (en) Message caching method and device
CN113127145B (en) Information processing method, device and storage medium
US11336557B2 (en) System and method for managing computing resources
CN114793217A (en) Intelligent network card, data forwarding method and device and electronic equipment
KR101854377B1 (en) Express packet processing system and the controlling method thereof
CN112311678A (en) Method and device for realizing message distribution
CN116346722A (en) Message processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination