CN109495504B - Firewall equipment and message processing method and medium thereof - Google Patents


Info

Publication number
CN109495504B
Authority
CN
China
Prior art keywords
attack
message
central processing
core
log
Prior art date
Legal status
Active
Application number
CN201811574742.XA
Other languages
Chinese (zh)
Other versions
CN109495504A (en)
Inventor
刘健男
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201811574742.XA
Publication of CN109495504A
Application granted
Publication of CN109495504B


Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/02: Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0209: Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441: Countermeasures against malicious traffic
    • H04L63/1458: Denial of Service
    • H04L63/20: Network architectures or network communication protocols for network security for managing network security; network security policies in general


Abstract

The embodiments of the application disclose a firewall device comprising three types of central processing units and a memory corresponding to each type of central processing unit. A forwarding core is configured in the first type of central processing unit and is used for forwarding normal messages and discarding small-flow attack messages to a first attack message cache region; an attack defense dedicated core is configured in the second type of central processing unit and is used for forwarding normal messages and discarding large-flow attack messages to a second attack message cache region; and a log recording core is configured in the third type of central processing unit and is used for reading the attack messages in the first attack message cache region and the second attack message cache region through lock-free read operations and generating an attack log. The firewall device can thus forward messages normally, defend against flow attacks, and record an attack log at the same time.

Description

Firewall equipment and message processing method and medium thereof
Technical Field
The present application relates to the field of network security technologies, and in particular, to a firewall device, a packet processing method thereof, and a computer-readable storage medium.
Background
A Firewall, also called a protective wall, is a barrier between an intranet and an extranet that controls the ingress and egress of messages according to predefined rules. A firewall can be understood as the first line of defense of a network system; its function is to prevent illegal users from entering.
With the rapid development of science and technology, network attacks have also escalated: DDoS (Distributed Denial of Service) attacks of tens or even hundreds of gigabits emerge constantly, so the message processing pressure on firewalls grows higher and higher. To better maintain network security, a firewall user such as an enterprise purchases a firewall hoping not only that it can block network attacks, but also that it can record attack logs to facilitate subsequent analysis.
However, in existing software firewalls, due to multi-core concurrent resource competition and the limitations of software performance, it is difficult to keep normal traffic flowing smoothly while preventing large-flow attacks, let alone record an attack log at the same time. A software firewall scheme is therefore urgently needed that can record attack logs while keeping normal traffic smooth and defending against large-flow attacks.
Disclosure of Invention
The embodiments of the application provide a firewall device, a message processing method and a storage medium, which can record an attack log even when a large-flow attack is encountered, without affecting the attack defense performance and normal message forwarding performance of the firewall.
In view of the above, a first aspect of the present application provides a firewall device, including:
three types of central processing units and a memory corresponding to each type of central processing unit; the three types of central processing units comprise a first type of central processing unit, a second type of central processing unit and a third type of central processing unit; wherein,
the first type central processing unit is internally provided with a forwarding core, and the forwarding core is used for receiving and identifying a first type of message, and forwarding a normal message or discarding an attack message to a first attack message cache region corresponding to the forwarding core according to an identification result;
the second type central processing unit is internally provided with an attack defense special core, and the attack defense special core is used for receiving and identifying a second type of message, and forwarding a normal message or discarding the attack message to a second attack message cache region corresponding to the attack defense special core according to the identification result;
and a log recording core is configured in the third type central processing unit, and the log recording core is used for reading attack messages in the first attack message cache region and the second attack message cache region through lock-free reading operation and generating an attack log according to the attack messages.
Optionally, the forwarding core is further configured to, when the first attack packet buffer is saturated, recover a storage space of all attack packets that have been read by the log recording core in the first attack packet buffer, and release the storage space to a first packet memory pool corresponding to the forwarding core;
and the attack defense special core is also used for recovering all storage spaces of the attack messages read by the log recording core in the second attack message cache region when the second attack message cache region is saturated, and releasing the storage spaces to a second message memory pool corresponding to the attack defense special core.
Optionally, the forwarding core is further configured to, when the available memory proportion in the first message memory pool corresponding to the forwarding core is smaller than a first threshold, recover a storage space of all attack messages read by the log recording core in the first attack message cache region, and release the storage space to the first message memory pool corresponding to the forwarding core;
the attack defense special core is also used for recovering all the storage spaces of the attack messages read by the log recording core in the second attack message cache region and releasing the storage spaces to the second message memory pool corresponding to the attack defense special core when the available memory proportion in the second message memory pool corresponding to the attack defense special core is smaller than a first threshold value.
Optionally, the firewall device includes a plurality of the first type central processing units and a plurality of the second type central processing units.
Optionally, the third type of central processing unit, the first type of central processing unit, and the second type of central processing unit share the same physical device respectively; and the log record core in the third type central processing unit is specifically configured to read the attack packet through the hyper-thread virtualized by the physical device.
Optionally, the first type of central processing unit and the second type of central processing unit use the same physical device.
Optionally, the log recording core is specifically configured to start a plurality of log threads, and respectively read attack packets in a concurrent manner by using a lock-free read operation through the plurality of log threads, where one log thread corresponds to one first attack packet cache region or one second attack packet cache region, and is configured to read attack packets from the attack packet cache region corresponding to the log thread.
Optionally, the firewall device includes a plurality of third type central processing units, and a log recording core in each third type central processing unit starts a log thread, and reads an attack packet from one of the first attack packet buffer area or the second attack packet buffer area through the log thread.
Optionally, the forwarding core is further configured to notify the log thread to read the attack packet when the proportion of the written area of the first attack packet buffer reaches a second threshold and the log thread has not yet read it;
and the attack defense special core is also used for notifying the log thread to read the attack message when the proportion of the written area of the second attack message cache region reaches a second threshold value and the log thread has not yet read it.
Optionally, the forwarding core is further configured to add prompt information at the tail of the attack packet according to the size of the attack traffic, and notify the log thread through the prompt information, so that the log thread adjusts the number of packets read each time according to the prompt information;
the special core for attack defense is also used for adding prompt information at the tail part of the attack message according to the size of the attack flow, and informing the log thread through the prompt information so that the log thread can adjust the number of messages read each time according to the prompt information.
A second aspect of the present application provides a method for processing a packet of a firewall device, including:
the first type central processing unit receives and identifies a first type of message through a pre-configured forwarding core, and forwards a normal message or discards an attack message to a first attack message cache region corresponding to the forwarding core according to the identification result;
the second type central processing unit receives and identifies a second type of message through a pre-configured attack defense special core, and forwards a normal message or discards the attack message to a second attack message cache region corresponding to the attack defense special core according to the identification result;
and the third type central processing unit reads, through a pre-configured log recording core and by means of lock-free read operations, the attack messages in the first attack message cache region and the second attack message cache region, and generates an attack log according to the attack messages.
Optionally, the method further includes:
the first type central processing unit recovers, through the forwarding core, all storage spaces of attack messages read by the log recording core in the first attack message cache region when the first attack message cache region is saturated, and releases the storage spaces to a first message memory pool corresponding to the forwarding core;
and the second type central processing unit recovers all the storage spaces of the attack messages read by the log recording core in the second attack message cache region through the attack defense special core when the second attack message cache region is saturated, and releases the storage spaces to a second message memory pool corresponding to the attack defense special core.
Optionally, when the available memory proportion in the first message memory pool corresponding to the forwarding core is smaller than a first threshold, the first type central processing unit recovers, through the forwarding core, all storage spaces of attack messages read by the log recording core in the first attack message cache region, and releases the storage spaces to the first message memory pool corresponding to the forwarding core;
and the second type central processing unit recovers all the storage spaces of the attack messages read by the log recording core in the second attack message cache region through the attack defense special core when the available memory proportion in the second message memory pool corresponding to the attack defense special core is smaller than a first threshold value, and releases the storage spaces to the second message memory pool corresponding to the attack defense special core.
Optionally, the third type central processing unit starts a plurality of log threads through the log recording core, and the plurality of log threads read attack messages concurrently by using lock-free read operations, wherein one log thread corresponds to one central processing unit and is used for reading the attack messages from the central processing unit corresponding to it.
Optionally, the method further includes:
when the proportion of the written area of the first attack message cache region reaches a second threshold value and the log thread has not yet read it, the first type central processing unit notifies the log thread, through the forwarding core, to read the attack message;
and the second type central processing unit notifies the log thread, through the attack defense special core, to read the attack message when the proportion of the written area of the second attack message cache region reaches a second threshold value and the log thread has not yet read it.
Optionally, the method further includes:
the first type central processing unit adds prompt information at the tail part of the attack message through the forwarding core according to the size of the attack flow, and informs the log thread through the prompt information so that the log thread adjusts the number of messages read each time according to the prompt information;
and the second type central processing unit adds prompt information at the tail part of the attack message according to the size of the attack flow through the special attack defense core, and informs the log thread through the prompt information so that the log thread adjusts the number of messages read each time according to the prompt information.
A third aspect of the present application provides a computer-readable storage medium for storing a program code for executing the message processing method according to the second aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the embodiment of the application provides a firewall device, which comprises three types of Central Processing Units (CPUs) and a memory corresponding to each type of CPU, the three types being a first type of CPU, a second type of CPU and a third type of CPU; the first type of central processing unit is configured with a forwarding core used for forwarding normal messages and discarding small-flow attack messages to a first attack message cache region; the second type of central processing unit is configured with an attack defense dedicated core used for forwarding normal messages and discarding large-flow attack messages to a second attack message cache region; and the third type of central processing unit is configured with a log recording core used for reading attack messages in the first attack message cache region and the second attack message cache region through lock-free read operations and generating an attack log according to the attack messages. The three types of central processing units in the firewall device work independently and in parallel: the forwarding core handles normal messages and small-flow attacks, the attack defense dedicated core handles large-flow attacks, and the log recording core actively acquires attack messages and generates the corresponding attack logs. With each type of core performing its own function, the firewall device can forward messages normally, effectively defend against large-flow attacks, and record attack logs at the same time, which effectively improves the performance of the firewall device and meets market requirements.
Drawings
Fig. 1 is a schematic structural diagram of a firewall device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of another firewall device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another firewall device according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a message processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, a hardware firewall is usually selected as the barrier between an intranet and an extranet; hardware firewalls are expensive but better at resisting attacks. However, even if a central processing unit with better performance is selected for the firewall, due to factors such as multi-core concurrent resource competition and software performance limitations, it is generally impossible to record the attack log while defending against a flow attack.
Compared with a hardware firewall, a software firewall has relatively weak attack resistance and likewise cannot record attack logs while resisting an attack. The reason is that, if the software firewall records the attack log, the forwarding core in the software firewall must extract and record the attack flow information from the attack messages and then send the recorded attack flow information to the log system. If a performance-profiling command such as perf is used to check the load on the forwarding core when a large attack flow arrives, it is obvious that a large proportion of the forwarding core's calls are occupied by the log system at that moment.
That is to say, when a flow attack arrives, the forwarding core must simultaneously forward normal messages, resist the flow attack, extract the attack flow information and send it to the log system. Extracting the attack flow information and sending it to the log system consumes a large share of the forwarding core's capacity, so when a flow attack arrives the forwarding core alone can hardly support forwarding normal messages, resisting the flow attack and extracting the attack flow information at the same time. In addition, before sending attack flow information to the log system, the forwarding core also needs to notify the log system; in this notification process, whether a signal or a system call is used, the other threads affect the performance of the forwarding core to different degrees.
In order to solve the technical problems in the prior art, embodiments of the present application provide a firewall device, which can record an attack log even when a large-flow attack is encountered, and does not affect the attack defense performance of the firewall and the forwarding performance of a normal message.
Specifically, the firewall device provided in the embodiments of the present application includes three types of central processing units and a memory corresponding to each type of central processing unit, where the three types of central processing units are respectively configured with a forwarding core, an attack defense dedicated core, and a log recording core. The forwarding core is configured to forward normal messages and discard small-flow attack messages to a first attack message cache region, the attack defense dedicated core is configured to discard large-flow attack messages to a second attack message cache region, and the log recording core is configured to read the attack messages in the first attack message cache region and the second attack message cache region through lock-free read operations and generate an attack log accordingly.
The firewall equipment can simultaneously execute three operations of forwarding normal messages, resisting flow attacks and generating attack logs, and even when the firewall equipment encounters a large-flow attack, the attack log recording does not influence the attack resisting performance and the message forwarding performance of the firewall.
The firewall device provided by the present application is described below by way of an embodiment.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a firewall device according to an embodiment of the present application. As shown in fig. 1, the firewall device includes three types of central processing units, namely a first type central processing unit 101, a second type central processing unit 102 and a third type central processing unit 103; the firewall device further includes a memory corresponding to each type of central processing unit, i.e., a memory 104 corresponding to the first type central processing unit 101, a memory 105 corresponding to the second type central processing unit 102, and a memory 106 corresponding to the third type central processing unit 103.
It should be noted that the firewall device shown in fig. 1 is only an example, in practical applications, three types of central processing units may respectively correspond to one memory, or multiple types of central processing units may correspond to one memory, that is, any two of the first type central processing unit 101, the second type central processing unit 102, and the third type central processing unit 103 may correspond to one memory, or the three types of central processing units may correspond to one memory.
The first type central processing unit 101 is configured with a forwarding core, and the forwarding core is configured to receive and identify the first type of packet, and further forward a normal packet according to the identification result, or discard the attack packet to a first attack packet cache region corresponding to the forwarding core.
The second type central processing unit 102 is configured with an attack defense dedicated core, which is configured to receive and identify the second type of packet, and further forward the normal packet according to the identification result, or discard the attack packet to a second attack packet cache region corresponding to the attack defense dedicated core.
The first type of message specifically comprises normal messages and small-flow attack messages; the second type of message specifically comprises normal messages and large-flow attack messages. After receiving a message from the outside, the firewall device first identifies the type of the received message according to the flow characteristic rules set in the network card driver: if the received message is determined to conform to the characteristics of the first type of message, the firewall device sends it to the first type central processing unit 101; if it is determined to conform to the characteristics of the second type of message, the firewall device sends it to the second type central processing unit 102.
In other words, according to the characteristics of the messages, large-flow attack messages are separated from normal messages and small-flow attack messages, and the second type central processing unit 102 configured with the attack defense dedicated core handles the large-flow attack messages; this operation is also called black-hole processing. After receiving large-flow attack messages such as a DDoS attack, the firewall device directly directs them to the attack defense dedicated core running as an independent thread, so that forwarding of normal messages is not affected by the large-flow attack messages, while the large-flow attack messages themselves are processed efficiently.
The first attack packet buffer may be specifically disposed in the memory 104 corresponding to the first type of central processing unit, and the second attack packet buffer may be specifically disposed in the memory 105 corresponding to the second type of central processing unit; the first attack message buffer area and the forwarding core are in one-to-one correspondence, and the second attack message buffer area and the attack defense special core are in one-to-one correspondence.
After the first type of message is received by the first type central processing unit 101, the forwarding core configured therein further identifies and processes it: if it is identified as a normal message, the message is forwarded accordingly; if it is identified as a small-flow attack message, the attack message is discarded into the memory 104 corresponding to the first type central processing unit 101, that is, into the first attack message cache region corresponding to the forwarding core.
Similarly, after the second type of message is received by the second type central processing unit 102, the attack defense dedicated core configured therein further processes it: if it is identified as a normal message, the message is forwarded accordingly; if it is identified as a large-flow attack message, the attack message is discarded into the memory 105 corresponding to the second type central processing unit 102, that is, into the second attack message cache region corresponding to the attack defense dedicated core.
It should be understood that, in practical applications, the attack defense dedicated core not only processes large-flow attack messages but also implements the normal message forwarding function, that is, when the attack defense dedicated core receives a normal message, it forwards that message as well.
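To make this division of labor concrete, the following C sketch illustrates the receive, identify, and forward-or-discard loop that is shared in shape by the forwarding core and the attack defense dedicated core. It is only an illustrative model, not the patent's implementation: the types and helper functions (rx_burst_one, identify, forward, drop_ring_enqueue, pkt_free) are hypothetical placeholders standing in for the device's real receive, identification, forwarding, drop-ring and memory-pool primitives, shown here as prototypes.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical placeholder type for the device's real packet structure. */
    struct pkt { void *data; size_t len; };
    enum verdict { PKT_NORMAL, PKT_ATTACK };

    /* Stubs standing in for the real primitives (prototypes only). */
    struct pkt *rx_burst_one(void);              /* receive one message from the NIC queue       */
    enum verdict identify(const struct pkt *p);  /* identification based on flow characteristics */
    void forward(struct pkt *p);                 /* normal forwarding path                        */
    bool drop_ring_enqueue(struct pkt *p);       /* lock-free write into this core's drop ring   */
    void pkt_free(struct pkt *p);                /* release the storage back to this core's pool */

    /* Loop shared in shape by the forwarding core and the attack defense dedicated core:
     * normal messages are forwarded, attack messages are "discarded" into the core's own
     * attack message cache region so that the log recording core can read them later.    */
    void core_main_loop(void)
    {
        for (;;) {
            struct pkt *p = rx_burst_one();
            if (p == NULL)
                continue;
            if (identify(p) == PKT_NORMAL)
                forward(p);
            else if (!drop_ring_enqueue(p))
                pkt_free(p);   /* ring saturated: reclamation (described later) frees space */
        }
    }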
It should be noted that, in practical applications, in order to improve the performance of the firewall device, a plurality of first type central processing units 101 and a plurality of second type central processing units 102 may generally be provided, that is, a plurality of first type central processing units 101 configured with forwarding cores and a plurality of second type central processing units 102 configured with attack defense dedicated cores. Correspondingly, the message forwarding performance and attack defense performance of the firewall device are enhanced as the numbers of first type central processing units 101 and second type central processing units 102 increase; the number of each type of central processing unit can be set according to actual requirements.
It should be noted that, in some cases, the first type central processing unit 101 and the second type central processing unit 102 may use the same physical device, that is, the forwarding core and the attack defense dedicated core may share one central processing unit; in that central processing unit the forwarding core and the attack defense dedicated core share a single dedicated thread, which can simultaneously implement the three functions of normal message forwarding, small-flow attack message processing and large-flow attack message processing.
Accordingly, the number of attack message cache regions in the firewall device (including the first attack message cache regions and the second attack message cache regions) depends on the numbers of forwarding cores, attack defense dedicated cores and shared central processing units. Suppose there are A forwarding cores, B attack defense cores and N attack message cache regions. If no central processing unit is shared between forwarding cores and attack defense cores, then N = A + B, where the number of first attack message cache regions is A and the number of second attack message cache regions is B; if C central processing units are shared between forwarding cores and attack defense cores, then N = A + B - C. For example, with 4 forwarding cores, 3 attack defense cores and 2 shared central processing units, N = 4 + 3 - 2 = 5.
Generally, when the memory corresponding to a central processing unit is initialized, a general memory pool (mbuf mempool) is first constructed in the memory; after receiving a message, the central processing unit applies to this general memory pool for memory in which to store the received message, and when the message is released, the memory it occupied is released back to the general memory pool.
For the firewall device provided in the embodiments of the present application, when the memory is initialized, a first message memory pool corresponding to the forwarding core needs to be constructed for the forwarding core, and a second message memory pool corresponding to the attack defense core needs to be constructed for the attack defense core. Specifically, a first message memory pool common-mempool may be established in the memory corresponding to the first type central processing unit; after receiving a message, the forwarding core applies to the first message memory pool common-mempool for memory in which to store the message, and when the message is released, the memory it occupied is released back to the first message memory pool common-mempool. Likewise, a second message memory pool special-mempool may be established in the memory corresponding to the second type central processing unit; after receiving a message, the attack defense dedicated core applies to the second message memory pool special-mempool for memory, and when the message is released, the memory it occupied is released back to the second message memory pool special-mempool.
It should be noted that, when the firewall device adopts central processing units with a Non-Uniform Memory Access (NUMA) architecture, initializing the memory corresponding to the first type central processing unit essentially means initializing the memory of the NUMA node on which the first type central processing unit is located; accordingly, the number of first message memory pools common-mempool equals the number of NUMA nodes in the firewall device that contain first type central processing units.
To prevent resource competition from affecting the processing of large-flow attack messages when such messages arrive, a separate second message memory pool special-mempool can be constructed for each attack defense dedicated core when the memory corresponding to the second type central processing unit is initialized. After receiving messages, each attack defense dedicated core applies for memory only from its own second message memory pool special-mempool, which ensures that there is no competition for memory resources between attack defense dedicated cores; correspondingly, the attack defense performance of the firewall device scales linearly as the number of attack defense dedicated cores increases.
It should be understood that, when the first type central processing unit and the second type central processing unit correspond to the same memory, a first message memory pool common-mempool dedicated to the forwarding core and a second message memory pool special-mempool dedicated to the attack defense core may both be constructed in that memory.
It should be noted that, because the number of attack messages the attack defense dedicated core needs to process is usually far larger than the number of messages the forwarding core needs to process, the second message memory pool special-mempool constructed when the memory is initialized is usually much larger than the first message memory pool common-mempool, that is, the number of messages that can be stored in the second message memory pool special-mempool is far larger than the number of messages that can be stored in the first message memory pool common-mempool.
When the memory is initialized, in addition to the first message memory pool common-mempool and the second message memory pool special-mempool, a first attack message cache region drop ring and a second attack message cache region drop ring are also constructed. The number of first attack message cache regions equals the number of forwarding cores, and the number of second attack message cache regions equals the number of attack defense dedicated cores.
In addition, the size of the first attack message cache region drop ring depends on the size of the first message memory pool common-mempool, and the size of the second attack message cache region drop ring depends on the size of the second message memory pool special-mempool. When the firewall device adopts central processing units with a NUMA architecture, the number of messages that can be stored in the first attack message cache region drop ring depends on the number of messages that can be stored in the first message memory pool common-mempool and on the number of central processing units in one NUMA node: assuming the first message memory pool common-mempool can store M messages and the number of central processing units in the NUMA node is a, the length of the first attack message cache region drop ring is X = M/a. The number of messages that can be stored in the second attack message cache region drop ring remains equal to the number of messages that can be stored in the second message memory pool special-mempool.
It should be understood that, since the second message memory pool special-mempool is usually much larger than the first message memory pool common-mempool, the number of messages that can be stored in the second attack message cache region drop ring is also much larger than the number that can be stored in the first attack message cache region drop ring.
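As an illustration of the initialization just described, the sketch below creates the per-core resources with DPDK-style calls. The use of DPDK (rte_mempool, rte_ring) is an assumption suggested by the mbuf/mempool/drop-ring terminology, not something the patent states, and the sizes M, NUMA_CPUS and SPECIAL_POOL_SIZE are illustrative parameters only.

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define M                 (1 << 18)  /* messages one common-mempool can store        */
    #define NUMA_CPUS         8          /* CPUs on the NUMA node (the "a" in X = M/a)   */
    #define SPECIAL_POOL_SIZE (1 << 20)  /* far larger pool for each attack defense core */

    /* One first message memory pool common-mempool per NUMA node, shared by the
     * forwarding cores on that node. */
    struct rte_mempool *make_common_mempool(int socket_id)
    {
        char name[32];
        snprintf(name, sizeof(name), "common_mempool_%d", socket_id);
        return rte_pktmbuf_pool_create(name, M, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
    }

    /* Per-forwarding-core first attack message cache region, length X = M / a. */
    struct rte_ring *make_fwd_drop_ring(unsigned core_id, int socket_id)
    {
        char name[32];
        snprintf(name, sizeof(name), "drop_ring_fwd_%u", core_id);
        return rte_ring_create(name, M / NUMA_CPUS, socket_id,
                               RING_F_SP_ENQ | RING_F_SC_DEQ);  /* one writer, one reader */
    }

    /* Per-attack-defense-core special-mempool and second attack message cache region:
     * the drop ring can hold as many messages as the pool itself. */
    void make_defense_resources(unsigned core_id, int socket_id,
                                struct rte_mempool **pool, struct rte_ring **ring)
    {
        char name[32];
        snprintf(name, sizeof(name), "special_mempool_%u", core_id);
        *pool = rte_pktmbuf_pool_create(name, SPECIAL_POOL_SIZE, 256, 0,
                                        RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        snprintf(name, sizeof(name), "drop_ring_def_%u", core_id);
        *ring = rte_ring_create(name, SPECIAL_POOL_SIZE, socket_id,
                                RING_F_SP_ENQ | RING_F_SC_DEQ);
    }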
The third type of central processing unit 103 is configured with a log recording core, and the log recording core is configured to read the attack messages in the first attack message cache region and the second attack message cache region through a lock-free read operation, and generate an attack log according to the read attack messages.
The first type central processing unit 101 discards a received message into the first attack message cache region when it determines that the message is an attack message, and likewise the second type central processing unit 102 discards a received message into the second attack message cache region when it determines that the message is an attack message. The third type central processing unit 103 configured with the log recording core then traverses and reads the attack messages stored in the first attack message cache region and the second attack message cache region, and generates an attack log according to the attack messages it reads.
It should be noted that, so that the log recording core can know why each attack message it reads from the first attack message cache region and the second attack message cache region was discarded, the messages in the first attack message cache region and the second attack message cache region usually carry a field that marks the discard reason, such as an extern_id field; the log recording core can thus obtain the discard reason from the extern_id field and extract specific fields of the attack message to form the attack log.
It should be noted that, in the general case, when multiple cores simultaneously read or write data in the same cache region, they must operate on that data under a lock, that is, the order in which the cores operate on the data in the cache region is determined by each core's priority: a core with higher priority operates first and a core with lower priority operates later. In the technical scheme provided by the embodiments of the present application, the log recording core and the forwarding core operate on the data in the first attack message cache region in a lock-free mode: while the forwarding core writes attack messages into the first attack message cache region, the log recording core can simultaneously read attack messages from it, and the two do not affect each other. Similarly, the log recording core and the attack defense dedicated core operate on the data in the second attack message cache region in a lock-free mode: while the attack defense dedicated core writes attack messages into the second attack message cache region, the log recording core can simultaneously read attack messages from it, and the two do not affect each other.
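The lock-free mode works because each attack message cache region has exactly one writer (the forwarding core or the attack defense dedicated core) and exactly one reader (the log recording core). The following minimal single-producer/single-consumer ring in C11 atomics is a sketch of that idea only; it is not the patent's data structure, and in the patent's scheme the storage of read messages is reclaimed later in batch rather than freed at read time.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define RING_SIZE 4096                    /* must be a power of two */

    struct spsc_ring {
        void *slot[RING_SIZE];
        _Atomic unsigned head;                /* written only by the producer core   */
        _Atomic unsigned tail;                /* written only by the consumer thread */
    };

    /* Producer side: forwarding core or attack defense core discards an attack message. */
    static bool ring_put(struct spsc_ring *r, void *pkt)
    {
        unsigned h = atomic_load_explicit(&r->head, memory_order_relaxed);
        unsigned t = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (h - t == RING_SIZE)
            return false;                      /* saturated: triggers batch reclamation */
        r->slot[h & (RING_SIZE - 1)] = pkt;
        atomic_store_explicit(&r->head, h + 1, memory_order_release);
        return true;
    }

    /* Consumer side: log thread reads one attack message without blocking the producer.
     * Advancing tail here only marks the slot as read; in the patent's scheme the
     * underlying storage is recovered later in batch by the producer core. */
    static void *ring_get(struct spsc_ring *r)
    {
        unsigned t = atomic_load_explicit(&r->tail, memory_order_relaxed);
        unsigned h = atomic_load_explicit(&r->head, memory_order_acquire);
        if (t == h)
            return NULL;                       /* empty */
        void *pkt = r->slot[t & (RING_SIZE - 1)];
        atomic_store_explicit(&r->tail, t + 1, memory_order_release);
        return pkt;
    }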
The three types of central processing units in the firewall device work independently and in parallel: the forwarding core processes normal messages and small-flow attacks, the attack defense dedicated core processes large-flow attacks, and the log recording core actively acquires attack messages and generates the corresponding attack logs. With each type of core performing its own function, the firewall device can forward messages normally, effectively defend against large-flow attacks, and record attack logs, which effectively improves the performance of the firewall device and meets market requirements.
It should be noted that, after the log recording core has read the attack messages stored in the first attack message cache region and the second attack message cache region, the forwarding core can recover the storage space occupied by the read attack messages in the first attack message cache region back to the first message memory pool, and the attack defense dedicated core can recover the storage space occupied by the read attack messages in the second attack message cache region back to the second message memory pool. This ensures that the first message memory pool and the second message memory pool can provide enough storage space for the forwarding core and the attack defense dedicated core when they subsequently receive messages, so that both cores can keep operating normally.
In a possible implementation manner, the forwarding core is configured to, when the first attack packet buffer is saturated, retrieve a storage space of all attack packets read by the log recording core in the first attack packet buffer, and release the storage space to a first packet memory pool corresponding to the forwarding core; similarly, the attack defense dedicated core is configured to, when the second attack packet buffer area is saturated, retrieve a storage space of all attack packets read by the log recording core in the second attack packet buffer area, and release the storage space to the second packet memory pool corresponding to the attack defense dedicated core.
Specifically, when the first attack packet cache region is filled with the attack packet discarded by the forwarding core, the forwarding core may recover the storage space occupied by the attack packet marked as read in the first attack packet cache region, where the attack packet marked as read is actually the attack packet read by the log recording core, and after recovering the storage space occupied by the read attack packet, further release the recovered storage space to the first packet memory pool corresponding to the forwarding core.
Similarly, when the second attack message cache region is filled with the attack message discarded by the attack defense dedicated core, the attack defense dedicated core may recover the storage space occupied by the attack message marked as read in the second attack message cache region, the attack message marked as read is the attack message substantially read by the log recording core, and after recovering the storage space occupied by the read attack message, the recovered storage space is further released to the second message memory pool corresponding to the attack defense dedicated core.
In another possible implementation manner, the forwarding core is configured to, when the available memory proportion in the first message memory pool corresponding to the forwarding core is smaller than a first threshold, retrieve storage spaces of all attack messages read by the log recording core in the first attack message cache region, and release the retrieved storage spaces to the first message memory pool corresponding to the forwarding core; similarly, the attack defense dedicated core is configured to, when the available memory proportion in the second message memory pool corresponding to the attack defense dedicated core is smaller than the first threshold, retrieve the storage space of all attack messages read by the log recording core in the second attack message cache region, and release the retrieved storage space to the second message memory pool corresponding to the attack defense dedicated core.
Specifically, when the available memory occupancy in the first message memory pool is smaller than the first threshold, it indicates that the memory that can be applied by the forwarding core for storing the newly received message is small, at this time, the forwarding core may recover the storage space occupied by the attack message marked as read in the first attack message cache region, and further release the recovered storage space to the first message memory pool corresponding to the forwarding core, thereby increasing the available memory in the first message memory pool.
Similarly, when the available memory occupancy in the second message memory pool is smaller than the first threshold, it indicates that the memory that can be applied by the attack defense special core for storing the newly received message is less, and at this time, the attack defense special core can recycle the storage space occupied by the attack message marked as read in the second attack message cache region, and further release the recycled storage space to the second message memory pool corresponding to the attack defense special core, thereby increasing the available memory in the second message memory pool.
It should be understood that the first threshold can be set according to actual requirements and is generally set to 1/10 of the total memory; accordingly, the first threshold corresponding to the first message memory pool is 1/10 of the total memory of the first message memory pool, and the first threshold corresponding to the second message memory pool is 1/10 of the total memory of the second message memory pool. Of course, the first threshold can also be set to other values according to actual requirements, and is not specifically limited here.
When the memory in the first attack message cache region and the second attack message cache region is recovered in the two possible implementations above, a batch recovery mode is adopted: when the memory recovery condition is met, the storage space of all attack messages already read by the log recording core is recovered at once. This recovery mode simplifies the overall design of the firewall device to a certain extent and also improves the forwarding performance of the firewall device.
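A sketch of this batch recovery logic is given below. The helper names (ring_is_saturated, ring_read_count, ring_reclaim_next, mempool_avail_ratio, mempool_put) are hypothetical placeholders, and the 1/10 threshold is the example value mentioned above.

    #include <stdbool.h>

    struct drop_ring;                       /* the core's attack message cache region */
    struct mempool;                         /* the core's message memory pool         */

    /* Hypothetical helpers. */
    bool     ring_is_saturated(struct drop_ring *r);
    unsigned ring_read_count(struct drop_ring *r);       /* messages already read by the log thread */
    void    *ring_reclaim_next(struct drop_ring *r);      /* pop one already-read message             */
    double   mempool_avail_ratio(struct mempool *mp);     /* available memory / total memory          */
    void     mempool_put(struct mempool *mp, void *pkt);  /* release storage back to the pool         */

    #define FIRST_THRESHOLD 0.10   /* 1/10 of the pool, as suggested in the description */

    /* Called by the forwarding core (against its common-mempool) or by the
     * attack defense dedicated core (against its special-mempool). */
    void maybe_reclaim(struct drop_ring *r, struct mempool *mp)
    {
        if (!ring_is_saturated(r) && mempool_avail_ratio(mp) >= FIRST_THRESHOLD)
            return;                                   /* neither trigger fired */

        /* Batch recovery: free every message marked as read, in one pass. */
        unsigned n = ring_read_count(r);
        for (unsigned i = 0; i < n; i++)
            mempool_put(mp, ring_reclaim_next(r));
    }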
It should be noted that, when reading attack messages in the first attack message cache region and the second attack message cache region, the log recording core configured in the third type central processing unit may specifically start a plurality of log threads, and respectively read the attack messages in a concurrent manner by using a lock-free read operation through the plurality of log threads, where one log thread corresponds to one first attack message cache region or one second attack message cache region and is used to read the attack messages from the corresponding attack message cache region.
Specifically, when the log recording core reads the attack packet from the first attack packet buffer and the second attack packet buffer, the log recording core may correspondingly start a log thread for each attack packet buffer (including the first attack packet buffer and the second attack packet buffer), and further, correspondingly read the attack packet from each attack packet buffer through the log thread corresponding to each attack packet buffer.
It should be understood that the log thread here is in a one-to-one correspondence relationship with the first attack packet buffer area or the second attack packet buffer area, and accordingly, the log thread is in a one-to-one correspondence relationship with the first type of central processing unit or the second type of central processing unit, and the log thread is specially used for reading the attack packet from the attack packet buffer area corresponding to the log thread.
In the mechanism for concurrently reading the attack message cache region by the multiple log threads, the log threads and the attack message cache region are in one-to-one correspondence, that is, one log thread only reads the attack message from one attack message cache region, and the condition that the multiple log threads share one attack message cache region does not exist, so that resource competition among the log threads can be effectively avoided in the process of reading the attack message, and the operation of recording the log by the log recording core can be ensured to be smoothly carried out.
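The per-thread reading loop can be sketched as follows; the extract/write helpers and the extern_id field layout are assumptions for illustration, not the patent's actual structures.

    #include <stddef.h>

    struct drop_ring;                                            /* one attack message cache region */
    struct attack_pkt { int extern_id; unsigned char data[]; };  /* extern_id carries the discard reason */

    struct attack_pkt *ring_read_lockfree(struct drop_ring *r);  /* NULL when nothing new */
    void write_attack_log(int reason, const struct attack_pkt *p);

    /* Each log thread owns exactly one cache region, so no two log threads ever
     * compete for the same ring. */
    void log_thread_main(struct drop_ring *my_ring)
    {
        for (;;) {
            struct attack_pkt *p = ring_read_lockfree(my_ring);
            if (p == NULL)
                continue;                       /* nothing new yet; poll again   */
            write_attack_log(p->extern_id, p);  /* build the attack log entry    */
            /* Storage is not freed here: the producer core reclaims it in batch. */
        }
    }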
In a possible implementation manner, in order to ensure that the firewall device log recording performance can be linearly enhanced with the increase of the number of the log recording cores, a plurality of third type central processing units may be set in the firewall device, a log thread is started by the log recording core in each third type central processing unit, and the attack packet is read from one first attack packet buffer or one second attack packet buffer through the log thread.
Referring to fig. 2, fig. 2 is a schematic diagram of the working principle of the firewall device. As shown in fig. 2, the firewall device includes: two central processing units configured with forwarding cores, two central processing units shared by forwarding cores and attack defense dedicated cores, and one central processing unit configured with an attack defense dedicated core; and an attack message cache region drop ring corresponding to each central processing unit is arranged in the memory corresponding to that central processing unit.
The firewall device further includes a third type central processing unit corresponding to each of these central processing units: as shown in fig. 2, the firewall device includes five central processing units configured with log recording cores, each corresponding to one of the central processing units configured with a forwarding core and/or an attack defense dedicated core.
And the log recording core in each third type of central processing unit starts a log thread aiming at the corresponding first type of central processing unit or second type of central processing unit, and reads the attack message from the attack message cache region drop ring corresponding to the first type of central processing unit or the second type of central processing unit through the log thread.
It should be understood that the firewall device shown in fig. 2 is merely an example. In practical application, the attack packet buffer is set in the memory corresponding to the central processing unit, for convenience of description, the central processing unit and the memory are merged in fig. 2, the attack packet buffer is directly set in the central processing unit, and the memory and the central processing unit are actually independent from each other; in addition, the firewall device may include a plurality of first type central processing units and second type central processing units, and the number of the third type central processing units depends on the number of the first type central processing units and the second type central processing units, and the number of the central processing units included in the firewall device is not specifically limited herein.
In the firewall device shown in fig. 2, all the logging cores are completely parallelized, and since the resources accessed by all the logging cores are independent resources, each attack packet buffer only has two threads for reading and writing to access, and other central processing units cannot access the attack packet buffer, the logging performance of the firewall device can be linearly improved along with the increase of the logging cores.
In the device supporting the hyper-threading, the third type of central processing unit may share the same physical device with the first type of central processing unit and the second type of central processing unit, respectively; and the log record core in the third type central processing unit may be specifically configured to read the attack packet through the hyper-thread virtualized by the physical device.
In a device supporting hyper-threading, a physical device can support the operation of two types of central processing units at the same time: specifically, a virtual device can be created in the physical device, and while the first type central processing unit or the second type central processing unit runs on the physical device, the created virtual device can support the operation of a third type central processing unit.
It should be noted that the physical device is actually a central processing unit, which supports the operation of two types of central processing units, and can substantially support the operation of a forwarding core and a log recording core at the same time, or support the operation of an attack defense special core and a log recording core at the same time; the virtualized central processing unit in the central processing unit can be used to support the work of the log record core.
It should be understood that a third type of central processing unit running in the same physical device has a corresponding relationship with the first type of central processing unit or the second type of central processing unit running therein, that is, if the first type of central processing unit and the third type of central processing unit are running in the same physical device at the same time, a log record core in the third type of central processing unit can directly read an attack message from a first attack message cache region corresponding to the first type of central processing unit through a virtual hyper-thread of the physical device; similarly, if a second type of central processing unit and a third type of central processing unit are simultaneously operated in the same physical device, a log record core in the third type of central processing unit can directly read an attack packet from a second attack packet buffer corresponding to the second type of central processing unit through a virtual hyper-thread in the physical device.
It should be understood that the above-mentioned hyper-thread virtualized by the physical device is essentially a log thread, and the operations required to be executed by the hyper-thread are the same as those required by the log thread, that is, the hyper-thread is used for reading the attack packet from the attack packet buffer.
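On Linux, such a log thread can be pinned to the hyper-thread sibling of the forwarding or attack defense core with standard thread-affinity calls, as in the sketch below. The sibling-CPU numbering used here is only an assumption about the platform's topology; the patent does not prescribe how the hyper-thread is selected, and real code should consult /sys/devices/system/cpu/cpuN/topology/thread_siblings_list.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    #define NUM_PHYS_CORES 4   /* illustrative physical core count */

    static void pin_to_cpu(pthread_t thr, int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(thr, sizeof(set), &set);
    }

    /* Start a log thread on the hyper-thread paired with physical core phys_core. */
    void start_log_thread_on_sibling(int phys_core,
                                     void *(*log_main)(void *), void *arg)
    {
        pthread_t thr;
        if (pthread_create(&thr, NULL, log_main, arg) == 0)
            pin_to_cpu(thr, phys_core + NUM_PHYS_CORES);  /* assumed sibling CPU id */
    }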
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the operating principle of a firewall device supporting hyper-threading according to an embodiment of the present application. The firewall device comprises four central processing units: central processing unit 1 and central processing unit 2 each support a forwarding core and a log recording core working at the same time, and central processing unit 3 and central processing unit 4 each support an attack defense dedicated core and a log recording core working at the same time.
As shown in fig. 3, the hyper-threads virtualized in central processing unit 1, central processing unit 2, central processing unit 3 and central processing unit 4 each serve as a log thread. The log recording cores in central processing unit 1 and central processing unit 2 read attack packets, through the hyper-threads virtualized in those units, from the first attack packet buffers corresponding to the forwarding cores, and the log recording cores in central processing unit 3 and central processing unit 4 read attack packets, through the hyper-threads virtualized in those units, from the second attack packet buffers corresponding to the attack defense dedicated cores.
It should be understood that, for convenience of description, the log recording cores running in the respective central processing units are not shown in fig. 3, and the firewall device shown in fig. 3 is merely an example. In practical application, each attack packet buffer resides in the memory corresponding to a central processing unit; for convenience of illustration, fig. 3 merges the central processing unit with its memory and draws the attack packet buffer inside the central processing unit, although the memory and the central processing unit are in fact independent of each other. In addition, the firewall device may include a plurality of central processing units supporting hyper-threading, and the number of central processing units included in the firewall device is not specifically limited herein.
In a firewall device supporting hyper-threading, a forwarding core or an attack defense dedicated core can share the same cache of a physical device with a log recording core, and the log thread running on the hyper-thread can read the attack messages in the attack message buffer. The forwarding core or attack defense dedicated core writes attack messages into the buffer in order, and the log recording core reads them out in turn behind it, so the two never access the same memory at the same time. Once the log recording core has read an attack message, the memory that message occupies is not read again before it is released, and the memory occupied by read attack messages is usually released in batches when a certain condition is reached, so the log recording core and the forwarding core or attack defense dedicated core never occupy the same resource simultaneously. In addition, since the operations performed by the log recording core are completely different from those of the forwarding core or attack defense dedicated core, letting them share the same physical device makes fuller use of that device.
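One way to realize the "written in order, read in order, released in batches" discipline described above is to keep three positions per attack message buffer: a write index advanced only by the forwarding core or attack defense dedicated core, a read index advanced only by the log thread, and a reclaim index up to which the producer has already returned memory to its pool. The sketch below is an assumed layout, not the patent's required structure.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative ring positions for one attack message cache region.
 * write   - advanced only by the forwarding / attack-defense core
 * read    - advanced only by the log thread
 * reclaim - advanced only by the producer when it frees read slots in batches
 * Invariant: reclaim <= read <= write (indices grow monotonically). */
struct ring_cursors {
    _Atomic size_t write;
    _Atomic size_t read;
    size_t reclaim;                       /* touched only by the producer */
};

/* Number of slots the producer may free right now: messages that the log
 * thread has finished reading but whose memory has not yet been returned. */
static size_t reclaimable(const struct ring_cursors *c)
{
    size_t rd = atomic_load_explicit(&c->read, memory_order_acquire);
    return rd - c->reclaim;
}
```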
Because the log recording core and the forwarding core do not work synchronously, the log thread does not necessarily read an attack message from the attack message buffer immediately after it has been written there, so the log thread may fail to read attack messages in time. When a large number of attack messages arrive, the attack message buffer can be written heavily, or even filled completely, in a short time while the log thread has still not read the attack messages in it, which affects the performance of the firewall to some degree.
To prevent this situation, the present application provides two ways of notifying the log thread to read attack messages, which are described below.
In a first implementation, when the proportion of the written area in the first attack message cache region reaches a second threshold and the log thread has not yet read the attack messages in it, the forwarding core notifies the log thread to read them; similarly, when the proportion of the written area in the second attack message cache region reaches the second threshold and the log thread has not yet read the attack messages in it, the attack defense dedicated core notifies the log thread to read them.
Specifically, the forwarding core monitors the size of a written area in the first attack message cache region, and when it is monitored that the proportion of the written area in the first attack message cache region reaches a second threshold value and it is determined that the log thread has not read the attack message in the first attack message cache region, the forwarding core notifies the log thread to read the attack message in the first attack message cache region.
Similarly, the attack defense special core monitors the size of the written area in the second attack message cache region, and when it is monitored that the proportion of the written area in the second attack message cache region reaches a second threshold value and it is determined that the log thread has not read the attack message in the second attack message cache region, the attack defense special core notifies the log thread to read the attack message in the second attack message cache region.
It should be understood that the second threshold may be set according to actual requirements; it may typically be set to 1/3, although other values are also possible, and the second threshold is not specifically limited herein.
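A minimal sketch of this first notification scheme is given below, assuming the ring cursors sketched earlier and a Linux eventfd as the wake-up channel; the patent does not prescribe a particular notification primitive, and 1/3 is only the example threshold mentioned in the text.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define RING_SLOTS       4096
#define SECOND_THRESHOLD (RING_SLOTS / 3)     /* 1/3 is the example value in the text */

struct ring_cursors {                         /* same layout as the earlier sketch */
    _Atomic size_t write;
    _Atomic size_t read;
    size_t reclaim;
};

/* Producer side: called by the forwarding or attack-defense core after each
 * discarded message.  If at least one third of the region is written and the
 * log thread has not caught up, wake it through the eventfd it sleeps on
 * (created once elsewhere with: int wake_fd = eventfd(0, 0);). */
static void maybe_notify_log_thread(struct ring_cursors *c, int wake_fd)
{
    size_t wr = atomic_load_explicit(&c->write, memory_order_relaxed);
    size_t rd = atomic_load_explicit(&c->read,  memory_order_acquire);

    if (wr - rd >= SECOND_THRESHOLD) {
        uint64_t one = 1;
        (void)write(wake_fd, &one, sizeof(one));  /* log thread read()s the counter */
    }
}
```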
In a second implementation, the forwarding core may append prompt information at the tail of the attack message according to the size of the attack traffic and notify the log thread through that prompt information, so that the log thread adjusts the number of messages it reads each time accordingly; similarly, the attack defense dedicated core may append prompt information at the tail of the attack message according to the size of the attack traffic and notify the log thread through that prompt information, so that the log thread adjusts the number of messages it reads each time accordingly.
Specifically, the forwarding core appends, according to the size of the attack traffic, prompt information at the tail of each attack message discarded into the first attack message cache region, where the prompt information is used to prompt the log thread to adjust the number of messages read each time. Accordingly, when the log thread reads an attack message, it obtains the prompt information carried in that message, adjusts the number of messages it reads each time according to the prompt information, and reads according to the adjusted number the next time it reads attack messages from the first attack message cache region.
Similarly, the attack defense dedicated core may append, according to the size of the attack traffic, prompt information at the tail of each attack message discarded into the second attack message cache region, where the prompt information is used to prompt the log thread to adjust the number of messages read each time. Accordingly, when the log thread reads an attack message, it obtains the prompt information carried in that message, adjusts the number of messages it reads each time according to the prompt information, and reads according to the adjusted number the next time it reads attack messages from the second attack message cache region.
It should be understood that when the attack traffic is heavy, the prompt information appended at the tail of the attack message by the forwarding core or attack defense dedicated core notifies the log thread to increase the number of messages read each time; when the attack traffic is light, the appended prompt information notifies the log thread to reduce the number of messages read each time.
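The prompt information could, for example, be carried as a small trailer behind the discarded message, as in the sketch below; the trailer format, the magic value, and the traffic-to-batch policy are all assumptions made for illustration, since the patent leaves them open.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative trailer appended after the discarded message payload.  It
 * carries the number of messages the log thread should read per pass. */
struct log_hint {
    uint32_t magic;                        /* marks that a hint is present */
    uint32_t suggested_batch;              /* messages to read per pass    */
};

#define LOG_HINT_MAGIC 0x4C4F4748u         /* "LOGH" */

/* Producer side: append the hint behind the payload according to the current
 * attack traffic level (heavier traffic -> larger batches).  The slot is
 * assumed to leave sizeof(struct log_hint) bytes of room after the payload. */
static size_t append_hint(uint8_t *buf, size_t pkt_len, uint32_t pkts_per_sec)
{
    struct log_hint h = {
        .magic           = LOG_HINT_MAGIC,
        .suggested_batch = pkts_per_sec > 100000 ? 256 : 32,  /* illustrative policy */
    };
    memcpy(buf + pkt_len, &h, sizeof(h));
    return pkt_len + sizeof(h);            /* total length stored in the cache region */
}

/* Log-thread side: pick up the hint and adjust the per-pass batch size. */
static void apply_hint(const uint8_t *buf, size_t total_len, uint32_t *batch)
{
    struct log_hint h;
    if (total_len < sizeof(h))
        return;
    memcpy(&h, buf + total_len - sizeof(h), sizeof(h));
    if (h.magic == LOG_HINT_MAGIC)
        *batch = h.suggested_batch;
}
```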
Notifying the log thread to read the attack messages in the attack message cache region through either of these two implementations ensures that the log thread reads them in time and effectively prevents the performance of the firewall device from being degraded when a large number of attack messages are encountered.
For the firewall device described above, the present application further provides a message processing method of the firewall device, so that in practical application the firewall device can process the messages it receives based on this message processing method.
Referring to fig. 4, fig. 4 is a schematic flowchart of a message processing method according to an embodiment of the present application. As shown in fig. 4, the message processing method includes the following steps:
Step 401: the first type central processing unit receives and identifies a first type of message through a pre-configured forwarding core, and, according to the identification result, forwards normal messages or discards attack messages into the first attack message cache region corresponding to the forwarding core.
Step 402: the second type central processing unit receives and identifies a second type of message through a pre-configured attack defense dedicated core, and, according to the identification result, forwards normal messages or discards attack messages into the second attack message cache region corresponding to the attack defense dedicated core.
Step 403: the third type central processing unit reads, through a pre-configured log recording core and by means of lock-free read operations, the attack messages in the first attack message cache region and the second attack message cache region, and generates attack logs according to those attack messages.
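To make the division of work in steps 401 to 403 concrete, the following sketch shows one pass of a log thread over the attack message cache regions of both the forwarding cores and the attack defense dedicated cores. `ring_get` is carried over from the earlier ring sketch, and `emit_attack_log` stands in for whatever log formatting the device actually performs; both are assumptions for the example.

```c
#include <stddef.h>

struct attack_pkt;
struct attack_ring;

/* Assumed helpers from the earlier sketches. */
extern struct attack_pkt *ring_get(struct attack_ring *r);
extern void emit_attack_log(const struct attack_pkt *p);   /* write one log record */

/* Step 403 sketch: one log thread drains the attack message cache regions of
 * both the forwarding cores (first regions) and the attack-defense cores
 * (second regions) and turns each message into an attack log entry. */
static void log_core_pass(struct attack_ring **rings, size_t n_rings,
                          unsigned batch)
{
    for (size_t i = 0; i < n_rings; i++) {
        for (unsigned k = 0; k < batch; k++) {
            struct attack_pkt *p = ring_get(rings[i]);
            if (p == NULL)
                break;                      /* this region is drained for now */
            emit_attack_log(p);             /* memory is reclaimed later by the producer */
        }
    }
}
```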
Optionally, the message processing method further includes:
the first type central processing unit, through the forwarding core, recovers the storage space of all attack messages already read by the log recording core in the first attack message cache region when the first attack message cache region is saturated, and releases that storage space to the first message memory pool corresponding to the forwarding core;
and the second type central processing unit, through the attack defense dedicated core, recovers the storage space of all attack messages already read by the log recording core in the second attack message cache region when the second attack message cache region is saturated, and releases that storage space to the second message memory pool corresponding to the attack defense dedicated core.
Optionally, when the proportion of available memory in the first message memory pool corresponding to the forwarding core is smaller than a first threshold, the first type central processing unit, through the forwarding core, recovers the storage space of all attack messages already read by the log recording core in the first attack message cache region, and releases that storage space to the first message memory pool corresponding to the forwarding core;
and when the proportion of available memory in the second message memory pool corresponding to the attack defense dedicated core is smaller than the first threshold, the second type central processing unit, through the attack defense dedicated core, recovers the storage space of all attack messages already read by the log recording core in the second attack message cache region, and releases that storage space to the second message memory pool corresponding to the attack defense dedicated core.
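A rough sketch of this pool-threshold variant is given below: the producing core periodically checks its message memory pool and, when the available proportion drops below the first threshold, recovers the storage of every attack message the log recording core has already read. The pool accessors, `ring_reclaim_read`, and the 10% figure are assumptions for illustration; the patent does not fix the first threshold's value.

```c
#include <stddef.h>

struct attack_ring;
struct pkt_pool;

/* Assumed pool/ring helpers for the sketch. */
extern size_t pkt_pool_avail(const struct pkt_pool *pool);   /* free buffers  */
extern size_t pkt_pool_size(const struct pkt_pool *pool);    /* total buffers */
/* Return every already-read-but-unreleased message in the region back to the
 * pool and report how many were recovered. */
extern size_t ring_reclaim_read(struct attack_ring *r, struct pkt_pool *pool);

#define FIRST_THRESHOLD_PCT 10   /* illustrative; the patent leaves the value open */

/* Called periodically by the forwarding or attack-defense core: when its
 * message memory pool runs low, recover the storage of messages the log
 * recording core has already read from the corresponding cache region. */
static void maybe_reclaim(struct attack_ring *r, struct pkt_pool *pool)
{
    size_t avail = pkt_pool_avail(pool);
    size_t total = pkt_pool_size(pool);

    if (avail * 100 < total * FIRST_THRESHOLD_PCT)
        (void)ring_reclaim_read(r, pool);
}
```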
Optionally, the third type central processing unit starts a plurality of log threads through the log recording core and reads attack messages concurrently through those log threads using lock-free read operations, wherein each log thread corresponds to one central processing unit and is used to read the attack messages from the cache region corresponding to that central processing unit.
Optionally, the message processing method further includes:
when the proportion of the written area in the first attack message cache region reaches a second threshold and the log thread has not yet read it, the first type central processing unit notifies the log thread, through the forwarding core, to read the attack messages;
and when the proportion of the written area in the second attack message cache region reaches the second threshold and the log thread has not yet read it, the second type central processing unit notifies the log thread, through the attack defense dedicated core, to read the attack messages.
Optionally, the message processing method further includes:
the first type central processing unit, through the forwarding core, appends prompt information at the tail of the attack message according to the size of the attack traffic and notifies the log thread through that prompt information, so that the log thread adjusts the number of messages read each time according to the prompt information;
and the second type central processing unit, through the attack defense dedicated core, appends prompt information at the tail of the attack message according to the size of the attack traffic and notifies the log thread through that prompt information, so that the log thread adjusts the number of messages read each time according to the prompt information.
In the message processing method of the firewall device provided by the embodiments of the present application, the three types of central processing units in the firewall device work independently and in parallel: the forwarding core handles normal messages and small-traffic attacks, the attack defense dedicated core handles large-traffic attacks, and the log recording core actively acquires attack messages and generates the corresponding attack logs. With each playing its own role, the firewall device can forward messages normally, effectively defend against large-traffic attacks and record attack logs at the same time, which effectively improves the performance of the firewall device and meets market demand.
An embodiment of the present application further provides a computer-readable storage medium for storing program code, where the program code is used to execute any implementation of the message processing method of the firewall device described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (12)

1. A firewall device, comprising:
three types of central processing units and a memory corresponding to each type of central processing unit; the three types of central processing units comprise a first type central processing unit, a second type central processing unit and a third type central processing unit; wherein,
the first type central processing unit is internally provided with a forwarding core, and the forwarding core is used for receiving and identifying a first type of message, and forwarding a normal message or discarding an attack message to a first attack message cache region corresponding to the forwarding core according to an identification result;
the second type central processing unit is internally provided with an attack defense special core, and the attack defense special core is used for receiving and identifying a second type of message, and forwarding a normal message or discarding the attack message to a second attack message cache region corresponding to the attack defense special core according to the identification result;
and a log recording core is configured in the third type central processing unit, and the log recording core is used for reading attack messages in the first attack message cache region and the second attack message cache region through lock-free reading operation and generating an attack log according to the attack messages.
2. The firewall device of claim 1, wherein the third type of central processor shares a same physical device with each of the first and second types of central processors; and the log record core in the third type central processing unit is specifically configured to read the attack packet through the hyper-thread virtualized by the physical device.
3. The firewall device of claim 1, wherein the first type of central processor and the second type of central processor are the same physical device.
4. The firewall device according to claim 1, wherein the log recording core is specifically configured to start a plurality of log threads, and respectively read attack packets in a concurrent manner by using lock-free read operations through the plurality of log threads, wherein one log thread corresponds to one of the first attack packet buffer or the second attack packet buffer and is configured to read an attack packet from the attack packet buffer corresponding to the log thread.
5. The firewall device according to claim 1, wherein the firewall device includes a plurality of the third type central processing units, and a log recording core in each of the third type central processing units starts a log thread, and reads an attack packet from one of the first attack packet buffer or the second attack packet buffer through the log thread.
6. The firewall device of claim 1,
the forwarding core is further configured to notify the log thread to read the attack packet when the written area proportion of the first attack packet cache region reaches a second threshold value and the log thread is not read yet;
and the attack defense special core is also used for informing the log thread to read the attack message when the written area proportion of the second attack message cache region reaches a second threshold value and the log thread is not read yet.
7. The firewall device of claim 1,
the forwarding core is further used for adding prompt information at the tail part of the attack message according to the size of the attack flow, and informing the log thread through the prompt information so that the log thread can adjust the number of messages read each time according to the prompt information;
the special core for attack defense is also used for adding prompt information at the tail part of the attack message according to the size of the attack flow, and informing the log thread through the prompt information so that the log thread can adjust the number of messages read each time according to the prompt information.
8. A message processing method of firewall equipment is characterized by comprising the following steps:
the first type of message is received and identified by the first central processing unit through a pre-configured forwarding core, and a normal message is forwarded or an attack message is discarded to a first attack message cache region corresponding to the forwarding core according to an identification result;
the second type central processing unit receives and identifies a second type of message through a pre-configured attack defense special core, and forwards a normal message or discards the attack message to a second attack message cache region corresponding to the attack defense special core according to the identification result;
and the third central processing unit reads the attack messages in the first attack message cache region and the second attack message cache region through a log record core which is pre-configured and through lock-free reading operation, and generates an attack log according to the attack messages.
9. The message processing method according to claim 8, wherein the third type of central processing unit starts a plurality of log threads through the log recording core, and reads attack messages in a concurrent manner by using lock-free read operations through the plurality of log threads, respectively, wherein one log thread corresponds to one central processing unit and is used for reading the attack messages from the corresponding central processing unit.
10. The message processing method according to claim 8, wherein the method further comprises:
when the proportion of the written area in the first attack message cache region reaches a second threshold value and the log thread has not yet read it, the first type central processing unit notifies the log thread, through the forwarding core, to read the attack message;
and when the proportion of the written area in the second attack message cache region reaches the second threshold value and the log thread has not yet read it, the second type central processing unit notifies the log thread, through the attack defense special core, to read the attack message.
11. The message processing method according to claim 8, wherein the method further comprises:
the first type central processing unit adds prompt information at the tail part of the attack message through the forwarding core according to the size of the attack flow, and informs the log thread through the prompt information so that the log thread adjusts the number of messages read each time according to the prompt information;
and the second type central processing unit adds prompt information at the tail part of the attack message according to the size of the attack flow through the special attack defense core, and informs the log thread through the prompt information so that the log thread adjusts the number of messages read each time according to the prompt information.
12. A computer-readable storage medium for storing program code for executing the message processing method according to any one of claims 8 to 11.
CN201811574742.XA 2018-12-21 2018-12-21 Firewall equipment and message processing method and medium thereof Active CN109495504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811574742.XA CN109495504B (en) 2018-12-21 2018-12-21 Firewall equipment and message processing method and medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811574742.XA CN109495504B (en) 2018-12-21 2018-12-21 Firewall equipment and message processing method and medium thereof

Publications (2)

Publication Number Publication Date
CN109495504A CN109495504A (en) 2019-03-19
CN109495504B true CN109495504B (en) 2021-05-25

Family

ID=65711402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811574742.XA Active CN109495504B (en) 2018-12-21 2018-12-21 Firewall equipment and message processing method and medium thereof

Country Status (1)

Country Link
CN (1) CN109495504B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110224947A (en) * 2019-06-05 2019-09-10 东软集团股份有限公司 Message processing method, device and equipment in a kind of multicore repeater system
CN110545291B (en) * 2019-09-29 2022-02-11 东软集团股份有限公司 Defense method for attack message, multi-core forwarding system and related products
CN113709044B (en) * 2020-05-20 2023-05-23 阿里巴巴集团控股有限公司 Data forwarding method, device, electronic equipment and storage medium
CN113890746B (en) * 2021-08-16 2024-05-07 曙光信息产业(北京)有限公司 Attack traffic identification method, device, equipment and storage medium
CN113991839B (en) * 2021-10-15 2023-11-14 许继集团有限公司 Device and method for improving remote control opening reliability
CN113938325B (en) * 2021-12-16 2022-03-18 紫光恒越技术有限公司 Method and device for processing aggressive traffic, electronic equipment and storage equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497322A (en) * 2011-12-19 2012-06-13 曙光信息产业(北京)有限公司 High-speed packet filtering device and method realized based on shunting network card and multi-core CPU (Central Processing Unit)
CN104202333A (en) * 2014-09-16 2014-12-10 浪潮电子信息产业股份有限公司 Implementation method of distributed firewall
CN107864156A (en) * 2017-12-18 2018-03-30 东软集团股份有限公司 Ssyn attack defence method and device, storage medium
CN108566382A (en) * 2018-03-21 2018-09-21 北京理工大学 The fire wall adaptive ability method for improving of rule-based life cycle detection
CN108667730A (en) * 2018-04-17 2018-10-16 东软集团股份有限公司 Message forwarding method, device, storage medium based on load balancing and equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080022401A1 (en) * 2006-07-21 2008-01-24 Sensory Networks Inc. Apparatus and Method for Multicore Network Security Processing
CN102801659B (en) * 2012-08-15 2016-03-30 成都卫士通信息产业股份有限公司 A kind of security gateway implementation method based on Flow Policy and device
CN106357726B (en) * 2016-08-24 2019-08-20 东软集团股份有限公司 Load-balancing method and device
CN107181738B (en) * 2017-04-25 2020-09-11 中国科学院信息工程研究所 Software intrusion detection system and method
CN107682312A (en) * 2017-08-25 2018-02-09 中国科学院信息工程研究所 A kind of security protection system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497322A (en) * 2011-12-19 2012-06-13 曙光信息产业(北京)有限公司 High-speed packet filtering device and method realized based on shunting network card and multi-core CPU (Central Processing Unit)
CN104202333A (en) * 2014-09-16 2014-12-10 浪潮电子信息产业股份有限公司 Implementation method of distributed firewall
CN107864156A (en) * 2017-12-18 2018-03-30 东软集团股份有限公司 Ssyn attack defence method and device, storage medium
CN108566382A (en) * 2018-03-21 2018-09-21 北京理工大学 The fire wall adaptive ability method for improving of rule-based life cycle detection
CN108667730A (en) * 2018-04-17 2018-10-16 东软集团股份有限公司 Message forwarding method, device, storage medium based on load balancing and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Multi-core (Multi-processing-unit) Firewall Architecture and Implementation of Key Technologies; Song Zhijun; China Master's Theses Full-text Database, Information Science and Technology; 2009-11-15; chapters 3-5 *
Research on a Multi-core Protocol-Analysis Stateful Inspection Firewall; Zhang Chaoyun; China Master's Theses Full-text Database, Information Science and Technology; 2008-01-15; chapter 3 *

Also Published As

Publication number Publication date
CN109495504A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109495504B (en) Firewall equipment and message processing method and medium thereof
US9794287B1 (en) Implementing cloud based malware container protection
CN107451012B (en) Data backup method and stream computing system
WO2015119522A2 (en) Systems and methods for detecting return-oriented programming (rop) exploits
CN107544755B (en) Data read-write control method and device
CN110737534A (en) Task processing method and device and server
US20210250404A1 (en) Video data storage method and device in cloud storage system
US20220237129A1 (en) Providing a secure communication channel between kernel and user mode components
US10489587B1 (en) Systems and methods for classifying files as specific types of malware
CN108595346B (en) Feature library file management method and device
CN114710263B (en) Key management method, key management device, key management apparatus, and storage medium
CN107908957B (en) Safe operation management method and system of intelligent terminal
CN112685762B (en) Image processing method and device with privacy protection function, electronic equipment and medium
CN105630416B (en) Disk method and device is kicked in a kind of cloud storage system
CN108521351B (en) Session flow statistical method, processor core, storage medium and electronic device
CN112817516A (en) Data read-write control method, device, equipment and storage medium
US8984336B1 (en) Systems and methods for performing first failure data captures
WO2017214856A1 (en) Mitigation of cross-vm covert channel
WO2021144978A1 (en) Attack estimation device, attack estimation method, and attack estimation program
US10416916B2 (en) Method and memory merging function for merging memory pages
CN110347517B (en) Dual-system communication method and computer-readable storage medium
CN105871780B (en) Session log sending method and device
CN109375966A (en) A kind of method, apparatus of node initializing, equipment and storage medium
CN113535412B (en) Method, apparatus and computer program product for tracking locks
CN112187668B (en) Queue management method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant