CN113595822A - Data packet management method, system and device - Google Patents

Data packet management method, system and device

Info

Publication number
CN113595822A
Authority
CN
China
Prior art keywords
flow table
cache
message
target
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110843163.6A
Other languages
Chinese (zh)
Other versions
CN113595822B (en)
Inventor
汪锐
周志雄
李登峰
刘彬
董杰
张明帧
梁丽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING HENGGUANG INFORMATION TECHNOLOGY CO LTD
Original Assignee
BEIJING HENGGUANG INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING HENGGUANG INFORMATION TECHNOLOGY CO LTD
Priority to CN202110843163.6A
Publication of CN113595822A
Application granted
Publication of CN113595822B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2228 Indexing structures
    • G06F16/2255 Hash tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching


Abstract

An embodiment of the invention provides a data packet management method, system, and device. The method comprises: performing a primary matching judgment between an acquired data packet message and the flow table entries in a cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, updating the target cache flow table entry according to the data packet message; if the first matching result does not comprise the target cache flow table entry, performing a secondary matching judgment between the data packet message and the flow table entries in a memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, updating the target memory flow table entry according to the data packet message. The method can count the traffic information of data packets efficiently and accurately, thereby improving the processing performance of network devices.

Description

Data packet management method, system and device
Technical Field
The present invention relates to the field of computer network technologies, and in particular, to a method, a system, and an apparatus for managing data packets.
Background
At present, a user accesses data in a memory by issuing a memory access command. From the moment the access command is sent to the memory until the data is returned there is a certain delay, and this delay is not fixed. If the time interval between two messages of the same data flow is smaller than the memory access delay, the traffic information read for the second message does not yet reflect the update made for the first message, so the traffic information is inaccurate. If instead the second message accesses the memory only after the first message's update completes, processing efficiency is reduced and the processing performance of the network device is low.
Disclosure of Invention
An object of the present invention is to provide a method for managing data packets, which can efficiently and accurately count the traffic information of the data packets, thereby improving the processing performance of the network device. It is another object of the present invention to provide a packet management system. It is still another object of the present invention to provide a packet management apparatus. It is a further object of this invention to provide a computer readable medium. It is a further object of the present invention to provide a computer apparatus.
In order to achieve the above object, an aspect of the present invention discloses a packet management method, including:
performing a primary matching judgment between the acquired data packet message and the flow table entries in a cache flow table to generate a first matching result;
if the first matching result comprises a target cache flow table entry, updating the target cache flow table entry according to the data packet message;
if the first matching result does not comprise the target cache flow table entry, performing a secondary matching judgment between the data packet message and the flow table entries in a memory flow table to generate a second matching result;
and if the second matching result comprises a target memory flow table entry, updating the target memory flow table entry according to the data packet message.
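The four steps above amount to a two-level lookup. The following sketch is an illustrative reconstruction, not code from the patent: the `FlowTable` class, its dict-backed layout, and the `packets`/`bytes` fields are all hypothetical.

```python
class FlowTable:
    """Minimal flow table keyed by a hashable flow key (hypothetical layout)."""
    def __init__(self):
        self.entries = {}  # flow key -> {"packets": int, "bytes": int}

    def lookup(self, key):
        return self.entries.get(key)

    def update(self, entry, length):
        entry["packets"] += 1       # count the message
        entry["bytes"] += length    # accumulate its byte count

def manage_packet(key, length, cache_table, memory_table):
    """Primary match against the cache flow table; on a miss, secondary
    match against the memory flow table."""
    entry = cache_table.lookup(key)         # primary matching judgment
    if entry is not None:                   # first result includes the target entry
        cache_table.update(entry, length)
        return "cache"
    entry = memory_table.lookup(key)        # secondary matching judgment
    if entry is not None:                   # second result includes the target entry
        memory_table.update(entry, length)
        return "memory"
    return "miss"                           # no target entry at either level
```

A cache hit thus never touches the memory flow table, which is the point of the scheme.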
Preferably, the data packet message includes a message key value, the cache flow table includes a plurality of flow table entries, and each flow table entry corresponds to one cache hash value;
performing the primary matching judgment between the acquired data packet message and the flow table entries in the cache flow table to generate the first matching result comprises the following steps:
performing hash calculation on the key value of the message to obtain a hash value of the message;
matching the message hash value with the cache hash value;
if a target cache hash value matched with the message hash value exists, determining a flow entry corresponding to the target cache hash value as a target cache flow entry, and generating a first matching result comprising the target cache flow entry;
and if the target cache hash value matched with the message hash value does not exist, generating a first matching result without the target cache flow table entry.
Preferably, the data packet message includes a message key value, the memory flow table includes a plurality of flow table entries, and each flow table entry corresponds to one memory hash value;
performing the secondary matching judgment between the data packet message and the flow table entries in the memory flow table to generate the second matching result comprises the following steps:
performing hash calculation on the key value of the message to obtain a hash value of the message;
matching the message hash value with the memory hash value;
if a target memory hash value matched with the message hash value exists, determining a flow entry corresponding to the target memory hash value as a target memory flow entry, and generating a second matching result comprising the target memory flow entry;
and if the target memory hash value matched with the message hash value does not exist, generating a second matching result without the target memory flow table entry.
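Both matching judgments follow the same pattern: hash the message key value, compare the message hash against each stored entry hash, and confirm a hash hit on the full key. A sketch under assumed names — the patent fixes neither the hash algorithm (truncated SHA-1 stands in here) nor the table layout:

```python
import hashlib

def message_hash(key_bytes):
    # Stand-in hash: the patent does not specify an algorithm, so a
    # 32-bit truncation of SHA-1 is used purely for illustration.
    return int.from_bytes(hashlib.sha1(key_bytes).digest()[:4], "big")

def match_flow_entry(key_bytes, table):
    """table: iterable of (entry_hash, entry_key, entry) triples."""
    h = message_hash(key_bytes)
    for entry_hash, entry_key, entry in table:
        # Compare hashes first, then the full key to rule out collisions.
        if entry_hash == h and entry_key == key_bytes:
            return entry   # matching result includes the target entry
    return None            # matching result does not include a target entry
```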
Preferably, the data packet message includes a service requirement, and the target cache flow entry includes a target cache address and a target cache queue identifier;
if the first matching result includes the target cache flow entry, updating the target cache flow entry according to the data packet message, including:
writing the data packet message and the target cache address into the target waiting queue indicated by the target cache queue identifier;
acquiring the data packet message and the target cache address at the head-of-queue position from the target waiting queue;
and updating the target cache flow table entry corresponding to the target cache address according to the service requirement.
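The wait queue in these steps serializes messages that target the same cache entry. A minimal sketch; the class name, queue layout, and method names are hypothetical, not from the patent:

```python
from collections import deque

class WaitQueue:
    """Per-entry wait queue: packet messages destined for the same target
    cache entry are handled strictly in arrival order, so a later message
    never reads state that an earlier message has not yet written."""
    def __init__(self, queue_id):
        self.queue_id = queue_id   # the target cache queue identifier
        self.items = deque()

    def enqueue(self, message, cache_address):
        # Write the packet message together with the target cache address.
        self.items.append((message, cache_address))

    def pop_head(self):
        # The service side takes the element at the head-of-queue position.
        return self.items.popleft() if self.items else None
```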
Preferably, the service requirement is one of creation, deletion, lookup, or modification;
updating the target cache flow table entry corresponding to the target cache address according to the service requirement comprises the following steps:
if the service requirement is creation, creating a new flow table entry in the cache flow table at the position corresponding to the target cache flow table entry;
if the service requirement is deletion, deleting the target cache flow table entry;
if the service requirement is lookup, querying the corresponding target cache flow table entry according to the target cache address;
and if the service requirement is modification, modifying the target cache flow table entry according to the acquired modification request.
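The four service requirements map naturally onto a small dispatcher. This sketch assumes a dict keyed by cache address; the function name, requirement strings, and entry fields are illustrative:

```python
def apply_service_requirement(table, address, requirement, modification=None):
    """Update the flow entry at `address` according to the service
    requirement carried in the packet message."""
    if requirement == "create":
        table[address] = {"packets": 0, "bytes": 0}   # new flow table entry
    elif requirement == "delete":
        table.pop(address, None)                      # remove the target entry
    elif requirement == "modify":
        table[address].update(modification or {})     # apply the modification
    # "lookup" (and every other branch) returns the current entry, if any.
    return table.get(address)
```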
Preferably, the memory flow table further includes an aging time field;
the method further comprises the following steps:
traversing the aging time fields of the memory flow table in the address order of the memory flow table, and judging, from the current time and each aging time field, whether the flow table entry corresponding to that aging time field has timed out;
if so, deleting the timed-out flow table entry;
and if not, repeating the step of traversing the aging time fields of the memory flow table in the address order of the memory flow table.
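The aging pass can be sketched as a traversal in address order that compares each entry's aging deadline with the current time. Field names and the deadline representation are assumptions, not from the patent:

```python
def age_memory_flow_table(table, now):
    """Delete every flow entry whose aging deadline has passed.
    `table`: memory address -> {"aging_time": deadline, ...}."""
    for address in sorted(table):              # traverse in address order
        if now >= table[address]["aging_time"]:
            del table[address]                 # timed-out entry is removed
```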
The invention also discloses a data packet management system, comprising a message management subsystem, a cache subsystem, and a memory flow table subsystem;
the message management subsystem is used for acquiring a data packet, extracting a data packet message from the data packet, and sending the data packet message to the cache subsystem;
the cache subsystem is used for carrying out primary matching judgment on the data packet message and flow table items in the cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, updating the target cache flow table entry according to the data packet message; if the first matching result does not comprise the target cache flow table entry, sending the data packet message to a memory flow table subsystem;
the memory flow table subsystem is used for carrying out secondary matching judgment on the data packet message and flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises the target memory flow table entry, updating the target memory flow table entry according to the data packet message.
Preferably, the cache subsystem comprises a cache module;
the cache module is used for carrying out Hash calculation on the key value of the message to obtain a Hash value of the message; matching the message hash value with the cache hash value; if a target cache hash value matched with the message hash value exists, determining a flow entry corresponding to the target cache hash value as a target cache flow entry, and generating a first matching result comprising the target cache flow entry; and if the target cache hash value matched with the message hash value does not exist, generating a first matching result without the target cache flow table entry.
Preferably, the data packet message includes a service requirement, and the target cache flow entry includes a target cache address and a target cache queue identifier; the cache subsystem also comprises a waiting queue module and a service processing module;
the waiting queue module is used for storing a plurality of waiting queues, each waiting queue having a corresponding cache queue identifier;
the cache module is used for writing the data packet message and the target cache address into the target waiting queue indicated by the target cache queue identifier;
the service processing module is used for acquiring a data packet message and a target cache address at the head of the queue from the target waiting queue; and updating the target cache flow table entry corresponding to the target cache address according to the service requirement.
The invention also discloses a data packet management device, comprising:
the cache judging unit is used for carrying out primary matching judgment on the acquired data packet message and flow table items in the cache flow table to generate a first matching result;
the cache updating unit is used for updating the target cache flow table entry according to the data packet message if the first matching result comprises the target cache flow table entry;
the memory judging unit is used for performing a secondary matching judgment between the data packet message and the flow table entries in the memory flow table if the first matching result does not comprise the target cache flow table entry, and generating a second matching result;
and the memory updating unit is used for updating the target memory flow table entry according to the data packet message if the second matching result comprises the target memory flow table entry.
The invention also discloses a computer-readable medium, on which a computer program is stored which, when executed by a processor, implements a method as described above.
The invention also discloses a computer device comprising a memory for storing information comprising program instructions and a processor for controlling the execution of the program instructions, the processor implementing the method as described above when executing the program.
In the method, a primary matching judgment is performed between the acquired data packet message and the flow table entries in the cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, the target cache flow table entry is updated according to the data packet message; if it does not, a secondary matching judgment is performed between the data packet message and the flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, the target memory flow table entry is updated according to the data packet message. In this way the traffic information of data packets can be counted efficiently and accurately, improving the processing performance of network devices.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be derived from them without creative effort.
Fig. 1 is a schematic structural diagram of a packet management system according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for managing a data packet according to an embodiment of the present invention;
fig. 3 is a flowchart of another data packet management method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of matching a HASH chain table according to an embodiment of the present invention;
fig. 5 is a schematic overall flow chart of packet management according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a TCP packet state change in a process of processing a data packet by a service processing module according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a packet management device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments derived by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
To facilitate understanding of the technical solutions provided in the present application, relevant background is first described. With the continuous development of microelectronic and computer technology and the growing demand for Internet applications, the scale of computer networks is expanding rapidly, and new network applications keep emerging. Computer networks influence people's lives in many ways, and network security has become the most basic requirement of Internet applications. Managing the massive traffic in the network requires grasping detailed traffic information in real time, comprehensively analyzing the traffic types and volumes in the network, and adjusting network devices in real time, so as to maximize the user experience.
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is among the most important and most widely used network protocols. A bidirectional application connection can be uniquely identified by the five-tuple of protocol number, source IP address, destination IP address, source port address, and destination port address. The data packet management method provided by the invention is explained below using TCP/IP as an example in combination with a typical application scenario; however, the method is not limited to TCP/IP and can also be applied to network environments using other protocols.
Messages with the same quintuple are called a flow, messages from a client to a server are called forward messages, and corresponding data traffic is forward traffic; the message from the server to the client is called a reverse message, and the corresponding data flow is a reverse flow. The source IP address and the destination IP address in the forward packet and the reverse packet are opposite, and the source port and the destination port are also opposite.
At present, a data flow in which the time interval between two messages is smaller than the memory access delay is called an elephant flow. When an elephant flow exists in the network, the traffic information read for the second message does not yet reflect the update made for the first message, so the traffic information is inaccurate; here, the traffic information includes the message counts and byte counts of the forward and reverse traffic. If instead the second message accesses the memory only after the first message's update completes, processing efficiency is reduced and the processing performance of the network device is low. To solve this technical problem, the invention provides a data packet management method in which a cache and a memory cooperate to manage data packets. Packet management means managing the flow table in memory through the five-tuple information in the data packet, including creating, deleting, modifying, and querying flow table entries. When an elephant flow occurs in the network, the second message can be handled directly in the cache without accessing the memory, which relieves memory access pressure, allows the traffic information of data packets to be counted efficiently and accurately, and improves the processing performance of the network device.
Fig. 1 is a schematic structural diagram of a packet management system according to an embodiment of the present invention, and as shown in fig. 1, the packet management system includes a message management subsystem 100, a cache subsystem 200, and a memory flow table subsystem 300. The message management subsystem 100 is connected to the cache subsystem 200, and the cache subsystem 200 is connected to the memory flow table subsystem 300.
The message management subsystem 100 is configured to acquire a data packet, extract a data packet message from it, and send the message to the cache subsystem 200. Specifically, the message management subsystem 100 receives data packets sent by an upper-level module, namely a data packet parsing module, which parses network data packets and forwards them to the message management subsystem 100. After receiving a data packet, the message management subsystem 100 stores it and extracts its five-tuple information and service requirement; it then generates a data packet message from the five-tuple information and the service requirement and sends the message to the cache subsystem 200. In the embodiment of the invention, the five-tuple information comprises the protocol number, source IP address, destination IP address, source port address, and destination port address; the service requirement includes, but is not limited to, TCP state, length, memory address, and request content.
The cache subsystem 200 is configured to perform a primary matching judgment between the data packet message and the flow table entries in the cache flow table and generate a first matching result; if the first matching result comprises a target cache flow table entry, update the target cache flow table entry according to the data packet message; and if the first matching result does not comprise the target cache flow table entry, send the data packet message to the memory flow table subsystem. Specifically, the five-tuple information in the data packet message serves as the key (KEY) value for matching against the flow table entries of the cache flow table.
In the embodiment of the present invention, the cache flow table in the cache subsystem is a HASH linked list, and a cache flow table entry includes, but is not limited to, five-tuple information, a hash value, a cache address, a state, a wait queue pointer, and a reference count. The hash value is computed from the five-tuple information by a hash algorithm, with the input order of the five-tuple fields chosen as follows. The source IP address and destination IP address are compared first: if the source IP address is larger, the input order is source IP address, destination IP address, protocol number, source port address, destination port address; if the source IP address is smaller, the input order is destination IP address, source IP address, protocol number, destination port address, source port address. If the two IP addresses are equal, the source and destination port addresses are compared: if the source port address is larger, the input order is source IP address, destination IP address, protocol number, source port address, destination port address; if the source port address is smaller, the input order is destination IP address, source IP address, protocol number, destination port address, source port address. Fixing the input order in this way ensures that the uplink and downlink messages of the same flow produce the same hash value, which facilitates the subsequent update of the flow table entry.
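The input-ordering rule can be written directly as a normalization function: both directions of a flow then produce the same tuple and hence the same hash value. Only the field ordering comes from the text; the function name and tuple representation are illustrative:

```python
def canonical_five_tuple(proto, src_ip, dst_ip, src_port, dst_port):
    """Order the five-tuple so uplink and downlink messages of one flow
    hash identically: larger IP address first; on equal IP addresses,
    the port addresses break the tie."""
    if src_ip > dst_ip:
        return (src_ip, dst_ip, proto, src_port, dst_port)
    if src_ip < dst_ip:
        return (dst_ip, src_ip, proto, dst_port, src_port)
    # Equal IP addresses: compare the port addresses instead.
    if src_port > dst_port:
        return (src_ip, dst_ip, proto, src_port, dst_port)
    return (dst_ip, src_ip, proto, dst_port, src_port)
```

Feeding the canonical tuple into any hash function yields one hash value per bidirectional flow.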
The waiting queue pointer is a pointer pointing to a waiting queue corresponding to the flow table entry, and one flow table entry corresponds to one or zero waiting queues; the reference count indicates the number of waiting messages in the waiting queue.
In the embodiment of the invention, the flow table entry of the cache flow table comprises a synchronized state and an unsynchronized state, wherein the synchronized state indicates that the flow table entry of the cache flow table is synchronized with the flow table entry of the memory flow table through a message, and the unsynchronized state indicates that the flow table entry of the cache flow table is not synchronized with the flow table entry of the memory flow table through the message.
The memory flow table subsystem 300 is configured to perform secondary matching judgment on the data packet message and a flow table entry in the memory flow table, and generate a second matching result; and if the second matching result comprises the target memory flow table entry, updating the target memory flow table entry according to the data packet message. Specifically, the five-tuple information in the packet message is used as a KEY value for matching with the flow entry of the memory flow table.
In the embodiment of the present invention, the memory flow table in the memory flow table subsystem 300 is a HASH linked list, and hash collisions are resolved by the chained address method; a memory flow table entry includes, but is not limited to, five-tuple information, a state, a hash value, and a memory address. As an alternative, the memory flow table subsystem 300 comprises a HASH table module and a flow table aging management module. The HASH table module updates the flow table entries of the memory flow table according to the service information, namely creating, deleting, modifying, and querying entries. The flow table aging management module performs aging processing on timed-out flow table entries of the memory flow table.
In the embodiment of the invention, the memory space is divided into a linear space and a shared space according to a specified proportion; as an alternative, the linear space occupies 75% of the memory space and the shared space occupies 25%. The linear space stores the HASH linked list, and the shared space provides room for extending the HASH linked list.
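The chained-address scheme with a linear bucket region plus a shared overflow region can be sketched as follows. Only the linear/shared split comes from the text; the class, bucket count, and node layout are illustrative assumptions:

```python
class ChainedHashFlowTable:
    """Bucket heads live in the linear space; extra chain nodes created
    on collision are accounted to the shared space."""
    def __init__(self, n_buckets):
        self.buckets = [None] * n_buckets   # linear space (e.g. 75% of memory)
        self.shared = []                    # shared space (e.g. 25% of memory)

    def insert(self, h, key, entry):
        idx = h % len(self.buckets)
        node = {"hash": h, "key": key, "entry": entry, "next": self.buckets[idx]}
        if self.buckets[idx] is not None:
            self.shared.append(node)        # chain extension uses shared space
        self.buckets[idx] = node            # new node becomes the chain head

    def lookup(self, h, key):
        node = self.buckets[h % len(self.buckets)]
        while node is not None:             # walk the collision chain
            if node["hash"] == h and node["key"] == key:
                return node["entry"]
            node = node["next"]
        return None
```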
In an embodiment of the present invention, cache subsystem 200 includes a cache module 210.
The cache module 210 is configured to perform hash calculation on the key value of the packet to obtain a hash value of the packet; matching the message hash value with the cache hash value; if a target cache hash value matched with the message hash value exists, determining a flow entry corresponding to the target cache hash value as a target cache flow entry, and generating a first matching result comprising the target cache flow entry; and if the target cache hash value matched with the message hash value does not exist, generating a first matching result without the target cache flow table entry.
In the embodiment of the invention, the data packet message comprises service requirements, and the target cache flow table entry comprises a target cache address and a target cache queue identifier. The cache subsystem 200 also includes a wait queue module 220 and a traffic processing module 230. The cache module 210 and the waiting queue module 220 are connected through a first communication interface; the cache module 210 is connected with the service processing module 230 through a second communication interface; the waiting queue module 220 is connected with the service processing module 230 through a third communication interface; the cache module 210 is connected to the memory flow table subsystem 300 through a fourth communication interface and a fifth communication interface, where the fourth communication interface is used for lookup access initiated by the cache module 210 to the memory flow table subsystem 300, and the fifth communication interface is used for update processing initiated by the cache module 210 to the memory flow table subsystem. The embodiment of the present invention does not limit the type of the set communication interface.
In the embodiment of the present invention, different communication interfaces are set for the lookup access and the maintenance processing of the memory flow table subsystem 300, so that the low-performance maintenance processing does not affect the high-performance lookup access.
The wait queue module 220 is configured to store a plurality of wait queues, each wait queue having a corresponding cache queue identifier. Specifically, the waiting queue module 220 sends the data packet message at the head-of-queue position in the waiting queue, together with the target cache address, to the service processing module 230 through the third communication interface.
The cache module 210 is configured to write the data packet message and the target cache address into the target wait queue indicated by the target cache queue identifier. Specifically, the cache module 210 performs this write through the first communication interface.
The cache module 210 is further configured to send the synchronization completion message to the waiting queue in the waiting queue module 220 through the first communication interface, so that the waiting queue pushes the data packet message to the service processing module 230.
The cache module 210 is further configured to perform lookup and access on a memory flow table in the memory flow table subsystem 300 through the fourth communication interface, and start aging processing on a memory flow entry of the memory flow table in the memory flow table subsystem 300 through the fifth communication interface or perform update processing corresponding to the cache flow entry on the memory flow entry.
The service processing module 230 is configured to obtain a data packet message at a head of queue position and a target cache address from the target waiting queue; and updating the target cache flow table entry corresponding to the target cache address according to the service requirement. Specifically, the service processing module 230 accesses the cache module 210 through the second communication interface, and updates the target cache flow entry corresponding to the target cache address according to the service requirement.
In the technical scheme provided by the embodiment of the invention, a first matching judgment is performed between the acquired data packet message and the flow table entries in the cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, the target cache flow table entry is updated according to the data packet message; if it does not, a secondary matching judgment is performed between the data packet message and the flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, the target memory flow table entry is updated according to the data packet message. In this way the flow information of data packets can be counted efficiently and accurately, thereby improving the processing performance of the network device.
It should be noted that the packet management system shown in fig. 1 is also applicable to the packet management method shown in fig. 2 or fig. 3, and is not described herein again.
The following describes an implementation process of the packet management method according to an embodiment of the present invention, taking a packet management apparatus as an execution subject. It can be understood that the executing entity of the packet management method provided by the embodiment of the present invention includes, but is not limited to, a packet management device.
Fig. 2 is a flowchart of a data packet management method according to an embodiment of the present invention, and as shown in fig. 2, the method includes:
Step 101, perform a first matching judgment between the acquired data packet message and the flow table entries in the cache flow table to generate a first matching result.
In the embodiment of the invention, the data packet message comprises a message key value, the cache flow table comprises a plurality of flow table entries, and each flow table entry corresponds to one cache hash value.
Step 102, if the first matching result comprises a target cache flow table entry, update the target cache flow table entry according to the data packet message.
Step 103, if the first matching result does not comprise the target cache flow table entry, perform a secondary matching judgment between the data packet message and the flow table entries in the memory flow table to generate a second matching result.
In the embodiment of the present invention, the memory flow table includes a plurality of flow table entries, and each flow table entry corresponds to one memory hash value.
Step 104, if the second matching result comprises the target memory flow table entry, update the target memory flow table entry according to the data packet message.
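The two-level lookup of steps 101 to 104 can be sketched as follows. This is a minimal illustration, not the patented implementation: both flow tables are modeled as plain Python dicts keyed by the message key value, and the "update" is just a packet counter; the real entry layout and update logic are assumptions.

```python
def manage_packet(packet_key, cache_flow_table, memory_flow_table):
    """Two-level flow table lookup: cache first, then memory."""
    # Step 101: first matching judgment against the cache flow table.
    entry = cache_flow_table.get(packet_key)
    if entry is not None:
        entry["packets"] += 1            # step 102: update target cache entry
        return "cache-hit"
    # Step 103: secondary matching judgment against the memory flow table.
    entry = memory_flow_table.get(packet_key)
    if entry is not None:
        entry["packets"] += 1            # step 104: update target memory entry
        return "memory-hit"
    return "miss"                        # neither table matched: search failure
```

The cache dict stands in for the high-speed cache flow table and the memory dict for the larger memory flow table; a miss on both corresponds to the search failure message of step 208.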
In the technical scheme provided by the embodiment of the invention, a first matching judgment is performed between the acquired data packet message and the flow table entries in the cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, the target cache flow table entry is updated according to the data packet message; if it does not, a secondary matching judgment is performed between the data packet message and the flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, the target memory flow table entry is updated according to the data packet message. In this way the flow information of data packets can be counted efficiently and accurately, thereby improving the processing performance of the network device.
Fig. 3 is a flowchart of another data packet management method according to an embodiment of the present invention, as shown in fig. 3, the method includes:
step 201, performing hash calculation on the key value of the packet to obtain a hash value of the packet.
In the embodiment of the invention, each step is executed by the data packet management device.
In the embodiment of the invention, a message key value is obtained from a data packet message, and the message key value is quintuple information, namely: a source IP address, a destination IP address, a protocol number, a source port address, and a destination port address. Specifically, a hash value of the packet is calculated according to the quintuple information through a hash algorithm.
In the embodiment of the invention, in order to ensure that the uplink and downlink messages of the same flow yield the same hash value, the input order of the five-tuple information fed to the hash algorithm must be fixed. Specifically, the source IP address and the destination IP address are compared. If the source IP address is larger than the destination IP address, the input order of the five-tuple information is: source IP address, destination IP address, protocol number, source port address, destination port address. If the source IP address is smaller than the destination IP address, the input order is: destination IP address, source IP address, protocol number, destination port address, source port address. If the source IP address is the same as the destination IP address, the source port address and the destination port address are compared: if the source port address is larger than the destination port address, the input order is source IP address, destination IP address, protocol number, source port address, destination port address; if the source port address is smaller than the destination port address, the input order is destination IP address, source IP address, protocol number, destination port address, source port address. Fixing the input order in this way guarantees that both directions of the same flow produce the same hash value, which facilitates the subsequent update processing of the flow table entry.
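The ordering rule above can be sketched as follows. The string comparison of dotted-quad addresses and the use of MD5 are illustrative assumptions — the embodiment names neither a concrete hash algorithm nor an address representation; any total order on addresses and any hash function preserve the direction-independence property.

```python
import hashlib

def canonical_five_tuple(src_ip, dst_ip, proto, src_port, dst_port):
    """Order the five-tuple so both directions of a flow hash identically.

    Compare IP addresses first; fall back to the port comparison only when
    the two IP addresses are equal, as described in the text.
    """
    if src_ip > dst_ip or (src_ip == dst_ip and src_port > dst_port):
        return (src_ip, dst_ip, proto, src_port, dst_port)
    return (dst_ip, src_ip, proto, dst_port, src_port)

def flow_hash(src_ip, dst_ip, proto, src_port, dst_port):
    """Message hash value computed over the canonically ordered five-tuple."""
    ordered = canonical_five_tuple(src_ip, dst_ip, proto, src_port, dst_port)
    data = "|".join(str(field) for field in ordered)
    return hashlib.md5(data.encode()).hexdigest()
```

Because both directions of a flow normalize to the same tuple before hashing, the uplink and downlink messages land on the same flow table entry.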
Step 202, matching the message hash value with a cache hash value, and executing step 203 if a target cache hash value matched with the message hash value exists; if there is no target cache hash value matching the message hash value, go to step 204.
In the embodiment of the present invention, the cache flow table includes a plurality of flow table entries, and each flow table entry corresponds to one cache hash value. Specifically, the message hash value is matched with the cache hash value, whether a target cache hash value matched with the message hash value exists is judged, if yes, it is indicated that the data packet message hits a flow entry of the cache flow table, the cache hash value corresponding to the hit flow entry is determined as the target cache hash value, and step 203 is executed; if not, it indicates that the packet message misses the flow entry of the cache flow table, and step 204 is executed continuously.
Step 203, determining the flow entry corresponding to the target cache hash value as the target cache flow entry, generating a first matching result including the target cache flow entry, and continuing to execute step 206.
Specifically, the hit flow entry is determined as the target cache flow entry, a first matching result is generated, the first matching result is a hit, the first matching result includes the target cache flow entry, and step 206 is continuously performed.
Further, the data packet message, the target cache address and the allocated target waiting queue identifier are sent to the waiting queue module, so that the waiting queue module writes the data packet message and the target cache address into the target waiting queue indicated by the target waiting queue identifier, and adds 1 to the reference count corresponding to the target cache flow entry.
Step 204, generating a first matching result not including the target cache flow entry.
Specifically, if the data packet message misses the cache flow table, a first matching result is generated, where the first matching result is a miss, and the first matching result does not include the target cache flow table entry.
Further, a cache flow table entry is newly built in the cache flow table, a waiting queue in the waiting queue module is allocated to the newly built cache flow table entry, and a waiting queue identifier of the allocated waiting queue is bound with the newly built cache flow table entry; and sending the data packet message, the sequence number of the newly-built cache flow table entry and the bound wait queue identifier to the wait queue module, so that the wait queue module writes the data packet message and the sequence number of the newly-built cache flow table entry into the wait queue indicated by the bound wait queue identifier, adds 1 to the reference count corresponding to the newly-built cache flow table entry, continues to perform further matching on the flow table entry of the memory flow table, and continues to perform step 205.
Step 205, perform a secondary matching judgment between the data packet message and the flow table entries in the memory flow table to generate a second matching result; if the second matching result comprises the target memory flow table entry, execute step 207; if the second matching result does not comprise the target memory flow table entry, execute step 208.
In the embodiment of the invention, the data packet message comprises a message key value, the memory flow table comprises a plurality of flow table entries, and each flow table entry corresponds to one memory hash value. Specifically, if the second matching result includes the target memory flow entry, it indicates that the packet message hits the flow entry of the memory flow table, step 207 is executed; if the second matching result does not include the target memory flow entry, it indicates that the packet message misses the flow entry of the memory flow table, step 208 is executed.
In the embodiment of the present invention, step 205 specifically includes:
Step 2051, perform hash calculation on the message key value to obtain the message hash value.
Step 2052, matching the message hash value with the memory hash value, and if a target memory hash value matched with the message hash value exists, executing step 2053; if there is no target memory hash value matching the message hash value, go to step 2054.
Step 2053, determining the flow entry corresponding to the target memory hash value as a target memory flow entry, and generating a second matching result including the target memory flow entry.
Step 2054, a second matching result not including the target memory flow entry is generated.
In the embodiment of the present invention, steps 2051 to 2054 are the same as the matching in steps 201 to 204, except that steps 201 to 204 are directed to matching the packet message and the cache flow table, steps 2051 to 2054 are directed to matching the packet message and the memory flow table, and the specific matching process refers to steps 201 to 204, which is not described in detail herein.
Taking a memory flow table implemented as a HASH chain table as an example, fig. 4 is a schematic flow chart of matching against the HASH chain table provided in the embodiment of the present invention. As shown in fig. 4, to match the HASH chain table, HASH calculation is first performed on the message key value in the data packet message to obtain the message HASH value; the message HASH value is then matched against the memory HASH values of the HASH chain table in the linear memory space. If a target memory HASH value is matched, a second matching result is output: the result is a hit and comprises the target memory flow table entry corresponding to the target memory HASH value. If no target memory HASH value is matched, it is judged whether a next node of the HASH chain table exists; if so, matching continues with the memory HASH value of the next node of the HASH chain table; if not, a second matching result is output: the result is a miss and does not comprise the target memory flow table entry.
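A minimal sketch of the chain traversal of fig. 4, assuming a singly linked list whose nodes each carry a memory hash value and its flow table entry (the node layout is a hypothetical illustration, not the patented data structure):

```python
class HashNode:
    """One node of the HASH chain table in the linear memory space."""
    def __init__(self, memory_hash, entry, nxt=None):
        self.memory_hash = memory_hash   # memory hash value stored at this node
        self.entry = entry               # the memory flow table entry
        self.next = nxt                  # next node of the chain, or None

def match_hash_chain(head, message_hash):
    """Walk the chain; return the hit entry, or None on a miss."""
    node = head
    while node is not None:
        if node.memory_hash == message_hash:
            return node.entry            # second matching result: hit
        node = node.next                 # continue with the next node
    return None                          # chain exhausted: miss
```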
Step 206, update the target cache flow table entry according to the data packet message, and end the process.
In the embodiment of the present invention, step 206 specifically includes:
step 2061, writing the data packet message and the target cache address into the target waiting queue indicated by the target cache queue identification.
In the embodiment of the invention, the data packet message comprises a message key value and a service requirement, and the data packet message and the target cache address are written into the target waiting queue indicated by the target cache queue identification so as to push the data packet message to the service processing module for processing in sequence.
Step 2062, the data packet message and the target cache address at the head of the queue are obtained from the target waiting queue.
In the embodiment of the invention, the data packet message and the target cache address at the head-of-queue position are obtained from the target waiting queue on a first-in-first-out basis and pushed to the service processing module for processing, until the target waiting queue is empty. When the target waiting queue becomes empty, the empty queue is released and the target cache queue identifier of the target cache flow table entry is cleared. If the waiting queue module subsequently receives a data packet message and target cache address addressed to the released queue, it pushes them directly to the service processing module for processing.
Step 2063, updating the target cache flow table entry corresponding to the target cache address according to the service requirement.
In the embodiment of the invention, the service requirement comprises new creation, deletion, search or modification.
Specifically, if the service requirement is newly established, the service processing module establishes a flow table entry for the cache flow table corresponding to the target cache flow table entry; if the service requirement is deletion, the service processing module deletes the target cache flow table entry; if the service requirement is searching, the service processing module queries a corresponding target cache flow table entry according to the target cache address; and if the service requirement is modification, the service processing module modifies the target cache flow table item according to the acquired modification requirement.
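The four-way dispatch just described can be sketched as follows, with the cache flow table modeled as a dict indexed by cache address. The function name, the string values of the service requirement, and the entry layout are illustrative assumptions:

```python
def apply_service_requirement(flow_table, address, requirement, modification=None):
    """Dispatch on the service requirement: new, delete, search, or modify."""
    if requirement == "new":
        flow_table[address] = {}                 # create a new flow table entry
    elif requirement == "delete":
        flow_table.pop(address, None)            # delete the target entry
    elif requirement == "search":
        return flow_table.get(address)           # query by the target address
    elif requirement == "modify":
        flow_table[address].update(modification or {})  # apply modification
    return None
```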
And when the service processing module finishes updating the target cache flow table entry, subtracting 1 from the reference count corresponding to the target cache flow table entry. When the reference count is 0, the cache module starts aging processing on the target cache flow table entry, correspondingly updates the memory flow table, and modifies the state of the target cache flow table entry into invalid.
Further, when a cache flow table entry is updated, the corresponding memory flow table entry needs to be updated accordingly. If a cache flow table entry is updated frequently while its corresponding memory flow table entry goes unsynchronized for a long time, the memory entry may appear stale. Therefore, a corresponding update time threshold is set for the cache flow table: if the time since a cache flow table entry last synchronized its corresponding memory flow table entry exceeds the update time threshold, the cache module updates the memory flow table entry accordingly. This prevents a memory flow table entry that is in fact still active from going un-updated for so long that the flow table aging management module initiates aging processing on it.
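The threshold-driven write-back might look like the following sketch. The field names, the explicit `now` parameter, and the policy of copying the whole statistics block are assumptions for illustration only:

```python
def maybe_writeback(cache_entry, memory_table, update_threshold, now):
    """Sync a cache flow entry to memory when its last write-back is older
    than the update time threshold, so the memory copy is not aged out."""
    if now - cache_entry["last_writeback"] > update_threshold:
        memory_table[cache_entry["key"]] = dict(cache_entry["stats"])
        cache_entry["last_writeback"] = now
        return True                # memory flow table entry refreshed
    return False                   # still within the threshold; skip
```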
Step 207, update the target memory flow table entry according to the data packet message, and end the process.
Specifically, the target memory flow table entry corresponding to the target memory address is updated according to the service requirement, where the service requirement comprises new creation, deletion, search or modification. If the service requirement is newly established, the service processing module creates a new flow table entry in the memory flow table corresponding to the target memory flow table entry; if the service requirement is deletion, the service processing module deletes the target memory flow table entry; if the service requirement is searching, the service processing module queries the corresponding target memory flow table entry according to the target memory address; and if the service requirement is modification, the service processing module modifies the target memory flow table entry according to the acquired modification requirement.
Further, when the memory flow table entry is updated, the updated memory flow table entry and the corresponding cache flow table entry need to be updated correspondingly.
Further, when the memory flow entry is updated and the current time is updated to the update time in the aging time field, the updated memory flow entry and the corresponding cache flow entry need to be updated correspondingly. And after the cache flow table entry is updated correspondingly, the cache module sends a synchronization completion message to the target waiting queue and adds a hit identifier for the cache flow table entry.
Further, the memory flow table further includes an aging time field, where the aging time field includes an update time and a timeout time threshold corresponding to each memory flow table entry. The updating time is the time for updating the memory flow entry, and the timeout threshold is set according to the actual requirements of different memory flow entries, so that the purpose of quickly deleting the timeout flow entry can be achieved.
The method further comprises the following steps: traverse the aging time fields of the memory flow table in the address order of the memory flow table, and judge, from the current time and each aging time field, whether the flow table entry corresponding to that aging time field has timed out. If it has, the flow table entry is no longer active: the timed-out flow table entry is deleted and its state is set to invalid. If it has not, the flow table entry is still active, and the step of traversing the aging time fields in the address order of the memory flow table is repeated. The timeout judgment subtracts the update time in the aging time field from the current time to obtain a time difference; if the time difference is greater than the timeout time threshold in the aging time field, the flow table entry corresponding to that aging time field has timed out; otherwise it has not.
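The aging traversal can be sketched as follows, assuming each entry's aging time field is a pair `(update_time, timeout_threshold)` and the table is a dict whose keys stand in for memory addresses (both assumptions of this illustration):

```python
def age_memory_flow_table(entries, now):
    """Traverse aging time fields in address order; delete timed-out entries."""
    expired = []
    for addr in sorted(entries):                 # address order of the table
        update_time, timeout = entries[addr]["aging"]
        if now - update_time > timeout:          # time difference vs threshold
            entries[addr]["state"] = "invalid"   # entry is no longer active
            expired.append(addr)
    for addr in expired:
        del entries[addr]                        # remove the timed-out entries
    return expired
```

Per-entry timeout thresholds let short-lived flows be reclaimed quickly while long-lived flows keep their entries, matching the purpose stated above.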
As an alternative, an aging table may be set in the memory, and is used to record the update time and the timeout time threshold of the memory flow entries, and traverse each memory flow entry according to the address sequence of the aging table.
Because the HASH collision problem is solved by the chain-address (chaining) method, creating or deleting an entry modifies the HASH chain table and may cause a transient access anomaly. To avoid simultaneously modifying, creating or deleting different flow table entries that correspond to the same hash value, the update operations on flow table entries with the same hash value are written into a preset queue; when one update operation completes, the next update operation is taken out of the preset queue to update the flow table entry. That is, update operations on flow table entries corresponding to the same hash value are serialized through a preset queue, while update operations on flow table entries corresponding to different hash values proceed in parallel through a plurality of different waiting queues. This avoids the access anomaly while improving processing efficiency and thus the performance of the network device.
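The queueing discipline described above might be sketched as follows. This single-threaded illustration drains each per-hash queue inline; a real device would run the drains concurrently (for example, one worker per waiting queue) so that different hash values genuinely proceed in parallel. The class and method names are assumptions.

```python
from collections import defaultdict, deque

class SerializedFlowUpdater:
    """Queue update operations per hash value and run them one at a time."""

    def __init__(self):
        self.queues = defaultdict(deque)   # one preset queue per hash value
        self.log = []                      # completed (hash, result) pairs

    def submit(self, hash_value, operation):
        """Enqueue an update; operations sharing a hash value run in order."""
        q = self.queues[hash_value]
        q.append(operation)
        # Drain: when one update completes, take out the next from the queue.
        while q:
            op = q.popleft()
            self.log.append((hash_value, op()))
```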
Step 208, output a search failure message, and end the process.
In the embodiment of the invention, if the data packet message hits neither a flow table entry of the cache flow table nor a flow table entry of the memory flow table, a search failure message is output.
Further, if the second matching result is a miss, the state of the newly-built cache flow entry is set to be invalid.
Further, if the second matching result is a miss, the cache module sends a synchronization completion message to the allocated wait queue, and adds a miss identifier to the newly-created cache flow entry.
Fig. 5 is a schematic overall flow chart of packet management according to an embodiment of the present invention, and as shown in fig. 5, a packet message is sent to a cache module; the cache module judges whether the data packet message hits a flow table item of a cache flow table, if so, the data packet message and the target cache address are written into the waiting queue module, and the data packet message and the target cache address are pushed to the service processing module from a target waiting queue in the waiting queue module; if not, a cache flow table entry is newly built in the cache module, secondary matching judgment is carried out on the data packet message and the flow table entry in the memory flow table, a second matching result is returned to the cache module, if the flow table entry in the memory flow table is matched, the target memory flow table entry is updated, the corresponding cache flow table entry is correspondingly updated, and the data packet message and the target memory address are sent to the service processing module; if the flow table entry in the memory flow table is not matched, a search failure message is output, and the cache module sends a synchronization completion message to the waiting queue module and correspondingly modifies the state of the cached flow table entry.
Taking a data packet message comprising a TCP message as an example, fig. 6 is a schematic flow chart of the TCP message state changes while the service processing module processes the data packet. As shown in fig. 6, when a connection-establishing TCP message (SYN) is received, the state is the unidirectional SYN state. When a reverse SYN acknowledgement message (ACK) is received, the state of the TCP message is the reverse access state, that is, the data flow direction is from the server to the client. When a forward SYN acknowledgement message (ACK) is received, the state of the TCP message is the bidirectional access state (bidirectional EST-STA), that is, data flows both from the server to the client and from the client to the server, indicating that the bidirectional connection between the client and the server has been established successfully. After the bidirectional connection is established, if a forward connection-close message (forward FIN) is received, the state of the TCP message becomes the waiting-for-forward-close state (CS-FIN-WAIT); when a forward acknowledgement message (ACK) is received, the state becomes the forward-close-acknowledged state (CS-CLOSE), indicating that the connection from the client to the server is closed. Likewise, after the bidirectional connection is established, if a reverse connection-close message (reverse FIN) is received, the state becomes the waiting-for-reverse-close state (SC-FIN-WAIT); when a reverse acknowledgement message (ACK) is received, the state becomes the reverse-close-acknowledged state (SC-CLOSE), indicating that the connection from the server to the client is closed.
When both the connection from the client to the server and the connection from the server to the client are closed, the TCP message is deleted, and the service processing module finishes processing the TCP message. It is worth noting that, for TCP messages, the data flow direction is only from the client to the server (forward direction) or from the server to the client (reverse direction).
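The transitions of fig. 6 can be written as a small transition table. This is a simplified sketch: the state and event names are adapted from the text, and real connection tracking must also handle cases the figure omits (RST, retransmission, simultaneous close).

```python
# State transition table for the TCP flow states described in fig. 6.
TCP_TRANSITIONS = {
    ("CLOSED", "fwd_SYN"): "SYN",            # unidirectional SYN state
    ("SYN", "rev_SYN_ACK"): "REV-ACCESS",    # reverse access: server -> client
    ("REV-ACCESS", "fwd_ACK"): "EST-STA",    # bidirectional access established
    ("EST-STA", "fwd_FIN"): "CS-FIN-WAIT",   # waiting for forward close
    ("CS-FIN-WAIT", "fwd_ACK"): "CS-CLOSE",  # client -> server closed
    ("EST-STA", "rev_FIN"): "SC-FIN-WAIT",   # waiting for reverse close
    ("SC-FIN-WAIT", "rev_ACK"): "SC-CLOSE",  # server -> client closed
}

def next_state(state, event):
    """Advance the flow state; unknown events leave the state unchanged."""
    return TCP_TRANSITIONS.get((state, event), state)
```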
In the embodiment of the invention, data packet management is performed with the cache and the memory cooperating. The high-speed access capability of the cache and the large-scale storage capability of the memory are fully utilized, improving the processing performance of the network device, and on the basis of this higher processing performance the flow information of data packets can be counted accurately, providing reliable support for subsequent flow analysis. By matching the data packet message first against the cache flow table, and matching against the memory flow table only when no target cache flow table entry is matched, the access pressure on the memory can be reduced when an elephant flow occurs. The statistics of the flow information of a data packet can be maintained by matching the message key value with preset uplink and downlink message key values to determine whether the message is an uplink message or a downlink message.
In the technical scheme of the data packet management method provided by the embodiment of the invention, a first matching judgment is performed between the acquired data packet message and the flow table entries in the cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, the target cache flow table entry is updated according to the data packet message; if it does not, a secondary matching judgment is performed between the data packet message and the flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, the target memory flow table entry is updated according to the data packet message. In this way the flow information of data packets can be counted efficiently and accurately, thereby improving the processing performance of the network device.
Fig. 7 is a schematic structural diagram of a packet management device according to an embodiment of the present invention, where the device is configured to execute the packet management method described above. As shown in fig. 7, the device includes: a cache determination unit 11, a cache updating unit 12, a memory determination unit 13, and a memory updating unit 14.
The cache determination unit 11 is configured to perform a first matching judgment between the acquired data packet message and the flow table entries in the cache flow table, and generate a first matching result.
The cache updating unit 12 is configured to update the target cache flow entry according to the packet message if the first matching result includes the target cache flow entry.
The memory determination unit 13 is configured to perform secondary matching determination on the packet message and a flow entry in the memory flow table if the first matching result does not include the target cache flow entry, and generate a second matching result.
The memory updating unit 14 is configured to update the target memory flow entry according to the data packet message if the second matching result includes the target memory flow entry.
In the embodiment of the present invention, the cache determination unit 11 is specifically configured to perform hash calculation on the message key value to obtain a message hash value; match the message hash value with the cache hash values; if a target cache hash value matching the message hash value exists, determine the flow table entry corresponding to the target cache hash value as the target cache flow table entry and generate a first matching result comprising the target cache flow table entry; and if no target cache hash value matching the message hash value exists, generate a first matching result not comprising the target cache flow table entry.
In the embodiment of the present invention, the memory determination unit 13 is specifically configured to perform hash calculation on a key value of a packet to obtain a hash value of the packet; matching the message hash value with the memory hash value; if a target memory hash value matched with the message hash value exists, determining a flow entry corresponding to the target memory hash value as a target memory flow entry, and generating a second matching result comprising the target memory flow entry; and if the target memory hash value matched with the message hash value does not exist, generating a second matching result without the target memory flow table entry.
In this embodiment of the present invention, the cache updating unit 12 is specifically configured to write the data packet message and the target cache address into the target waiting queue indicated by the target cache queue identifier; acquire the data packet message and the target cache address at the head-of-queue position from the target waiting queue; and update the target cache flow table entry corresponding to the target cache address according to the service requirement.
In the embodiment of the present invention, the cache updating unit 12 is specifically configured to: if the service requirement is newly created, create a new flow table entry in the cache flow table corresponding to the target cache flow table entry; if the service requirement is deletion, delete the target cache flow table entry; if the service requirement is searching, query the corresponding target cache flow table entry according to the target cache address; and if the service requirement is modification, modify the target cache flow table entry according to the acquired modification requirement.
In the embodiment of the present invention, the apparatus further includes: a judging unit 15 and a deleting unit 16.
The judging unit 15 is configured to traverse the aging time field of the memory flow table in the address order of the memory flow table, judge, according to the current time and the aging time field, whether the flow entry corresponding to the aging time field has timed out, and, if it has not timed out, repeat the step of traversing the aging time field of the memory flow table in the address order of the memory flow table.
The deleting unit 16 is configured to delete the flow entry if it has timed out.
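The aging traversal performed by the judging unit 15 and the deleting unit 16 can be sketched as follows; the `aging_time` field name and its absolute-expiry-time semantics are assumptions for the example, not details taken from the patent.

```python
import time

def age_out_flow_table(memory_flow_table: dict, now: float = None) -> int:
    """Traverse entries in address order and delete any whose aging time has passed.

    Each entry is assumed to carry an 'aging_time' field holding the absolute
    time (in seconds) at which the entry expires.
    """
    now = time.time() if now is None else now
    removed = 0
    for address in sorted(memory_flow_table):      # address order of the table
        entry = memory_flow_table[address]
        if now >= entry["aging_time"]:             # flow entry has timed out
            del memory_flow_table[address]
            removed += 1
    return removed
```

A real implementation would rerun this scan periodically, matching the "repeat the traversal" behavior of the judging unit.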
In the solution of this embodiment of the invention, the acquired data packet message undergoes a primary matching judgment against the flow entries in the cache flow table to generate a first matching result. If the first matching result includes a target cache flow entry, the target cache flow entry is updated according to the data packet message; if it does not, the data packet message undergoes a secondary matching judgment against the flow entries in the memory flow table to generate a second matching result. If the second matching result includes a target memory flow entry, the target memory flow entry is updated according to the data packet message. In this way, the flow information of data packets can be counted efficiently and accurately, improving the processing performance of the network device.
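The two-level matching flow summarized above can be sketched as a cache-first lookup with a memory-table fallback. The hash-keyed dict tables and the `{'packets': n}` entry layout are illustrative assumptions, not the patented data structures.

```python
def manage_packet(message_hash: int, cache_flow_table: dict, memory_flow_table: dict) -> str:
    """Two-level matching: try the cache flow table first, then the memory flow table.

    Returns which level served the packet: 'cache', 'memory', or 'miss'.
    """
    entry = cache_flow_table.get(message_hash)     # primary matching judgment
    if entry is not None:
        entry["packets"] += 1                      # update target cache flow entry
        return "cache"
    entry = memory_flow_table.get(message_hash)    # secondary matching judgment
    if entry is not None:
        entry["packets"] += 1                      # update target memory flow entry
        return "memory"
    return "miss"
```

The point of the split is that the small cache flow table absorbs most updates cheaply, while the larger memory flow table only sees packets that miss the cache.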
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
An embodiment of the present invention provides a computer device, including a memory and a processor, where the memory is used to store information including program instructions, and the processor is used to control execution of the program instructions, and the program instructions are loaded and executed by the processor to implement the steps of the above-mentioned embodiment of the data packet management method.
Referring now to FIG. 8, shown is a schematic diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in FIG. 8, the computer apparatus 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the computer apparatus 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A method for packet management, the method comprising:
performing primary matching judgment on the acquired data packet message and flow table items in the cache flow table to generate a first matching result;
if the first matching result comprises a target cache flow table entry, updating the target cache flow table entry according to the data packet message;
if the first matching result does not comprise a target cache flow table entry, performing secondary matching judgment on the data packet message and a flow table entry in a memory flow table to generate a second matching result;
and if the second matching result comprises a target memory flow table entry, updating the target memory flow table entry according to the data packet message.
2. The method according to claim 1, wherein the packet message includes a message key value, the cache flow table includes a plurality of flow table entries, and each flow table entry corresponds to a cache hash value;
the said match to the flow table entry in the data packet message and buffer flow table obtained and judge once, produce the first match result, including:
performing hash calculation on the key value of the message to obtain a hash value of the message;
matching the message hash value with the cache hash value;
if a target cache hash value matched with the message hash value exists, determining a flow entry corresponding to the target cache hash value as a target cache flow entry, and generating a first matching result comprising the target cache flow entry;
and if the target cache hash value matched with the message hash value does not exist, generating a first matching result without the target cache flow table entry.
3. The method according to claim 1, wherein the packet message includes a message key value, the memory flow table includes a plurality of flow table entries, and each flow table entry corresponds to a memory hash value;
the second matching judgment is carried out on the data packet message and the flow table items in the memory flow table to generate a second matching result, and the second matching result comprises the following steps:
performing hash calculation on the key value of the message to obtain a hash value of the message;
matching the message hash value with the memory hash value;
if a target memory hash value matched with the message hash value exists, determining a flow entry corresponding to the target memory hash value as a target memory flow entry, and generating a second matching result comprising the target memory flow entry;
and if the target memory hash value matched with the message hash value does not exist, generating a second matching result without the target memory flow table entry.
4. The method according to claim 1, wherein the data packet message includes a service requirement, and the target cache flow entry includes a target cache address and a target cache queue identifier;
if the first matching result includes a target cache flow entry, updating the target cache flow entry according to the packet message, including:
writing the data packet message and the target cache address into a target waiting queue indicated by the target cache queue identification;
acquiring a data packet message and a target cache address at the head of the queue from the target waiting queue;
and updating the target cache flow table entry corresponding to the target cache address according to the service requirement.
5. The method according to claim 4, wherein the service requirement includes creation, deletion, search, or modification;
the updating the target cache flow table entry corresponding to the target cache address according to the service requirement includes:
if the service requirement is creation, creating a new flow entry in the cache flow table corresponding to the target cache flow entry;
if the service requirement is deletion, deleting the target cache flow table entry;
if the service requirement is searching, inquiring a corresponding target cache flow table entry according to the target cache address;
and if the service requirement is modification, modifying the target cache flow table entry according to the acquired modification requirement.
6. The packet management method according to claim 1, wherein the memory flow table further includes an aging time field;
the method further comprises the following steps:
traversing an aging time field of the memory flow table according to an address sequence of the memory flow table, and judging whether a flow table entry corresponding to the aging time field is overtime or not according to the current time and the aging time field;
if yes, deleting the overtime flow table entry;
and if not, repeatedly executing the step of traversing the aging time field of the memory flow table according to the address sequence of the memory flow table.
7. A data packet management system is characterized in that the system comprises a message management subsystem, a cache subsystem and a memory flow table subsystem;
the message management subsystem is used for acquiring a data message, extracting a data packet message from the data message and sending the data packet message to the cache subsystem;
the cache subsystem is used for carrying out primary matching judgment on the data packet message and flow table items in a cache flow table to generate a first matching result; if the first matching result comprises a target cache flow table entry, updating the target cache flow table entry according to the data packet message; if the first matching result does not comprise a target cache flow table entry, sending the data packet message to the memory flow table subsystem;
the memory flow table subsystem is used for carrying out secondary matching judgment on the data packet message and flow table entries in the memory flow table to generate a second matching result; and if the second matching result comprises a target memory flow table entry, updating the target memory flow table entry according to the data packet message.
8. The packet management system of claim 7, wherein the cache subsystem comprises a cache module;
the cache module is used for performing hash calculation on the key value of the message to obtain a message hash value; matching the message hash value with the cache hash value; if a target cache hash value matched with the message hash value exists, determining a flow entry corresponding to the target cache hash value as a target cache flow entry, and generating a first matching result comprising the target cache flow entry; and if the target cache hash value matched with the message hash value does not exist, generating a first matching result without the target cache flow table entry.
9. The packet management system of claim 8, wherein the data packet message includes a service requirement, and wherein the target cache flow entry includes a target cache address and a target cache queue identifier; the cache subsystem further comprises a waiting queue module and a service processing module;
the waiting queue module is used for storing a plurality of waiting queues, each waiting queue comprising a corresponding cache queue identifier;
the cache module is used for writing the data packet message and a target cache address into a target waiting queue indicated by the target cache queue identification;
the service processing module is used for acquiring a data packet message and a target cache address at a head of queue position from the target waiting queue; and updating the target cache flow table entry corresponding to the target cache address according to the service requirement.
10. A packet management apparatus, characterized in that the apparatus comprises:
the cache judging unit is used for carrying out primary matching judgment on the acquired data packet message and flow table items in the cache flow table to generate a first matching result;
a cache updating unit, configured to update a target cache flow entry according to the packet message if the first matching result includes the target cache flow entry;
the memory judging unit is used for performing secondary matching judgment on the data packet message and flow table entries in a memory flow table to generate a second matching result if the first matching result does not comprise a target cache flow table entry;
and the memory updating unit is used for updating the target memory flow table entry according to the data packet message if the second matching result comprises the target memory flow table entry.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method of packet management according to any one of claims 1 to 6.
12. A computer device comprising a memory for storing information including program instructions and a processor for controlling the execution of the program instructions, wherein the program instructions when loaded and executed by the processor implement the packet management method of any of claims 1 to 6.
CN202110843163.6A 2021-07-26 2021-07-26 Data packet management method, system and device Active CN113595822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110843163.6A CN113595822B (en) 2021-07-26 2021-07-26 Data packet management method, system and device


Publications (2)

Publication Number Publication Date
CN113595822A true CN113595822A (en) 2021-11-02
CN113595822B CN113595822B (en) 2024-03-22

Family

ID=78249971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110843163.6A Active CN113595822B (en) 2021-07-26 2021-07-26 Data packet management method, system and device

Country Status (1)

Country Link
CN (1) CN113595822B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629842A (en) * 2022-03-30 2022-06-14 阿里巴巴(中国)有限公司 Flow table processing method, electronic device, readable storage medium and product
CN115914102A (en) * 2023-02-08 2023-04-04 阿里巴巴(中国)有限公司 Data forwarding method, flow table processing method, device and system
CN116055397A (en) * 2023-03-27 2023-05-02 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device
CN116389322A (en) * 2023-06-02 2023-07-04 腾讯科技(深圳)有限公司 Traffic data processing method, device, computer equipment and storage medium

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011078108A1 (en) * 2009-12-21 2011-06-30 日本電気株式会社 Pattern-matching method and device for a multiprocessor environment
CN103281246A (en) * 2013-05-20 2013-09-04 华为技术有限公司 Message processing method and network equipment
CN104601468A (en) * 2015-01-13 2015-05-06 杭州华三通信技术有限公司 Message forwarding method and device
CN105099920A (en) * 2014-04-30 2015-11-25 杭州华三通信技术有限公司 Method and device for setting SDN flow entry
CN105516016A (en) * 2015-11-25 2016-04-20 北京航空航天大学 Flow-based data packet filtering system and data packet filtering method by using Tilera multi-core accelerator card
US20160182373A1 (en) * 2014-12-23 2016-06-23 Ren Wang Technologies for network device flow lookup management
US20160301655A1 (en) * 2015-04-07 2016-10-13 Nicira, Inc. Address resolution protocol suppression using a flow-based forwarding element
CN106453129A (en) * 2016-09-30 2017-02-22 杭州电子科技大学 Elephant flow two-level identification system and method
CN106789733A (en) * 2016-12-01 2017-05-31 北京锐安科技有限公司 A kind of device and method for improving large scale network flow stream searching efficiency
US20170163575A1 (en) * 2015-12-07 2017-06-08 Ren Wang Mechanism to support multiple-writer/multiple-reader concurrency for software flow/packet classification on general purpose multi-core systems
CN107113282A (en) * 2014-12-30 2017-08-29 华为技术有限公司 A kind of method and device for extracting data message
US20170264497A1 (en) * 2016-03-08 2017-09-14 Nicira, Inc. Method to reduce packet statistics churn
CN109347745A (en) * 2018-09-20 2019-02-15 郑州云海信息技术有限公司 A kind of flow table matching process and device based on OpenFlow interchanger
CN109600313A (en) * 2017-09-30 2019-04-09 迈普通信技术股份有限公司 Message forwarding method and device
CN109600318A (en) * 2018-11-29 2019-04-09 新华三技术有限公司合肥分公司 A kind of method and SDN controller monitoring application program in SDN
CN109714266A (en) * 2018-12-25 2019-05-03 迈普通信技术股份有限公司 A kind of data processing method and the network equipment
CN109873768A (en) * 2017-12-01 2019-06-11 华为技术有限公司 Update method, hardware accelerator, OVS and the server of forwarding table
US20200059485A1 (en) * 2019-10-10 2020-02-20 Mesut Ergin Secure networking protocol optimization via nic hardware offloading
CN111092785A (en) * 2019-12-05 2020-05-01 深圳市任子行科技开发有限公司 Data monitoring method and device
CN112313910A (en) * 2018-06-13 2021-02-02 华为技术有限公司 Multi-path selection system and method for data center centric metropolitan area networks
CN112491901A (en) * 2020-11-30 2021-03-12 北京锐驰信安技术有限公司 Network flow fine screening device and method
CN112994983A (en) * 2021-04-01 2021-06-18 杭州迪普信息技术有限公司 Flow statistical method and device and electronic equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANG Lili; ZENG Guosun; DING Chunling: "Intelligent flow-table update method for switches based on data-stream feature awareness", Computer Applications, no. 07 *
CAO Zuowei et al.: "Flow caching method for protocol-oblivious forwarding switches", China Master's Theses Full-text Database, Information Science and Technology *
WANG Xin: "Research on DPI-based high-speed network packet processing technology", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629842A (en) * 2022-03-30 2022-06-14 阿里巴巴(中国)有限公司 Flow table processing method, electronic device, readable storage medium and product
CN115914102A (en) * 2023-02-08 2023-04-04 阿里巴巴(中国)有限公司 Data forwarding method, flow table processing method, device and system
CN115914102B (en) * 2023-02-08 2023-05-23 阿里巴巴(中国)有限公司 Data forwarding method, flow table processing method, equipment and system
CN116055397A (en) * 2023-03-27 2023-05-02 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device
CN116055397B (en) * 2023-03-27 2023-08-18 井芯微电子技术(天津)有限公司 Queue entry maintenance method and device
CN116389322A (en) * 2023-06-02 2023-07-04 腾讯科技(深圳)有限公司 Traffic data processing method, device, computer equipment and storage medium
CN116389322B (en) * 2023-06-02 2023-08-15 腾讯科技(深圳)有限公司 Traffic data processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113595822B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN113595822B (en) Data packet management method, system and device
US20160285812A1 (en) Method of issuing messages of a message queue and a message issuing device
JP4792505B2 (en) Data synchronization processing method, client, server, and data synchronization system between client and server
CN106209948B (en) A kind of data push method and device
US9860332B2 (en) Caching architecture for packet-form in-memory object caching
EP3562096B1 (en) Method and device for timeout monitoring
WO2015096692A1 (en) Method and system for controlling data reception traffic and computer storage medium
CN108055302B (en) Picture caching processing method and system and server
US20140258375A1 (en) System and method for large object cache management in a network
EP4044474A2 (en) Data transmission method and apparatus, and electronic device
US20210064578A1 (en) Method and device for deduplication
US11507277B2 (en) Key value store using progress verification
CN110502364A (en) Across the cloud back-up restoring method of big data sandbox cluster under a kind of OpenStack platform
US10637969B2 (en) Data transmission method and data transmission device
CN114244752A (en) Flow statistical method, device and equipment
CN110650182B (en) Network caching method and device, computer equipment and storage medium
US20200311029A1 (en) Key value store using generation markers
CN109218799B (en) Method, storage medium, device and system for quickly switching high-definition images of android television
US11334623B2 (en) Key value store using change values for data properties
EP3886396A1 (en) Methods for dynamically controlling transmission control protocol push functionality and devices thereof
WO2018153236A1 (en) Method and apparatus for accelerating dynamic resource access based on api request, medium, and device
CN109660589B (en) Request processing method and device and electronic equipment
CN114827159B (en) Network request path optimization method, device, equipment and storage medium
US10250515B2 (en) Method and device for forwarding data messages
CN115550250A (en) Small flow message retransmission method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant