CN105337991B - Integrated message flow lookup and update method - Google Patents

Integrated message flow lookup and update method

Info

Publication number
CN105337991B
Authority
CN
China
Prior art keywords
message flow
nodal information
message
information
cpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510815890.6A
Other languages
Chinese (zh)
Other versions
CN105337991A (en)
Inventor
杨白
黄高平
陈建华
唐靖飚
李欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Rongteng Network Technology Co Ltd
Original Assignee
Hunan Rongteng Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Rongteng Network Technology Co Ltd filed Critical Hunan Rongteng Network Technology Co Ltd
Priority to CN201510815890.6A priority Critical patent/CN105337991B/en
Publication of CN105337991A publication Critical patent/CN105337991A/en
Application granted granted Critical
Publication of CN105337991B publication Critical patent/CN105337991B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/30 - Peripheral units, e.g. input or output ports
    • H04L49/3063 - Pipelined operation

Abstract

The invention belongs to the field of message flow information management in high-speed networks and relates to an integrated message flow lookup and update method. The steps include: (1) an FPGA parses each packet and extracts its five-tuple information and payload data; (2) a DDR address is calculated from the five-tuple information and the message flow node information stored at that address is read; (3) the message flow node information is compared with the five-tuple information; if they are consistent, hit message flow node information is generated; otherwise deep packet inspection matching is performed on the payload data, and new message flow node information is generated if a rule matches; (4) the FPGA uploads the data to the CPU over PCI_E; (5) the CPU parses the received data to obtain the message flow node information, performs memory management on the message flow node information stored in memory to generate updated message flow node information, packs it into the TLP payload data format and sends it down to the FPGA; (6) the FPGA parses the received updated message flow information and writes it into DDR. The invention adopts an integrated implementation; the process is simple and access is fast.

Description

Integrated message flow lookup and update method
Technical field
The invention belongs to the field of high-speed network message flow management, and in particular relates to an integrated message flow lookup and update method.
Background technology
Message flow lookup and update functions play a fundamental role in network traffic aggregation and splitting devices, and in particular they directly determine the detection and matching performance of devices that perform deep packet inspection (DPI). A message flow refers to the packets that successively pass through a network node within a certain time interval; these packets share the same flow information such as source IP, destination IP, source PORT (port number), destination PORT (port number) and protocol number. Message flows are distinguished by direction: the flow sent by the initiator is called the downstream flow, and the flow sent by the receiver is called the upstream flow. Message flow lookup and update are mainly based on packets that match DPI rules; by parsing and recording such packets, message flows are created, looked up, updated, checked for timeout and deleted in an integrated manner.
Deep packet inspection matches the packet payload against predefined rules. The detection method compiles the rule expressions into a finite state machine and processes one byte at a time to obtain the next state to transfer to, so it is relatively inefficient. Therefore a message flow needs to be established for packets that match DPI; subsequent packets only compare their flow information with that of the established flow, and if they are consistent, DPI matching need not be performed again, which greatly improves efficiency.
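Purely as an illustration of why byte-at-a-time matching is slow (this code is not taken from the patent), the C sketch below walks a payload through a toy finite state machine with one transition-table lookup per byte; the table contents, state count and the pattern "abc" are made up, and failure transitions are simplified.

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_STATES 4

/* Hypothetical DFA: transition[state][byte] gives the next state.
 * In a real DPI engine this table is compiled from the rule set;
 * here it is all zero except for a tiny example pattern, and
 * failure transitions are simplified for brevity. */
static uint8_t transition[NUM_STATES][256];
static const uint8_t ACCEPT_STATE = 3;

/* Walk the payload one byte at a time: one table lookup per byte,
 * which is the per-byte cost the description refers to. */
static bool dfa_match(const uint8_t *payload, size_t len)
{
    uint8_t state = 0;
    for (size_t i = 0; i < len; i++) {
        state = transition[state][payload[i]];
        if (state == ACCEPT_STATE)
            return true;        /* a rule matched somewhere in the payload */
    }
    return false;
}

int main(void)
{
    /* Example pattern "abc": 0 -a-> 1 -b-> 2 -c-> 3 (accepting). */
    transition[0]['a'] = 1;
    transition[1]['b'] = 2;
    transition[2]['c'] = 3;

    const uint8_t payload[] = "xxabcxx";
    return dfa_match(payload, sizeof(payload) - 1) ? 0 : 1;
}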
As networks grow, backbone traffic often exceeds 100G, and the number of message flows can reach millions or even tens of millions of entries. How to look up and update such large-scale message flows efficiently in real time is therefore the key to improving system performance.
Existing message flow lookup and update methods have the following deficiencies: 1. Implementations based on multi-core, multi-thread and multi-channel parallel mechanisms occupy a large amount of memory, and their channel interfaces cannot meet the demands of high-speed networks. 2. Looking up and updating large-scale message flows with an FPGA (Field-Programmable Gate Array) alone is extremely complicated and difficult to implement.
Summary of the invention
In the prior art, managing large-scale message flows with an FPGA alone is extremely difficult, since it involves flow creation, timeout and similar problems and the processing is complicated; meanwhile, multi-core, multi-channel parallel mechanisms face channel-interface bottlenecks and cannot handle network traffic above 100G. To solve the problems of existing message flow lookup and update methods, the present invention proposes an integrated message flow lookup and update method based on an FPGA and a CPU (Central Processing Unit). The specific technical solution is as follows:
An integrated message flow lookup and update method comprises the following steps:
(1) The FPGA ports receive input packets; each packet is parsed, and its five-tuple information and payload data are extracted;
(2) The five-tuple information is hashed to obtain a DDR (Double Data Rate synchronous dynamic random-access memory) address for the lookup, and the message flow node information stored in DDR at that address is accessed and read;
(3) The FPGA compares whether the message flow node information read out is consistent with the five-tuple information of the input packet from step (1); if consistent, this represents a hit and hit message flow node information is generated; if inconsistent, this represents a miss, and deep packet inspection matching is performed on the payload data of the input packet; if a rule matches, new message flow node information is generated;
(4) Through the PCI_E (Peripheral Component Interconnect Express) bus interface, the FPGA uploads the hit or new message flow node information generated in step (3) to the CPU in the TLP (Transaction Layer Packet) payload data format specified by the PCI_E protocol;
(5) The CPU parses the received TLP payload data to obtain the hit or new message flow node information, extracts the CPU memory address value from it, performs system management on the message flow node information stored at the corresponding CPU memory address to generate updated message flow node information, packs it into the TLP payload data format and sends it down to the FPGA through the PCI_E bus interface;
(6) After the FPGA parses the updated message flow node information issued over the PCI_E bus interface, it writes the updated message flow node information into the DDR address obtained in step (2), completing the whole message flow node information lookup and update process.
Further, the system management process in step (5) includes updating the hit count entry and the aging count entry of the message flow node information at the CPU memory address; the CPU judges one by one whether the value of the aging count entry of each message flow node information exceeds the preset timeout count value, and deletes timed-out message flow node information from the table;
The detailed process is as follows: the TLP payload data is parsed to judge whether it is hit message flow node information or new message flow node information. If it is hit message flow node information, the corresponding CPU memory location is accessed using the address value carried in the message, the aging count entry and the hit count entry of the message flow node information stored at that address are cleared, and the update is completed. If it is new message flow node information, the CPU judges whether the hit count entry of the message flow node information stored at the corresponding memory address exceeds a preset value; if it exceeds the preset value, the new message flow node information replaces the message flow node information stored at that memory address; otherwise the new message flow node information is discarded and no change is made. The aging-time list value configured in the CPU is read, and the aging count entry of the message flow node information stored at the same address in CPU memory is compared with it one by one; if the aging count exceeds the aging-time list value, the message flow node information is deleted and updated message flow node information is generated.
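The patent does not give a concrete memory layout for a flow node entry. Purely as an assumption-laden illustration, the C structure below collects the fields the description mentions (normalized IP and port pair, protocol, valid flag, DDR address, DPI rule number, hit count entry and aging count entry); the field names and widths are guesses, not part of the invention.

#include <stdint.h>

/* Hypothetical layout of one message flow node entry as managed by the
 * CPU (cf. Fig. 5).  All names and widths are assumptions. */
typedef struct {
    uint32_t large_ip;     /* numerically larger of source/destination IP  */
    uint32_t small_ip;     /* numerically smaller of source/destination IP */
    uint16_t large_port;   /* larger of source/destination port            */
    uint16_t small_port;   /* smaller of source/destination port           */
    uint8_t  protocol;     /* transport-layer protocol number              */
    uint8_t  valid;        /* flow valid flag                              */
    uint16_t dpi_rule_id;  /* DPI rule number that created the flow        */
    uint32_t ddr_addr;     /* DDR address of the entry on the FPGA side    */
    uint32_t hit_count;    /* hit count entry, see step (5.2)              */
    uint32_t aging_count;  /* aging count entry, see step (5.3)            */
} flow_node_t;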
The advantageous effects obtained by the present invention are: 1. The present invention adopts an integrated implementation; the CPU only manages and maintains the uploaded message flow node information and does not need to process the entire complete input packet, so the process is simple and little memory is occupied. 2. The present invention uses the FPGA to parse the input packets while the CPU manages and updates the message flow node information, which effectively resolves the bottleneck that managing large-scale message flow node information in an FPGA alone is extremely complex, and meets the need for complex message flow node information management in high-speed network environments.
Description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 shows the format of the message flow node information stored in DDR;
Fig. 3 shows the format of the message flow node information on the PCI_E bus;
Fig. 4 shows the format of the message flow node information in the DMA payload;
Fig. 5 shows the format of the message flow node information stored in CPU memory;
Fig. 6 is a flow chart of the CPU managing the message flow node information stored in memory;
Fig. 7 shows the overall structure of a network aggregation and splitting device using the present invention.
Specific embodiment
The invention is further described below with reference to the accompanying drawings and specific preferred embodiments, which, however, do not thereby limit the scope of protection of the invention.
As shown in Fig. 1, the integrated message flow lookup and update method based on an FPGA and a CPU has the following main steps:
(1) The FPGA parses the packets received on its ports and extracts the five-tuple information and payload data. Parsing includes framing, CRC (Cyclic Redundancy Check) verification, MAC (Media Access Control) header stripping and similar operations. The five-tuple information refers to the source IP address, source PORT (port), destination IP address, destination PORT (port) and transport-layer protocol.
(2) A hash is calculated over the five-tuple information and the result is used as the lookup address to decide whether the message flow node information stored in DDR is hit; the message flow node information stored in DDR is accessed and read according to that address;
(3) The FPGA compares, item by item, whether the message flow node information read out is consistent with the five-tuple information of the input packet. If consistent, this represents a hit and hit message flow node information is generated. If inconsistent, this represents a miss, and deep packet inspection (DPI) matching is performed on the payload of the input packet; the matching process is prior art, in which the packet is compared byte by byte against the configured rules. If a rule matches, the packet is one that carries new flow node information, and new message flow node information is generated for the matched packet.
(4) The FPGA packs the new or hit message flow node information into the TLP payload data format specified by the PCI_E protocol and uploads it to the CPU through the PCI_E bus interface;
(5) After parsing the received TLP payload data, the CPU obtains the new or hit message flow node information, performs system management on the message flow node information stored in CPU memory accordingly to generate updated message flow node information, likewise packs it into the TLP payload data format and sends it down to the FPGA through the PCI_E bus interface;
(6) After the FPGA parses the updated message flow node information issued over the PCI_E bus interface, it writes the updated message flow node information into the DDR address obtained in step (2), completing the whole message flow node information lookup and update process.
In the present embodiment, step (1) uses the fast and stable processing capability of the programmable FPGA chip to identify and parse the input packets of multiple channels and can efficiently extract the five-tuple information and payload data of each packet. The five-tuple information refers to the packet's protocol number, source IP, destination IP, source port number and destination port number. The payload data refers to the pure user data that remains after all protocol headers have been removed.
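As an illustration only, the C sketch below mirrors in software the header parsing that the FPGA performs in hardware, for the simple case of an untagged IPv4 Ethernet frame carrying TCP or UDP; the type and function names (five_tuple_t, extract_five_tuple) are hypothetical, and details such as VLAN tags, TCP options and CRC checking are deliberately omitted.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

/* Five-tuple as described in step (1). */
typedef struct {
    uint32_t src_ip, dst_ip;      /* kept in network byte order */
    uint16_t src_port, dst_port;  /* kept in network byte order */
    uint8_t  protocol;
} five_tuple_t;

/* Software illustration of the parsing the FPGA performs in hardware:
 * strip the MAC header, read the IPv4 header, read the TCP/UDP ports,
 * and return the remaining bytes as the payload. */
static bool extract_five_tuple(const uint8_t *frame, size_t len,
                               five_tuple_t *t,
                               const uint8_t **payload, size_t *payload_len)
{
    if (len < 14 + 20) return false;
    if (frame[12] != 0x08 || frame[13] != 0x00) return false;  /* not IPv4 */

    const uint8_t *ip = frame + 14;                /* after the MAC header */
    size_t ihl = (size_t)(ip[0] & 0x0F) * 4;       /* IPv4 header length   */
    if (ihl < 20 || len < 14 + ihl + 4) return false;

    t->protocol = ip[9];
    memcpy(&t->src_ip, ip + 12, 4);
    memcpy(&t->dst_ip, ip + 16, 4);
    if (t->protocol != 6 && t->protocol != 17) return false;   /* TCP/UDP only */

    const uint8_t *l4 = ip + ihl;
    memcpy(&t->src_port, l4, 2);                   /* TCP and UDP both start */
    memcpy(&t->dst_port, l4 + 2, 2);               /* with src/dst port      */

    /* Payload = everything after the transport header; a fixed 20-byte TCP
     * header (no options) is assumed purely for illustration. */
    size_t l4_hdr = (t->protocol == 6) ? 20 : 8;
    if (len < 14 + ihl + l4_hdr) return false;
    *payload = l4 + l4_hdr;
    *payload_len = len - 14 - ihl - l4_hdr;
    return true;
}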
In the present embodiment, the DDR address to be accessed is obtained by a CRC32 hash of the five-tuple information, and the FPGA reads out the message flow node information stored at that address; its format is shown in Fig. 2. LARGE_IP denotes the numerically larger of the source IP and destination IP, and SMALL_IP denotes the smaller of the two; LARGE_PORT denotes the larger of the source and destination PORTs, and SMALL_PORT the smaller of the two. When the stream valid flag is 1, the five-tuple information extracted from the input packet is compared with the corresponding elements of the message flow node information format to judge whether they are consistent. If they are completely consistent, this represents a hit and hit message flow node information is generated; this information consists of the hit type, hit packet length and hit DDR address. On a miss, byte-by-byte deep packet inspection (DPI) matching is performed on the payload data starting from its first byte, and new message flow node information is generated if a signature matches; this information contains the DDR address, IP addresses, PORTs, the DPI rule number, the new message flow node creation mode, the packet type and a new message flow node information valid flag, but the entire complete packet is not uploaded to the CPU.
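A minimal software model of this lookup is sketched below in C, assuming a standard reflected CRC-32 polynomial and a table of 2^20 entries standing in for DDR; the patent does not specify the CRC parameters, the table size or the exact entry layout, so these, and all names used, are assumptions.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Five-tuple of the input packet (names are illustrative). */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
} tuple_t;

/* One DDR entry, following the LARGE_IP/SMALL_IP, LARGE_PORT/SMALL_PORT
 * normalization of Fig. 2 plus the stream valid flag. */
typedef struct {
    uint32_t large_ip, small_ip;
    uint16_t large_port, small_port;
    uint8_t  protocol;
    uint8_t  valid;
} ddr_entry_t;

#define TABLE_ENTRIES (1u << 20)              /* assumed table size        */
static ddr_entry_t ddr_table[TABLE_ENTRIES];  /* software stand-in for DDR */

/* Bitwise CRC-32 (IEEE 802.3, reflected).  The patent only says "CRC32
 * hash"; the polynomial and seed used by the FPGA are not specified. */
static uint32_t crc32_ieee(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Normalize so that both directions of a flow hash to the same address. */
static ddr_entry_t normalize(const tuple_t *t)
{
    ddr_entry_t e = {0};
    e.large_ip   = t->src_ip   > t->dst_ip   ? t->src_ip   : t->dst_ip;
    e.small_ip   = t->src_ip   > t->dst_ip   ? t->dst_ip   : t->src_ip;
    e.large_port = t->src_port > t->dst_port ? t->src_port : t->dst_port;
    e.small_port = t->src_port > t->dst_port ? t->dst_port : t->src_port;
    e.protocol   = t->protocol;
    return e;
}

/* Steps (2)-(3): hash to a DDR address, read the stored entry and compare.
 * Returns true on a hit; on a miss the caller would run DPI on the payload
 * and, if a rule matches, install a new entry at *addr. */
static bool lookup_flow(const tuple_t *t, uint32_t *addr)
{
    ddr_entry_t key = normalize(t);
    *addr = crc32_ieee((const uint8_t *)&key,
                       offsetof(ddr_entry_t, valid)) % TABLE_ENTRIES;
    const ddr_entry_t *stored = &ddr_table[*addr];
    return stored->valid == 1 &&
           stored->large_ip   == key.large_ip   &&
           stored->small_ip   == key.small_ip   &&
           stored->large_port == key.large_port &&
           stored->small_port == key.small_port &&
           stored->protocol   == key.protocol;
}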
In the present embodiment, the computer-internal I/O bus technology PCI_E is used as the communication interface between the FPGA and the CPU. The PCI_E bus uses a dual-channel transmission mode and can provide a higher connection speed with fewer data lines. To further exploit the advantage that the PCI_E bus handles large blocks of data more efficiently, multiple message flow information records need to be packed into one large data block before being transmitted to the CPU, so as to make better use of DMA (Direct Memory Access) performance. As shown in Fig. 3, at the transaction layer the FPGA encapsulates multiple message flow information records in a DMA payload and prepends a custom DMA header; this header contains a DMA payload length indication, an indication of whether the data is valid, and create-or-hit message flow flags. By parsing the DMA header the CPU learns the number of valid message flows in the DMA payload and the length of each. At the data link layer, to ensure reliable and correct transmission of the data packet, a sequence number and a redundancy check code are added at the head and tail of the transaction-layer packet. The physical layer is the lowest layer; the data-link-layer packet is transmitted to the CPU after the physical-layer frame header FRAME is added. The message flow node information format packed in the DMA payload, shown in Fig. 4, includes the DDR address corresponding to the message flow node information, the flow creation mode and other contents.
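To illustrate the packing idea only, the following C sketch gathers several flow records behind a small DMA header carrying a payload length and a record count; the real header layout, the field widths and the link-layer sequence number and check code mentioned above are not disclosed in the patent, so this layout is purely hypothetical.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* One packed flow record as carried in the DMA payload (cf. Fig. 4);
 * the real field layout is not disclosed, so this one is hypothetical. */
typedef struct {
    uint32_t ddr_addr;     /* DDR address of the flow node              */
    uint8_t  is_new;       /* 1 = newly created flow, 0 = hit           */
    uint8_t  valid;        /* record valid flag                         */
    uint16_t record_len;   /* length of this record in bytes            */
    /* ... remaining flow node fields would follow here ...             */
} dma_record_t;

/* Hypothetical DMA header placed in front of the packed records. */
typedef struct {
    uint16_t payload_len;  /* total length of all records, in bytes     */
    uint16_t record_count; /* number of flow records in this DMA block  */
} dma_header_t;

/* Pack up to `count` records into one DMA block so the PCI_E/DMA engine
 * moves a single large payload instead of many small ones.
 * Returns the number of bytes written, or 0 if `buf` is too small. */
static size_t pack_dma_block(uint8_t *buf, size_t buf_len,
                             const dma_record_t *records, uint16_t count)
{
    size_t need = sizeof(dma_header_t) + (size_t)count * sizeof(dma_record_t);
    if (need > buf_len)
        return 0;

    dma_header_t hdr;
    hdr.payload_len  = (uint16_t)((size_t)count * sizeof(dma_record_t));
    hdr.record_count = count;

    memcpy(buf, &hdr, sizeof(hdr));
    memcpy(buf + sizeof(hdr), records, (size_t)count * sizeof(dma_record_t));
    return need;
}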
By managing the message flow node information shown in Fig. 5, the CPU indirectly accesses at high speed the actual message flow node information stored in DDR and thereby controls the entire set of message flow node information. Fig. 6 is the flow chart of step (5), in which the CPU manages the message flow node information stored in memory; the specific implementation steps are:
(5.1) The CPU parses the TLP (Transaction Layer Packet) payload data obtained from the PCI_E bus and judges whether the information is hit message flow node information or new message flow node information.
(5.2) If it is hit message flow node information, the corresponding location in CPU memory is accessed using the address carried in the information, the aging count entry and the hit count entry in the message flow node information stored at that location are cleared, and the update is completed. If it is new message flow node information, the CPU judges whether the hit count entry in the message flow node information stored at the corresponding memory address exceeds the preset value; if it exceeds the preset value, the new message flow node information replaces the message flow node information stored at that memory address; otherwise the new message flow node information is discarded and no change is made.
(5.3) The current value at address 0 of the aging-time list configured in the CPU (i.e. the aging-time list value) is read, and the aging count entry of the message flow node information stored at the same address in CPU memory is compared with it; if the aging count exceeds the aging-time list value, the message flow node information is deleted and an update of the message flow node information stored in DDR is generated; otherwise the software does nothing. The list address is then incremented by 1 and the process of (5.3) is repeated until the address reaches its maximum value (i.e. the whole list has been read). Finally, the updated message flow node information is packed into the TLP payload data format and sent down to the FPGA over the PCI_E bus.
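For illustration, the self-contained C sketch below models steps (5.1) to (5.3) on a small in-memory table: a hit record clears the counters, a new record replaces the stored entry when the stored hit count exceeds a threshold, and an aging pass deletes timed-out entries and emits update records. The entry layout, the threshold and timeout constants, the decision to also fill empty slots, and the incrementing of the aging counter during the walk are all assumptions of this sketch rather than statements from the patent, which instead walks an aging-time list.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TABLE_ENTRIES          1024u  /* small table, for illustration only   */
#define HIT_REPLACE_THRESHOLD  8u     /* "preset value" of step (5.2), assumed */
#define AGING_TIMEOUT          16u    /* stand-in for the aging-time list value */

/* Flow node entry as kept in CPU memory (layout assumed, cf. Fig. 5). */
typedef struct {
    uint8_t  valid;
    uint32_t hit_count;    /* hit count entry   */
    uint32_t aging_count;  /* aging count entry */
    /* ... five-tuple and other fields omitted for brevity ... */
} cpu_flow_node_t;

/* Record received from the FPGA in the TLP payload. */
typedef struct {
    bool     is_new;       /* true = newly created flow, false = hit */
    uint32_t mem_addr;     /* index of the entry in CPU memory       */
    cpu_flow_node_t node;  /* flow node carried by a "new" record    */
} tlp_record_t;

static cpu_flow_node_t table[TABLE_ENTRIES];

/* Steps (5.1)-(5.2): process one record parsed from the TLP payload. */
static void handle_record(const tlp_record_t *rec)
{
    cpu_flow_node_t *e = &table[rec->mem_addr % TABLE_ENTRIES];

    if (!rec->is_new) {
        /* Hit: clear both counters of the stored entry. */
        e->hit_count = 0;
        e->aging_count = 0;
    } else if (e->hit_count > HIT_REPLACE_THRESHOLD || !e->valid) {
        /* New flow: replace the stored entry (replacing an empty slot as
         * well is an added assumption of this sketch). */
        *e = rec->node;
        e->valid = 1;
    }
    /* Otherwise the new record is discarded and nothing changes. */
}

/* Step (5.3), simplified: walk the table, age the entries and emit an
 * update for every entry that exceeds the timeout. */
static void age_table(void (*emit_update)(uint32_t addr))
{
    for (uint32_t addr = 0; addr < TABLE_ENTRIES; addr++) {
        cpu_flow_node_t *e = &table[addr];
        if (!e->valid)
            continue;
        if (++e->aging_count > AGING_TIMEOUT) {
            e->valid = 0;          /* delete the timed-out flow           */
            emit_update(addr);     /* updated node info is sent to FPGA   */
        }
    }
}

static void print_update(uint32_t addr) { printf("update DDR entry %u\n", addr); }

int main(void)
{
    tlp_record_t fresh = { .is_new = true,  .mem_addr = 7, .node = { .valid = 1 } };
    tlp_record_t hit   = { .is_new = false, .mem_addr = 5 };
    handle_record(&fresh);
    handle_record(&hit);
    for (int i = 0; i < 20; i++)
        age_table(print_update);
    return 0;
}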
In the present embodiment, the FPGA parses the message flow node information in step (6) in three sub-steps. First, byte-by-byte framing is performed on the received message flow node information in TLP payload data format until a complete and correct frame header FRAME is found. Then the frame header is removed and a CRC check is calculated over the remaining data of the frame; if the check is correct, parsing continues with the next step, otherwise the frame data is discarded. Finally, the relevant information is extracted from the TLP header and the message flow node information is parsed out.
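These three sub-steps can be pictured with the following C sketch, which hunts byte by byte for a frame marker, verifies a CRC-32 over the frame body and hands the body (TLP header plus updated flow node information) back to the caller; the marker value, frame layout and CRC parameters are all hypothetical, since the patent does not disclose them.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical frame layout used for this sketch:
 *   2-byte frame marker | 2-byte body length | body | 4-byte CRC-32
 * The real FRAME header, TLP header fields and CRC parameters are not
 * disclosed in the patent. */
#define FRAME_MARKER 0x55AA

static uint32_t crc32_ieee(const uint8_t *d, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < n; i++) {
        crc ^= d[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Step (6): hunt byte by byte for the frame marker, strip it, verify the
 * CRC over the remaining bytes and expose the body to the caller; a frame
 * with a bad CRC is discarded.  Returns the body length, or 0 on failure. */
static size_t parse_update_frame(const uint8_t *buf, size_t len,
                                 const uint8_t **body)
{
    for (size_t i = 0; i + 8 <= len; i++) {
        /* 1. byte-by-byte framing: look for the frame header */
        if ((((uint16_t)buf[i] << 8) | buf[i + 1]) != FRAME_MARKER)
            continue;
        uint16_t body_len = (uint16_t)(((uint16_t)buf[i + 2] << 8) | buf[i + 3]);
        if (i + 4 + (size_t)body_len + 4 > len)
            return 0;                         /* incomplete frame */

        /* 2. CRC check of the data left after the frame header */
        uint32_t rx_crc;
        memcpy(&rx_crc, buf + i + 4 + body_len, 4);
        if (crc32_ieee(buf + i + 4, body_len) != rx_crc)
            return 0;                         /* discard the corrupted frame */

        /* 3. the body (TLP header + updated flow node info) is now ready
         *    to be parsed and written back into the DDR address it names */
        *body = buf + i + 4;
        return body_len;
    }
    return 0;
}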
Fig. 7 shows a high-speed backbone traffic collection and splitting device designed according to the present invention. The SFP (Small Form-factor Pluggable) high-speed network interfaces on the daughter card perform the conversion between optical and electrical signals. The updated message flow information is stored in DDR3. The integrated FPGA and CPU filter, split, aggregate and switch the packets, which are output after an Ethernet header is encapsulated, improving the hardware integration and stability of the board and chassis.
The above are merely preferred embodiments of the present invention and do not limit the present invention in any form. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention or revise it into equivalent embodiments. Any simple amendment, equivalent change or modification made to the above embodiments according to the technical essence of the present invention shall fall within the scope of protection of the technical solution of the present invention.

Claims (2)

1. An integrated message flow lookup and update method, characterized by comprising the following steps:
(1) The FPGA ports receive input packets; each packet is parsed, and its five-tuple information and payload data are extracted;
(2) The five-tuple information is hashed to obtain a DDR address for the lookup, and the message flow node information stored in DDR is accessed and read according to that address; DDR denotes Double Data Rate synchronous dynamic random-access memory;
(3) The FPGA compares whether the message flow node information read out is consistent with the five-tuple information of the input packet from step (1); if consistent, this represents a hit and hit message flow node information is generated; if inconsistent, this represents a miss, and deep packet inspection matching is performed on the payload data of the input packet; if a rule matches, new message flow node information is generated;
(4) The FPGA uploads the hit or new message flow node information generated in step (3) to the CPU through the PCI_E bus interface in the TLP payload data format specified by the PCI_E protocol; TLP denotes Transaction Layer Packet;
(5) The CPU parses the received TLP payload data to obtain the hit or new message flow node information; a CPU memory address value is obtained from the hit or new message flow node information; system management is then performed on the message flow node information at the corresponding CPU memory address to generate updated message flow node information, which is packed into the TLP payload data format and sent down to the FPGA through the PCI_E bus interface;
(6) After the FPGA parses the updated message flow node information issued over the PCI_E bus interface, it writes the updated message flow node information into the DDR address obtained in step (2), completing the whole message flow node information lookup and update process.
2. The integrated message flow lookup and update method of claim 1, characterized in that the system management process in step (5) includes updating the hit count entry and the aging count entry of the message flow node information at the CPU memory address; the CPU judges one by one whether the value of the aging count entry of each message flow node information exceeds a preset timeout count value and deletes timed-out message flow node information from the table; the detailed process is: the TLP payload data is parsed to judge whether it is hit message flow node information or new message flow node information; if it is hit message flow node information, the corresponding CPU memory location is accessed using the address value carried in the message, the aging count entry and the hit count entry of the message flow node information stored at that address are cleared, and the update is completed; if it is new message flow node information, it is judged whether the hit count entry in the message flow node information stored at the corresponding address in CPU memory exceeds a preset value; if it exceeds the preset value, the new message flow node information replaces the message flow node information stored at that memory address; otherwise the new message flow node information is discarded and no change is made; the aging-time list value configured in the CPU is read, and the aging count entry of the message flow node information stored at the same CPU memory address is compared with the aging-time list value one by one; if it exceeds the aging-time list value, the message flow node information is deleted and updated message flow node information is generated.
CN201510815890.6A 2015-11-23 2015-11-23 Integrated message flow lookup and update method Active CN105337991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510815890.6A CN105337991B (en) 2015-11-23 Integrated message flow lookup and update method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510815890.6A CN105337991B (en) 2015-11-23 Integrated message flow lookup and update method

Publications (2)

Publication Number Publication Date
CN105337991A CN105337991A (en) 2016-02-17
CN105337991B true CN105337991B (en) 2018-05-18

Family

ID=55288274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510815890.6A Active CN105337991B (en) 2015-11-23 Integrated message flow lookup and update method

Country Status (1)

Country Link
CN (1) CN105337991B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027427B (en) * 2016-05-27 2019-06-04 深圳市风云实业有限公司 Method and device is averagely shunted based on the FPGA HASH realized
CN107707479B (en) * 2017-10-31 2021-08-31 北京锐安科技有限公司 Five-tuple rule searching method and device
CN110019232B (en) * 2017-12-27 2021-04-27 中移(杭州)信息技术有限公司 Message storage method and device
CN110995546B (en) * 2019-12-23 2022-02-25 锐捷网络股份有限公司 Message sampling method and device
CN111597142B (en) * 2020-05-15 2024-04-12 北京光润通科技发展有限公司 FPGA-based network security acceleration card and acceleration method
CN111770023B (en) * 2020-06-28 2022-04-15 湖南有马信息技术有限公司 Message duplicate removal method and device based on FPGA and FPGA chip
CN112737914B (en) * 2020-12-28 2022-08-05 北京天融信网络安全技术有限公司 Message processing method and device, network equipment and readable storage medium
CN113709110B (en) * 2021-07-27 2023-07-21 深圳市风云实业有限公司 Intrusion detection system and method combining soft and hard
CN114244752A (en) * 2021-12-16 2022-03-25 锐捷网络股份有限公司 Flow statistical method, device and equipment
CN114125077B (en) * 2022-01-26 2022-05-03 之江实验室 Method and device for realizing multi-executive TCP session normalization
CN115334013B (en) * 2022-08-12 2024-01-23 北京天融信网络安全技术有限公司 Flow statistics method, network card and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103117948A (en) * 2013-02-22 2013-05-22 桂林电子科技大学 Hierarchical parallel high-speed network transmission control protocol (TCP) flow recombination method based on field programmable gate array (FPGA)
CN103312618A (en) * 2013-05-30 2013-09-18 中国人民解放军国防科学技术大学 Flow management method based on combination of software and hardware
CN104753931A (en) * 2015-03-18 2015-07-01 中国人民解放军信息工程大学 DPI (deep packet inspection) method based on regular expression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9042252B2 (en) * 2012-11-13 2015-05-26 Netronome Systems, Incorporated Inter-packet interval prediction learning algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103117948A (en) * 2013-02-22 2013-05-22 桂林电子科技大学 Hierarchical parallel high-speed network transmission control protocol (TCP) flow recombination method based on field programmable gate array (FPGA)
CN103312618A (en) * 2013-05-30 2013-09-18 中国人民解放军国防科学技术大学 Flow management method based on combination of software and hardware
CN104753931A (en) * 2015-03-18 2015-07-01 中国人民解放军信息工程大学 DPI (deep packet inspection) method based on regular expression

Also Published As

Publication number Publication date
CN105337991A (en) 2016-02-17

Similar Documents

Publication Publication Date Title
CN105337991B (en) Integrated message flow lookup and update method
CN104753931B (en) A deep packet inspection (DPI) method based on regular expressions
US9081742B2 (en) Network communications processor architecture
US9762544B2 (en) Reverse NFA generation and processing
CN101771627B (en) Device and method for real-time deep packet analysis and control at Internet nodes
CN104579823B (en) Network traffic anomaly detection system and method based on large traffic volumes
US9727508B2 (en) Address learning and aging for network bridging in a network processor
US8180803B2 (en) Deterministic finite automata (DFA) graph compression
TWI477106B (en) System and method for line-rate application recognition integrated in a switch asic
US7627570B2 (en) Highly scalable subscription matching for a content routing network
US9356844B2 (en) Efficient application recognition in network traffic
CN103812860B (en) A high-speed network policy matching method based on FPGA
CN102739473A (en) Network detecting method using intelligent network card
US8885480B2 (en) Packet priority in a network processor
CN105359472B (en) A data processing method and device for OpenFlow networks
CN106257434A (en) A data transmission method and device based on an enhanced peripheral component interconnect (PCI_E) bus
CN102420750B (en) Single-packet regular expression matching unit and method
CN103685224A (en) A network intrusion detection method
CN102075430A (en) Compression and packet matching method for deep packet inspection deterministic finite automaton (DFA) state transition tables
CN107332886A (en) Data synchronization method, device, system, electronic equipment and readable storage medium
CN103685222A (en) A data matching detection method based on a deterministic finite state automaton
CN206962832U (en) Network data auditing system based on FPGA high-performance capture cards
EP2978173A1 (en) Packet controlling method and device
CN103685221A (en) A network intrusion detection method
Wellem et al. A hardware-accelerated infrastructure for flexible sketch-based network traffic monitoring

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant