CN114640641B - Flow-aware OpenFlow flow table elastic energy-saving searching method - Google Patents


Info

Publication number
CN114640641B
CN114640641B CN202210193963.2A CN202210193963A
Authority
CN
China
Prior art keywords
cache
flow
flow table
stream
tcam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210193963.2A
Other languages
Chinese (zh)
Other versions
CN114640641A (en)
Inventor
熊兵
曾振国
周浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Priority to CN202210193963.2A priority Critical patent/CN114640641B/en
Publication of CN114640641A publication Critical patent/CN114640641A/en
Application granted granted Critical
Publication of CN114640641B publication Critical patent/CN114640641B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9005Buffering arrangements using dynamic buffer space allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3236Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions
    • H04L9/3239Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions involving non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3247Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials involving digital signatures

Abstract

The invention discloses a flow-aware elastic energy-saving lookup method for OpenFlow flow tables, which comprises the following: the elastic energy-saving cache consists of several logical segments; a single hash computation over the flow identifier of each arriving network packet, performed with a sub-hash algorithm, yields the candidate position in every segment, which markedly reduces hash computation cost, and the cache lookup completes quickly through parallel matching. Meanwhile, exploiting the locality of network traffic, the cache stores the active accurate flows in the network in a low-power storage medium (SRAM), so that most packets hit the cache and are forwarded directly, bypassing the OpenFlow flow table lookup process and markedly reducing TCAM energy consumption. To cope with fluctuation of network traffic, the cache senses the dynamic change in the number of active accurate flows in real time and expands or contracts adaptively, maintaining the cache hit rate while keeping a high cache load factor, thereby realizing elastic energy-saving lookup of the OpenFlow flow table.

Description

Flow-aware OpenFlow flow table elastic energy-saving searching method
Technical Field
The invention relates to the field of OpenFlow flow tables, in particular to a flow-aware elastic energy-saving searching method for an OpenFlow flow table.
Background
Software-Defined Networking (SDN) is a novel network architecture with separated data and control planes and software programmability; it can effectively solve problems of the traditional network such as difficult traffic control, complicated equipment, and inflexible regulation. SDN consolidates the control functions of traditional network devices into an SDN controller; the controller formulates flow rules according to a global network view, and the rules are issued through a southbound interface protocol and installed in the data-plane switches to guide packet forwarding. Currently, the mainstream southbound interface protocol between the SDN controller and the switches is OpenFlow, which extracts the matching fields of the transport protocols of the various network layers and combines them into OpenFlow flow entries, which together form the OpenFlow flow table. In OpenFlow, by means of wildcard masks, a single OpenFlow entry can handle multiple network flows, with the final matching result decided by priority. To support fast lookup of such wildcard flow entries, OpenFlow switches usually store the flow table in a TCAM.
However, the following problem arises: in software-defined networks, because OpenFlow introduces wildcards into the flow table matching fields, OpenFlow switches typically use ternary content-addressable memory (TCAM) to store flow tables so as to enable fast packet forwarding. But every TCAM lookup accesses the entire data set in parallel, which is energy-intensive. At present, the main approach to effectively reducing the energy consumption of OpenFlow flow table lookup is to build an energy-saving cache that stores the recently arrived accurate flows in the network, in the expectation that most network packets hit the cache and complete forwarding without searching the whole flow table. However, the existing energy-saving caches have fixed capacity and therefore adapt poorly to jitter and fluctuation of network traffic. Specifically, when network traffic increases rapidly, the cache can hold only a small fraction of the flows and the cache hit rate drops markedly; when network traffic suddenly decreases, a large number of empty entries appear in the cache and the cache utilization drops markedly. In view of this, this patent designs a dynamically scalable elastic energy-saving cache based on network traffic characteristics, which maintains a high hit rate and utilization even under traffic fluctuation. On this basis, a flow-aware elastic energy-saving lookup method for the OpenFlow flow table is proposed to adapt to the dynamic variability of network traffic.
Cited reference: CN111131084A discloses a QoS-aware OpenFlow flow table hierarchical storage architecture and its application. That architecture stores high-priority flows in TCAM to guarantee their packet lookup performance, and caches low-priority and non-priority active accurate flows in a Cuckoo hash structure, so that packets of those flows hit the cache directly and quickly locate the corresponding SRAM flow entries, reducing flow table lookup energy consumption and improving OpenFlow switch performance. In the scheme of that reference, however, the cache capacity cannot expand adaptively with the network traffic, so in network scenarios with large traffic fluctuation, such as data centers, the cache hit rate drops markedly and a stable, efficient energy-saving flow table lookup process is difficult to achieve.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a scalable elastic energy-saving cache that adaptively adjusts the cache capacity according to the network traffic, so as to hold all qualifying flow entries and guarantee the cache hit rate. When many cache entries are empty, the cache contracts, raising the cache space utilization and thus the overall lookup performance of the cache. Accordingly, a flow-aware elastic energy-saving lookup method for the OpenFlow flow table is provided.
In order to solve the technical problems, the invention adopts the following technical scheme:
the invention provides an elastic, energy-saving, and efficient lookup architecture for large-scale OpenFlow flow tables, which comprises:
the elastic energy-saving cache, used to reduce the energy consumption of TCAM flow table lookup; it stores active accurate flows in a low-power storage medium (SRAM) so that most network flows can be forwarded directly through the cache. When the number of active accurate flows changes, the cache capacity is adjusted adaptively to accommodate most of the active accurate flows in the network, ensuring the cache hit rate.
The TCAM flow table and the DRAM flow table, used to split each flow entry: the matching fields are stored in the TCAM to guarantee flow table lookup performance, while the content fields are stored in a Dynamic Random Access Memory (DRAM), which has low lookup overhead and large storage capacity, to relieve TCAM storage pressure.
The elastic energy-saving cache consists of several logical segments, and the number of segments increases or decreases adaptively with the change of active accurate flows. When the number of active accurate flows increases markedly, the cache dynamically adds one segment to store the newly appearing active accurate flows in the network, ensuring the cache hit rate. When the number of active accurate flows decreases markedly, the cache automatically removes one segment, moving the flows stored in it into the remaining segments as far as possible, which raises the cache utilization while preventing a noticeable drop in the cache hit rate. As the number of active accurate flows fluctuates, the cache expands and contracts adaptively to obtain a high cache hit rate and high cache utilization at the same time, thereby guaranteeing the overall performance of the cache.
Each segment of the elastic energy-saving cache contains 2^m cache entries. Each cache entry holds a flow signature, a TCAM index value, and a timestamp. The flow signature is computed by feeding the flow key into a signature function and is typically 2 or 4 bytes. The TCAM index value points to the TCAM flow entry corresponding to the currently active accurate flow, through which the corresponding action set is further read. The timestamp records the arrival time of the last packet of the active accurate flow and is used for cache replacement and timeout scanning operations.
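As an illustrative sketch (all names, field widths, and values below are hypothetical, not taken from the patent), a segment of 2^m entries with the three fields described above could be modeled as:

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class CacheEntry:
    # flow signature derived from the flow key, typically 2 or 4 bytes
    signature: int
    # index of the matching TCAM flow entry; also locates the DRAM entry
    tcam_index: int
    # arrival time of the most recent packet of the flow, used for
    # replacement and timeout scanning
    timestamp: float

# a segment holds 2**m entries; m = 3 here purely for illustration
m = 3
segment: List[Optional[CacheEntry]] = [None] * (2 ** m)
segment[5] = CacheEntry(signature=0xBEEF, tcam_index=42, timestamp=0.0)
```

The `None` slots stand for empty cache entries, which the expansion/contraction logic later counts when deciding whether to resize.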
The TCAM flow table stores the matching field and mask of each flow entry; the DRAM stores the remaining content fields of the flow entry, such as the action set and the timestamp, and the corresponding DRAM flow entry can be located directly through the TCAM index value.
The invention also provides a cache indexing method based on the framework, which comprises the following steps:
To match network packets quickly, the elastic energy-saving cache randomly selects mutually non-repeating bits from the flow signature and arranges them in random order, yielding several independent sub-hash functions, from which the candidate position of a packet in each segment is obtained quickly. When a new segment is added to the cache, a corresponding sub-hash function is generated at the same time.
The sub-hash functions are stored in a sub-hash matrix. During a cached-flow index computation, the sub-hash functions corresponding to all segments are first obtained, the corresponding bits are read from the flow signature in sequence, and the index value of the flow in each segment is then obtained quickly. As cache segments are dynamically added and removed, the number of sub-hash functions in use changes as well. To avoid frequently generating and destroying sub-hash functions, some sub-hash functions can be kept in reserve in the sub-hash matrix. Let the current number of segments be n: the corresponding n sub-hash functions are stored in matrix A for the current cached-flow index computation, while spare sub-hash functions are stored in matrix B. When the cache adds a segment, the first sub-hash function in matrix B is moved into matrix A; when the cache removes a segment, the last sub-hash function of matrix A is moved into matrix B.
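The A/B rotation above can be sketched in a few lines (the row contents are toy values, not real sub-hash functions): active rows live in `matrix_a`, pre-generated spares wait in `matrix_b`, so resizing the cache never builds a function from scratch.

```python
# Active sub-hash functions (one per live segment) live in matrix_a;
# spare, pre-generated ones wait in matrix_b.
matrix_a = [[3, 7, 1], [0, 5, 2]]   # n = 2 current segments (toy rows)
matrix_b = [[4, 6, 8]]              # spare sub-hash functions

def grow():
    """Adding a segment: move the first spare function into matrix A."""
    matrix_a.append(matrix_b.pop(0))

def shrink():
    """Removing a segment: return the last active function to matrix B."""
    matrix_b.insert(0, matrix_a.pop())
```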
The cached-flow index method is as follows: the bits recorded in the i-th (1 ≤ i ≤ n) row of the sub-hash matrix, in their recorded order, constitute the i-th sub-hash function, and the j-th (1 ≤ j ≤ m) column records which bit of the flow signature is read at step j of the sub-hash computation for row i. To compute the index, the bit values b_(i,j) are read from the flow signature in that order, the terms 2^(j-1) · b_(i,j) are accumulated, and the sum is taken modulo the segment length l.
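A minimal sketch of this index computation (function names, bit widths, and the seed are hypothetical): one matrix row is m distinct, randomly ordered bit positions of the flow signature, and the index is the sum of 2^(j-1) · b_(i,j) modulo the segment length.

```python
import random

def make_sub_hash(sig_bits, m, rng):
    # one row of the sub-hash matrix: m distinct bit positions of the
    # flow signature, in random order
    return rng.sample(range(sig_bits), m)

def sub_hash_index(signature, row, seg_len):
    # read bit b_(i,j) at each recorded position, accumulate
    # 2^(j-1) * b_(i,j) (with j counted from 1), then take the sum mod l
    acc = 0
    for j, pos in enumerate(row):
        acc += (1 << j) * ((signature >> pos) & 1)
    return acc % seg_len

rng = random.Random(7)                     # fixed seed, illustration only
row = make_sub_hash(sig_bits=32, m=8, rng=rng)
idx = sub_hash_index(0x1234ABCD, row, seg_len=256)
```

Because a row only selects and reorders signature bits, computing one signature hash suffices to derive the candidate position in every segment, which is the cost reduction the method claims.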
The invention also provides a flow table searching method based on the framework, which comprises the following steps:
the elastic energy-saving cache first computes the candidate position of the flow in each segment according to the sub-hash functions, completes the cache lookup quickly through parallel matching, reads the TCAM index value in the matched cache entry, and locates the corresponding DRAM flow entry, thereby bypassing the TCAM flow table lookup and directly executing the action set in the DRAM flow entry;
if the elastic energy-saving cache lookup fails, the packet must be further matched against the TCAM flow table; if that matching succeeds, the corresponding DRAM flow entry is located and the packet is forwarded. If the TCAM flow table lookup also fails, the SDN controller is requested to issue a corresponding flow rule.
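The three-stage cascade can be sketched as follows (a toy model, not the patent's implementation: plain dicts stand in for the SRAM cache, TCAM, and DRAM, and all keys and values are made up):

```python
def lookup(flow_key, signature, cache, tcam, dram, ask_controller):
    """Three-stage lookup: elastic cache -> TCAM flow table -> controller.
    cache maps signature -> TCAM index; tcam maps flow key -> TCAM index;
    dram maps TCAM index -> action set."""
    tcam_index = cache.get(signature)
    if tcam_index is None:            # cache miss: search the TCAM
        tcam_index = tcam.get(flow_key)
    if tcam_index is None:            # table miss: request a new rule
        return ask_controller(flow_key)
    return dram[tcam_index]           # execute the action set from DRAM

dram = {42: "forward:port3"}
cache = {0xBEEF: 42}
tcam = {("10.0.0.1", "10.0.0.2", 80): 42}
hit = lookup(("10.0.0.1", "10.0.0.2", 80), 0xBEEF, cache, tcam, dram,
             lambda k: "packet-in")
```

A cache hit never touches the TCAM at all, which is where the energy saving comes from; only misses pay the parallel TCAM access cost.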
Further, the efficient searching method comprises the following operations:
1. OpenFlow flow table packet forwarding;
for each arriving packet, the OpenFlow flow table is searched to find the corresponding flow entry, and the corresponding action set is then executed; an OpenFlow flow entry is represented by a matching field and an action set;
2. OpenFlow flow table entry insertion or deletion;
OpenFlow flow table entry insertion: when the OpenFlow switch receives a flow-mod message with an ADD command issued by the SDN controller, the carried flow rule is inserted into the TCAM flow table and the DRAM flow table;
OpenFlow flow table entry deletion: when the OpenFlow switch receives a flow-mod message with a DELETE command issued by the SDN controller, the corresponding TCAM flow entry and DRAM flow entry are deleted, and if the flow rule is cached, the corresponding cache entry is deleted as well;
3. timeout scanning of OpenFlow flow table items;
whether a flow entry has expired is judged from the difference between the current timestamp and the timestamp stored in the DRAM; if the difference is greater than the timeout interval, the flow entry has expired and is deleted;
4. elastic energy-saving cache searching;
the flow signature is computed from the flow key; the sub-hash matrix is then read, the candidate position of the flow in each segment is computed from the flow signature and each sub-hash function, and the corresponding cache entries in each segment are matched in parallel to complete the cache lookup quickly;
5. elastic energy-saving cache expansion and contraction;
elastic energy-saving cache expansion: generate a new sub-hash function (or move the first sub-hash function in matrix B into matrix A) and add a corresponding cache segment;
elastic energy-saving cache contraction: delete the last segment and move the active accurate flows stored in it to the front-end segments;
6. elastic energy-saving cache insertion or deletion;
elastic energy-saving cache insertion: obtain all candidate positions through the sub-hash functions and insert the flow signature into an empty cache entry of the front-most segment; if the cache load factor is high at insertion time, trigger a cache expansion operation;
elastic energy-saving cache deletion: locate the candidate storage positions of the flow in the cache according to the flow signature and the sub-hash functions, then locate and remove the corresponding cache entry; if the cache load factor is low at deletion time, trigger a cache contraction operation;
7. elastic energy-saving cache update and timeout scanning;
elastic energy-saving cache update: first search the cache; if the cache lookup fails, continue searching the TCAM; if the flow directly hits the cache, update the timestamp of the matching entry; if the flow misses the cache but hits the TCAM, insert it into the cache or update its active state;
elastic energy-saving cache timeout scanning: whether an entry has timed out is judged from the difference between the current system timestamp and the timestamp in the flow entry; if the difference is greater than the timeout interval, the entry is judged to have timed out and is cleared from the cache;
Further, the OpenFlow flow table packet forwarding specifically comprises: first, the protocol headers of each layer are parsed and the key fields (such as source/destination IP address, source/destination port, and protocol type) are extracted to generate the flow key, from which the flow signature is computed for the cache lookup. If the cache lookup succeeds, the corresponding DRAM entry is located directly through the TCAM index value and its action set is executed to complete energy-saving forwarding of the packet; the timestamp in the matching cache entry of the elastic energy-saving cache and the counter and timestamp in the DRAM flow entry are then updated. If the cache lookup fails, the TCAM flow table is further searched with the flow key. If that lookup succeeds, the corresponding action set in the DRAM flow table is obtained through the TCAM index value for packet forwarding; after the packet is forwarded successfully, if the flow has become active, it is stored into the elastic energy-saving cache and the related information of the corresponding DRAM entry is updated. If the TCAM flow table lookup also fails, the packet belongs to a new flow, and the OpenFlow switch packages the header information of the packet into a Packet-in message and submits it to the SDN controller to request a new flow rule.
Further, the OpenFlow table insertion specifically includes: flow rules are first extracted from the message. If the TCAM capacity is enough, the matching field is stored in the TCAM flow table, and the content field is stored in the DRAM flow table. If the TCAM storage capacity is full, firstly eliminating the last non-accessed flow list item in the TCAM, and then installing the new rule into the flow list.
Further, the OpenFlow flow table entry deletion specifically further includes: extracting flow rule information, and searching the TCAM flow table according to the flow rule matching domain. If the TCAM flow table is successfully searched, deleting the TCAM flow table item. If the stream is an active stream, the corresponding cache entry is also cached and deleted. And finally, positioning and deleting the corresponding DAM flow table entry according to the TCAM index value. If the TCAM flow table cannot match the flow rule, an error message is sent to the controller, and a result of failure in deleting the table entry is reported.
Further, the OpenFlow flow entry timeout scanning specifically further includes: traversing the TCAM flow table, judging whether the flow table is out of date according to the difference between the current time stamp and the time stamp stored in the DRAM flow table for each flow table, if the difference is larger than the timeout interval (divided into hard time and idle time), indicating that the flow table is out of date, and deleting the flow table. The OpenFlow flow table timeout scanning flow is as follows: traversing the TCAM list and reading the time stamp in the corresponding DRAM list item, and deleting the TCAM list item if the overtime list item is detected. If the flow corresponding to the flow table item is in an active state, locating and deleting the node in the corresponding cache. Finally, deleting the corresponding DRAM flow table entry to finish the elimination of the overtime flow table entry;
Further, the elastic energy-saving cache lookup specifically comprises: the flow signature is computed from the flow key; the sub-hash matrix is then read, the candidate position of the flow in each segment is computed from the flow signature and the sub-hash functions, and the corresponding cache entries are matched in parallel to complete the cache lookup quickly. If the cache lookup succeeds, the corresponding DRAM flow entry is located directly through the TCAM index value stored in the matched cache entry, and the action set in that DRAM flow entry is executed to complete packet forwarding; after the packet is forwarded successfully, the cache timestamp and the counter and timestamp in the DRAM entry are updated. If the cache lookup fails, a cache-lookup-failure result is returned;
Further, the elastic energy-saving cache insertion and expansion specifically comprises: the flow signature is computed from the flow key and all candidate positions are obtained through the sub-hash functions. If no candidate position is empty, the least recently accessed cache entry among them is evicted and the flow signature, TCAM index value, and timestamp are stored in its place. If an empty entry exists among the candidate positions, the flow signature is inserted into the empty cache entry of the front-most segment. After a successful insertion, whether cache expansion is needed is judged from the current cache load factor and cache hit rate. If the expansion condition is triggered, a new sub-hash function is generated (or the first sub-hash function in matrix B is moved into matrix A) and a corresponding cache segment is added to raise the packet forwarding capacity of the cache;
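A sketch of insertion plus the expansion trigger (the 0.8 load-factor threshold, the `index_of` stand-in, and the dict-based entries are all assumptions for illustration):

```python
EXPAND_LOAD = 0.8   # assumed load-factor threshold, a tuning knob

def index_of(sig, row, seg_len):
    return sig % seg_len    # stand-in for the real sub-hash index

def cache_insert(segments, sub_hash_rows, entry, seg_len):
    """Insert into the first empty candidate slot (front-most segment
    first); if every candidate is occupied, replace the least recently
    accessed one; expand by one segment when the load factor is high."""
    cands = [(s, index_of(entry["sig"], row, seg_len))
             for s, row in enumerate(sub_hash_rows)]
    for s, pos in cands:
        if segments[s][pos] is None:
            segments[s][pos] = entry
            break
    else:   # all candidate positions occupied: evict the stalest
        s, pos = min(cands, key=lambda c: segments[c[0]][c[1]]["ts"])
        segments[s][pos] = entry
    used = sum(e is not None for seg in segments for e in seg)
    if used / (len(segments) * seg_len) > EXPAND_LOAD:
        segments.append([None] * seg_len)   # new segment and its
        sub_hash_rows.append([0])           # (toy) new sub-hash row

segments, rows = [[None, None]], [[0]]
cache_insert(segments, rows, {"sig": 1, "ts": 0.0}, seg_len=2)
cache_insert(segments, rows, {"sig": 2, "ts": 1.0}, seg_len=2)
```

The second insertion fills the only segment, pushing the load factor past the threshold, so a new segment and sub-hash row are added.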
Further, the elastic energy-saving cache deletion and contraction specifically comprises: first, the candidate storage positions of the flow in the cache are located according to the flow signature and the sub-hash functions, and the corresponding cache entry is located and removed. After the cache entry is removed successfully, the cache load factor decreases. If the cache contraction condition is then triggered, the last cache segment is removed and the corresponding sub-hash function is deleted (or the last sub-hash function in matrix A is moved into matrix B). Before that, the active accurate flows stored in that segment must be moved to the front-end segments to prevent a large drop in the cache hit rate. If the cache entry to be moved forward is itself the least recently accessed candidate entry, it need not be moved; otherwise, the position of the least recently accessed candidate cache entry in the front-end segments is located and the entry to be moved is placed there. The advancement of tail-segment cache entries can thus be regarded as multiple cache insertion operations;
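The tail-segment advance can be sketched as follows (a simplified sketch: entries that find no free candidate slot are dropped rather than replacing the stalest candidate, and all names are hypothetical):

```python
def cache_shrink(segments, sub_hash_rows, seg_len, index_of):
    """Remove the tail segment: first try to move each of its live
    entries into an empty candidate slot of a front-end segment, then
    drop the segment together with its sub-hash function."""
    tail = segments.pop()
    sub_hash_rows.pop()
    for entry in tail:
        if entry is None:
            continue
        for s, row in enumerate(sub_hash_rows):
            pos = index_of(entry["sig"], row, seg_len)
            if segments[s][pos] is None:
                segments[s][pos] = entry   # advanced to a front segment
                break
        # if no candidate slot is free, the entry is simply dropped,
        # trading a small hit-rate loss for a higher load factor

segments = [[None, None], [{"sig": 3, "ts": 0.0}, None]]
rows = [[0], [1]]
cache_shrink(segments, rows, seg_len=2,
             index_of=lambda sig, row, l: sig % l)
```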
Further, the elastic energy-saving cache timeout scanning specifically comprises: whether an entry has timed out is judged from the difference between the current system timestamp and the timestamp in the flow entry; if the difference is greater than the timeout interval, the entry is judged to have timed out and is cleared from the cache. The cache scanning procedure is as follows: each segment is traversed in turn, and any timed-out cache entry detected is removed immediately. After all cache segments have been traversed, if the cache contraction condition is triggered, the active accurate flows stored in the last segment are moved to the front-end segments, the cache space of the tail segment is reclaimed, and the corresponding sub-hash function is deleted (or moved into matrix B);
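The per-segment sweep can be sketched as (timestamps and the timeout value are arbitrary illustrative numbers):

```python
def cache_timeout_scan(segments, now, timeout):
    """Traverse every segment and clear entries whose last-packet
    timestamp is older than the timeout interval."""
    removed = 0
    for seg in segments:
        for pos, entry in enumerate(seg):
            if entry is not None and now - entry["ts"] > timeout:
                seg[pos] = None
                removed += 1
    return removed

segments = [[{"ts": 0.0}, {"ts": 95.0}], [None, {"ts": 10.0}]]
expired = cache_timeout_scan(segments, now=100.0, timeout=30.0)
```

In the full method this sweep would be followed by the load-factor check that may trigger the contraction step described above.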
the invention has the beneficial effects that:
1. By dynamically expanding and contracting the cache capacity, the elastic energy-saving cache always accommodates the active accurate flows in the network, maintaining the cache hit rate while taking the cache utilization into account, which guarantees the energy-saving effect of the cache; meanwhile, the flow index value of each segment is obtained through a single hash operation and the cache is then searched quickly through parallel matching, which guarantees the cache lookup performance.
2. The invention uses the elastic energy-saving cache to store active accurate flows, so that most network packets directly hit the cache and are forwarded with low energy consumption, effectively reducing the energy consumption of OpenFlow flow table lookup; meanwhile, by sensing changes of the active accurate flows in the network in real time, the cache dynamically adjusts its capacity and always holds all currently active accurate flows, which guarantees the cache hit rate, yields a stable energy-saving effect, and realizes elastic energy-saving lookup of the OpenFlow flow table.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an OpenFlow flow table elastic energy-saving lookup architecture of the method of the present invention.
Fig. 2 is a diagram of a resilient energy-saving cache architecture in the method of the present invention.
Fig. 3 is a schematic diagram of a sub-hash matrix in the method of the present invention.
Fig. 4 is a flowchart of packet forwarding in the OpenFlow table in the method of the present invention.
Fig. 5 is a flowchart of OpenFlow table insertion in the method of the present invention.
Fig. 6 is a flowchart of OpenFlow table deletion in the method of the present invention.
Fig. 7 is a flowchart of OpenFlow table timeout scanning in the method of the present invention.
FIG. 8 is a flow chart of a resilient energy-efficient cache lookup in the method of the present invention.
FIG. 9 is a flow chart of the resilient energy-saving cache entry insertion in the method of the present invention.
FIG. 10 is a flow chart of the elastic energy-saving cache entry deletion in the method of the present invention.
FIG. 11 is a flow chart of a flexible power saving cache timeout scan in the method of the present invention.
Detailed Description
To better illustrate the invention, it is further verified through the following specific embodiments. The embodiments are presented here only to describe the invention more directly; they are merely a part of the invention and shall not be construed as limiting it in any way.
As shown in fig. 1, an embodiment of the present invention provides a flow aware OpenFlow table elastic energy-saving lookup architecture, including:
the elastic energy-saving cache is used for caching active accurate streams in a network and realizing high-speed low-power stream table cache searching; the buffer is composed of a plurality of logical segments, the capacity of which increases or decreases adaptively according to the change of the active precision stream. The storage medium of the elastic energy-saving cache adopts SRAM preferentially;
the TCAM flow table stores the matching fields of the flow entries, such as the match field and mask; when the elastic energy-saving cache lookup fails, the TCAM flow table is searched next. The DRAM flow table stores the content fields of the flow entries, such as the action set and timestamp; the DRAM action set can be obtained directly through the TCAM index value.
As shown in fig. 2, the elastic energy-saving buffer is used for reducing the lookup frequency of the TCAM flow table and reducing the lookup energy consumption of the TCAM flow table.
The cache randomly selects several mutually non-repeating bits from the flow signature and arranges them randomly to obtain the sub-hash function corresponding to each segment, from which the candidate position of a packet in each segment is obtained quickly. When a new segment is added to the cache, a new sub-hash function is generated at the same time.
As shown in fig. 3, the sub-hash matrix is used to reduce the hash computation overhead of the plurality of segment indexes.
The segment index is calculated as follows: when computing the index of the i-th segment, read the bit values b_{i,j} of the flow signature in the bit order recorded in the i-th row, compute 2^(j-1) * b_{i,j} for each bit, accumulate the results, and take the sum modulo the segment length.
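By way of illustration only, the segment index computation described above can be sketched in Python. The function name and the representation of a matrix row as a list of bit positions are assumptions made for this sketch, not part of the invention:

```python
def segment_index(flow_signature: int, row: list[int], segment_len: int) -> int:
    """Compute a packet's candidate index in one cache segment.

    `row` is the i-th row of the sub-hash matrix: the positions of the
    flow-signature bits to read, in order. For the j-th selected bit
    b_{i,j} we add 2^(j-1) * b_{i,j}, then take the sum modulo the
    segment length (here j is 0-based, so the weight is 2^j).
    """
    acc = 0
    for j, bit_pos in enumerate(row):
        b = (flow_signature >> bit_pos) & 1   # read bit value b_{i,j}
        acc += (1 << j) * b                   # accumulate 2^j * b_{i,j}
    return acc % segment_len                  # modulo the segment length
```

Because each row selects only a handful of signature bits, this index can be computed for all segments with a few shifts and adds, which is the point of storing the sub-hash functions as a matrix.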
The embodiment also provides a method based on the architecture, which comprises the following steps:
when the OpenFlow switch receives a packet, it first searches the elastic energy-saving cache; if the cache lookup fails, it then searches the TCAM. Once a flow is judged to have become active, it is stored into the elastic energy-saving cache to speed up the matching of its subsequent packets.
1. Forwarding OpenFlow flow table item packets;
as shown in fig. 4, by searching the OpenFlow flow table for each arriving data packet, a corresponding flow table entry is found, and then a corresponding action set is executed;
first, analyzing the protocol header of each layer, extracting key fields (such as source/destination IP address, source/destination port and protocol type) to generate a stream key, and then calculating a stream signature to search the cache. If the cache lookup is successful, the corresponding DRAM table item is directly positioned according to the TCAM index value, and the action set is executed to complete the energy-saving forwarding of the packet. Then, the time stamp in the matching cache entry and the counter and time stamp in the DRAM stream entry in the elastic energy-saving cache are updated. If the cache lookup fails, the TCAM flow table is further searched by using the flow key. If the search is successful, the corresponding action set in the DRAM flow table is obtained according to the TCAM index value so as to carry out packet forwarding.
After the packet is successfully forwarded, if the flow is converted into an active state, the flow is stored into an elastic energy-saving cache, and the related information of the corresponding DRAM table entry is updated. If the lookup of the TCAM flow table fails, it means that the Packet belongs to a new flow, the OpenFlow switch will package the header information of the data Packet into a Packet-in message and submit the Packet-in message to the controller, requesting to issue a new flow rule.
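The three-level lookup path described above (elastic cache, then TCAM flow table, then Packet-in to the controller) can be sketched as follows. All names here (`SimpleTable`, `StubController`, the dictionary fields) are hypothetical stand-ins for the hardware structures, chosen only to illustrate the control flow:

```python
class SimpleTable:
    """Toy exact-match table standing in for the cache / TCAM (assumption)."""
    def __init__(self):
        self.entries = {}              # flow_key -> TCAM index value
    def lookup(self, key):
        return self.entries.get(key)   # None models a lookup miss

class StubController:
    """Records Packet-in requests instead of a real SDN controller."""
    def __init__(self):
        self.packet_in = []
    def request_rule(self, pkt):
        self.packet_in.append(pkt["flow_key"])

def forward_packet(pkt, cache, tcam, dram, controller, now):
    """Elastic cache -> TCAM flow table -> Packet-in, as in the description."""
    key = pkt["flow_key"]
    idx = cache.lookup(key)            # 1. cache hit bypasses the TCAM
    if idx is None:
        idx = tcam.lookup(key)         # 2. cache miss: search the TCAM
    if idx is None:
        controller.request_rule(pkt)   # 3. new flow: ask for a rule
        return None
    entry = dram[idx]                  # TCAM index locates the DRAM entry
    entry["counter"] += 1              # update counter and timestamp
    entry["timestamp"] = now
    return entry["actions"]
```

The design point the sketch illustrates is that both the cache and the TCAM return the same kind of handle (a TCAM index), so the DRAM content lookup is identical on either path.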
2. OpenFlow flow table entry insertion or deletion;
As shown in fig. 5, OpenFlow flow entry insertion: when the OpenFlow switch receives a flow-mod message with an ADD command issued by the SDN controller, the flow rule is first extracted from the message. If the TCAM capacity is sufficient, the match field is stored in the TCAM flow table and the content field in the DRAM flow table. If the TCAM storage capacity is full, the flow entry in the TCAM that has gone unaccessed the longest is evicted first, and the new rule is then installed into the flow table.
As shown in fig. 6, OpenFlow flow entry deletion: when the OpenFlow switch receives a flow-mod message with a DELETE command issued by the SDN controller, the flow rule information is first extracted and the TCAM flow table is searched according to the flow rule match field. If the TCAM flow table lookup succeeds, the TCAM flow entry is deleted. If the flow is an active flow, the corresponding cache entry is also deleted. Finally, the corresponding DRAM flow entry is located and deleted according to the TCAM index value. If the TCAM flow table cannot match the flow rule, an error message is sent to the controller reporting that the entry deletion failed.
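A minimal sketch of the insert/delete handling above, with the TCAM and DRAM modeled as dictionaries. The index-allocation scheme (`max(dram) + 1`) and all names are illustrative assumptions; a real switch would manage TCAM slots in hardware:

```python
def install_rule(tcam, dram, rule, capacity, now):
    """ADD flow-mod: evict the longest-unaccessed entry if the TCAM is full."""
    if len(tcam) >= capacity:
        # least recently used = smallest timestamp in the DRAM content field
        lru_key = min(tcam, key=lambda k: dram[tcam[k]]["timestamp"])
        dram.pop(tcam.pop(lru_key))
    idx = max(dram, default=-1) + 1        # next free TCAM index (toy scheme)
    tcam[rule["match"]] = idx              # match field -> TCAM
    dram[idx] = {"actions": rule["actions"], "timestamp": now, "counter": 0}
    return idx

def delete_rule(tcam, dram, cache, match):
    """DELETE flow-mod: remove TCAM, cache, and DRAM state for the rule."""
    idx = tcam.pop(match, None)
    if idx is None:
        return False                       # no match: report failure upstream
    cache.pop(match, None)                 # active flows also leave the cache
    dram.pop(idx, None)                    # reclaim the DRAM content field
    return True
```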
3. Timeout scanning of OpenFlow flow table items;
as shown in fig. 7, whether the flow entry is expired is judged according to the difference between the current time stamp and the time stamp stored in the DRAM, and if the difference is greater than the timeout interval, the flow entry is expired and the flow entry is deleted;
the OpenFlow flow table item timeout scanning specifically further includes: traversing the TCAM list and reading the time stamp in the corresponding DRAM list item, and deleting the TCAM list item if the overtime list item is detected. If the flow corresponding to the flow table item is in an active state, locating and deleting the node in the corresponding cache. And finally, deleting the corresponding DRAM flow table entry to finish the elimination of the overtime flow table entry.
4. Elastic energy-saving cache searching;
As shown in fig. 8, when the OpenFlow switch receives a network packet, it first searches the cache; if the cache lookup succeeds, the packet can be forwarded quickly, bypassing the energy-intensive TCAM lookup process.
The elastic energy-saving cache lookup process specifically further comprises the following steps: a flow signature is calculated from the flow key. The sub-hash matrix is then read, the candidate positions of the flow in each segment are calculated from the flow signature and each sub-hash function, and the corresponding cache entries are matched in parallel to finish the cache lookup quickly. If the cache lookup succeeds, the corresponding DRAM flow entry is located directly from the TCAM index value stored in the matched cache entry, and the action set in that entry is executed to complete packet forwarding. After the packet is forwarded successfully, the cache timestamp and the counter and timestamp in the DRAM entry are updated. If the cache lookup fails, a cache lookup failure result is returned.
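A sketch of the segmented cache lookup above. Hardware would probe all segments in parallel; this sequential Python version only illustrates the candidate-position logic. The slot layout (`{"sig": ..., "tcam_idx": ...}`) and equal segment lengths are assumptions for the sketch:

```python
def cache_lookup(segments, sub_hash_rows, signature, segment_len):
    """Probe one candidate slot per segment; return the TCAM index on a hit.

    `sub_hash_rows[i]` lists the signature bit positions read by the
    i-th sub-hash function, in order, as in the sub-hash matrix.
    """
    for seg, row in zip(segments, sub_hash_rows):
        # candidate index: sum of 2^j * b_{i,j}, modulo the segment length
        idx = sum((1 << j) * ((signature >> p) & 1)
                  for j, p in enumerate(row)) % segment_len
        slot = seg[idx]
        if slot is not None and slot["sig"] == signature:
            return slot["tcam_idx"]    # hit: TCAM index locates the DRAM entry
    return None                        # miss: fall back to the TCAM lookup
```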
5. Elastic energy-saving cache insertion and deletion;
As shown in fig. 9, elastic energy-saving cache insertion: when an accurate flow becomes active, it must be stored into the elastic energy-saving cache so that more of its subsequent packets can be forwarded directly through the cache.
The cache insertion process specifically further comprises the following steps: a flow signature is calculated from the flow key, and all candidate positions are obtained through the sub-hash functions. If no candidate position is empty, the cache entry that has gone unaccessed the longest is evicted, and the flow signature, TCAM index value, and timestamp are stored in that position. If an empty entry exists among the candidate positions, the flow signature is inserted into the empty cache entry of the foremost segment. After a successful insertion, whether the cache needs to expand is judged from the current cache load rate and cache hit rate. If the expansion condition is triggered, a new sub-hash function is generated (or the first sub-hash function in matrix B is moved into matrix A), and a corresponding cache segment is added to improve the packet forwarding capability of the cache.
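The insertion-with-expansion logic above can be sketched as follows. The load threshold value, the `(segment, index)` candidate representation, and the `ts` field name are assumptions of this sketch; the description additionally consults the cache hit rate, which is omitted here for brevity:

```python
def cache_insert(segments, candidates, signature, tcam_idx, now,
                 load_threshold=0.8):
    """Insert at the first empty candidate slot, else evict the LRU candidate.

    `candidates` is a list of (segment_no, slot_index) pairs, one per
    segment, as produced by the sub-hash functions. Returns True when the
    post-insert load factor suggests adding a segment (expansion).
    """
    empties = [(s, i) for s, i in candidates if segments[s][i] is None]
    if empties:
        seg, idx = empties[0]          # front-most empty candidate slot
    else:                              # all candidates taken: evict the LRU
        seg, idx = min(candidates, key=lambda c: segments[c[0]][c[1]]["ts"])
    segments[seg][idx] = {"sig": signature, "tcam_idx": tcam_idx, "ts": now}
    used = sum(e is not None for s in segments for e in s)
    total = sum(len(s) for s in segments)
    return used / total >= load_threshold   # caller grows the cache if True
```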
As shown in fig. 10, elastic energy-saving cache deletion: when a cache entry needs to be deleted, the candidate storage positions of the flow in the cache are located from the flow signature and the sub-hash functions, and the corresponding cache entry is located and removed. After the entry is removed successfully, the cache load rate decreases. If the cache shrink condition is then triggered, the last cache segment is removed and the corresponding sub-hash function is deleted (or the last sub-hash function in matrix A is moved into matrix B). Before that, the active accurate flows stored in the segment must be moved to the front-end segments to prevent the cache hit rate from dropping sharply. If a cache entry to be moved forward is itself the candidate entry that has gone unaccessed the longest, it need not be moved forward; otherwise, the position of the candidate cache entry in the front-end segments that has gone unaccessed the longest is located, and the entry to be moved forward is placed in that position. The tail-segment entry move-forward operation can thus be regarded as multiple cache insertion operations.
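A sketch of the shrink step above, treating the move-forward of each tail entry as a cache insertion. The `rehash(sig, seg_no)` helper, which yields an entry's candidate index in a given front-end segment, is a hypothetical stand-in for the sub-hash computation:

```python
def cache_shrink(segments, rehash):
    """Drop the last segment, moving its live entries to front-end segments.

    An entry that finds an empty candidate slot moves there; otherwise it
    replaces the longest-unaccessed candidate, unless the entry itself is
    older than that candidate (then it is simply dropped).
    """
    tail = segments.pop()                      # reclaim the tail segment
    for entry in filter(None, tail):
        placed = False
        for seg_no, seg in enumerate(segments):
            idx = rehash(entry["sig"], seg_no)
            if seg[idx] is None:               # empty candidate: move forward
                seg[idx] = entry
                placed = True
                break
        if not placed:                         # no empty slot: LRU replacement
            seg_no, idx = min(
                ((s, rehash(entry["sig"], s)) for s in range(len(segments))),
                key=lambda c: segments[c[0]][c[1]]["ts"])
            if segments[seg_no][idx]["ts"] < entry["ts"]:
                segments[seg_no][idx] = entry  # keep the more recent flow
    return segments
```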
6. Elastic energy-saving cache overtime scanning;
As shown in fig. 11, since the flows in a network are not always active, packets of a flow may stop arriving for a long time after the flow is stored in the elastic energy-saving cache. A timeout scanning mechanism is therefore needed to clear timed-out nodes from the cache, preventing idle flows from occupying cache space and degrading cache performance.
The elastic energy-saving cache timeout scanning operation specifically further comprises: whether a node has timed out is judged from the difference between the current system timestamp and the timestamp in the flow entry; if the difference is greater than the timeout interval, the node is a timed-out node and is cleaned out of the cache. The cache scanning process is as follows: each segment is traversed in turn, and any timed-out cache entry detected is removed immediately. After all cache segments have been traversed, if the cache shrink condition is triggered, the active accurate flows stored in the last segment are moved to the front-end segments, the cache space of the tail segment is reclaimed, and the corresponding sub-hash function is deleted (or moved into matrix B).
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (10)

1. The flow-aware OpenFlow flow table elastic energy-saving searching system is characterized by comprising:
the elastic energy-saving cache is used for caching the active accurate flows in a network and realizing high-speed, low-power flow table cache lookup; the cache consists of a plurality of logical segments, and its capacity grows or shrinks adaptively as the active accurate flows change; the SRAM stores the active accurate flows so that most network traffic can be forwarded directly through the cache; when the number of active accurate flows changes, the cache capacity is adjusted adaptively to accommodate most of the active accurate flows in the network, guaranteeing the cache hit rate;
the TCAM flow table and the DRAM flow table are used to split the flow entries: the match fields are stored in the TCAM to guarantee flow table lookup performance, while the content fields are stored in a dynamic random access memory (DRAM) with low lookup cost and large storage capacity to relieve the storage pressure on the TCAM;
the elastic energy-saving cache consists of a plurality of logical segments, and the number of segments grows or shrinks adaptively as the active accurate flows change; when the number of active accurate flows increases significantly, the cache dynamically adds a segment to store the newly appearing active accurate flows in the network and guarantee the cache hit rate; when the number of active accurate flows decreases significantly, the cache automatically removes a segment, and the flows in that segment are transferred to the remaining segments as far as possible, improving cache utilization while ensuring the cache hit rate does not slide noticeably; as the number of active accurate flows fluctuates, the cache expands and contracts adaptively to obtain a high cache hit rate and high cache utilization at the same time, guaranteeing the overall performance of the cache;
each cache entry in an elastic energy-saving cache segment comprises a flow signature, a TCAM index value, and a timestamp; the flow signature, 2 or 4 bytes long, is obtained by feeding the flow key into a signature function; the TCAM index value indicates the TCAM flow entry corresponding to the current active accurate flow, from which the corresponding action set can be read; the timestamp records the arrival time of the last packet of the active accurate flow, for cache replacement and timeout scanning operations;
the TCAM flow table stores the match fields and masks of the flow entries, the DRAM stores the remaining content fields of the flow entries, such as the action set and timestamp, and the corresponding DRAM flow entry can be located directly through the TCAM index value.
2. A method for searching for an elastic energy-saving cache based on claim 1, comprising:
to match network packets quickly, the elastic energy-saving cache randomly selects mutually non-repeating bits from the flow signature and arranges them randomly to obtain a plurality of independent sub-hash functions, from which the candidate position of a packet in each segment is obtained quickly; when a new segment is added to the cache, a new sub-hash function is generated at the same time;
the sub-hash functions can be stored in a sub-hash matrix; when indexing a cached flow, the sub-hash functions corresponding to all segments must first be obtained so that the corresponding bits of the flow signature can be read in turn to quickly derive the flow's index value in each segment; as cache segments are dynamically added and deleted, the number of sub-hash functions needed also changes; to avoid frequently generating and destroying sub-hash functions, a portion of them can be kept in the sub-hash matrix; let the current number of segments be n; the corresponding n sub-hash functions are stored in matrix A and used to index the current cached flows, while standby sub-hash functions are stored in matrix B; when a segment is added to the cache, the first sub-hash function in matrix B is moved into matrix A, and when a segment is deleted from the cache, the last sub-hash function of matrix A is moved into matrix B;
the cache stream indexing method comprises the following steps: sub-hash matrixi(1≤i) The bit of the line record and the sequence thereof are the firstiSub-hash function, the firstj(1≤ jm) The column records the bits read from the stream signature in sequence during the sub-hash operation, according to the firstiBit-sequential reading of bit values in a stream signature in rowsb i,j Calculation 2 j(-1) b i,j And accumulate and then modulo the segment length.
3. The method of claim 2, wherein the search method comprises the operations of:
a. forwarding OpenFlow flow table item packets;
searching an OpenFlow flow table for each arriving data packet, finding a corresponding flow table item, and further executing a corresponding action set;
b. OpenFlow flow table entry insertion or deletion;
OpenFlow flow table entry insertion: when the OpenFlow switch receives a flow-mod message with an ADD command issued by the SDN controller, the flow rule is first extracted from the message; the rule type is judged from the flow match information: if it is a general rule, its match field is stored preferentially in the TCAM flow table and its content field in the DRAM flow table; if the TCAM storage capacity is full, the flow entry in the TCAM that has gone unaccessed the longest is transferred to the SRAM; if it is an exact rule, its match field is stored directly in the SRAM flow table and its content field in the DRAM flow table;
OpenFlow flow table entry deletion: when the OpenFlow switch receives a flow-mod message with a DELETE command issued by the SDN controller, deleting table entries in the TCAM flow table and the DRAM flow table;
c. timeout scanning of OpenFlow flow table items;
judging whether the flow table item is out of date according to the difference value of the current time stamp and the time stamp stored in the DRAM, if the difference value is larger than the timeout interval, the flow table item is out of date and the flow table item is deleted;
d. elastic energy-saving cache searching;
calculating a stream signature according to the stream key words; then reading the sub hash matrix, calculating candidate positions of the flow in each segment according to the flow signature and the sub hash function, and matching corresponding buffer items in each segment in parallel to quickly finish buffer searching;
e. elastic energy-saving buffer expansion and contraction operation;
elastic energy-saving buffer expansion: newly generating a sub-hash function, or moving the first sub-hash function in the matrix B into the matrix A, and newly adding a corresponding cache segment;
elastic energy-saving buffer shrinkage: deleting the last segment, and moving the active accurate stream stored in the segment to the segment at the front end;
f. elastic energy-saving cache insertion or deletion;
elastic energy-saving cache insertion: all candidate positions are obtained through the sub-hash functions and the flow signature is inserted into an empty cache entry of the foremost segment; if the cache load rate is high at the time of the insertion operation, a cache expansion operation is triggered;
elastic energy-saving cache deletion: the candidate storage positions of the flow in the cache are located from the flow signature and the sub-hash functions, and the corresponding cache entry is located and removed; if the cache load rate is low at the time of the deletion operation, a cache shrink operation is triggered;
g. elastic energy-saving cache overtime scanning;
elastic energy-saving cache timeout scanning: whether a node has timed out is judged from the difference between the current system timestamp and the timestamp in the flow entry; if the difference is greater than the timeout interval, the node is judged to be a timed-out node and is cleared from the cache.
4. The method according to claim 3, wherein the OpenFlow flow table packet forwarding is specifically: first, the protocol headers of each layer are parsed and the key fields, including source/destination IP address, source/destination port, and protocol type, are extracted to generate the flow key key, from which a flow signature is calculated to search the cache; if the cache lookup succeeds, the corresponding DRAM entry is located directly from the TCAM index value and the action set in that entry is executed, completing energy-saving forwarding of the packet; the timestamp in the matched cache entry of the elastic energy-saving cache and the counter and timestamp in the DRAM flow entry are then updated; if the cache lookup fails, the TCAM flow table is searched next using the flow key key; if that lookup succeeds, the corresponding action set in the DRAM flow table is obtained from the TCAM index value for packet forwarding; after the packet is forwarded successfully, if the flow has become active, it is stored into the elastic energy-saving cache and the related information of the corresponding DRAM entry is updated; if the TCAM flow table lookup fails, the packet belongs to a new flow, and the OpenFlow switch packages the packet's header information into a Packet-in message and submits it to the SDN controller, requesting that a new flow rule be issued.
5. The method of claim 3, wherein the OpenFlow table insertion and deletion specifically further comprises:
OpenFlow flow table insertion: the flow rule is first extracted from the message; if the TCAM capacity is sufficient, the match field is stored in the TCAM flow table and the content field in the DRAM flow table; if the TCAM storage capacity is full, the flow entry in the TCAM that has gone unaccessed the longest is evicted first, and the new rule is then installed into the flow table;
OpenFlow flow table deletion: the flow rule information is extracted and the TCAM flow table is searched according to the flow rule match field; if the TCAM flow table lookup succeeds, the TCAM flow entry is deleted; if the flow is an active flow, the corresponding cache entry must also be deleted; finally, the corresponding DRAM flow entry is located and deleted according to the TCAM index value; if the TCAM flow table cannot match the flow rule, an error message is sent to the controller reporting that the entry deletion failed.
6. The method of claim 3, wherein the OpenFlow flow entry timeout scan specifically further comprises: traversing the TCAM flow table and, for each flow entry, judging whether it has expired from the difference between the current timestamp and the timestamp stored in the DRAM flow table; if the difference is greater than the timeout interval (hard timeout or idle timeout), the flow entry has expired and should be deleted; the OpenFlow flow table timeout scanning flow is as follows: traverse the TCAM flow table and read the timestamp in the corresponding DRAM entry; if a timed-out entry is detected, delete the TCAM entry; if the flow corresponding to the entry is active, also locate and delete the node in the corresponding cache; finally, delete the corresponding DRAM flow entry to complete eviction of the timed-out flow entry.
7. The method of claim 3, wherein the elastic energy-saving cache lookup further comprises: a flow signature is calculated from the flow key; the sub-hash matrix is then read, the candidate positions of the flow in each segment are calculated from the flow signature and the sub-hash functions, and the corresponding cache entries are matched in parallel to finish the cache lookup quickly; if the cache lookup succeeds, the corresponding DRAM flow entry is located directly from the TCAM index value stored in the matched cache entry, and the action set in that entry is executed to complete packet forwarding; after the packet is forwarded successfully, the cache timestamp and the counter and timestamp in the DRAM entry are updated; if the cache lookup fails, a cache lookup failure result is returned.
8. The method according to claim 3, wherein the elastic energy-saving cache insertion and expansion specifically further comprises: a flow signature sign is computed from the flow key, and all candidate positions are obtained through the sub-hash functions; if no candidate position is empty, the cache entry that has gone unaccessed the longest is evicted, and the flow signature, TCAM index value, and timestamp are stored in that position; if an empty entry exists among the candidate positions, sign is inserted into the empty cache entry of the foremost segment; after a successful insertion, whether the cache needs to expand is judged from the current cache load rate and cache hit rate; if the expansion condition is triggered, a new sub-hash function is generated, or the first sub-hash function in matrix B is moved into matrix A, and a corresponding cache segment is added to improve the packet forwarding capability of the cache.
9. The method of claim 3, wherein the elastic energy-saving cache deletion and contraction further comprises: first, the candidate storage positions of the flow in the cache are located from the flow signature and the sub-hash functions, and the corresponding cache entry is located and removed; after the cache entry is removed successfully, the cache load rate decreases; if the cache shrink condition is then triggered, the last cache segment is removed and the last sub-hash function in matrix A is moved into matrix B; before that, the active accurate flows stored in that segment must be moved to the front-end segments to prevent the cache hit rate from dropping sharply; if a cache entry to be moved forward is itself the candidate entry that has gone unaccessed the longest, it need not be moved forward; otherwise, the position of the candidate cache entry in the front-end segments that has gone unaccessed the longest is located, and the entry to be moved forward is placed in that position; the tail-segment entry move-forward operation can thus be regarded as multiple cache insertion operations.
10. The method of claim 3, wherein the elastic energy-saving cache timeout scan specifically further comprises: whether a node has timed out is judged from the difference between the current system timestamp and the timestamp in the flow entry; if the difference is greater than the timeout interval, the node is judged to be a timed-out node and is cleaned out of the cache; the cache scanning process is as follows: each segment is traversed in turn, and any timed-out cache entry detected is removed immediately; after all cache segments have been traversed, if the cache shrink condition is triggered, the active accurate flows stored in the last segment are moved to the front-end segments, the cache space of the tail segment is reclaimed, and the corresponding sub-hash function is moved into matrix B.
CN202210193963.2A 2022-03-01 2022-03-01 Flow-aware OpenFlow flow table elastic energy-saving searching method Active CN114640641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210193963.2A CN114640641B (en) 2022-03-01 2022-03-01 Flow-aware OpenFlow flow table elastic energy-saving searching method


Publications (2)

Publication Number Publication Date
CN114640641A CN114640641A (en) 2022-06-17
CN114640641B true CN114640641B (en) 2024-03-12

Family

ID=81948386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210193963.2A Active CN114640641B (en) 2022-03-01 2022-03-01 Flow-aware OpenFlow flow table elastic energy-saving searching method

Country Status (1)

Country Link
CN (1) CN114640641B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6754662B1 (en) * 2000-08-01 2004-06-22 Nortel Networks Limited Method and apparatus for fast and consistent packet classification via efficient hash-caching
CN111131084A (en) * 2019-12-06 2020-05-08 湖南工程学院 QoS-aware OpenFlow flow table hierarchical storage architecture and application
CN111966284A (en) * 2020-07-16 2020-11-20 长沙理工大学 OpenFlow large-scale flow table elastic energy-saving and efficient searching framework and method
CN113810298A (en) * 2021-09-23 2021-12-17 长沙理工大学 OpenFlow virtual flow table elastic acceleration searching method supporting network flow jitter

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170195253A1 (en) * 2015-12-31 2017-07-06 Fortinet, Inc. Flexible pipeline architecture for multi-table flow processing


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bundle-Updatable SRAM-Based TCAM Design for OpenFlow-Compliant Packet Processor; Ding-Yuan Lee, Ching-Che Wang, An-Yeu Wu; IEEE Transactions on Very Large Scale Integration (VLSI) Systems; full text *
DAFT: a differentiated storage and accelerated lookup architecture for large-scale OpenFlow flow tables; Xiong Bing, Wu Rengeng, Zhao Jinyuan, Wang Jin; Chinese Journal of Computers (03); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant