CN108549684B - Caching method adopting multi-path search reduction rule dependence in SDN - Google Patents

Caching method adopting multi-path search reduction rule dependence in SDN

Info

Publication number
CN108549684B
CN108549684B
Authority
CN
China
Prior art keywords
rule
rules
dependency
candidate
switch
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810300982.4A
Other languages
Chinese (zh)
Other versions
CN108549684A (en)
Inventor
王换招
冯琳
张鹏
唐亚哲
李沛
梅凡
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201810300982.4A priority Critical patent/CN108549684B/en
Publication of CN108549684A publication Critical patent/CN108549684A/en
Application granted granted Critical
Publication of CN108549684B publication Critical patent/CN108549684B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a caching method that uses multi-path search to reduce rule dependency in a software-defined network (SDN), comprising the following steps: step 1, defining the indexes used for rule selection; step 2, when the switch capacity is N, selecting the N rules with the largest weights as the initial result set; step 3, dividing the rules in the result set into several candidate subsets according to their dependency relationships; step 4, completing the dependency relationships of each candidate subset; step 5, marking the rules contained in the candidate subsets as processed, performing a multi-path search starting from each candidate subset, and selecting rules until the total number of rules equals the switch capacity; step 6, selecting the candidate subset with the largest total weight as the target set; and step 7, caching the rules contained in the target set into the switch. By exploiting the flexibility of cover sets, the method constructs multiple rule-caching schemes, prunes low-weight rules from dependency chains, saves switch storage space, and caches as many high-weight rules as possible.

Description

Caching method adopting multi-path search reduction rule dependence in SDN
Technical Field
The invention belongs to the technical field of the internet, and particularly relates to a caching method that uses multi-path search to reduce rule dependency in an SDN.
Background
Software-Defined Networking (SDN) achieves flexible control of network traffic by separating the forwarding plane of a network device from its control plane. In an SDN, a forwarding device has no computing capability of its own and forwards packets only according to the flow entries installed by the controller. Because every packet entering the network for the first time requires a path computation by the controller, an excessive number of miss requests is generated, making the controller a processing bottleneck and degrading forwarding performance; without a mechanism to reduce the number of requests to the controller, miss requests accumulate, causing large network delays and even packet loss. Since network flows follow a Zipf distribution, the matching statistics of the rules can be collected and the popular rules issued to the switch in advance, both to reduce the number of requests to the controller and to make reasonable use of the limited switch space. By the principle of traffic locality, this raises the probability that a packet hits a rule in the switch, thereby reducing the number of controller requests and the controller load.
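As a rough illustration of why pre-installing popular rules pays off under Zipf-distributed traffic, the following sketch (parameters hypothetical, not taken from the patent) estimates the hit rate of a switch that caches only the most popular rules, ignoring rule dependencies:

```python
def zipf_weights(n_rules: int, s: float = 1.0) -> list[float]:
    # Unnormalized Zipf popularity: the r-th most popular rule
    # attracts traffic proportional to 1 / r**s.
    return [1.0 / r**s for r in range(1, n_rules + 1)]

def cache_hit_rate(n_rules: int, cache_size: int, s: float = 1.0) -> float:
    # Fraction of traffic matched in the switch when the cache_size most
    # popular rules are pre-installed (dependencies ignored here).
    w = zipf_weights(n_rules, s)
    return sum(w[:cache_size]) / sum(w)

# Caching 10% of 10,000 rules already serves roughly three quarters
# of the traffic when s = 1.
print(round(cache_hit_rate(10_000, 1_000), 3))
```

This is only a motivation sketch; the patent's actual contribution is how to pick the cached rules once dependencies are taken into account.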
Because the match fields of SDN flow entries may contain wildcards, the match fields of different rules can overlap; that is, one packet header may match multiple rules, and the relationship among such rules is called a dependency relationship. Rules differ in priority: when a packet arrives, the switch forwards it according to the highest-priority matching rule, and when no matching entry is found it requests the corresponding rule from the controller. If the dependency relationships among rules are ignored when rules are issued, that is, if only a lower-priority forwarding rule is present in the switch, a packet passing through the switch is matched and forwarded directly, no miss request is sent to the controller, and an incorrect matching behavior results. Therefore, the dependency relationships among rules must be taken into account during rule caching to guarantee the correctness of rule issuance.
In recent years, caching methods that handle the rule-dependency problem in SDNs include the following:
Technical scheme 1: the dependent-set caching method. This is the most direct caching method for preserving rule dependencies: when a rule is selected, all of the rules it depends on are issued together with it, which guarantees the correctness of rule caching.
Technical scheme 2: the cover-set caching method. The cover-set method uses a "splicing" technique to avoid caching the low-weight rules introduced by dependency relationships; it not only preserves rule correctness but also saves flow-table space in the switch, leaving more room to cache high-weight rules.
Technical scheme 3: the mixed-set caching method. The mixed-set method combines the advantages of the dependent-set and cover-set methods: for each rule it separately computes the cost of caching it with either method, making rule selection more practical from a global perspective.
Technical scheme 4: the CAB caching method. The field space is divided into a number of logical regions called blocks, and each rule is associated with one or more blocks. When a packet arrives, the block to which it belongs is determined, and all rules associated with that block are issued.
The main problems with scheme 1 are: caching all dependencies incurs a high caching cost, especially when a high-weight rule depends on many low-weight rules. Caching every low-weight rule that a high-weight rule depends on therefore both occupies switch space and lowers the rule hit rate.
The main problem with scheme 2 is: although the cover-set method reduces the caching cost of the rules, it redirects part of the traffic to other decision components, lowering the total weight of the cached rules and thus the hit rate.
The main problems with scheme 3 are: the mixed-set method is inflexible in how it caches rules. When a rule is selected, either its entire dependent set is cached or the rule and its cover set are cached directly, so many high-weight caching schemes are lost. The method overlooks intermediate combinations of the two sets; such brand-new rule combinations can achieve a higher cached weight and help improve the hit rate.
The main problems with scheme 4 are: switch space is limited, yet a hit on a block causes all of its rules to be issued; when the correlation among those rules is low, the redundant rules occupy extra space and cause waste. Moreover, since the block a packet hits must first be computed, the issued block information must be stored in addition to the rules, consuming yet more storage.
Disclosure of Invention
To solve the above problems, the present invention provides a caching method that uses multi-path search to reduce rule dependency in an SDN. It overcomes the drawback that existing caching algorithms use a single, inflexible caching mode: by exploiting the flexibility of cover sets it constructs multiple rule-caching schemes, prunes low-weight rules from dependency chains, saves switch storage space, and helps cache as many high-weight rules as possible.
To achieve the above object, the caching method for reducing rule dependency by multi-path search in an SDN according to the present invention comprises the following steps:
step 1, defining the selection indexes of the rules, taking the number of times each rule matches packets within a statistical period as that rule's weight W;
step 2, when the switch capacity is N, selecting the N rules with the largest weights as the initial result set;
step 3, dividing rules in the initial result set into a plurality of candidate subsets according to the dependency relationship;
step 4, completing the dependency relationships of each candidate subset obtained in step 3, so that each subset becomes a semantically correct, issuable rule set;
step 5, performing a multi-path search starting from each candidate subset obtained in step 4, selecting rules according to their indexes until the total number of rules in each candidate subset equals the total switch capacity;
step 6, selecting the rule set with the largest total weight from the candidate subsets obtained in step 5 as the target set;
and step 7, caching the rules contained in the target set into the switch.
Further, in step 1, the dependency relationships among the rules are expressed as dependency chains; the caching cost c of each rule is obtained from the structure of its dependency chain, and the dependency index and coverage index of each rule are computed from the caching cost c and the rule weight W.
Further, when the rules that rule R_i depends on are {R_j, ..., R_m} with weights W_j, ..., W_m, the dependency index is computed as:

[dependency-index formula, rendered as an image in the original publication]

and the coverage index is computed as:

[coverage-index formula, rendered as an image in the original publication]

where m - j + 1 is the length of the chain of rules that R_i depends on.
Further, the specific operation of step 5 is as follows: if the total number of rules contained in all candidate subsets equals the switch capacity, all candidate subsets are merged into the target rule set and the selection process ends. Otherwise, a multi-path search is started from the candidate subsets: each candidate subset is traversed and its rules are marked as processed; at each step, the unprocessed rule with the largest dependency-index or coverage-index value is selected as the candidate rule. If its dependency index exceeds its coverage index and the remaining switch space suffices, the rule is marked as processed and the rule together with its dependency rules is added to the candidate subset; if the space is insufficient, its dependency index is set to the minimum value and the next selection begins, until the switch space is full. If instead its coverage index exceeds its dependency index and the switch space suffices, the rule is marked as processed and the rule together with its cover rule is added to the candidate subset; if the space is insufficient, its coverage index is set to the minimum value and the next selection begins, until the switch space is full.
Further, in step 2, an initial ordered rule sequence is obtained by sorting the rules in descending order of weight, and the first N rules of this ordered sequence are selected as the result set.
Further, in step 3, when a rule is depended on by several rules, it is added to the candidate subset of each rule that depends on it.
Further, the operation of step 4 is as follows: first, check whether the dependency relationships of the rules contained in each divided candidate subset are complete; if complete, process the next candidate subset, and if incomplete, complete the rule's dependency relationships. To do so, first determine the state of the dependency chain of the rule to be completed: if only the last rule at the chain tail has not been selected into the candidate subset and the switch still has remaining space, add the chain-tail rule to the candidate subset; if more than the chain-tail rule is missing, add the cover rule R_k* of the candidate subset's chain-tail rule to the candidate subset.
Further, in step 4, the match field of the cover rule R_k* is the same as that of rule R_k, but their actions differ; rule R_k is the rule on which R_j, the last rule selected into the initial result set, depends.
Compared with the prior art, the invention has at least the following beneficial technical effects. Popular rules are cached in the switch in advance, and the principle of traffic locality raises the probability that a packet hits a rule in the switch, thereby reducing the number of requests to the controller, lowering the controller load, accelerating packet forwarding, and reducing network delay. Because a rule's weight reflects its matching statistics over a period of time, rules with larger weights are in general more frequently used, so the caching algorithm of the invention takes maximizing the total weight of the cached rules as the goal of rule selection. When the switch capacity is N, i.e. the switch can hold at most N rules, then ideally, if no dependencies existed among the rules, it would suffice to cache the N rules with the largest weights. Because dependencies do exist, however, the top-N rules by weight may not all be cacheable. The caching method of the invention therefore considers the influence of the high-weight rules on the final caching scheme: the rules are first sorted in descending order of weight and the top-N rules are selected as the initial set; the set is divided into several subsequences according to the dependency relationships and the missing dependencies are completed; the remaining space of each subsequence is then filled according to the rule indexes; and finally the subsequence whose rules have the largest total weight is selected as the target set.
This caching method considers the influence of the high-weight rules on the final caching scheme and overcomes the drawback that existing caching modes are too rigid: by exploiting the flexibility that a cover rule may appear at any position on a dependency chain, it constructs multiple rule-caching schemes, prunes low-weight rules from dependency chains, saves switch storage space, and helps cache as many high-weight rules as possible.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a rule dependency diagram;
the arrows in FIG. 2 represent the dependencies among the rules, with each rule pointing to the rule on which it directly depends.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and thus are not to be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments; obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The present invention is further described with reference to FIG. 1. A caching method that uses multi-path search to reduce rule dependency in an SDN comprises the following steps:
step 1, defining the indexes used for rule selection;
Whether a rule is cached is determined by its weight and its caching cost. The dependency relationships among rules can be expressed as dependency chains, and the caching cost c can be obtained from the structure of the dependency chain. The matching statistics of each rule against packets within a statistical period (i.e. the number of times the rule matched packets) give the value of the rule's weight W. From the two values c and W, two selection indexes can be computed for each rule: the dependency index and the coverage index. When the rules that rule R_i depends on are {R_j, ..., R_m} with weights W_j, ..., W_m, the two indexes are computed as follows:

dependency index:

[dependency-index formula, rendered as an image in the original publication]

coverage index:

[coverage-index formula, rendered as an image in the original publication]

where m - j + 1 is the length of the chain of rules that R_i depends on;
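The two index formulas above appear only as images in the original publication. The sketch below therefore assumes plausible forms consistent with the variable definitions given here and in claim 1: a dependency index equal to the total weight of R_i plus its dependency chain per flow-table entry installed, and a coverage index equal to the weight of R_i per entry when it is installed with a single cover rule. Both expressions are assumptions, not taken from the source:

```python
def dependency_index(w_i: float, dep_weights: list[float]) -> float:
    # ASSUMED form: weight gained per flow-table entry when the whole
    # dependency set {R_i, R_j, ..., R_m} is installed.  dep_weights
    # holds W_j, ..., W_m, so m - j + 1 == len(dep_weights).
    return (w_i + sum(dep_weights)) / (len(dep_weights) + 1)

def coverage_index(w_i: float) -> float:
    # ASSUMED form: weight gained per entry when R_i is installed with a
    # single cover rule R_k* whose action forwards covered traffic to the
    # controller (the cover rule itself attracts no counted weight).
    return w_i / 2.0
```

Whatever the exact formulas, the algorithm only requires that both indexes be comparable per-rule scores derived from c and W.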
step 2, when the switch capacity is N, selecting the N rules with the largest weights as the initial result set;
the method comprises the steps that an initial ordered rule sequence is obtained by descending and arranging according to the weight of each rule, when the capacity of a switch is N, if the dependency relationship among the rules is not considered, the first N rules arranged in the initial ordered rule sequence are first N large rules, the first N large rules are target rule sets, however, due to the existence of the dependency relationship, some rules in the first N large rules are removed for ensuring the dependency correctness, and the first N rules are selected from the ordered sequence as an initial result set as the first N rules are expected to appear in the result set as much as possible;
step 3, dividing the rules in the result set into a plurality of candidate subsets according to the dependency relationship;
and (3) performing dependency analysis on the rules in the initial result set obtained in the step (2), dividing the rules into a plurality of candidate subsets according to the dependency relationship among the rules, namely traversing each unprocessed rule, if the dependency rules of the rules are in the first N selected rules, marking the rules which depend on the rules as processed and adding the rules into the subsequence of the rules, and ending traversal after the first N rules are marked to finally obtain a plurality of candidate rule subsequences. In particular, when a rule is relied upon by a plurality of other rules, the rule is added to a subset of all dependent rules, respectively.
Step 4, completing the dependency relationships of each candidate subset obtained in step 3, so that each subset becomes a semantically correct, issuable rule set;
when the first N rules are divided into the candidate rule subsets according to the dependency relationship, the dependency relationship of a certain rule may not be completely contained in the subsets, and matching errors may be caused after the rule sequence with the incomplete dependency relationship is issued, so that whether the dependency relationship of the rule is complete or not must be checked for the divided candidate rule subsets, if the dependency relationship is complete, the next candidate subset is processed, otherwise, the dependency relationship of the rule needs to be completed.
First, the state of the dependency chain contained in the rule is determined (i.e. whether the rules it depends on have been selected into the candidate subset). In case 1, only the last rule at the chain tail has not been selected into the initial result set; if the switch still has remaining space, the chain-tail rule is added to the candidate subset. In case 2, more than the chain-tail rule is missing; a cover rule for the last rule already on the candidate subset's chain is then added to the candidate subset, i.e. one rule is added in place of the other rules on the dependency chain. When a packet matches the cover rule, it is forwarded to the controller to be matched against the global rules, which guarantees correct rule forwarding.
When the dependency relationship of rule R_i is as shown in FIG. 2: in the scenario of case 1, all of the dependency rules except R_m are in the top-N sequence, so rule R_m is added to R_i's candidate subsequence. In the scenario of case 2, only the dependency rule R_j of rule R_i is in the top-N sequence; an extra cover rule R_k* is then added to R_i's candidate subsequence, which shortens the dependency chain while preserving the correctness of the dependency relationship. In general, R_k* is realized by modifying the action of R_k to "forward to the controller"; the match fields of R_k* and R_k are the same but their actions differ, and rule R_k is the rule on which rule R_j depends.
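Cases 1 and 2 can be sketched like this (data shapes hypothetical); a cover rule is modeled as a tagged copy of the first missing rule whose action would be changed to "forward to controller":

```python
def complete_dependencies(subset: list[str], chain: list[str],
                          free_space: int) -> list[str]:
    # chain: the full dependency chain of the subset's head rule, ordered
    # from the first dependency down to the chain tail.
    missing = [r for r in chain if r not in subset]
    if not missing:
        return subset                      # already issuable as-is
    if len(missing) == 1 and free_space >= 1:
        return subset + missing            # case 1: install the tail rule
    # Case 2: splice in a cover rule R_k* for the first missing rule R_k;
    # same match field as R_k, action changed to "forward to controller".
    return subset + [missing[0] + "*"]

print(complete_dependencies(["Ri", "Rj"], ["Rj", "Rk", "Rm"], 5))
# → ['Ri', 'Rj', 'Rk*']
```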
Step 5, marking the rules contained in each candidate subset as processed, performing a multi-path search starting from each candidate subset obtained in step 4, and selecting rules according to their indexes until the total number of rules in each candidate subset equals the total switch capacity;
because the rule without the dependency relationship exists in the rule and the first N rules which are selected possibly are all the rules without the dependency relationship, the total rule number of all candidate subsets needs to be counted after the dependency relationship is divided into the candidate subsets, if the total rule number contained in all the candidate subsets is equal to the capacity of the switch, all the candidate subsets are merged to be used as a target rule set, and the rule selection process is ended; otherwise (when the total number of the rules contained in all the candidate subsets is smaller than the capacity of the switch), multi-path search is carried out by taking the candidate subsets as the starting point, each candidate subset is traversed, the rule contained in the candidate subset is set to be processed, the rule with the maximum index value in the unprocessed rule is selected every time (the dependency index and the coverage index of all the unprocessed rules are calculated, and the rule corresponding to the maximum value is selected to be the index with the maximum index value) is used as the candidate rule, if the dependency index of the rule is larger than the coverage index, the rule and the dependency rule are added into the candidate subset after the rule is marked to be processed when the residual space of the switch is enough; when the space of the switch is insufficient, setting the dependence index of the rule as the minimum value and entering next selection until the space of the switch is full; if the coverage index of the selected rule is larger than the dependency index, when the switch space is enough, the rule and the coverage rule thereof are added into the candidate subset after the rule is marked as processed, and when the switch space is not enough, the coverage index of the rule is set to be the minimum value, and then the next selection is carried out until the switch space is filled.
Step 6, selecting the rule set with the largest total weight from the candidate subsets as the target set.
Since the goal is to select frequently used rules and a rule's weight reflects its matching statistics, the caching problem is solved by maximizing the total weight of the selected rules. The total weight of the rules cached in each candidate subset (i.e. the sum of the weights W of the rules cached in that subset) is therefore counted, and the subset with the largest total weight is selected as the final target rule set.
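Step 6 then reduces to a single maximum over the filled candidate subsets; cover rules, which attract no counted weight, simply default to zero (names hypothetical):

```python
def target_set(subsets: list[list[str]], weights: dict[str, float]) -> list[str]:
    # Return the candidate subset whose rules have the largest total weight;
    # rules absent from `weights` (e.g. cover rules R_k*) count as zero.
    return max(subsets, key=lambda s: sum(weights.get(r, 0.0) for r in s))

print(target_set([["a", "b"], ["c", "c*"]], {"a": 4.0, "b": 1.0, "c": 6.0}))
# → ['c', 'c*']
```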
Step 7, caching the rules contained in the target set into the switch.
The rules contained in the target set with the largest total weight selected by the caching algorithm are cached in the switch. Caching the high-weight rules in the switch raises the probability that an arriving packet finds a matching rule, reduces the number of miss-triggered decision requests to the controller, and enables fast packet forwarding.

Claims (5)

1. A caching method adopting multi-path search reduction rule dependence in an SDN is characterized by comprising the following steps:
step 1, defining the selection indexes of the rules, taking the number of times each rule matches packets within a statistical period as that rule's weight W;
whether a rule is cached is determined by the weight of the rule and its caching cost; the dependency relationships among rules can be expressed as dependency chains, wherein the caching cost c can be obtained from the structure of the dependency chain; the matching statistics of each rule against packets within a statistical period, that is, the number of times the rule matched packets, give the value of the rule's weight W; from the two values of the caching cost c and the rule weight W, two selection indexes can be computed for each rule: a dependency index and a coverage index; when the rules that rule R_i depends on are {R_j, ..., R_m} with weights W_j, ..., W_m, the two indexes are computed as follows:
dependency index:

[dependency-index formula, rendered as an image in the original publication]

coverage index:

[coverage-index formula, rendered as an image in the original publication]

where m - j + 1 is the length of the chain of rules that R_i depends on; W_i represents the weight of rule R_i; W_k represents the weight of rule R_k, where k ranges from j to m and rule R_k is a rule on which rule R_i depends;
step 2, when the switch capacity is N, selecting the N rules with the largest weights as the initial result set;
step 3, dividing rules in the initial result set into a plurality of candidate subsets according to the dependency relationship;
step 4, completing the dependency relationships of each candidate subset obtained in step 3, so that each subset becomes a semantically correct, issuable rule set;
step 5, performing a multi-path search starting from each candidate subset obtained in step 4, selecting rules according to their indexes until the total number of rules in each candidate subset equals the total switch capacity;
the specific operation method of step 5 comprises the following steps: if the total number of rules contained in all the candidate subsets equals the switch capacity, merging all the candidate subsets into the target rule set and ending the rule selection process; otherwise, performing a multi-path search starting from the candidate subsets: traversing each candidate subset, marking the rules it contains as processed, and at each step selecting the unprocessed rule with the largest dependency-index or coverage-index value as the candidate rule; if the dependency index of the rule is larger than its coverage index, then, when the remaining switch space is sufficient, marking the rule as processed and adding the rule and its dependency rules to the candidate subset, and, when the switch space is insufficient, setting the dependency index of the rule to the minimum value and entering the next selection, until the switch space is full; if the coverage index of the selected rule is larger than its dependency index, then, when the switch space is sufficient, marking the rule as processed and adding the rule and its cover rule to the candidate subset, and, when the switch space is insufficient, setting the coverage index of the rule to the minimum value and entering the next selection, until the switch space is full;
step 6, selecting the rule set with the largest total weight from the candidate subsets obtained in step 5 as the target set;
and step 7, caching the rules contained in the target set into the switch.
2. The caching method using multi-path search to reduce rule dependency in an SDN according to claim 1, wherein in step 2 an initial ordered rule sequence is obtained by sorting the rules in descending order of weight, and the first N rules of the ordered sequence are selected as the result set.
3. The method of claim 1, wherein in step 3, when a rule is depended on by a plurality of rules, the rule is added to the candidate subset of each rule that depends on it.
4. The caching method using multi-path search to reduce rule dependency in an SDN according to claim 1, wherein the operation of step 4 is: first checking whether the dependency relationships of the rules contained in each divided candidate subset are complete; if complete, processing the next candidate subset; if incomplete, completing the dependency relationships of the rules by first determining the state of the dependency chain contained in the rule to be completed; if only the last rule at the chain tail has not been selected into the candidate subset and the switch still has remaining space, adding the chain-tail rule to the candidate subset; and if more than the chain-tail rule has not been selected into the candidate subset, adding the cover rule R_k* of the candidate subset's chain-tail rule to the candidate subset.
5. The caching method using multi-path search to reduce rule dependency in an SDN according to claim 4, wherein in step 4 the match field of the cover rule R_k* is the same as that of rule R_k but their actions differ, and rule R_k is the rule on which R_j, the last rule selected into the initial result set, depends.
CN201810300982.4A 2018-04-04 2018-04-04 Caching method adopting multi-path search reduction rule dependence in SDN Expired - Fee Related CN108549684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810300982.4A CN108549684B (en) 2018-04-04 2018-04-04 Caching method adopting multi-path search reduction rule dependence in SDN


Publications (2)

Publication Number Publication Date
CN108549684A CN108549684A (en) 2018-09-18
CN108549684B true CN108549684B (en) 2020-08-18

Family

ID=63513941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810300982.4A Expired - Fee Related CN108549684B (en) 2018-04-04 2018-04-04 Caching method adopting multi-path search reduction rule dependence in SDN

Country Status (1)

Country Link
CN (1) CN108549684B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134300B (en) * 2022-06-07 2023-08-25 复旦大学 Switching equipment rule cache management method oriented to software defined network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681223A (en) * 2015-12-31 2016-06-15 清华大学 SDN data packet forwarding method and method
CN106572170A (en) * 2016-10-28 2017-04-19 中国电子科技集团公司第五十四研究所 Controller and dynamic load balancing method under SDN hierarchical multiple controllers
CN107347021A (en) * 2017-07-07 2017-11-14 西安交通大学 One kind is based on SDN method for reliable transmission

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9379963B2 (en) * 2013-12-30 2016-06-28 Cavium, Inc. Apparatus and method of generating lookups and making decisions for packet modifying and forwarding in a software-defined network engine


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Exploiting the Vulnerability of Flow Table Overflow in Software-Defined Network: Attack Model, Evaluation, and Defense; Yadong Zhou, Kaiyue Chen, Junjie Zhang, Junyuan Leng, Yazhe Tang; Hindawi Security and Communication Networks; 20180109; full text *
Rule-Caching Algorithms for Software-Defined Networks; Naga Katta, Omid Alipourfard, Jennifer Rexford and David Walker; Citeseer; 20141231; full text *
An SDN flow table optimization algorithm based on prediction and dynamic load-factor adjustment; Shi Shaoping, Zhuang Lei, Yang Sijin; Computer Science; 20170131; full text *

Also Published As

Publication number Publication date
CN108549684A (en) 2018-09-18

Similar Documents

Publication Publication Date Title
US11811660B2 (en) Flow classification apparatus, methods, and systems
CN108153757B (en) Hash table management method and device
US7827182B1 (en) Searching for a path to identify where to move entries among hash tables with storage for multiple entries per bucket during insert operations
JP6352290B2 (en) Packet transmission method for content owner and node in content-centric network
US8750144B1 (en) System and method for reducing required memory updates
WO2017105700A1 (en) High speed flexible packet classification using network processors
US8171163B2 (en) Method for tracking transmission status of data to entities such as peers in a network
US20080306917A1 (en) File server for performing cache prefetching in cooperation with search ap
US10467217B2 (en) Loop detection in cuckoo hashtables
CN105359142B (en) Hash connecting method and device
CN112702399B (en) Network community cooperation caching method and device, computer equipment and storage medium
CN106528844B (en) A kind of data request method and device and data-storage system
CN106940696B (en) Information query method and system for SDN multi-layer controller
CN108549684B (en) Caching method adopting multi-path search reduction rule dependence in SDN
CN114327641A (en) Instruction prefetching method, instruction prefetching device, processor and electronic equipment
CN102855213B (en) A kind of instruction storage method of network processing unit instruction storage device and the device
CN116723143B (en) Network target range resource allocation method and system based on traffic affinity
JP2018511131A (en) Hierarchical cost-based caching for online media
CN110138742A (en) Firewall policy optimization method, system and computer readable storage medium
US11599583B2 (en) Deep pagination system
CN108984123A (en) A kind of data de-duplication method and device
CN112016006A (en) Trust-Walker-based Trust recommendation model
CN112783673A (en) Method and device for determining call chain, computer equipment and storage medium
CN116962288B (en) CDN multi-node path-finding optimization method, device, equipment and storage medium
US20170091249A1 (en) Systems And Methods To Access Memory Locations In Exact Match Keyed Lookup Tables Using Auxiliary Keys

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200818