CN105681438A - Centralized caching decision strategy in content-centric networking - Google Patents
- Publication number
- CN105681438A CN105681438A CN201610058534.9A CN201610058534A CN105681438A CN 105681438 A CN105681438 A CN 105681438A CN 201610058534 A CN201610058534 A CN 201610058534A CN 105681438 A CN105681438 A CN 105681438A
- Authority
- CN
- China
- Prior art keywords
- node
- content
- cache
- buffer memory
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/5682—Policies or rules for updating, deleting or replacing the stored data
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Content-centric networking (CCN) is one realization of the information-centric networking idea. The invention discloses a centralized caching decision strategy for content-centric networking. Under the CCN architecture, prior work has used the existing SDN framework to realize the basic functions of information-centric networking, such as data naming, content forwarding, and in-network caching. Building on this, a centralized caching decision strategy is proposed: a centralized controller is placed in the content-centric network; by collecting the request information of all nodes, the controller generates a cache list for each node and deterministically instructs each node which contents to cache. The transmission traffic in the network is thereby minimized and user request delay is reduced.
Description
Technical field
The invention belongs to the field of content-centric networking, a new network architecture, and relates in particular to the cache decision problem of the caching system in content-centric networks.
Background art
In today's information-intensive society, the Internet plays a vital role in our lives. Its unprecedented success has changed, and will continue to change, the way people live, including information transmission, work, commerce, and military operations. Virtually every industry benefits from the Internet. However, as usage patterns change and industrial control networks, the Internet of Things, connected vehicles, and cloud computing become widespread, the traditional network architecture has exposed many problems, such as security, scalability, mobility, and the redundant transmission caused by physical-address-based communication.
Statistics show that video traffic has accounted for a growing share of network traffic in recent years, and this share is expected to keep rising. Cisco's Visual Networking Index (VNI) report, which forecasts global Internet usage and services for 2014 to 2019, projects that, driven by continued growth in Internet users and devices, rising broadband speeds, and rapidly increasing video browsing, global Internet traffic in 2019 will be 64 times the volume of the global Internet in 2005. Worldwide, per-capita traffic grows from 6 GB in 2014 to 18 GB in 2019, and video's share of Internet traffic rises from 64% in 2014 to 80% in 2019. Traffic from video-on-demand, high-definition video, and the like is growing at an astonishing rate.
However, the current TCP/IP architecture carries a great deal of redundant transmission and is ill-suited to distributing video traffic. Its host-to-host communication pattern means that obtaining information requires locating a specific physical host, which runs counter to the fact that people care about the content of information rather than its location. As social networking, commercial recommendation, cloud computing, the Internet of Things, and other applications spread, the security and privacy services of the legacy network are constantly put to the test. Even though people keep patching the application layer with CDNs, P2P networks, and various security fixes, these measures cannot solve the root problem. Changing the traditional TCP/IP network architecture is therefore imperative. In 2009, Jacobson of the Palo Alto Research Center proposed content-centric networking (CCN), which adopts the idea of information-centric networking: the network is centered on information, routing and forwarding are based on the names of information rather than on IP addresses, and in-network caching decouples the senders and receivers of information in both time and space.
The communication pattern thus shifts from the traditional host-to-host model to user-to-content retrieval, because people no longer care where content is stored, only what content they obtain. Content-centric networking rests on two basic concepts. The first is name-based routing: the architecture assigns each content item a unique name, and indeed divides each content item into equally sized blocks (chunks), each with its own unique name, routing by content name rather than by traditional IP address. The second is in-network caching. In traditional IP-address-based transmission, a router cannot "remember" the information it has forwarded; when a user requests the same content again, the router still has to ask upstream nodes or the server for it. Because content-centric networking names content, routers can distinguish what they transmit. Routers in a content-centric network have caching capability and can store the content they forward. When a user requests the content again and the request is routed to a router that has cached it, the router returns the packet directly, with no need to reach the server. This reduces traffic in the network, relieves server load, shortens user request delay, and improves the quality of user experience.
In-network caching, as the distinctive attribute of content-centric networking, has become a focus of much research. Because the cache capacity of router nodes is tiny compared with the volume of content in the network, how to make the most of these in-network cache resources and obtain the greatest network gain is a problem studied by many researchers. Traditional cache policies improve network performance by caching everywhere or by caching probabilistically (with probabilities derived from various criteria); the usual model assumes an arbitrary network topology, request content following a Zipf distribution, Poisson request arrivals, and every node requesting each content with the same probability.
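As a concrete illustration of the Zipf popularity assumption mentioned above, the sketch below builds a Zipf distribution over n contents. The exponent s = 1.0 is a common illustrative choice, not a value taken from this patent:

```python
# Illustration of the Zipf popularity model commonly assumed in caching
# studies: the content of rank r is requested with probability
# proportional to 1 / r**s. The exponent s is a free parameter.

def zipf_probs(n, s=1.0):
    weights = [1.0 / (r ** s) for r in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_probs(5)
print(probs)  # rank-1 content is requested twice as often as rank-2 when s = 1
```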
Summary of the invention
[Object of the invention]: To use cache resources more efficiently and to address the problem of unbalanced requests across regions, the present invention proposes a caching system framework and a cache algorithm that compute the cache decision centrally.
[Technical scheme]: The object of the invention is achieved by the following measures:
The invention adopts centralized control: a centralized controller is placed in the content-centric network and is responsible for collecting the request information of every router node in the network. This request information includes both the probabilities with which the router's local users request each content and the content requests forwarded through this router after cache misses at other router nodes. Prior research (see L. Veltri, G. Morabito, S. Salsano, N. Blefari-Melazzi, and A. Detti, "Supporting information-centric functionality in software defined networks," in IEEE ICC, June 2012, pp. 6645-6650) has used software-defined networking (SDN) to realize the basic functions of content-centric networking (CCN), such as data naming, content forwarding, and in-network caching. This provides the theoretical basis for our centralized cache decision framework.
The invention focuses on the scenario in which different regions request the same content with different probabilities (the regional request-imbalance problem). Assume there is a centralized controller in the network that can collect the request probability p_ij of any node (CCN router) in the network, where p_ij denotes the probability that router i requests content j. The steps are as follows:
Step 1: each node sends its statistics to the controller.
Step 2: from the collected statistics and the network topology, the controller centrally computes, under each node's limited cache capacity, which contents each node should cache, deterministically, so that the traffic generated in the network is minimized.
Step 3: the controller generates a cache decision list for each router node, indicating which contents that node should cache.
Step 4: the controller issues the cache decision lists to the routers. When content passes through a router, the router first consults its cache decision list: if the content name appears in the list, the router caches the content; otherwise it does not. It then forwards the content to the next hop or delivers it to a local user. This centralized in-network cache management framework is shown in Fig. 1; its key component is the computing module in the controller. The cache decision solution implemented in step 2 is described in detail below.
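The router-side behavior of step 4 can be sketched as follows. The class and method names here are illustrative assumptions, since the patent specifies the behavior rather than an API:

```python
class CCNRouter:
    """Minimal sketch of step 4: cache only what the controller's list names."""

    def __init__(self, decision_list):
        self.decision_list = set(decision_list)  # names this node should cache
        self.content_store = {}                  # the node's cache

    def on_data(self, name, data, forward):
        # Look up the cache decision list first.
        if name in self.decision_list:
            self.content_store[name] = data      # listed: cache it
        # Listed or not, pass the content on to the next hop / local user.
        forward(name, data)

forwarded = []
router = CCNRouter(decision_list=["f1", "f2"])
router.on_data("f1", b"chunk-1", lambda n, d: forwarded.append(n))
router.on_data("f3", b"chunk-3", lambda n, d: forwarded.append(n))
```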
1) The router nodes and the links between them in the content-centric network form a graph G = (V, E). Each node V_i has cache capacity C_i. The content set in the network is F = {f_1, f_2, ..., f_n}. Since a content-centric network splits files into equally sized blocks for transmission, we assume here that every content item has the same size, equal to one cache unit. p_ij denotes the probability that node i requests content j, and h_ij denotes the network traffic generated when node i requests content j, where one hop corresponds to one unit of traffic; h_ij is in fact the number of hops the request packet from node i for content j travels before a cache hit.
From the above, our goal is to find, for each node i, the content set X_i = {x_i1, x_i2, ..., x_in} that it should cache, where x_ij is a binary variable: a value of 1 means node i caches content j, and 0 means it does not. The cache decision should minimize network traffic, which is expressed as Problem I:
min Σ_{i∈V} Σ_{j∈F} p_ij · h_ij
s.t. Σ_{j∈F} x_ij ≤ C_i, i ∈ V
x_ij ∈ {0,1}, i ∈ V, j ∈ F
Here, a_ik denotes the node at the k-th hop on the path along which node i's request is forwarded.
Clearly this problem is nonlinear. We convert it into a linear programming problem by the following construction. Let the function f_ijk denote the total number of nodes caching content j among the first k hops of node i's request path, i.e. f_ijk = Σ_{l=0}^{k} x_{a_il, j}. Then h_ij = max{(1 - f_ijk) · (k+1) | k = 0, 1, 2, 3, ...}. Introducing variables h'_ij with h'_ij ≥ (1 - f_ijk) · (k+1) for k = 0, 1, 2, 3, ..., we construct Problem II:
min Σ_{i∈V} Σ_{j∈F} p_ij · h'_ij
s.t. h'_ij ≥ (1 - f_ijk) · (k+1), i ∈ V, j ∈ F, k = 0, 1, 2, ...
Σ_{j∈F} x_ij ≤ C_i, i ∈ V
x_ij ∈ {0,1}, i ∈ V, j ∈ F
Problem II is an integer linear programming problem, and it can be verified that I and II have the same solution.
Suppose (x*, h*) is an optimal solution of Problem I with objective value y. Since h*_ij = max{(1 - f_ijk) · (k+1) | k = 0, 1, 2, ...}, we have h*_ij ≥ (1 - f_ijk) · (k+1) for every k, so (x*, h*) is a feasible solution of Problem II, also with objective value y.
Conversely, suppose (x', h') is an optimal solution of Problem II with objective value y'; then h'_ij ≥ (1 - f_ijk) · (k+1) for i ∈ V, j ∈ F, k = 0, 1, 2, .... Now, if h'_ij > (1 - f_ijk) · (k+1) held for every k, the objective value would be larger than that obtained with h'_ij = max_k {(1 - f_ijk) · (k+1)}; since Problem II is a minimization, the optimum must satisfy h'_ij = max_k {(1 - f_ijk) · (k+1)}. By the construction, h' then coincides with the h defined in Problem I, so (x', h') is an optimal solution satisfying Problem I, with objective value y' = y. Hence (x*, h*) and (x', h') are solutions of the same problem: I and II have the same solution, and we can therefore solve this cache decision problem with any toolkit for integer linear programming.
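Before handing Problem I to an ILP toolkit, its objective can be sanity-checked by brute force on a toy instance. The sketch below enumerates every capacity-feasible placement on a two-node path; the topology, probabilities, and capacities are invented for illustration and are not taken from the patent:

```python
from itertools import combinations, product

# Toy instance (invented for illustration): two routers on a path to the
# content source, three contents, cache capacity 1 at each router.
# paths[i] lists the nodes a request from node i traverses, itself first;
# the content source sits one hop past the end of each path.
paths = {0: [0, 1], 1: [1]}
contents = ["f1", "f2", "f3"]
capacity = {0: 1, 1: 1}
p = {0: {"f1": 0.6, "f2": 0.3, "f3": 0.1},
     1: {"f1": 0.2, "f2": 0.2, "f3": 0.6}}

def hops_to_hit(i, j, placement):
    # h_ij: hops until the request meets a node caching j, else the source.
    for k, node in enumerate(paths[i]):
        if j in placement[node]:
            return k
    return len(paths[i])

def traffic(placement):
    # Objective of Problem I: sum of p_ij * h_ij over all nodes and contents.
    return sum(p[i][j] * hops_to_hit(i, j, placement)
               for i in paths for j in contents)

# Enumerate every placement that respects the capacity constraint.
nodes = sorted(paths)
best = None
for chosen in product(*(combinations(contents, capacity[n]) for n in nodes)):
    placement = {n: set(s) for n, s in zip(nodes, chosen)}
    t = traffic(placement)
    if best is None or t < best[0]:
        best = (t, placement)

print(best)
```

On this instance the optimum caches the locally popular f1 at node 0 and f3 at node 1, which matches the intuition behind the regional request-imbalance scenario.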
2) The concrete heuristic solution proposed by the invention is described below.
Step 1: using a method for solving the k-means problem, compute, for each c (c from 1 to |V|-1), the optimal positions of the c cache nodes. The "importance" of a node is judged by which node is added when going from c-1 to c cache nodes; clearly, the node ranked 0 is the most important.
Step 2: for the i-th cache node, use a greedy algorithm to find the content j that currently minimizes network traffic, and add content j to the i-th node's cache list; repeat until the i-th node's cache list is full.
Step 3: repeat the computation of step 2 for the (i+1)-th node until the cache lists of all nodes have been generated.
[Beneficial effects]: In content-centric networking, existing caching methods are all distributed and assume that the request distribution is the same everywhere. The present invention instead proposes a centralized in-network cache management framework. It targets the case where regional content requests are unevenly distributed; using the proposed cache decision strategy improves cache resource utilization, boosts network performance, and reduces user request delay.
Brief description of the drawings
Fig. 1 is the framework diagram of the content-centric network caching system;
Fig. 2 is the content-centric network topology diagram.
Detailed description of the invention
The invention is described concretely below with reference to the drawings and a specific example.
As in Fig. 2, there is one content source (server) holding all contents, and three routers A, B, and C in the network, each with the same cache capacity C_A = C_B = C_C = 2. Besides serving their local users' requests, router A also forwards the requests that miss in the caches of B and C. The content set is F = {f1, f2, f3, ..., f7, f8}. Each router records the number of requests it receives for each content (expressed here as request counts).
Step 1: the routers send these statistics to the controller.
Step 2: the controller computes on the collected router statistics and the network topology, and finds the cache contents of each router that minimize network traffic.
Step 3: using the result, the controller generates a cache decision list for each router.
Step 4: the controller issues the cache decision lists to the routers. When data content passes through a router, the router first checks whether the content name appears in its cache decision list; if so, it caches the content, otherwise it does not. The data is then forwarded or delivered to local users.
The controller's computation in step 2 proceeds as follows:
Step 2.1: following the k-means solution method, compute, for each c (c from 1 to |V|-1), the optimal positions of the c cache nodes, and judge node "importance" by the node added when going from c-1 to c; the node ranked 0 is the most important. In this network topology, A is found to be the most important node, ranked 0, and B and C are the next most important, ranked 1 and 2 respectively.
Step 2.2: for cache node A (rank 0), use the greedy algorithm to find the content j that currently minimizes network traffic, and add it to A's cache list until the list is full. A's cache list is {f1, f2}.
Step 2.3: compute cache nodes B and C (ranks 1 and 2) in turn; their cache lists are {f3, f4} and {f5, f6} respectively. The cache lists of all nodes are now generated, and the computation ends.
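The resulting placement can be sanity-checked against the topology of Fig. 2. The sketch below assumes, per the description, that misses at B and C are forwarded through A toward the content source, and computes the hop count each request incurs under the cache lists computed above:

```python
# Request paths assumed from the description: B and C forward through A;
# the content source sits one hop past the end of each path.
paths = {"A": ["A"], "B": ["B", "A"], "C": ["C", "A"]}
cache = {"A": {"f1", "f2"}, "B": {"f3", "f4"}, "C": {"f5", "f6"}}

def hops(i, j):
    # Hops until a cache on the path holds content j; otherwise the source.
    for k, node in enumerate(paths[i]):
        if j in cache[node]:
            return k
    return len(paths[i])  # fetched from the content source

print(hops("B", "f3"))  # 0: local hit at B
print(hops("B", "f1"))  # 1: served by A's cache
print(hops("B", "f7"))  # 2: miss everywhere, fetched from the source
```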
The invention has been described in detail above, but it is not limited to the embodiment described; those skilled in the art may, within their own knowledge, make various changes to it to achieve better results.
Claims (3)
1. A centrally controlled cache decision strategy in a content-centric network, characterized in that: the default cache-everywhere policy in content-centric networks suffers from large cache redundancy, and existing schemes all ignore the situation where local content request probabilities differ; the present invention addresses this scenario with a centrally controlled cache decision strategy that maximally reduces network traffic. The steps are as follows:
(1) The nodes send their statistics to the controller.
(2) From the collected statistics and the network topology, the controller centrally computes, under each node's limited cache capacity, which contents each node should cache, deterministically, so that the traffic generated in the network is minimized.
(3) The controller then generates a cache decision list for each router node, indicating which contents that node should cache.
(4) The controller issues the cache decision lists to the routers. When content passes through a router, the router first consults its cache decision list: if the content name appears in the list, the router caches the content; otherwise it does not. It then forwards the content to the next hop or delivers it to a local user.
2. The cache decision strategy according to claim 1, whose key module is the computing module of the controller, characterized in that the cache decision problem is modeled and an optimal solution is obtained, as follows:
The router nodes and the links between them in the content-centric network form a graph G = (V, E). Each node V_i has cache capacity C_i. The content set in the network is F = {f_1, f_2, ..., f_n}. Since a content-centric network splits files into equally sized blocks for transmission, we assume here that every content item has the same size, equal to one cache unit. p_ij denotes the probability that node i requests content j, and h_ij denotes the network traffic generated when node i requests content j, where one hop corresponds to one unit of traffic; h_ij is in fact the number of hops the request packet from node i for content j travels before a cache hit.
Our goal is to find, for each node i, the content set X_i = {x_i1, x_i2, ..., x_in} that it should cache, where x_ij is a binary variable: a value of 1 means node i caches content j, and 0 means it does not. The cache decision should minimize network traffic, which is expressed as Problem I:
min Σ_{i∈V} Σ_{j∈F} p_ij · h_ij
s.t. Σ_{j∈F} x_ij ≤ C_i, i ∈ V
x_ij ∈ {0,1}, i ∈ V, j ∈ F
Here, a_ik denotes the node at the k-th hop on the path along which node i's request is forwarded.
Clearly this problem is nonlinear. We convert it into a linear programming problem by the following construction. Let the function f_ijk denote the total number of nodes caching content j among the first k hops of node i's request path, i.e. f_ijk = Σ_{l=0}^{k} x_{a_il, j}. Then h_ij = max{(1 - f_ijk) · (k+1) | k = 0, 1, 2, 3, ...}. Introducing variables h'_ij with h'_ij ≥ (1 - f_ijk) · (k+1) for k = 0, 1, 2, 3, ..., we construct Problem II:
min Σ_{i∈V} Σ_{j∈F} p_ij · h'_ij
s.t. h'_ij ≥ (1 - f_ijk) · (k+1), i ∈ V, j ∈ F, k = 0, 1, 2, ...
Σ_{j∈F} x_ij ≤ C_i, i ∈ V
x_ij ∈ {0,1}, i ∈ V, j ∈ F
Problem II is an integer linear programming problem, and I and II have the same solution. Suppose (x*, h*) is an optimal solution of Problem I with objective value y. Since h*_ij = max{(1 - f_ijk) · (k+1) | k = 0, 1, 2, ...}, we have h*_ij ≥ (1 - f_ijk) · (k+1) for every k, so (x*, h*) is a feasible solution of Problem II, also with objective value y.
Conversely, suppose (x', h') is an optimal solution of Problem II with objective value y'; then h'_ij ≥ (1 - f_ijk) · (k+1) for i ∈ V, j ∈ F, k = 0, 1, 2, .... If h'_ij > (1 - f_ijk) · (k+1) held for every k, the objective value would be larger than that obtained with h'_ij = max_k {(1 - f_ijk) · (k+1)}; since Problem II is a minimization, the optimum must satisfy h'_ij = max_k {(1 - f_ijk) · (k+1)}. By the construction, h' then coincides with the h defined in Problem I, so (x', h') is an optimal solution satisfying Problem I, with objective value y' = y. Hence (x*, h*) and (x', h') are solutions of the same problem: I and II have the same solution, and this cache decision problem can accordingly be solved with any toolkit for integer linear programming.
3. The cache decision strategy according to claim 1, characterized in that a heuristic solution is proposed, with the following steps:
Step 1: using a method for solving the k-means problem, compute, for each c (c from 1 to |V|-1), the optimal positions of the c cache nodes, and judge node "importance" by the node added when going from c-1 to c; the node ranked 0 is the most important.
Step 2: for the i-th cache node, use a greedy algorithm to find the content j that currently minimizes network traffic, and add content j to the i-th node's cache list until the list is full.
Step 3: repeat the computation of step 2 for the (i+1)-th node until the cache lists of all nodes have been generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610058534.9A CN105681438B (en) | 2016-01-26 | 2016-01-26 | centralized content center network cache decision method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610058534.9A CN105681438B (en) | 2016-01-26 | 2016-01-26 | centralized content center network cache decision method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105681438A true CN105681438A (en) | 2016-06-15 |
CN105681438B CN105681438B (en) | 2019-12-13 |
Family
ID=56302780
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610058534.9A Active CN105681438B (en) | 2016-01-26 | 2016-01-26 | centralized content center network cache decision method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105681438B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107105043A (en) * | 2017-04-28 | 2017-08-29 | 西安交通大学 | A kind of content center network caching method based on software defined network |
CN107483587A (en) * | 2017-08-21 | 2017-12-15 | 清华大学 | A kind of power telecom network cache optimization method of content oriented |
CN108512685A (en) * | 2017-02-28 | 2018-09-07 | 北京大学 | Information centre's method for controlling network flow and device |
CN108600393A (en) * | 2018-05-30 | 2018-09-28 | 常熟理工学院 | A kind of Internet of Things data communication means based on name data |
CN108712458A (en) * | 2018-03-30 | 2018-10-26 | 中国科学院信息工程研究所 | Support the software defined network controller of content-control |
CN110324389A (en) * | 2018-03-30 | 2019-10-11 | 北京京东尚科信息技术有限公司 | Data transmission method, device and computer readable storage medium based on car networking |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103312725A (en) * | 2013-07-05 | 2013-09-18 | 江苏大学 | Content-centric networking cache judgment method based on node importance degrees |
CN104022911A (en) * | 2014-06-27 | 2014-09-03 | 哈尔滨工业大学 | Content route managing method of fusion type content distribution network |
CN104166630A (en) * | 2014-08-06 | 2014-11-26 | 哈尔滨工程大学 | Method oriented to prediction-based optimal cache placement in content central network |
-
2016
- 2016-01-26 CN CN201610058534.9A patent/CN105681438B/en active Active
Non-Patent Citations (1)
Title |
---|
L. VELTRI ET AL: "Supporting information-centric functionality in software defined networks", 《IEEE ICC》 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108512685A (en) * | 2017-02-28 | 2018-09-07 | 北京大学 | Information centre's method for controlling network flow and device |
CN108512685B (en) * | 2017-02-28 | 2020-10-13 | 北京大学 | Information center network flow control method and device |
CN107105043A (en) * | 2017-04-28 | 2017-08-29 | 西安交通大学 | A kind of content center network caching method based on software defined network |
CN107105043B (en) * | 2017-04-28 | 2019-12-24 | 西安交通大学 | Content-centric network caching method based on software defined network |
CN107483587A (en) * | 2017-08-21 | 2017-12-15 | 清华大学 | A kind of power telecom network cache optimization method of content oriented |
CN107483587B (en) * | 2017-08-21 | 2020-10-30 | 清华大学 | Content-oriented electric power communication network cache optimization method |
CN108712458A (en) * | 2018-03-30 | 2018-10-26 | 中国科学院信息工程研究所 | Support the software defined network controller of content-control |
CN110324389A (en) * | 2018-03-30 | 2019-10-11 | 北京京东尚科信息技术有限公司 | Data transmission method, device and computer readable storage medium based on car networking |
CN108712458B (en) * | 2018-03-30 | 2021-06-18 | 中国科学院信息工程研究所 | Software defined network controller supporting content control |
CN110324389B (en) * | 2018-03-30 | 2024-04-09 | 北京京东尚科信息技术有限公司 | Data transmission method and device based on Internet of vehicles and computer readable storage medium |
CN108600393A (en) * | 2018-05-30 | 2018-09-28 | 常熟理工学院 | A kind of Internet of Things data communication means based on name data |
CN108600393B (en) * | 2018-05-30 | 2020-08-18 | 常熟理工学院 | Internet of things data communication method based on named data |
Also Published As
Publication number | Publication date |
---|---|
CN105681438B (en) | 2019-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105681438A (en) | Centralized caching decision strategy in content-centric networking | |
CN106170024B (en) | System, method and node for data processing in software defined network | |
CN104335537B (en) | For the system and method for the multicast multipath of layer 2 transmission | |
CN102195855B (en) | Business routing method and business network | |
WO2017032253A1 (en) | Data transmission method, switch using same, and network control system | |
CN104717304B (en) | A kind of CDN P2P content optimizations select system | |
Pang et al. | SDN-based data center networking with collaboration of multipath TCP and segment routing | |
CN105812462B (en) | A kind of ICN method for routing based on SDN | |
CN109688056B (en) | Intelligent network control system and method | |
CN105553749B (en) | A kind of ICN logical topology construction methods based on SDN | |
CN104601485B (en) | The distribution method of network flow and the method for routing for realizing network flow distribution | |
Nour et al. | A distributed cache placement scheme for large-scale information-centric networking | |
CN107204919B (en) | POF-based edge fast routing and caching system and method | |
CN107105043B (en) | Content-centric network caching method based on software defined network | |
CN100452734C (en) | Global Internet topology knowledge-based P2P application construction method | |
CN106533733A (en) | CCN collaborative cache method and device based on network clustering and Hash routing | |
CN112399485A (en) | CCN-based new node value and content popularity caching method in 6G | |
Jingjing et al. | The deployment of routing protocols in distributed control plane of SDN | |
Aswini et al. | Artificial Intelligence Based Smart Routing in Software Defined Networks. | |
CN102025622B (en) | Method for realizing low-power consumption routing based on cognitive network | |
CN111865793B (en) | IPv6 network service customized reliable routing system and method based on function learning | |
Alduayji et al. | PF-EdgeCache: Popularity and freshness aware edge caching scheme for NDN/IoT networks | |
Vijaya Kumar et al. | Neural networks based efficient multiple multicast routing for mobile networks | |
Lin et al. | Proactive multipath routing with a predictive mechanism in software‐defined networks | |
Pasquini et al. | Bloom filters in a Landmark-based Flat Routing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |