CN106254446A - Cache placement method and device based on a content-centric network - Google Patents

Cache placement method and device based on a content-centric network

Info

Publication number
CN106254446A
CN106254446A (application CN201610617647.8A)
Authority
CN
China
Prior art keywords
node
caching
optimal path
content
Interest packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610617647.8A
Other languages
Chinese (zh)
Other versions
CN106254446B (en)
Inventor
赵彦平
李良
李海峰
庞振江
周小强
武穆清
赵敏
凌申
张勇
韩东锋
全明睿
王建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Beijing Smartchip Microelectronics Technology Co Ltd
Maintenance Branch of State Grid Shanxi Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Beijing Smartchip Microelectronics Technology Co Ltd
Maintenance Branch of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd, Beijing Smartchip Microelectronics Technology Co Ltd, Maintenance Branch of State Grid Shanxi Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201610617647.8A priority Critical patent/CN106254446B/en
Publication of CN106254446A publication Critical patent/CN106254446A/en
Application granted granted Critical
Publication of CN106254446B publication Critical patent/CN106254446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context

Abstract

The present invention relates to a cache placement method and device based on a content-centric network. The method includes: receiving and parsing a first Interest packet sent by a user; if no pre-stored Data packet corresponding to the content name in the first Interest packet exists, sending the content name to a controller; receiving, from the controller, the maximum cache decision index, the cache decision index for the current node, and the node information of the optimal path; storing the maximum cache decision index in the first Interest packet to generate a second Interest packet; sending the second Interest packet to the next node on the optimal path; receiving a Data packet that stores the data corresponding to the content name in the second Interest packet together with the maximum cache decision index; and sending the Data packet to the user. The invention enables network-wide path management and cache decision-making, selects the optimal node for caching, relieves network load pressure, and effectively improves network caching efficiency.

Description

Cache placement method and device based on a content-centric network
Technical field
The present invention relates to the communications field, and in particular to a cache placement method and device based on a content-centric network.
Background art
With the rapid development of the Internet, requirements on network distribution efficiency and security keep rising. The existing Transmission Control Protocol/Internet Protocol (TCP/IP) network cannot meet users' requirements for distribution efficiency and security.
Content-Centric Networking (CCN) has emerged in this context. A CCN network names content instead of addressing physical entities by number. It mainly uses two packet types: Interest packets and Data packets. An Interest packet is sent by a content requester and carries the name prefix of the required content; a Data packet is sent by the content source and carries the requested data in response. Every node in the network has certain caching and routing capabilities: according to the cache policy, a node can selectively cache content that users frequently access at suitable network nodes, so that when an identical content request arrives, the Data packet is served from the caching node.
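The two CCN packet types and a node's content-store lookup can be sketched as follows. This is a minimal illustration of the general CCN model described above, not code from the patent; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interest:
    """Interest packet: carries the name prefix of the requested content."""
    content_name: str

@dataclass
class Data:
    """Data packet: carries the content name and the corresponding data."""
    content_name: str
    payload: bytes

class CCNNode:
    """A CCN node with a simple content store (cache)."""
    def __init__(self):
        self.content_store = {}  # content name -> cached Data packet

    def on_interest(self, interest: Interest) -> Optional[Data]:
        # Serve from the local cache on a name match; on a miss the
        # Interest would be forwarded upstream toward the content source.
        return self.content_store.get(interest.content_name)

node = CCNNode()
node.content_store["/video/clip1"] = Data("/video/clip1", b"...")
hit = node.on_interest(Interest("/video/clip1"))
miss = node.on_interest(Interest("/video/clip2"))
```

A cache hit returns the stored Data packet directly; a miss returns nothing, which is the case that triggers the forwarding and controller interaction described later.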
Under the existing node caching strategy, the content of the Data packet obtained from the content source is cached on every node along the return path. This strategy causes severe cache content redundancy and greatly reduces the caching efficiency of the network.
The information disclosed in this Background section is only intended to increase understanding of the general background of the present invention, and should not be taken as an acknowledgement or any form of suggestion that this information constitutes prior art already known to a person skilled in the art.
Summary of the invention
Technical problem
In view of this, the technical problem to be solved by the present invention is how to provide a cache placement method and device based on a content-centric network that can improve the caching efficiency of the network.
Solution
To solve the above technical problem, in a first aspect the present invention provides a cache placement method based on a content-centric network, including:
receiving a first Interest packet sent by a user and parsing the first Interest packet, the first Interest packet storing a content name, where the content name is the name prefix of the content required by the user;
judging whether a pre-stored Data packet corresponding to the content name in the first Interest packet exists;
if no pre-stored Data packet corresponding to the content name in the first Interest packet exists, sending the content name to a controller;
receiving, from the controller, the maximum cache decision index, the cache decision index for the current node, and the node information of the optimal path, and storing the maximum cache decision index in the first Interest packet to generate a second Interest packet, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server, the cache decision index is calculated from the remaining cache measure and the request frequency of the multiple nodes on the optimal path, and the second Interest packet includes the content name stored in the first Interest packet and the maximum cache decision index;
sending the second Interest packet to the next node on the optimal path;
receiving a Data packet, where the Data packet stores the data corresponding to the content name in the second Interest packet and the maximum cache decision index;
sending the Data packet to the user.
In a second aspect the present invention provides a cache placement method based on a content-centric network, including:
receiving the cache decision index and the node information of the optimal path sent by the controller, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server, and the cache decision index is calculated from the remaining cache measure and the request frequency of the multiple nodes on the optimal path;
if no pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node on the optimal path, sending the second Interest packet to the next node after the current node on the optimal path according to the node information of the optimal path;
when no node on the optimal path has a pre-stored Data packet corresponding to the content name in the second Interest packet, sending the second Interest packet to the content server;
receiving a Data packet, where the Data packet stores the data corresponding to the content name and the maximum cache decision index, the maximum cache decision index being the maximum among the cache decision indexes of the multiple nodes on the optimal path;
obtaining the maximum cache decision index;
if the maximum cache decision index equals the cache decision index stored at the current node, storing the Data packet at the current node as a pre-stored Data packet, and sending the Data packet to the first node.
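The node-side caching decision above can be sketched as follows: a node stores the returning Data packet only when the maximum cache decision index carried in the packet equals its own index. The function and field names are illustrative assumptions, not taken from the patent.

```python
import math

def handle_data_packet(node_cache: dict, own_index: float,
                       content_name: str, payload: bytes,
                       max_index: float) -> bool:
    """Cache the Data packet only at the node whose cache decision index
    equals the maximum index carried in the packet; in every case the
    packet is then relayed downstream toward the first node."""
    if math.isclose(own_index, max_index):
        node_cache[content_name] = payload  # store as pre-stored Data packet
        return True
    return False  # index differs: only relay, do not cache

cache = {}
cached_here = handle_data_packet(cache, own_index=0.12,
                                 content_name="/video/clip1",
                                 payload=b"...", max_index=0.30)
```

Here the node's index (0.12) is below the maximum (0.30), so this node relays the packet without caching; only the single node whose index matches the maximum stores a copy, which is what avoids the cache redundancy of the prior-art on-path strategy.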
In one possible implementation, after receiving the cache decision index and the node information of the optimal path sent by the controller, the method further includes:
if a pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node, sending the pre-stored Data packet to the first node.
In a third aspect the present invention provides a cache placement method based on a content-centric network, including:
receiving the content name sent by the first node, and finding, according to the content name, the content server that stores the data corresponding to the content name;
calculating the optimal path from the first node to the content server, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server;
obtaining the cache space measure and the request frequency of the multiple nodes on the optimal path, and calculating the cache decision indexes of the multiple nodes on the optimal path from the cache space measure and the request frequency;
delivering the cache decision index of each of the multiple nodes to the respective node, and sending the maximum cache decision index to the node where the first Interest packet is located, where the first Interest packet stores a content name, and the maximum cache decision index is the maximum among the cache decision indexes of the multiple nodes on the optimal path.
In one possible implementation, calculating the cache decision indexes of the multiple nodes on the optimal path from the cache space measure and the request frequency includes:
obtaining the remaining cache space $space(R_i)$ of node $R_i$ and the sum of the remaining cache space of all nodes on the optimal path, where the cache space measure $Sp(R_i)$ of the node on the optimal path is the ratio of $space(R_i)$ to the sum of the remaining cache space of all nodes on the optimal path;
within a predetermined time, calculating the content request frequency $Re(C_{ij})$ of node $R_i$ according to the second formula:
$$Re(C_{ij}) = \frac{num(C_{ij})}{num(C_i)}$$
where $num(C_{ij})$ is the number of requests for content $C_j$ received by node $R_i$, and $num(C_i)$ is the number of requests for content $C_j$ received by all nodes on the optimal path;
calculating, according to the third formula, the cache decision index $Cache(R_{ij})$ of node $R_i$ for content $C_j$:
$$Cache(R_{ij}) = Sp(R_i) \times Re(C_{ij})$$
In a fourth aspect the present invention provides a cache placement method based on a content-centric network, including:
receiving a second Interest packet sent by a node on the optimal path, the second Interest packet storing a content name and the maximum cache decision index, where the maximum cache decision index is the maximum among the cache decision indexes of the multiple nodes on the optimal path;
obtaining the maximum cache decision index;
obtaining the data corresponding to the content name in the second Interest packet;
storing the data and the maximum cache decision index in a Data packet, and sending the Data packet.
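The server side of the fourth aspect amounts to copying the maximum cache decision index out of the second Interest packet into the Data packet alongside the requested data. A minimal sketch, with assumed packet field names:

```python
from dataclasses import dataclass

@dataclass
class Interest2:
    content_name: str
    max_cache_index: float   # inserted by the first node from the controller

@dataclass
class DataPacket:
    content_name: str
    payload: bytes
    max_cache_index: float   # echoed back so on-path nodes can decide to cache

def serve(interest: Interest2, store: dict) -> DataPacket:
    # Look up the requested data and echo the maximum cache decision index,
    # so that exactly one node on the return path will cache the reply.
    return DataPacket(interest.content_name,
                      store[interest.content_name],
                      interest.max_cache_index)

store = {"/video/clip1": b"frame-data"}
pkt = serve(Interest2("/video/clip1", 0.30), store)
```

Echoing the index in the Data packet is what lets each return-path node make the caching decision locally, without a second round trip to the controller.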
In a fifth aspect the present invention provides a cache placement device based on a content-centric network, including:
a first receiving module, configured to receive a first Interest packet sent by a user and parse the first Interest packet, the first Interest packet storing a content name, where the content name is the name prefix of the content required by the user;
a first judging module, configured to judge whether a pre-stored Data packet corresponding to the content name in the first Interest packet exists, and if not, to send the content name to a controller;
the first receiving module being further configured to receive, from the controller, the maximum cache decision index, the cache decision index for the current node, and the node information of the optimal path, and to store the maximum cache decision index in the first Interest packet to generate a second Interest packet, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server, the cache decision index is calculated from the remaining cache measure and the request frequency of the multiple nodes on the optimal path, and the second Interest packet includes the content name stored in the first Interest packet and the maximum cache decision index;
a first sending module, configured to send the second Interest packet to the next node on the optimal path;
the first receiving module being further configured to receive a Data packet, where the Data packet stores the data corresponding to the content name in the second Interest packet and the maximum cache decision index;
the first sending module being further configured to send the Data packet to the user.
In a sixth aspect the present invention provides a cache placement device based on a content-centric network, including:
a second receiving module, configured to receive the cache decision index and the node information of the optimal path sent by the controller, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server, and the cache decision index is calculated from the remaining cache measure and the request frequency of the multiple nodes on the optimal path;
a second sending module, configured to, when no pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node on the optimal path, send the second Interest packet to the next node after the current node on the optimal path according to the node information of the optimal path;
the second sending module being further configured to, when no node on the optimal path has a pre-stored Data packet corresponding to the content name in the second Interest packet, send the second Interest packet to the content server;
the second receiving module being further configured to receive a Data packet, where the Data packet stores the data corresponding to the content name and the maximum cache decision index, the maximum cache decision index being the maximum among the cache decision indexes of the multiple nodes on the optimal path;
a second obtaining module, configured to obtain the maximum cache decision index;
a second caching module, configured to, when the maximum cache decision index equals the cache decision index stored at the current node, store the Data packet at the current node as a pre-stored Data packet and send the Data packet to the first node.
In one possible implementation, the second sending module is further configured to, when a pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node, send the pre-stored Data packet to the first node.
In a seventh aspect the present invention provides a cache placement device based on a content-centric network, including:
a third receiving module, configured to receive the content name sent by the first node and to find, according to the content name, the content server that stores the data corresponding to the content name;
a third computing module, configured to calculate the optimal path from the first node to the content server, where the optimal path includes the multiple nodes that must be traversed from the first node to the content server;
a third obtaining module, configured to obtain the cache space measure and the request frequency of the multiple nodes on the optimal path and to calculate the cache decision indexes of the multiple nodes on the optimal path from the cache space measure and the request frequency;
a third sending module, configured to deliver the cache decision index of each of the multiple nodes to the respective node, and to send the maximum cache decision index to the node where the first Interest packet is located, where the first Interest packet stores a content name and the maximum cache decision index is the maximum among the cache decision indexes of the multiple nodes on the optimal path.
In one possible implementation, the third computing module is configured to:
obtain the remaining cache space $space(R_i)$ of node $R_i$ and the sum of the remaining cache space of all nodes on the optimal path, where the cache space measure $Sp(R_i)$ of the node on the optimal path is the ratio of $space(R_i)$ to the sum of the remaining cache space of all nodes on the optimal path;
within a predetermined time, calculate the content request frequency $Re(C_{ij})$ of node $R_i$ according to the second formula:
$$Re(C_{ij}) = \frac{num(C_{ij})}{num(C_i)}$$
where $num(C_{ij})$ is the number of requests for content $C_j$ received by node $R_i$, and $num(C_i)$ is the number of requests for content $C_j$ received by all nodes on the optimal path;
calculate, according to the third formula, the cache decision index $Cache(R_{ij})$ of node $R_i$ for content $C_j$:
$$Cache(R_{ij}) = Sp(R_i) \times Re(C_{ij})$$
In an eighth aspect the present invention provides a cache placement device based on a content-centric network, including:
a fourth receiving module, configured to receive a second Interest packet sent by a node on the optimal path, the second Interest packet storing a content name and the maximum cache decision index, where the maximum cache decision index is the maximum among the cache decision indexes of the multiple nodes on the optimal path;
a fourth obtaining module, configured to obtain the maximum cache decision index, and further configured to obtain the data corresponding to the content name in the second Interest packet;
a fourth sending module, configured to store the data and the maximum cache decision index in a Data packet and to send the Data packet.
Beneficial effects
The present invention provides a cache placement method and device based on a content-centric network. The controller calculates the optimal path from the first node to the content server; from the cache space measure and the request frequency of the nodes on the optimal path it calculates the cache decision indexes of the multiple nodes on the optimal path, delivers each node's cache decision index to the respective node, and sends the maximum cache decision index to the node where the Interest packet is located. A node can then compare its own cache decision index with the maximum cache decision index to decide whether to store the Data packet. By centralizing network-wide path management and cache decision-making in the controller, the optimal node is selected for caching, which relieves network load pressure and effectively improves network caching efficiency.
Further features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the description, illustrate exemplary embodiments, features and aspects of the present invention together with the description, and serve to explain the principles of the present invention.
Fig. 1 shows a flowchart of the cache placement method based on a content-centric network provided by an embodiment of the present invention;
Fig. 2 shows a schematic structural diagram of a cache placement device based on a content-centric network provided by an embodiment of the present invention;
Fig. 3 shows a schematic structural diagram of a cache placement device based on a content-centric network provided by an embodiment of the present invention;
Fig. 4 shows a schematic structural diagram of a cache placement device based on a content-centric network provided by an embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of a cache placement device based on a content-centric network provided by an embodiment of the present invention.
Detailed description of the invention
The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the protection scope of the present invention is not limited by the specific embodiments.
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention. Unless otherwise explicitly indicated, throughout the description and the claims the term "include" and its variants such as "comprising" or "including" shall be understood to include the stated elements or components without excluding other elements or components.
Here the word "exemplary" means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" should not be construed as preferred or advantageous over other embodiments.
In addition, in order to better illustrate the present invention, numerous specific details are given in the following detailed description. A person skilled in the art will understand that the present invention can equally be implemented without some of these details. In some instances, methods, means and elements well known to those skilled in the art are not described in detail, so as to highlight the gist of the present invention.
Embodiment 1
Fig. 1 shows a flowchart of the cache placement method based on a content-centric network provided by an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step S101: the first node receives a first Interest packet sent by a user and parses the first Interest packet. The first Interest packet stores a content name, which is the name prefix of the content required by the user.
Step S102: the first node judges whether a pre-stored Data packet corresponding to the content name in the first Interest packet exists in the cache of the first node; if not, it sends the content name to the controller.
Step S103: the controller receives the content name sent by the first node, finds, according to the content name, the content server that stores the data corresponding to the content name, and calculates the optimal path from the first node to the content server.
Here the optimal path includes the multiple nodes that must be traversed from the first node to the content server, and these nodes are ordered: a node closer to the first node is the downstream node of a node closer to the content server, and a node closer to the content server is the upstream node of a node closer to the first node.
Step S104: the controller obtains the cache space measure and the request frequency of the multiple nodes on the optimal path, calculates the cache decision indexes of the multiple nodes on the optimal path from the cache space measure and the request frequency, delivers each node's cache decision index to the respective node, and sends the maximum cache decision index to the node where the first Interest packet is located. The first Interest packet stores a content name, and the maximum cache decision index is the maximum among the cache decision indexes of the multiple nodes on the optimal path.
Specifically, for each node on the optimal path, the controller obtains the cache space measure and the request frequency of that node and calculates that node's cache decision index.
Step S105: the first node receives the maximum cache decision index sent by the controller and stores it in the first Interest packet to generate a second Interest packet. The second Interest packet includes the content name stored in the first Interest packet and the maximum cache decision index.
Step S106: each node on the optimal path receives its own cache decision index and the node information of the optimal path.
Specifically, each node's cache decision index is sent only to the corresponding node; that is, each node on the optimal path receives only its own cache decision index. The first node, where the current Interest packet is located, additionally receives the maximum cache decision index.
It should be noted that, in this embodiment, the execution order of step S105 and step S106 is not restricted.
Step S107: the first node sends the second Interest packet to the next node on the optimal path.
Step S108: the current node on the optimal path receives the second Interest packet; if no pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node, it sends the second Interest packet to the next node after the current node on the optimal path according to the node information of the optimal path.
Optionally, step S108 may also include: if a pre-stored Data packet corresponding to the content name in the second Interest packet exists at the current node, sending the pre-stored Data packet to the first node, without performing the subsequent steps. Specifically, after the current node receives the second Interest packet, if a pre-stored Data packet corresponding to the second Interest packet exists in the current node's cache, the pre-stored Data packet is sent to the downstream node of the current node, hop by hop, until it reaches the first node.
Step S109: when no node on the optimal path has a pre-stored Data packet corresponding to the content name in the second Interest packet, the second Interest packet is sent to the content server.
Specifically, after the current node receives the second Interest packet, if no pre-stored Data packet corresponding to the second Interest packet exists in the current node's cache, the second Interest packet is sent to the upstream node of the current node, hop by hop, until it reaches the content server, so as to obtain the data corresponding to the content name in the second Interest packet.
Step S110: the content server receives the second Interest packet, which stores the content name and the maximum cache decision index.
Specifically, the content server receives the second Interest packet sent by the node on the optimal path closest to the content server.
Step S111: the content server obtains the maximum cache decision index and the data corresponding to the content name in the second Interest packet, stores the data and the maximum cache decision index in a Data packet, and sends the Data packet.
Step S112: a node on the optimal path receives the Data packet, which stores the data corresponding to the content name and the maximum cache decision index.
Specifically, the Data packet may be sent by the content server or by the upstream node. When the Data packet is sent by the content server, the current node is the node on the optimal path closest to the content server.
Step S113: the current node on the optimal path obtains the maximum cache decision index; if the maximum cache decision index equals the cache decision index stored at the current node, the current node stores the Data packet as a pre-stored Data packet and sends the Data packet toward the first node.
If they are not equal, the Data packet simply continues along the optimal path toward the first node.
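Steps S111–S113 can be simulated end to end: the Data packet travels from the content server back along the optimal path, and only the node whose controller-assigned index equals the maximum ends up holding a copy. All values below are illustrative assumptions, not from the patent.

```python
def return_data_along_path(path_indices, max_index, content_name, payload):
    """Simulate S111-S113: the Data packet travels from the content server
    back toward the first node; each node compares its own cache decision
    index with the maximum carried in the packet and caches only on a match."""
    caches = [dict() for _ in path_indices]  # one content store per node
    for cache, idx in zip(caches, path_indices):
        if idx == max_index:                 # S113: cache only here
            cache[content_name] = payload
    return caches

# Controller-assigned cache decision indexes for nodes R1..R3
# (node nearest the content server first).
indices = [0.10, 0.35, 0.05]
caches = return_data_along_path(indices, max(indices), "/video/clip1", b"...")
```

With these indexes, only the middle node (index 0.35) caches the content, while the prior-art on-path strategy would have left a copy at all three nodes.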
Step S114, primary nodal point receives packet, and wherein, in packet, storage has and content in described second interest bag Data and described largest buffered that title is corresponding are adjudicated;
Step S115, is sent to user by described packet.
It should be noted that in the present embodiment, primary nodal point is for receiving user interest bag and sending data packets to user Node, present node is the node receiving interest bag or packet in optimal path, the next node of present node for work as Front nodal point next node on the transmit path.First interest bag is to store the content of the name prefix of content needed for promising user The interest bag of title, the second interest bag is that the largest buffered that storage has foregoing title and controller to calculate adjudicates the emerging of index Interest bag.
In this embodiment, the controller computes the optimal path from the first node to the content server; computes the cache decision index of each node in the optimal path from its cache-space metric and request frequency in that path; issues each node its own cache decision index; and sends the maximum cache decision index to the node holding the interest packet, so that a node can decide whether to store the data packet by checking whether its current cache decision index reaches the maximum. Centralizing path management and cache decisions for the whole network in the controller selects the optimal node for caching, relieves network load, and effectively improves network caching efficiency.
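As an illustration only (not part of the patent text), the per-node caching decision of steps S112-S113 can be sketched in Python; all class and field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DataPacket:
    content_name: str
    data: bytes
    cache_max: float   # maximum cache decision index, set by the content server

@dataclass
class Node:
    name: str
    cache_index: float                          # this node's own cache decision index
    store: dict = field(default_factory=dict)   # content name -> prestored data

def on_data_packet(node: Node, pkt: DataPacket) -> bool:
    """Step S113: cache the packet only at the node whose cache decision
    index equals the maximum carried in the packet; in either case the
    packet keeps travelling along the optimal path toward the first node."""
    if pkt.cache_max == node.cache_index:
        node.store[pkt.content_name] = pkt.data   # stored as prestored data
        return True
    return False

# The packet is cached at exactly the node whose index matches Cachemax.
path = [Node("R1", 0.10), Node("R2", 0.35), Node("R3", 0.20)]
pkt = DataPacket("/video/a", b"chunk", cache_max=0.35)
cached_at = [n.name for n in path if on_data_packet(n, pkt)]
```

On the return trip the comparison runs once per node, so the content is replicated exactly once on the path rather than at every hop, which is the redundancy reduction the embodiment claims.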
Embodiment 2
This embodiment further refines step 104 of embodiment 1. In step 104, computing the cache decision index of the nodes in the optimal path from the cache-space metric and the request frequency specifically includes the following. The optimal path comprises n nodes [R1, R2, ..., Ri, ..., Rn].
The controller obtains the remaining cache space space(Ri) of node Ri in the optimal path and the sum of the remaining cache space of all nodes in the optimal path. The cache-space metric Sp(Ri) of node Ri on the optimal path is the ratio of its remaining cache space space(Ri) to the sum of the remaining cache space of all nodes in the optimal path, see formula (1):

Sp(Ri) = space(Ri) / Σ(k=1..n) space(Rk)    (1)
Within a predetermined time, the content request frequency Re(Cij) of node Ri is computed according to the second formula:

Re(Cij) = num(Cij) / num(Cj)    (2)

where num(Cij) is the number of requests for content Cj received by node Ri, and num(Cj) is the number of requests for content Cj received by all nodes in the optimal path.
According to the third formula, the cache decision index Cache(Rij) of node Ri for content Cj is computed as:

Cache(Rij) = Sp(Ri) × Re(Cij)    (3)
Meanwhile, the maximum cache decision index Cachemax is sent to the first node, which stores Cachemax in the header of the first interest packet; the first interest packet with the maximum cache decision index stored in it becomes the second interest packet. The computation is:

Cachemax = Max{Cache(Rij)}    (4)
Specifically, once Cachemax is stored in the second interest packet, the second interest packet travels along the optimal path [R1, R2, ..., Ri, ..., Rn]. At each node it passes, the node's content cache is matched first: if the corresponding content is cached there, the data packet is returned directly and the request packet is removed; if there is no match, the request packet proceeds toward the content server. When the second interest packet reaches the content server, the content server returns the corresponding data packet with the maximum cache decision index Cachemax stored in it. As the data packet returns along the optimal path, each node compares its own Cache(Rij) with Cachemax to decide whether to cache the data packet at that node.
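Formulas (1)-(4) can be illustrated with a small numerical sketch (the numbers below are hypothetical, chosen only to make the arithmetic visible):

```python
def sp(spaces, i):
    """Formula (1): cache-space metric Sp(Ri) -- node Ri's remaining cache
    space divided by the total remaining cache space on the optimal path."""
    return spaces[i] / sum(spaces)

def re_freq(reqs, i):
    """Formula (2): request frequency Re(Cij) -- requests for content Cj
    seen at node Ri divided by the requests seen by all nodes on the path."""
    return reqs[i] / sum(reqs)

def cache_index(spaces, reqs, i):
    """Formula (3): Cache(Rij) = Sp(Ri) x Re(Cij)."""
    return sp(spaces, i) * re_freq(reqs, i)

# Three nodes R1..R3 on the optimal path (hypothetical values).
spaces = [100, 300, 100]   # remaining cache space of each node
reqs = [10, 30, 10]        # requests for content Cj seen at each node
indices = [cache_index(spaces, reqs, i) for i in range(3)]
cache_max = max(indices)   # formula (4): Cachemax = Max{Cache(Rij)}
```

With these values R2 has both the most free space (Sp = 0.6) and the most requests (Re = 0.6), so Cache(R2j) = 0.36 is the maximum and R2 is the node selected to cache the content.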
Thus, by having the controller compute the maximum cache decision index, the optimal node is selected for caching, which relieves network load and further improves network caching efficiency; the cache hit rate is effectively improved and redundancy is reduced.
Embodiment 3
Fig. 2 shows the structure of the content-centric-network cache placement apparatus provided by an embodiment of the present invention. As shown in Fig. 2, the apparatus 10 includes a first receiving module 110, a first judging module 120, and a first sending module 130.
The first receiving module 110 is configured to receive the first interest packet sent by the user and to parse it; the first interest packet stores a content name, which is the name prefix of the content requested by the user.
The first judging module 120 is configured to judge whether a prestored data packet corresponding to the content name in the first interest packet exists, and, if no such prestored data exists, to send the content name to the controller.
The first receiving module 110 is further configured to receive the maximum cache decision index sent by the controller, the cache decision index for the current node, and the node information of the optimal path, and to store the maximum cache decision index into the first interest packet to generate the second interest packet. The optimal path includes the nodes traversed from the first node to the content server; the cache decision indices are computed from the nodes' remaining-cache metrics and request frequencies in the optimal path; and the second interest packet includes the content name stored in the first interest packet and the maximum cache decision index.
The first sending module 130 is configured to send the second interest packet to the next node in the optimal path.
The first receiving module 110 is further configured to receive a data packet in which the data corresponding to the content name in the second interest packet and the maximum cache decision index are stored.
The first sending module 130 is further configured to send the data packet to the user.
It should be noted that the content-centric-network cache placement apparatus 10 provided in this embodiment may serve as the first node 1 in embodiments 1-2 and is applicable to all method embodiments of the present invention.
In this embodiment, when no prestored data corresponding to the content name in the first interest packet exists, the content name is sent to the controller, which computes the optimal path from the first node to the content server; computes the cache decision index of each node in the optimal path from its cache-space metric and request frequency in that path; issues each node its own cache decision index; and sends the maximum cache decision index to the node holding the interest packet, so that a node can decide whether to store the data packet by checking whether its current cache decision index reaches the maximum. Centralizing path management and cache decisions for the whole network in the controller selects the optimal node for caching, relieves network load, and effectively improves network caching efficiency.
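A minimal sketch of the first node's behavior described in this embodiment (all names are hypothetical; the controller interaction is stubbed out):

```python
def handle_first_interest(store, content_name, ask_controller):
    """First node: if prestored data for the name exists, return it directly
    (first judging module); otherwise ask the controller, embed the returned
    maximum cache decision index into the interest packet to form the second
    interest packet, and forward it to the first hop of the optimal path."""
    if content_name in store:
        return ("data", store[content_name])
    # Hypothetical controller call returning (Cachemax, optimal path).
    cache_max, path = ask_controller(content_name)
    second_interest = {"name": content_name, "cache_max": cache_max}
    return ("forward", second_interest, path[0])

# Usage with a stubbed controller.
controller = lambda name: (0.36, ["R2", "R3", "server"])
result = handle_first_interest({}, "/video/a", controller)
```

The same function returns the prestored data without any controller round-trip when the name is already cached locally, which is the short-circuit path of the first judging module.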
Embodiment 4
Fig. 3 shows the structure of the content-centric-network cache placement apparatus provided by an embodiment of the present invention. As shown in Fig. 3, the apparatus 20 includes a second receiving module 210, a second sending module 220, a second acquisition module 230, and a second cache module 240.
The second receiving module 210 is configured to receive the cache decision index and the node information of the optimal path sent by the controller, where the optimal path includes the nodes traversed from the first node to the content server and the cache decision index is computed from the nodes' remaining-cache metrics and request frequencies in the optimal path.
The second sending module 220 is configured to, when the current node in the optimal path holds no prestored data corresponding to the content name in the second interest packet, send the second interest packet to the next node of the current node in the optimal path according to the node information of the optimal path.
The second sending module 220 is further configured to, when no node in the optimal path holds prestored data corresponding to the content name in the second interest packet, send the second interest packet to the content server.
The second receiving module is further configured to receive a data packet in which the data corresponding to the content name and the maximum cache decision index are stored, where the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path.
The second acquisition module 230 is configured to obtain the maximum cache decision index.
The second cache module 240 is configured to, when the maximum cache decision index equals the cache decision index stored at the current node, store the data packet at the current node as prestored data and send the data packet to the first node.
Optionally, the second sending module 220 is further configured to, when the current node holds prestored data corresponding to the content name in the second interest packet, send the prestored data to the first node.
It should be noted that the content-centric-network cache placement apparatus 20 provided in this embodiment may serve as the current node 2 in embodiments 1-2 and is applicable to all method embodiments of the present invention.
The current node may act as the current node 2 in Fig. 1 or as the next node of a current node. The current node is a node in the optimal path that receives an interest packet or a data packet; the next node of the current node is the node that follows the current node on the transmission path.
In this embodiment, when no prestored data corresponding to the content name in the first interest packet exists, the content name is sent to the controller, which computes the optimal path from the first node to the content server; computes the cache decision index of each node in the optimal path from its cache-space metric and request frequency in that path; issues each node its own cache decision index; and sends the maximum cache decision index to the node holding the interest packet, so that a node can decide whether to store the data packet by checking whether its current cache decision index reaches the maximum. Centralizing path management and cache decisions in the controller selects the optimal node for caching, relieves network load, and effectively improves network caching efficiency.
Embodiment 5
Fig. 4 shows the structure of the content-centric-network cache placement apparatus provided by an embodiment of the present invention. As shown in Fig. 4, the apparatus 30 includes a third receiving module 310, a third computing module 320, a third acquisition module 330, and a third sending module 340.
The third receiving module 310 is configured to receive the content name sent by the first node and to look up, according to the content name, the content server that stores the data corresponding to the content name.
The third computing module 320 is configured to compute the optimal path from the first node to the content server, where the optimal path includes the nodes traversed from the first node to the content server.
The third acquisition module 330 is configured to obtain the cache-space metrics and request frequencies of the nodes in the optimal path and to compute, from those cache-space metrics and request frequencies, the cache decision indices of the nodes in the optimal path.
The third sending module 340 is configured to issue each node its own cache decision index and to send the maximum cache decision index to the node holding the first interest packet, where the first interest packet stores the content name and the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path.
In this embodiment, the optimal path from the first node to the content server is computed; the cache decision index of each node in the optimal path is computed from its cache-space metric and request frequency in that path; each node is issued its own cache decision index; and the maximum cache decision index is sent to the node holding the interest packet, so that a node can decide whether to store the data packet by checking whether its current cache decision index reaches the maximum. Centralizing path management and cache decisions for the whole network in the controller selects the optimal node for caching, relieves network load, and effectively improves network caching efficiency.
Optionally, the third computing module 320 is configured to:

obtain the remaining cache space space(Ri) of node Ri and the sum of the remaining cache space of all nodes in the optimal path, the cache-space metric Sp(Ri) of node Ri on the optimal path being the ratio of its remaining cache space space(Ri) to the sum of the remaining cache space of all nodes in the optimal path;

within a predetermined time, compute the content request frequency Re(Cij) of node Ri according to the second formula:

Re(Cij) = num(Cij) / num(Cj)    (2)

where num(Cij) is the number of requests for content Cj received by node Ri, and num(Cj) is the number of requests for content Cj received by all nodes in the optimal path; and

compute, according to the third formula, the cache decision index Cache(Rij) of node Ri for content Cj:

Cache(Rij) = Sp(Ri) × Re(Cij)    (3)
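The patent does not fix a routing algorithm for the optimal path; as one possible sketch, a fewest-hop breadth-first search could stand in for the third computing module's path computation (all node names below are hypothetical):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first search: fewest-hop path from src to dst, or None."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            # Walk the predecessor chain back to src, then reverse it.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

# First node "R1" to the content server through intermediate nodes.
adj = {
    "R1": ["R2", "R4"],
    "R2": ["R1", "R3"],
    "R3": ["R2", "server"],
    "R4": ["R1"],
    "server": ["R3"],
}
path = shortest_path(adj, "R1", "server")
```

Once the controller holds such a path, formulas (1)-(3) are evaluated over exactly the nodes it contains, and the maximum index over that node set gives Cachemax.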
It should be noted that the content-centric-network cache placement apparatus 30 provided in this embodiment may serve as the controller 3 in embodiments 1-2 and is applicable to all method embodiments of the present invention.
Thus, by having the controller compute the maximum cache decision index, the optimal node is selected for caching, which relieves network load and further improves network caching efficiency; the cache hit rate is effectively improved and redundancy is reduced.
Embodiment 6
Fig. 5 shows the structure of the content-centric-network cache placement apparatus provided by an embodiment of the present invention. As shown in Fig. 5, the apparatus 40 includes a fourth receiving module 410, a fourth acquisition module 420, and a fourth sending module 430.
The fourth receiving module 410 is configured to receive the second interest packet sent by a node in the optimal path; the second interest packet stores the content name and the maximum cache decision index, which is the maximum among the cache decision indices of the nodes in the optimal path.
The fourth acquisition module 420 is configured to obtain the maximum cache decision index.
The fourth acquisition module is further configured to obtain the data corresponding to the content name in the second interest packet.
The fourth sending module 430 is configured to store the data and the maximum cache decision index in a data packet and to send the data packet.
It should be noted that the content-centric-network cache placement apparatus 40 provided in this embodiment may serve as the content server 4 in embodiments 1-2 and is applicable to all method embodiments of the present invention.
In this embodiment, the controller computes the optimal path from the first node to the content server; computes the cache decision index of each node in the optimal path from its cache-space metric and request frequency in that path; issues each node its own cache decision index; and sends the maximum cache decision index to the node holding the interest packet, so that a node can decide whether to store the data packet by checking whether its current cache decision index reaches the maximum. Centralizing path management and cache decisions in the controller selects the optimal node for caching, relieves network load, and effectively improves network caching efficiency.
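The content server's role in this embodiment amounts to packing the requested data together with the received Cachemax into the returned data packet; a hypothetical sketch (names and dict layout are illustrative only):

```python
def build_data_packet(content_store, second_interest):
    """Content server: look up the data for the content name carried in the
    second interest packet and return a data packet carrying both the data
    and the maximum cache decision index copied from the interest packet."""
    name = second_interest["name"]
    return {
        "name": name,
        "data": content_store[name],                # the requested content
        "cache_max": second_interest["cache_max"],  # kept for downstream nodes
    }

# Usage with hypothetical content.
store = {"/video/a": b"chunk"}
packet = build_data_packet(store, {"name": "/video/a", "cache_max": 0.36})
```

Copying Cachemax from the interest packet into the data packet is what lets each node on the return path perform the comparison of step S113 without contacting the controller again.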
The foregoing description of specific illustrative embodiments of the present invention is given for purposes of illustration and description. It is not intended to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and their practical application, so as to enable those skilled in the art to make and use various exemplary embodiments of the invention as well as various alternatives and modifications thereof. The scope of the invention is intended to be defined by the claims and their equivalents.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.

Claims (12)

1. A cache placement method based on a content-centric network, characterized by comprising:
receiving a first interest packet sent by a user and parsing the first interest packet, the first interest packet storing a content name, the content name being the name prefix of the content requested by the user;
judging whether prestored data corresponding to the content name in the first interest packet exists;
if no prestored data corresponding to the content name in the first interest packet exists, sending the content name to a controller;
receiving the maximum cache decision index sent by the controller, the cache decision index for the current node, and the node information of the optimal path, and storing the maximum cache decision index into the first interest packet to generate a second interest packet, wherein the optimal path includes the nodes traversed from the first node to the content server, the cache decision indices are computed from the nodes' remaining-cache metrics and request frequencies in the optimal path, and the second interest packet includes the content name stored in the first interest packet and the maximum cache decision index;
sending the second interest packet to the next node in the optimal path;
receiving a data packet, wherein the data corresponding to the content name in the second interest packet and the maximum cache decision index are stored in the data packet; and
sending the data packet to the user.
2. A cache placement method based on a content-centric network, characterized by comprising:
receiving the cache decision index and the node information of the optimal path sent by the controller, wherein the optimal path includes the nodes traversed from the first node to the content server and the cache decision index is computed from the nodes' remaining-cache metrics and request frequencies in the optimal path;
if the current node in the optimal path holds no prestored data packet corresponding to the content name in the second interest packet, sending the second interest packet to the next node of the current node in the optimal path according to the node information of the optimal path;
when no node in the optimal path holds prestored data corresponding to the content name in the second interest packet, sending the second interest packet to the content server;
receiving a data packet in which the data corresponding to the content name and the maximum cache decision index are stored, wherein the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path;
obtaining the maximum cache decision index; and
if the maximum cache decision index equals the cache decision index stored at the current node, storing the data packet at the current node as prestored data and sending the data packet to the first node.
3. The cache placement method according to claim 2, characterized in that, after receiving the cache decision index and the node information of the optimal path sent by the controller, the method further comprises:
if the current node holds prestored data corresponding to the content name in the second interest packet, sending the prestored data to the first node.
4. A cache placement method based on a content-centric network, characterized by comprising:
receiving the content name sent by the first node, and looking up, according to the content name, the content server that stores the data corresponding to the content name;
computing the optimal path from the first node to the content server, wherein the optimal path includes the nodes traversed from the first node to the content server;
obtaining the cache-space metrics and request frequencies of the nodes in the optimal path, and computing, from the cache-space metrics and request frequencies, the cache decision indices of the nodes in the optimal path; and
issuing each node its own cache decision index, and sending the maximum cache decision index to the node holding the first interest packet, wherein the first interest packet stores the content name and the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path.
5. The cache placement method according to claim 4, characterized in that computing the cache decision indices of the nodes in the optimal path from the cache-space metrics and the request frequencies comprises:
obtaining the remaining cache space space(Ri) of node Ri and the sum of the remaining cache space of all nodes in the optimal path, the cache-space metric Sp(Ri) of node Ri on the optimal path being the ratio of its remaining cache space space(Ri) to the sum of the remaining cache space of all nodes in the optimal path;
within a predetermined time, computing the content request frequency Re(Cij) of node Ri according to the second formula:
Re(Cij) = num(Cij) / num(Cj)
where num(Cij) is the number of requests for content Cj received by node Ri, and num(Cj) is the number of requests for content Cj received by all nodes in the optimal path; and
computing, according to the third formula, the cache decision index Cache(Rij) of node Ri for content Cj:
Cache(Rij) = Sp(Ri) × Re(Cij).
6. A cache placement method based on a content-centric network, characterized by comprising:
receiving the second interest packet sent by a node in the optimal path, the second interest packet storing a content name and the maximum cache decision index, wherein the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path;
obtaining the maximum cache decision index;
obtaining the data corresponding to the content name in the second interest packet; and
storing the data and the maximum cache decision index in a data packet and sending the data packet.
7. A cache placement apparatus based on a content-centric network, characterized by comprising:
a first receiving module, configured to receive the first interest packet sent by a user and to parse the first interest packet, the first interest packet storing a content name, the content name being the name prefix of the content requested by the user;
a first judging module, configured to judge whether prestored data corresponding to the content name in the first interest packet exists, and, if no prestored data corresponding to the content name in the first interest packet exists, to send the content name to a controller;
the first receiving module being further configured to receive the maximum cache decision index sent by the controller, the cache decision index for the current node, and the node information of the optimal path, and to store the maximum cache decision index into the first interest packet to generate a second interest packet, wherein the optimal path includes the nodes traversed from the first node to the content server, the cache decision indices are computed from the nodes' remaining-cache metrics and request frequencies in the optimal path, and the second interest packet includes the content name stored in the first interest packet and the maximum cache decision index;
a first sending module, configured to send the second interest packet to the next node in the optimal path;
the first receiving module being further configured to receive a data packet in which the data corresponding to the content name in the second interest packet and the maximum cache decision index are stored; and
the first sending module being further configured to send the data packet to the user.
8. A cache placement apparatus based on a content-centric network, characterized by comprising:
a second receiving module, configured to receive the cache decision index and the node information of the optimal path sent by the controller, wherein the optimal path includes the nodes traversed from the first node to the content server and the cache decision index is computed from the nodes' remaining-cache metrics and request frequencies in the optimal path;
a second sending module, configured to, when the current node in the optimal path holds no prestored data corresponding to the content name in the second interest packet, send the second interest packet to the next node of the current node in the optimal path according to the node information of the optimal path;
the second sending module being further configured to, when no node in the optimal path holds prestored data corresponding to the content name in the second interest packet, send the second interest packet to the content server;
the second receiving module being further configured to receive a data packet in which the data corresponding to the content name and the maximum cache decision index are stored, wherein the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path;
a second acquisition module, configured to obtain the maximum cache decision index; and
a second cache module, configured to, when the maximum cache decision index equals the cache decision index stored at the current node, store the data packet at the current node as prestored data and send the data packet to the first node.
9. The cache placement apparatus according to claim 8, characterized in that the second sending module is further configured to, when the current node holds prestored data corresponding to the content name in the second interest packet, send the prestored data to the first node.
10. A cache placement apparatus based on a content-centric network, characterized by comprising:
a third receiving module, configured to receive the content name sent by the first node and to look up, according to the content name, the content server that stores the data corresponding to the content name;
a third computing module, configured to compute the optimal path from the first node to the content server, wherein the optimal path includes the nodes traversed from the first node to the content server;
a third acquisition module, configured to obtain the cache-space metrics and request frequencies of the nodes in the optimal path and to compute, from the cache-space metrics and request frequencies, the cache decision indices of the nodes in the optimal path; and
a third sending module, configured to issue each node its own cache decision index and to send the maximum cache decision index to the node holding the first interest packet, wherein the first interest packet stores the content name and the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path.
11. The cache placement apparatus according to claim 10, characterized in that the third computing module is configured to:
obtain the remaining cache space space(Ri) of node Ri and the sum of the remaining cache space of all nodes in the optimal path, the cache-space metric Sp(Ri) of node Ri on the optimal path being the ratio of its remaining cache space space(Ri) to the sum of the remaining cache space of all nodes in the optimal path;
within a predetermined time, compute the content request frequency Re(Cij) of node Ri according to the second formula:
Re(Cij) = num(Cij) / num(Cj)
where num(Cij) is the number of requests for content Cj received by node Ri, and num(Cj) is the number of requests for content Cj received by all nodes in the optimal path; and
compute, according to the third formula, the cache decision index Cache(Rij) of node Ri for content Cj:
Cache(Rij) = Sp(Ri) × Re(Cij).
12. A cache placement apparatus based on a content-centric network, characterized by comprising:
a fourth receiving module, configured to receive the second interest packet sent by a node in the optimal path, the second interest packet storing a content name and the maximum cache decision index, wherein the maximum cache decision index is the maximum among the cache decision indices of the nodes in the optimal path;
a fourth acquisition module, configured to obtain the maximum cache decision index, the fourth acquisition module being further configured to obtain the data corresponding to the content name in the second interest packet; and
a fourth sending module, configured to store the data and the maximum cache decision index in a data packet and to send the data packet.
CN201610617647.8A 2016-07-29 2016-07-29 Cache placement method and device based on a content-centric network Active CN106254446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610617647.8A CN106254446B (en) 2016-07-29 2016-07-29 Cache placement method and device based on a content-centric network


Publications (2)

Publication Number Publication Date
CN106254446A true CN106254446A (en) 2016-12-21
CN106254446B CN106254446B (en) 2019-07-02

Family

ID=57605419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610617647.8A Active CN106254446B (en) Cache placement method and device based on a content-centric network

Country Status (1)

Country Link
CN (1) CN106254446B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106790638A (en) * 2017-01-10 2017-05-31 北京邮电大学 Data transmission method and system based on active cache in name data network
CN108234319A (en) * 2017-12-29 2018-06-29 北京奇虎科技有限公司 The transmission method and device of a kind of data
CN108650070A (en) * 2018-05-11 2018-10-12 全球能源互联网研究院有限公司 A kind of System and method for of information centre's network phasor measurement unit communication

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468351A (en) * 2014-11-13 2015-03-25 北京邮电大学 SDN-based CCN route assisting management method, CCN forwarding device and network controller
CN104756449A (en) * 2012-11-26 2015-07-01 三星电子株式会社 Method of packet transmission from node and contentowner in content-centric networking
CN104901980A (en) * 2014-03-05 2015-09-09 北京工业大学 Popularity-based equilibrium distribution caching method for named data networking
CN105262833A (en) * 2015-10-30 2016-01-20 北京邮电大学 Cross-layer catching method and node of content centric network


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106790638A (en) * 2017-01-10 2017-05-31 北京邮电大学 Data transmission method and system based on active cache in name data network
CN106790638B (en) * 2017-01-10 2019-10-11 北京邮电大学 Name data transmission method and system based on active cache in data network
CN108234319A (en) * 2017-12-29 2018-06-29 北京奇虎科技有限公司 The transmission method and device of a kind of data
CN108234319B (en) * 2017-12-29 2021-10-19 北京奇虎科技有限公司 Data transmission method and device
CN108650070A (en) * 2018-05-11 2018-10-12 全球能源互联网研究院有限公司 A kind of System and method for of information centre's network phasor measurement unit communication

Also Published As

Publication number Publication date
CN106254446B (en) 2019-07-02

Similar Documents

Publication Publication Date Title
Manzoor et al. Performance analysis and route optimization: redistribution between EIGRP, OSPF & BGP routing protocols
US7339937B2 (en) Wide-area content-based routing architecture
CN102523166B (en) Structured network system applicable to future internet
CN104780205B (en) The content requests and transmission method and system of content center network
CN104836732B (en) The automatic selecting method and system of network connection
CN108476208A (en) Multi-path transmission designs
CN104937901B (en) For providing the method for the traffic engineering of routing and storage in the network of content oriented
EP3021537B1 (en) Method, device and system for determining content acquisition path and processing request
CN104639512B (en) Network security method and equipment
CN106254446A (en) A kind of caching laying method based on content center network and device
CN105099944B (en) A kind of data cached method and forwarding unit
CN106302630A (en) Transmit private data and data object
CN105933234A (en) Node management method and system in CDN network
CN106537824B (en) Method and apparatus for the response time for reducing information centre's network
CN104301305B (en) Interest bag is forwarded under information centre's network method and forwarding terminal
CN105872008A (en) System and method for on-demand content exchange with adaptive naming in information-centric networks
CN106922008A (en) A kind of IPv6 wireless sense network multi-path transmission methods based on RPL Routing Protocols
CN106210116A (en) A kind of differentiation based on content center network storage method and device
CN108924825A (en) A kind of high energy efficiency trust management and credible routing method towards SDWSNs
CN103379035A (en) Transport system, central control computer, and transport method
CN108093056A (en) Information centre's wireless network virtualization nodes buffer replacing method
CN106230723A (en) A kind of message forwarding cache method and device
CN103368798B (en) The method and network components of addressing based on content in data transmission network
Li et al. A smart routing scheme for named data networks
Zhang et al. Multi-path interests routing scheme for multi-path data transfer in content centric networking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant