CN105049254A - Data caching substitution method based on content level and popularity in NDN/CCN

Data caching substitution method based on content level and popularity in NDN/CCN

Info

Publication number
CN105049254A
CN105049254A (application CN201510460211.8A; granted as CN105049254B)
Authority
CN
China
Prior art keywords
data
node
request
grade
packet
Prior art date
Legal status
Granted
Application number
CN201510460211.8A
Other languages
Chinese (zh)
Other versions
CN105049254B (en)
Inventor
黄胜
滕明埝
何玉杰
向劲松
刘焕淋
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201510460211.8A priority Critical patent/CN105049254B/en
Publication of CN105049254A publication Critical patent/CN105049254A/en
Application granted granted Critical
Publication of CN105049254B publication Critical patent/CN105049254B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a data cache replacement method based on content level and popularity in NDN/CCN and belongs to the technical field of Internet communications. In the method, under the premise of guaranteeing the performance of the whole NDN/CCN network, the data is cached at a node as close to the client as possible when the position of the data placement node is selected; meanwhile, to guarantee the performance of the whole network, both the average request level and the request frequency of the data at the caching node are required to be relatively high. In addition, when data needs to be cached at a node whose cache space is insufficient, the weight value of the data at that node is calculated from the request levels of the data at the node and the request frequencies of the data in different time slots, and on the basis of this weight value it is judged whether existing data in the cache should be deleted in order to store the new data. Data with a high request level and popularity is retained, which improves the cache space utilization efficiency of the whole network.

Description

Data cache replacement method based on content level and popularity in NDN/CCN
Technical field
The invention belongs to the technical field of Internet communications and relates to a data cache replacement method based on content level and popularity in NDN/CCN.
Background technology
With the development of the Internet, new services and applications keep emerging, causing data traffic in the network to grow explosively, while users' requirements for data requests keep rising. Against this background, the traditional host-centric network architecture can no longer keep up with the development of today's Internet. To break free from the constraints of the traditional TCP/IP architecture, researchers have designed brand-new Internet architectures. In this research on new network structures, data is regarded as the center of the network, and Information-Centric Networking (ICN), as a clean-slate design philosophy for the future Internet, has become an important model of future Internet design. Named Data Networking (NDN) / Content-Centric Networking (CCN), as a typical ICN architecture instance, replaces IP with named data at the intermediate layer, adopts a request-response transmission pattern, routes directly on content names, and achieves efficient point-to-multipoint content distribution. Most importantly, an NDN/CCN node adds a Content Store (CS) for storing data, so every NDN/CCN node can cache the packets it processes; when an identical request arrives at the node again, the corresponding data in the CS can be returned directly without forwarding the request to the server node. This improves the user experience when requesting data and also relieves the load on the server.
In this data- (or content-) centric future Internet architecture, there are mainly two types of packets. The first is the Interest packet, which contains the name of the data and is sent when a user requests certain data; it mainly carries the data name, optional selector fields and a nonce. In this patent, so that the node receiving the Interest can learn the request situation of the data at the node that forwarded it, the average request level and total request frequency of the current request at the present node are also added to the Interest packet. The second is the Data packet, which matches the requested name and carries the corresponding content; it mainly carries information such as the data name and the data itself.
Besides the CS module mentioned above, an NDN/CCN node also has a Pending Interest Table (PIT) module and a Forwarding Information Base (FIB) module. When an Interest packet arrives at a node, the node first searches the CS, using the requested data name, to see whether the data corresponding to the request is already stored; if so, the Data packet is returned. Otherwise, the node searches the PIT for an identical request entry: if one exists, the interface on which the Interest arrived is added to the corresponding entry; if not, a new PIT entry for the request is added and the node's FIB is searched. If the FIB contains a forwarding interface corresponding to the request, the Interest is forwarded to the next node through that interface; otherwise, the Interest is discarded. When a Data packet arrives, the node looks up the entry corresponding to the packet name in the PIT and, according to the interfaces on which the Interest entered the node, sends the Data packet toward the requesting client node along the reverse path of the Interest.
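For illustration, the lookup order just described can be summarized in a short Python sketch. It only reflects the standard CS, PIT and FIB processing and is not code from the patent; all class, method and field names are illustrative assumptions.

```python
# Minimal sketch of the standard NDN/CCN forwarding pipeline described above.
# Class, method and field names are illustrative, not taken from the patent.

class NdnNode:
    def __init__(self):
        self.cs = {}    # Content Store: data name -> data packet
        self.pit = {}   # Pending Interest Table: data name -> set of incoming faces
        self.fib = {}   # Forwarding Information Base: data name -> outgoing face

    def on_interest(self, name, in_face):
        if name in self.cs:                    # 1. CS lookup: cached copy found
            return ("data", self.cs[name], in_face)
        if name in self.pit:                   # 2. PIT hit: aggregate the request
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        out_face = self.fib.get(name)          # 3. FIB lookup
        if out_face is None:                   # no route: discard the Interest
            return ("dropped", None, None)
        self.pit[name] = {in_face}             # create a new PIT entry
        return ("forward", None, out_face)

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())      # reverse-path faces recorded in the PIT
        return [(face, data) for face in faces]
```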
The literature (R. Chiocchetti, D. Perino, G. Carofiglio, et al., "INFORM: a dynamic Interest Forwarding Mechanism for Information Centric Networking", ACM SIGCOMM Workshop on Information-Centric Networking, Hong Kong, China, 2013: 9-14) points out that, in NDN, reasonable content placement and cache decisions are key factors for fully exploiting network performance. However, because NDN floods caches indiscriminately along the path (Cache Everything Everywhere, CE2) and its cache decisions take neither content popularity nor data request level into account, the optimization of content storage cannot be achieved and user demands cannot be met well. The literature (Li Tao, Li Yuhong, "A new method based on the heat of the content of NDN cache replacement algorithm", China Science and Technology Paper Online, 2012) points out that a heat value can be computed and kept in a Content Popularity Table (CPT) newly added to NDN, and the content with the lowest heat is replaced during cache replacement; although content popularity is considered, the data request level is not, so the user experience when requesting data cannot be effectively improved.
The above NDN caching and replacement strategies can improve the performance of the whole network to a certain extent. However, like most current NDN caching and replacement strategies, they ignore the differences between clients' demands for the data. Different clients often have different demands for the same data: certain data may be requested by a given user at a low frequency, yet have a very high level for that user (i.e., the data is very important to the user), so necessary service (here mainly referring to caching and replacement of the data) still needs to be provided for such data. Moreover, most strategies ignore the timeliness of the data weight value when obtaining it. Therefore, the present invention obtains a weight value that more reasonably reflects the current situation of the data, according to the request level of the data and its request frequencies in different time periods.
Summary of the invention
In view of this, the object of the present invention is to provide a data cache replacement method based on content level and popularity in NDN/CCN. Under the premise of guaranteeing the performance of the whole NDN/CCN network, the method takes the user's request level for the data as an important basis for data storage, so that necessary service can be provided for important data and user satisfaction with data requests is improved. Moreover, without affecting requests in the whole network, important data is cached at nodes as close to the client as possible, which effectively reduces the transmission distance and delay of Data packets when users request data, avoids waste of network resources, and to a certain extent relieves network congestion and server load.
To achieve the above object, the present invention provides the following technical scheme:
A data cache replacement method based on content level and popularity in NDN/CCN comprises a data caching method and a data replacement method.
In the data caching method, according to the average request level and request frequency of the data at the nodes, a data storage node is selected according to the data storage conditions from among all the nodes through which the Data packet passes on its way back to the client.
In the data replacement method, when the cache space of a node is insufficient, the node obtains a weight value reflecting the current situation of the data according to the client users' request levels for the data and the request frequencies of the corresponding levels in different time periods, and judges on the basis of this weight value whether existing data in the cache should be deleted in order to store the new data; if the cache space is sufficient to store the data, the position at which the data is stored in the cache is selected according to this weight value.
Further, the data caching method specifically comprises the following steps:
First, when a Data packet arrives at a node, the data storage marker d in the packet is checked; d marks the distance between the present node and the node where the data was last stored. If d is less than Δd, the data is passed to the next-hop node; otherwise, the average request level and request frequency of the requesting clients at this node and at the next-hop node are compared.
Next, whether the data is stored at this node is determined by q̄_i - q̄_{i-d1} ≤ q and F_ti - F_{ti-d1} ≤ f; if both conditions are satisfied, the data is passed to the next node; otherwise, the data is stored at this node and the marker d in the packet is set to 0, and d is increased by 1 each time the packet is passed to a next-hop node.
Further, when it is determined that the data is to be stored at a node, if the cache space of that node is insufficient, the weight value W_i reflecting the request situation of the data at the present node is obtained according to the request levels of the data at the present node and the request frequencies of the data at each level in different time periods, and the node replaces and stores the data according to W_i.
The flow of the Interest packet in the method is as follows:
1) When the Interest packet arrives at a node, the arrival time of this request is recorded, and the average request level and request frequency of this request at the previous-hop node are obtained from the Interest packet;
2) The CS of the present node is searched for data corresponding to the request; if there is none, go to step 3), otherwise the corresponding Data packet is returned;
3) The PIT is searched for an entry corresponding to the request; if there is one, the interface on which the request arrived at the node is added to that entry; if not, a PIT entry corresponding to the request is added to the pending interest table and step 4) is performed;
4) The average request level and request frequency of this request at the present node are calculated and added to the Interest packet, then go to step 5);
5) The FIB is searched and the request is forwarded to the next-hop node;
6) Steps 1) to 5) are repeated in the same way until the request is forwarded to a node whose CS contains the corresponding data, or to the server.
Further, the flow of the Data packet in the method is as follows:
1) When a node has the data corresponding to the request, the corresponding Data packet is returned and the data cache marker d is added to the packet;
2) When the Data packet arrives at a node, the data cache marker d is first checked; if d is less than Δd, the data was stored at a node not far upstream, and, to avoid excessive redundancy of the data in the network, the data is not stored, go to step 6); otherwise go to step 3);
3) The average request level and request frequency of the data at the present node are calculated and compared with the corresponding values of the next-hop node carried in the Interest packet, to judge whether the data can be stored at the next node; if the next node meets the caching condition, then, in order to cache the data at a position close to the client, the data is passed to the next node and the data cache marker in the packet is increased by 1, go to step 2); if the next node does not meet the caching condition or the next node is the client node, the data is stored at the present node, go to step 4);
4) If the data is to be stored at the node, the weight value of the data at the present node is calculated according to the request levels and the request frequencies of the request in different time periods;
5) Check whether the CS of the present node has space to store the data; if so, the data is stored and the data cache marker in the packet is set to 0; otherwise, judge whether the weight value of the current data is greater than the minimum weight value in the CS; if it is not greater, the data cannot be cached, the corresponding data cache marker in the packet is increased by 1 and the data is sent to the next node, go to step 6); otherwise, the data with the minimum weight is deleted and step 5) is performed again;
6) Steps 2) to 5) are performed again until the Data packet reaches the corresponding client.
The beneficial effects of the present invention are as follows:
The present invention mainly considers the influence of clients' request levels for the data and users' request frequencies for the data on the user experience. When selecting the location of the data placement node, in order to improve the user experience when requesting data, the present invention caches the data at a node as close to the client as possible; at the same time, in order to guarantee the performance of the whole network, both the average request level and the request frequency of the data at the caching node are required to be relatively high. In addition, when data needs to be cached at a node whose cache space is insufficient, the weight value of the data at that node is calculated from the request levels of the data at the node and the request frequencies of the data in different time periods, and on the basis of this weight value it is judged whether existing data in the cache needs to be deleted in order to store the new data. Retaining data with a high request level and popularity helps improve the utilization efficiency of the cache space of the whole network. The present invention therefore effectively improves the user experience of data requests and the performance of the whole NDN/CCN network.
Brief description of the drawings
To make the object, technical scheme and beneficial effects of the present invention clearer, the present invention provides the following drawings for description:
Fig. 1 is the flow chart of the Interest packet in the present invention;
Fig. 2 is the flow chart of the Data packet in the present invention;
Fig. 3 is a schematic diagram of the strategy.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The method of the present invention consists of two parts: a data storage method based on data request level and frequency, and a data replacement method based on request level and frequency. The data storage method takes into account the users' request levels for the data and the frequency of users' requests and preferentially stores the data at nodes near the client, so that subsequent identical requests can be answered more quickly with cached copies; this improves response efficiency and ensures that data with a high request level is protected as far as possible. The data replacement method means that, when the cache space is insufficient, the weight value of the data at the node is obtained from the request level of the data and its request frequencies in different time periods; if this weight value is greater than the minimum weight value of the existing data in the node's cache, the data with the minimum weight value is replaced so that the new data can be stored, which not only makes the cached data better meet users' demands but also improves the performance of the whole network.
Therefore, before data is stored at a node, it needs to be processed according to the data storage method based on data request level and frequency and the replacement method based on request level and frequency; the details are as follows.
Data storage method based on data request level and frequency:
The storage method mainly determines, when the data is returned from the server, at which nodes the data is placed, so that more users can be served or a larger performance improvement can be brought to the whole network. This patent mainly considers the influence of the users' request levels and request frequencies on the user experience; therefore, at which node the data is placed is weighed mainly by the average request level and request frequency of the data at the node. Meanwhile, since caching data at a node near the client allows user requests to be answered more quickly and improves the experience of users requesting data, the present invention places the data at nodes as close to the client as possible.
In order to better capture the users' request situation for the data at a node, each node maintains a newly created Information Table of Data Request (ITDR) to record the information of each request for the data at this node. Its format is shown in Table 1.
Table 1 ITDR
The ITDR contains the clients' request levels (grade) for the data and the request frequencies (frequency) of the data at each level. Therefore, the total request frequency F_ti of data i at the present node can be obtained from the data request information table (ITDR), as shown in formula (1):
F_{ti} = \sum_{q_i=\min\{q_i\}}^{\max\{q_i\}} f_{iq_i}    (1)
where F_ti is the total request frequency of data i at the present node, q_i is the request level of data i, min{q_i} is the lowest request level of request i at the node, max{q_i} is the highest request level of request i at the node, and f_{iq_i} is the request frequency of request i at level q_i.
Meanwhile, the average request level of the request at the node can be obtained from formula (1) and the ITDR, as shown in formula (2):
\overline{q_i} = \frac{\sum_{q_i=\min\{q_i\}}^{\max\{q_i\}} q_i f_{iq_i}}{F_{ti}}    (2)
where the numerator is the sum, over all users' requests for data i at the present node, of the products of the request level and the corresponding request frequency, and F_ti is the total request frequency of data i at the present node.
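For illustration, formulas (1) and (2) can be computed directly from an ITDR entry. The following Python sketch assumes the entry is represented as a mapping from request level to request frequency; this representation and the function names are illustrative, not the patent's own data structure.

```python
# Minimal sketch of formulas (1) and (2): total request frequency and average
# request level of data i, computed from the node's ITDR entry for that data.

def total_request_frequency(itdr_entry):
    """Formula (1): F_ti = sum of f_{i,q} over all recorded levels q."""
    return sum(itdr_entry.values())

def average_request_level(itdr_entry):
    """Formula (2): frequency-weighted average of the request levels."""
    f_ti = total_request_frequency(itdr_entry)
    if f_ti == 0:
        return 0.0
    return sum(q * f for q, f in itdr_entry.items()) / f_ti

# Example: data i was requested 80 times at level 5 and 20 times at level 3.
itdr_entry = {5: 80, 3: 20}
print(total_request_frequency(itdr_entry))  # 100
print(average_request_level(itdr_entry))    # 4.6
```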
The request situation of the data at a node can be obtained from formula (1) and formula (2): formula (1) embodies the request popularity of the data at the node, and formula (2) embodies the importance of the data at the node. If the data has a high average request level and a high request frequency at a node, the data is very important at this node, i.e., many clients request the data at this node and the data is very important to the requesters, so storing the data at such a node can be considered. However, if the next-hop node has an average request level and request frequency close to those of the present node, then, in order to let clients obtain the corresponding data faster, the data can be passed to the next-hop node, where whether to store it is judged again.
In order to let the present node know the request information of data i at the next-hop node and thereby judge whether to store the data at this node, whenever a request is sent from the present node to the next node, the average request level q̄_i of data i at the present node and the total request frequency F_ti are carried to the next-hop node. The structure of the Interest packet is then as shown in Table 2.
Table 2 Interest Packet
When the Data packet is sent back from the responding node, the node decides how to place the data by comparing its own average request level and total request frequency F_ti with the average request level and total request frequency F_{ti-d1} of the next-hop node; the placement comparison is shown in formulas (3) and (4):
\overline{q_i} - \overline{q_{i-d1}} \le q, \quad q > 0    (3)
F_{ti} - F_{ti-d1} \le f, \quad f > 0    (4)
where q̄_i is the average request level of the data at the present node and q̄_{i-d1} is the average request level of the data at the next node; F_ti is the total request frequency of the data at the present node and F_{ti-d1} is the total request frequency of the data at the next node.
When formula (3) and formula (4) are satisfied simultaneously, the next node meets the caching condition for the Data packet; in order to cache the packet at a node closer to the client, the packet is passed to the next-hop node, and, to avoid producing too much redundant data in the network, the present node does not store the data. Formulas (3) and (4) are considered here to prevent data whose request level or request frequency is very low from being stored at a node closer to the client, which would waste cache space that contributes more to the user experience.
Meanwhile, since q and f are constants, the placement of the data can be adjusted by tuning the values of q and f.
If the data should be cached closer to the client node, these two constants can be made larger; otherwise, their values are reduced. When formula (3) and formula (4) are not both satisfied, the next node does not meet the data caching condition; the data is then stored at the present node and the data cache marker d is added to the Data packet, with d equal to 0 at this moment. Each time the packet is passed to a next-hop node, the marker distance d is increased by one. This value effectively reduces the redundant copies of the data in the network while ensuring that a certain number of samples of the data exist in the whole network to answer client requests. When d is greater than or equal to the set value Δd (i.e. d ≥ Δd), the node again judges by formulas (3) and (4) whether it can store the data. Here the size of Δd is determined by the network conditions: it should not be set too large, otherwise the nodes caching the data would be too far apart and users might find no data in the network to satisfy their requests; nor should it be set too small, otherwise the nodes caching the data would be too close together, increasing data redundancy and wasting node caches. The remaining nodes select data placement nodes in the same way.
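For illustration, the placement decision built on formulas (3) and (4), together with the distance marker d and the threshold Δd, can be sketched as follows; the constant values and parameter names are illustrative assumptions.

```python
# Minimal sketch of the placement decision of formulas (3) and (4) combined with
# the distance marker d and the threshold DELTA_D. Values are illustrative.

Q_THRESHOLD = 0.5   # constant q in formula (3), tunable
F_THRESHOLD = 20    # constant f in formula (4), tunable
DELTA_D = 3         # minimum hop distance between cached copies

def should_forward_without_caching(d, avg_level_here, avg_level_next,
                                   freq_here, freq_next):
    """Return True if the Data packet should be passed on without caching here."""
    if d < DELTA_D:
        # A copy was cached not far upstream; avoid redundant copies in the network.
        return True
    level_ok = (avg_level_here - avg_level_next) <= Q_THRESHOLD   # formula (3)
    freq_ok = (freq_here - freq_next) <= F_THRESHOLD              # formula (4)
    # Both conditions satisfied: the next node is at least as good a place,
    # so push the data one hop closer to the client.
    return level_ok and freq_ok
```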
Data replacement method based on data request level and frequency:
When the data reaches a caching node, if the cache space of the node is sufficient, the current data is stored in the node directly. If the cache space is insufficient, data in the cache needs to be replaced. The replacement of data is determined mainly by the weight values of the data: when data in the cache is replaced, the data with the minimum weight value is replaced first; if the space is still insufficient, the data with the next-smallest weight value is replaced, and so on, until the new data can be stored in the cache. If the weight value of the current data is less than the minimum weight value in the cache, the existing data in the cache cannot be replaced and the current data is not cached. In the present invention, the weight value of the data is still obtained from the clients' request levels for the data at this node and the corresponding request frequencies in different time periods. The weight value of data i at the present node is shown in formula (5):
W_i = \sum_{q=\min\{q_i\}}^{\max\{q_i\}} w_{iq}    (5)
where w_{iq} is the weight of data i at level q, and W_i is the total weight value of data i.
w_{iq_{current}} = \frac{q_{current}}{\sum_{q=\min\{q_i\}}^{\max\{q_i\}} q} \left[ \theta_1 \sum_{t-\Delta t}^{t} f_{iq}(t) + \theta_2 \sum_{t-2\Delta t}^{t-\Delta t} f_{iq}(t) + \theta_3 \sum_{t-3\Delta t}^{t-2\Delta t} f_{iq}(t) \right]    (6)
where q_{current} is the request level of the current request for data i, the denominator sums all request levels of data i at the present node, t is the current time, f_{iq}(t) is the request frequency of the data at level q within the given time period, and θ_i (i ∈ {1,2,3}) is the proportion assigned to the request frequency of data i in each time period, with θ_1 + θ_2 + θ_3 = 1 and θ_1 > θ_2 > θ_3 > 0. The purpose of this ordering is to exclude the situation in which the request frequency of the data was very high long ago but has been very low recently, so that a high total request frequency would fail to reflect the current situation of the data. By stipulating that time periods closer to the current time carry a larger proportion of the request frequency, the weight value is guaranteed to reflect the current situation of the data in real time. Formula (6) of the present invention only takes the request frequencies of the three most recent time periods; a larger time span can be obtained by adjusting Δt, and t-Δt, t-2Δt and t-3Δt are required to be meaningful. As shown in Fig. 1, when a Data packet arrives at node 4 and it is determined that the data is to be placed at node 4, if the cache space of node 4 is insufficient, the weight of the data is calculated by formulas (5) and (6) and compared with the weight values of the other data; if the weights of other data are smaller, they are replaced by data i until the storage space can hold data i; otherwise, the existing data in the node is not replaced.
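For illustration, the weight value of formulas (5) and (6) and the replacement rule built on it can be sketched as follows; the three-window bookkeeping, the θ values and the cache representation are illustrative assumptions rather than the patent's data structures.

```python
# Minimal sketch of the weight value of formulas (5) and (6) and of the
# minimum-weight replacement rule.

THETA = (0.5, 0.3, 0.2)   # theta_1 > theta_2 > theta_3 > 0, sum equals 1

def level_weight(q, all_levels, windowed_freqs):
    """Formula (6): weight of the data at level q.

    windowed_freqs[k] is the request frequency at level q in the k-th most
    recent time window of length delta_t (k = 0, 1, 2).
    """
    recency_term = sum(theta * f for theta, f in zip(THETA, windowed_freqs))
    return q / sum(all_levels) * recency_term

def total_weight(per_level_windowed_freqs):
    """Formula (5): W_i is the sum of the per-level weights."""
    all_levels = list(per_level_windowed_freqs.keys())
    return sum(level_weight(q, all_levels, freqs)
               for q, freqs in per_level_windowed_freqs.items())

def try_cache(cache, capacity, name, weight, size=1):
    """Evict minimum-weight entries while the new data outweighs them."""
    while sum(e["size"] for e in cache.values()) + size > capacity:
        if not cache:
            return False                          # nothing left to evict
        victim = min(cache, key=lambda n: cache[n]["weight"])
        if cache[victim]["weight"] >= weight:
            return False                          # new data outweighs nothing cached
        del cache[victim]
    cache[name] = {"weight": weight, "size": size}
    return True

# Example: many recent requests at level 5, few at level 3.
print(total_weight({5: (10, 4, 1), 3: (0, 1, 2)}))   # 4.2625
```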
The data cache replacement algorithm based on content level and popularity in NDN/CCN is described below with reference to Fig. 1 and Fig. 2; the main working flows of the Interest packet and the Data packet can be divided into the following steps.
Interest packet flow:
Step 1: When the Interest packet arrives at a node, the arrival time t of this request is recorded, and the average request level q̄_{i-d1} and total request frequency F_{ti-d1} of this request at the previous-hop node are obtained from the Interest packet and stored;
Step 2: The CS of the present node is searched for data corresponding to the request; if there is none, go to Step 3, otherwise the corresponding Data packet is returned;
Step 3: The PIT is searched for an entry corresponding to the request; if there is one, the interface on which the request arrived at the node is added to that entry; if not, a PIT entry corresponding to the request is added to the pending interest table and Step 4 is performed;
Step 4: The average request level and total request frequency of this request at the present node are calculated and stored in the Interest packet, go to Step 5;
Step 5: The FIB is searched and the request is forwarded to the next-hop node;
Step 6: Steps 1-5 are repeated in the same way until the request is forwarded to a node whose CS contains the corresponding data, or to the server.
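For illustration, Steps 1-6 can be sketched in Python as follows, with emphasis on the additions of this method: recording the arrival time and per-level frequency in the ITDR, keeping the previous-hop statistics, and stamping the present node's average request level and total request frequency into the Interest before forwarding it. All data structures and field names are illustrative assumptions.

```python
import time
from collections import defaultdict

class Node:
    """Per-node state assumed by this sketch (illustrative, not from the patent)."""
    def __init__(self):
        self.cs = {}                       # Content Store: name -> data
        self.pit = {}                      # PIT: name -> set of incoming faces
        self.fib = {}                      # FIB: name -> outgoing face
        self.itdr = {}                     # ITDR: name -> {level: frequency}
        self.request_times = defaultdict(list)
        self.prev_hop_stats = {}           # name -> (avg_level, total_freq) of previous hop

def process_interest(node, interest):
    name, level, in_face = interest["name"], interest["level"], interest["in_face"]

    # Step 1: record arrival time and per-level frequency in the ITDR, and keep
    # the previous-hop statistics carried in the Interest.
    entry = node.itdr.setdefault(name, {})
    entry[level] = entry.get(level, 0) + 1
    node.request_times[name].append(time.time())
    node.prev_hop_stats[name] = (interest.get("avg_level"), interest.get("total_freq"))

    # Step 2: CS lookup.
    if name in node.cs:
        return ("data", node.cs[name])

    # Step 3: PIT lookup / request aggregation.
    if name in node.pit:
        node.pit[name].add(in_face)
        return ("aggregated", None)
    node.pit[name] = {in_face}

    # Step 4: stamp this node's average request level (formula (2)) and
    # total request frequency (formula (1)) into the Interest.
    total = sum(entry.values())
    interest["avg_level"] = sum(q * f for q, f in entry.items()) / total
    interest["total_freq"] = total

    # Step 5: FIB lookup and forwarding toward the data source.
    out_face = node.fib.get(name)
    return ("forward", out_face) if out_face is not None else ("dropped", None)
```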
As shown in Fig. 3, suppose three clients (client 1, client 2, client 3) request the same data i. Node 3 and node 4 in the figure update the ITDR entry of this request according to the level carried in the Interest packet and record the arrival time of the request. When the returned Data packet arrives at node 4, it is first judged whether the data storage marker d in the packet is less than Δd; if so, the data was stored at a node not far upstream, and the data only needs to be passed to the next-hop node. Otherwise, formula (1) gives F_ti = 100 and formula (2) gives the average request level q̄_i of this request at node 4; these values are compared with the average request level q̄_{i-d1} and request frequency F_{ti-d1} = 80 of this request at node 3 carried in the Interest packet. Substituting into formula (3) and formula (4) gives:
\overline{q_i} - \overline{q_{i-d1}} = -0.325 \le q
F_{ti} - F_{ti-d1} = 20 \le f
Since q > 0, the first inequality holds. If f ≥ 20, the second inequality also holds and the Data packet is passed to node 3; if node 4 is the original hit node of the data, the marker d is added to the packet and set to 0, and if node 4 is not the server node, the data i at node 4 is deleted; the above operations are performed again when the Data packet arrives at node 3. If f < 20, node 3 does not meet the data storage condition; data i is then stored at node 4, the storage marker d in the packet is set to 0, and the replacement method is used to judge whether data i can be cached. The process continues in the same way until the data is transmitted to the corresponding client node.
Data packet flow:
Step 1: When the CS of a node contains the data corresponding to the request, the corresponding data is returned and the data cache marker d is added to the packet with an initial value of 0;
Step 2: When the Data packet arrives at a node, the data cache marker d in the packet is first checked; if d is less than Δd, the data was stored at a node not far upstream, and, to avoid excessive redundancy of the data in the network, the data is not stored at this node, go to Step 6; otherwise go to Step 3;
Step 3: The average request level q̄_{i-d1} and total request frequency F_{ti-d1} of this request at the next-hop node are obtained from the information carried in the Interest packet, and the average request level q̄_i and total request frequency F_ti of this request at the present node are calculated; then, according to the two inequalities of formula (3) and formula (4), it is judged whether the data can be cached at the next-hop node. If the next-hop node meets the caching condition and the next-hop node is not the client node, the data is passed to the next node, go to Step 2; if the next-hop node does not meet the caching condition or the next-hop node is the client node, the data is stored at the present node and the data cache marker d in the packet is set to 0, go to Step 4;
Step 4: If the data is to be stored at this node, the weight value W_i of the data at the present node is calculated by formulas (5) and (6);
Step 5: Check whether the CS of the present node has enough space to store the data; if so, the data is cached; otherwise, judge whether the weight value of the current data is greater than the minimum weight value in the CS. If it is not greater, the data cannot be cached; the data cache marker d in the packet is increased by one and the data is sent to the next-hop node. Otherwise, the data with the minimum weight is deleted and Step 5 is performed again, until there is enough cache space to store the current data, then go to Step 6;
Step 6: Judge whether the next-hop node is the client node; if not, the data is transmitted to the next-hop node, the marker distance d is increased by one, and Step 2 is performed;
Step 7: Steps 2-7 are repeated until the Data packet reaches the corresponding client.
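For illustration, the per-node handling in Steps 2-6 can be sketched as follows. It reuses DELTA_D, Q_THRESHOLD, F_THRESHOLD and try_cache() from the earlier sketches and assumes the node object additionally carries cs_meta (per-entry weight metadata) and cs_capacity fields; all of these are illustrative assumptions rather than the patent's structures.

```python
def process_data(node, data_pkt, next_is_client, weight):
    """One hop of the Data-packet flow (Steps 2-6).

    `weight` is W_i from formulas (5)/(6) for this data at this node, e.g.
    computed with total_weight() from the sketch above.
    """
    name, d = data_pkt["name"], data_pkt["d"]

    # Step 2: a copy was cached not far upstream -> forward without caching.
    if d < DELTA_D:
        data_pkt["d"] += 1
        return "forward"

    # This node's own statistics from its ITDR (formulas (1) and (2)).
    entry = node.itdr.get(name, {})
    freq_here = sum(entry.values())
    avg_here = (sum(q * f for q, f in entry.items()) / freq_here) if freq_here else 0.0
    # Statistics of the next hop toward the client, carried earlier in the Interest.
    avg_next, freq_next = node.prev_hop_stats.get(name, (None, None))

    # Step 3: formulas (3) and (4) -- push the copy one hop closer to the client
    # when the next hop is not the client and shows a similar level and frequency.
    if (not next_is_client and avg_next is not None
            and avg_here - avg_next <= Q_THRESHOLD
            and freq_here - freq_next <= F_THRESHOLD):
        data_pkt["d"] += 1
        return "forward"

    # Steps 4-5: cache here, evicting minimum-weight entries when space is short.
    if try_cache(node.cs_meta, node.cs_capacity, name, weight):
        node.cs[name] = data_pkt["payload"]
        data_pkt["d"] = 0
    else:
        data_pkt["d"] += 1     # new data outweighs nothing in the cache: not stored
    # Step 6: keep forwarding along the PIT reverse path until the client is reached.
    return "forward"
```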
Finally, it should be noted that the above preferred embodiments are only intended to illustrate the technical scheme of the present invention and not to limit it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes in form and detail can be made to it without departing from the scope defined by the claims of the present invention.

Claims (4)

1. A data cache replacement method based on content level and popularity in NDN/CCN, characterized in that it comprises a data caching method and a data replacement method;
in the data caching method, according to the average request level and request frequency of the data at the nodes, a data storage node is selected according to the data storage conditions from among all the nodes through which the Data packet passes on its way back to the client;
in the data replacement method, when the cache space of a node is insufficient, the node obtains a weight value reflecting the current situation of the data according to the client users' request levels for the data and the request frequencies of the corresponding levels in different time periods, and judges on the basis of this weight value whether existing data in the cache should be deleted in order to store the new data; if the cache space is sufficient to store the data, the position at which the data is stored in the cache is selected according to this weight value.
2. The data cache replacement method based on content level and popularity in NDN/CCN according to claim 1, characterized in that the data caching method specifically comprises the following steps:
first, when a Data packet arrives at a node, the data storage marker d in the packet is checked; d marks the distance between the present node and the node where the data was last stored; if d is less than Δd, the data is passed to the next-hop node; otherwise, the average request level and request frequency of the requesting clients at this node and at the next-hop node are compared;
next, whether the data is stored at this node is determined by q̄_i - q̄_{i-d1} ≤ q and F_ti - F_{ti-d1} ≤ f; if both conditions are satisfied, the data is passed to the next node; otherwise, the data is stored at this node and the marker d in the packet is set to 0, and d is increased by 1 each time the packet is passed to a next-hop node.
3. The data cache replacement method based on content level and popularity in NDN/CCN according to claim 1, characterized in that, when it is determined that the data is to be stored at a node, if the cache space of that node is insufficient, the weight value W_i reflecting the request situation of the data at the present node is obtained according to the request levels of the data at the present node and the request frequencies of the data at each level in different time periods, and the node replaces and stores the data according to W_i;
the flow of the Interest packet in the method is as follows:
1) when the Interest packet arrives at a node, the arrival time of this request is recorded, and the average request level and request frequency of this request at the previous-hop node are obtained from the Interest packet;
2) the CS of the present node is searched for data corresponding to the request; if there is none, go to step 3), otherwise the corresponding Data packet is returned;
3) the PIT is searched for an entry corresponding to the request; if there is one, the interface on which the request arrived at the node is added to that entry; if not, a PIT entry corresponding to the request is added to the pending interest table and step 4) is performed;
4) the average request level and request frequency of this request at the present node are calculated and added to the Interest packet, then go to step 5);
5) the FIB is searched and the request is forwarded to the next-hop node;
6) steps 1) to 5) are repeated in the same way until the request is forwarded to a node whose CS contains the corresponding data, or to the server.
4. The data cache replacement method based on content level and popularity in NDN/CCN according to any one of claims 1 to 3, characterized in that the flow of the Data packet in the method is as follows:
1) when a node has the data corresponding to the request, the corresponding Data packet is returned and the data cache marker d is added to the packet;
2) when the Data packet arrives at a node, the data cache marker d is first checked; if d is less than Δd, the data was stored at a node not far upstream, and, to avoid excessive redundancy of the data in the network, the data is not stored, go to step 6); otherwise go to step 3);
3) the average request level and request frequency of the data at the present node are calculated and compared with the corresponding values of the next-hop node carried in the Interest packet, to judge whether the data can be stored at the next node; if the next node meets the caching condition, then, in order to cache the data at a position close to the client, the data is passed to the next node and the data cache marker in the packet is increased by 1, go to step 2); if the next node does not meet the caching condition or the next node is the client node, the data is stored at the present node, go to step 4);
4) if the data is to be stored at the node, the weight value of the data at the present node is calculated according to the request levels and the request frequencies of the request in different time periods;
5) check whether the CS of the present node has space to store the data; if so, the data is stored and the data cache marker in the packet is set to 0; otherwise, judge whether the weight value of the current data is greater than the minimum weight value in the CS; if it is not greater, the data cannot be cached, the corresponding data cache marker in the packet is increased by 1 and the data is sent to the next node, go to step 6); otherwise, the data with the minimum weight is deleted and step 5) is performed again;
6) steps 2) to 5) are performed again until the Data packet reaches the corresponding client.
CN201510460211.8A 2015-07-30 2015-07-30 Data cache replacement method based on content level and popularity in NDN/CCN Active CN105049254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510460211.8A CN105049254B (en) 2015-07-30 2015-07-30 Data cache replacement method based on content level and popularity in NDN/CCN


Publications (2)

Publication Number Publication Date
CN105049254A true CN105049254A (en) 2015-11-11
CN105049254B CN105049254B (en) 2018-08-21

Family

ID=54455476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510460211.8A Active CN105049254B (en) 2015-07-30 2015-07-30 Data buffer storage replacement method based on content rating and popularity in a kind of NDN/CCN

Country Status (1)

Country Link
CN (1) CN105049254B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215756A1 (en) * 2012-02-17 2013-08-22 Electronics And Telecommunications Research Institute Apparatus and method for managing contents cache considering network cost
CN103581052A (en) * 2012-08-02 2014-02-12 华为技术有限公司 Data processing method, router and NDN system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄胜 et al.: "Dynamic replacement strategy based on data request cost and popularity in named data networking", 《计算机应用》 (Journal of Computer Applications) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105656788A (en) * 2015-12-25 2016-06-08 中国科学院信息工程研究所 CCN (Content Centric Network) content caching method based on popularity statistics
CN105656788B (en) * 2015-12-25 2019-08-06 中国科学院信息工程研究所 CCN content buffering method based on popularity statistics
WO2017157126A1 (en) * 2016-03-15 2017-09-21 华为技术有限公司 Data management method and apparatus, and network device
CN105760543A (en) * 2016-03-16 2016-07-13 重庆邮电大学 Data storage method based on node interface storage information differentiation announcement in NDN/CCN (Named Data Networking/Content Centric Networking)
CN105760543B (en) * 2016-03-16 2019-03-26 重庆邮电大学 A kind of date storage method based on node interface storage informative differentiation notice in NDN/CCN
CN105939385A (en) * 2016-06-22 2016-09-14 湖南大学 Request frequency based real-time data replacement method in NDN cache
CN105939385B (en) * 2016-06-22 2019-05-10 湖南大学 Real time data replacement method based on request frequency in a kind of NDN caching
CN106131182B (en) * 2016-07-12 2019-04-09 重庆邮电大学 Name a kind of cooperation caching method based on Popularity prediction in data network
CN106131182A (en) * 2016-07-12 2016-11-16 重庆邮电大学 A kind of cooperation caching method based on Popularity prediction in name data network
CN109644160B (en) * 2016-08-25 2020-12-04 华为技术有限公司 Hybrid method for name resolution and producer selection in ICN by classification
CN109644160A (en) * 2016-08-25 2019-04-16 华为技术有限公司 The mixed method of name resolving and producer's selection is carried out in ICN by being sorted in
CN106394621A (en) * 2016-09-27 2017-02-15 株洲中车时代电气股份有限公司 Train data transmission method and system
CN107070995A (en) * 2017-03-16 2017-08-18 中国科学院信息工程研究所 The caching method and device of a kind of content center network
CN106899692A (en) * 2017-03-17 2017-06-27 重庆邮电大学 A kind of content center network node data buffer replacing method and device
CN108124166B (en) * 2017-12-27 2020-02-18 北京工业大学 Internet live broadcast system
CN108124166A (en) * 2017-12-27 2018-06-05 北京工业大学 A kind of internet live broadcast system
CN108540569B (en) * 2018-04-23 2020-01-24 燕东科技(广东)有限公司 Software installation package replacement method and device and computer storage medium
CN108540569A (en) * 2018-04-23 2018-09-14 冼汉生 A kind of software installation packet replacement method, device and computer storage media
CN108900618A (en) * 2018-07-04 2018-11-27 重庆邮电大学 Content buffering method in a kind of information centre's network virtualization
US11502956B2 (en) * 2018-07-04 2022-11-15 Chongqing University Of Posts And Telecommunications Method for content caching in information-centric network virtualization
CN109921997A (en) * 2019-01-11 2019-06-21 西安电子科技大学 A kind of name data network caching method, buffer and storage medium
CN110677190A (en) * 2019-10-09 2020-01-10 大连大学 Static processing and caching method for space-ground integrated intelligent network node
CN111107000A (en) * 2019-12-13 2020-05-05 东南大学 Content caching method in named data network based on network coding
CN111107000B (en) * 2019-12-13 2021-09-07 东南大学 Content caching method in named data network based on network coding
CN114745440A (en) * 2022-03-09 2022-07-12 南京邮电大学 CCN cache replacement method and device
CN114745440B (en) * 2022-03-09 2023-05-26 南京邮电大学 CCN cache replacement method and device

Also Published As

Publication number Publication date
CN105049254B (en) 2018-08-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant