CN106131182B - Cooperative caching method based on popularity prediction in Named Data Networking - Google Patents

Cooperative caching method based on popularity prediction in Named Data Networking Download PDF

Info

Publication number
CN106131182B
CN106131182B CN201610549740.XA CN201610549740A
Authority
CN
China
Prior art keywords
content
popularity
node
cache
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610549740.XA
Other languages
Chinese (zh)
Other versions
CN106131182A (en)
Inventor
刘期烈
秦庆伟
夏远鹏
李云
冯志宇
刘竞成
李国军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201610549740.XA priority Critical patent/CN106131182B/en
Publication of CN106131182A publication Critical patent/CN106131182A/en
Application granted granted Critical
Publication of CN106131182B publication Critical patent/CN106131182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5681 Pre-fetching or pre-delivering data based on network characteristics
    • H04L67/5682 Policies or rules for updating, deleting or replacing the stored data
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 Routing a service request depending on the request content or context

Abstract

A cooperative caching method based on popularity prediction in Named Data Networking (NDN) is claimed. When NDN stores content at in-network nodes, the conventional along-path (cache-everywhere) policy and the important-node policy both lead to low utilization of in-network cache space and high redundancy. This method adopts a "partial cooperative caching" approach: after first predicting the future popularity of content, each node partitions an optimal proportion of its cache space as local cache space to store high-popularity content, and the remaining part of each node stores lower-popularity content cooperatively with its neighborhood. The optimal partition ratio is computed by jointly considering the hop counts traversed by interest packets for requests hit at in-network nodes and requests hit at the server. Compared with traditional caching policies, this method increases the utilization of in-network cache space, reduces the amount of caching redundancy in the network, improves the cache hit ratio of in-network nodes, and promotes the performance of the whole network.

Description

Cooperative caching method based on popularity prediction in Named Data Networking
Technical field
The present invention belongs to the technical field of future networks and concerns the study of caching policies in Named Data Networking. In particular, it relates to a mechanism for predicting content popularity in the network, a cooperative caching strategy among neighborhood nodes, and a method for dividing the cache space of each node in proportion.
Background art
With the rapid development of Internet technologies and applications, "broadband", "content" and "personalization" have become the themes of network evolution. The demand for data content keeps growing, and the main body of network applications is gradually shifting towards content requests and information services. According to Cisco's forecast, by 2016 content-related traffic would account for more than 97.5% of all Internet traffic, and the traditional host-centric network architecture can hardly meet the requirements of today's information services. For this reason, Information-Centric Networking (ICN), as a revolutionary design philosophy for the future Internet, makes data content itself the primary unit of network communication and shifts the communication model from caring about "where" (addresses, servers) to caring about "what", i.e. the purpose and intent of users and applications; it has become an important model for future Internet design. Among ICN architectures, Named Data Networking (NDN) is a typical one: it replaces IP with named data at the network layer, data transmission follows a "publish-request-respond" pattern, routing is performed directly on content names, and efficient point-to-multipoint content distribution is realized.
In the design of NDN, in-network caching is generally adopted: content that an interest packet hits can be stored on all nodes along the forwarding path (on-path). During routing and forwarding, when a node receives an interest packet, it looks up the content name in turn in the Content Store (CS), the Pending Interest Table (PIT) and the Forwarding Information Base (FIB). The responding data packet carries the requested content and is transmitted hop by hop in the reverse direction along the request path according to the entries recorded in each node's PIT.
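For illustration only (not part of the patent disclosure), a minimal sketch of the CS → PIT → FIB lookup order described above, assuming simplified dictionary-based tables; the names Node and on_interest are illustrative:

```python
class Node:
    """Minimal NDN forwarding sketch: CS -> PIT -> FIB lookup order (illustrative only)."""

    def __init__(self):
        self.cs = {}    # Content Store: content name -> data
        self.pit = {}   # Pending Interest Table: content name -> list of incoming faces
        self.fib = {}   # Forwarding Information Base: name prefix -> outgoing face

    def on_interest(self, name, in_face):
        # 1. Content Store: if cached, answer immediately from this node.
        if name in self.cs:
            return ("data", self.cs[name])
        # 2. PIT: an identical request is already pending, so only record the new face.
        if name in self.pit:
            self.pit[name].append(in_face)
            return ("aggregated", None)
        # 3. FIB: create a PIT entry and forward the interest upstream.
        self.pit[name] = [in_face]
        out_face = self.fib.get(self._longest_prefix(name))
        return ("forward", out_face)

    def _longest_prefix(self, name):
        # Simplified longest-prefix match on '/'-separated name components.
        parts = name.split("/")
        while parts:
            prefix = "/".join(parts)
            if prefix in self.fib:
                return prefix
            parts.pop()
        return None
```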
For an NDN network, reasonable content placement and caching decisions are key factors for realizing the potential of the network. Research on NDN caching started together with the content-centric networking proposal itself, based on its basic idea of "trading cache for bandwidth". However, the default flooding-style on-path caching of NDN, Leave Copy Everywhere (LCE), makes the cached content of nodes tend towards homogeneity and causes a large amount of caching redundancy. Successive improvements proposed the LCD, MCD, Prob and WAVE caching policies. LCD no longer stores a copy at every node on the return path; it stores only at the single node immediately downstream of the hitting node. MCD, compared with LCD, deletes the content at the hitting node after the downstream node caches a copy; although this greatly reduces the caching redundancy in the network, every request shifts the location of the cached copy in the network, which increases traffic overhead. WAVE considers that a single file is stored in the network divided into several chunk data blocks and that the chunks are correlated; when users issue requests at file granularity, the number of cached copies on the return path grows exponentially with the number of requests, so frequently requested (i.e. highly popular) content is quickly diffused to the network edge. All of the above strategies lack consideration of the differentiating characteristics of the content itself when caching, and cannot achieve optimized placement of content. Therefore, in order to effectively exploit the advantage of in-network node caching, the design of an efficient caching algorithm becomes the key problem to be solved.
Existing policies also ignore the influence of a node's position in the network topology on caching. Chai et al. proposed the Betw caching policy: when an interest packet is hit at an in-network node or at the server, the returned data packet is cached at the node with the highest betweenness centrality on the return path. The idea of this "less for more" approach is to satisfy as many user requests as possible with as few nodes as possible. However, it does not differentiate returned content by popularity, so the cache space of high-betweenness nodes fills up within a short time, and highly popular content may soon be replaced by subsequently arriving low-popularity content. When users request such content again, it cannot be hit inside the network and the request has to be routed to the server, which increases network overhead. Based on this, Cui Xiandong et al. improved on the Betw policy: considering that the content replacement rate of the nodes chosen by Betw is too high, they added a replacement-rate factor and selected nodes with high betweenness and relatively low replacement rate as cache locations. The above strategies store content without differentiation at only a few positions; the space of the remaining less central nodes stays idle, wasting in-network cache space. Since frequently requested and rarely requested content are treated alike, low-popularity content with a long distribution tail occupies the space needed by high-popularity content.
Summary of the invention
The problems to be solved by the invention are as follows. One class of existing caching policies pushes content quickly to edge nodes through universal caching, which leads to highly redundant cached data in the network; the other class concentrates caching on important nodes of the topology, which leaves most of the in-network storage idle and wasted. In order to make maximum use of the cache space of in-network nodes, the invention predicts the popularity of content in the network; the proposed strategy caches multiple copies of high-popularity content while caching low-popularity content cooperatively with only a small number of copies, so that the cache space of in-network nodes is fully utilized. The invention proposes a cooperative caching method based on popularity prediction in Named Data Networking that can both satisfy users' high-frequency requests for high-popularity content and let users hit low-popularity content at in-network nodes, effectively reducing the load pressure on the content source server, reducing users' request delay and improving the hit ratio of requests at in-network nodes. The technical solution of the invention is as follows:
A cooperative caching method based on popularity prediction in Named Data Networking, comprising the following steps:
101. Popularity prediction: according to the frequency with which content in the Named Data Network is accessed and the distance from the content to the requesting end, predict the popularity of the content and obtain the content whose future popularity is high and the content whose popularity is low; the cache space of each node is divided into two parts x and (c-x), where (c-x) is used to cache high-popularity content and x is used to cache low-popularity content;
102. For the content identified as highly popular in step 101, every node on the return path caches one copy; for low-popularity content, cooperative caching is performed at the node with the best comprehensive state within the neighborhoods of the nodes on the return path, and only a single copy is cached.
Further, the popularity prediction in step 101 considers not only the number of times R_count the content is accessed at an individual node but also the average hop count H_average of user requests; a counting period T is set to record the number of accesses R_count within the period, and the hop count of each request hit is recorded via the interest packet. In this way, both the local popularity of the content at a node and the distance of its cache location in the whole network are taken into account to predict the popularity of the next period.
Further, performing popularity prediction on content in step 101 comprises: calculating the popularity P(T_{n+1}) of the next counting period, where P(T_n) denotes the current popularity of the requested content, H_average is the average hop count of each access to the data within one period, R_count denotes the number of times the content is requested at the hitting node, T_n denotes the current counting period of the hit content, α is a regulating constant, and ΔP denotes the difference between the predicted popularity of the current period and the popularity of the previous period, ΔP = P(T_{n+1}) - P(T_n). If ΔP > 0, the content is in the popularity rising stage and every node on the return path caches it; if ΔP < 0, the content is in the popularity declining stage and it is cached cooperatively with other nodes on the return path.
Further, after the content popularity is predicted and calculated: (1) when the requested content k is hit at an in-network node or at the server, the future popularity trend P(T_{n+1}) is calculated from its current popularity P(T_n), the forwarding hop count carried in the interest packet and the number of times the content has been requested at the hitting node, and the sign of ΔP is obtained;
(2) if ΔP > 0, every node on the return path stores content k, without exceeding the (c-x) capacity partitioned from the node's storage; if the node's (c-x) cache space is full, the LRU cache replacement policy is applied within the individual node;
(3) if content k is in the popularity declining phase, the node corresponding to the maximum CoP(v) value on the path is found first; if that node already holds content k, it means that other nodes have already requested the same content during routing and k has already been stored at that node according to the caching policy, so the copy is simply discarded; if it does not hold the content, the CoP(v)_n values of the neighbor nodes recorded in its table are compared, the maximum CoP(v) is found, and the content is stored at the corresponding node.
Further, there are three kinds of storage in the whole network: the storage space (c-x) of each node for high-popularity content; the storage space x of each node for low-popularity content; and the storage space of the server holding the original data copies.
Further, in step 102 the comprehensive state of the nodes in the neighborhoods of the return-path nodes used for low-popularity content is denoted by CoP(v). Let the numbers of interfaces corresponding to the content names in the current node's PIT be D_name1, D_name2, ..., D_name n; then the total number of requesting interfaces is D_total(v) = D_name1 + D_name2 + ... + D_name n. The current number of interfaces directly reflects the rate at which the node is requested, and C(v) describes the connectivity of each node, with the formula as follows:
The larger the CoP(v) value, the slower the rate at which the node is accessed under the premise of good connectivity.
The advantages and beneficial effects of the invention are as follows:
By accurately predicting content popularity, the invention distinguishes high- and low-popularity content and applies different caching strategies. High-popularity content is quickly pushed to edge nodes, so that users can quickly hit the requested content inside the network. Low-popularity content occupies only a small amount of storage space; while saving storage it can still satisfy users' low-frequency requests, and even if such content cannot be hit inside the network, the request can be routed to the server, with an extra overhead that is small and whose impact on the whole network stays within tolerance. This new caching policy both satisfies users' high-frequency requests for high-popularity content and lets users hit low-popularity content at in-network nodes, effectively reducing the load pressure on the content source server, reducing users' request delay and improving the hit ratio of requests at in-network nodes.
Brief description of the drawings
Fig. 1 is the content popularity prediction flow chart of the preferred embodiment provided by the invention;
Fig. 2 is the flow chart of the cooperative caching strategy in the invention;
Fig. 3 is the flow chart for determining the node cache space partition ratio in the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention.
1. Fig. 1 shows the content popularity prediction flow. Request arrivals at a node generally follow a Poisson distribution, and the distribution of content popularity over the whole network follows a Zipf distribution, i.e. the 80/20 rule: the top 20% of content in the popularity ranking can satisfy 80% of the requests in the network. It is shown below:
P(i) = C / i^s, where C is a normalization constant. P(i) denotes the frequency with which the content ranked i occurs, and i denotes the rank of the content when all contents are sorted by frequency of occurrence from high to low. The probability that the content ranked i among the N contents is requested is then P(i) = (1/i^s) / Σ_{j=1..N} (1/j^s). The larger s is, the more concentrated popular content is, and the cumulative request probability of the top-k ranked contents is Σ_{i=1..k} (1/i^s) / Σ_{j=1..N} (1/j^s).
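For illustration only, the Zipf request probabilities described above can be computed as follows; the function name zipf_probabilities and the parameter values are illustrative, not part of the patent:

```python
def zipf_probabilities(N, s):
    """Normalized Zipf probabilities P(i) for ranks i = 1..N (illustrative sketch)."""
    weights = [1.0 / (i ** s) for i in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Example: how much of the request volume the top 20% of contents would capture.
probs = zipf_probabilities(N=1000, s=0.8)
top20 = sum(probs[: int(0.2 * len(probs))])
print(f"share of requests captured by the top 20% of contents: {top20:.2f}")
```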
Apart from the content stored at the server, which is stable over the long term, the content in the network is highly dynamic: the 20% of high-popularity content is constantly being replaced, high-popularity content keeps degrading into the low-popularity part, and some low-popularity content evolves into high popularity. Studies show that the popularity (or heat) of dynamic content in the network rises from a trough to a peak and falls back to a trough after a period of time. Therefore a counting period is set. A count field H_count and a time T_n are added to the interest packet; the count is incremented once per forwarding hop and stops on a hit, so H_count is the total hop count from the requesting end to the hitting node. Each time the content is requested at the hitting node a counter R_count is incremented, and counting stops when the period T_n is reached. H_count indirectly reflects the location of the content copies in the whole network: the larger the hop count, the farther the copies are from the requesting end. The larger R_count is, the more requests the content receives and the fewer copies the edge nodes have to satisfy them. If only the frequency with which content is requested at a single node were considered, the popularity of the corresponding content in the whole network could not be predicted: content closer to the server is in general requested relatively less often at any single node, so relying only on the per-node request count would classify most content as low-popularity. H_average is the average hop count of each access to the data within one period; the variation of H_average * R_count reflects not only the popularity of the content inside the network but also the overhead the content causes in the network within the period, and it is particularly sensitive when judging the popularity of content at nodes close to the server. The popularity P(T_{n+1}) of the next counting period is calculated as:
where P(T_n) denotes the current popularity of the requested content, T_n denotes the current counting period of the hit content, and α is a regulating constant. Expanding the above formula:
The closer a count is to the current period, the higher its weight in the future popularity prediction and the greater its influence; the farther a period is from the current one, the weaker its influence on the future popularity of the content.
ΔP = P(T_{n+1}) - P(T_n)
If ΔP > 0, the content is in the popularity rising stage, and every node on the return path caches it. If ΔP < 0, the content is in the popularity declining stage and is cached cooperatively with other nodes on the return path.
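The exact prediction formula appears only in the patent figures and is not reproduced here. As a hedged illustration, one reading consistent with the description (α as a regulating constant weighting the current period's observation H_average * R_count against the previous popularity, with older periods decaying geometrically) is an exponential-smoothing update; all names below are illustrative:

```python
def predict_popularity(p_prev, h_average, r_count, alpha=0.5):
    """One possible reading of the prediction step (exponential smoothing), assuming
    the per-period observation is h_average * r_count and alpha weighs the current
    period against history; the patent's exact formula is given only in its figures."""
    p_next = alpha * h_average * r_count + (1 - alpha) * p_prev
    delta_p = p_next - p_prev
    return p_next, delta_p

# Example: a content hit 5 hops away on average, requested 12 times this period.
p_next, delta_p = predict_popularity(p_prev=40.0, h_average=5, r_count=12, alpha=0.5)
rising = delta_p > 0   # rising popularity -> cache at every node on the return path
```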
2. Fig. 2 shows the flow of the cooperative caching strategy. When ΔP < 0, the content is in the popularity declining phase and cooperative caching is used. The node for cooperative caching is chosen according to the connectivity C(v) of each node. The connectivity of each node is calculated as follows:
where π_1(n) denotes the set of all nodes at distance 1 from node n, π_1(v) denotes the set of all nodes at distance 1 from node v, and β is a regulating constant between 0 and 1, where
e_ij denotes the edge connecting node i and node j, and C_D(ω) denotes the number of nodes connected to ω; the larger C(v) is, the better the compactness of node v. This method considers not only the number of neighbor nodes within radius r = 2 of a node but also the connection relationships among the neighbor nodes, so that nodes with better compactness can be filtered out more accurately. Considering only node connectivity may make well-connected nodes converge too fast, so that content quickly aggregates at these nodes and causes congestion. In order to relieve congestion and lighten the node load, node connectivity and the rate at which a node is accessed are considered jointly. The PIT structure contains {content name; requesting interfaces}. Let the numbers of interfaces corresponding to the content names in the current node's PIT be D_name1, D_name2, ..., D_name n; then the total number of requesting interfaces is D_total(v) = D_name1 + D_name2 + ... + D_name n. The current number of interfaces directly reflects the rate at which the node is requested. Therefore the comprehensive state of a node is expressed by CoP(v):
The larger the CoP(v) value, the slower the rate at which the node is accessed under the premise of good connectivity. Caching content at such a node both guarantees that subsequent requests can quickly reach it and avoids a high replacement rate and request rate, while effectively relieving the load of the node.
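The formulas for C(v) and CoP(v) are likewise given only in the patent figures. The following sketch is an illustrative score in their spirit (radius-2 neighborhood size plus β-weighted links among 1-hop neighbors, divided by the total PIT interface count D_total(v)); all function and variable names, and the example topology, are assumptions:

```python
from itertools import combinations

def connectivity(adj, v, beta=0.5):
    """Illustrative connectivity score in the spirit of C(v): radius-2 neighborhood size
    plus a beta-weighted count of links among the 1-hop neighbors."""
    one_hop = set(adj[v])
    two_hop = set()
    for u in one_hop:
        two_hop |= set(adj[u])
    two_hop -= one_hop | {v}
    neighbor_links = sum(1 for a, b in combinations(one_hop, 2) if b in adj[a])
    return len(one_hop) + len(two_hop) + beta * neighbor_links

def cop(adj, v, pit_interface_counts, beta=0.5):
    """Illustrative CoP(v): connectivity divided by the total number of PIT request
    interfaces D_total(v), so well-connected but lightly loaded nodes score highest."""
    d_total = sum(pit_interface_counts.get(v, {}).values()) or 1
    return connectivity(adj, v, beta) / d_total

# Hypothetical topology and PIT state used only to exercise the functions.
adj = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"], "d": ["b"]}
pit = {"b": {"/video/1": 3, "/news/2": 1}, "c": {"/video/1": 1}}
best = max(adj, key=lambda v: cop(adj, v, pit))
```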
Each node adds a new data structure to its original data structures. The structure entry contains {node name, list of contents in the corresponding CS, the corresponding number of PIT interfaces}, and it is used to record the cooperatively cached content lists and access interface counts of the nodes within distance 2 around the node. If nodes in the neighborhood hold duplicate content items, only the copy at the node with the highest CoP(v) is kept.
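A minimal sketch of such a per-node neighborhood record, with illustrative field names that are not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NeighborRecord:
    """Sketch of one neighborhood entry described above (field names are illustrative):
    node name, contents in that neighbor's CS, its PIT interface count, and the
    cooperatively cached contents it holds for the neighborhood."""
    node_name: str
    cs_contents: List[str] = field(default_factory=list)
    pit_interfaces: int = 0
    coop_contents: List[str] = field(default_factory=list)

@dataclass
class NeighborhoodTable:
    """Records the nodes within distance 2, as the embodiment describes."""
    entries: Dict[str, NeighborRecord] = field(default_factory=dict)

    def holder_of(self, content):
        # Return the neighbor already holding this cooperative copy, if any.
        for rec in self.entries.values():
            if content in rec.coop_contents:
                return rec.node_name
        return None
```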
The steps are as follows:
(1) When the requested content k is hit at an in-network node or at the server, the future popularity trend P(T_{n+1}) is calculated from its current popularity P(T_n), the forwarding hop count carried in the interest packet and the number of times the content has been requested at the hitting node, and the sign of ΔP is obtained.
(2) If ΔP > 0, every node on the return path stores content k, without exceeding the (c-x) capacity partitioned from the node's storage; if the node's (c-x) cache space is full, the LRU cache replacement policy is applied within the individual node.
(3) If content k is in the popularity declining phase, the node corresponding to the maximum CoP(v) value on the path is found first. If that node already holds content k, it means that other nodes have already requested the same content during routing and k has already been stored at that node according to the caching policy, so the copy is simply discarded. If it does not hold the content, the CoP(v)_n values of the neighbor nodes recorded in its table are compared, the maximum CoP(v) is found, and the content is stored at the corresponding node.
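A hedged sketch of this return-path decision, assuming node objects with illustrative attributes (local_cache, coop_cache, neighbors, neighborhood_copies) and a cop_score function such as the one sketched earlier; none of these names come from the patent:

```python
def on_data_return(path_nodes, content, delta_p, cop_score, capacity_cx):
    """Sketch of the return-path caching decision in steps (1)-(3); path_nodes,
    cop_score and capacity_cx are illustrative stand-ins for the node state."""
    if delta_p > 0:
        # Rising popularity: every return-path node keeps a copy in its (c-x) part.
        for node in path_nodes:
            if len(node.local_cache) >= capacity_cx:
                node.local_cache.pop(0)          # stand-in for LRU eviction
            node.local_cache.append(content)
        return

    # Declining popularity: keep only a single cooperative copy.
    best = max(path_nodes, key=cop_score)
    if content in best.coop_cache or content in best.neighborhood_copies:
        return                                   # an identical cooperative copy already exists
    candidates = [best] + list(best.neighbors)
    target = max(candidates, key=cop_score)      # node with the highest CoP(v)
    target.coop_cache.append(content)
```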
3. Fig. 3 shows the process of dividing the node cache space. When content in the popularity rising period is stored, the (c-x) part partitioned at each node on the return path is used for caching, so that more requests can be hit as quickly as possible. Dividing the cache space in an appropriate proportion and applying the corresponding storage strategy is particularly important: if the partitioned (c-x) is too small, it fills up quickly and the replacement rate becomes too high; if it is too large, the cooperative caching effect is reduced and the whole caching policy almost degenerates into the LCE policy, with very limited improvement of network performance. Here the hop count traversed by the interest packet from the client to the hitting node is used as the request cost; under the premise of linear node processing speed, the hop count also indirectly reflects the request delay.
Under the strategy proposed here, content has three storage modes in the network, so the average hop count at which requested content is hit in the network also differs. Suppose the interest packet is hit in the (c-x) part of a node; the corresponding hop count is h1:
If the interest packet is hit in the cooperative caching space x of a node, the corresponding hop count is h2:
If the interest packet does not find the corresponding content in the network and is routed to the server, the corresponding hop count is h3:
The constraints are as follows:
0 < x_j < c
0 < c - x_j < c
y_ik ∈ {0, 1}
where n denotes the number of nodes, N denotes the number of chunk data blocks, r_ik denotes the observed request rate for content k at node i, which follows a Poisson distribution, and y_ik indicates whether the requested content is hit: y_ik = 1 on a hit and y_ik = 0 otherwise. H is the hop count of the request path after a hit. According to the different caching modes of content, k ∈ c-x, k ∈ x and k ∈ s respectively denote locally cached content, cooperatively cached content and content cached at the server. If x is 0, there is no cooperative caching at all and the policy becomes cache-everywhere: node space fills up quickly, popular content is also replaced quickly, the hit ratio of interest packets at intermediate nodes becomes very low, and the total hop count is necessarily large. If x equals c, all cache space of all nodes is used for cooperative caching, the number of copies of high-popularity content in the network is reduced, the hop counts for the high-popularity and low-popularity content requested by clients become essentially the same, and the total hop count is also large. Therefore there exists a point x ∈ (0, c) that minimizes the total hop count of the network. The problem can be transformed as described below:
Min(h1+h2+h3)
h denotes the request hop count of the whole network. To solve for x, x is enumerated from 0 to c and the x corresponding to the minimum h is taken as x*. After normalization, the optimal cooperation depth value is ρ = x*/c:
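A simple sketch of this enumeration, assuming a hypothetical estimator total_hops(x) for h1 + h2 + h3 (obtained, for example, from measurement or simulation); the names and the example cost model are illustrative only:

```python
def optimal_partition(c, total_hops):
    """Enumerate x from 0 to c and pick the split minimizing the total request hop count,
    as described above; total_hops(x) is a hypothetical estimator of h1 + h2 + h3."""
    best_x = min(range(c + 1), key=total_hops)
    rho = best_x / c          # normalized optimal cooperation depth
    return best_x, rho

# Illustrative cost model only: too little cooperative space inflates hops via thrashing,
# too much removes local copies of popular content.
c = 100
x_star, rho = optimal_partition(c, lambda x: (x - 30) ** 2 + 500)
```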
The above embodiments should be understood as merely illustrating the invention rather than limiting its scope. After reading the contents recorded by the invention, those skilled in the art can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the invention.

Claims (4)

1. A cooperative caching method based on popularity prediction in Named Data Networking, characterized by comprising the following steps:
101. Popularity prediction: according to the frequency with which content in the Named Data Network is accessed and the distance from the content to the requesting end, predict the popularity of the content and obtain the content whose future popularity is high and the content whose popularity is low; the cache space of each node is divided into two parts x and (c-x), where (c-x) is used to cache high-popularity content and x is used to cache low-popularity content; performing popularity prediction on content in step 101 comprises calculating the popularity P(T_{n+1}) of the next counting period:
where P(T_n) denotes the current popularity of the requested content, H_average is the average hop count of each access to the data within one period, R_count denotes the number of times the content is requested at the hitting node, T_n denotes the current counting period of the hit content, α is a regulating constant, and ΔP denotes the difference between the predicted popularity of the current period and the popularity of the previous period, ΔP = P(T_{n+1}) - P(T_n); if ΔP > 0, the content is in the popularity rising stage and every node on the return path caches it; if ΔP < 0, the content is in the popularity declining stage and it is cached cooperatively with other nodes on the return path;
102. For the content identified as highly popular in step 101, every node on the return path caches one copy; for low-popularity content, cooperative caching is performed at the node with the best comprehensive state within the neighborhoods of the nodes on the return path, and only a single copy is cached, specifically comprising: (1) when the requested content k is hit at an in-network node or at the server, the future popularity trend P(T_{n+1}) is calculated from its current popularity P(T_n), the forwarding hop count carried in the interest packet and the number of times the content has been requested at the hitting node, and the sign of ΔP is obtained;
(2) if ΔP > 0, every node on the return path stores content k, without exceeding the c-x capacity partitioned from the node's storage; if the node's c-x cache space is full, the LRU cache replacement policy is applied within the individual node;
(3) if content k is in the popularity declining phase, the node corresponding to the maximum comprehensive state value CoP(v) on the path is found first; if that node already holds content k, it means that other nodes have already requested the same content during routing and k has already been stored at that node according to the caching policy, so the copy is simply discarded; if it does not hold the content, the CoP(v)_n values of the neighbor nodes recorded in its table, where CoP(v)_n denotes the comprehensive state value of the n-th neighbor node, are compared, the maximum CoP(v) is found, and the content is stored at the corresponding node.
2. The cooperative caching method based on popularity prediction in Named Data Networking according to claim 1, characterized in that the popularity prediction in step 101 considers not only the number of times R_count the content is accessed at an individual node but also the average hop count H_average of user requests; a counting period T is set to record the number of accesses R_count within the period, and the hop count of each request hit is recorded via the interest packet, so that both the local popularity of the content at a node and the distance of its cache location in the whole network are considered to predict the popularity of the next period.
3. The cooperative caching method based on popularity prediction in Named Data Networking according to claim 1, characterized in that there are three kinds of storage in the whole network: the storage space (c-x) of each node for high-popularity content; the storage space x of each node for low-popularity content; and the storage space of the server holding the original data copies.
4. The cooperative caching method based on popularity prediction in Named Data Networking according to claim 1, characterized in that in step 102 the comprehensive state value of the nodes in the neighborhoods of the return-path nodes used for low-popularity content is denoted by CoP(v); let the numbers of interfaces corresponding to the content names in the current node's PIT be D_name1, D_name2, ..., D_name n; then the total number of requesting interfaces is D_total(v) = D_name1 + D_name2 + ... + D_name n; the current number of interfaces directly reflects the rate at which the node is requested, and C(v) describes the connectivity of each node, with the formula as follows:
The larger the CoP(v) value, the slower the rate at which the node is accessed under the premise of good connectivity.
CN201610549740.XA 2016-07-12 2016-07-12 Cooperative caching method based on popularity prediction in Named Data Networking Active CN106131182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610549740.XA CN106131182B (en) 2016-07-12 2016-07-12 Cooperative caching method based on popularity prediction in Named Data Networking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610549740.XA CN106131182B (en) 2016-07-12 2016-07-12 Cooperative caching method based on popularity prediction in Named Data Networking

Publications (2)

Publication Number Publication Date
CN106131182A CN106131182A (en) 2016-11-16
CN106131182B true CN106131182B (en) 2019-04-09

Family

ID=57283817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610549740.XA Active CN106131182B (en) 2016-07-12 2016-07-12 Cooperative caching method based on popularity prediction in Named Data Networking

Country Status (1)

Country Link
CN (1) CN106131182B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686399B (en) * 2016-12-22 2019-07-02 陕西尚品信息科技有限公司 It is a kind of based on joint cache structure network in video cache method
CN106982248B (en) * 2017-03-01 2019-12-13 中国科学院深圳先进技术研究院 caching method and device for content-centric network
CN106899692A (en) * 2017-03-17 2017-06-27 重庆邮电大学 A kind of content center network node data buffer replacing method and device
CN107404530B (en) * 2017-07-27 2019-12-31 北京邮电大学 Social network cooperation caching method and device based on user interest similarity
CN107948247B (en) * 2017-11-01 2020-04-10 西安交通大学 Virtual cache channel cache management method of software defined network
CN107911471B (en) * 2017-12-01 2019-12-03 中国联合网络通信集团有限公司 The distributed caching method and equipment of data
CN108259929B (en) * 2017-12-22 2020-03-06 北京交通大学 Prediction and caching method for video active period mode
CN108093056B (en) * 2017-12-25 2020-11-10 重庆邮电大学 Node cache replacement method in information center wireless network virtualization network
CN108183867B (en) * 2017-12-25 2021-08-10 重庆邮电大学 Information center network node cache replacement method
CN108307446B (en) * 2018-01-05 2020-01-21 华中科技大学 Wireless network edge cooperative caching system and method based on software definition
CN108616923A (en) * 2018-02-27 2018-10-02 南京邮电大学 A kind of cooperation caching system based on mobile ad-hoc network
CN108540569B (en) * 2018-04-23 2020-01-24 燕东科技(广东)有限公司 Software installation package replacement method and device and computer storage medium
CN108566434B (en) * 2018-05-03 2020-04-17 北京邮电大学 Caching method and device based on popularity and node importance
CN108900570B (en) * 2018-05-30 2020-11-03 南京邮电大学 Cache replacement method based on content value
CN108900618A (en) * 2018-07-04 2018-11-27 重庆邮电大学 Content buffering method in a kind of information centre's network virtualization
CN109413694B (en) * 2018-09-10 2020-02-18 北京邮电大学 Small cell caching method and device based on content popularity prediction
CN109951875A (en) * 2019-02-22 2019-06-28 广州大学 Caching method, device, equipment and medium based on content popularit prediction
CN110086857B (en) * 2019-04-15 2022-01-14 哈尔滨工程大学 Information center network cache deployment method based on cache node state and position
CN110677190B (en) * 2019-10-09 2021-06-22 大连大学 Static processing and caching method for space-ground integrated intelligent network node
CN110602652B (en) * 2019-10-15 2021-07-27 中移信息技术有限公司 Complaint model training method, and user complaint prediction method, device and equipment
CN110602653A (en) * 2019-10-30 2019-12-20 中国科学技术大学 Pre-caching method based on track prediction
CN111294394B (en) * 2020-01-19 2022-09-27 扬州大学 Self-adaptive caching strategy method based on complex network junction
CN112039784B (en) * 2020-07-27 2022-01-28 南京邮电大学 Popularity-based routing expansion techniques in named data networks
CN112052198B (en) * 2020-08-11 2022-09-20 大连理工大学 Hash route cooperative caching method based on node betweenness popularity under energy consumption monitoring platform
CN111935301B (en) * 2020-08-20 2021-10-01 北京邮电大学 Method and device for determining cache placement decision information of named data network
CN112565437B (en) * 2020-12-07 2021-11-19 浙江大学 Service caching method for cross-border service network
CN113542349A (en) * 2021-05-31 2021-10-22 北京邮电大学 Cache replacement method and device for named data network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523256A (en) * 2011-11-30 2012-06-27 华为技术有限公司 Content management method, device and system
CN104853384A (en) * 2015-05-14 2015-08-19 南京邮电大学 Popularity-based content cache method in fifth-generation mobile communication system
CN105049254A (en) * 2015-07-30 2015-11-11 重庆邮电大学 Data caching substitution method based on content level and popularity in NDN/CCN
CN105357246A (en) * 2015-08-12 2016-02-24 北京交通大学 Caching method and system based on information centre network
CN105407055A (en) * 2015-12-30 2016-03-16 深圳先进技术研究院 Energy consumption control method of content center network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Improving Content-centric Networks Performance with Progressive, Diversity-Load Driven Caching; Jason Min Wang, Brahim Bensaou; Proceedings of the 2012 1st IEEE International Conference on Communications in China, Piscataway: IEEE; 2012-12-31; 85-90
Survey of caching techniques in information-centric networking; Zhang Tiankui et al.; Journal of Beijing University of Posts and Telecommunications; 2016-06-30; Vol. 39, No. 3; 1-15
Cache allocation strategy for service content on transmission paths in smart collaborative networks; Feng Bohao et al.; Journal on Communications; 2016-03-31; Vol. 37, No. 3; 129-138

Also Published As

Publication number Publication date
CN106131182A (en) 2016-11-16

Similar Documents

Publication Publication Date Title
CN106131182B (en) Cooperative caching method based on popularity prediction in Named Data Networking
CN105049254B (en) Data buffer storage replacement method based on content rating and popularity in a kind of NDN/CCN
CN105022700B (en) A kind of name data network cache management system and management method based on spatial cache division and content similarity
CN104811493B (en) The virtual machine image storage system and read-write requests processing method of a kind of network aware
Carofiglio et al. LAC: Introducing latency-aware caching in information-centric networks
Dutta et al. Caching scheme for information‐centric networks with balanced content distribution
CN108900570B (en) Cache replacement method based on content value
CN109905480B (en) Probabilistic cache content placement method based on content centrality
An et al. An in-network caching scheme based on energy efficiency for content-centric networks
CN108366089B (en) CCN caching method based on content popularity and node importance
CN110365801A (en) Partition-based cooperative caching method in information-centric networks
CN106899692A (en) A kind of content center network node data buffer replacing method and device
CN105656788B (en) CCN content buffering method based on popularity statistics
CN103905538A (en) Neighbor cooperation cache replacement method in content center network
CN108769252A (en) A kind of ICN network pre-cache methods based on request content relevance
Zhang et al. A hybrid ICN cache coordination scheme based on role division between cache nodes
CN103905539A (en) Optimal cache storing method based on popularity of content in content center network
Ramadan et al. Big cache abstraction for cache networks
CN107302571B (en) Information-centric network routing and cache management method based on the fruit fly algorithm
CN108093056A (en) Node cache replacement method in information-centric wireless network virtualization
CN108183867A (en) Information-centric network node cache replacement method
Chao et al. A game theoretic approach for energy-efficient in-network caching in content-centric networks
CN112822275B (en) Lightweight caching strategy based on TOPSIS entropy weight method
Li et al. HCaching: high-speed caching for information-centric networking
Yuan et al. An optimal fair resource allocation strategy for a lightweight content-centric networking architecture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant