CN103023768B - Edge routing node and method for prefetching content from multiple sources - Google Patents

Edge routing node and method for prefetching content from multiple sources

Info

Publication number
CN103023768B
CN103023768B (application CN201310011815.5A)
Authority
CN
China
Prior art keywords
content
routing node
source
cached
described content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310011815.5A
Other languages
Chinese (zh)
Other versions
CN103023768A (en)
Inventor
Lin Tao (林涛)
Zhou Xu (周旭)
Fan Pengfei (范鹏飞)
Wang Bo (王博)
Fu Tongmin (付通敏)
Liu Yinlong (刘银龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Acoustics CAS
Original Assignee
Institute of Acoustics CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS filed Critical Institute of Acoustics CAS
Priority to CN201310011815.5A
Publication of CN103023768A
Application granted
Publication of CN103023768B

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention relates to a method by which an edge routing node of a content-centric network prefetches content from multiple sources, and to the corresponding edge routing node. The method comprises: judging whether a received packet is arriving for the first time; if so, querying whether this node has cached the corresponding content; if not, judging whether the content satisfies the condition for starting multi-source prefetching and, if it does, performing multi-source prefetching according to a prefetch policy. Multi-source prefetching specifically comprises: querying which routing nodes have cached the content, and having the edge routing node fetch different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content. The invention starts multi-source prefetching based on content size, sets the parallel-prefetch ratio based on available link bandwidth, and adds a routes field to specify the routing path, thereby improving the utilization of network resources and the efficiency of content distribution and retrieval.

Description

Edge routing node and method for prefetching content from multiple sources
Technical field
The present invention relates to the field of content retrieval in content-centric networks, and in particular to an edge routing node and a method by which it prefetches content from multiple sources.
Background technology
Internet technology and applications are developing rapidly, and broadband access and rich content have become the themes of that development. It has been predicted that IP traffic will grow at a compound annual rate of 34%, reaching 80.5 EB per month worldwide by 2015, and that content-related traffic will account for more than 97.5% of all Internet traffic in 2015. Current approaches to large-scale content distribution on the future Internet mainly include peer-to-peer (P2P) technology, content caching, content delivery network (CDN) technology, and content-centric networking (CCN) technology.
In content caching, one class of prefetch policy caches content that the user has not yet requested: by analyzing the user's past accesses, it computes the user's access pattern, predicts future behavior, and prefetches the content the user is likely to access. CDN technology adds a new architectural layer to the existing Internet that publishes content to the network edge closest to the user, so that users obtain the content they need nearby, improving the response speed of user access.
L. Zhang et al. proposed the CCN (Content-Centric Network) architecture in "Named Data Networking (NDN) Project" (PARC Technical Report NDN-0001, October 2010) to improve the efficiency of content distribution and retrieval in the network. CCN completely abandons the IP convention of naming each host by an IP address and instead names content. In a CCN network, each file is split into several fixed-size blocks (chunks), and each chunk is assigned a fixed name, e.g. ccnx://hpnl.ioa.ac.cn/video/filename/_chunknum/_version. There are two kinds of packets in CCN: Interest packets and Data packets. An Interest packet carries a content name and other related information, such as the content's version and access rights; a Data packet carries a content name, the content data corresponding to that name, and other related information. Interest and Data packets correspond one to one. The routers that process content packets in a CCN network differ from conventional routers: a CCN router has a caching function that stores content in a local CS (Content Store) and caches, according to some policy, the Data packets passing through it. In addition, a CCN router routes Interest packets by name (its FIB, Forwarding Information Base, stores the mapping between content names and interfaces) and forwards Data packets along the reverse path of the corresponding Interest (the PIT, Pending Interest Table, stores the state of each pending Interest). An end user sends an Interest packet and CCN routers route it by name; if the content exists in the cache of some router node, the corresponding Data packet is returned directly. If no router along the way has the content, the Interest is eventually forwarded to the content source server of the network (also called the central server), which returns the content according to the Interest; routers on the return path may then cache the content, so that a router receiving the same Interest again can return the Data directly.
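As background, the CS/PIT/FIB behavior described above can be sketched in a few lines of Python. This is an illustrative simplification under assumed names; it is not part of the patent or of any CCN implementation.

```python
class CcnRouter:
    """Minimal sketch of CCN forwarding: CS hit, PIT aggregation, FIB lookup."""

    def __init__(self):
        self.cs = {}    # Content Store: name -> Data
        self.pit = {}   # Pending Interest Table: name -> set of requesting faces
        self.fib = {}   # Forwarding Information Base: name prefix -> out face

    def on_interest(self, name, in_face):
        if name in self.cs:                      # cache hit: answer directly
            return ("data", self.cs[name], in_face)
        if name in self.pit:                     # same Interest already pending
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        self.pit[name] = {in_face}
        return ("forward", name, self._longest_prefix_match(name))

    def on_data(self, name, data):
        self.cs[name] = data                     # cache on the reverse path
        return sorted(self.pit.pop(name, set())) # faces to send the Data back on

    def _longest_prefix_match(self, name):
        best = None
        for prefix, face in self.fib.items():
            if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
                best = (prefix, face)
        return best[1] if best else None
```

The one-to-one Interest/Data correspondence shows up here as the PIT entry that is created by `on_interest` and consumed by `on_data`.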
With P2P content distribution, a user can obtain data from multiple peer nodes simultaneously. However, because P2P uses an overlay structure that connects nodes according to the logical relationships between users, without considering the physical topology of the network, it has drawbacks. For example, nodes exchange data by seizing or monopolizing network resources, so the multiplexing efficiency of network bandwidth is low. Moreover, P2P networks often exchange data between nodes that are far apart in network distance, which makes transfers time-consuming and easily disrupts traffic.
Both caching and CDN technology place content near the user to improve download speed and optimize traffic to some extent. However, because they operate at the application layer and are optimized for specific applications and specific websites, their overall data-transfer efficiency is not high. In addition, the content prefetch policy in a caching system predicts an access model from the user's access history and then judges which files the user may access and prefetches them. Because user behavior is complex and changeable, such a policy can hardly make an accurate judgment about the user's next action, so its effectiveness is limited.
CCN adopts a content-centric architecture that routes on content directly, ensuring efficient transmission of content messages in the network. However, because the lookup memory that high-speed forwarding equipment relies on is expensive and power-hungry, the routing-table capacity it can support is limited and fine-grained content scheduling is impossible; moreover, the content cached at each edge node tends to be scattered and is hard for other edge nodes to exploit. In addition, because CCN forwards packets on content, the Interest packets for a given piece of content usually travel along a single path, so it is difficult for a user to download a file from multiple data sources simultaneously, and efficiency is not high.
Summary of the invention
In view of the above problems in the prior art, the present invention provides an edge routing node and a method by which it prefetches content from multiple sources.
In a first aspect, an embodiment of the present invention provides a method by which an edge routing node of a content-centric network prefetches content from multiple sources. The method comprises: when the edge routing node receives an Interest packet request, judging whether the Interest packet is arriving at this edge router for the first time; when the judgment is affirmative, querying whether this edge node has cached the content corresponding to the content name in the Interest packet; if the query returns nothing, judging according to a prefetch policy whether the content satisfies the condition for starting multi-source prefetching, and when it does, performing multi-source prefetching. Multi-source prefetching specifically comprises: the edge routing node querying which routing nodes in the content-centric network have cached the content, and then fetching different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content.
Preferably, the prefetch policy comprises: estimating the size of the content from its type, and starting multi-source prefetching only when the estimated size is large.
Preferably, the prefetch policy further comprises: if the type of the content is audio, video, or software, the content's estimated size is large.
Preferably, the edge routing node querying which routing nodes in the content-centric network have cached the content specifically comprises: the edge routing node querying, through a content location mechanism, which routing nodes in the content-centric network have cached the content.
Preferably, fetching different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content specifically comprises: the edge routing node allocating, according to the idle network bandwidth between itself and the content source server and between itself and each routing node that has cached the content, the proportion of the content to obtain from each of these sources, and then fetching in parallel, according to these proportions, the corresponding blocks of the content from the content source server and from the routing nodes that have cached the content.
Preferably, fetching in parallel the blocks of the content corresponding to these proportions specifically comprises: the edge routing node sending the Interest packets for the respective blocks to the content source server and to the routing nodes that have cached the content, each Interest packet containing a routes field that specifies the routing path along which the corresponding block is to be returned.
In a second aspect, an embodiment of the present invention provides an edge routing node for carrying out the method of the first aspect. The edge routing node comprises: a first-arrival judging module, used to judge, when the edge routing node receives an Interest packet request, whether the Interest packet is arriving at this edge router for the first time; a local content query module, used to query, when the judgment is affirmative, whether this edge node has cached the content corresponding to the content name in the Interest packet; and a multi-source prefetch module, used to judge, if the query returns nothing, according to a prefetch policy whether the content satisfies the condition for starting multi-source prefetching, and when it does, to perform multi-source prefetching. Multi-source prefetching specifically comprises: the edge routing node querying which routing nodes in the content-centric network have cached the content, and then fetching different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content.
Preferably, the prefetch policy comprises: estimating the size of the content from its type, and starting multi-source prefetching only when the estimated size is large.
Preferably, fetching different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content specifically comprises: the edge routing node allocating, according to the idle network bandwidth between itself and the content source server and between itself and each routing node that has cached the content, the proportion of the content to obtain from each of these sources, and then fetching in parallel, according to these proportions, the corresponding blocks of the content from the content source server and from the routing nodes that have cached the content.
Preferably, fetching in parallel the blocks of the content corresponding to these proportions specifically comprises: the edge routing node sending the Interest packets for the respective blocks to the content source server and to the routing nodes that have cached the content, each Interest packet containing a routes field that specifies the routing path along which the corresponding block is to be returned.
By combining the advantages of P2P, caching, CDN, and CCN, embodiments of the present invention design an edge routing node in a content distribution network and a method by which it prefetches content from multiple sources: multi-source prefetching is started based on a prediction of content size, the parallel-prefetch ratio is set based on network link bandwidth, and a new routes field in the Interest packet specifies the routing path. This effectively improves the utilization of network resources and the efficiency of content distribution and retrieval in the network.
Accompanying drawing explanation
Specific embodiments of the invention are described in detail below with reference to the accompanying drawings, through which the advantages of the present invention will become more apparent. In the drawings:
Fig. 1 is a schematic diagram of a content-centric network (CCN) according to an embodiment of the present invention;
Fig. 2 is a flow diagram of an edge routing node prefetching content from multiple sources according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the Interest packet forwarding process according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a multi-source prefetching scenario according to an embodiment of the present invention;
Fig. 5 is a comparison of the user's content download speed before and after multi-source prefetching is started, used to verify the performance of an embodiment of the present invention.
Detailed description of embodiments
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
To overcome the defects of existing schemes, the present invention combines the advantages of P2P, caching, CDN, CCN, and related technologies to design a method by which an edge routing node in a content distribution network prefetches content from multiple sources, thereby enabling edge routing nodes in a CCN network to prefetch from multiple sources, accelerating data distribution, and improving the user experience. Multi-source prefetching accelerates the distribution of content through the network. A "source" here refers to the content source server or any routing node in the CCN network.
The embodiments are described in more detail below so that those skilled in the art can better understand their principles and implementation details.
Specifically, the overall process by which an edge routing node prefetches a file from multiple sources is first described in the context of the CCN architecture; the detailed issues involved in that process — the multi-source prefetch policy, the content location mechanism, and the realization of multi-source routing — are then introduced in turn.
Overall procedure
The structure of a CCN network is shown in Fig. 1; each routing node in the network performs both caching and routing. A user requests content by sending an Interest packet, and each routing node obtains the data corresponding to the Interest from its local cache or from another routing node and returns a Data packet in response.
The present invention enables an edge routing node to prefetch content from multiple sources, accelerating file downloads for the user. The overall process is shown in Fig. 2. When an edge routing node receives an Interest packet request initiated by a user, it first analyzes the packet to determine whether it is the first Interest packet for the requested file/content. If it is, the node queries whether it has cached the requested file/content; if it has, it processes the Interest according to the CCN protocol. Otherwise the node determines, according to the prefetch policy, whether to enable multi-source prefetching for this file. If the file satisfies the prefetch condition, the edge routing node first searches the network for the nodes that have cached the file/content and then obtains different blocks of the file simultaneously from the default path and from those caching nodes; otherwise it obtains the file from the default path only. The default path here is the path for obtaining the file/content determined by the CCN routing protocol.
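The decision flow of Fig. 2 can be sketched as follows; every helper name here (`policy`, `locate`, the return values) is an illustrative assumption of ours, not an identifier from the patent:

```python
def on_first_interest(node, name, is_first):
    """Decide how to satisfy an Interest, following the flow of Fig. 2."""
    if not is_first:
        return "ccn-protocol"               # not the first Interest: normal CCN handling
    if name in node["cache"]:
        return "ccn-protocol"               # content already cached at this edge node
    if not node["policy"](name):
        return "default-path"               # prefetch condition not met
    holders = node["locate"](name)          # content location mechanism (see below)
    if holders:
        return ["source-server"] + holders  # fetch different blocks in parallel
    return "default-path"                   # nobody caches it: default path only

# Illustrative edge node: a toy type-based policy and a canned location answer.
node = {
    "cache": set(),
    "policy": lambda name: name.endswith(".mp4"),
    "locate": lambda name: ["router-F"],
}
```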
Multi-source prefetch policy
Content/files on the Internet differ greatly in size, type, and so on. Multi-source prefetching can raise the speed at which content is obtained, but locating resources and establishing connections with other routing nodes also incurs overhead. For small files, the overhead of multi-source retrieval may exceed the speed-up that prefetching brings, so the present invention enables multi-source prefetching only for large files.
In a CCN network, an Interest packet carries information related to the file, such as its name and type, but the file's size cannot be determined before any of its data has been obtained. The present invention therefore estimates the size of a file from its type and then decides whether to enable parallel prefetching. In general, content such as audio, video, and software is large. The multi-source prefetch policy first defines which types of content should be prefetched. When the first Interest packet from a user arrives, the routing node parses the type of the requested file and compares it with the predefined list of types to prefetch. If the file's type is in the prefetch list, the node uses the content location mechanism to search the network for other routing nodes holding the file, and then obtains the corresponding data simultaneously from the default path (i.e. the path from this node to the content source server) and from those network nodes.
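The type check above can be sketched as a simple suffix lookup. The patent names audio, video, and software as "large" types but fixes no concrete extensions, so the list below is purely illustrative:

```python
# Illustrative prefetch-type list (assumed, not taken from the patent).
PREFETCH_TYPES = {"mp3", "mp4", "avi", "mkv", "exe", "iso"}

def should_prefetch(content_name: str) -> bool:
    """Start multi-source prefetching only for types judged to be large."""
    last = content_name.rsplit("/", 1)[-1]                       # file component
    ext = last.rsplit(".", 1)[-1].lower() if "." in last else ""
    return ext in PREFETCH_TYPES
```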
In a CCN network a file is split into several fixed-size blocks (chunks) for transmission. When performing multi-source prefetching, the present invention obtains different blocks of the file from different sources according to link conditions. For example, in Fig. 1 the file f requested by the user is stored both in the content source server and at routing node F. When the user requests file f, routing node A can obtain it from both data sources simultaneously: under the prefetch mechanism of the present invention, node A obtains the front portion of file f from the content source server while prefetching the remaining portion from network node F. The proportion obtained from each data source is determined by the link conditions between the network nodes. Suppose the bandwidth from routing node A to the content source server is BW0 and the bandwidth from A to routing node F is BW1. Then the proportion of the file prefetched from routing node F is:

Ratio_F = α · BW1 / (BW0 + BW1)

where α is a prefetch margin that can take a value in (0, 1). Routing node A downloads the blocks corresponding to the front portion [0, 1 − Ratio_F] of the file from the content source server and the blocks corresponding to the tail portion [1 − Ratio_F, 1] from routing node F. Ideally, by the time node A has finished downloading the front portion from the content source server, the tail portion prefetched from node F has also finished downloading. The user can then obtain the tail portion of the file directly from the local routing node; since the edge node is closer to the user, a higher download speed can be guaranteed.
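The split can be sketched numerically. The function and variable names are ours; the chunk assignment (front portion from the source, tail portion of fraction Ratio_F from the caching node) follows the description above, and the example numbers are the Fig. 4 link figures:

```python
def prefetch_split(bw0, bw1, alpha, num_chunks):
    """Assign chunk indices given Ratio_F = alpha * BW1 / (BW0 + BW1).

    bw0: bandwidth to the content source server; bw1: bandwidth to the
    caching routing node (same units); alpha in (0, 1) is the prefetch margin.
    """
    ratio_f = alpha * bw1 / (bw0 + bw1)
    cut = round(num_chunks * (1.0 - ratio_f))
    from_source = list(range(0, cut))           # front portion, from the source
    from_cache = list(range(cut, num_chunks))   # tail portion, prefetched from F
    return ratio_f, from_source, from_cache

# Example with the Fig. 4 bandwidths: BW0 = 20 Mbps, BW1 = 25 Mbps, alpha = 0.8
ratio, src_chunks, cache_chunks = prefetch_split(20, 25, 0.8, 100)
```

Note that the two index ranges partition the file exactly, so every chunk is requested from exactly one source.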
Content location mechanism
Content location can be realized in a centralized or a distributed manner. In the centralized realization, each routing node actively reports information about its locally cached content to a content locate system (CLS); the CLS is placed in the content source server of the CCN network and manages the information about the content cached at every routing node in the network. When a routing node needs to perform multi-source prefetching, it simply sends a query request to the CLS, and the CLS returns the locations of the routing nodes that hold the requested file. In the distributed realization, the routing node locally floods a query request to nearby routing nodes, and each routing node checks its local cache and answers whether it has cached the requested file. Either approach can be chosen as needed in a concrete implementation.
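The centralized option can be sketched as a simple report/query index; `ContentLocateSystem` and its method names are illustrative, not from the patent:

```python
class ContentLocateSystem:
    """Sketch of the CLS: routers report cached names; prefetchers query them."""

    def __init__(self):
        self._index = {}    # content name -> set of router ids caching it

    def report(self, router_id, cached_names):
        """A routing node actively reports its locally cached content."""
        for name in cached_names:
            self._index.setdefault(name, set()).add(router_id)

    def query(self, name):
        """Return the routing nodes known to cache `name` (sorted for stability)."""
        return sorted(self._index.get(name, set()))
```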
Multi-source route
In a content-centric network, each routing node routes packets by content name without caring which routing node a packet is addressed to. During multi-source prefetching, however, the prefetch Interest packets must be sent to designated routing nodes. To realize this, the present invention modifies the Interest packet header by adding a routes field that specifies the packet's routing path. Under the centralized content location mode, the routing path in routes is generated by the CLS according to the network topology; under the distributed mode, it is determined by the routing node that sent the content-location query, from the forwarding-path information of the reply packets.
Referring to Fig. 3, when a routing node receives an Interest packet, if the corresponding data is not cached locally and the content has no match in the PIT (Pending Interest Table, located at the edge node), the node checks whether the packet header contains a routes field. If it does, the packet is forwarded to the corresponding face according to the routes instruction; otherwise the node looks up the content name in the FIB (Forwarding Information Base, located at the edge node) for routing and forwarding. The detailed process is shown in Fig. 3.
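The Fig. 3 forwarding order (local cache, then PIT, then the new routes field, then the ordinary FIB lookup) can be sketched as follows; the dictionary keys and return values are assumptions of ours:

```python
def forward_interest(node, interest):
    """Forward an Interest per Fig. 3, honoring an explicit routes field."""
    name = interest["name"]
    if name in node["cs"]:                      # cached locally: answer with Data
        return ("data", node["cs"][name])
    if name in node["pit"]:                     # already pending: aggregate faces
        node["pit"][name].append(interest["face"])
        return ("aggregated", None)
    node["pit"][name] = [interest["face"]]
    routes = interest.get("routes")
    if routes:                                  # source-routed prefetch Interest
        return ("forward", routes.pop(0))       # consume the next specified hop
    return ("forward", node["fib"].get(name, "default-face"))
```

A prefetch Interest thus overrides name-based routing hop by hop, while ordinary Interests without a routes field are handled exactly as before.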
The experimental verification of performance
To verify the concrete effect of this scheme, we modified the CCNx project and built a verification environment whose topology is shown in Fig. 4. Suppose the available bandwidth from the user to routing node A is 100 Mbps, from routing node A to routing node B is 20 Mbps, and from routing node A to routing node F is 25 Mbps, and that the prefetch margin α is set to 0.8.
To verify the effect of multi-source prefetching, we first had user 2 download file f, so that file f was cached at network nodes B, C, and F, and then had user 1 download file f. We measured user 1's download speed for file f before and after enabling the multi-source prefetch policy; the results are shown in Fig. 5. As the figure shows, without cache prefetching the download speed of user 1 stays around 18 Mbps and downloading file f takes 30 s. With cache prefetching enabled, user 1's download speed initially stays around 18 Mbps, then after a while rises to nearly 75 Mbps, and the total time to download file f is only 18 s. With the caching policy of the present invention, network node A not only obtains file f along the default path from caching node B but simultaneously prefetches file f from network node F, ultimately improving the efficiency with which the user obtains the file.
In summary, the key points and advantages of the present invention are as follows. In a content-centric network, the edge routing node downloads part of the file requested by the user in advance through the multi-source prefetch policy, so the user can fetch the content directly from the edge node, raising the user's download speed. By adding a routes field, packets can be routed between nodes along a specified path. The multi-source prefetch policy decides whether to prefetch according to file size, which ensures that multi-source prefetching is used efficiently. The policy starts only after the user has issued the first Interest request for the content/file — that is, it prefetches the remainder of that content/file; compared with prefetching whole files from a model of user requests, the prefetched data has a higher probability of actually being requested by the user, which improves efficiency. Finally, the policy determines the proportion to prefetch from each data source according to link conditions and obtains different blocks of the file from different nodes in those proportions, making reasonable use of the network bandwidth and likewise improving efficiency.
Those skilled in the art will further recognize that the exemplary modules/systems/units/devices and method steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical scheme. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of this application.
The method steps described in connection with the embodiments disclosed herein may be implemented in hardware, in a software unit executed by a processor, or in a combination of the two. A software unit may reside in random-access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
It should be noted that these are only preferred embodiments of the present invention and are not intended to limit its scope of practice; technicians with the relevant professional knowledge can realize the present invention through the above embodiments, so every change, modification, and improvement made within the spirit and principles of the present invention is covered by the claims of the present invention. That is, the above embodiments are intended only to illustrate, not to restrict, the technical scheme of the present invention; although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical scheme of the present invention may be modified or equivalently replaced without departing from its spirit and scope.

Claims (8)

1. A method for an edge routing node of a content-centric network to prefetch content from multiple sources, characterized in that the method comprises:
when the edge routing node receives an interest packet request, judging whether the interest packet is arriving at the edge routing node for the first time;
when the judgment result is yes, querying whether the edge routing node has cached the content corresponding to the content name in the interest packet;
if the query result is negative, judging whether the content satisfies a condition for starting multi-source prefetching, and when the condition is satisfied, performing multi-source prefetching according to a prefetch policy, the multi-source prefetching specifically being:
the edge routing node queries which routing nodes in the content-centric network have cached the content, and obtains different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content;
wherein obtaining different blocks of the content in parallel from the content source server and from the routing nodes that have cached the content specifically is: the edge routing node allocates, according to the idle link bandwidth between itself and the content source server and each of the routing nodes that have cached the content, the different proportions of the content to be obtained from the content source server and from those routing nodes, and obtains in parallel, according to the different proportions, the different blocks of the content corresponding to those proportions.
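The bandwidth-proportional split in claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, the dict-based interface, and the remainder-to-last-source rule are assumptions for the sketch.

```python
# Hypothetical sketch of the idle-bandwidth-proportional block allocation
# described in claim 1. All names and data shapes are illustrative.
def split_blocks(num_blocks, idle_bandwidth):
    """Assign content blocks to sources in proportion to idle link bandwidth.

    idle_bandwidth: dict mapping source id -> idle bandwidth toward that source.
    Returns a dict mapping source id -> number of blocks to fetch from it.
    """
    total = sum(idle_bandwidth.values())
    shares = {}
    assigned = 0
    sources = list(idle_bandwidth)
    for src in sources[:-1]:
        n = int(num_blocks * idle_bandwidth[src] / total)
        shares[src] = n
        assigned += n
    # Any rounding remainder goes to the last source (an assumed tie-break).
    shares[sources[-1]] = num_blocks - assigned
    return shares
```

For example, with 10 blocks and idle bandwidths of 60, 30, and 10 units toward the content source server and two caching routers, the split is 6, 3, and 1 blocks respectively, so all links finish at roughly the same time.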
2. The method according to claim 1, characterized in that the prefetch policy comprises: weighting the size of the content according to the type of the content, and starting multi-source prefetching when the weighted result is large.
3. The method according to claim 2, characterized in that the prefetch policy further comprises: if the type of the content is audio, video, or software, the weighted size of the content is larger.
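The type-weighted size check of claims 2 and 3 can be illustrated as below. The specific weight values and the threshold are assumptions for the sketch; the patent only states that audio, video, and software weigh larger.

```python
# Illustrative prefetch-policy check (claims 2 and 3). The weights and the
# threshold are assumed values, not taken from the patent.
TYPE_WEIGHT = {"audio": 3.0, "video": 3.0, "software": 3.0}  # "large" types
DEFAULT_WEIGHT = 1.0
PREFETCH_THRESHOLD = 50 * 1024 * 1024  # assumed: 50 MiB weighted size

def should_prefetch(content_type, size_bytes):
    """Start multi-source prefetching only when the weighted size is large."""
    weight = TYPE_WEIGHT.get(content_type, DEFAULT_WEIGHT)
    return size_bytes * weight >= PREFETCH_THRESHOLD
```

Under these assumed numbers, a 20 MiB video (weighted to 60 MiB) triggers multi-source prefetching, while a 20 MiB text file does not.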
4. The method according to claim 1, characterized in that the edge routing node querying which routing nodes in the content-centric network have cached the content specifically is: the edge routing node queries, according to a content location mechanism, which routing nodes in the content-centric network have cached the content.
5. The method according to claim 1, characterized in that obtaining in parallel, according to the different proportions, the different blocks of the content corresponding to the different proportions specifically is: the edge routing node sends interest packets for the different blocks of the content corresponding to the different proportions to the content source server and to the routing nodes that have cached the content, respectively, each interest packet comprising a route field that specifies the routed path over which the corresponding blocks are returned.
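The per-source interest packets with an explicit route field (claim 5) can be sketched like this. The `Interest` structure and field names are illustrative assumptions; actual CCN/NDN packet encodings differ.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of claim 5: one interest packet per source, carrying a
# route field that pins the return path for that source's blocks.
@dataclass
class Interest:
    content_name: str
    blocks: range        # which blocks of the content this packet requests
    route: List[str]     # explicit routed path for returning the blocks

def build_interests(content_name, shares, paths):
    """shares: source -> number of blocks; paths: source -> hop list."""
    interests, start = [], 0
    for src, n in shares.items():
        interests.append(Interest(content_name, range(start, start + n), paths[src]))
        start += n
    return interests
```

Each source thus receives a disjoint block range, and the route field lets the edge routing node control the path the data packets take back, rather than leaving it to default forwarding.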
6. An edge routing node for performing the method of claim 1, characterized in that the edge routing node comprises:
a first-arrival judging module, configured to judge, when the edge routing node receives an interest packet request, whether the interest packet is arriving at the edge routing node for the first time;
a local content query module, configured to query, when the judgment result is yes, whether the edge routing node has cached the content corresponding to the content name in the interest packet;
a multi-source prefetch module, configured to judge, if the query result is negative, whether the content satisfies a condition for starting multi-source prefetching, and to perform multi-source prefetching according to a prefetch policy when the condition is satisfied, the multi-source prefetching specifically being:
the edge routing node queries which routing nodes in the content-centric network have cached the content, and obtains different blocks of the content in parallel from the content source server of the content-centric network and from the routing nodes that have cached the content;
wherein obtaining different blocks of the content in parallel from the content source server and from the routing nodes that have cached the content specifically is: the edge routing node allocates, according to the idle network bandwidth between itself and the content source server and each of the routing nodes that have cached the content, the different proportions of the content to be obtained from the content source server and from those routing nodes, and obtains in parallel, according to the different proportions, the different blocks of the content corresponding to those proportions.
7. The edge routing node according to claim 6, characterized in that the prefetch policy comprises: weighting the size of the content according to the type of the content, and starting multi-source prefetching only when the weighted result is large.
8. The edge routing node according to claim 6, characterized in that obtaining in parallel, according to the different proportions, the different blocks of the content corresponding to the different proportions specifically is: the edge routing node sends interest packets for the different blocks of the content corresponding to the different proportions to the content source server and to the routing nodes that have cached the content, respectively, each interest packet comprising a route field that specifies the routed path over which the corresponding blocks are returned.
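The three modules of claims 6-8 can be wired together in a compact structural sketch. The class, the return-value strings, and the set-based first-arrival check are illustrative assumptions used only to show how the modules cooperate.

```python
# Structural sketch of the edge routing node of claims 6-8. The three
# modules are modeled inline; all names and return values are illustrative.
class EdgeRoutingNode:
    def __init__(self):
        self.seen = set()    # interest names already received (first-arrival check)
        self.cache = {}      # locally cached content, keyed by content name

    def on_interest(self, name):
        """Return the action taken for an incoming interest packet."""
        first = name not in self.seen        # first-arrival judging module
        self.seen.add(name)
        if not first:
            return "forward"                 # not a first arrival: normal handling
        if name in self.cache:               # local content query module
            return "serve-from-cache"
        return "multi-source-prefetch"       # multi-source prefetch module
```

A first-arriving interest for uncached content triggers the multi-source prefetch path; a repeat of the same interest falls back to ordinary forwarding.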
CN201310011815.5A 2013-01-11 2013-01-11 Edge routing node and its method from multi-source prefetching content Expired - Fee Related CN103023768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310011815.5A CN103023768B (en) 2013-01-11 2013-01-11 Edge routing node and its method from multi-source prefetching content


Publications (2)

Publication Number Publication Date
CN103023768A CN103023768A (en) 2013-04-03
CN103023768B true CN103023768B (en) 2015-11-04

Family

ID=47971915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310011815.5A Expired - Fee Related CN103023768B (en) 2013-01-11 2013-01-11 Edge routing node and its method from multi-source prefetching content

Country Status (1)

Country Link
CN (1) CN103023768B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103312725B (en) * 2013-07-05 2016-05-25 江苏大学 A kind of content center network-caching decision method based on node significance level
CN103457999B (en) * 2013-08-06 2016-05-04 北京大学深圳研究生院 A kind of P2P document transmission method based on the NDN network architecture
CN103442039B (en) * 2013-08-13 2016-12-28 南京师范大学 A kind of caching cooperative system based on caching Partition of role
CN104426769A (en) * 2013-09-09 2015-03-18 北京大学 Routing method and router
CN103546559B (en) * 2013-10-24 2018-02-02 网宿科技股份有限公司 Data distributing method and device
CN104065760B (en) * 2013-11-25 2017-08-25 中国科学院计算机网络信息中心 The credible addressing methods of CCN and system based on DNS and its Extended Protocol
CN105210340B (en) * 2013-11-29 2018-09-07 华为技术有限公司 cache decision method and device
CN104717186B (en) 2013-12-16 2019-06-25 腾讯科技(深圳)有限公司 A kind of method, apparatus and data transmission system for transmitting data in network system
CN103747083B (en) * 2014-01-02 2015-10-14 北京邮电大学 A kind of content delivery method based on CCN
CN104811323A (en) * 2014-01-23 2015-07-29 腾讯科技(深圳)有限公司 Data requesting method, data requesting device, node server and CDN (content delivery network) system
US9979644B2 (en) * 2014-07-13 2018-05-22 Cisco Technology, Inc. Linking to content using information centric networking
EP3207687B1 (en) * 2014-10-14 2020-07-08 IDAC Holdings, Inc. Anchoring ip devices in icn networks
CN104661249B (en) * 2014-12-29 2018-07-06 中国科学院计算机网络信息中心 A kind of system and method for reducing the delay of NDN network Mobile users content obtaining
CN105812840A (en) * 2014-12-29 2016-07-27 乐视网信息技术(北京)股份有限公司 Live video transmission method, live video transmission device, and video direct broadcast system
US9954795B2 (en) * 2015-01-12 2018-04-24 Cisco Technology, Inc. Resource allocation using CCN manifests
CN107181775B (en) * 2016-03-10 2020-09-04 北京大学 Routing method and routing device in content-centric network
CN105847393A (en) * 2016-04-25 2016-08-10 乐视控股(北京)有限公司 Content distribution method, device and system
CN106294702A (en) * 2016-08-08 2017-01-04 龙官波 A kind of information query method and device
CN106452923B (en) * 2016-11-30 2019-07-19 重庆邮电大学 A kind of the flow simulation generation system and method for content oriented central site network
CN107302571B (en) * 2017-06-14 2019-10-18 北京信息科技大学 The routing of information centre's network and buffer memory management method based on drosophila algorithm
CN109561355B (en) * 2017-09-27 2020-07-17 中国科学院声学研究所 System and method for CCN/NDN content registration, content location analysis and content routing
CN108449608B (en) * 2018-04-02 2020-12-29 西南交通大学 Block downloading method corresponding to double-layer cache architecture
CN110380979A (en) * 2018-04-17 2019-10-25 北京升鑫网络科技有限公司 A kind of method and system of chained record distribution
CN108768857B (en) * 2018-08-30 2021-04-02 中国联合网络通信集团有限公司 Virtual route forwarding method, device and system
CN111654873B (en) * 2019-09-27 2022-08-16 西北大学 Mobile CDN link selection energy consumption optimization method based on global utility cache strategy
CN111432231B (en) * 2020-04-26 2023-04-07 中移(杭州)信息技术有限公司 Content scheduling method of edge network, home gateway, system and server
CN112637908B (en) * 2021-03-08 2021-06-25 中国人民解放军国防科技大学 Fine-grained layered edge caching method based on content popularity

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101272404A (en) * 2008-05-15 2008-09-24 中国科学院计算技术研究所 Link selection method of P2P video living broadcast system data scheduling
CN102638405A (en) * 2012-04-12 2012-08-15 清华大学 Routing method of content-centric network strategy layer

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8667172B2 (en) * 2011-06-07 2014-03-04 Futurewei Technologies, Inc. Method and apparatus for content identifier based radius constrained cache flooding to enable efficient content routing


Also Published As

Publication number Publication date
CN103023768A (en) 2013-04-03


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151104

Termination date: 20190111
