CN105357278A - Guandu cache strategy for named-data mobile ad hoc network - Google Patents

Guandu cache strategy for named-data mobile ad hoc network

Info

Publication number
CN105357278A
CN105357278A CN201510674595.3A CN201510674595A
Authority
CN
China
Prior art keywords
node
data
cache
space
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510674595.3A
Other languages
Chinese (zh)
Other versions
CN105357278B (en
Inventor
张丽
毕帅
石振莲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510674595.3A priority Critical patent/CN105357278B/en
Publication of CN105357278A publication Critical patent/CN105357278A/en
Application granted granted Critical
Publication of CN105357278B publication Critical patent/CN105357278B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63: Routing a service request depending on the request content or context
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5683: Storage of data provided by user terminals, i.e. reverse caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a Guandu cache strategy for a named-data mobile ad hoc network (NDM), belonging to the field of network caching. The Guandu strategy aims to cache as few content copies as possible on MANET mobile nodes and to minimize each node's computation and maintenance overhead, while still preserving the fast response times of the NDN model, so that it suits mobile nodes whose storage and computing capabilities are both limited. Its core idea is: a forwarding node caches the data only if it is far enough from the content node, has free storage space, the data may be requested again, and the data are not already cached nearby; otherwise it does not cache. This reduces data redundancy in the network and the occupation of mobile devices' storage space, while still placing data as close to users as possible, thereby reducing network bandwidth use and improving response speed. The strategy can greatly reduce the cache-space occupation of mobile nodes without adding much network traffic, while providing good response delay.

Description

Guandu cache policy for a named-data mobile ad hoc network
Technical field
The invention belongs to the technical field of network caching.
Background technology
Mobile ad hoc networks (MANET, Mobile Ad hoc Network) have attracted attention for a long time. However, MANET applications and research have mostly remained in military and emergency settings rather than public networking, so MANETs have never become truly widespread. Advances in wireless network technology now let ever more consumer mobile devices access the network easily, and surveys show that a growing share of today's Internet traffic is contributed by mobile devices.
Huge numbers of mobile devices may concentrate in a region during a certain period, such as a crowded subway, a moving train carriage, or a busy airport. If a MANET were available there, many attractive applications could be offered to these users. For example, over a MANET, people in the same tour group could watch a video stored on one member's device together, without paying any data charges, without any external bandwidth exchange, without even knowing which device the video comes from, and without the support of any network infrastructure at all. All of this argues that MANETs have both adequate technical support and an application market.
NDN (Named Data Networking) is a novel network architecture proposed in the last decade in response to how Internet applications have developed. NDN separates a resource from the host where it resides and uses content names as the index for routing. NDN caches the Data packets it forwards in the network and addresses content itself as the entity, so content can be stored anywhere and any copy of it may be retrieved. This not only improves response time but also mitigates the impact of network failures.
NDN has inherent support for mobility. A mobile user who leaves the original location need only reissue the request near the new location to possibly obtain the data again, without keeping or re-establishing a connection to the original data source. NDN has no center and no hierarchical pattern, which makes it well suited to supporting unstructured MANETs. Applying the NDN model in a MANET yields what is called a named-data MANET (Named-data MANET), NDM for short.
In the basic NDN model, every node on a content's forwarding path caches that content; this is called on-path caching (on-path cache). Caching only on on-path nodes reduces the number of cached copies and avoids the traffic and administrative overhead of distributing cached packets separately. However, the storage space of MANET mobile nodes is generally very limited, so the cache space available across the whole network is also very limited, and NDN's cache-along-the-path mechanism imposes a heavy burden on nodes. A suitable cache policy must therefore be designed before NDM can have practical applications.
There is much research on cache algorithms for ICN/NDN in wired networks, but few cache algorithms specifically consider the mobile environment of NDM networks.
An NDM cache policy needs to reduce in-network data redundancy as far as possible, use the limited cache space for content that deserves caching, and place content on nodes where caching improves network performance. In addition, because node processing power is limited, the policy should also keep each node's cache-management overhead low by using the simplest possible methods. Based on this analysis, this document presents a cache algorithm that reduces the number of caching nodes on the path, disperses data as much as possible, and caches data at positions far from the data source. We call it the "Guandu" algorithm, after the strategy and deployment of Cao Cao in the Battle of Guandu: not spreading forces thinly to defend the whole Yellow River, but concentrating superior forces to hold the strategic pass, defending key points, waiting at ease for an exhausted opponent, and finally defeating a larger force with a smaller one. The algorithm aims to place the most useful content blocks on the most suitable nodes.
For any network using caching, the first question to solve is where to cache content. The basic NDN model caches content on every node a Data packet passes through; the benefit is piggybacked caching, achieving maximal caching with minimal traffic, but the cost may be excessive cache-space occupation, especially in NDM where mobile nodes' cache space is limited. To reduce cache redundancy in ICN networks, Ioannis Psaras et al. proposed a globally optimized approach: the nodes a Data packet passes through cache probabilistically according to the total cache space the whole path can support and each node's distance from the user, so nodes closer to the user cache with higher probability. However, this method requires knowing the path length, each node's position on the path, and the overall cache state of all nodes on the path; it is costly to implement, especially in mobile networks with fast topology changes, and is therefore unsuitable for NDM networks.
WAVE also selects some nodes on the reply path to cache data, with the upstream node deciding whether to cache based on request counts. Overall the algorithm appears to distribute data along the path, but in fact data are mostly cached around the data source: the closer a node is to the user, the less it caches. This strategy fits tree-like topologies better, because edge nodes serve fewer end users while intermediate nodes can serve many edge nodes, which improves cache utilization. For a mobile network with an irregular topology, however, it easily concentrates cached data around the data source, far from the users who need it.
Yuguang Zeng et al. proposed a simple interval-based caching strategy to reduce cache redundancy in NDN networks. The algorithm records a forwarding-gap number in the returned Data packet: each forwarding decrements the value by one, the node where the value reaches 0 caches the content, then resets the value and continues forwarding. The algorithm is very easy to implement and has very little overhead, and is worth borrowing from.
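The interval scheme just described can be sketched in a few lines. This is an illustrative reading only: the dict-based packet and cache structures are assumptions, not the cited authors' code.

```python
def on_data_forward(pkt, cache, gap_init):
    """One hop of the interval scheme: decrement the packet's gap counter;
    the node where it reaches 0 caches the content and resets the counter
    before forwarding. (`pkt` and `cache` as plain dicts are illustrative.)"""
    pkt["gap"] -= 1
    if pkt["gap"] <= 0:
        cache[pkt["name"]] = pkt["content"]  # this node caches the content
        pkt["gap"] = gap_init                # reset for downstream nodes
    return pkt
```

With a gap of 2, the content is cached at every second forwarding node while the packet travels toward the requester.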
Besides cache placement, which content to cache is the other question to consider. Data popularity is the factor considered most often, usually predicted from the request count for the data; but strategies differ in how they handle request counts. For example, one scheme has each node periodically collect request counts from its subtree and refuse to cache content whose count falls below a threshold (this algorithm targets ISP (Internet Service Provider) internal networks and requires each node to maintain a tree topology). WAVE instead uses an exponential function of a file's request count as the number of file sub-blocks it recommends downstream nodes to cache; nodes cache only data whose request count exceeds a threshold, to avoid caching data that will be requested only once.
The former scheme trades complex computation for reduced ISP external traffic, but it is quite restrictive to deploy, suits only ISP interiors, and is not appropriate for NDM networks with weak computing power, irregular topology, and fast change. WAVE counts per file and then caches per file chunk; this lacks generality in NDN, and the final data distribution is still concentrated with high redundancy, though its use of exponential growth to separate the cache density of popular and unpopular content is worth borrowing. Another scheme also counts the interfaces on which an Interest packet arrives and caches when this count exceeds a set threshold, on the reasoning that more parts of the network need the content; this is a meaningful and easily implemented approach, but a mobile network, being wireless, does not have the multiple-interface distinction.
Implementing a cache policy usually requires cooperating nodes to exchange relevant information, generally in one of two ways. One is to piggyback new message fields on request or Data packets; piggybacking adds very little traffic, but the drawback is that it is interwoven with the basic forwarding strategy and requires changing the base protocol, which is difficult once the base protocol is widely deployed. The other is to send separate coordination packets, which increases network traffic, for example by periodically exchanging cache digests; such algorithms therefore usually take measures to reduce traffic, such as using a Bloom filter to shrink the digest. Also because they add traffic, cooperative schemes are mostly confined to an autonomous domain: that is both easier to implement and, being internal traffic, acceptable to the ISP. These algorithms can usually reduce traffic leaving the domain, so ISPs have an incentive to support them.
Summary of the invention
The goal of the Guandu strategy is to reduce as far as possible the number of content copies cached on MANET mobile nodes and the computation and maintenance overhead of each node, while still preserving the fast response time the NDN model brings, so as to suit NDM mobile nodes whose storage and computing capabilities are both limited. The core idea of the Guandu strategy is: if a forwarding node is far enough from the content node, has storage space, the data may be requested again, and the data are not already cached nearby, then the forwarding node caches the data; otherwise it refuses to cache. This reduces data redundancy in the network and the occupation of mobile devices' storage, while still placing data as close to users as possible, reducing use of the network bandwidth and improving response speed.
Following the goal of the Guandu strategy, the nodes that cache data must first be selected. To keep cache maintenance overhead low and stay consistent with the NDN model, caching nodes are selected on the forwarding path: when data are forwarded to an intermediate node, that node decides for itself, according to circumstances, whether to cache the data while forwarding them.
A Guandu cache policy for a named-data mobile ad hoc network, characterized in that:
Nodes that cache data are selected: caching nodes are selected on the forwarding path; when data are forwarded to an intermediate node, that node decides for itself, according to circumstances, whether to cache the data while forwarding them; when deciding whether to cache, the forwarding node considers whether its own free storage space is sufficient; it may cache the data only when the ratio of the amount of data to be cached to the node's free storage space is less than 1/10;
A hop-count field is added to the Data packet; the hop-count field is incremented by one each time the Data packet is forwarded; the network radius is defined as the square root of the number of nodes the network can hold, and half the network radius is taken as the set distance; beyond the set distance, the node is considered far from the source node and may consider caching the data;
A cache-gap field is added to the Data packet; the responding node sets the cache-gap field in the Data packet to an initial value of 1/8 of the network radius; each time the Data packet is forwarded, the cache-gap field is decremented by 1; whenever the content is cached, the cache-gap field is reset to the initial value.
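The two packet fields above, together with the radius-derived constants the description gives for a 64-node network, might be sketched as follows. The `DataPacket` class and its field names are illustrative assumptions, not the patent's literal packet layout.

```python
import math
from dataclasses import dataclass

NODE_COUNT = 64                        # example network size from the description
RADIUS = math.isqrt(NODE_COUNT)        # network radius = sqrt(node count) = 8
DIST_THRESHOLD = RADIUS // 2           # set distance = half the radius = 4
GAP_INIT = max(1, RADIUS // 8)         # cache-gap initial value = radius / 8 = 1

@dataclass
class DataPacket:
    name: str
    content: bytes
    hops: int = 0              # hop-count field, +1 on every forwarding
    cache_gap: int = GAP_INIT  # cache-gap field, set by the responding node
```

A packet leaving the responding node thus starts with `hops = 0` and `cache_gap = 1` in the 64-node example.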
The concrete principle is explained as follows:
Because mobile devices in NDM have limited storage and every node is both an intermediate forwarding device and a terminal device, a forwarding node must consider whether its own free storage space is sufficient when deciding whether to cache. It may cache the data only when the ratio of the amount of data to be cached to the node's free storage space is below a set threshold, so that the operation of the node's own applications is not affected. This threshold is usually set to 1/10 of the node's free space.
The distance of a forwarding node from the content node is judged by the number of hops the Data packet has traversed. For this purpose a hop-count field is added to the Data packet, incremented by one each time the packet is forwarded. The network radius is defined as the square root of the number of nodes the network can hold, and half the network radius is taken as the set distance; for example, with 64 nodes the threshold is 4. Beyond the set distance, the node is considered far from the source node.
To reduce cache redundancy, an intermediate node should not cache data that a nearby node has already cached, so nodes need to know other nodes' caching situation. For this purpose a cache-gap field is added to the Data packet. The responding node sets the cache-gap field to an initial value of 1/8 of the network radius; with 64 nodes, the gap field is 1. Each time the Data packet is forwarded, the cache-gap field is decremented by 1; whenever the content is cached, the field is reset to the initial value.
Because cache space is limited, the content cached should be the content most likely to be needed again by other nodes. This is the popularity consideration, and the most direct cache policy selects the most popular content to cache. One way to realize this is to record the number of Interest packets a node receives for each content and cache the contents with the highest counts. But that requires maintaining a record per content, which is very expensive: since the name space in an NDM network is very large, the popularity table becomes very large, and lookup and maintenance costs grow accordingly.
Therefore, to reduce the time and space overhead on mobile nodes, we move the popularity consideration into cache replacement. A node runs the replacement algorithm when its remaining storage falls below a certain threshold. We adopt the simplest general-purpose algorithm, LRU, evicting the least recently used content; recently requested content, i.e. popular content, thus stays in the cache. A node can cache content the first time it receives it, so content that is not yet popular is also retained in the cache for a while rather than being rejected outright by a popularity check at caching time. Likewise avoiding popularity counters and calculations at replacement time keeps the time and space overhead of the implementation low.
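A minimal LRU content store of the simple, general-purpose kind the strategy adopts could look like this. It is a generic sketch; the class name and packet-count capacity are illustrative, not taken from the patent.

```python
from collections import OrderedDict

class LruContentStore:
    """LRU content store: a hit refreshes an entry's recency, and inserting
    beyond capacity evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, name):
        if name not in self._store:
            return None
        self._store.move_to_end(name)        # a hit makes the entry most recent
        return self._store[name]

    def put(self, name, content):
        if name in self._store:
            self._store.move_to_end(name)
        self._store[name] = content
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used
```

Because insertion itself counts as use, newly received content is always admitted and unpopular content simply ages out, matching the behavior described above.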
The detailed procedure of the Guandu cache policy is shown in Fig. 8. First, an intermediate node that receives a Data packet judges its distance from the data source node. When the distance from the packet's sending node (the packet's hop count) is less than the set distance, the node does not cache; it only updates the cache-gap number in the packet: if the gap number has reached 0, it is reset to the initial value; if it has not reached 0, it is decremented by 1. The node then increments the packet's hop count and forwards the Data packet.
When the distance is greater than the set distance, caching becomes possible, and the node considers both the cache-gap number and its own storage situation. When the gap number is greater than 0, a nearby node has already cached these data, so the node simply decrements the gap number, increments the hop count, and forwards the packet. If the gap number is 0, the node should consider caching the data; but if its "no-space flag" is 1, indicating no available storage, the node gives up caching, decrements the gap number further to -1, increments the hop count, and forwards the packet. Otherwise, if the node has storage space, it caches the data, resets the gap number to the initial value, increments the hop count, and forwards the Data packet.
When a node receives a Data packet whose cache-gap number is -1, some upstream node should have cached these data but could not for lack of space, so this node should cache them. The node restores the gap number to the initial value, increments the hop count, forwards the packet, and then caches the data. If this node's "no-space flag" is also 1, it runs the LRU cache-replacement algorithm to evict data and free space for them.
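The three cases above (near the source; far with gap > 0 or gap = 0; gap = -1) can be combined into a single forwarding step. This is a hedged sketch under assumed dict-based packet and node structures; the patent specifies only the logic, not this representation, and the deferred-node LRU eviction is elided here as a comment.

```python
def guandu_on_data(pkt, node, dist_threshold, gap_init):
    """One Guandu forwarding step. `pkt` has 'name', 'content', 'hops',
    'gap'; `node` has a 'cache' dict and a boolean 'no_space' flag (True
    when free space is insufficient). Returns True if this node cached."""
    cached = False
    if pkt["hops"] < dist_threshold:
        # Near the source: never cache, just maintain the gap counter.
        pkt["gap"] = gap_init if pkt["gap"] <= 0 else pkt["gap"] - 1
    elif pkt["gap"] > 0:
        # A nearby upstream node already cached this content.
        pkt["gap"] -= 1
    elif pkt["gap"] == 0:
        # This node is the designated cacher, space permitting.
        if node["no_space"]:
            pkt["gap"] = -1                  # defer to the next capable node
        else:
            node["cache"][pkt["name"]] = pkt["content"]
            cached = True
            pkt["gap"] = gap_init
    else:  # gap == -1: an upstream node deferred for lack of space
        if not node["no_space"]:
            node["cache"][pkt["name"]] = pkt["content"]
            cached = True
        # (a node that also lacks space would run LRU replacement here)
        pkt["gap"] = gap_init
    pkt["hops"] += 1                          # then forward the packet
    return cached
```

For example, a packet arriving with 4 hops and gap 0 at a node with free space, under a set distance of 4, is cached, its gap reset, and its hop count incremented before forwarding.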
A node's "no-space flag" is set to 0 when the amount of data to be cached is less than the set threshold fraction (1/10) of its free space, and to 1 otherwise.
Experiments show that the Guandu algorithm can greatly reduce the cache-space occupation of mobile nodes without adding much network traffic, while providing good response delay.
Description of the drawings
Fig. 1 is the comparison of per-node cache-space occupation.
Fig. 2 is the comparison of overall node cache-space occupation.
Fig. 3 is the comparison of network data traffic (counting only Data packets, not request packets).
Fig. 4 is the comparison of whole-network traffic volume.
Fig. 5 is the comparison of Data-packet response delay.
Fig. 6 is the comparison of total response time.
Fig. 7 is the success rate, i.e. the ratio of request packets for which the requesting node successfully receives a reply Data packet.
Fig. 8 is the flow diagram of the present invention.
Embodiment
To assess the effect of the algorithm, we ran simulation experiments in the ndnSIM environment. ndnSIM is a module extension of the network simulator NS-3 that implements the NDN traffic model. Simulations ran on an Ubuntu 12.04 system with libboost version 1.48.
Simulation and Evaluation
Table 1 lists the parameters of the simulation environment. An 802.11g wireless ad hoc network is simulated in a 1000 m × 1000 m area, with the network data mode ErpOfdmRate54Mbps. There are 64 nodes with randomly assigned initial positions. Nodes move under a random mobility model with a maximum speed of 30 m/s and a maximum pause time of 4 s. Each experiment runs for 100 s.
Communication uses ndnSIM's server–consumer model: 10% of the nodes are randomly selected as service nodes, and 50% are randomly selected as consumers that send requests. The Interest messages users send follow a Poisson arrival process at an average of 2000 Interest packets per second. The contents requested by consumers follow a Zipf distribution. Each Data packet is 512 B. Each node's cache-space limit is set to 60000 Data packets, i.e. a cache of about 30 MB. The CS table's cache-replacement policy is LRU (Least Recently Used).
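The Zipf-distributed consumer workload described above can be reproduced with a simple generator. The Zipf exponent `s` and the function names are assumptions, since the text states only that requests follow a Zipf distribution.

```python
import random

def zipf_weights(n_contents, s=1.0):
    """Zipf popularity weights for content ranks 1..n (exponent s assumed)."""
    return [1.0 / (rank ** s) for rank in range(1, n_contents + 1)]

def draw_requests(n_contents, n_requests, s=1.0, seed=42):
    """Sample the content ranks requested by consumer Interest packets."""
    rng = random.Random(seed)
    w = zipf_weights(n_contents, s)
    return rng.choices(range(1, n_contents + 1), weights=w, k=n_requests)
```

Under such a workload, rank-1 content is requested far more often than low-ranked content, which is what makes caching popular content worthwhile.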
The cache policies are assessed here from four aspects: per-node cache-space occupation, network traffic, Data-packet response delay, and the success rate of requests obtaining content. The comparison methods are the NDN-based global on-path caching strategy and the interval caching strategy, denoted NDN and Interval respectively. The cache-gap number of both Interval and the Guandu strategy is set to 1, i.e. every other node is considered for caching data, and their cache-space limits are set to the whole cache. The Guandu strategy's distance threshold is set to 3, and each node's cache-space limit under the Guandu algorithm is set to 1/10 of that of the other two algorithms, i.e. 6000 Data packets, 3 MB.
Fig. 1 is the comparison of per-node cache-space occupation. The figure shows the Guandu strategy occupies the least space, Interval is in the middle, and the NDN algorithm occupies the most. This is fully consistent with the algorithms' behavior: NDN caches data on every traversed node; Interval reduces occupation somewhat through interval caching; and Guandu applies interval caching only at places far from the data source, further reducing cache-space occupation.
Fig. 2 is the comparison of overall node cache-space occupation. The figure shows the Guandu algorithm's space occupation is roughly 1% of the NDN algorithm's, while the Interval algorithm's is roughly 18%.
Fig. 3 is the comparison of network data traffic (counting only Data packets, not request packets). Because NDN caches the most content in the network, it can answer requesters over shorter paths, so the total traffic it generates in the network is the lowest. The figure also shows that although Guandu caches less, the data traffic it generates is close to that of the Interval method.
Fig. 4 is the comparison of whole-network traffic volume. The Guandu algorithm's traffic is even slightly lower than the Interval algorithm's, though very close.
Fig. 5 is the comparison of Data-packet response delay, defined as the time from sending a request Data packet until the responding Data packet is successfully received. The figure shows that NDN, with data cached on all nodes, can answer requests with lower delay; yet despite Guandu's much smaller cache space, its delay is very close to Interval's, as the figure shows.
Fig. 6 is the comparison of total response time. The response times of the Guandu and Interval algorithms are very close.
Fig. 7 is the success rate, i.e. the ratio of request packets for which the requesting node successfully receives a reply Data packet. The success rates of the Guandu and Interval algorithms are comparable, while the NDN global caching algorithm's is, counterintuitively, the lowest. Although every node on the forwarding path caches data in NDN, the nodes' cached contents are very similar, so if one node lacks the content an Interest packet asks for, other nodes are also unlikely to answer it. In other words, although many nodes cache content, the cache-space utilization of the whole network is low, because most nodes cache identical content. Given the huge content volume, cache space runs short and some content is never cached; requests for that content can only be answered by forwarding to distant nodes or even the source node, and long-distance forwarding raises packet loss, lowering the success rate. Guandu and Interval, by contrast, scatter data across nodes whose cached contents differ, and so answer requests better.

Claims (1)

1. the port owned by the government cache policy of named data mobile ad-hoc network, is characterized in that:
Select data cached node: cache node is selected in head-end site; When data retransmission is to certain intermediate node, this node according to circumstances determines oneself whether these data of buffer memory while forwarding data; Whether forward node will consider oneself idle storage space when determining data cached is enough; The ratio of the idle storage space when buffer memory desired data amount and node is only had to be less than 1/10, could these data of buffer memory;
A hop count field is increased in content bag; This hop count field adds one when content bag is forwarded at every turn; The evolution of the nodes that regulation network can hold is network radius, and the half of getting network radius is here setpoint distance, exceedes setpoint distance, then think that this nodal distance source node is comparatively far away, can consider these data of buffer memory;
A buffer memory interval field is increased in content bag; It is initial value that response node arranges buffer memory interval field in content bag, and initial value is 1/8 of network radius; When content bag is forwarded at every turn, buffer memory interval field successively decreases 1; If content is buffered, buffer memory interval field is re-set as initial value;
Concrete steps are as follows:
First, an intermediate node that receives the Data packet determines its distance to the data source node. When its distance to the sending node of the Data packet is less than the set distance, it does not cache the data and only updates the cache-interval count in the Data packet: if the count has reached 0, it is reset to the initial value; otherwise it is decremented by 1. The node then increments the hop count of the Data packet and forwards it.
When the distance is greater than the set distance, the node considers both the cache-interval count and its own storage situation. If the count is greater than 0, a nearer node has already cached this data, so the node simply decrements the count, increments the hop count, and forwards the Data packet. If the count is 0, the node should cache the data; however, if its no-space flag is 1, the node has no available storage, so it skips caching, decrements the count further to -1, increments the hop count, and forwards the Data packet. Otherwise, if the node does have storage available, it caches the data, resets the cache-interval count to the initial value, increments the hop count, and forwards the Data packet.
When a node receives a Data packet whose cache-interval count is -1, this indicates that an upstream node should have cached the data but could not for lack of space, so this node should cache it. The node restores the cache-interval count to the initial value, increments the hop count, and forwards the Data packet, then caches the data. If this node's no-space flag is also 1, meaning it too has no available storage, it runs the LRU cache-replacement algorithm to evict entries and free space for the data.
The no-space flag is set to 0 when the ratio of the amount of data the node intends to cache to the node's free storage space is less than 1/10; otherwise it is set to 1.
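The per-node processing in the steps above can be sketched in Python. This is a minimal illustrative sketch, not the patent's implementation: the constants `NETWORK_RADIUS` and `MIN_DISTANCE`, the packet field names, and the `Node` class are assumed names for demonstration, and the LRU content store is modeled with `collections.OrderedDict`.

```python
# Illustrative sketch of the per-node caching decision; names and
# constants are assumptions, not identifiers from the patent.
from collections import OrderedDict

NETWORK_RADIUS = 16                      # assumed network radius in hops
INITIAL_INTERVAL = NETWORK_RADIUS // 8   # initial cache-interval value (1/8 of radius)
MIN_DISTANCE = 2                         # assumed "set distance" to the sending node

class Node:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.store = OrderedDict()       # LRU order: least recently used first

    def no_space_flag(self, data_size):
        """0 when the data to cache is under 1/10 of free space, else 1."""
        free = self.capacity - self.used
        return 0 if free > 0 and data_size / free < 1 / 10 else 1

    def cache(self, name, data_size):
        # LRU replacement: evict oldest entries until the new data fits.
        while self.capacity - self.used < data_size and self.store:
            _, evicted_size = self.store.popitem(last=False)
            self.used -= evicted_size
        self.store[name] = data_size
        self.used += data_size

    def on_data(self, pkt, dist_to_sender):
        """Process a received Data packet; return True if it was cached here."""
        cached = False
        if pkt["cache_interval"] == -1:
            # An upstream node should have cached but had no space: cache here,
            # using LRU eviction if this node is also short on space.
            pkt["cache_interval"] = INITIAL_INTERVAL
            self.cache(pkt["name"], pkt["size"])
            cached = True
        elif dist_to_sender < MIN_DISTANCE:
            # Too close to the sender: never cache, only maintain the counter.
            if pkt["cache_interval"] == 0:
                pkt["cache_interval"] = INITIAL_INTERVAL
            else:
                pkt["cache_interval"] -= 1
        else:
            if pkt["cache_interval"] > 0:
                # A nearby node already cached this content: skip caching.
                pkt["cache_interval"] -= 1
            elif self.no_space_flag(pkt["size"]) == 1:
                # Should cache, but no room: mark the packet with -1 so a
                # downstream node caches instead.
                pkt["cache_interval"] = -1
            else:
                self.cache(pkt["name"], pkt["size"])
                pkt["cache_interval"] = INITIAL_INTERVAL
                cached = True
        pkt["hop_count"] += 1            # always increment before forwarding
        return cached
```

The -1 marker is the interesting design choice: instead of silently dropping a caching opportunity when a node is full, the obligation is handed to the next hop, which may fall back on LRU eviction.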
CN201510674595.3A 2015-10-18 2015-10-18 Guandu caching method for named-data mobile ad hoc network Expired - Fee Related CN105357278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510674595.3A CN105357278B (en) 2015-10-18 2015-10-18 Guandu caching method for named-data mobile ad hoc network

Publications (2)

Publication Number Publication Date
CN105357278A true CN105357278A (en) 2016-02-24
CN105357278B CN105357278B (en) 2018-06-19

Family

ID=55333137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510674595.3A Expired - Fee Related CN105357278B (en) 2015-10-18 2015-10-18 Guandu caching method for named-data mobile ad hoc network

Country Status (1)

Country Link
CN (1) CN105357278B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106936909A (en) * 2017-03-10 2017-07-07 北京工业大学 A traffic information publishing and retrieval method based on named data networking
CN108156154A (en) * 2017-12-25 2018-06-12 北京工业大学 Access control method based on encryption and Bloom filter in named data networking
CN108616923A (en) * 2018-02-27 2018-10-02 南京邮电大学 A cooperative caching system based on mobile ad hoc networks
CN109257443A (en) * 2018-11-09 2019-01-22 长安大学 An adaptive caching strategy for named data networking oriented to the Internet of Vehicles
CN111314224A (en) * 2020-02-13 2020-06-19 中国科学院计算技术研究所 Network caching method for named data
CN114039932A (en) * 2021-11-08 2022-02-11 北京工业大学 Method for reducing redundant data packet transmission in a named-data MANET

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102571974A (en) * 2012-02-02 2012-07-11 清华大学 Data redundancy eliminating method of distributed data center
CN103501315A (en) * 2013-09-06 2014-01-08 西安交通大学 Cache method based on relative content aggregation in content-oriented network
CN104753797A (en) * 2015-04-09 2015-07-01 清华大学深圳研究生院 Content-centric network dynamic routing method based on selective caching
CN104901980A (en) * 2014-03-05 2015-09-09 北京工业大学 Popularity-based equilibrium distribution caching method for named data networking


Also Published As

Publication number Publication date
CN105357278B (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN105357278A (en) Guandu cache strategy for named-data mobile ad hoc network
Kumar et al. Peer-to-peer cooperative caching for data dissemination in urban vehicular communications
Jin et al. Information-centric mobile caching network frameworks and caching optimization: a survey
Jia et al. Effective information transmission based on socialization nodes in opportunistic networks
Pan et al. A comprehensive-integrated buffer management strategy for opportunistic networks
Chand et al. Cooperative caching in mobile ad hoc networks based on data utility
Cheng et al. Adaptive lookup protocol for two-tier VANET/P2P information retrieval services
Borgia et al. Making opportunistic networks in IoT environments CCN-ready: A performance evaluation of the MobCCN protocol
Chen et al. Leveraging social networks for p2p content-based file sharing in mobile ad hoc networks
Majd et al. Split-Cache: A holistic caching framework for improved network performance in wireless ad hoc networks
Ishimaru et al. DTN-based delivery of word-of-mouth information with priority and deadline
Lu et al. Geographic information and node selfish-based routing algorithm for delay tolerant networks
Quan et al. Content retrieval model for information-center MANETs: 2-dimensional case
Gu et al. Latency analysis for thrown box based message dissemination
CN103532865A (en) Congestion control method based on socially aware in delay tolerant network
Joseph et al. Energy efficient data retrieval and caching in mobile peer-to-peer networks
Lai et al. Data gathering and offloading in delay tolerant mobile networks
Wu et al. Information transmission probability and cache management method in opportunistic networks
Zhang et al. Congestion control strategy for opportunistic network based on message values
Xuyan et al. Cellular traffic offloading utilizing set-cover based caching in mobile social networks
Jayasooriya et al. Decentralized peer to peer web caching for Mobile Ad Hoc Networks (iCache)
Wang et al. Edge caching via content offloading in heterogeneous mobile opportunistic networks
CN105743975A (en) Cache placing method and system based on data access distribution
Teixeira et al. Increasing Network Resiliency via Data-Centric Offloading
CN107634906A (en) A kind of multiple attribute decision making (MADM) VDTN method for routing based on sub-clustering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180619

Termination date: 20201018