CN108282528B - Data caching method and device - Google Patents


Info

Publication number
CN108282528B
Authority
CN
China
Prior art keywords
node
cache
caching
node group
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810063151.XA
Other languages
Chinese (zh)
Other versions
CN108282528A (en)
Inventor
董爱强
颜拥
于卓
刘周斌
郝艳亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Beijing China Power Information Technology Co Ltd
Original Assignee
State Grid Corp of China SGCC
State Grid Information and Telecommunication Co Ltd
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Beijing China Power Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd, Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd, Beijing China Power Information Technology Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201810063151.XA priority Critical patent/CN108282528B/en
Publication of CN108282528A publication Critical patent/CN108282528A/en
Application granted granted Critical
Publication of CN108282528B publication Critical patent/CN108282528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the invention provide a data caching method and a data caching apparatus. For each network node in a network node set: the network node is determined as a content node, all other network nodes in the set are determined as cache nodes, and, for each cache node, the content node and that cache node are determined as a node group. For each node group: the unit profit and the traffic of the node group are obtained, and the unit profit, the traffic, and a caching factor are multiplied to obtain the caching gain of the node group. The caching factors respectively corresponding to the node groups are determined such that, on the premise of meeting preset constraint conditions, the sum of the caching gains of the node groups is maximized. Each caching factor is then stored in the corresponding cache node, so that the cache node caches at least part of the content in the content node. The invention thus provides a node caching scheme that maximizes caching gain and reduces network load.

Description

Data caching method and device
Technical Field
The present invention relates to the field of data caching technologies, and in particular, to a data caching method and apparatus.
Background
With the development of the internet, the amount of data carried in networks keeps growing.
A node in the network often needs to acquire data from another node (i.e., the source node storing the data), and when the two nodes are far apart, acquiring the data consumes considerable time and system resources. Data caching techniques evolved to solve this problem: by caching data at an intermediate node, a requesting node can obtain the data from a nearby node that has cached it, which, compared with acquiring the data from the source node, brings considerable benefit.
However, the number of nodes in a network is large, each node may both publish and request content, and the distances and access paths between nodes differ from network to network. How to control which nodes cache which content so as to maximize the benefit of data caching remains a technical problem to be solved in the field.
Disclosure of Invention
The embodiment of the invention aims to provide a data caching method and device so as to maximize the benefit of data caching. The specific technical scheme is as follows:
a data caching method, comprising:
for each network node in the set of network nodes: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node;
for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group;
for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group;
determining a cache factor corresponding to each node group when the sum of the cache gains of each node group is maximum on the premise of meeting a preset constraint condition, wherein the cache factor is 0 or 1;
and storing each caching factor into a caching node in the corresponding node group, so that the caching node in the node group caches at least part of contents in the content node in the node group according to the stored caching factor.
Optionally, the storing each caching factor into a caching node in a corresponding node group, so that the caching node in the node group caches at least part of contents in a content node in the node group according to the stored caching factor, includes:
and storing each caching factor into a caching node in a corresponding node group, so that the caching nodes in the node groups cache at least part of contents in the content nodes in the node groups according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor of 1 is a first probability, and the storage probability corresponding to the caching factor of 0 is a second probability, and the first probability is greater than the second probability.
Optionally, the storing each caching factor into a caching node in a corresponding node group, so that the caching node in the node group caches at least part of contents in the content node in the node group according to a storage probability corresponding to the stored caching factor, includes:
storing each caching factor into a caching node in the corresponding node group, so that each time the caching node in the node group receives at least part of the content in the content node in the node group, it generates a random number between 0 and 1 and judges whether the random number is not greater than the storage probability corresponding to the stored caching factor; if so, the received content is cached; otherwise, it is not cached.
Optionally, the content node is v_i, the cache node is v_j, the network node set is V, the unit profit is d(j, i), the traffic is w(i, j), and the caching factor is x(i, j).
The determining of the cache factors corresponding to each node group when the sum of the cache gains of each node group is maximum on the premise that the preset constraint condition is met includes:
by the formula

max Σ_{v_i ∈ V} Σ_{v_j ∈ V, v_j ≠ v_i} d(j, i) · w(i, j) · x(i, j)

determining the caching factors x(i, j) respectively corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:

x(i, j) ∈ {0, 1} for every node group (v_i, v_j);

the second constraint condition is as follows:

Σ_{f_i ∈ F} x(i, j) · C · k ≤ c_j for every cache node v_j;

wherein f_i is a content item cached by a cache node, F is the set of content published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the cache proportion, and c_j is the buffer space of node v_j.
Optionally, the determining the number of other nodes accessing the content node in the node group through the cache node in the node group includes:
obtaining a shortest path tree of content nodes in the node group;
determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree;
the number of the nodes is determined as the number of other nodes accessing the content node in the node group through the cache node in the node group.
A data caching apparatus, comprising: a node determination unit, a hop count determination unit, a revenue obtaining unit, a cache factor determination unit, and a cache factor storage unit,
the node determining unit is configured to, for each network node in the network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node;
the hop count determination unit is configured to, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group;
the revenue obtaining unit is configured to, for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group;
the cache factor determination unit is configured to determine a cache factor corresponding to each node group when the sum of the cache gains of each node group is maximum on the premise that a preset constraint condition is met, where the cache factor is 0 or 1;
the cache factor storage unit is configured to store each cache factor into a cache node in a corresponding node group, so that the cache node in the node group caches at least part of contents in the content node in the node group according to the stored cache factor.
Optionally, the cache factor storage unit is specifically configured to:
and storing each caching factor into a caching node in a corresponding node group, so that the caching nodes in the node groups cache at least part of contents in the content nodes in the node groups according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor of 1 is a first probability, and the storage probability corresponding to the caching factor of 0 is a second probability, and the first probability is greater than the second probability.
Optionally, the cache factor storage unit is specifically configured to: store each caching factor into a caching node in the corresponding node group, so that each time the caching node in the node group receives at least part of the content in the content node in the node group, it generates a random number between 0 and 1 and judges whether the random number is not greater than the storage probability corresponding to the stored caching factor; if so, the received content is cached; otherwise, it is not cached.
Optionally, the content node is v_i, the cache node is v_j, the network node set is V, the unit profit is d(j, i), the traffic is w(i, j), and the caching factor is x(i, j).
The cache factor determination unit is specifically configured to:
by the formula

max Σ_{v_i ∈ V} Σ_{v_j ∈ V, v_j ≠ v_i} d(j, i) · w(i, j) · x(i, j)

determining the caching factors x(i, j) respectively corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:

x(i, j) ∈ {0, 1} for every node group (v_i, v_j);

the second constraint condition is as follows:

Σ_{f_i ∈ F} x(i, j) · C · k ≤ c_j for every cache node v_j;

wherein f_i is a content item cached by a cache node, F is the set of content published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the cache proportion, and c_j is the buffer space of node v_j.
Optionally, the hop count determining unit includes: a unit profit determination subunit and a flow rate determination subunit,
the unit profit determination subunit is configured to, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, and determining the obtained shortest hop count as the unit income of the node group;
the traffic determining subunit is configured to, for each node group: obtaining a shortest path tree of content nodes in the node group; determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree; and determining the number of the nodes as the number of other nodes accessing the content node in the node group through the cache nodes in the node group, and taking the determined number as the flow of the node group.
The data caching method and device provided by the embodiment of the invention can be used for each network node in the network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node; for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group; for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group; determining caching factors corresponding to each node group respectively when the sum of caching gains of each node group is maximum on the premise of meeting a preset constraint condition; and storing each caching factor into a caching node in the corresponding node group, so that the caching node in the node group caches at least part of contents in the content node in the node group according to the stored caching factor. The invention provides a node caching scheme under the condition of maximum caching income, which can effectively improve the network utilization rate and reduce the network burden.
Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a data caching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a unit of revenue provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data caching apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a data caching method provided in an embodiment of the present invention may include:
s100, for each network node in the network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node;
wherein the content node may be v_i, the cache node may be v_j, and the set of network nodes may be V.
Suppose the network node set includes three nodes, node v_1, v_2, and v_3. For node v_1, it is determined as the content node, and nodes v_2 and v_3 are determined as cache nodes, so two node groups can be obtained: (v_1, v_2) and (v_1, v_3).
Similarly, for node v_2, it is determined as the content node, and nodes v_1 and v_3 are determined as cache nodes, yielding the node groups (v_2, v_1) and (v_2, v_3); for node v_3, it is determined as the content node, and nodes v_1 and v_2 are determined as cache nodes, yielding the node groups (v_3, v_1) and (v_3, v_2).
In summary, six node groups can be obtained in the above example.
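The enumeration above amounts to taking every ordered (content node, cache node) pair from the node set; a minimal sketch, with illustrative node names:

```python
from itertools import permutations

def node_groups(nodes):
    """Every ordered (content node, cache node) pair in the node set."""
    return list(permutations(nodes, 2))

# Three nodes yield the six node groups listed in the example above.
groups = node_groups(["v1", "v2", "v3"])
```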
S200, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group;
The unit profit may be d(j, i), and the traffic may be w(i, j).
The following illustrates the unit profit. As shown in FIG. 2, node v_i is the source node storing the first content, node v_j is a node that has cached the first content, and node v_1 is a node that needs to obtain the first content. Node v_j is on the shortest path from node v_1 to node v_i. When node v_j has not cached the first content, node v_1 needs to traverse the distance from node v_1 to node v_i. When node v_j has cached the first content, node v_1 only needs to traverse the distance from node v_1 to node v_j. It can thus be seen that, when node v_j caches the first content and lies on the shortest path from another node to node v_i, every such other node that obtains the first content through node v_j saves the distance from node v_j to node v_i, and this saved distance is the unit profit.
The determining the number of other nodes accessing the content node in the node group through the cache node in the node group may include:
obtaining a shortest path tree of content nodes in the node group;
determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree;
the number of the nodes is determined as the number of other nodes accessing the content node in the node group through the cache node in the node group.
Specifically, the invention can obtain the shortest paths from a given node to every other node based on Dijkstra's algorithm, and exchange link-state routing information through Open Shortest Path First (OSPF) messages, thereby obtaining the shortest path tree rooted at that node.
It can be understood that, in the shortest path tree, every node in the subtree rooted at a given child node communicates with the root of the shortest path tree through that child node; therefore, the number of nodes in that subtree is the number of other nodes accessing the root of the shortest path tree through the child node. Each of these nodes saves the unit profit when accessing the root through the child node, so the invention can multiply the number of these nodes by the unit profit to obtain the gain brought by the child node caching the content of the root of the shortest path tree, and determine from this gain whether the child node needs to cache that content.
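The computation described above can be sketched as follows, using breadth-first search in place of Dijkstra's algorithm (valid here because the unit profit is a hop count, so every edge costs 1); the graph representation and function names are illustrative:

```python
from collections import deque

def shortest_path_tree(adj, root):
    """Hop-count shortest-path tree rooted at `root`.

    Returns (parent, dist) maps. A minimal stand-in for the
    Dijkstra/OSPF computation described in the text.
    """
    parent, dist = {root: None}, {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                parent[v], dist[v] = u, dist[u] + 1
                queue.append(v)
    return parent, dist

def subtree_size(parent, node):
    """Number of nodes in the subtree rooted at `node` in the tree given
    by `parent`: the traffic w(i, j) when `node` is the cache node."""
    size = 0
    for n in parent:
        m = n
        while m is not None:
            if m == node:
                size += 1
                break
            m = parent[m]
    return size
```

Here `dist[cache_node]` plays the role of the unit profit d(j, i), and `subtree_size(parent, cache_node)` the role of the traffic w(i, j).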
S300, for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group;
wherein the caching factor may be denoted x(i, j), and the caching factor may be 0 or 1.
S400, determining caching factors corresponding to each node group when the sum of caching gains of each node group is maximum on the premise of meeting a preset constraint condition, wherein the caching factors are 0 or 1;
specifically, step S400 may specifically include:
by the formula

max Σ_{v_i ∈ V} Σ_{v_j ∈ V, v_j ≠ v_i} d(j, i) · w(i, j) · x(i, j)

determining the caching factors x(i, j) respectively corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:

x(i, j) ∈ {0, 1} for every node group (v_i, v_j);

the second constraint condition is as follows:

Σ_{f_i ∈ F} x(i, j) · C · k ≤ c_j for every cache node v_j;

wherein f_i is a content item cached by a cache node, F is the set of content published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the cache proportion, and c_j is the buffer space of node v_j.
Wherein k may be greater than 0.6 and less than 0.8.
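Under these definitions, a toy version of the 0-1 selection can be sketched by exhaustive search; a real deployment would likely use an integer-programming solver or a heuristic, and the gains, costs, and capacity below are illustrative placeholders:

```python
from itertools import product

def best_cache_factors(gains, costs, capacity):
    """Pick 0/1 caching factors maximizing total caching gain under a
    single capacity constraint.

    gains[g] stands for d(j, i) * w(i, j) of node group g, costs[g] for
    the buffer space its cached content would occupy (C * k in the
    second constraint), and capacity for the buffer space c_j.
    """
    best_gain, best_x = -1, None
    for x in product((0, 1), repeat=len(gains)):
        if sum(xi * ci for xi, ci in zip(x, costs)) <= capacity:
            gain = sum(xi * gi for xi, gi in zip(x, gains))
            if gain > best_gain:
                best_gain, best_x = gain, x
    return best_x, best_gain

x, total = best_cache_factors([6, 10, 4], [5, 5, 5], 10)
# → ((1, 1, 0), 16): the two highest-gain groups fit within the capacity.
```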
S500, storing each caching factor into a caching node in the corresponding node group, so that the caching node in the node group caches at least part of contents in the content node in the node group according to the stored caching factor.
In practical applications, step S500 may further include, for each node group: correspondingly storing the identification information and the caching factor of the content node in the node group in the cache node in the node group. Thus, when the cache node receives content, it can judge whether the identification information of the content sender is the same as at least one piece of the stored identification information, and if so, cache the received content according to the caching factor corresponding to that identification information.
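The identification-based lookup just described can be sketched as follows; function and variable names are assumptions, and the sketch shows only the simple deterministic case where a stored caching factor of 1 means cache:

```python
def on_content_received(factor_store, sender_id, content, cache):
    """Consult the stored (content-node id -> caching factor) map when
    content arrives, and cache it only for a stored factor of 1."""
    if factor_store.get(sender_id) == 1:
        cache[sender_id] = content
    return cache
```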
Wherein, step S500 may specifically include:
and storing each caching factor into a caching node in a corresponding node group, so that the caching nodes in the node groups cache at least part of contents in the content nodes in the node groups according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor of 1 is a first probability, and the storage probability corresponding to the caching factor of 0 is a second probability, and the first probability is greater than the second probability.
Specifically, the storing each caching factor into a caching node in a corresponding node group, so that the caching node in the node group caches at least part of contents in a content node in the node group according to a storage probability corresponding to the stored caching factor, may include:
storing each caching factor into a caching node in the corresponding node group, so that each time the caching node in the node group receives at least part of the content in the content node in the node group, it generates a random number between 0 and 1 and judges whether the random number is not greater than the storage probability corresponding to the stored caching factor; if so, the received content is cached; otherwise, it is not cached.
Thus, the more times the content in the content node in the node group is transferred to the cache node, the greater the likelihood that the cache node will cache the content in the content node.
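The probabilistic storage rule can be sketched as follows; the two probability values are placeholders, since the description only requires the first probability to exceed the second:

```python
import random

def maybe_cache(caching_factor, first_probability=0.9,
                second_probability=0.1, rng=random.random):
    """Cache the received content iff a fresh random number in [0, 1]
    does not exceed the storage probability for the stored factor."""
    probability = first_probability if caching_factor == 1 else second_probability
    return rng() <= probability
```

Passing `rng` explicitly makes the decision testable with a fixed value in place of a true random draw.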
The data caching method provided by the embodiment of the invention comprises the following steps of for each network node in a network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node; for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group; for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group; determining caching factors corresponding to each node group respectively when the sum of caching gains of each node group is maximum on the premise of meeting a preset constraint condition; and storing each caching factor into a caching node in the corresponding node group, so that the caching node in the node group caches at least part of contents in the content node in the node group according to the stored caching factor. The invention provides a node caching scheme under the condition of maximum caching income, which can effectively improve the network utilization rate and reduce the network burden.
Corresponding to the embodiment of the method, the invention also provides a data caching device.
As shown in fig. 3, a data caching apparatus provided in an embodiment of the present invention may include: a node determination unit 100, a hop count determination unit 200, a profit obtainment unit 300, a caching factor determination unit 400, and a caching factor storage unit 500,
the node determining unit 100 is configured to, for each network node in the network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node;
wherein the content node may be v_i, the cache node may be v_j, and the set of network nodes may be V.
The hop count determining unit 200 is configured to, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit profit of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group;
The unit profit may be d(j, i), and the traffic may be w(i, j).
Wherein, the hop count determining unit 200 may include: a unit profit determination subunit and a flow rate determination subunit,
the unit profit determination subunit is configured to, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, and determining the obtained shortest hop count as the unit income of the node group;
the traffic determining subunit is configured to, for each node group: obtaining a shortest path tree of content nodes in the node group; determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree; and determining the number of the nodes as the number of other nodes accessing the content node in the node group through the cache nodes in the node group, and taking the determined number as the flow of the node group.
Specifically, the invention can obtain the shortest paths from a given node to every other node based on Dijkstra's algorithm, and exchange link-state routing information through Open Shortest Path First (OSPF) messages, thereby obtaining the shortest path tree rooted at that node.
It can be understood that, in the shortest path tree, every node in the subtree rooted at a given child node communicates with the root of the shortest path tree through that child node; therefore, the number of nodes in that subtree is the number of other nodes accessing the root of the shortest path tree through the child node. Each of these nodes saves the unit profit when accessing the root through the child node, so the invention can multiply the number of these nodes by the unit profit to obtain the gain brought by the child node caching the content of the root of the shortest path tree, and determine from this gain whether the child node needs to cache that content.
The revenue obtaining unit 300 is configured to, for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group;
wherein the caching factor may be denoted x(i, j), and the caching factor may be 0 or 1.
The cache factor determining unit 400 is configured to determine a cache factor corresponding to each node group when the sum of cache gains of each node group is maximum on the premise that a preset constraint condition is met, where the cache factor is 0 or 1;
wherein the content node may be v_i, the cache node may be v_j, the set of network nodes may be V, the unit profit may be d(j, i), the traffic may be w(i, j), and the caching factor may be x(i, j).
The caching factor determining unit 400 may be specifically configured to:
by the formula

max Σ_{v_i ∈ V} Σ_{v_j ∈ V, v_j ≠ v_i} d(j, i) · w(i, j) · x(i, j)

determining the caching factors x(i, j) respectively corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:

x(i, j) ∈ {0, 1} for every node group (v_i, v_j);

the second constraint condition is as follows:

Σ_{f_i ∈ F} x(i, j) · C · k ≤ c_j for every cache node v_j;

wherein f_i is a content item cached by a cache node, F is the set of content published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the cache proportion, and c_j is the buffer space of node v_j.
The caching factor storage unit 500 is configured to store each caching factor into a caching node in a corresponding node group, so that the caching node in the node group caches at least part of contents in a content node in the node group according to the stored caching factor.
In practical applications, the caching factor storage unit 500 may further store, in the cache node of each node group, the identification information of the content node of that group in correspondence with the caching factor. When the cache node receives content, it can then judge whether the identification information of the content sender matches any stored identification information and, if so, cache the received content according to the caching factor corresponding to that identification information.
The cache factor storage unit 500 may be specifically configured to:
and storing each caching factor into a caching node in a corresponding node group, so that the caching nodes in the node groups cache at least part of contents in the content nodes in the node groups according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor of 1 is a first probability, and the storage probability corresponding to the caching factor of 0 is a second probability, and the first probability is greater than the second probability.
Further, the cache factor storage unit 500 may be specifically configured to: storing each caching factor into a caching node in a corresponding node group, so that each time the caching node in the node group receives at least part of contents in the content node in the node group, a random number within 0-1 is generated and whether the random number is not greater than a storage probability corresponding to the stored caching factor is judged, if so, the received contents are cached, otherwise, the caching is not performed.
Thus, the more times the content in the content node in the node group is transferred to the cache node, the greater the likelihood that the cache node will cache the content in the content node.
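The probabilistic caching rule described above can be sketched as follows. The function name `maybe_cache`, the concrete probability values, and the set-based store are illustrative assumptions, not from the patent; the patent only requires that the probability for factor 1 exceed the probability for factor 0.

```python
import random

def maybe_cache(store, content_id, cache_factor,
                p_hi=0.9, p_lo=0.1, rng=random.random):
    """Draw a uniform random number in [0, 1] and cache the content when it
    does not exceed the storage probability tied to the caching factor.

    store:        set of cached content identifiers
    cache_factor: 0 or 1, as stored by the caching factor storage unit
    rng:          injectable random source, to keep the sketch testable
    """
    prob = p_hi if cache_factor == 1 else p_lo
    if rng() <= prob:
        store.add(content_id)
        return True
    return False
```

Because the decision is re-drawn on every arrival, content that passes through a cache node more often gets more chances to be cached, matching the observation in the paragraph above.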
The data caching device provided by the embodiment of the invention performs the following steps. For each network node in a network node set: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache node as a node group for each cache node; for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, determining the obtained shortest hop count as the unit income of the node group, determining the number of other nodes accessing the content node in the node group through the cache node in the node group, and taking the determined number as the flow of the node group; for each node group: multiplying the unit income of the node group, the flow of the node group and the caching factor to obtain the caching income of the node group; determining the caching factors respectively corresponding to the node groups when the sum of the caching incomes of the node groups is maximum on the premise of meeting a preset constraint condition; and storing each caching factor into the cache node in the corresponding node group, so that the cache node in the node group caches at least part of the contents of the content node in the node group according to the stored caching factor. The invention thus provides a node caching scheme under maximum caching income, which can effectively improve network utilization and reduce network burden.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. A method for caching data, comprising:
determining each binary node group corresponding to the network node set, wherein the process of determining each binary node group corresponding to the network node set comprises the following steps: for each network node in the set of network nodes: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache nodes as a binary node group for each cache node to obtain each binary node group corresponding to the network node set;
for each binary node group: obtaining the shortest hop count from the content node to the cache node in the binary node group, determining the obtained shortest hop count as the unit profit of the binary node group, determining the number of other nodes accessing the content node in the binary node group through the cache node in the binary node group, and taking the determined number as the flow of the binary node group;
for each binary node group: multiplying the unit income of the binary node group, the flow of the binary node group and the cache factor corresponding to the binary node group to obtain the cache income of the binary node group;
determining cache factors respectively corresponding to all binary node groups when the sum of cache gains of all binary node groups corresponding to the network node set is maximum on the premise of meeting a preset constraint condition, wherein the cache factors are 0 or 1;
storing each caching factor into a caching node in a corresponding binary node group so that the caching node in the binary node group caches at least part of contents in content nodes in the binary node group according to the stored caching factor corresponding to the binary node group, wherein the caching factor comprises the following steps:
storing each caching factor into a caching node in a corresponding binary node group, so that the caching node in the binary node group caches at least part of contents in content nodes in the binary node group according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor being 1 is a first probability, and the storage probability corresponding to the caching factor being 0 is a second probability, and the first probability is greater than the second probability;
wherein, storing each caching factor into the caching node in the corresponding binary node group so that the caching node in the binary node group caches at least part of the content in the content node in the binary node group according to the storage probability corresponding to the stored caching factor, includes:
storing each caching factor into the caching node in the corresponding binary node group, so that the caching node in the binary node group generates a random number within 0 to 1 and judges whether the random number is not greater than the storage probability corresponding to the stored caching factor when receiving at least part of content in the content node in the binary node group every time, if so, caching the received content, otherwise, not caching.
2. The method of claim 1, wherein the content node is v_i, the cache node is v_j, the network node set is V, the unit profit is d(j, i), the traffic is w(i, j), and the caching factor is given by a formula (reproduced in the original only as an image).
The determining of the cache factors corresponding to each node group when the sum of the cache gains of each node group is maximum on the premise that the preset constraint condition is met includes:
by a formula (reproduced in the original only as images)
Determining cache factors corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:
(first constraint formula reproduced in the original only as an image)
the second constraint condition is as follows:
(second constraint formula reproduced in the original only as an image)
The f_i is the content cached by a cache node, F is the set of contents published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the caching proportion, and C_j is the buffer space of node v_j.
3. The method of claim 1, wherein determining the number of other nodes accessing the content node in the node group through the cache node in the node group comprises:
obtaining a shortest path tree of content nodes in the node group;
determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree;
the number of the nodes is determined as the number of other nodes accessing the content node in the node group through the cache node in the node group.
4. A data caching apparatus, comprising: a node determining unit, a hop count determining unit, a profit obtaining unit, a buffer factor determining unit and a buffer factor storing unit,
the node determining unit is configured to determine each binary node group corresponding to the network node set, where the process of determining each binary node group corresponding to the network node set includes: for each network node in the set of network nodes: determining the network node as a content node, determining all other network nodes except the network node in the network node set as cache nodes, and determining the content node and the cache nodes as a binary node group for each cache node to obtain each binary node group corresponding to the network node set;
the hop count determination unit is configured to, for each binary node group: obtaining the shortest hop count from the content node to the cache node in the binary node group, determining the obtained shortest hop count as the unit profit of the binary node group, determining the number of other nodes accessing the content node in the binary node group through the cache node in the binary node group, and taking the determined number as the flow of the binary node group;
the gain obtaining unit is configured to, for each binary node group: multiply the unit income of the binary node group, the flow of the binary node group and the cache factor corresponding to the binary node group to obtain the cache income of the binary node group;
the cache factor determining unit is configured to determine a cache factor corresponding to each binary node group when a sum of cache gains of each binary node group corresponding to the network node set is maximum on the premise that a preset constraint condition is met, where the cache factor is 0 or 1;
the cache factor storage unit is used for storing each cache factor into a cache node in a corresponding binary node group, so that the cache node in the binary node group caches at least part of contents in the content node in the binary node group according to the stored cache factor corresponding to the binary node group;
the cache factor storage unit is specifically configured to: storing each caching factor into a caching node in a corresponding node group, so that the caching nodes in the node group cache at least part of contents in the content nodes in the node group according to a storage probability corresponding to the stored caching factor, wherein the storage probability corresponding to the caching factor being 1 is a first probability, and the storage probability corresponding to the caching factor being 0 is a second probability, and the first probability is greater than the second probability;
the cache factor storage unit is further specifically configured to: storing each caching factor into a caching node in a corresponding node group, so that each time the caching node in the node group receives at least part of contents in the content node in the node group, a random number within 0-1 is generated and whether the random number is not greater than a storage probability corresponding to the stored caching factor is judged, if so, the received contents are cached, otherwise, the caching is not performed.
5. The apparatus of claim 4, wherein the content node is v_i, the cache node is v_j, the network node set is V, the unit profit is d(j, i), the traffic is w(i, j), and the caching factor is given by a formula (reproduced in the original only as an image).
The cache factor determination unit is specifically configured to:
by a formula (reproduced in the original only as images)
Determining cache factors corresponding to each node group, wherein the preset constraint conditions include a first constraint condition and a second constraint condition, and the first constraint condition is as follows:
(first constraint formula reproduced in the original only as an image)
the second constraint condition is as follows:
(second constraint formula reproduced in the original only as an image)
The f_i is the content cached by a cache node, F is the set of contents published by the network nodes in the network node set, C is the total data traffic in the network per unit time, k is the caching proportion, and C_j is the buffer space of node v_j.
6. The apparatus of claim 4, wherein the hop count determination unit comprises: a unit profit determination subunit and a flow rate determination subunit,
the unit profit determination subunit is configured to, for each node group: obtaining the shortest hop count from the content node to the cache node in the node group, and determining the obtained shortest hop count as the unit income of the node group;
the traffic determining subunit is configured to, for each node group: obtaining a shortest path tree of content nodes in the node group; determining the number of nodes in a subtree which takes the cache node in the node group as a root node in the shortest path tree; and determining the number of the nodes as the number of other nodes accessing the content node in the node group through the cache nodes in the node group, and taking the determined number as the flow of the node group.
CN201810063151.XA 2018-01-23 2018-01-23 Data caching method and device Active CN108282528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810063151.XA CN108282528B (en) 2018-01-23 2018-01-23 Data caching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810063151.XA CN108282528B (en) 2018-01-23 2018-01-23 Data caching method and device

Publications (2)

Publication Number Publication Date
CN108282528A CN108282528A (en) 2018-07-13
CN108282528B true CN108282528B (en) 2021-07-30

Family

ID=62804654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810063151.XA Active CN108282528B (en) 2018-01-23 2018-01-23 Data caching method and device

Country Status (1)

Country Link
CN (1) CN108282528B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103457867A (en) * 2013-09-04 2013-12-18 清华大学 Method and device for P2P traffic caching deployment
CN104166630A (en) * 2014-08-06 2014-11-26 哈尔滨工程大学 Method oriented to prediction-based optimal cache placement in content central network
CN106851741A (en) * 2016-12-10 2017-06-13 浙江大学 Distributed mobile node file caching method based on social networks in cellular network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Revenue-aware caching mechanism for information-centric networking (基于收益感知的信息中心网络缓存机制); Chen Long, et al.; Journal on Communications (《通信学报》); 2016-05-15; pp. 130-142 *

Also Published As

Publication number Publication date
CN108282528A (en) 2018-07-13

Similar Documents

Publication Publication Date Title
JP5329939B2 (en) Context search method and apparatus
CN107317879B (en) A kind of distribution method and system of user's request
CN103581230B (en) Document transmission system and method, receiving terminal, transmitting terminal
CN106982248B (en) caching method and device for content-centric network
WO2021135835A1 (en) Resource acquisition method and apparatus, and node device in cdn network
US10103989B2 (en) Content object return messages in a content centric network
CN105657006B (en) A kind of access acceleration method and system for the first time accelerating network based on online
CN105407128B (en) Interest keeping method and system on intermediate router in CCN
CN108429701A (en) network acceleration system
WO2021223662A1 (en) Page access based on code scanning
CN109788319B (en) Data caching method
CN109672558A (en) A kind of polymerization and Method of Optimal Matching towards third party's service resource, equipment and storage medium
CN105991763A (en) Pending interest table behavior
CN107493232A (en) A kind of access accelerating method and device of CDN
CN108282528B (en) Data caching method and device
Zhang et al. DENA: An intelligent content discovery system used in named data networking
JP2012507064A5 (en)
CN109981460B (en) Service-oriented converged network, calculation and storage integrated method and device
CN112579639A (en) Data processing method and device, electronic equipment and storage medium
CN114124778B (en) Anycast service source routing method and device based on QoS constraint
CA3102943A1 (en) Directory assisted routing of content in an information centric network
CN111562990B (en) Lightweight serverless computing method based on message
CN110517009B (en) Real-time public layer construction method and device and server
US20170048185A1 (en) Method for posing requests in a social networking site
CN110012071B (en) Caching method and device for Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant