CN112887992B - Dense wireless network edge caching method based on access balance core and replacement rate - Google Patents


Info

Publication number: CN112887992B
Application number: CN202110035595.4A
Authority: CN (China)
Other versions: CN112887992A (application publication)
Other languages: Chinese (zh)
Inventor: 王蒙蒙 (Wang Mengmeng)
Original and current assignee: Binzhou University
Application filed by Binzhou University; priority to CN202110035595.4A
Legal status: Active (granted; the status listed is an assumption, not a legal conclusion)
Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/10Flow control between communication endpoints
    • H04W28/14Flow control between communication endpoints using intermediate storage
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention belongs to the technical field of dense wireless networks and discloses a dense wireless network edge caching method based on an access balance core and a replacement rate, comprising the following steps: constructing an edge cache network model; identifying node importance with a weight-adaptive algorithm; performing weight-adaptive dynamic node ranking; and implementing a weight-adaptive cache decision strategy. By adaptively weighting multiple node characteristics, the invention selects cache base station nodes more accurately, improves the edge cache hit rate and the response efficiency of user requests, and offers clear advantages in the identification precision of important cache nodes and in caching efficiency. The invention further evaluates the algorithm under several network models and from multiple angles. The results show that, compared with existing mechanisms, the scheme achieves better access delay and caching efficiency in more complex network access environments.

Description

Dense wireless network edge caching method based on access balance core and replacement rate
Technical Field
The invention belongs to the technical field of dense wireless networks, and particularly relates to a dense wireless network edge caching method based on an access balance core and a replacement rate.
Background
Currently, as the 5G era approaches, mobile devices and data traffic are growing explosively. To handle massive mobile-device access and high-capacity service transmission, the most direct and effective approach is to densely deploy base stations, forming a dense wireless network. Dense wireless networks have been extensively studied in academia and industry. A dense wireless network is in fact a dense deployment of various types of wireless access points, such as traditional macro base stations, pico base stations, nano base stations, remote radio units, and relay node stations. Densely deployed base stations are not only closer to users but also eliminate some signal holes. Through spatial reuse of spectrum, a dense wireless network improves the spectrum reuse rate, strengthens signal coverage in edge areas, and increases the total capacity of the whole system. Given the high cost of wired backhaul deployment, wireless backhaul has gradually become an effective alternative. In distributed wireless backhaul, neighboring base stations form a cluster; the base stations in the cluster transmit data via wireless backhaul to a particular small base station, which is connected to the core network via optical fiber.
An energy consumption optimization model based on a weighted graph has been proposed in the prior art to calculate the data transmission energy consumption of each link in the network; on the premise of guaranteeing user QoS, extensive research on data resource caching has been carried out to obtain an optimal caching strategy. The prior art also optimizes network storage by analyzing large volumes of request data, and provides a mobile network optimization architecture for improving user experience quality together with a data caching strategy based on user mobility awareness. Such non-cooperative data caching strategies have inherent limitations: they cannot exploit the synergy of neighboring nodes, and their caching efficiency is low.
Existing edge network caching algorithms focus mainly on a single base station, or on hierarchical cooperative caching among local servers, edge servers, and cloud servers; the edge caching network itself lacks research, so the efficiency of access responses is difficult to improve substantially. In addition, these studies assume a fixed network topology, whereas the access times and access content of an edge cache network are random. Owing to the access limitations of base station nodes, a user can access only one base station, so content popularity depends on the number of accesses to that base station; the limited amount of cached content means a base station cannot serve users of other base stations, and popular content ends up cached in multiple base stations, causing redundancy that greatly reduces caching efficiency and service quality. A new dense wireless network edge caching method is therefore needed.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing non-cooperative data caching strategies have inherent limitations, cannot exploit neighborhood synergy, and have low caching efficiency.
(2) Existing edge network caching algorithms focus mainly on a single base station or on hierarchical cooperative caching among local, edge, and cloud servers; the edge caching network lacks research, and the efficiency of access responses is difficult to improve substantially.
(3) Owing to the access limitations of base station nodes, a user can access only one base station in the edge cache network; the limited amount of cached content means a base station cannot serve users of other base stations, and popular content is cached in multiple base stations, causing redundancy that greatly reduces caching efficiency and service quality.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a dense wireless network edge caching method based on an access balance core and a replacement rate.
The invention is realized in such a way that a dense wireless network edge caching method based on an access balance core and a replacement rate comprises the following steps:
step one, constructing an edge cache network model;
step two, identifying node importance based on a weight-adaptive algorithm;
step three, performing weight-adaptive dynamic node ranking;
step four, implementing a weight-adaptive cache decision strategy.
Further, in step one, the edge cache network is composed of base stations, base station servers, and a RAN controller; the base stations are connected wirelessly and are connected to the core cloud through a specific base station; each server has a cache function and provides services for users; the RAN controller is connected to the edge servers in the cluster to collect all server information.
Further, a user may randomly access any base station node to request any content. Assuming that the transmission time between any two nodes in the cluster is less than the transmission time to the core cloud, the user obtains a response as follows:
(1) if the accessed base station server has cached the requested content, the server responds to the user request directly; otherwise, step (2) is executed to check whether any single-hop neighbor of the accessed base station server has cached the content;
(2) if a neighbor server has cached the requested content, it sends the content to the base station server the user accessed, which responds to the user request; otherwise, step (3) is executed to determine whether any other base station server has cached the content;
(3) if no base station node in the cluster holds the cached content, the access request is forwarded to the core cloud;
(4) the user request is answered in the core cloud: the core cloud sends the requested content to the cluster-head server, which forwards it to the server the user accessed, which finally delivers it to the user. Obtaining a response from the core cloud incurs a greater cost.
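The four-step lookup cascade above can be sketched as follows; the `EdgeCluster` class, its data model, and the example topology are illustrative assumptions rather than part of the patent:

```python
class EdgeCluster:
    """Minimal sketch of the cluster lookup cascade (assumed data model)."""

    def __init__(self, caches, neighbors):
        self.caches = caches        # node id -> set of cached content ids
        self.neighbors = neighbors  # node id -> list of single-hop neighbor ids

    def respond(self, node, content):
        # Step 1: the accessed base station serves from its own cache.
        if content in self.caches[node]:
            return ("local", node)
        # Step 2: check single-hop neighbors of the accessed base station.
        for nb in self.neighbors[node]:
            if content in self.caches[nb]:
                return ("neighbor", nb)
        # Step 3: check every other base station server in the cluster.
        for other, cache in self.caches.items():
            if content in cache:
                return ("cluster", other)
        # Step 4: fall back to the core cloud (highest cost).
        return ("core_cloud", None)

cluster = EdgeCluster(
    caches={1: {"a"}, 2: {"b"}, 3: {"c"}},
    neighbors={1: [2], 2: [1, 3], 3: [2]},
)
print(cluster.respond(1, "a"))  # served locally by node 1
print(cluster.respond(1, "b"))  # served by single-hop neighbor 2
print(cluster.respond(1, "c"))  # served elsewhere in the cluster (node 3)
print(cluster.respond(1, "d"))  # forwarded to the core cloud
```

Each successive step widens the search radius, mirroring the increasing response cost described above.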
Further, in step two, the node importance identification based on the weight adaptive algorithm includes:
assuming that the edge cache network is G ═ (V, L), V ═ {1, 2, …, n } is the set of network nodes, L ═ {1, 2, … L } is the set of edges in the network, L ═ L is the set of edges in the network, L ij Representing an edge between node i and node j, L ij Bandwidth of (w) ij And (4) showing. When there is an edge between node i and node j, then w ij >0, otherwise w ij =0。
(1) Node cache space
The node cache space ca_i indicates the available resources of a node: the more cache resources are available, the more important the node.
(2) Adjacent bandwidth sum of a node
The adjacent bandwidth sum of node i is expressed as:

B(i) = Σ_{j∈N(i)} w_ij

where N(i) is the set of neighbor nodes of node i in the network. The larger the adjacent bandwidth sum of a node, the more important the node.
(3) Number of accesses to a node
The number of accesses to a node depends on the type and popularity of its cached content: the more content types a node caches and the more popular that content, the more often the node is accessed.
(4) Access core centrality
Access core centrality is constructed by introducing the number of accesses to a node and is used as one of the cache decision criteria for determining the network caching policy; the cache location is determined by jointly considering the position and the access condition of a node. The access core centrality of node i for any content, denoted C_ac(i), is the sum of the access core degrees of all its neighbor nodes plus the node's own access count:

C_ac(i) = f_i + Σ_{j∈N(i)} kf_j

where kf_j is the access core degree of node j and f_j is the number of accesses to node j within the statistical window.
(5) Access balance rate
Starting from Shannon entropy, the access balance of node i is defined as:

E(i) = −K Σ_{j∈N(i)} p_j ln p_j,  with  p_j = kf_j / Σ_{g∈N(i)} kf_g

where j and g are neighbor nodes of i (j, g ∈ N(i)), kf_j is the access core degree of node j, and K is the Boltzmann constant, which represents a property of the system itself. The larger the value of E(i), the more balanced the access to the node, the more easily user access requests are satisfied, and the greater the importance of the node.
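Under the Shannon-entropy reading above, the access balance of a node can be sketched as follows (the function name, the normalization over neighbor core degrees, and the choice K = 1 are assumptions):

```python
import math

def access_balance(neighbor_cores, k_const=1.0):
    """Entropy-based access balance over the access core degrees of a
    node's neighbors; larger values mean more balanced access."""
    total = sum(neighbor_cores)
    if total == 0:
        return 0.0
    probs = [c / total for c in neighbor_cores]
    return -k_const * sum(p * math.log(p) for p in probs if p > 0)

# A node whose neighbors are accessed evenly is more balanced than one
# dominated by a single heavily-accessed neighbor.
print(access_balance([5, 5, 5]) > access_balance([13, 1, 1]))  # True
```

The uniform case attains the maximum ln |N(i)|, consistent with entropy being maximal for a balanced distribution.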
Further, in step (3), the number of accesses to a node is determined as follows:
1) Content popularity
Assuming that content popularity satisfies the Zipf distribution, the popularity of content k with rank τ is:

P(τ) = τ^(−λ) / Σ_{m=1}^{num} m^(−λ)

where num is the total number of contents and λ is the skewness coefficient of the Zipf distribution; a larger λ means that highly popular content is more easily accessed.
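The Zipf popularity above can be sketched as follows; `zipf_popularity` is a hypothetical helper, not a name from the patent:

```python
def zipf_popularity(num, lam):
    """Zipf popularity for ranks 1..num with skewness coefficient lam."""
    weights = [rank ** (-lam) for rank in range(1, num + 1)]
    z = sum(weights)  # normalizing constant
    return [w / z for w in weights]

pop = zipf_popularity(num=5, lam=0.8)
print(pop[0] > pop[1] > pop[4])    # True: popularity decreases with rank
print(abs(sum(pop) - 1.0) < 1e-9)  # True: popularities sum to one
```

Raising `lam` concentrates more of the probability mass on the top-ranked contents, matching the remark that a larger λ makes popular content easier to access.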
2) Base station user interest
The stable interest preference of users is obtained by analyzing their long-term access records. The long-term interest of users in access base station i is defined as:

In_long(i) = f_long^i(Δt_long) / f_long(Δt_long)

where f_long^i(Δt_long) is the statistical traffic of the current base station node i, and f_long(Δt_long) is the current statistical access of all users in the edge cache cluster.
The interest of users accessing node i in the most recent period is defined as the short-term interest:

In_short(i) = f_short^i(Δt_short) / f_short(Δt_short)

where f_short^i(Δt_short) is the statistical access to node i over the last period of time and f_short(Δt_short) is the statistical traffic of all users in the edge cache cluster over the same period. Since the current interest preference of users depends on both their long-term and their short-term interest, the potential interest of users accessing base station i is defined as:

In(i) = φ_1·In_long(i) + φ_2·In_short(i)

where φ_1 and φ_2 are the respective influence weights of long-term and short-term interest on the current interest of users; since recent behavior has greater influence, φ_2 should be greater than φ_1.
3) Potential access willingness of users
A user's willingness to access content is influenced not only by interest preference but is also closely related to content popularity: users always tend to request popular content that they like. The request probability p_i^k for content k at node i is therefore:

p_i^k = In(i) · P(τ_k)

Suppose the total number of user accesses in the last cache cycle is f_ave; then the potential access amount of content k at base station i is:

f_i^k = f_ave · p_i^k
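Assuming the product-form reading above (request probability = interest × popularity, scaled by the total access count of the last cache cycle), the potential access amount can be sketched as:

```python
def potential_access(interest_i, popularity_k, f_ave):
    """Potential access amount of content k at base station i, read as
    request probability (interest x popularity) scaled by total accesses.
    The product form is an assumed reconstruction, not verified against
    the patent's equation images."""
    prob = interest_i * popularity_k
    return f_ave * prob

# Illustrative numbers: a node drawing half of the cluster's interest,
# content with popularity 0.25, and 1000 accesses in the last cycle.
print(potential_access(0.5, 0.25, 1000))  # 125.0
```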
Further, in step (4), the method for determining the access core degree is:
(A) Delete all nodes with connectivity 1 together with their edges, recording each node's access count; if nodes with connectivity 1 remain, continue the process. The core degree of each deleted node is marked as 1, and its access core degree is its access count plus 1: if node j was accessed f_j times, its access core degree is kf_j = f_j + 1.
(B) For nodes with connectivity 2, repeat the above operation and mark the access core degree of these nodes as access count + 2.
(C) Execute this process cyclically until all nodes are deleted, obtaining the access core degree of every node.
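Steps (A)-(C) describe a k-core-style peeling that carries access counts along; a minimal sketch under that reading (the adjacency-dict data model and function name are assumptions):

```python
import copy

def access_core_degrees(adj, accesses):
    """k-core-style peeling: a node removed in the round for connectivity c
    gets access core degree f_j + c (assumed reading of steps A-C)."""
    adj = copy.deepcopy(adj)  # node -> set of neighbor nodes
    cores = {}
    c = 1
    while adj:
        # Repeatedly strip nodes whose remaining degree is <= c.
        stripped = True
        while stripped:
            stripped = False
            for node in [n for n, nbrs in adj.items() if len(nbrs) <= c]:
                cores[node] = accesses[node] + c
                for nb in adj[node]:
                    if nb in adj:
                        adj[nb].discard(node)
                del adj[node]
                stripped = True
        c += 1
    return cores

# A path graph 1-2-3: the endpoints peel first at c = 1, then the middle.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
accesses = {1: 4, 2: 7, 3: 2}
print(access_core_degrees(adj, accesses))  # {1: 5, 3: 3, 2: 8}
```

Every node of a path graph has coreness 1, so each access core degree here is simply the node's access count plus 1.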
Further, in the example figures, the number beside each node indicates its access count. According to the access core centrality expression, the access core centrality of node n1 is 18 and that of node n2 is 21; the access core centralities of the other nodes can be calculated likewise (none exceeds 10). Node n2 therefore has the greatest access core centrality, so content cached on node n2 will be served with higher response efficiency. The following compares the transmission energy consumption of caching the content at nodes n1 and n2 (assuming 1 unit of energy per hop):
E(n1) = 3×2×1 + (7+1+1)×2 = 24
E(n2) = 3×2×2 + (7+1+1)×1 = 21
Clearly, node n2 is the better cache node for responding to user access requests; that is, node n2 is more important.
Further, in step three, the method for dynamically ordering nodes based on weight adaptation includes:
and (4) carrying out self-adaptive weighting on each index according to the change of the network index parameter by using the information entropy theory. According to the information entropy theory, the higher the disorder of the index set is, the larger the information quantity provided by the index is, and the higher the weight of the comprehensive evaluation index is.
(1) Decision model
Suppose there are n nodes to be ranked, each with 4 evaluation indexes, and let x_iq denote the value of the q-th evaluation index of node i (i = 1, 2, …, n; q = 1, 2, 3, 4). The decision matrix composed of all network nodes and their evaluation indexes is:

X = [x_iq]_{n×4}
(2) Normalizing the decision matrix
Because the indexes have different dimensions, their magnitudes differ and direct comparison is inconvenient. To eliminate the dimensional differences, each index is normalized:

y_iq = x_iq / sqrt(Σ_{i=1}^{n} x_iq²)

The standard normalized matrix is thus:

Y = [y_iq]_{n×4}
(3) Weights calculated from exponential entropy
The entropy value of index q is calculated as:

h_q = −(1/ln n) Σ_{i=1}^{n} p_iq ln p_iq,  where  p_iq = y_iq / Σ_{i=1}^{n} y_iq

The information entropy redundancy is calculated as:

rr_q = 1 − h_q

and the index weight as:

w_q = rr_q / Σ_{q=1}^{4} rr_q

yielding the weight matrix W:

W = [w_1 w_2 w_3 w_4]

The weighted normalized decision matrix may be expressed as:

Z = [w_q y_iq]_{n×4}

and the weighted attribute value of node i as:

z_i = Σ_{q=1}^{4} w_q y_iq
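The weighting pipeline of subsections (1)-(3) can be sketched as follows, assuming the standard entropy-weight form for h_q (the patent's own equation images are not reproduced in this text, so the exact entropy variant is an assumption):

```python
import math

def entropy_weights(X):
    """Entropy-weight method: normalize a decision matrix column-wise,
    compute per-index entropy, and turn redundancy into index weights."""
    n, m = len(X), len(X[0])
    # Vector normalization per index (column).
    norms = [math.sqrt(sum(X[i][q] ** 2 for i in range(n))) for q in range(m)]
    Y = [[X[i][q] / norms[q] for q in range(m)] for i in range(n)]
    # Entropy per index over column-normalized proportions.
    h = []
    for q in range(m):
        col_sum = sum(Y[i][q] for i in range(n))
        p = [Y[i][q] / col_sum for i in range(n)]
        h.append(-sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n))
    rr = [1 - hq for hq in h]      # information entropy redundancy
    w = [r / sum(rr) for r in rr]  # normalized index weights
    # Weighted attribute value of each node.
    z = [sum(w[q] * Y[i][q] for q in range(m)) for i in range(n)]
    return w, z

# Four indexes per node: cache space, adjacent bandwidth sum,
# access core centrality, access balance (illustrative numbers).
X = [[10, 5, 18, 0.9], [8, 7, 21, 1.1], [2, 1, 6, 0.3]]
w, z = entropy_weights(X)
print(abs(sum(w) - 1.0) < 1e-9)  # True: weights sum to one
print(z[1] > z[2])               # True: node 2 outranks node 3
```

The more an index varies across nodes, the lower its entropy and the larger its weight, which is the adaptive behavior the text describes.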
because cooperative caching can be realized among nodes in the edge cache network, and adjacent nodes can also contribute to the caching importance of the target node, an importance evaluation matrix is constructed:
Figure BDA0002893137790000084
wherein, delta ij Is a contribution assignment parameter, which has a value of 1 if two nodes are connected, and 0 otherwise. Gamma ray i Is the degree of the node i and gamma is the average degree of the node.
(4) Importance calculation
And on the basis of the importance evaluation matrix, summing the attribute value of the node i and the importance contributions of all the nodes adjacent to the node i to obtain the importance of the node i.
Figure BDA0002893137790000085
Wherein eta i Reflecting the value of node i in the network. The importance evaluation matrix fully considers the node position, the access frequency, the cache space and the available bandwidth, and the importance of the node is reflected more comprehensively around the target of the cache value.
Further, in step four, the implementation of the weight-adaptive cache decision strategy includes:
(1) Replacement rate
The cache replacement rate is denoted r(i):

r(i) = Σ_{k=1}^{M} S(rf_k) / C(i)

where S(rf_k) is the size of the k-th content rf_k replaced out of node i, C(i) is the cache space size of node i, and M is the number of contents replaced out of node i per unit time. The rate is then standardized:

r̃(i) = r(i) / max_j r(j)

(2) Node caching value metric
A new metric I(i) is designed, combining the access-balance core centrality and the cache replacement rate:

I(i) = η_i / r(i)
i_k = argmax{I(i)}

where η_i is the importance of node i after comprehensive evaluation and r(i) is the node cache replacement rate. If r(i) = 0, the node's cache space is not full or no new content has arrived; to keep the node metric expression consistent, r(i) is set to ε, a very small positive number.
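The replacement rate and the cache value metric can be sketched as follows, assuming the form I(i) = η_i / r(i) with the ε fallback described above (function and variable names are illustrative):

```python
EPSILON = 1e-9  # stand-in for the small positive ε used when r(i) == 0

def replacement_rate(replaced_sizes, cache_size):
    """Cache replacement rate: total size replaced per unit time / capacity."""
    return sum(replaced_sizes) / cache_size

def cache_value(importance, rate):
    """Node caching value I(i) = importance / replacement rate (assumed
    reading); the ε fallback keeps the expression defined when rate is 0."""
    return importance / (rate if rate > 0 else EPSILON)

nodes = {
    "n1": cache_value(importance=0.8, rate=replacement_rate([2, 3], 10)),
    "n2": cache_value(importance=0.7, rate=replacement_rate([1], 10)),
}
best = max(nodes, key=nodes.get)
print(best)  # n2: slightly less important, but with a far more stable cache
```

Dividing by the replacement rate penalizes nodes whose contents churn quickly, so hot content avoids nodes where it would be evicted soon after caching.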
Further, the dense wireless network edge caching method based on the access balance core and the replacement rate further includes:
1) establishing a network topology based on node attributes and counting the accessible content of each node;
2) calculating the corresponding weights according to the node attributes and the number of accesses to each node;
3) calculating the access core centrality of each node according to the core centrality rule;
4) calculating the access balance of each node;
5) constructing the multi-index decision matrix and calculating the index weight values to obtain the node attribute values;
6) constructing the importance evaluation matrix and calculating node importance;
7) calculating the short-term replacement rate and the cache value of each node, and finding the node with the maximum cache value.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program for providing a user input interface to implement the access balancing core and replacement rate based dense wireless network edge caching method when executed on an electronic device.
The invention also aims to provide a dense wireless network edge cache system based on the access balance core and the replacement rate, which is used for implementing the dense wireless network edge cache method based on the access balance core and the replacement rate.
Another object of the present invention is to provide an information data processing terminal, which includes a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the access balancing core and replacement rate based dense wireless network edge caching method.
Another object of the present invention is to provide a computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the access balancing core and replacement rate based dense wireless network edge caching method.
By combining all the above technical schemes, the invention has the following advantages and positive effects: the dense wireless network edge caching method based on the access balance core and the replacement rate selects cache base station nodes more accurately through the adaptive weighting of multiple characteristics, improving the edge cache hit rate and the response efficiency of user requests. The invention evaluates the algorithm under several network models and from multiple angles; the results show that, compared with existing mechanisms, the scheme achieves better access delay and caching efficiency in more complex network access environments.
A distributed wireless edge cache network cluster is taken as the research object, and the cache position of hot content is determined from factors such as node cache size, adjacent bandwidth, access core centrality, and access balance. The method uses an information entropy method to assign adaptive weights and combines the node replacement rate to obtain the node with the maximum cache value. Compared with other related solutions, the method has great advantages in the identification precision of important cache nodes and in caching efficiency. The invention makes the following main contributions:
(1) The edge cache network structure is analyzed; considering that base station nodes can both accept and forward access requests, the characteristics related to node caching are calculated, including cache space, adjacent bandwidth, access core centrality, and access balance, providing a basis for evaluating node caching importance.
(2) The effect of multiple characteristic values on node cache value is examined. The invention uses information entropy to adaptively assign the weight of each characteristic, enabling flexible and accurate determination of node importance. By calculating the replacement rate, the content in hot base station nodes is prevented from being replaced too frequently, improving system efficiency.
(3) Multi-angle experiments are performed under multiple network models. The experimental results show that, compared with existing algorithms, the proposed algorithm identifies node importance more accurately and improves caching efficiency and access response rate more effectively.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a dense wireless network edge caching method based on an access balancing core and a replacement rate according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an edge cache network structure according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an example 1 of an access core provided in an embodiment of the present invention.
Fig. 4 is a schematic diagram of an example 2 of an access core provided in an embodiment of the present invention.
Fig. 5 is a schematic diagram of a 14-node simulation network according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a topology provided by an embodiment of the present invention.
Fig. 7(a)-(d) are schematic diagrams illustrating the differences among the top three nodes of each evaluation method, provided by an embodiment of the present invention, in terms of the efficiency of satisfying access requests.
Fig. 8 is a diagram of the mean hop count required to respond to a request for the LCE, Betw, and C_wa algorithms, provided by an embodiment of the present invention.
Fig. 9 is a diagram comparing the edge cache hit rates of the three algorithms LCE, Betw, and C_wa, provided by an embodiment of the present invention.
Fig. 10 is a schematic diagram illustrating an analysis result of a load condition of a cache system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a dense wireless network edge caching method based on an access balancing core and a replacement rate, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method for caching the edge of the dense wireless network based on the access balancing kernel and the replacement rate according to the embodiment of the present invention includes the following steps:
s101, constructing an edge cache network model;
s102, identifying the importance of the nodes based on weight self-adaption;
s103, carrying out self-adaptive dynamic node sequencing based on the weight;
and S104, realizing a buffer decision strategy based on weight self-adaption.
The present invention will be further described with reference to the following examples.
Summary of the invention
With the deployment of dense 5G base stations, the energy consumption and latency of the backhaul network have a large impact on user experience. Mobile edge network caching is an effective way to relieve the backhaul load, reduce network delay, and improve user experience. However, the choice of content caching locations in an edge cache network strongly affects network caching efficiency. The invention studies various important-node evaluation strategies and network caching strategies. Since those algorithms attend only to network structure and ignore other contextual characteristics, a weight-adaptive dense wireless network edge cache decision strategy is proposed to improve the response efficiency of user requests. Through the adaptive weighting of multiple characteristics, cache base station nodes are selected more accurately and the edge cache hit rate is improved. The invention evaluates the algorithm under several network models and from multiple angles; the results show that, compared with existing mechanisms, the scheme achieves better access delay and caching efficiency in more complex network access environments.
To solve the decision problem of locating popular content caches, the invention proposes a weight-adaptive edge caching method (C_wa). A distributed wireless edge cache network cluster is taken as the research object, and the cache position of hot content is determined from factors such as node cache size, adjacent bandwidth, access core centrality, and access balance. The method uses an information entropy method to assign adaptive weights and combines the node replacement rate to obtain the node with the maximum cache value. Compared with other related solutions, the method has great advantages in the identification precision of important cache nodes and in caching efficiency. The invention makes the following main contributions:
the edge cache network structure is analyzed, the characteristics that the base station node can access and forward the access request are considered, the characteristics related to the node cache are calculated, including cache space, adjacent bandwidth, access core centrality and access balance, and a basis is provided for considering the importance of the node cache.
The effect on node cache values for a plurality of characteristic values. The present invention adaptively assigns a weight of each feature using the information entropy. This facilitates a flexible and accurate determination of the importance of the node.
By calculating the replacement rate, the content in the hot base node is prevented from being replaced frequently, and the efficiency of the system is improved.
The invention performs multi-angle experiments under a multi-network model. The experimental result shows that compared with the existing algorithm, the algorithm can more accurately identify the importance of the node and can more effectively improve the caching efficiency and the access response rate.
The rest of the invention is organized as follows: the second section describes the edge cache network model; the third section provides a weight-adaptive node importance identification algorithm; the fourth section presents a weight-adaptive node sorting method and a weight-adaptive cache decision strategy; the fifth section describes the algorithm, which is experimentally analyzed in the sixth section; the final section concludes.
Second, edge cache network model
The invention constructs an edge cache network consisting mainly of base stations, base station servers and a RAN (Radio Access Network) controller. As shown in fig. 2, the base stations are connected wirelessly, and they reach the core cloud through a specific base station (the cluster head). Each server has a cache function and can provide services for users; the RAN controller is connected to the edge servers in the cluster to collect all server information.
A user may randomly access any base station node to request any content. It is assumed that the transit time between any node in the cluster is less than the transit time to the core cloud. The way in which the user gets a response is as follows:
1. If the accessed base station server has cached the content requested by the user, the server responds to the request directly. Otherwise, step 2 is executed to check whether any single-hop neighbor of the accessed base station server has cached the content.
2. If a neighbor server has cached the requested content, it sends the content to the base station server the user accessed, which responds to the user request. Otherwise, step 3 is executed to determine whether any other base station server has cached the content.
3. If no base station node in the cluster has cached the content, the access request is forwarded to the core cloud.
4. The user request is answered in the core cloud: the core cloud sends the requested content to the cluster head server, which forwards it to the user's access server and finally to the user. Obtaining a response from the core cloud incurs a greater cost.
Obviously, the closer the requested content is to the access base station, the smaller the overhead paid; the more content that is cached in an edge cache network cluster, the lower the average cost of a user request response. However, the caching space of the edge server is limited, and the access requests of the users are distributed unevenly, so that the high-efficiency caching strategy is beneficial to improving the service quality.
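The four-tier lookup described above can be sketched as follows. This is a minimal illustration; the node names, data structures and cost constants are hypothetical, not from the patent.

```python
# Sketch of the four-tier lookup: local server -> one-hop neighbors ->
# rest of the cluster -> core cloud. Cost values are illustrative only.

def respond(content, access_node, caches, neighbors, cluster):
    """Return (tier, cost) for serving `content` requested at `access_node`."""
    if content in caches[access_node]:            # step 1: local hit
        return "local", 1
    for nb in neighbors[access_node]:             # step 2: one-hop neighbors
        if content in caches[nb]:
            return "neighbor", 2
    for node in cluster:                          # step 3: rest of the cluster
        if content in caches[node]:
            return "cluster", 3
    return "cloud", 10                            # step 4: core cloud, highest cost

caches = {"s1": {"a"}, "s2": {"b"}, "s3": {"c"}}
neighbors = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}
cluster = ["s1", "s2", "s3"]

print(respond("a", "s1", caches, neighbors, cluster))  # local hit
print(respond("d", "s1", caches, neighbors, cluster))  # falls back to cloud
```

The monotonically increasing cost per tier mirrors the observation that the farther the content is from the access base station, the greater the overhead.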
Third, node importance identification based on weight adaptive algorithm
Assume the edge cache network is G = (V, L), where V = {1, 2, …, n} is the set of network nodes and L = {l_1, l_2, …, l_m} is the set of edges in the network; l_ij denotes the edge between node i and node j, and its bandwidth is denoted w_ij. When an edge exists between node i and node j, w_ij > 0; otherwise w_ij = 0.
A. Node cache space
Node cache space ca_i indicates the available cache resources of the node. The more cache resources available, the more important the node.
B. Adjacent bandwidth sum of nodes

The adjacent bandwidth sum of a node is shown in equation (1), where N(i) denotes the set of neighbor nodes of node i in the network:

B_i = Σ_{j∈N(i)} w_ij   (1)

The larger the adjacent bandwidth sum of a node, the more important the node.
C. Number of times of accessing node
The number of accesses to a node depends on the type and popularity of the cached content: the more content types the node caches, the more it is accessed, and the more popular the content, the more the node is accessed.
1) Content popularity
Assuming the content popularity satisfies the Zipf distribution, the popularity of the content k at rank τ is:

p_k(τ) = τ^(-λ) / Σ_{m=1}^{num} m^(-λ)   (2)

where num represents the total number of contents and λ represents the skewness coefficient of the Zipf distribution; a larger λ means that contents with high popularity are more easily accessed.
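The Zipf popularity just described can be sketched as follows; the catalog size and skewness value are illustrative.

```python
# Zipf popularity: content at rank tau gets probability proportional to tau^(-lam),
# normalized over the whole catalog of `num` contents.

def zipf_popularity(tau, num, lam):
    norm = sum(m ** (-lam) for m in range(1, num + 1))
    return (tau ** (-lam)) / norm

num, lam = 100, 0.8
probs = [zipf_popularity(t, num, lam) for t in range(1, num + 1)]
print(round(sum(probs), 6))             # the distribution sums to 1
print(probs[0] > probs[1] > probs[-1])  # lower rank -> higher popularity
```

A larger `lam` concentrates more of the probability mass on the top-ranked contents, matching the statement that highly popular contents become easier to access.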
2) Base station user interest
Since a user's long-term (more than seven days) access record reflects the user's intention, the user's stable interest preference can be obtained by analyzing the long-term access record. The long-term interest of users accessing base station i is therefore defined as I_i^long:

I_i^long = f_i^long(Δt_long) / f^long(Δt_long)   (3)

where f_i^long(Δt_long) represents the statistical traffic of the current base station node i and f^long(Δt_long) represents the statistical traffic of all users in the edge cache cluster. However, because a single search access has little influence on a user's current interest and the life cycle of popular content is generally not long, it is difficult to predict a user's interest in the short term from long-term preference alone. The current interest preference of a user is closely related to the content and access patterns of the recent past: content a user has accessed frequently of late has a higher probability of being requested again. Considering the user's recent interest and access patterns therefore improves the accuracy of interest prediction. Because recent interest is dynamic, it should be time-dependent; and while an individual user's interest is affected by many factors and fluctuates strongly, within a fixed user group the short-term interest of the group varies relatively little, so the average access count can effectively reflect the short-term interest of the whole user population. The short-term interest of users accessing node i in the most recent period is defined as I_i^short:

I_i^short = f_i^short(Δt_short) / f^short(Δt_short)   (4)

where f_i^short(Δt_short) represents the statistical traffic to node i in the last period of time and f^short(Δt_short) represents the statistical traffic of all users in the edge cache cluster in the last period of time. The current interest preference of users depends on both their long-term and short-term interest, so the potential interest of users accessing base station i is defined as:

I_i = φ_1 I_i^long + φ_2 I_i^short   (5)

where φ_1 and φ_2 respectively represent the influence proportions of long-term and short-term interest on the users' current interest; since the recent influence is larger, φ_2 should be greater than φ_1.
3) Potential access willingness of user
A user's willingness to access content is influenced not only by interest preference but also by content popularity, i.e., users tend to request popular content that they like. The request probability p_i^k of content k in node i is therefore:

p_i^k = I_i · p_k(τ)   (6)

Assuming the total number of user accesses in the last cache cycle is f_ave, the potential access amount f_i^k of users to content k in base station i is

f_i^k = f_ave · p_i^k
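The combination of long-term and short-term interest with content popularity can be sketched as follows; the traffic counts, φ weights and popularity value are illustrative, and the equation forms follow the reconstruction used in this section.

```python
# Potential access of content k at base station i: blend long/short-term
# interest, then scale Zipf popularity by it. All inputs are illustrative.

def interest(f_i_long, f_all_long, f_i_short, f_all_short, phi1=0.3, phi2=0.7):
    # phi2 > phi1: recent behavior weighs more, as the text requires
    i_long = f_i_long / f_all_long
    i_short = f_i_short / f_all_short
    return phi1 * i_long + phi2 * i_short

def potential_access(interest_i, popularity_k, f_ave):
    p_ik = interest_i * popularity_k   # request probability of content k at node i
    return f_ave * p_ik                # expected accesses in the next cache cycle

I_i = interest(f_i_long=400, f_all_long=2000, f_i_short=90, f_all_short=300)
print(round(I_i, 3))                                                   # 0.27
print(round(potential_access(I_i, popularity_k=0.05, f_ave=1000), 2))  # 13.5
```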
D. Access core centrality

In an edge cache network, user accesses to base station nodes are uncertain, so deciding which node occupies the central position of the network must take the current access situation into account. The invention constructs the access core centrality C_ac(i) by introducing the access counts of nodes and takes it as one of the cache decision criteria for determining the network caching policy, so that the cache position is decided by both the position and the access condition of a node. The access core centrality of a node is the sum of the access core degrees of all its neighbor nodes plus the node's own access count:

C_ac(i) = f_i + Σ_{j∈N(i)} kf_j   (7)

where kf_j is the access core degree of node j and f_j is the number of accesses to node j within the statistical time.
The access core degree is determined by the following procedure:
(A) Delete all nodes with connectivity 1 together with their edges, recording the access counts of the deleted nodes; if nodes with connectivity 1 still exist, continue the process. Mark the core degree of each deleted node as 1 and its access core degree as the node's access count plus 1: if the access count of node j is f_j, the access core degree of node j is kf_j = f_j + 1.
(B) Repeat the above operation for nodes with connectivity 2, marking their access core degree as access count + 2.
(C) Execute this process cyclically until all nodes are deleted, obtaining the corresponding access core degree of every node.
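The peeling procedure above can be sketched as a k-core-style shell decomposition in which each removed node receives its access count plus the shell index. The graph and access counts below are illustrative, not the topology of fig. 3.

```python
# Access core degree via iterative peeling: at stage k, repeatedly remove
# nodes whose remaining connectivity is <= k and set kf_j = f_j + k.

def access_core(adj, accesses):
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a mutable copy
    kf = {}
    k = 1
    while adj:
        removed = True
        while removed:
            removed = False
            for v in [v for v, nbrs in adj.items() if len(nbrs) <= k]:
                kf[v] = accesses[v] + k        # kf_j = f_j + k
                for nb in adj[v]:
                    adj[nb].discard(v)         # detach v from its neighbors
                del adj[v]
                removed = True
        k += 1
    return kf

# a triangle (a, b, c) with a pendant node d attached to c
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
accesses = {"a": 5, "b": 3, "c": 7, "d": 2}
print(access_core(adj, accesses))  # d peels off at k=1, the triangle at k=2
```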
As shown in fig. 3, the numbers next to the nodes indicate access counts. According to equation (7), the access core centrality of node n1 is 18 and that of node n2 is 21. The access core centralities of the other nodes can be computed likewise (none exceeds 10). Node n2 therefore has the greatest access core centrality, so content cached on node n2 will be responded to more efficiently. The following compares the transmission energy consumption of caching the content at nodes n1 and n2 (assuming 1 unit of energy per hop):

E(n1) = 3×2×1 + (7+1+1)×2 = 24
E(n2) = 3×2×2 + (7+1+1)×1 = 21

Obviously node n2 is the more advantageous cache node for responding to the access requests of users, that is, node n2 is more important.
E. Access balance

Since the total access counts from the neighbor nodes of n1 and n2 in fig. 3 differ, the relative cache value of nodes n1 and n2 can be settled by access core centrality. But how should the importance of n1 and n2 be decided if they have the same access count?

Fig. 4 shows a case in which the total access counts of the two nodes n1 and n2 are the same. Calculation gives C_ac(n1) = C_ac(n2) = 18, so it is clearly impossible to decide which node is more important by access core centrality alone; with the same access core centrality, nodes n1 and n2 appear equally important. It can be observed, however, that although the neighbor nodes of n1 and n2 have the same total access count of 6, the distributions differ. If the link between nodes n2 and n8 fails, only 2 access requests (those from nodes n9 and n10) can still be satisfied, whereas for node n1, if any single link between nodes n5, n6, n7 and node n1 fails, the remaining 4 accesses can still be guaranteed. That is, under the same link stability, node n1 has a greater influence on the network than node n2, so node n1 is more important. To measure the influence of the distribution of neighbor access counts on node importance, the invention defines the access balance of a node based on Shannon entropy:

C_be(i) = -k_B Σ_{j∈N(i)} (kf_j / Σ_{g∈N(i)} kf_g) · ln(kf_j / Σ_{g∈N(i)} kf_g)   (8)

where j and g are neighbor nodes of i (j, g ∈ N(i)), kf_j is the access core degree of node j, and k_B is the Boltzmann constant, representing an attribute of the system itself. The larger the value of C_be(i), the more balanced the accesses to the node, the more easily user access requests are satisfied, and the greater the importance of the node.
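The entropy-based access balance can be sketched as follows; k_B is set to 1 and the neighbor access core degrees are illustrative.

```python
import math

# Access balance of a node: Shannon entropy of its neighbors' access core
# degrees (as reconstructed in this section). k_B = 1 for illustration.

def access_balance(neighbor_kf, k_b=1.0):
    total = sum(neighbor_kf)
    return -k_b * sum((kf / total) * math.log(kf / total) for kf in neighbor_kf)

balanced = access_balance([2, 2, 2])   # accesses evenly spread over neighbors
skewed = access_balance([4, 1, 1])     # accesses concentrated on one neighbor
print(balanced > skewed)               # True: the balanced node is more important
```

This matches the n1/n2 discussion: two nodes with equal total neighbor accesses are separated by how evenly those accesses are distributed.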
Through the above analysis, the importance of a node in a dense edge cache network mainly depends on four aspects: cache space, adjacent bandwidth, access core centrality and access balance. How these indices affect node importance, however, is a question that must be addressed.
Fourth, node sorting method based on weight self-adaptation and caching decision strategy based on weight self-adaptation
4.1 dynamic node ordering method based on weight self-adaptation
In previous studies, impact index weights were mainly assigned manually, which lacks flexibility. The invention uses information entropy theory to adaptively weight each index according to changes in the network index parameters. According to information entropy theory, the higher the disorder of an index set, the larger the amount of information the index provides and the higher its weight in the comprehensive evaluation.
A. Decision model
Suppose there are n nodes to be sorted, each with 4 evaluation indices, and let the value of the q-th evaluation index of node i be x_iq (i = 1, 2, …, n; q = 1, 2, 3, 4). The decision matrix composed of all network nodes and their evaluation indices is shown in equation (9):

X = (x_iq)_{n×4}   (9)
B. Normalizing the decision matrix

Because the indices have different dimensions and magnitudes, they cannot be compared directly. To eliminate the dimensional differences, each index is normalized as shown in equation (10):

y_iq = x_iq / Σ_{i=1}^{n} x_iq   (10)

The standard normalized matrix is thus:

Y = (y_iq)_{n×4}   (11)
The weights are then computed from the index entropy; equation (12) gives the entropy value of index q:

h_q = -(1 / ln n) Σ_{i=1}^{n} y_iq ln y_iq   (12)
calculating the information entropy redundancy as shown in equation (13):
rr q =1-h q (13)
The index weights are calculated as shown in equation (14):

w_q = rr_q / Σ_{q=1}^{4} rr_q   (14)
an exponential weight matrix W is obtained, as shown in equation (15):
W=[w 1 w 2 w 3 w 4 ] (15)
the weighted normalized decision matrix can be expressed as:
Z = (w_q y_iq)_{n×4}   (16)
the weighted attribute value of node i can be expressed as:
z_i = Σ_{q=1}^{4} w_q y_iq   (17)
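The entropy-weight procedure of this section (normalize, per-index entropy, redundancy, weights, weighted node score) can be sketched as follows. The 4-index matrix rows are illustrative node data, not from the patent, and the proportion normalization is an assumption chosen so that each column sums to 1.

```python
import math

# Entropy-weight sketch: columns are the 4 indices (cache space, bandwidth sum,
# access core centrality, access balance); rows are nodes. Data illustrative.

def entropy_weights(x):
    n, m = len(x), len(x[0])
    cols = list(zip(*x))
    # proportion normalization: each column sums to 1 (all entries must be > 0)
    y = [[x[i][q] / sum(cols[q]) for q in range(m)] for i in range(n)]
    h = [-sum(y[i][q] * math.log(y[i][q]) for i in range(n)) / math.log(n)
         for q in range(m)]                   # entropy of each index
    rr = [1 - hq for hq in h]                 # information entropy redundancy
    w = [r / sum(rr) for r in rr]             # adaptive index weights
    scores = [sum(w[q] * y[i][q] for q in range(m)) for i in range(n)]
    return w, scores

x = [[10.0, 5.0, 18.0, 1.1],
     [20.0, 9.0, 21.0, 0.9],
     [15.0, 4.0, 10.0, 1.0]]
w, scores = entropy_weights(x)
print(round(sum(w), 6))   # the weights sum to 1
```

A more disordered column yields a lower entropy ratio, a higher redundancy, and therefore a higher weight, exactly the adaptivity the text describes.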
because cooperative caching can be realized among nodes in the edge cache network, and adjacent nodes can also contribute to the caching importance of the target node, an importance evaluation matrix is constructed:
H = (h_ij)_{n×n},  h_ij = δ_ij (γ_j / γ) z_j   (18)

where δ_ij is the contribution allocation parameter, whose value is 1 if the two nodes are connected and 0 otherwise; γ_i is the degree of node i and γ is the average node degree.
E. Importance calculation
And on the basis of the importance evaluation matrix, summing the attribute value of the node i and the importance contributions of all the nodes adjacent to the node i to obtain the importance of the node i.
η_i = z_i + Σ_{j∈N(i)} δ_ij (γ_j / γ) z_j   (19)
where η_i reflects the value of node i in the network. The importance evaluation matrix fully considers node position, access frequency, cache space and available bandwidth, reflecting node importance more comprehensively around the goal of cache value.
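The neighbor-contribution aggregation just described can be sketched as follows; the attribute values and the small star graph are illustrative, and the aggregation form follows the reconstruction used in this section.

```python
# Importance aggregation: a node's importance is its own weighted attribute
# value plus the degree-scaled attribute values of its neighbors.

def importance(i, z, adj, degree, avg_degree):
    return z[i] + sum((degree[j] / avg_degree) * z[j] for j in adj[i])

z = {"a": 0.5, "b": 0.25, "c": 0.25}          # weighted attribute values
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
degree = {v: len(nbrs) for v, nbrs in adj.items()}
avg_degree = sum(degree.values()) / len(degree)

eta = {v: importance(v, z, adj, degree, avg_degree) for v in adj}
print(round(eta["a"], 3), eta["b"] == eta["c"])  # symmetric leaves score equally
```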
4.2 weight-adaptive-based cache decision strategy
In addition to node importance factors, the cache value of a node is also related to the replacement rate.
A. Rate of replacement
The algorithm above can determine the most influential base station node in the edge cache network cluster, but if all accessed content were cached on that node, changes in network parameters would negatively affect network performance: frequent replacement would inevitably shorten the residence time of popular content in the node, lower the network cache hit rate, and ultimately hurt edge caching performance. The replacement rate is therefore another important factor the invention must consider; the cache replacement rate is represented by R(i):

R(i) = Σ_{k=1}^{M} S(rf_k) / C(i)   (20)

where S(rf_k) is the size of the content rf_k replaced from node i, C(i) is the cache space size of node i, and M is the number of contents replaced from node i per unit time.
After normalization:

r(i) = R(i) / max_j R(j)   (21)
B. node caching value metric
To express the node cache value more conveniently, a new metric I(i) is designed, combining the access-balanced core centrality and the cache replacement rate:

I(i) = η_i / r(i)   (22)
i_k = argmax { I(i) }

where η_i is the comprehensively evaluated importance of node i and r(i) is the node's cache replacement rate. If r(i) = 0, the node's cache space is not yet full or no new content has arrived; to keep the node metric expression consistent, let r(i) = ε, where ε is a very small positive number.
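The cache-value metric and its ε guard can be sketched as follows; the importance and replacement-rate values are illustrative, and the ratio form follows the reconstruction used in this section.

```python
# Cache value I(i) = eta_i / r(i): high importance and low replacement rate make
# a node valuable; r(i) = 0 is replaced by a tiny epsilon. Values illustrative.

EPS = 1e-6

def cache_value(eta, r):
    return {i: eta[i] / (r[i] if r[i] > 0 else EPS) for i in eta}

eta = {"n1": 0.8, "n2": 0.9, "n3": 0.4}
r = {"n1": 0.2, "n2": 0.9, "n3": 0.0}   # n3 saw no replacements this cycle
I = cache_value(eta, r)
best = max(I, key=I.get)
print(best)  # n3: a stable cache makes it the most valuable location
```

Note how n2, despite the highest importance, is penalized by its high replacement rate, which is exactly the frequent-replacement effect the text warns against.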
Fifth, description of the algorithm
The method mainly comprises the following steps:
1) Establish the network topology based on node attributes and count the contents accessible from each node.
2) Calculate the corresponding weight according to the node attributes and node access counts.
3) Calculate the access core centrality of each node according to the core degree rules.
4) Calculate the access balance of each node.
5) Construct the multi-index decision matrix and calculate the index weights to obtain the node attribute values.
6) Construct the importance evaluation matrix and calculate node importance.
7) Calculate the short-term replacement rate and the cache value of each node, and find the node with the maximum cache value.
Sixth, analysis of experiment
To verify the performance of the proposed weight-adaptive caching decision strategy, the algorithm is compared with other node importance evaluation algorithms [21]-[24] in terms of node ranking, access request response times and caching efficiency. The experiments evaluate importance from two perspectives: computing the importance and rank of nodes on the topology in fig. 3 (10 nodes), and constructing a new topology (14 nodes) to rank nodes from the perspective of network efficiency.
The experimental platform was an Intel(R) Core(TM) i7-6200 CPU with 8 GB of memory, running ubuntu-14.04.4-desktop-i386. The edge cache network was simulated with the NS3 simulator, and the cache decision strategy proposed by the invention was implemented in code. The simulation data were imported into Matlab for processing. Table 1 shows the parameter settings of the simulation.
A. Topology of nodes
Taking the small sample network of fig. 4 as an example, the importance of the nodes is evaluated and ranked using the various core-node evaluation methods. Table 2 shows the computed values and rankings of the different methods. The results show that many algorithms cannot accurately distinguish the relative importance of nodes, so using a more accurate method matters; C_wa, for example, distinguishes nodes n1 and n2 more finely.
(Table 2: computed importance values and rankings of the different core-node evaluation methods.)
B. Comparison of network access efficiency
Network Access Efficiency (NAE) considers deleting all edges connected to a node at once, which may lengthen the shortest access paths between the remaining nodes and thereby increase the total path length the network needs in order to satisfy all accesses. NAE is computed as:

NAE = 1 / (N(N-1)) · Σ_{i≠j} f_j / d_ij

where d_ij is the shortest path length between node i and node j, f_j is the potential access count of node j, and N is the number of nodes in the network.
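Under the reconstruction above, NAE can be sketched with breadth-first-search shortest paths on an unweighted graph; the 4-node path graph and access counts are illustrative.

```python
from collections import deque

# NAE sketch: average of f_j / d_ij over ordered node pairs, with shortest
# path lengths d_ij computed by BFS. Graph and access counts illustrative.

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for nb in adj[v]:
            if nb not in dist:
                dist[nb] = dist[v] + 1
                q.append(nb)
    return dist

def nae(adj, f):
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for i in nodes:
        dist = bfs_dist(adj, i)
        for j in nodes:
            if j != i and j in dist:   # unreachable pairs contribute nothing
                total += f[j] / dist[j]
    return total / (n * (n - 1))

# a 4-node path a-b-c-d, with the inner nodes attracting more accesses
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
f = {"a": 1, "b": 4, "c": 4, "d": 1}
print(round(nae(adj, f), 3))  # 1.972
```

Deleting a node's edges lengthens some d_ij, lowering NAE; the node whose removal lowers NAE most is judged most important, as in the Table 4 discussion.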
As shown in FIG. 5, the network has 14 nodes and 28 edges, the number of accesses of each node is random, and all nodes in the graph are subjected to importance evaluation. Table 3 shows node ranks of different ranking methods in a 14-node network, and it can be seen that different methods have different node ranks in the same network, so it is necessary to analyze the importance of nodes in the network more accurately by means of NAE performance indicators.
(Table 3: node rankings of the different ranking methods in the 14-node network.)
Table 4 shows the NAE values of the network after each node is deleted. The NAE value is largest when node n8 is deleted; that is, deleting node n8 affects the access connectivity of the entire network the most, so node n8 is judged to be the most important node in the network. As can be seen from Table 3, C_wa also selects node n8 as the most important node, consistent with the results of Table 4. Thus, in the 14-node simulation network, C_wa performs better and with higher precision.
(Table 4: NAE values of the network after deleting each node.)
C. Capability comparison to satisfy access requests
To verify the effectiveness of the method more rigorously, the experiment uses the Zachary karate club network (a small benchmark network widely used in complex network research and social analysis); its topology is shown in fig. 6. The effects of the different methods' node selections on satisfying access requests are compared using the independent cascade model.
To analyze how the proposed method differs from others in evaluating a node's ability to respond to access requests, it was compared with the DC, BC, CC and EC methods. Since the same node satisfies the same number of access requests in the same round, the top three nodes of each evaluation method that are not completely identical across methods are taken as source nodes (when nodes cannot be distinguished, experimental nodes are selected in turn; C_wa differs from BC and EC in two nodes, and from CC and DC in one node).
The difference in efficiency of satisfying access requests for the first three nodes of each evaluation method is shown in fig. 7.
It can be observed from fig. 7(A)(B)(C)(D) that the method is superior to the other four methods in satisfying access requests and can satisfy more access requests under the same number of propagation rounds. Analysis shows that the C_wa method introduces the concept of the access core and comprehensively considers node connectivity, access frequency and access diversity, so it can better satisfy access requests.
1) Average response hop
H_ave = (1/A) Σ_{a=1}^{A} hp_a

where hp_a is the number of hops (rounds) required to satisfy access request a and A is the total number of requests. This parameter is an important factor in system latency and a key measure of cache system performance. Note that the algorithm observes system performance against the total cache amount, since it accounts for the effect of the cache sizes of individual base stations. FIG. 8 shows the average hop counts required by the LCE, Betw and C_wa algorithms to respond to a request. As the figure shows, the average response hop count decreases as the total cache amount grows: with more cache space, more content can be cached on the nodes with higher cache value, so the hop count keeps falling. Because the C_wa algorithm takes the access frequency of requesting nodes into account, the "important" nodes it discovers are those with the smallest overall hop count, so it finds "important" nodes more accurately than Betw.
2) Average hit rate
hit_ave = N_hit / N_total

where N_hit is the number of requests satisfied at the edge and N_total is the total number of requests. This parameter reflects the cache hit rate and the value of the cache system. FIG. 9 compares the edge cache hit rates of the LCE, Betw and C_wa algorithms. The higher the edge cache hit rate, the more requests are satisfied on the edge side, reducing bandwidth consumption and transmission delay. The results show that the C_wa algorithm has a clear advantage: the Betw algorithm ignores base station access counts, node cache capacity is limited, much content must be replaced frequently, and some access requests can only be answered again through the cloud. In the experiment the cache capacity of each node is random, which increases the content replacement frequency to some extent. The LCE algorithm, whose cache space is consumed by edge fluctuation, has the lowest edge cache hit rate; the edge cache hit rate of the C_wa algorithm is about 9% higher than Betw and about 15% higher than LCE.
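The two metrics just discussed, average response hops and average hit rate, can be computed from a request log as follows; the log entries and field names are illustrative.

```python
# Average response hops and edge hit rate over a toy request log.
# A request served from the core cloud counts as an edge miss.

requests = [
    {"hops": 1, "edge_hit": True},
    {"hops": 2, "edge_hit": True},
    {"hops": 1, "edge_hit": True},
    {"hops": 6, "edge_hit": False},   # served from the core cloud
]

avg_hops = sum(r["hops"] for r in requests) / len(requests)
hit_rate = sum(r["edge_hit"] for r in requests) / len(requests)
print(avg_hops, hit_rate)  # 2.5 0.75
```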
3) Total amount of cache replacement
The total cache replacement amount is the number of contents replaced at the nodes; the parameters are mainly the number of cachings within the simulation time and the total cache amount (the simulation time in the invention is 120 s). The load of the cache system can be analyzed through this parameter. As shown in FIG. 10, C_wa has a great advantage over LCE and Betw: LCE creates a lot of redundancy in its implementation, so it performs the most cache replacements and caches the most content; Betw selects the node with the largest request count for caching, so a large amount of content is cached indiscriminately on a few limited nodes, causing uneven caching, more cache replacement and a growing amount of cached content. C_wa determines cache locations by considering both node importance and replacement frequency, making the load on each node more balanced and efficient.
To solve the problem of selecting cache base nodes in an edge cache network, a weight-adaptive (C_wa) cache decision method is proposed. It takes user demand as its guiding object of study, adaptively assigns weights to the factors influencing a node's cache importance, and satisfies user demand more effectively than other cache decision methods.
Similar to other research works at present, the present invention mainly studies how to cache content to better meet access requirements, but does not discuss the problems of transmission delay and energy consumption. In the edge network, these problems cannot be ignored, and the influence of transmission delay and energy consumption on the buffer value of the base station node will be further considered in the future.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When used in whole or in part, can be implemented in a computer program product that includes one or more computer instructions. When loaded or executed on a computer, cause the flow or functions according to embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL), or wireless (e.g., infrared, wireless, microwave, etc.)). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A dense wireless network edge caching method based on an access balance core and a replacement rate is characterized by comprising the following steps:
constructing an edge cache network model;
identifying the importance of the nodes based on a weight self-adaptive algorithm;
weight-based adaptive dynamic node ranking;
realizing a weight-based adaptive cache decision strategy;
the edge cache network consists of a base station, a base station server and a RAN controller; the base stations are connected wirelessly, and the base stations are connected to the core cloud through specific base stations; the server has a cache function and is used for providing services for users; the RAN controller is connected to an edge server in the cluster and used for collecting all server information;
the user can randomly access any base station node to request any content, and assuming that the transmission time between any nodes in the cluster is less than the transmission time to the core cloud, the method for the user to obtain a response is as follows:
1) if the accessed base station server caches the content of the user request, the server directly responds to the user request; otherwise, executing 2), and checking whether the single-hop neighbor server accessing the base station server caches the content;
2) if the neighbor server caches the user request content, the neighbor server sends the content to a basic site server which is accessed by the user and responds to the user request; otherwise, executing 3) to determine whether other base station servers cache the content;
3) if all the base station nodes in the search cluster do not have the cache content, the access request is forwarded to the core cloud;
4) the user request is responded in the core cloud, the core cloud sends the request content to the cluster head server, then sends the request content to the user access server, and finally sends the request content to the user; obtaining responses in the core cloud would require a greater cost;
the node importance identification based on the weight value self-adaptive algorithm comprises the following steps:
assuming the edge cache network is G = (V, L), where V = {1, 2, …, n} is the set of network nodes and L = {l_1, l_2, …, l_m} is the set of edges in the network; l_ij denotes the edge between node i and node j, and its bandwidth is denoted w_ij; when an edge exists between node i and node j, w_ij > 0, otherwise w_ij = 0;
(1) Node cache space
node cache space ca_i represents the available resources of the node; the more cache resources available, the more important the node is;
(2) Adjacent bandwidth sum of nodes

The expression of the adjacent bandwidth sum of a node is:

B_i = Σ_{j∈N(i)} w_ij   (1)

wherein N(i) represents the set of neighbor nodes of node i in the network; the larger the adjacent bandwidth sum of a node, the more important the node;
(3) number of times of accessing node
The access times to the nodes depend on the types and popularity of the cache contents, namely, the more the types are, the more the access times to the nodes are, the more the contents are popular, and the more the access times to the nodes are;
(4) Access core centrality

The access core centrality is constructed by introducing the access counts of the nodes; it is taken as one of the cache decision criteria for determining the network caching strategy, and the cache position is determined according to the positions and access conditions of the nodes; the access core centrality of a node is the sum of the access core degrees of all its neighbor nodes plus the node's own access count;
the method for determining the access core center comprises the following steps:
(A) deleting all nodes with connectivity 1 together with their edges, recording each node's access count; if nodes with connectivity 1 still exist, the process continues; the core degree of each deleted node is marked as 1, and its access core degree is its access count plus 1: if node j has been accessed f_j times, the access core degree of node j is kf_j = f_j + 1;
(B) repeating step (A) for nodes with connectivity 2, marking the access core degree of each deleted node as its access count plus 2;
(C) executing steps (A)-(B) cyclically, increasing the connectivity threshold, until all nodes are deleted, thereby obtaining each node's access core degree;
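Steps (A)-(C) amount to a k-core-style peeling of the network with access counts attached; a minimal sketch, in which the adjacency-list representation and the function name are assumptions:

```python
def access_core_degrees(adj, visits):
    """Peel the graph k-core style: at level k, repeatedly delete nodes
    whose connectivity is <= k; a deleted node's access core degree is
    its access count plus the level k at which it was removed."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    core = {}
    k = 1
    while adj:
        removed = True
        while removed:
            removed = False
            for v in [v for v, nbrs in adj.items() if len(nbrs) <= k]:
                core[v] = visits[v] + k          # kf_j = f_j + k
                for u in adj.pop(v):             # delete v and its edges
                    if u in adj:
                        adj[u].discard(v)
                removed = True
        k += 1
    return core
```

For a pendant node attached to a triangle, the pendant is removed at level 1 and the triangle's nodes at level 2, each ending up with its access count plus its removal level.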
(5) rate of access balancing
The access balance rate of node i is:
S(i) = -k_b Σ_{j∈N(i)} p_j ln p_j, with p_j = kf_j / Σ_{g∈N(i)} kf_g;
where j and g are neighbor nodes of node i (j, g ∈ N(i)), kf_j is the access core degree of node j, p_j is the proportion of node j's access core degree among all neighbors of node i, and k_b is the Boltzmann constant, representing an intrinsic property of the system; the larger S(i) is, the more balanced the access to the node, the more easily user access requests are satisfied, and the greater the importance of the node;
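Under the reading that the access balance rate is an entropy over the normalized access core degrees of a node's neighbors (the exact formula images are not preserved in the text, so this form is an assumption), a sketch might look like:

```python
import math

def access_balance(neighbor_cores, kb=1.0):
    """Balance of access across a node's neighbors: the entropy of the
    normalized neighbor access core degrees; kb stands in for the
    Boltzmann constant mentioned in the claims (set to 1 here)."""
    total = sum(neighbor_cores)
    probs = [kf / total for kf in neighbor_cores]
    return -kb * sum(p * math.log(p) for p in probs if p > 0)
```

Neighbors with equal access core degrees give the maximal balance value; a single dominant neighbor drives it down.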
the dynamic node sequencing method based on weight self-adaptation comprises the following steps:
each index is adaptively weighted according to changes in the network index parameters using information entropy theory; by that theory, the greater the disorder of an index set, the more information the index provides and the higher its weight in the comprehensive evaluation;
(1) decision model
Suppose there are n nodes to be ranked, each with 4 evaluation indexes, and the value of the q-th evaluation index of node i is x_iq (i = 1, 2, …, n; q = 1, 2, 3, 4); the decision matrix formed by all network nodes and their evaluation indexes is:
X = (x_iq)_{n×4};
(2) nonlinear programming decision matrix
Because the indexes have different dimensions and magnitudes, direct comparison is inconvenient; to eliminate the dimensional differences between the indexes, each evaluation index is normalized as shown below:
y_iq = x_iq / sqrt(Σ_{i=1}^{n} x_iq²);
y_iq is the normalized evaluation index, and the standard normalization matrix is therefore:
Y = (y_iq)_{n×4};
(3) weights calculated from exponential entropy
The exponential entropy value of index q is calculated as:
hh_q = -(1/ln n) Σ_{i=1}^{n} p_iq ln p_iq, with p_iq = y_iq / Σ_{i=1}^{n} y_iq;
the information entropy redundancy is calculated as follows:
rr_q = 1 - hh_q;
the exponential weight is calculated as follows:
w_q = rr_q / Σ_{q=1}^{4} rr_q;
an exponential weighting matrix W is obtained as shown by:
W = [w_1 w_2 w_3 w_4];
the weighted normalized decision matrix can be expressed as:
Z = (z_iq)_{n×4}, z_iq = w_q · y_iq;
the weighted attribute value of node i is represented as:
z_i = Σ_{q=1}^{4} w_q y_iq;
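The entropy-weight computation from the decision matrix to the weighted attribute values can be sketched as below; the normalization and entropy formulas are the common entropy-weight forms, assumed here since the original formula images are not preserved:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight ranking: vector-normalize the decision matrix,
    compute per-index entropy hh_q and redundancy rr_q = 1 - hh_q,
    derive index weights w_q, and return the weighted attribute values."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    Y = X / np.sqrt((X ** 2).sum(axis=0))       # dimensionless indexes
    P = Y / Y.sum(axis=0)                       # per-index proportions
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    hh = -(P * logs).sum(axis=0) / np.log(n)    # normalized entropy per index
    rr = 1.0 - hh                               # information redundancy
    w = rr / rr.sum()                           # index weights
    z = (Y * w).sum(axis=1)                     # weighted node attribute values
    return w, z
```

An index that is identical across all nodes has maximal entropy and zero redundancy, so it receives zero weight; the more dispersed an index, the more it contributes to a node's attribute value.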
Because the nodes in the edge cache network can cache cooperatively, adjacent nodes also contribute to the caching importance of a target node; an importance evaluation matrix is therefore constructed:
H = (h_ij)_{n×n}, with h_ii = 1 and h_ij = δ_ij · γ_j / (n·γ̄) for i ≠ j;
wherein δ_ij is a contribution allocation parameter whose value is 1 if the two nodes are connected and 0 otherwise; γ_i is the degree of node i, and γ̄ is the average node degree;
(4) importance calculation
On the basis of the importance evaluation matrix, the importance of node i is obtained by summing node i's attribute value and the importance contributions of all nodes adjacent to node i:
η_i = z_i + Σ_{j∈N(i)} δ_ij · (γ_j / (n·γ̄)) · z_j;
wherein η_i reflects the value of node i in the network; the importance evaluation matrix fully accounts for node position, access frequency, cache space, and available bandwidth, reflecting node importance more comprehensively with respect to cache value;
the implementation of the weight-based adaptive cache decision strategy comprises:
(1) rate of replacement
The cache replacement rate is denoted by r(i):
r(i) = Σ_{m=1}^{M} S(rf_m) / C(i);
wherein S(rf_m) is the size of the content rf_m replaced from node i, C(i) is the cache space size of node i, and M is the number of contents replaced from node i per unit time;
and normalized:
r*(i) = r(i) / max_j r(j);
(2) node caching value metric
A new metric I(i) is designed, combining the access-balance core centrality and the cache replacement rate:
I(i) = η_i / r(i);
i_k = argmax{I(i)};
wherein η_i is the comprehensively evaluated importance of node i and r(i) is the node cache replacement rate; if r(i) = 0, the node's cache space is not full, or no new content has arrived; to keep the node metric expression consistent, r(i) is then set to ε, a very small positive number;
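Assuming the metric divides the evaluated importance by the replacement rate (which is consistent with the ε guard for r(i) = 0; the original formula image is not preserved), a sketch:

```python
def cache_values(importance, repl_rate, eps=1e-9):
    """I(i) = eta_i / r(i): high importance and a low short-term
    replacement rate yield a high cache value; eps replaces r(i) = 0
    (cache not full, or no new content arrived)."""
    return {i: importance[i] / max(repl_rate.get(i, 0.0), eps)
            for i in importance}

def best_cache_node(importance, repl_rate):
    """i_k = argmax I(i): the node with the greatest cache value."""
    values = cache_values(importance, repl_rate)
    return max(values, key=values.get)
```

A node whose contents churn quickly is penalized even when its importance score is high, steering new cache placements toward stable, important nodes.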
the dense wireless network edge caching method based on the access balancing core and the replacement rate further comprises the following steps:
1) establishing a network topological structure based on node attributes, and calculating the number of accessible node contents;
2) calculating corresponding weight according to the node attribute and the access times to the node;
3) calculating the access core centrality of the node according to the rule of the core centrality;
4) access balance of the computing nodes;
5) constructing a multi-index decision matrix, and calculating an index weight value to obtain a node attribute value;
6) constructing an importance evaluation matrix and calculating the importance of the nodes;
7) calculating the short-term replacement rate and the cache value of each node, and finding the node with the maximum cache value.
2. The access balancing core and replacement rate based dense wireless network edge caching method of claim 1, wherein the number of times the node is accessed comprises:
1) content popularity
Assuming that the content popularity satisfies a Zipf distribution, the popularity of content k ranked τ is:
p_k = τ^(-λ) / Σ_{t=1}^{num} t^(-λ);
wherein num represents the total number of contents and λ represents the skewness coefficient of the Zipf distribution; a larger λ means that contents with high popularity are accessed more readily;
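A standard Zipf popularity computation matching this description (the function name is illustrative):

```python
def zipf_popularity(num, lam):
    """Popularity of contents ranked 1..num under a Zipf distribution
    with skewness coefficient lam: p(tau) = tau^-lam / sum_t t^-lam."""
    norm = sum(t ** -lam for t in range(1, num + 1))
    return [tau ** -lam / norm for tau in range(1, num + 1)]
```

The probabilities sum to 1 and decrease with rank; raising λ concentrates more of the mass on the top-ranked contents.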
2) base station user interest
The user's stable interest preference is obtained by analyzing the user's long-term access records; the long-term interest of users in access base station i is defined as:
I_i^long = f_i^long(Δt_long) / f^long(Δt_long);
wherein f_i^long(Δt_long) represents the statistical traffic of the current base station node i, and f^long(Δt_long) represents the current statistical access volume of all users in the edge cache cluster;
the interest of users accessing node i in the most recent period is defined as the short-term interest:
I_i^short = f_i^short(Δt_short) / f^short(Δt_short);
wherein f_i^short(Δt_short) represents the statistical access volume to node i in the most recent period, and f^short(Δt_short) represents the statistical traffic of all users in the edge cache cluster over that period; since the user's current interest preference depends on both long-term and short-term interest, the potential interest of users accessing base station i is defined as:
I_i = φ_1 · I_i^long + φ_2 · I_i^short;
wherein φ_1 and φ_2 respectively represent the influence weights of long-term and short-term interest on the user's current interest; since recent behavior has the greater influence, φ_2 should be greater than φ_1;
3) Potential willingness of a user to access
Because a user's willingness to access content is influenced not only by interest preference but is also closely related to content popularity, i.e., users tend to request popular content that they like, the request probability p_i^k of content k in node i is:
p_i^k = I_i · p_k;
assuming the total number of user accesses in the last cache cycle is f_ave, the potential access volume f_i^k of users for content k at base station i is:
f_i^k = f_ave · p_i^k;
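Combining the long-term and short-term interest shares with popularity, the potential access computation can be sketched as below; the product form for the request probability is an assumption, since the original formula images are not preserved:

```python
def potential_access(f_long_i, f_long_all, f_short_i, f_short_all,
                     pop_k, f_ave, phi1=0.3, phi2=0.7):
    """Potential access volume of content k at base station i:
    blend long- and short-term interest shares (phi2 > phi1, since
    recent behaviour weighs more), multiply by the content's
    popularity, and scale by the total accesses f_ave."""
    interest_i = phi1 * (f_long_i / f_long_all) + phi2 * (f_short_i / f_short_all)
    p_ik = interest_i * pop_k   # product form assumed for p_i^k
    return f_ave * p_ik
```

For example, a station carrying 10% of long-term and 30% of recent cluster traffic, serving a content of popularity 0.2, with 1000 accesses in the last cycle, is predicted to see 48 requests for that content.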
3. The access balancing core and replacement rate based dense wireless network edge caching method of claim 2, wherein the number next to a node represents its number of accesses.
4. An access equalization core and replacement rate based dense wireless network edge cache system, which is characterized in that the access equalization core and replacement rate based dense wireless network edge cache system is used for implementing the access equalization core and replacement rate based dense wireless network edge cache method according to any one of claims 1 to 3.
5. An information data processing terminal, characterized in that the information data processing terminal comprises a memory and a processor, the memory stores a computer program, when the computer program is executed by the processor, the processor is caused to execute the dense wireless network edge caching method based on the access balance core and the replacement rate according to any one of claims 1 to 3.
6. A computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the access balancing core and replacement rate based dense wireless network edge caching method as claimed in any one of claims 1 to 3.
CN202110035595.4A 2021-01-12 2021-01-12 Dense wireless network edge caching method based on access balance core and replacement rate Active CN112887992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110035595.4A CN112887992B (en) 2021-01-12 2021-01-12 Dense wireless network edge caching method based on access balance core and replacement rate

Publications (2)

Publication Number Publication Date
CN112887992A CN112887992A (en) 2021-06-01
CN112887992B true CN112887992B (en) 2022-08-12

Family

ID=76044097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110035595.4A Active CN112887992B (en) 2021-01-12 2021-01-12 Dense wireless network edge caching method based on access balance core and replacement rate

Country Status (1)

Country Link
CN (1) CN112887992B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500529A (en) * 2021-12-28 2022-05-13 航天科工网络信息发展有限公司 Cloud edge cooperative caching method and system based on perceptible redundancy

Citations (7)

Publication number Priority date Publication date Assignee Title
CN109495865A (en) * 2018-12-27 2019-03-19 华北水利水电大学 A kind of adaptive cache content laying method and system based on D2D auxiliary
WO2019095402A1 (en) * 2017-11-15 2019-05-23 东南大学 Content popularity prediction-based edge cache system and method therefor
CN111565419A (en) * 2020-06-15 2020-08-21 河海大学常州校区 Delay optimization oriented collaborative edge caching algorithm in ultra-dense network
CN111885648A (en) * 2020-07-22 2020-11-03 北京工业大学 Energy-efficient network content distribution mechanism construction method based on edge cache
CN111970733A (en) * 2020-08-04 2020-11-20 河海大学常州校区 Deep reinforcement learning-based cooperative edge caching algorithm in ultra-dense network
CN112039943A (en) * 2020-07-23 2020-12-04 中山大学 Load balancing edge cooperation caching method for internet scene differentiation service
CN112104752A (en) * 2020-11-12 2020-12-18 上海七牛信息技术有限公司 Hot spot balancing method and system for cache nodes of content distribution network

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN101420364B (en) * 2007-10-26 2011-12-28 华为技术有限公司 Link selection method, method and device for determining stability metric value of link
WO2015048773A2 (en) * 2013-09-30 2015-04-02 Northeastern University System and method for joint dynamic forwarding and caching in content distribution networks
CN104539266B (en) * 2014-12-16 2017-07-18 中国人民解放军海军航空工程学院 Kalman's uniformity wave filter based on the adaptation rate factor
CN108923949A (en) * 2018-04-20 2018-11-30 西南交通大学 A kind of ambulant network edge cache regulation means of user oriented
CN109936633A (en) * 2019-03-11 2019-06-25 重庆邮电大学 Based on the cooperation caching strategy of content different degree in content center network
CN111294394B (en) * 2020-01-19 2022-09-27 扬州大学 Self-adaptive caching strategy method based on complex network junction
CN111901392B (en) * 2020-07-06 2022-02-25 北京邮电大学 Mobile edge computing-oriented content deployment and distribution method and system
CN111935783A (en) * 2020-07-09 2020-11-13 华中科技大学 Edge cache system and method based on flow perception
CN112187872B (en) * 2020-09-08 2021-07-30 重庆大学 Content caching and user association optimization method under mobile edge computing network

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
WO2019095402A1 (en) * 2017-11-15 2019-05-23 东南大学 Content popularity prediction-based edge cache system and method therefor
CN109495865A (en) * 2018-12-27 2019-03-19 华北水利水电大学 A kind of adaptive cache content laying method and system based on D2D auxiliary
CN111565419A (en) * 2020-06-15 2020-08-21 河海大学常州校区 Delay optimization oriented collaborative edge caching algorithm in ultra-dense network
CN111885648A (en) * 2020-07-22 2020-11-03 北京工业大学 Energy-efficient network content distribution mechanism construction method based on edge cache
CN112039943A (en) * 2020-07-23 2020-12-04 中山大学 Load balancing edge cooperation caching method for internet scene differentiation service
CN111970733A (en) * 2020-08-04 2020-11-20 河海大学常州校区 Deep reinforcement learning-based cooperative edge caching algorithm in ultra-dense network
CN112104752A (en) * 2020-11-12 2020-12-18 上海七牛信息技术有限公司 Hot spot balancing method and system for cache nodes of content distribution network

Non-Patent Citations (1)

Title
A caching mechanism based on content age of information and popularity in mobile edge networks; Qiu Ya et al.; Cyberspace Security; 2019-11-25 (No. 11); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant