CN109194763B - Caching method based on small base station self-organizing cooperation in ultra-dense network - Google Patents

Caching method based on small base station self-organizing cooperation in ultra-dense network

Info

Publication number
CN109194763B
CN109194763B
Authority
CN
China
Prior art keywords
base station
small base
user
base stations
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811110511.3A
Other languages
Chinese (zh)
Other versions
CN109194763A (en)
Inventor
李曦
胡成佳
纪红
张鹤立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201811110511.3A
Publication of CN109194763A
Application granted
Publication of CN109194763B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 - Communication routing or communication path finding
    • H04W 40/24 - Connectivity information management, e.g. connectivity discovery or connectivity update
    • H04W 40/32 - Connectivity information management, e.g. connectivity discovery or connectivity update for defining a routing cluster membership

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a caching method based on small base station self-organizing cooperation in an ultra-dense network, and belongs to the technical field of wireless communication. The method comprises the following steps: first, a similarity matrix of the small base stations is obtained according to the load capacity and the positions of the small base stations in the ultra-dense network; then, clustering is carried out according to the similarity matrix and the number k of small base stations in each cluster, and the small base station with the best load capacity in each cluster is selected as the cluster head; files are cached in the small base stations and the macro base station according to a file caching strategy; after a user accesses a base station, the user requests to acquire a file; finally, it is judged whether the value of k exceeds the maximum number K of small base stations allowed in a cluster; if so, the value of k that minimizes the average download delay of the user is output; otherwise, k is updated and clustering is carried out again. The invention allocates the resources in each cluster with a self-organizing idea, and by utilizing mutual cooperation and self-organization among the small base stations it effectively improves the cache hit rate and reduces the user download delay, thereby meeting various service requirements.

Description

Caching method based on small base station self-organizing cooperation in ultra-dense network
Technical Field
The invention belongs to the technical field of wireless communication, and particularly relates to a caching method based on small-sized base station self-organizing cooperation in an ultra-dense network.
Background
Ultra-dense networking, i.e. the ultra-dense network (UDN) technology, greatly improves frequency reuse efficiency by densely deploying small base stations (SBSs) and is one of the key technologies for meeting the thousand-fold capacity growth requirement of 5G. A UDN usually contains a large number of low-power small base stations deployed at a density much higher than in current mobile network scenarios, and can therefore provide users with extremely high data transmission rates. Meanwhile, caching technology is introduced into the ultra-dense network. Network edge caching has attracted much attention in recent years; its main idea is to cache the contents frequently requested by users at the network edge, so that files are closer to the users, which reduces redundant data transmission, shortens the user download delay, improves the user experience and raises the spectrum utilization rate. Combining the UDN scenario with network edge caching can improve the throughput of the network, reduce the user download delay and relieve the backhaul pressure, thereby greatly improving network performance. However, considering the cost of ultra-densely deployed SBSs in a UDN, the cache capacity of a single SBS is very limited, so each SBS cannot cache all files that a user may need. Under such conditions, the caching mode of the SBSs needs to be designed by jointly considering multiple factors, such as file popularity, hit rate and user experience, as well as mutual cooperation and self-organization among the small base stations.
In the caching research for ultra-dense network scenarios, reference 1 aims to maximize cache utilization under the limitation of cache space. The optimization target is first converted into a backhaul-load minimization problem, and then a machine-learning algorithm is used to predict and cache the corresponding contents in the small base stations; the algorithm has low complexity and high accuracy. Reference 2 proposes a social-aware caching strategy for small base stations to improve network throughput and energy efficiency. The social behavior of the small base stations is first studied using social network theory, a few very important base stations are selected from those with high social tie factors, and resources are scheduled and allocated with the other small cells based on these base stations.
In ultra-dense scenarios, the prior art mainly optimizes the cached contents around a single base station and does not fully consider mutual cooperation and self-organization among small base stations. How to serve users efficiently and reduce their download delay when the cache capacity of a single SBS is very limited remains an open problem; taking these issues into account can further improve the cache hit rate and the service experience of users.
Reference documents:
[1] G. Shen, L. Pei, P. Zhiwen, L. Nan and Y. Xiaohu, "Machine learning based small cell cache strategy for ultra dense networks," 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, 2017, pp. 1-6.
[2] Y. Li, X. Zhang, J. Zhang, S. Wang and D. Wang, "Base Station Social-Aware Caching Strategy for 5G Ultra Dense Networks," 2017 IEEE Globecom Workshops (GC Wkshps), Singapore, 2017, pp. 1-6.
Disclosure of Invention
Aiming at the problems that the prior art does not comprehensively consider file popularity, hit rate and user experience, and does not fully consider mutual cooperation and self-organization among small base stations, the invention provides a caching method based on small base station self-organizing cooperation in an ultra-dense network, so as to reduce the download delay of users.
The ultra-dense heterogeneous network scenario comprises: a core network, a macro base station, S small base stations densely distributed in the coverage area of the macro base station, and U mobile users randomly distributed over the whole network scene; the macro base station is connected with the core network and with the small base stations through wireless links. The invention provides a caching method based on small base station self-organizing cooperation in an ultra-dense network, which comprises the following steps:
step 1, obtaining a similarity matrix of small base stations according to the load capacity and the positions of the small base stations in the ultra-dense network;
the similarity matrix records the similarity of any two different small base stations, namely a small base station S1And S2Degree of similarity of
Figure BDA0001808971970000021
Expressed as:
Figure BDA0001808971970000022
θ∈[0,1](ii) a Where, theta is a constant that is set,
Figure BDA0001808971970000023
is a base station S1And S2The degree of similarity of the positions of (a),
Figure BDA0001808971970000024
is a base station S1And S2The degree of load capacity variation of (a);
step 2, clustering all small base stations according to the similarity matrix of the small base stations and the number k of the small base stations in each cluster, and selecting the small base station with the best load capacity in each cluster as a cluster head; k is a positive integer;
step 3, caching files into the small base station and the macro base station according to a file caching strategy;
the file caching strategy is as follows: setting the same cache capacity of the small base station, and caching M complete files at most; sorting all files possibly requested by a user according to the popularity of the files; for one cluster, selecting kM files with the highest popularity and dividing each file into k segments to be respectively cached in the corresponding small base stations; continuously caching the remaining files with former popularity according to the file popularity in the macro base station;
step 4, after the user accesses the base station, the user starts to request to acquire the file, and at this time, three situations occur:
firstly, judging whether a file required by a user is cached in a small-sized base station, if so, a first condition occurs: the small base station caches files requested by a user; if not, judging whether the file required by the user is cached in the macro base station, if so, the second condition occurs: caching a file requested by a user in a macro base station; if neither of the above two cases is true, then a third case occurs: the file requested by the user is not cached in the network, and the user acquires the requested file through the core network;
calculating the transmission time delay of the user under the three conditions so as to obtain the average downloading time delay of the user, and then executing the step 5;
step 5, setting the number of small base stations in each cluster to be at most K, wherein K is a positive integer; judging whether the value of k exceeds K, and if so, selecting the value of k which makes the average download delay of the user minimum and outputting it; otherwise, continuing to update the value of k by increasing k by 1, and then continuing to execute step 2.
Compared with the prior art, the invention has the following obvious advantages:
(1) The method provides a cooperative caching strategy based on SBS clustering. The simulation results show that the method effectively improves the cache hit rate while reducing the user download delay, which confirms the feasibility and applicability of the strategy for meeting various service requirements in dense scenarios.
(2) The method exploits clustering and mutual cooperation of the small base stations and uses the idea of network self-optimization to improve the efficiency of resource management.
Drawings
FIG. 1 is a flow chart of the SBS self-organizing cooperative caching method according to the present invention;
FIG. 2 is a system model diagram of a collaborative caching method for SBS clustering according to the present invention;
FIG. 3 is a schematic diagram of a cooperative caching strategy of SBS clustering in the present invention;
FIG. 4 is a graph of cache hit rate versus number of small base stations in a cluster in accordance with the present invention;
FIG. 5 is a graph of the average user download delay versus the number of small base stations in a cluster in accordance with the present invention;
FIG. 6 is a graph of the average user download delay versus the total number of small base stations in the scene in accordance with the present invention;
FIG. 7 is a graph of the average user download delay versus the Zipf decay constant α in accordance with the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail below with reference to the accompanying drawings and examples.
The invention provides a caching method based on SBS self-organizing cooperation in an ultra-dense network, aiming at serving users efficiently and reducing their download delay when the cache capacity of a single SBS is very limited. The method clusters all SBSs based on their load capacity and position information, and selects a cluster head in each cluster to allocate the cluster resources with a self-organizing idea. The corresponding files are then fragmented and different fragments are cached in different SBSs; every base station caches the same amount of data, and the files are ordered by popularity. The optimization target is to minimize the average download delay of the users; finally, the number of small base stations in a cluster is varied and the optimal value is found by traversal.
As shown in fig. 1, the cooperative caching method based on SBS self-organization proposed by the present invention specifically includes steps 1 to 5, and the steps are sequentially described below.
Step 1, initializing parameters, and obtaining a similarity matrix of the small base stations according to the position information and the load capacity information of all the small base stations in the ultra-dense network. The initialized parameters include location information and load capacity of the base station.
As shown in fig. 2, the system model of the SBS clustering cooperative caching method of the present invention considers the caching strategy in an ultra-dense heterogeneous network scenario. The network comprises a core network and a macro base station, and the macro base station is connected with the core network and with the small base stations through wireless links. The S small base stations (SBSs) are densely distributed within the coverage of the macro base station, the U mobile users are randomly distributed over the whole network scene, and the method assumes that users are static while receiving a file. The small base stations in the network are denoted S1, S2, …, SS and the mobile users U1, U2, …, UU.
the key of small base station clustering is to select similarity characteristics between base stations, and although there are many similarity characteristics that can be selected, two important characteristics are position information and load capacity of the small base station respectively [ reference 3: samarakoon, M.Bennis, W.Saad, and M.Latva-aho, "Dynamic clustering and on/offset for wireless small cell networks," IEEE Transactions on Wireless communications, vol.15, No.3, pp.2164-2178, March 2016 ]. The position information reflects the capability of mutual cooperation among the base stations and determines the operability of the cooperation among the base stations. The load capacity causes the problem of mutual interference among base stations, simultaneously displays the willingness of the base stations to participate in cooperation, and determines whether the cooperation is necessary or not. Next, the similarity of the position information and the load capacity between the base stations will be calculated respectively.
On one hand, the similarity is calculated from the position information of the small base stations. For any two different small base stations S1 and S2 in the ultra-dense network, the corresponding Gaussian similarity, i.e. the position similarity w_X(S1, S2), is calculated as:
w_X(S1, S2) = exp( −‖X(S1) − X(S2)‖² / (2σ_X²) ) if ‖X(S1) − X(S2)‖ ≤ r, and w_X(S1, S2) = 0 otherwise
where X(S1) and X(S2) denote the coordinates of the two small base stations in Euclidean space, σ_X is a set constant, and r represents the maximum distance at which two small base stations are regarded as neighbors. It can be seen that the smaller the distance between two small base stations, the higher their position similarity and the easier it is for them to cooperate. The position similarities of all S small base stations to each other thus form an S × S matrix.
On the other hand, the similarity is calculated from the load capacity of the small base stations. The larger the difference between the load capacities of small base stations, the more they need to form a cluster so that tasks can be offloaded between them within the cluster, so the method calculates the load difference degree between small base stations. For ease of calculation, the method assumes that every user requests files equally often, so the load capacity of a small base station reduces to the maximum number of users it can connect. The load capacity difference degree w_N(S1, S2) between small base stations S1 and S2 is then calculated from the difference between their maximum numbers of connectable users, N_max(S1) and N_max(S2), where σ_N is a set constant that adjusts the range of w_N(S1, S2) to facilitate numerical calculation; the magnitude of σ_N determines the size of the difference value range. Like the position similarities, the values w_N(S1, S2) form an S × S matrix, and the larger an entry of this matrix, the more easily the corresponding two small base stations are clustered.
Then, according to the position similarity and the load capacity difference degree between small base stations S1 and S2, their clustering similarity w(S1, S2) is finally determined as:
w(S1, S2) = θ·w_X(S1, S2) + (1 − θ)·w_N(S1, S2)
where θ is a set constant used to adjust the relative influence of the factors that determine clustering; the magnitude of θ decides how strongly the position information similarity and the load capacity difference degree each affect the overall similarity. Computing the similarity of every pair of different small base stations yields a similarity matrix W of size S × S, and clustering is carried out in the next step based on the SBS load capacity similarity and position information similarity combined in this matrix.
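As a concrete illustration of step 1, the following sketch builds the S × S similarity matrix in Python. The Gaussian position kernel with neighbor radius r and the weighted combination through θ follow the description above; the concrete form used here for the load capacity difference degree w_N (one minus a Gaussian of the load gap, so that a larger gap gives a larger value bounded in [0, 1)) is an assumption, since the text only states that w_N grows with the difference and is scaled by σ_N.

```python
import numpy as np

def similarity_matrix(coords, n_max, sigma_x, sigma_n, r, theta):
    """Build the S x S clustering similarity matrix W of step 1 (illustrative sketch).

    coords : (S, 2) array of small-base-station coordinates in Euclidean space
    n_max  : (S,) array, maximum number of connectable users per SBS
    sigma_x, sigma_n : set constants scaling the two similarity terms
    r      : maximum distance at which two SBSs count as neighbors
    theta  : weight in [0, 1] between position similarity and load difference
    """
    S = len(coords)
    W = np.zeros((S, S))
    for i in range(S):
        for j in range(S):
            if i == j:
                continue
            d = np.linalg.norm(coords[i] - coords[j])
            # Gaussian position similarity, zero outside the neighbor radius r
            w_x = np.exp(-d**2 / (2 * sigma_x**2)) if d <= r else 0.0
            # Load-capacity difference degree: grows with the load gap
            # (1 - Gaussian kernel is an assumed concrete form, bounded in [0, 1))
            w_n = 1.0 - np.exp(-(n_max[i] - n_max[j])**2 / (2 * sigma_n**2))
            W[i, j] = theta * w_x + (1 - theta) * w_n
    return W
```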
Step 2, the small base stations are clustered according to the calculated similarity matrix and the small base station clustering strategy, with the number of small base stations in each cluster set to k, so that all small base stations in the network scene are divided into clusters of equal size. In each of the resulting clusters, the small base station with the best load capacity is selected as the cluster head.
After the number k of small base stations in each cluster is determined, one small base station in the scene is randomly selected as a cluster center point, and according to the similarity matrix W the k − 1 small base stations with the highest similarity to it are selected to form a cluster. Then, among the base stations that have not yet been clustered, the small base station closest to the current cluster center point is selected as the center point of the next cluster, and again the k − 1 small base stations with the highest similarity to this center point are selected according to the similarity matrix to form a cluster. The clustering operation is repeated in this manner. If at some point fewer than k − 1 qualifying small base stations remain, several small base stations with the highest similarity are selected from the already clustered ones to bring the new cluster up to k base stations. This is repeated until all small base stations have been clustered. The cluster head of each cluster is the base station with the best load capacity; it allocates the resources in the cluster and coordinates the assignment of tasks in the cluster with a self-optimization idea.
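The clustering procedure of step 2 can be sketched as follows, under the assumption that ties and the borrowing of already clustered stations are resolved greedily as described; the helper names are illustrative, not part of the patent.

```python
import numpy as np

def cluster_sbs(W, coords, n_max, k, rng=np.random.default_rng()):
    """Greedy similarity-based clustering of step 2 (illustrative sketch)."""
    S = W.shape[0]
    unclustered = set(range(S))
    clusters, heads = [], []
    center = int(rng.choice(list(unclustered)))
    while unclustered:
        unclustered.discard(center)
        # k-1 stations most similar to the center, preferring unclustered ones
        candidates = sorted(unclustered, key=lambda s: W[center, s], reverse=True)
        members = [center] + candidates[:k - 1]
        if len(members) < k:  # borrow the most similar already-clustered stations
            clustered = [s for s in range(S) if s not in unclustered and s not in members]
            clustered.sort(key=lambda s: W[center, s], reverse=True)
            members += clustered[:k - len(members)]
        unclustered -= set(members)
        clusters.append(members)
        # cluster head = member with the best (largest) load capacity
        heads.append(max(members, key=lambda s: n_max[s]))
        if unclustered:
            # next center: unclustered SBS closest to the current center
            center = min(unclustered,
                         key=lambda s: np.linalg.norm(coords[s] - coords[center]))
    return clusters, heads
```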
Step 3, the files are cached in the corresponding small base stations according to the file caching strategy. Because the cache capacity of a small base station is very limited, in order to improve the cache hit rate, different small base stations in the same cluster cache different fragments of the most popular files, while the macro base station caches, as complete files, popular files that are not cached in the small base stations.
In the file caching strategy, the small base station counts the number I of all files a user may request and sorts all files by popularity, a smaller index meaning a more popular file. Then, according to the strategy, the most popular files are fragmented and the corresponding fragments are cached in the small base stations, while the macro base station caches some of the most popular remaining files as complete files; these are cached in order of popularity, the most popular first, until its cache space is used up.
The method of the invention assumes that all small base stations have the same cache capacity and each caches at most M complete files, that the macro base station can cache N complete files, and that the users in the network require I files in total, with M < N < I. Sorting all I files in the network by popularity gives the popularity file library F = {f_1, f_2, …, f_I}, where a smaller subscript indicates a more popular file. Since there are k small base stations in a cluster, each cluster can cache at most kM complete files. As shown in fig. 3, under the cooperative caching strategy of SBS clustering, the kM most popular files are selected, each file is divided evenly into k fragments, and the fragments are cached in different small base stations. The files with popularity ranks between kM + 1 and kM + N are cached completely in the macro base station.
According to this caching strategy, the files cached by the k small base stations in each cluster can be modeled mathematically. Denoting by f_{i,k} the k-th fragment of the file with popularity rank i, the set C_k of all fragments cached by the k-th small base station of a cluster is expressed as:
C_k = {f_{1,k}, f_{2,k}, …, f_{kM,k}}
where f_{kM,k} is the k-th fragment of the file with popularity rank kM.
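A minimal sketch of this placement, assuming files are identified by their popularity rank (1 = most popular) and fragments by a (rank, fragment index) pair:

```python
def build_cache_placement(k, M, N):
    """Step 3 placement: which fragments go to each SBS of a cluster,
    and which complete files go to the macro base station."""
    # SBS j (1..k) stores the j-th fragment of each of the kM most popular files
    sbs_cache = {j: [(rank, j) for rank in range(1, k * M + 1)]
                 for j in range(1, k + 1)}
    # the macro base station stores the next N most popular files, complete
    mbs_cache = list(range(k * M + 1, k * M + N + 1))
    return sbs_cache, mbs_cache

# example: clusters of k = 4 SBSs, M = 10 files per SBS, N = 100 files at the MBS
sbs_cache, mbs_cache = build_cache_placement(k=4, M=10, N=100)
```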
Step 4, after a user accesses a base station, the user starts to request a file. The transmission strategy depends on the caching strategy and is divided into three cases; the transmission delay in each case is calculated, from which the average download delay of the user is obtained, and then step 5 is executed.
When a user wants to obtain a certain file from the network, the file transmission problem arises and one of three cases occurs; the user obtains the desired file in one of three ways.
The first case is that the file requested by the user is cached in the small base stations of the cluster and is transmitted by the small base stations.
The user first sends a request for the file to the small base station it is connected to, and that base station forwards the request information to the cluster head. The cluster head judges, according to the popularity of the requested file, whether the file exists in the cluster. If it does, the cluster head checks the current load of every small base station in the cluster to decide which small base stations will jointly serve the user and allocates the corresponding resources; if some small base stations caching the required file fragments cannot communicate with the user directly, the cluster head also decides which base station relays the transmission. In this way, file transmission between the small base stations and the user is realized through a self-organizing idea.
The second case is that the file requested by the user is cached in the macro base station, and the downloading time delay is calculated according to the MBS transmission. If the cluster head finds that the file required by the user is not in the cluster, the cluster head asks the macro base station for the file. If the macro base station caches the corresponding file, the file is transmitted to the cluster head through the macro base station, and then the file is transmitted to the user through the cluster head.
The third situation is that the file requested by the user is not cached in the network, the file needs to be transmitted through the core network, and the downloading time delay is calculated according to the transmission of the core network. If the macro base station still has no file, the macro base station continues to send a request to the core network, the core network transmits the file required by the user to the macro base station through the wireless link, and the macro base station transmits the file to the cluster head through the wireless link and then transmits the file to the user through the cluster head. In this case, the file download delay becomes large because the transmission path becomes long.
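The three-way decision of step 4 can be summarized as follows, with files identified by popularity rank; the thresholds kM and kM + N follow the caching strategy of step 3, and the function name is illustrative.

```python
def serve_request(rank, k, M, N):
    """Decide where a requested file (identified by popularity rank) is fetched from."""
    if rank <= k * M:
        return "SBS"      # case 1: fragments are cached inside the cluster
    elif rank <= k * M + N:
        return "MBS"      # case 2: complete file cached at the macro base station
    else:
        return "CORE"     # case 3: fetched from the core network via the MBS
```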
Each file has a different popularity, so the probability that a user requests a given file from the small base stations differs from file to file. The method assumes that the probabilities of the different files being requested obey the Zipf law [reference 4: L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, "Web caching and Zipf-like distributions: Evidence and implications," in IEEE INFOCOM '99, Proceedings of the Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies]. Denoting by P_i the probability that a user requests the file with popularity rank i from the small base stations, the Zipf law gives:
P_i = i^(−α) / Σ_{j=1}^{I} j^(−α)
where α is the decay constant, j indexes the files and i is the popularity rank. Once the request probability of every file is known, the theoretical cache hit rate of the small base stations for a user is:
P_hit = Σ_{i=1}^{kM} P_i
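A short sketch of the Zipf request model and the resulting theoretical hit rate; the closed form P_hit = Σ_{i=1}^{kM} P_i used here is the reconstruction above, inferred from the caching strategy of step 3.

```python
import numpy as np

def zipf_probabilities(I, alpha):
    """Request probability P_i for files ranked i = 1..I under the Zipf law."""
    ranks = np.arange(1, I + 1, dtype=float)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

def cluster_hit_rate(I, alpha, k, M):
    """Probability that a request falls on one of the kM files cached in a cluster."""
    P = zipf_probabilities(I, alpha)
    return P[:k * M].sum()

# with the simulation parameters I = 200, alpha = 0.8, M = 10:
print(cluster_hit_rate(200, 0.8, k=4, M=10))  # roughly 0.6, cf. Fig. 4
```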
For convenience of calculation, the method assumes that when a small base station sends a file to a user it does not share spectrum resources with the macro base station, and that different small base stations in the same cluster are allocated mutually orthogonal frequency bands for file transmission, so that there is no mutual interference between small base stations. The available bandwidths of a small base station and of the macro base station are denoted ω_S and ω_M respectively; h_{s,u} is the channel gain between the s-th small base station and the u-th user, and h_{M,s} is the channel gain between the macro base station and the s-th small base station. They are calculated as
h_{s,u} = L_0 · d_{s,u}^(−β) and h_{M,s} = L_0 · d_{M,s}^(−β)
where L_0 is a constant gain factor, β is the path loss exponent, d_{s,u} is the distance between the s-th small base station and the u-th user, and d_{M,s} is the distance between the macro base station and the s-th small base station.
The signal-to-noise ratio SNR_{s,u} between the s-th small base station and the u-th user, and SNR_{M,s} between the macro base station and the s-th small base station, are calculated as:
SNR_{s,u} = P_S · h_{s,u} / σ²
SNR_{M,s} = P_M · h_{M,s} / σ²
where σ² is the noise power in the network scenario, P_M is the transmit power of the macro base station when communicating with a single small base station, and P_S is the transmit power of a small base station when communicating with a single user.
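Under this link model, the achievable rate of each link follows from the Shannon formula used in the delay expressions below; a small helper sketch (the path loss exponent β = 3 is an assumed example value, as the embodiment does not specify it):

```python
import numpy as np

def channel_gain(d, L0_db=-30.0, beta=3.0):
    """Distance-based gain h = L0 * d^(-beta); beta = 3 is an assumed example value."""
    L0 = 10 ** (L0_db / 10)
    return L0 * d ** (-beta)

def shannon_rate(bandwidth_hz, tx_power_w, d, noise_dbm=-100.0):
    """Achievable rate (bit/s) of one link given bandwidth, power and distance."""
    noise_w = 10 ** (noise_dbm / 10) / 1000.0
    snr = tx_power_w * channel_gain(d) / noise_w
    return bandwidth_hz * np.log2(1 + snr)

# e.g. an SBS link: 1 MHz bandwidth, 100 mW transmit power, 20 m away
rate_sbs = shannon_rate(1e6, 0.1, 20.0)
```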
For the first case, the cluster head allocates the resources in the cluster by using the self-optimization idea, and controls all small base stations directly connected with the user in the cluster to cooperatively transmit the file together, thereby shortening the downloading time delay. However, there may be a case that a small base station in a cluster needs to transmit a file for a user through a relay, so that the download delay is also related to the number of base stations in the cluster directly connected to the user.
In the first case, based on the Shannon formula, the transmission delay D_SBS(u) of the u-th user when obtaining the file through the small base stations is:
D_SBS(u) = max_{s = 1, …, T} [ Num_s · (F / k) / ( ω_S · log2(1 + SNR_{s,u}) ) ]
where Num_s is the number of file fragments that the s-th small base station has to transmit for the user, F is the size of the required file (so that each fragment has size F/k), and T is the number of small base stations communicating directly with the user; the download is complete when the slowest of these T base stations has delivered its fragments.
In the second case, the transmission delay D_MBS(u) of the u-th user when obtaining the file through the macro base station comprises two parts, the transmission of the file from the macro base station to the cluster head and the transmission from the cluster head to the user:
D_MBS(u) = F / ( ω_M · log2(1 + SNR_{M,s}) ) + F / ( ω_S · log2(1 + SNR_{s,u}) )
where s here denotes the cluster head of the user's cluster.
in the third case, the u-th user acquires the file through the core network. The u-th user passes through the coreTransmission delay when the heart network obtains the file
Figure BDA0001808971970000077
In addition to comprising
Figure BDA0001808971970000078
Besides, the communication time between the core network and the macro base station is also included, and in order to simplify calculation, the method of the invention uses a constant Const to represent the part of time delay. Thus, it is possible to provide
Figure BDA0001808971970000079
Is calculated as follows:
Figure BDA00018089719700000710
the probability that the u-th user acquires files from the small base station, the macro base station and the core network is respectively expressed as
Figure BDA00018089719700000711
The following were used:
Figure BDA00018089719700000712
so the calculated average download Delay of the u-th useruThe following were used:
Figure BDA00018089719700000713
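Putting the link model and the caching probabilities together, the following sketch computes the per-user average delay. The case-1 expression, taken here as the maximum over the directly connected SBSs, follows the reconstruction above and is an assumption; the serving-link rates and Const are illustrative inputs.

```python
def average_delay(P, k, M, N, F, rate_sbs_links, rate_mbs_link, rate_head_link, const=1.0):
    """Expected download delay of one user over the three cases of step 4.

    P              : sequence of Zipf probabilities of the I files (see zipf_probabilities)
    rate_sbs_links : list of (num_fragments, shannon_rate) pairs for the T SBSs
                     directly serving the user
    rate_mbs_link  : Shannon rate of the MBS -> cluster-head link
    rate_head_link : Shannon rate of the cluster-head -> user link
    const          : fixed core-network-to-MBS delay (Const)
    """
    # case 1: cooperative transmission, finished when the slowest SBS finishes
    d_sbs = max(num * (F / k) / rate for num, rate in rate_sbs_links)
    # case 2: MBS -> cluster head, then cluster head -> user
    d_mbs = F / rate_mbs_link + F / rate_head_link
    # case 3: core network adds the fixed delay Const on top of case 2
    d_core = d_mbs + const

    p1 = sum(P[:k * M])
    p2 = sum(P[k * M:k * M + N])
    p3 = sum(P[k * M + N:])
    return p1 * d_sbs + p2 * d_mbs + p3 * d_core
```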
Step 5. Given that at most K small base stations may exist in each cluster, the optimization target of the method is to minimize the average download delay of the users while a certain cache hit rate is satisfied. The optimization target is formulated as:
min_k (1/U)·Σ_{u=1}^{U} Delay_u
subject to: k ∈ {1, 2, …, K}
and (3) changing the size of k, adjusting the number of the small and medium-sized base stations in each cluster after clustering, executing the step 2 according to the value of k after changing k each time, clustering again, and continuing to execute the steps 3 and 4 to obtain the cache hit rate of the corresponding small base station and the average downloading time delay of the user. And obtaining the k value which enables the average download time delay of the user to be minimum by traversing the value of k.
In step 5, firstly judging whether the value of K exceeds K, if so, outputting the K value which enables the average download time delay of the user to be minimum, and ending the method; if not, continuing to change the k value and executing the step two. The invention traverses the value of K from small to large until K is obtained.
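The outer loop of the method then reduces to a simple traversal of k; in the sketch below, evaluate(k) stands for one full round of steps 2 to 4 (clustering, cache placement and delay evaluation, e.g. composed from the earlier sketches) and is an illustrative name, not part of the patent.

```python
def best_cluster_size(K, evaluate):
    """Traverse k = 1..K and keep the k with the smallest average user delay.

    evaluate(k) is assumed to run steps 2-4 for a given k and return the
    resulting average download delay over all users.
    """
    best_k, best_delay = None, float("inf")
    for k in range(1, K + 1):
        delay = evaluate(k)
        if delay < best_delay:
            best_k, best_delay = k, delay
    return best_k, best_delay
```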
The method of the present invention is verified by the following simulation; the comparison algorithm is the "most popular" file caching scheme [reference 5: Wu, X. Li, H. Ji, and H. Zhang, "energy-efficiency sheet manager for udn with edge control," in 2017 IEEE Globecom Workshops (GC Wkshps), Dec 2017, pp. 1-5]. In the "most popular" file caching scheme, all small base stations cache the same M complete files with the highest popularity.
The simulation verification is as follows:
if not specifically stated, the number of file types possibly requested by the user is totally I200, the size of the requested file is F10 Mbits, the attenuation constant α in the Zipf's law is set to α 0.8, the storage capacity of the small base station is M10 files, the storage capacity of the macro base station is N100 files, the simulation scene size is 200M and the macro base station is placed at the center, the number of small and medium base stations in the network is 50 and the coverage radius is r 30M, and the available bandwidth ω of sub-channels of the macro base station and the small base stations is ω 30MM=ωS1MHz, the transmitting power is PS=100mW、PM20W, noise power σ2-100dBm, gain factor L0-30 dB. In order to simplify the calculation, the method sets the transmission delay from the core network to the macro base station to be a fixed value C-1 s.
Fig. 4 shows the relationship between the average cache hit rate and the number k of small base stations in a cluster for the method of the present invention; the horizontal axis is the number of base stations in a cluster, the vertical axis is the average cache hit rate, and the different curves correspond to different total numbers of files. For any curve in fig. 4, the cache hit rate of the small base stations increases with the number of small base stations in a cluster, and the increase becomes slower and slower. This is because the more small base stations there are in a cluster, the more distinct files are cached in the cluster and the higher the probability that the file requested by a user can be obtained directly from a small base station, i.e. the higher the cache hit rate. According to the Zipf law, files with lower popularity are requested with smaller probability, and the caching strategy of the method caches files in order of popularity rank, so the files cached later contribute little to the cache hit rate, which explains the flattening trend. Meanwhile, changing the total number I of files requested in the network also affects the cache hit rate of the small base stations: the more files there are in the network, the smaller the probability of any single file being requested and the smaller the probability that the user requests a file cached in the cluster. Comparing the curves for I = 200, I = 240, I = 280 and I = 320 in fig. 4, the cache hit rate of the small base stations is therefore higher when I is smaller.
Judging from the cache hit rate alone, it would seem that the more small base stations in a cluster the better. However, as the number of small base stations in a cluster increases, both the resource allocation within the cluster and the computing burden on the cluster head become more demanding, so the average download delay of the users may increase. A trade-off is therefore made next between the cache hit rate and the average user download delay.
Fig. 5 shows the relationship between the average user download delay and the number k of small base stations in a cluster for the method of the present invention. Note that when the number of small base stations in a cluster is 1, no file needs to be fragmented and all files cached at the base stations are complete files, which coincides with the "most popular" caching scheme. As can be seen from fig. 5, as the number of small base stations in a cluster increases from 1 to 6, the average user download delay first decreases and then increases. It decreases at first because users obtain files faster from the small base stations than from the macro base station or the core network. It increases again later because, as the number of base stations in a cluster grows, cooperation among them becomes complicated, more relays are likely to be needed, and the spectrum resources in the cluster are fixed, so obtaining a complete file from the small base stations becomes slower. In addition, with a total of 50 small base stations, the "most popular" caching scheme can be compared in fig. 5 with the proposed SBS-clustering cooperative caching mechanism: the point at k = 1 represents the "most popular" scheme and k = 4 the proposed mechanism, and there is essentially no difference between the average user download delays of the two, while the cache hit rate rises from 0.35 to 0.6, as can be seen from the curve for I = 200 in fig. 4, whose parameter settings are consistent with fig. 5. This proves that the caching mechanism of the method can indeed improve the cache hit rate in the UDN scenario while greatly relieving the pressure on the backhaul network.
Fig. 6 shows the relationship between the average user download delay and the total number of small base stations in the scene. In this simulation, the number of small base stations in a cluster is 4 and the total number of small base stations in the network is increased from 50 to 100 in steps of 10. As can be seen from fig. 6, under the SBS-clustering cooperative caching mechanism proposed by the invention the average user download delay decreases as the total number of small base stations in the scene increases, while under the "most popular" file caching scheme it remains essentially unchanged. This is because, to simplify the calculation, the simulation assumes that the distance between a user and the cluster-head base station is fixed; changing the total number of small base stations therefore does not affect the distance between a user and its base station under the "most popular" scheme, so the average download delay hardly changes. Under the SBS-clustering cooperative caching mechanism, however, the distances between a user and the cluster members other than the cluster head decrease as the total number of small base stations in the scene grows, and the average user download delay decreases accordingly.
Fig. 7 shows the relationship between the average user download delay and the Zipf decay constant α. In this simulation, the number of small base stations in a cluster is 4, the number of small base stations in the network is 50, and the Zipf decay constant α increases from 0.6 to 2.0. In fig. 7, as α increases the average user download delay decreases, and the decrease gradually levels off. This is because a large α means that files with high popularity are requested with greater probability, and such files are usually cached in the small base stations; since the delay of obtaining a file from a small base station is generally lower than that of obtaining it from the macro base station or the core network, the average user download delay decreases accordingly. In addition, it should be noted that under the proposed strategy all small base stations in a cluster share the spectrum resources allocated within the cluster, whereas under the "most popular" caching scheme each small base station caching a file uses its own spectrum resources, so the single-link download rate of the "most popular" scheme can be higher than that of a single SBS under the SBS-clustering cooperative caching strategy.

Claims (3)

1. A caching method based on small base station self-organizing cooperation in an ultra-dense network is characterized by comprising the following steps:
step 1, obtaining a similarity matrix of small base stations according to the load capacity and the positions of the small base stations in the ultra-dense network;
the similarity matrix records the similarity of any two different small base stations, namely a small base station S1And S2Degree of similarity of
Figure FDA0002407653250000011
Expressed as:
Figure FDA0002407653250000012
θ∈[0,1](ii) a Where, theta is a constant that is set,
Figure FDA0002407653250000013
is a base station S1And S2The degree of similarity of the positions of (a),
Figure FDA0002407653250000014
is a base station S1And S2The degree of load capacity variation of (a);
small base station S1And S2Position similarity of
Figure FDA0002407653250000015
The calculation formula of (a) is as follows:
Figure FDA0002407653250000016
wherein the content of the first and second substances,
Figure FDA0002407653250000017
respectively representing the coordinates of the two small base stations in the Euclidean space; sigmaXIs a set constant; r represents the maximum distance between the small base stations which are neighbors;
small base station S1And S2Degree of load capacity difference
Figure FDA0002407653250000018
The calculation formula of (a) is as follows:
Figure FDA0002407653250000019
wherein σNIs a constant that is set, and the setting value,
Figure FDA00024076532500000110
respectively representing small base stations S1And S2The maximum number of connectable users;
step 2, clustering all small base stations according to the similarity matrix of the small base stations and the number k of the small base stations in each cluster, and selecting the small base station with the best load capacity in each cluster as a cluster head; k is a positive integer;
step 3, caching files into the small base station and the macro base station according to a file caching strategy;
the file caching strategy is as follows: setting the same cache capacity of the small base station, and caching M complete files at most; sorting all files possibly requested by a user according to the popularity of the files; for one cluster, selecting kM files with the highest popularity and dividing each file into k segments to be respectively cached in the corresponding small base stations; continuously caching the remaining files with former popularity according to the file popularity in the macro base station;
step 4, after the user accesses the base station, the user starts to request to acquire the file, and at this time, three situations occur:
firstly, judging whether a file required by a user is cached in a small-sized base station, if so, a first condition occurs: the small base station caches files requested by a user; if not, judging whether the file required by the user is cached in the macro base station, if so, the second condition occurs: caching a file requested by a user in a macro base station; if neither of the above two cases is true, then a third case occurs: the file requested by the user is not cached in the network, and the user acquires the requested file through the core network;
calculating the transmission time delay of the user under the three conditions so as to obtain the average downloading time delay of the user, and then executing the step 5;
in step 4, the average download delay of the user is obtained as follows:
for the u-th user, the transmission delays D_SBS(u), D_MBS(u) and D_Core(u) of the u-th user in the first case, the second case and the third case are obtained by calculation; the average download delay Delay_u of the u-th user is then calculated as:
Delay_u = P1(u)·D_SBS(u) + P2(u)·D_MBS(u) + P3(u)·D_Core(u)
wherein P1(u), P2(u) and P3(u) respectively represent the probabilities that the u-th user obtains the file from the small base stations, the macro base station and the core network;
step 5, setting the number of the small base stations in each cluster to be at most K, wherein K is a positive integer; judging whether the value of k exceeds K, wherein k ∈ {1, 2, …, K}; if so, selecting the value of k which makes the average download delay of the user minimum and outputting it; otherwise, continuing to update the value of k by increasing k by 1, and then continuing to execute step 2.
2. The caching method based on small base station self-organizing cooperation in the ultra-dense network according to claim 1, wherein in the step 2, after the number k of small base stations in each cluster is determined, one small base station in a scene is randomly selected as a cluster center point, and then according to a similarity matrix of the small base stations, k-1 small base stations with the highest similarity with the cluster center point are selected to form a cluster; then selecting a small base station closest to the current cluster center point from the small base stations which are not clustered as the center point of the next group of clusters, and continuously selecting the small base stations from the new cluster center point according to the similarity matrix of the small base stations to form a cluster; repeating clustering operation until all the small base stations finish clustering, and enabling the cluster head in each cluster to be served by the small base station with the best load capacity; when small base stations in a cluster are selected according to the similarity matrix of the small base stations, if less than k-1 base stations meeting the conditions are found from the small base stations which are not clustered, the base stations with higher similarity are selected from the small base stations which are clustered, and the base stations in the new cluster are supplemented to k.
3. The caching method based on small base station self-organizing cooperation in the ultra-dense network according to claim 1, wherein in step 4 the transmission delays of the user in the three cases are respectively as follows:
in the first case, the transmission delay D_SBS(u) of the u-th user when obtaining the file through the small base stations is:
D_SBS(u) = max_{s = 1, …, T} [ Num_s · (F / k) / ( ω_S · log2(1 + SNR_{s,u}) ) ]
wherein ω_S represents the available bandwidth of a small base station, Num_s represents the number of file fragments that the s-th small base station needs to transmit for the user, F is the size of the file required by the user, T represents the number of small base stations communicating directly with the user, SNR_{s,u} represents the signal-to-noise ratio between the s-th small base station and the u-th user, and u is a positive integer;
in the second case, the transmission delay D_MBS(u) of the u-th user when obtaining the file through the macro base station is:
D_MBS(u) = F / ( ω_M · log2(1 + SNR_{M,s}) ) + F / ( ω_S · log2(1 + SNR_{s,u}) )
wherein ω_M represents the available bandwidth of the macro base station, SNR_{M,s} represents the signal-to-noise ratio between the macro base station and the s-th small base station, and s here denotes the cluster head;
in the third case, the transmission delay D_Core(u) of the u-th user when obtaining the file through the core network is:
D_Core(u) = D_MBS(u) + Const
where Const denotes a set constant.
CN201811110511.3A 2018-09-21 2018-09-21 Caching method based on small base station self-organizing cooperation in ultra-dense network Active CN109194763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811110511.3A CN109194763B (en) 2018-09-21 2018-09-21 Caching method based on small base station self-organizing cooperation in ultra-dense network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811110511.3A CN109194763B (en) 2018-09-21 2018-09-21 Caching method based on small base station self-organizing cooperation in ultra-dense network

Publications (2)

Publication Number Publication Date
CN109194763A CN109194763A (en) 2019-01-11
CN109194763B true CN109194763B (en) 2020-05-26

Family

ID=64909291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811110511.3A Active CN109194763B (en) 2018-09-21 2018-09-21 Caching method based on small base station self-organizing cooperation in ultra-dense network

Country Status (1)

Country Link
CN (1) CN109194763B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109600780B (en) * 2019-02-18 2021-10-29 南京邮电大学 File copy caching method based on base station clustering
CN110022579A (en) * 2019-04-23 2019-07-16 重庆邮电大学 Content caching management method based on base station collaboration
CN110248206B (en) * 2019-07-29 2020-08-28 北京邮电大学 Resource allocation method and device for edge network system and electronic equipment
CN110611698A (en) * 2019-08-07 2019-12-24 哈尔滨工业大学(深圳) Flexible cooperative transmission method and system based on random edge cache and realistic conditions
CN112073275B (en) * 2020-09-08 2022-06-21 广西民族大学 Content distribution method and device for ultra-dense network UDN
CN112601256B (en) * 2020-12-07 2022-07-15 广西师范大学 MEC-SBS clustering-based load scheduling method in ultra-dense network
CN112995979B (en) * 2021-03-04 2022-01-25 中国科学院计算技术研究所 Wireless network cache recommendation method for QoE (quality of experience) requirements of user
CN114567898B (en) * 2022-03-09 2024-01-02 大连理工大学 Clustering collaborative caching method based on federal learning under ultra-dense network
CN117320112B (en) * 2023-10-26 2024-05-03 陕西思极科技有限公司 Dual-mode communication network energy consumption balancing method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2426401T3 (en) * 2010-09-22 2013-10-23 Deutsche Telekom Ag Data and coordinated multipoint transmission signaling, CoMP on the X2 interface using an additional VLAN identifier
US8862814B2 (en) * 2011-08-10 2014-10-14 International Business Machines Corporation Video object placement for cooperative caching
CN102695227B (en) * 2012-06-04 2015-05-27 中国科学技术大学 Method for cooperatively transmitting data by home enhanced Node B (HeNB) and HeNB
CN105939388B (en) * 2016-06-28 2019-03-19 华为技术有限公司 A kind of method and content controller of transmission service content
CN106792995B (en) * 2016-12-27 2020-01-10 北京邮电大学 User access method for guaranteeing low-delay content transmission in 5G network
CN107493328B (en) * 2017-08-14 2019-10-11 武汉大学 A kind of Cooperative caching method based on Fusion Features
CN107548102B (en) * 2017-08-16 2019-10-08 北京邮电大学 The node B cache method of user's time delay is minimized in a kind of edge cache network
CN108322352B (en) * 2018-03-19 2021-01-08 北京工业大学 Honeycomb heterogeneous caching method based on inter-group cooperation

Also Published As

Publication number Publication date
CN109194763A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109194763B (en) Caching method based on small base station self-organizing cooperation in ultra-dense network
CN109413724B (en) MEC-based task unloading and resource allocation scheme
CN111447619B (en) Joint task unloading and resource allocation method in mobile edge computing network
CN112616189B (en) Static and dynamic combined millimeter wave beam resource allocation and optimization method
CN112492626B (en) Method for unloading computing task of mobile user
CN111132191B (en) Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
CN111586720B (en) Task unloading and resource allocation combined optimization method in multi-cell scene
CN108834080B (en) Distributed cache and user association method based on multicast technology in heterogeneous network
CN110493757B (en) Mobile edge computing unloading method for reducing system energy consumption under single server
CN108093435B (en) Cellular downlink network energy efficiency optimization system and method based on cached popular content
CN110290507B (en) Caching strategy and spectrum allocation method of D2D communication auxiliary edge caching system
CN108600998B (en) Cache optimization decision method for ultra-density cellular and D2D heterogeneous converged network
CN110138836B (en) Online cooperative caching method based on optimized energy efficiency
CN111800812B (en) Design method of user access scheme applied to mobile edge computing network of non-orthogonal multiple access
CN112437156B (en) Distributed cooperative caching method based on MEC-D2D
CN106792995B (en) User access method for guaranteeing low-delay content transmission in 5G network
CN108449149B (en) Energy acquisition small base station resource allocation method based on matching game
Khan et al. On the application of agglomerative hierarchical clustering for cache-assisted D2D networks
CN110602722A (en) Design method for joint content pushing and transmission based on NOMA
CN109068356A (en) A kind of wireless cache allocation method in cognitive radio networks
CN113873658B (en) Method for allocating beam hopping resources by taking user service weight gain as objective function
CN108965034B (en) Method for associating user to network under ultra-dense deployment of small cell base station
CN111479312B (en) Heterogeneous cellular network content caching and base station dormancy combined optimization method
CN108668288B (en) Method for optimizing small base station positions in wireless cache network
Li et al. Joint access point selection and resource allocation in MEC-assisted network: A reinforcement learning based approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant