CN111328156A - Access network caching method, device, base station and medium - Google Patents

Access network caching method, device, base station and medium

Info

Publication number
CN111328156A
Authority
CN
China
Prior art keywords
content data
base station
network resource
cache
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811542058.3A
Other languages
Chinese (zh)
Other versions
CN111328156B (en)
Inventor
杨轩
潘媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Sichuan Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Sichuan Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Sichuan Co Ltd
Priority claimed from CN201811542058.3A
Publication of CN111328156A
Application granted
Publication of CN111328156B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W88/00: Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/08: Access point devices
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5681: Pre-fetching or pre-delivering data based on network characteristics
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

An embodiment of the invention provides an access network caching method, apparatus, base station, and medium. The method comprises the following steps: acquiring first content data to be cached from a target base station, and selecting a set of second content data to be deleted from the data cached by the serving base station; predicting the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data; and determining, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data. The method improves the cache hit rate while accounting for network access pressure, improving the overall performance of the network.

Description

Access network caching method, device, base station and medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an access network caching method, apparatus, base station, and medium.
Background
In 4G networks, user consumption of service content such as video has grown sharply, and access network caching technology has become increasingly important. Access network caching uses the cache space of base stations, but because the cache space of a single base station is limited, once that space is occupied, a solution is needed for how to use the cache space more effectively and which strategy to adopt when replacing cached content, so as to improve the content hit rate.
At present there are two main access network caching technologies: classic cache replacement and centralized cooperative caching.
Typical classic cache replacement algorithms include Least Recently Used (LRU) and Least Frequently Used (LFU).
The LRU algorithm uses the access history of cached content to discard the item that has gone unrequested the longest and places newly cached content at the head of the queue. Cached content is ordered first-in first-out; when an item is requested, i.e., hit in the cache, it is advanced to the head of the queue and the queue is reordered. If the queue is full, the item unrequested for the longest time (the item at the tail of the queue) is discarded, and the newly cached item is placed at the head.
The LFU algorithm uses the access history of cached content to discard the item with the lowest request frequency and places newly cached content at the head of the queue. Cached content is likewise ordered first-in first-out; when an item is requested, i.e., hit in the cache, it is advanced to the head of the queue and the queue is reordered. If the queue is full, the item hit least frequently is discarded, and the newly cached item is placed at the head.
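To make the queue discipline concrete, here is a minimal Python sketch of the LRU policy described above (the class name and structure are illustrative, not from the patent; an LFU variant would order by request count instead of recency):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: a hit advances the item to the head of the
    queue; when the queue is full, the item that has gone unrequested
    the longest (the tail) is discarded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()  # head = most recently requested

    def get(self, key):
        if key not in self.items:
            return None                           # cache miss
        self.items.move_to_end(key, last=False)   # hit: advance to head
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key, last=False)   # new content enters at the head
        if len(self.items) > self.capacity:
            self.items.popitem(last=True)         # discard the tail item
```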
In centralized cooperative caching, each base station has a certain amount of cache space and routing equipment, and within one cooperative caching system the cache spaces of all base stations form a complete cache-unit cluster. Each base station's cache space acts as a single cache node controlled by the cluster head node. The cluster head node's cache records the content and user-request information of the other nodes' cache spaces in the cluster, as well as the link conditions between base stations. Communication between the cache-unit cluster and the outside passes through a gateway connected to the cluster head node and a data gateway outside the cluster.
Centralized cooperative caching works as follows: after the serving base station receives a user request, if the requested content hits the serving base station's local cache, a TCP connection is established directly with the user and the requested content is transmitted to the user; at the same time the request is reported to the cluster head node, which updates its cache records. If the serving base station misses locally, it forwards the user request to the cluster head node, which reads the cache contents within the cluster and, according to the user request and the base stations' link conditions, selects a suitable cooperative base station to provide the requested content.
In summary, the existing classic cache replacement technology mainly targets the hit rate of a single cache and involves no cache cooperation. Although it can improve the cache hit rate, in a real communication network, high-traffic content (such as video) tends to outweigh low-traffic content (such as text and pictures). Measuring network performance by hit rate alone can therefore yield a high cache hit rate without relieving network pressure.
In centralized cooperative caching, the cluster head node decides whether the caches of the other base stations in the cluster hold the requested content. This requires the cluster head node to obtain information about every cache in the cluster, which adds signaling overhead, increases decision delay, and raises the overhead of the whole network.
Disclosure of Invention
Embodiments of the invention provide an access network caching method, apparatus, base station, and medium that improve the cache hit rate while accounting for network access pressure, improving the overall performance of the network.
In a first aspect, an embodiment of the present invention provides an access network caching method applied to a serving base station, the method comprising:
acquiring first content data to be cached from a target base station, and selecting a set of second content data to be deleted from the data cached by the serving base station;
predicting the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data;
and determining, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
In a second aspect, an embodiment of the present invention provides an access network caching apparatus, the apparatus comprising:
a first processing module, configured to acquire first content data to be cached from a target base station and select a set of second content data to be deleted from the data cached by the serving base station;
a second processing module, configured to predict the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data;
and a third processing module, configured to determine, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
In a third aspect, an embodiment of the present invention provides a base station, including: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the processor, implement the method of the first aspect of the embodiments described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which computer program instructions are stored, which, when executed by a processor, implement the method of the first aspect in the foregoing embodiments.
According to the access network caching method, apparatus, base station, and medium of the embodiments of the invention, whether the set of second content data is replaced with the first content data is determined by comparing the total network resource revenue from caching the first content data with the total network resource loss from deleting the set of second content data. Cache replacement can thus be decided from a global perspective: the cache hit rate improves while network access pressure is taken into account, no extra signaling overhead is introduced, and the overall performance of the network improves.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating an access network caching method according to some embodiments of the present invention;
FIG. 2 illustrates a complete access network request process diagram provided in accordance with some embodiments of the invention;
FIG. 3 is a graph illustrating average hit rate per base station versus cache capacity according to some embodiments of the invention;
fig. 4 is a schematic diagram illustrating the relationship between the average per-base-station traffic reduction and cache capacity according to some embodiments of the present invention;
fig. 5 is a schematic structural diagram of an access network caching apparatus according to some embodiments of the present invention;
fig. 6 illustrates a schematic structural diagram of a base station provided in accordance with some embodiments of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
In the embodiment of the present invention, after a serving base station receives a content request from a user, three main scenarios arise:
In the first scenario, a direct cache hit: the serving base station finds the content data requested by the content request in its local cache and sends the content data directly to the user.
In the second scenario, an indirect cache hit: the serving base station misses the requested content data in its local cache, but the content data is cached at one of its directly connected base stations. The serving base station obtains the content data from the directly connected base station, sends it to the user, and stores it in its local cache for subsequent use.
In the third scenario, neither the serving base station nor its directly connected base stations hit: the serving base station obtains the requested content data from the network server and stores it in its local cache.
The cache replacement policy comes into play in the second and third scenarios, when the remaining cache space of the serving base station is smaller than the size of the content data to be cached.
In the second scenario, the cache replacement scheme of the embodiment first estimates whether the network performance after replacement gains, and only then decides whether to replace cached content data with the content data requested by the user. The access network caching method using this scheme is described in detail below.
The access network caching method provided by the embodiment of the invention is applied to a service base station, and as shown in fig. 1, the specific process of the access network caching is as follows:
step 101: and acquiring first content data to be cached from the target base station, and selecting a set of second content data to be deleted from the data cached by the serving base station.
After receiving a content request of a user, if it is determined that first content data requested by the content request is not cached locally, the serving base station searches each direct connection base station of each serving base station, and determines the direct connection base station which is cached with the first content data and has the largest link bandwidth with the serving base station as a target base station. The serving base station acquires the first content data from the target base station and sends the first content data to the user.
In a specific embodiment, the serving base station performs reverse ordering on each content data in the cache according to the historical request times, and takes M content data with the least historical request times as a second content data set, where M is an integer greater than or equal to 1. The specific value of M depends on the size of the first content data, and the sum of the size of the remaining space in the cache and the data size of the set of second content data is not smaller than the size of the first content data.
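As a minimal sketch of this selection step (the data layout and names are assumptions for illustration, not the patent's API), the candidate set can be built by scanning cached items in ascending order of historical request count until enough space would be freed:

```python
def select_eviction_candidates(cached_items, free_space, incoming_size):
    """Pick the least-requested cached items whose combined size, together
    with the current free space, can hold the incoming content.

    cached_items: list of (content_id, size, request_count) tuples.
    Returns the list of candidate ids, or None if even evicting every
    item would not make enough room."""
    candidates = []
    freed = free_space
    for content_id, size, requests in sorted(cached_items, key=lambda c: c[2]):
        if freed >= incoming_size:
            break
        candidates.append(content_id)   # least-requested first
        freed += size
    return candidates if freed >= incoming_size else None
```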
Step 102: predict the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data.
In one embodiment, the total network resource revenue from caching the first content data is predicted as follows: for each directly connected base station of the serving base station that does not cache the first content data, estimate the first network resource benefit that the directly connected base station would gain by obtaining the first content data from the serving base station once the serving base station has cached it; estimate the second network resource benefit the serving base station itself gains after caching the first content data; and determine the total network resource revenue from the first network resource benefits of all directly connected base stations that do not cache the first content data, together with the second network resource benefit.
Specifically, for any directly connected base station not caching the first content data, the first network resource benefit of that base station obtaining the first content data from the serving base station is calculated as follows:
compute the ratio of the size of the first content data to the link bandwidth between the directly connected base station and the content server (first ratio); compute the ratio of the size of the first content data to the link bandwidth between the directly connected base station and the serving base station (second ratio); take the difference between the first ratio and the second ratio; and multiply that difference by the directly connected base station's historical request count for the first content data. The product is the first network resource benefit.
Specifically, the second network resource benefit the serving base station gains after caching the first content data is estimated as follows:
compute the ratio of the size of the first content data to the link bandwidth between the serving base station and the content server (third ratio), and multiply the third ratio by the serving base station's historical request count for the first content data to obtain the second network resource benefit.
Specifically, once the first network resource benefit of each directly connected base station not caching the first content data and the second network resource benefit of the serving base station have been estimated, sum the first network resource benefits (first result), then add the second network resource benefit to obtain the total network resource revenue.
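A minimal Python sketch of this revenue estimate (parameter names and the tuple layout are assumptions; the bandwidths and request counts are as defined in the text):

```python
def total_cache_revenue(size, neighbors_without_content, serving_bw_server,
                        serving_requests):
    """Estimated total network resource revenue from caching content of the
    given size at the serving base station.

    neighbors_without_content: iterable of (bw_to_server, bw_to_serving,
    request_count) tuples, one per directly connected base station that
    does not cache the content.  serving_bw_server / serving_requests
    describe the serving base station itself."""
    revenue = 0.0
    for bw_server, bw_serving, requests in neighbors_without_content:
        # first benefit: the neighbor fetches from the serving base station
        # instead of the content server
        revenue += (size / bw_server - size / bw_serving) * requests
    # second benefit: the serving base station no longer fetches from the server
    revenue += (size / serving_bw_server) * serving_requests
    return revenue
```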
In one embodiment, the total network resource loss from deleting the set of second content data is predicted as follows:
For each second content data in the set, and for each directly connected base station that does not cache that second content data, estimate the first network resource loss incurred because, once the serving base station deletes the second content data, the directly connected base station can no longer obtain it from the serving base station. Also estimate the second network resource loss incurred because, once the set of second content data is deleted, the serving base station can no longer serve it directly. The total network resource loss is determined from the first network resource losses of every second content data in the set together with the second network resource loss.
Specifically, taking any one second content data as an example, for any directly connected base station not caching that second content data, the first network resource loss is calculated as follows:
compute the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the content server (fourth ratio); compute the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the serving base station (fifth ratio); take the difference between the fourth ratio and the fifth ratio; and multiply that difference by the directly connected base station's historical request count for the second content data to obtain the first network resource loss of that directly connected base station after the second content data is deleted.
Specifically, the second network resource loss after the serving base station deletes the set of second content data is estimated as follows:
for each second content data in the set, compute the ratio of its size to the link bandwidth between the serving base station and the content server (sixth ratio), and multiply the sixth ratio by the serving base station's historical request count for that second content data (third result). Sum the third results over all second content data in the set to obtain the second network resource loss.
Specifically, after the first network resource losses of each second content data in the set and the second network resource loss have been calculated, the total network resource loss is obtained as follows:
for each second content data in the set, sum the first network resource losses over the directly connected base stations that do not cache it (third network resource loss);
sum the third network resource losses over every second content data in the set (fourth network resource loss);
and add the fourth network resource loss to the second network resource loss to obtain the total network resource loss.
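A matching sketch of the loss estimate (again with assumed names and data layout), mirroring `total_cache_revenue` above:

```python
def total_eviction_loss(victims):
    """Estimated total network resource loss from deleting a set of cached
    items (the "second content data") at the serving base station.

    victims: iterable of (size, serving_bw_server, serving_requests,
    neighbors) tuples, where neighbors lists (bw_to_server, bw_to_serving,
    request_count) for each directly connected base station that does not
    cache the item."""
    loss = 0.0
    for size, serving_bw_server, serving_requests, neighbors in victims:
        for bw_server, bw_serving, requests in neighbors:
            # first loss: the neighbor must fall back to the content server
            loss += (size / bw_server - size / bw_serving) * requests
        # second loss: the serving base station must refetch from the server
        loss += (size / serving_bw_server) * serving_requests
    return loss
```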
Step 103: determine, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
In a specific embodiment, if the total network resource revenue is greater than the total network resource loss, the set of second content data is deleted and the first content data is cached; if the total network resource revenue is less than or equal to the total network resource loss, the set of second content data is kept and the first content data is not cached.
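Combining the two sketches above, the replacement decision reduces to a single comparison (a sketch, not the patent's interface):

```python
def should_replace(size, neighbors_without_content, serving_bw_server,
                   serving_requests, victims):
    """Replace cached content only if the estimated revenue strictly
    exceeds the estimated loss."""
    revenue = total_cache_revenue(size, neighbors_without_content,
                                  serving_bw_server, serving_requests)
    return revenue > total_eviction_loss(victims)
```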
In the embodiment of the invention, whether the set of second content data is replaced with the first content data is determined by comparing the total network resource revenue from caching the first content data with the total network resource loss from deleting the set of second content data. Cache replacement is thus decided from a global perspective, improving the cache hit rate while accounting for network access pressure, adding no signaling overhead, and improving the overall performance of the network.
Fig. 2 shows a complete access network request process; the steps, summarized in the code sketch after this list, are as follows:
step 201: a terminal initiates a user's content request to serving base station x;
step 202: serving base station x receives the content request, which requests content data s;
step 203: serving base station x checks whether content data s is cached locally; if yes, execute step 204, otherwise execute step 205;
step 204: serving base station x sends content data s to the terminal and the process ends;
step 205: serving base station x searches its directly connected base stations and determines whether any of them caches content data s; if yes, execute step 206, otherwise execute step 207;
step 206: serving base station x determines whether the remaining space of its local cache is greater than or equal to the size of content data s; if yes, execute step 208, otherwise execute step 209;
step 207: serving base station x requests content data s from the content server, then executes step 211;
step 208: serving base station x caches content data s locally and goes to step 204;
step 209: serving base station x selects one base station among all directly connected base stations storing content data s as target base station k, requests content data s from target base station k, then executes step 210;
step 210: serving base station x performs cache replacement using the first cache replacement policy, then goes to step 208;
step 211: serving base station x determines whether the remaining space of its local cache is greater than or equal to the size of content data s; if yes, go to step 208, otherwise go to step 212;
step 212: serving base station x performs cache replacement using the second cache replacement policy, then goes to step 208.
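The control flow of steps 201-212 can be condensed into a short sketch. The method names on the base station object are assumptions chosen for readability, and `replace_policy_1` is taken to return whether room was actually made (i.e., whether revenue exceeded loss):

```python
def handle_request(x, s):
    """Serve content s at serving base station x, following Fig. 2."""
    if x.has_cached(s):                        # steps 203-204: direct hit
        return x.send_to_user(s)
    k = x.best_neighbor_caching(s)             # step 205: largest-bandwidth holder
    if k is not None:
        x.fetch_from(k, s)                     # step 209: indirect hit
        if x.free_space() >= s.size or x.replace_policy_1(s):
            x.cache(s)                         # steps 206/210 -> 208
        return x.send_to_user(s)
    x.fetch_from_server(s)                     # step 207: full miss
    if x.free_space() < s.size:
        x.replace_policy_2(s)                  # step 212: evict until s fits
    x.cache(s)                                 # steps 211 -> 208
    return x.send_to_user(s)
```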
In the embodiment of the invention, the number of base stations is denoted $N$; the content data requested by a user's content request is denoted $s$, and its size is denoted $L_s$. The network revenue from caching content data $s$ is denoted $I_s$, the network loss from caching it (i.e., from deleting the content it replaces) is denoted $D_s$, and the historical request count of content data $s$ at base station $x$ is denoted $Q_x^s$. The total number of hits over all requested content data is denoted $T$, and the total network resource consumption over all requested content data is denoted $C$. The link bandwidth between any two base stations $a$ and $b$ is denoted $B_{a,b}$ ($1 \le a, b \le N$, $a \ne b$); the link bandwidth between any base station $x$ and the content server is denoted $B_{x,\mathrm{server}}$ ($1 \le x \le N$); and the link bandwidth between any base station $x$ and the requesting user is denoted $B_{x,\mathrm{client}}$ ($1 \le x \le N$). Here $N$, $B_{a,b}$, $B_{x,\mathrm{server}}$, and $B_{x,\mathrm{client}}$ are initialized according to the network operator's conditions; $L_s$ is set from the actual size of each requested content item; and $Q_x^s$, $T$, and $C$ are initialized to zero.
When serving base station $x$ has content data $s$ cached locally, the serving base station hits directly and updates the total hit count $T \leftarrow T + 1$, the total network resource consumption $C \leftarrow C + L_s / B_{x,\mathrm{client}}$, and the historical request count $Q_x^s \leftarrow Q_x^s + 1$.
When serving base station $x$ does not cache content data $s$ locally but a directly connected base station does, and the remaining local cache space of $x$ is greater than or equal to the size of $s$, $x$ searches all directly connected base stations and selects, as target base station $k$, the one that caches $s$ and has the largest link bandwidth to $x$. $x$ requests $s$ from $k$, sends $s$ to the user, and caches $s$ locally, updating $T \leftarrow T + 1$, $C \leftarrow C + L_s / B_{x,\mathrm{client}} + L_s / B_{x,k}$, and $Q_x^s \leftarrow Q_x^s + 1$.
When serving base station $x$ does not cache content data $s$ locally, $s$ is cached at a directly connected base station, and the remaining local cache space of $x$ is smaller than the size of $s$, $x$ again selects, as target base station $k$, the directly connected base station that caches $s$ and has the largest link bandwidth to $x$, requests $s$ from $k$, sends $s$ to the user, and performs cache replacement according to the first cache replacement strategy.
The first cache replacement policy is implemented as follows:
Serving base station $x$ sorts the content data in its local cache in descending order of historical request count $Q_x$ and takes the last $M$ items (the least requested) as the set of data content to be replaced, written $\{s_j,\ j = 1, 2, \ldots, M\}$. The set must satisfy the condition that, once the storage occupied by its items is released, the remaining cache space of the serving base station is sufficient to hold the content data $s$ to be cached.
For each directly connected base station $x_i$ ($i = 1, 2, \ldots$) that does not cache content data $s$, the network resource benefit of $x_i$ requesting $s$ from serving base station $x$ is

$$I_s^{x_i} = \left(\frac{L_s}{B_{x_i,\mathrm{server}}} - \frac{L_s}{B_{x_i,x}}\right) Q_{x_i}^s,$$

and the total network revenue is

$$I_s = \sum_i I_s^{x_i} + \frac{L_s}{B_{x,\mathrm{server}}}\, Q_x^s.$$
For any data content $s_j$ in the set to be replaced and each directly connected base station $x_i$ ($i = 1, 2, \ldots$) that does not cache $s_j$, the network resource loss from $x_i$ no longer being able to request $s_j$ from serving base station $x$ is

$$D_{s_j}^{x_i} = \left(\frac{L_{s_j}}{B_{x_i,\mathrm{server}}} - \frac{L_{s_j}}{B_{x_i,x}}\right) Q_{x_i}^{s_j},$$

and the total network loss is

$$D_s = \sum_{j=1}^{M} \left(\sum_i D_{s_j}^{x_i} + \frac{L_{s_j}}{B_{x,\mathrm{server}}}\, Q_x^{s_j}\right).$$
If $I_s > D_s$: update $T \leftarrow T + 1$, $C \leftarrow C + L_s / B_{x,\mathrm{client}} + L_s / B_{x,k}$, and $Q_x^s \leftarrow Q_x^s + 1$; delete the set of data contents to be replaced $\{s_j,\ j = 1, 2, \ldots, M\}$ from the cache; and cache content data $s$.
If $I_s \le D_s$: update $T \leftarrow T + 1$, $C \leftarrow C + L_s / B_{x,\mathrm{client}} + L_s / B_{x,k}$, and $Q_x^s \leftarrow Q_x^s + 1$, but do not cache content data $s$.
When neither serving base station $x$ nor any directly connected base station caches content data $s$, and the remaining local cache space of $x$ is greater than or equal to the size of $s$, $x$ requests $s$ from the content server, sends $s$ to the user, and caches $s$ locally, updating $C \leftarrow C + L_s / B_{x,\mathrm{server}} + L_s / B_{x,\mathrm{client}}$ and $Q_x^s \leftarrow Q_x^s + 1$.
When neither serving base station $x$ nor any directly connected base station caches content data $s$, and the remaining local cache space of $x$ is smaller than the size of $s$, $x$ requests $s$ from the content server and sends $s$ to the user. The serving base station then performs cache replacement according to the second cache replacement policy: $x$ sorts the content data in its local cache in descending order of historical request count $Q_x$ and deletes items one by one from the tail of the order (the least requested first) until the remaining cache space is greater than or equal to the size of $s$, then caches $s$ and updates $C \leftarrow C + L_s / B_{x,\mathrm{server}} + L_s / B_{x,\mathrm{client}}$ and $Q_x^s \leftarrow Q_x^s + 1$.
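A minimal sketch of the second replacement policy (names and the dict layout are assumptions):

```python
def replace_policy_2(cache, free_space, incoming_size):
    """Second cache replacement policy: evict the least-requested cached
    items one by one until the incoming content fits.

    cache: dict mapping content_id -> (size, request_count).
    Returns the list of evicted ids."""
    evicted = []
    # sorted() materializes a list first, so deleting from the dict is safe
    for content_id, (size, requests) in sorted(cache.items(),
                                               key=lambda kv: kv[1][1]):
        if free_space >= incoming_size:
            break
        del cache[content_id]
        free_space += size
        evicted.append(content_id)
    return evicted
```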
In the embodiment of the invention, network performance is assessed as follows: the total hit rate is computed as the ratio of the total hit count $T$ to the total number of requests over all content data, and the total network resource consumption $C$ is compared against the network resource consumption that would be incurred if the content were always pulled from the content server; the resulting total network gain serves as the evaluation index.
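Under that reading of the evaluation indices, a sketch of the two metrics (names are assumptions; `server_only_consumption` is the consumption that would be incurred if every request were served from the content server):

```python
def evaluation_metrics(total_hits, total_requests,
                       consumption, server_only_consumption):
    """Total hit rate, and relative network gain versus always pulling
    content from the content server."""
    hit_rate = total_hits / total_requests
    gain = (server_only_consumption - consumption) / server_only_consumption
    return hit_rate, gain
```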
Fig. 3 shows the relationship between the average per-base-station hit rate and cache capacity. With an average of 4 or 8 directly connected base stations per base station, the average per-base-station hit rate grows with cache capacity and stays above that of the LRU algorithm. With 8 directly connected base stations per base station on average and a cache capacity ratio of 85%, the average per-base-station hit rate exceeds 80%, and about 55% of the hits result from cooperation between base stations. The embodiment of the invention exploits the strong temporal and spatial correlation of requested content across base stations to share cached content among cooperating base stations, which enriches the cached content and raises the cache hit rate.
Fig. 4 shows the relationship between the average per-base-station traffic reduction and cache capacity, where the y-axis is a relative value normalized by each base station's total requested traffic over 24 hours. The average per-base-station traffic reduction of the embodiment grows with cache capacity and remains better than that of the LRU algorithm, and, as the figure shows, it also grows with the number of directly connected base stations. The reason is that the base stations cooperate when handling requests, which raises the cache hit rate, and that, when replacing cached content, the embodiment jointly considers the requested data content, the size of the cached data, the link bandwidth between base stations, the request counts, and so on.
The embodiment of the invention applies a distributed cooperation principle: the serving base station and its directly connected base stations cooperate in caching and replacing content, and on replacement the content requested most often across the serving and directly connected base stations is kept, which is equivalent to caching the globally hottest content. By setting the cache cooperation strategy from a global perspective and caching the globally hottest data, the hit rate is clearly superior to the prior art.
In terms of performance, network traffic between base stations costs less than traffic from base stations to the content server. When a cooperating base station holds the content, pulling data from it rather than from the content server reduces traffic. Moreover, distributed cache cooperation produces a large number of cooperation events between base stations, each of which saves traffic, so the total traffic reduction is large.
The distributed cooperative cache replacement mode reduces the signaling overhead of cache cooperation, improves network performance, and is better suited to complex network scenarios; its performance advantage over the prior art grows as cache utilization rises. It thereby avoids the large signaling overhead and the inability to decide in real time that afflict centralized caching algorithms.
An embodiment of the present invention further provides an access network caching apparatus. For its specific implementation, refer to the description of the method embodiment; repeated details are omitted. As shown in fig. 5, the apparatus mainly includes:
a first processing module 501, configured to acquire first content data to be cached from a target base station and select a set of second content data to be deleted from the data cached by the serving base station;
a second processing module 502, configured to predict the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data;
and a third processing module 503, configured to determine, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
Specifically, the second processing module 502 is configured to:
for each directly connected base station of the serving base station that does not cache the first content data, estimate the first network resource benefit that the directly connected base station gains by acquiring the first content data from the serving base station after the serving base station caches it;
estimate the second network resource benefit that the serving base station gains after caching the first content data;
and determine the total network resource revenue according to the first network resource benefits of the directly connected base stations that do not cache the first content data and the second network resource benefit.
Specifically, the second processing module 502 is configured to: calculate the ratio of the size of the first content data to the link bandwidth between the directly connected base station and the content server to obtain a first ratio; calculate the ratio of the size of the first content data to the link bandwidth between the directly connected base station and the serving base station to obtain a second ratio; calculate the difference between the first ratio and the second ratio; multiply the difference by the directly connected base station's historical request count for the first content data; and take the product as the first network resource benefit.
Specifically, the second processing module 502 is configured to: calculate the ratio of the size of the first content data to the link bandwidth between the serving base station and the content server to obtain a third ratio; and multiply the third ratio by the serving base station's historical request count for the first content data to obtain the second network resource benefit.
Specifically, the second processing module 502 is configured to: calculate the sum of the first network resource benefits of the directly connected base stations that do not cache the first content data to obtain a first result; and calculate the sum of the first result and the second network resource benefit to obtain the total network resource revenue.
Specifically, the second processing module 502 is configured to:
for each second content data in the set of second content data: for each directly connected base station that does not cache the second content data, estimate the first network resource loss incurred because the directly connected base station cannot acquire the second content data from the serving base station after the serving base station deletes it;
estimate the second network resource loss incurred because the serving base station cannot directly serve the set of second content data after deleting it;
and determine the total network resource loss according to the first network resource losses of each second content data in the set of second content data and the second network resource loss.
Specifically, the second processing module 502 is configured to:
calculate the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the content server to obtain a fourth ratio;
calculate the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the serving base station to obtain a fifth ratio;
calculate the difference between the fourth ratio and the fifth ratio;
and multiply the difference by the directly connected base station's historical request count for the second content data to obtain the first network resource loss of that directly connected base station after the second content data is deleted.
Specifically, the second processing module 502 is configured to:
for each second content data in the set of second content data: calculate the ratio of its size to the link bandwidth between the serving base station and the content server to obtain a sixth ratio; multiply the sixth ratio by the serving base station's historical request count for the second content data to obtain a third result;
and calculate the sum of the third results for each second content data in the set of second content data to obtain the second network resource loss.
Specifically, the second processing module 502 is configured to:
for each second content data in the set of second content data: calculate the sum of the first network resource losses of the directly connected base stations that do not cache the second content data to obtain a third network resource loss;
calculate the sum of the third network resource losses of each second content data in the set of second content data to obtain a fourth network resource loss;
and calculate the sum of the fourth network resource loss and the second network resource loss to obtain the total network resource loss.
Specifically, the third processing module 503 is configured to: if the total network resource revenue is greater than the total network resource loss, determine to delete the set of second content data and cache the first content data; and if the total network resource revenue is less than or equal to the total network resource loss, determine not to delete the set of second content data and not to cache the first content data.
In addition, the access network caching method according to the embodiment of the present invention described in conjunction with fig. 1 or fig. 2 may be implemented by a base station. Fig. 6 is a schematic diagram illustrating a hardware structure of a base station according to an embodiment of the present invention.
The base station may include a processor 601 and memory 602 that stores computer program instructions.
Specifically, the processor 601 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 602 may include mass storage for data or instructions. By way of example, and not limitation, memory 602 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 602 may include removable or non-removable (or fixed) media, where appropriate. The memory 602 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 602 is a non-volatile solid-state memory. In a particular embodiment, the memory 602 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 601 may implement any of the access network caching methods in the above embodiments by reading and executing computer program instructions stored in the memory 602.
In one example, the base station can also include a communication interface 603 and a bus 610. As shown in fig. 6, the processor 601, the memory 602, and the communication interface 603 are connected via a bus 610 to complete communication therebetween.
The communication interface 603 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present invention.
The bus 610 includes hardware, software, or both that couple the components of the base station to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 610 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated.
In addition, in combination with the access network caching method in the foregoing embodiment, an embodiment of the present invention may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the access network caching methods in the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (13)

1. An access network caching method applied to a serving base station, the method comprising:
acquiring first content data to be cached from a target base station, and selecting a set of second content data to be deleted from the data cached by the serving base station;
predicting the total network resource revenue from caching the first content data and the total network resource loss from deleting the set of second content data;
and determining, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
2. The access network caching method according to claim 1, wherein predicting the total network resource revenue from caching the first content data comprises:
for each directly connected base station of the serving base station that does not cache the first content data, estimating a first network resource benefit that the directly connected base station gains by acquiring the first content data from the serving base station after the serving base station caches the first content data;
estimating a second network resource benefit that the serving base station gains after caching the first content data;
and determining the total network resource revenue according to the first network resource benefits of the directly connected base stations that do not cache the first content data and the second network resource benefit.
3. The access network caching method of claim 2, wherein predicting a first network resource gain brought by the serving base station acquiring the first content data from the serving base station after the serving base station caches the first content data comprises:
calculating the ratio of the size of the first content data to the link bandwidth between the directly connected base station and a content server to obtain a first ratio;
calculating the ratio of the size of the first content data to the link bandwidth between the directly connected base station and the serving base station to obtain a second ratio;
calculating the difference between the first ratio and the second ratio;
multiplying the difference by the number of historical requests of the directly connected base station for the first content data;
and taking the result of the multiplication as the first network resource revenue.
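Purely for illustration and not part of the claim language, the computation of claim 3 can be sketched in Python as below; all identifiers are hypothetical, content size is assumed to be in bits and link bandwidths in bits per second, so each ratio is a transfer time and the revenue is backhaul time saved:

    def first_revenue(content_size, bw_direct_to_server, bw_direct_to_serving,
                      request_count):
        """Per-neighbor revenue of caching the first content data (claim 3)."""
        first_ratio = content_size / bw_direct_to_server    # fetch from content server
        second_ratio = content_size / bw_direct_to_serving  # fetch from serving base station
        # Claim 3: (first ratio - second ratio) multiplied by the number of
        # historical requests of the directly connected base station.
        return (first_ratio - second_ratio) * request_count

Intuitively, the difference of the two ratios is the transfer time saved per request when the directly connected base station fetches the content from the serving base station instead of the content server.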
4. The access network caching method according to claim 3, wherein predicting the second network resource revenue brought by the serving base station acquiring the first content data after the serving base station caches the first content data comprises:
calculating the ratio of the size of the first content data to the link bandwidth between the serving base station and the content server to obtain a third ratio;
and calculating the product of the third ratio and the number of historical requests of the serving base station for the first content data to obtain the second network resource revenue.
5. The access network caching method according to claim 4, wherein determining the total network resource revenue according to the first network resource revenue corresponding to each directly connected base station that does not cache the first content data and the second network resource revenue comprises:
calculating the sum of the first network resource revenues corresponding to each directly connected base station that does not cache the first content data to obtain a first result;
and calculating the sum of the first result and the second network resource revenue to obtain the total network resource revenue.
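Reading claims 2, 4, and 5 together, the total revenue aggregates the per-neighbor revenues with the serving base station's own saving. A minimal sketch, reusing the hypothetical first_revenue helper above; the neighbors argument is an assumed bookkeeping structure, not anything the claims prescribe:

    def total_revenue(content_size, bw_serving_to_server, serving_request_count,
                      neighbors):
        """Total network resource revenue of caching the first content data.

        neighbors: one (bw_to_server, bw_to_serving, request_count) tuple per
        directly connected base station that does not cache the content.
        """
        # Claim 5, first result: sum of the first revenues over all such neighbors.
        first_result = sum(first_revenue(content_size, bw_srv, bw_bs, reqs)
                           for bw_srv, bw_bs, reqs in neighbors)
        # Claim 4: the third ratio multiplied by the serving base station's own
        # historical request count.
        second_revenue = (content_size / bw_serving_to_server) * serving_request_count
        # Claim 5: total revenue is the sum of both parts.
        return first_result + second_revenue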
6. The access network caching method according to claim 1, wherein predicting the total network resource loss brought by deleting the set of second content data comprises:
performing the following process for each second content data in the set of second content data: for each directly connected base station that does not cache the second content data, predicting a first network resource loss caused by the directly connected base station being unable to acquire the second content data from the serving base station after the serving base station deletes the second content data;
predicting a second network resource loss caused by the serving base station being unable to directly acquire the set of second content data after the serving base station deletes the set of second content data;
and determining the total network resource loss according to the first network resource losses corresponding to each second content data in the set of second content data and the second network resource loss.
7. The access network caching method according to claim 6, wherein predicting the first network resource loss caused by the directly connected base station being unable to acquire the second content data from the serving base station after the serving base station deletes the second content data comprises:
calculating the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the content server to obtain a fourth ratio;
calculating the ratio of the size of the second content data to the link bandwidth between the directly connected base station and the serving base station to obtain a fifth ratio;
calculating the difference between the fourth ratio and the fifth ratio;
and calculating the product of the difference and the number of historical requests of the directly connected base station for the second content data to obtain the first network resource loss corresponding to the directly connected base station after the second content data is deleted.
8. The access network caching method according to claim 7, wherein predicting the second network resource loss caused by the serving base station being unable to directly acquire the set of second content data after the serving base station deletes the set of second content data comprises:
performing the following process for each second content data in the set of second content data: calculating the ratio of the size of the second content data to the link bandwidth between the serving base station and the content server to obtain a sixth ratio; and calculating the product of the sixth ratio and the number of historical requests of the serving base station for the second content data to obtain a third result;
and calculating the sum of the third results corresponding to each second content data in the set of second content data to obtain the second network resource loss.
9. The access network caching method according to claim 8, wherein determining the total network resource loss according to the first network resource losses corresponding to each second content data in the set of second content data and the second network resource loss comprises:
performing the following process for each second content data in the set of second content data: calculating the sum of the first network resource losses corresponding to each directly connected base station that does not cache the second content data to obtain a third network resource loss;
calculating the sum of the third network resource losses of each second content data in the set of second content data to obtain a fourth network resource loss;
and calculating the sum of the fourth network resource loss and the second network resource loss to obtain the total network resource loss.
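The loss side of the trade-off (claims 6 to 9) mirrors the revenue side, evaluated once per content item in the deletion set. A sketch under the same hypothetical naming and unit assumptions:

    def total_loss(deletion_set, bw_serving_to_server):
        """Total network resource loss of deleting the set of second content data.

        deletion_set: one (size, serving_request_count, neighbors) tuple per
        second content data, where neighbors holds (bw_to_server, bw_to_serving,
        request_count) tuples for the directly connected base stations that do
        not cache that content.
        """
        fourth_loss = 0.0  # claim 9: sum of the per-content third losses
        second_loss = 0.0  # claim 8: the serving base station's own loss
        for size, serving_reqs, neighbors in deletion_set:
            # Claims 7 and 9: per-neighbor losses, summed into a third loss.
            fourth_loss += sum((size / bw_srv - size / bw_bs) * reqs
                               for bw_srv, bw_bs, reqs in neighbors)
            # Claim 8: the sixth ratio multiplied by the serving base station's
            # historical request count for this content.
            second_loss += (size / bw_serving_to_server) * serving_reqs
        # Claim 9: total loss is the fourth loss plus the second loss.
        return fourth_loss + second_loss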
10. The access network caching method according to any one of claims 1 to 9, wherein determining whether to delete the set of second content data and cache the first content data according to the total network resource revenue and the total network resource loss comprises:
if it is determined that the total network resource revenue is greater than the total network resource loss, determining to delete the set of second content data and to cache the first content data;
and if it is determined that the total network resource revenue is less than or equal to the total network resource loss, determining not to delete the set of second content data and not to cache the first content data.
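The decision rule of claim 10 is a strict comparison, so a tie keeps the existing cache contents. A one-line sketch tying the previous hypothetical helpers together:

    def should_replace(revenue, loss):
        """Claim 10: delete the set of second content data and cache the first
        content data only when the predicted revenue strictly exceeds the
        predicted loss."""
        return revenue > loss

    # Illustrative use with the sketches above:
    # if should_replace(total_revenue(...), total_loss(...)):
    #     evict the set of second content data, then cache the first content data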
11. An access network caching apparatus, the apparatus comprising:
a first processing module, configured to acquire first content data to be cached from a target base station, and to select a set of second content data to be deleted from the data cached by a serving base station;
a second processing module, configured to predict a total network resource revenue brought by caching the first content data and a total network resource loss brought by deleting the set of second content data;
and a third processing module, configured to determine, according to the total network resource revenue and the total network resource loss, whether to delete the set of second content data and cache the first content data.
12. A base station, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the processor, implement the method of any one of claims 1 to 10.
13. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 10.
CN201811542058.3A 2018-12-17 2018-12-17 Access network caching method, device, base station and medium Active CN111328156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811542058.3A CN111328156B (en) 2018-12-17 2018-12-17 Access network caching method, device, base station and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811542058.3A CN111328156B (en) 2018-12-17 2018-12-17 Access network caching method, device, base station and medium

Publications (2)

Publication Number Publication Date
CN111328156A 2020-06-23
CN111328156B 2023-04-07

Family

ID=71172631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811542058.3A Active CN111328156B (en) 2018-12-17 2018-12-17 Access network caching method, device, base station and medium

Country Status (1)

Country Link
CN (1) CN111328156B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013514A1 (en) * 2006-07-14 2008-01-17 Samsung Electronics Co. Ltd. Multi-channel MAC apparatus and method for WLAN devices with single radio interface
US20140226476A1 (en) * 2011-10-07 2014-08-14 Telefonaktiebolaget L M Ericsson (Publ) Methods Providing Packet Communications Including Jitter Buffer Emulation and Related Network Nodes
CN103781115A (en) * 2014-01-25 2014-05-07 浙江大学 Distributed base station cache replacement method based on transmission cost in cellular network
CN106028400A (en) * 2016-06-30 2016-10-12 华为技术有限公司 Content caching method and base station
CN106231622A (en) * 2016-08-15 2016-12-14 北京邮电大学 A kind of content storage method limited based on buffer memory capacity
CN108092787A (en) * 2016-11-21 2018-05-29 中国移动通信有限公司研究院 A kind of cache regulation means, network controller and system

Also Published As

Publication number Publication date
CN111328156B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
JP5444477B2 (en) Method, server, computer program, and computer program product for caching
US9626413B2 (en) System and method for ranking content popularity in a content-centric network
CN113687960B (en) Edge computing intelligent caching method based on deep reinforcement learning
CN113255004A (en) Safe and efficient federal learning content caching method
CN109413694B (en) Small cell caching method and device based on content popularity prediction
CN111881358A (en) Object recommendation system, method and device, electronic equipment and storage medium
CN110856004B (en) Message processing method and device, readable storage medium and electronic equipment
Akhtar et al. Avic: a cache for adaptive bitrate video
CN113094392A (en) Data caching method and device
CN111328156B (en) Access network caching method, device, base station and medium
CN114168328A (en) Mobile edge node calculation task scheduling method and system based on federal learning
CN109769135B (en) Online video cache management method and system based on joint request rate
CN105338088B (en) A kind of mobile P 2 P network buffer replacing method
CN109511009B (en) Video online cache management method and system
JP2018511131A (en) Hierarchical cost-based caching for online media
KR102235622B1 (en) Method and Apparatus for Cooperative Edge Caching in IoT Environment
CN115484314A (en) Edge cache optimization method for recommending performance under mobile edge computing network
CN115361710A (en) Content placement method in edge cache
CN113852692B (en) Service determination method, device, equipment and computer storage medium
CN113760178A (en) Cache data processing method and device, electronic equipment and computer readable medium
JP6450672B2 (en) Network quality prediction apparatus, network quality prediction method, and program
CN110933119B (en) Method and equipment for updating cache content
KR102407235B1 (en) Storage method and apparatus considering the number of transmissions in a caching system with limited cache memory
CN114900477B (en) Message processing method, server, electronic equipment and storage medium
CN116828053B (en) Data caching method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant