CN110602653A - Pre-caching method based on track prediction - Google Patents

Pre-caching method based on track prediction

Info

Publication number
CN110602653A
CN110602653A
Authority
CN
China
Prior art keywords
caching
mobile user
base station
small base station
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911045208.4A
Other languages
Chinese (zh)
Inventor
陈双武
彭雨荷
杨坚
吴枫
张勇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201911045208.4A priority Critical patent/CN110602653A/en
Publication of CN110602653A publication Critical patent/CN110602653A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/10 Flow control between communication endpoints
    • H04W28/14 Flow control between communication endpoints using intermediate storage
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/029 Location-based management or tracking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a pre-caching method based on trajectory prediction, comprising the following steps: learning the movement pattern of a mobile user from the n-1 location areas the user has passed through in the past, thereby predicting the probability distribution of the n-th location area, where each location area corresponds to the coverage area of one small base station; and selecting the small base stations that meet the caching requirement by combining the predicted probability distribution of the n-th location area with the caching utility in the network, and pre-caching part of the data requested by the mobile user at each selected small base station. The method accurately predicts the probability that the mobile user reaches the next base station and, together with the corresponding pre-caching strategy, improves download efficiency and user experience.

Description

Pre-caching method based on track prediction
Technical Field
The invention relates to the technical field of computer networks, and in particular to a pre-caching method based on trajectory prediction.
Background
In recent years, with the rapid increase of mobile network users, global mobile data traffic has grown explosively. According to predictions, mobile data traffic will grow sevenfold over the five years from 2016 to 2021, and worldwide mobile data traffic will reach 49 EB per month by 2021.
With the advent of the 5G era, in order to meet the requirements of mobile user content services and considering the problems of high cost of Macro Base Stations (MBS), high deployment difficulty and the like, operators generally adopt densely deployed Small Base Station (SBS) networks at present. Compared with macro base stations, the small base stations have the advantages of low power consumption, small size, flexible networking and the like, and the densely deployed small base stations have the unique advantage of indoor network coverage, so that the small base stations become a mainstream network architecture.
Because user terminals move frequently, a terminal may pass through several small base stations while downloading a single file. For delay-sensitive applications such as video streaming and online gaming, the transmission interruption that occurs when the link is handed over between small base stations can seriously degrade the user's quality of experience (QoE). Since popular ("high-heat") content is requested repeatedly, related work often focuses on pre-caching it: if a user's request contains cached popular content, the download speed increases significantly. However, which content is popular changes continuously with time and other factors and is difficult to determine, which is unfavorable for saving resources.
In addition, to cache the content a user needs at the corresponding small base station accurately, the mobility of the user terminal must be considered. Many works run experiments with the user's historical movement trajectory taken as a known condition; although the experimental results are good, real trajectory data of a user terminal cannot be obtained directly, because it changes continuously with factors such as crowd type and time, so trajectory prediction for the mobile terminal is essential. Existing Markov-chain-based trajectory prediction uses a simple model: at low order it can hardly capture the long-term correlation of a movement trajectory, while at high order the computation is complex and the prediction accuracy is very low. Some studies adopt hidden Markov models (HMMs) to improve the adaptability of the system in a big-data environment, but at the same time increase the computational complexity. In either method the user trajectory must be fitted to the corresponding model, yet user movement does not necessarily satisfy that expectation.
Disclosure of Invention
The invention aims to provide a pre-caching method based on track prediction, which can accurately predict the probability of a mobile user reaching the next base station and can improve the downloading efficiency and the user experience by matching with a corresponding pre-caching strategy.
The purpose of the invention is realized by the following technical scheme:
a pre-caching method based on trajectory prediction comprises the following steps:
learning a moving mode of the mobile user according to n-1 position areas passed by the mobile user in the past, thereby predicting probability distribution of the nth position area; wherein, each position area corresponds to the coverage area of one small base station;
and selecting the small base stations meeting the caching requirement by combining the predicted probability distribution of the nth position area and the caching utility in the network, and caching a part of data requested by the mobile user in each small base station meeting the caching requirement in advance.
According to the technical scheme provided by the invention, the trajectory prediction algorithm accurately predicts the user's future movement state from the historical trajectory without relying on a user mobility model, solving the difficulty that traditional prediction algorithms have in estimating the parameters of high-dimensional models. Meanwhile, through the corresponding pre-caching strategy, the content requested by a user is pushed in advance to the next base station the user is likely to access, so the user can obtain the requested content nearby, which improves the quality of the content service and reduces core-network bandwidth consumption. In conclusion, the invention provides a more intelligent pre-caching method, matched to the actual environment, based on mobile-terminal trajectory prediction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a flowchart of a pre-caching method based on track prediction according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a network system architecture according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of model training for trajectory prediction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a testing phase of trajectory prediction according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a caching algorithm based on a track prediction result according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the accuracy of trajectory prediction according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an average download speed according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a cache hit rate according to an embodiment of the present invention;
fig. 9 is a schematic diagram of cache utility according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Existing mobile-user trajectory prediction is usually based on a simple Markov chain, which ignores implicit conditions such as the environment the user is in; moreover, the higher the spatial dimension of the user's mobility state, the larger the computation and the lower the prediction accuracy. The present invention therefore provides a pre-caching method based on trajectory prediction, which predicts the SBS the user is about to reach using a user data model obtained from historical trajectories. In practical applications, the probability of reaching the next base station is predicted by analyzing the n-1 base stations passed in the past, which achieves high accuracy. Accurate trajectory prediction directly affects the effective use of pre-cached content: when a user accesses the predicted small base station and sends a request, the pre-cached content can be downloaded at a higher speed, greatly improving download efficiency and user experience.
As shown in fig. 1, a flowchart of a pre-caching method based on track prediction according to an embodiment of the present invention mainly includes the following steps:
Step 1: learn the movement pattern of a mobile user from the n-1 location areas the user passed through in the past, thereby predicting the probability distribution of the n-th location area, where each location area corresponds to the coverage area of one small base station;
Step 2: combining the predicted probability distribution of the n-th location area with the caching utility in the network, select the small base stations that meet the caching requirement and pre-cache at each of them part of the data requested by the mobile user.
For ease of understanding, the following detailed description is directed to the above-described arrangements.
In the embodiment of the present invention, a layered network architecture composed of a macro base station and a plurality of densely deployed small base stations is considered, as shown in fig. 2. All small base stations have limited storage capacity and aggregate to the macro base station over wired connections; the macro base station connects to the core network and can obtain downloadable content from a remote server. The mobility management entity is associated with the macro base station and is responsible for collecting, managing and maintaining the mobility information of all users. Each hexagon represents the coverage area of one small base station, which a mobile user (smartphone, tablet, etc.) can access while moving in order to download desired content. By learning the movement pattern from the historical trajectory, the user's future location area, i.e. the next small base station to be accessed, can be predicted. If the small base station the user is predicted to reach has pre-cached part of the requested content before the user connects to it, it can deliver that content from its local cache at a higher rate while the user is in its coverage area, so the user's download speed improves significantly. It is assumed that a gain is obtained only when the mobile user accesses content pre-cached in the small base station, whereas waste is generated if the prediction is wrong or the content cached in the corresponding small base station exceeds the user's demand. Finally, the caching utility is defined as the total gain of all base stations minus the total cost.
In order to implement the track prediction-based pre-caching function, the MBS needs to collect necessary information including the historical tracks and content requests of the users and guide the SBS to make caching decisions. The whole process comprises the following three steps:
1) Request processing: each mobile user sends a request with a unique request ID (RID) when it wants to download content; considering the frequent interruption and re-establishment of sessions in a mobile network, the request must be re-sent each time the mobile user connects to a new SBS during the download. Once an SBS receives the request, it first submits the user identification (UID) and the RID together with a timestamp to the MBS, and then checks whether it stores the content corresponding to the RID locally. If so, it transmits the content directly from its local cache; otherwise, it downloads the content from the remote content server and then sends it to the user.
2) And (3) track prediction: the Mobility Management Entity (MME) records the historical trajectories of all users. When a user establishes a connection with a new SBS during movement, the latter sends the UID and a timestamp of the user to the MME, which will then modify the entry of the record belonging to the UID, i.e. append the ID and timestamp of the new SBS to the corresponding trajectory data. When the MBS receives the request message (UID, RID and timestamp), it can predict the probability of all SBS that the user may visit in the future from the user's historical trajectory recorded in the MME.
3) Pre-caching strategy: the MBS is responsible for sending a caching instruction to each SBS in the selected set, specifying the proportion of the user's requested file each should cache. Once an SBS receives the caching instruction from the MBS, it downloads the corresponding amount of content from the remote content server in advance.
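As a rough illustration, step 1) (request processing at an SBS) can be sketched as follows; the function and message names are illustrative assumptions, not identifiers from the patent:

```python
def handle_request(uid, rid, timestamp, local_cache, mbs_log):
    """Sketch of request processing: the SBS first reports
    (UID, RID, timestamp) to the MBS, then checks its local cache for
    the content identified by RID and serves locally on a hit."""
    mbs_log.append((uid, rid, timestamp))          # submit to the MBS
    if rid in local_cache:                         # cache hit: serve locally
        return "local_cache", local_cache[rid]
    return "origin_server", None                   # miss: fetch from the server

log = []
source, data = handle_request("u1", "r42", 0, {"r42": b"chunk"}, log)
```

Here a hit is served from the local cache and a miss falls back to the origin server, mirroring the two branches described above.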
The following mainly describes the preferred embodiments of two steps of trajectory prediction and pre-caching.
First, trajectory prediction
In the embodiment of the invention, a Conditional Variational Autoencoder (CVAE) is adopted to predict the user's trajectory, i.e. the probability distribution of the n-th location area is predicted from the n-1 location areas the mobile user passed through in the past.
The principle is as follows: the conditional variational autoencoder achieves sample reconstruction under the constraint of a condition vector. First, the vector of the n-1 previously visited location areas is compressed by the encoding layer under the condition constraint; then the compressed vector is decompressed and reconstructed by the decoding layer under the same condition constraint into a vector containing the n-th location area. The goal of the parameter update of the conditional variational autoencoder is to minimize the reconstruction error.
The method mainly comprises the following stages:
1) Data acquisition.
Use l_1, l_2, …, l_n to denote n successive locations traversed by a mobile user. The object of the invention is to predict the probability distribution of the n-th location given the first n-1 locations, i.e. P(l_n | l_1, l_2, …, l_{n-1}). In the training phase, each trajectory sample in the dataset may be represented by a vector X = [l_1, l_2, …, l_n]; these vectors are generated from the user's historical access information. The first n-1 locations are represented as the vector Y = [l_1, l_2, …, l_{n-1}].
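As a minimal sketch of this data-acquisition step (the cell IDs, one-hot encoding, and function name are assumptions, since the patent does not fix a concrete encoding), the training vectors X and Y can be built from a historical sequence of visited cells like this:

```python
import numpy as np

def make_samples(trajectory, n, num_cells):
    """Slide a window of length n over a sequence of visited cell IDs,
    one-hot encoding the full window as X = [l_1..l_n] and the first
    n-1 locations as the condition vector Y = [l_1..l_{n-1}]."""
    X, Y = [], []
    for i in range(len(trajectory) - n + 1):
        window = trajectory[i:i + n]
        X.append(np.eye(num_cells)[window].ravel())       # full sample
        Y.append(np.eye(num_cells)[window[:-1]].ravel())  # condition
    return np.array(X), np.array(Y)

# toy cell-ID sequence; n = 4 and 77 cells match the example later in the text
X, Y = make_samples([3, 8, 8, 12, 19, 19, 24], n=4, num_cells=77)
```

A 7-step trajectory with n = 4 yields four (X, Y) training pairs.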
2) Encoder Q (z | X, Y) is constructed.
Take the distribution Q(z|X,Y) as the encoder, defined to express the feature distribution: given the vector Y, it represents the input sample trajectory vector X as a distribution over the latent variable z so as to approximate P(z|X,Y). A Gaussian distribution is chosen as the constrained form of the posterior probability distribution:
Q(z|X,Y) = N(z | μ(X,Y), Σ(X,Y))
where P(z|X,Y) represents the probability of encoding (compressing) the input sample trajectory vector X into the low-dimensional latent variable z given the vector Y; μ(X,Y) represents the mean used for normal-distribution sampling when the condition variable is Y and the input variable is X; Σ(X,Y) represents the corresponding variance; and N(·) denotes a normal distribution.
Illustratively, a Multi-Layer Perceptron (MLP) may be selected as a specific constituent of the encoder.
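A toy numpy version of such an MLP encoder might look as follows; the layer sizes and random initialization are illustrative (a trained implementation would fit the weights), while d_z = 60 matches the latent dimension used in the example later in the text:

```python
import numpy as np

def mlp_encoder(x, y, d_z=60, d_h=128, seed=0):
    """Toy MLP realization of Q(z|X,Y): concatenate X and Y, pass them
    through one hidden layer, and emit the Gaussian parameters
    mu(X,Y) and log Sigma(X,Y) for the latent variable z."""
    rng = np.random.default_rng(seed)
    inp = np.concatenate([x, y])
    W1 = rng.standard_normal((d_h, inp.size)) * 0.01
    h = np.tanh(W1 @ inp)                          # hidden representation
    W_mu = rng.standard_normal((d_z, d_h)) * 0.01
    W_lv = rng.standard_normal((d_z, d_h)) * 0.01
    return W_mu @ h, W_lv @ h                      # mean and log-variance

mu, log_var = mlp_encoder(np.ones(4 * 77), np.ones(3 * 77))
```

The two output heads parameterize the Gaussian Q(z|X,Y) = N(z | μ(X,Y), Σ(X,Y)) above.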
3) A probability decoder P (X | Y, z) is constructed.
Since the actual distribution P(X|Y) is difficult to determine, the CVAE seeks a distribution P(X|Y, z; θ) to approximate the unknown distribution P(X|Y), where z is a latent variable sampled from a known distribution (usually a Gaussian) and θ denotes the parameters of the neural network. P(X|Y, z; θ) represents the probability that decoding yields the input sample trajectory vector X given the vector Y, the latent variable z, and network parameters θ. P(X|Y) represents the probability of obtaining the input sample trajectory vector X given the known vector Y; it is not approximated by the neural network and is the unknown probability to be solved.
Using the latent variable z obtained by the encoder, combined with the vector Y, a sample trajectory vector X′ containing the n-th location area is reconstructed.
4) Parameter training.
The goal of the conditional variational autoencoder parameter training is to maximize the log-likelihood function of each sample trajectory vector X in the dataset, conditioned on vector Y, according to the following formula:
logP(X|Y)=log∫P(X|Y,z;θ)P(z)dz
where p (z) represents the probability distribution of a sample of the hidden variable z;
the KL divergence between P (z | X, Y) and Q (z | X, Y) is:
D[Q(z|X,Y) || P(z|X,Y)] = E_{z∼Q(·|X,Y)}[log Q(z|X,Y) − log P(z|X,Y)]
where D[· || ·] denotes the KL divergence between Q(z|X,Y) and P(z|X,Y), which measures the degree of similarity between the true value P(z|X,Y) and the neural network's approximate estimate Q(z|X,Y). E_{z∼Q(·|X,Y)} denotes the expectation under the condition that the latent variable z follows the approximate distribution Q(·|X,Y); here z ∼ Q(·|X,Y) is read as a whole, meaning the random variable z is sampled from the probability distribution Q(·|X,Y).
Replacing P(z|X,Y) in the log-likelihood function with Q(z|X,Y) and rearranging the equation yields:
log P(X|Y) − D[Q(z|X,Y) || P(z|X,Y)] = E_{z∼Q(·|X,Y)}[log P(X|Y,z)] − D[Q(z|X,Y) || P(z|Y)]
where P(z|Y) represents the probability of obtaining the latent variable z given the vector Y; it arises from the decomposition of the previous equation and has no separate practical meaning. If the approximation Q is sufficiently good (i.e. meets a requirement that can be set by the user), the second term on the left-hand side approaches 0, and P(z|Y) is assumed to be a standard normal distribution. To maximize log P(X|Y), it therefore suffices to maximize the right-hand side of the above equation.
For ease of calculation, Σ in Q(z|X,Y) is constrained to be a diagonal matrix. To learn complex distributions, the CVAE implements the encoder Q(z|X,Y) and the decoder P(X|Y,z) with neural networks. For the first term on the right-hand side of the above equation, the reparameterization technique is used: first sample ε ∼ N(0, I) from the standard normal distribution, and then set
z = μ(X,Y) + Σ^{1/2}(X,Y) · ε
where Σ(X,Y) represents the variance used for normal-distribution sampling when the condition variable is Y and the input variable is X, Σ^{1/2}(X,Y) represents its non-negative square root, i.e. the standard deviation, and ε represents a sample drawn from the standard normal distribution N(0, I).
5) Trajectory prediction.
Fig. 3 and 4 depict the training and testing phases of trajectory prediction, respectively.
In the test phase, given Y and a z sampled from the standard normal distribution, the trained CVAE decoder can generate a predicted trajectory X′ that is nearly identical to an X obtained from P(X|Y). Using the user's current location area (denoted l_{n-1}) and the preceding n-2 locations (l_1, l_2, …, l_{n-2}), the probability distribution of the user's next location area can be predicted by sampling a sufficient number of times (i.e. a specified number, which can be set according to the actual situation).
For example, for each condition variable (the first n-1 base stations) Y, the hidden variable z is sampled 200 times, and according to the result obtained by sampling, the hidden variable z is input to a decoder to obtain different X ', and finally the probability distribution of X' is obtained in a statistical manner.
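The sample-and-count step just described might look like the following; the stand-in decoder is purely illustrative (the real decoder is the trained CVAE decoding network):

```python
import numpy as np
from collections import Counter

def predict_next_cell(decode, y, num_samples=200, d_z=60, seed=0):
    """Sample z ~ N(0, I) num_samples times, decode each (Y, z) pair
    into a predicted next cell, and return the empirical probability
    distribution over the decoded cells."""
    rng = np.random.default_rng(seed)
    counts = Counter(decode(y, rng.standard_normal(d_z))
                     for _ in range(num_samples))
    return {cell: c / num_samples for cell, c in counts.items()}

# stand-in decoder: maps the sign of z[0] to one of two cell IDs
probs = predict_next_cell(lambda y, z: 17 if z[0] > 0 else 23, y=None)
```

The returned dictionary is the statistical distribution over X′ described above, with one probability per candidate small base station.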
Second, pre-caching strategy
Under accurate track prediction, the excellent caching strategy can greatly reduce the waste of caching space. The invention provides a pre-caching strategy on the basis of track prediction, and specifies an SBS set which should cache a request file and the size of a file which should be cached by each SBS in the set. The specific caching strategy is shown in fig. 5.
1) Parameter definition.
Let the predicted probability distribution of the n-th location area be the vector p_i = (p_{i,1}, …, p_{i,M}), where p_{i,j} is the probability that the mobile user who sent the i-th request will access small base station j; let c_i = (c_{i,1}, …, c_{i,M}) denote the amount of data each SBS needs to pre-cache; here M is the total number of small base stations in the network.
Suppose the user making the i-th request stays for a time t_i^s at the base station where its s-th pre-cache scheduling is performed, and downloads an amount f_i^s at that base station. Suppose the user next enters base station j and stays there for a time t_j. The download speed at which the user obtains content directly from a small base station is R, and the download speed at which the user obtains content directly from the origin server (i.e. the content server in fig. 2) is r.
The caching utility is the gain from the part of the pre-cached content that the mobile user actually downloads, minus the penalty for the part of the pre-cached content that is wasted. The gain of the mobile user downloading one unit of content from the small base station's local cache is denoted α, and the penalty of wasting one unit of content is denoted β; the specific values of α and β can be set according to the actual situation.
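In code, the per-station utility defined here reduces to a one-liner (α and β default to 1, matching the example later in the text):

```python
def cache_utility(downloaded, cached, alpha=1.0, beta=1.0):
    """Utility of one SBS: gain alpha per unit of pre-cached content the
    user actually downloads, minus penalty beta per pre-cached unit that
    is wasted (mispredicted or over-provisioned)."""
    used = min(downloaded, cached)      # part of the cache the user consumed
    wasted = cached - used              # part of the cache left unused
    return alpha * used - beta * wasted
```

For example, a user who downloads 100 units against an 80-unit cache uses all of it (utility 80), while one who downloads only 50 units wastes 30 (utility 20).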
2) Pre-caching.
Denote the remaining size of the file requested by the mobile user by f_left and the size of a base station's cached content by f_cache; the (n-1)-th location area of the mobile user is the current location area, and the average dwell time of the mobile user in the first n-2 location areas is denoted T_aver.
Setting: the dwell time of the mobile user in the current location area is taken to be equal to T_aver, and the small-base-station set S_f is initialized to the empty set.
Traverse the M base stations (j = 1, 2, …, M). If the j-th small base station satisfies the caching requirement, put it into the set S_f. If the size f_cache of the j-th small base station's cached content is larger than the amount of content obtainable during the dwell time, T_aver · R, the amount of content the mobile user downloads at the j-th small base station is taken to be T_aver · R; otherwise it is the cached f_cache plus the remaining part obtained from the origin server during the dwell time, f_cache + (T_aver − f_cache/R) · r, where R is the download speed at which the mobile user obtains content directly from the small base station. Then compare f_left − f_i^s with T_aver · R and select the smaller as the amount of data pre-cached by the j-th small base station, i.e. c_{i,j} = min(f_left − f_i^s, T_aver · R).
In order to illustrate the effects of the above-described aspects of the present invention, a specific example is shown below.
This example uses the GPS trajectory dataset collected by the Geolife project. The dataset contains 182 users and 17621 trajectories, collected mainly in Beijing, China. The region with longitude 116.3 to 116.35 and latitude 39.97 to 40.02 is taken as the user-trajectory study area and divided into hexagons. Removing the portions through which almost no users pass leaves 77 hexagons, i.e. 77 SBS coverage areas. In the trajectory prediction process, the length n is set in the range 4 to 8, and the dimension of the latent variable z is set to 60. The proportions of the training and test sets are 80% and 20%, respectively. For each Y at test time, z is sampled 200 times from the standard normal distribution to obtain the distribution of the next location; finally, the set of probabilities that each small base station will be reached is obtained.
The verification effect is mainly divided into the following parts:
1) Trajectory prediction accuracy.
As shown in fig. 6, this example compares the trajectory prediction accuracy of the CVAE-based and hidden Markov model (HMM)-based approaches as the trajectory sample length n varies. CVAE performs best, reaching about 80% when n = 5, while the accuracy of the HMM reaches about 70% when n = 4. The HMM never matches the accuracy of CVAE, particularly when n is large. Although the prediction accuracy of CVAE decreases as n increases, it remains above 74%, demonstrating its applicability and effectiveness in real scenarios.
2) Utility of data caching.
The present example compares the caching performance of the CVAE-based strategy (CVAEBS), the HMM-based strategy (HMMBS), and the random-waypoint-model-based strategy (RWMBS). Assume a total of K = 300 requests are served, each associated with a particular trajectory in the test set. The download speeds R and r are set to 16 Mbps and 8 Mbps, respectively. In addition, the gain coefficient α and the penalty coefficient β per unit of cached and wasted content are both set to 1. During validation, the size of the requested file varies from 50 MB to 350 MB.
The results are shown in figs. 7 to 9. Fig. 7 depicts the average download speed of the three strategies; the average download speed increases with the file size, and CVAEBS always outperforms the other two strategies. Fig. 8 compares the average cache hit rate, defined as the ratio of the cached content downloaded by users to the total size pre-cached by the SBSs. RWMBS stays at a low level because random caching causes many cache misses, while both CVAEBS and HMMBS decrease as the file size increases; however, CVAEBS always maintains the highest cache hit rate. Fig. 9 shows the average utility per unit of cached file, where CVAEBS clearly achieves much higher caching utility than HMMBS and RWMBS. Note that as the file size increases, the average utility of CVAEBS and HMMBS first increases and then decreases. These results show that the invention has good performance and application prospects for data caching.
Through the above description of the embodiments, it is clear to those skilled in the art that the above embodiments can be implemented by software, and can also be implemented by software plus a necessary general hardware platform. With this understanding, the technical solutions of the embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A pre-caching method based on trajectory prediction is characterized by comprising the following steps:
learning a moving mode of the mobile user according to n-1 position areas passed by the mobile user in the past, thereby predicting probability distribution of the nth position area; wherein, each position area corresponds to the coverage area of one small base station;
and selecting the small base stations meeting the caching requirement by combining the predicted probability distribution of the nth position area and the caching utility in the network, and caching a part of data requested by the mobile user in each small base station meeting the caching requirement in advance.
2. The pre-caching method based on track prediction as claimed in claim 1, wherein a conditional variational automatic encoder is used to predict the track of the user, so as to predict the probability distribution of the n-th position area according to the n-1 position areas passed by the mobile user in the past;
the conditional variation automatic encoder realizes sample reconstruction under the constraint of a conditional vector, and firstly compresses vectors of n-1 position regions passing through in the past under the constraint of the condition through an encoding layer; then, decompressing and reconstructing the compressed vector under the same condition constraint through a decoding layer to obtain a vector containing an nth position area, wherein the aim of updating the parameters of the conditional variation automatic encoder is to minimize reconstruction errors; thus, in the test phase, the probability distribution of the nth position region is predicted by a specified number of samples.
3. The pre-caching method based on trajectory prediction according to claim 1 or 2,
in the training phase, each trajectory sample in the dataset is represented by the vector X = [l1, l2, ..., ln], and the n-1 location areas the mobile user has passed through are represented by the vector Y = [l1, l2, ..., ln-1];
constructing an encoder Q(z|X,Y), defined to express the feature distribution, which encodes the input sample trajectory vector X into a latent variable z so as to approximate P(z|X,Y) given the vector Y, a Gaussian distribution being selected as the constrained form of the posterior probability distribution:
Q(z|X,Y) = N(z | μ(X,Y), Σ(X,Y))
where P(z|X,Y) denotes the probability of encoding the input sample trajectory vector X into the latent variable z given the vector Y; μ(X,Y) denotes the mean used for normal-distribution sampling when the condition variable is Y and the input variable is X; Σ(X,Y) denotes the corresponding variance; and N(·) denotes a normal distribution;
and constructing a probabilistic decoder P(X|Y,z), which uses the latent variable z produced by the encoder, combined with the vector Y, to reconstruct a sample trajectory vector X' containing the n-th location area.
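The encoder/decoder pair of claims 2-3 can be sketched numerically. The following is a minimal illustration, not the patented implementation: location areas are one-hot encoded, the encoder and decoder are single affine layers with random (untrained) weights, and all dimensions (M, n, d_z) and the sample trajectory are made-up values for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8        # number of location areas / small base stations (illustrative)
n = 5        # trajectory length
d_z = 2      # latent dimension

def one_hot_traj(ids, M):
    """Flatten a trajectory of area ids into one concatenated one-hot vector."""
    out = np.zeros(len(ids) * M)
    for k, i in enumerate(ids):
        out[k * M + i] = 1.0
    return out

# Illustrative encoder Q(z|X,Y): one affine layer producing mu and log-variance.
W_mu = rng.normal(0, 0.1, (d_z, (2 * n - 1) * M))
W_lv = rng.normal(0, 0.1, (d_z, (2 * n - 1) * M))

def encode(X, Y):
    h = np.concatenate([X, Y])          # condition by concatenating X and Y
    return W_mu @ h, W_lv @ h           # mu(X,Y), log Sigma(X,Y)

# Illustrative decoder P(X|Y,z): reconstructs a distribution over the n-th area.
W_dec = rng.normal(0, 0.1, (M, d_z + (n - 1) * M))

def decode(z, Y):
    logits = W_dec @ np.concatenate([z, Y])
    e = np.exp(logits - logits.max())
    return e / e.sum()                  # softmax: probabilities over the M areas

traj = [1, 3, 3, 4, 2]                  # past areas l1..l(n-1) plus the true ln
X = one_hot_traj(traj, M)
Y = one_hot_traj(traj[:-1], M)

mu, logvar = encode(X, Y)
eps = rng.standard_normal(d_z)
z = mu + np.exp(0.5 * logvar) * eps     # reparameterized sample of z
p_next = decode(z, Y)                   # predicted distribution of the n-th area
print(p_next.shape)
```

In the test phase, as claim 2 describes, this decoding step would be repeated for a specified number of samples of z and the resulting distributions averaged.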
4. The method of claim 3, wherein the training of the parameters of the conditional variational autoencoder aims to maximize, for each sample trajectory vector X in the dataset, the log-likelihood function conditioned on the vector Y, according to the following formula:
logP(X|Y) = log ∫ P(X|Y,z;θ) P(z) dz
where P(z) denotes the prior probability distribution of the latent variable z;
the KL divergence between P (z | X, Y) and Q (z | X, Y) is:
D[Q(z|X,Y)||P(z|X,Y)]=EZ~Q(·|X,Y)[logQ(z|X,Y)-logP(z|X,Y)]
wherein D is [.]The magnitude of the KL divergence between Q (z | X, Y) and P (z | X, Y) is shown to measure the degree of similarity between the actual values P (z | X, Y) and Q (z | X, Y); eZ~Q(·|X,Y)Represents the expectation under the condition that the hidden variable z obeys an approximate distribution of Q (. | X, Y), and z-Q (. | X, Y) is understood as a whole and represents that the random variable z obeys a probability distribution;
using Q(z|X,Y) in place of P(z|X,Y) in the log-likelihood function and rearranging the equation yields:
logP(X|Y) − D[Q(z|X,Y) || P(z|X,Y)] = E_{z~Q(·|X,Y)}[logP(X|Y,z)] − D[Q(z|X,Y) || P(z|Y)]
where P(z|Y) denotes the probability of obtaining the latent variable z given the vector Y;
if the approximation quality of Q is sufficient, the second term on the left side of the above equation approaches 0; meanwhile, P(z|Y) is assumed to be a standard normal distribution; for the first term on the right side, the reparameterization trick is used: a standard normal sample ε ~ N(0, I) is drawn first, and then:
z = μ(X,Y) + Σ^{1/2}(X,Y) · ε
where Σ(X,Y) denotes the variance used for normal-distribution sampling when the condition variable is Y and the input variable is X, Σ^{1/2}(X,Y) denotes its square root, i.e. the nonnegative standard deviation, and ε denotes a sample drawn from the standard normal distribution N(0, I).
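Under the standard-normal prior assumed for P(z|Y), the KL term D[Q(z|X,Y) || P(z|Y)] has a well-known closed form when Q is a diagonal Gaussian, and the reparameterized sample z = μ + Σ^{1/2}·ε can be computed directly. A small sketch of both pieces (function names and the numeric values are illustrative, not from the patent):

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, logvar):
    """Closed-form D[ N(mu, diag(exp(logvar))) || N(0, I) ]."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def reparameterize(mu, logvar, rng):
    """z = mu + Sigma^{1/2} * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

mu = np.array([0.5, -0.2])
logvar = np.array([0.0, -1.0])
print(round(kl_diag_gaussian_to_std_normal(mu, logvar), 4))  # → 0.3289

z = reparameterize(mu, logvar, np.random.default_rng(0))
```

Gradients flow through mu and logvar (not through eps), which is the point of the reparameterization in the training objective above.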
5. The method of claim 1, wherein selecting the small base stations satisfying the caching requirement by combining the predicted probability distribution of the n-th location area with the caching utility in the network, and pre-caching in each such small base station a portion of the data requested by the mobile user, comprises:
the predicted probability distribution of the n-th location area is denoted p^i = [p_1^i, p_2^i, ..., p_M^i], representing the probabilities that the mobile user who sent the i-th request will next access each small base station, where M is the total number of small base stations in the network; the caching utility equals the gain from the portion of the pre-cached content downloaded by the mobile user minus the penalty from the wasted portion of the pre-cached content; α denotes the gain per unit of content the mobile user downloads from the local cache of a small base station, and β denotes the penalty per unit of wasted content; the remaining size of the file requested by the mobile user is denoted f_left;
the (n-1)-th location area of the mobile user is its current location area, and the average dwell time of the mobile user in the first n-2 location areas is denoted T_aver;
initialization: the dwell time of the mobile user in the current location area is set equal to T_aver, and the set of small base stations S_f is set to the empty set;
the M small base stations are traversed; if the j-th small base station satisfies the caching requirement, it is put into the set S_f; if the size f_cache of the content cached at the j-th small base station is larger than the size T_aver·R of the content obtainable during the dwell time, the content size f_i^s downloaded by the mobile user at the j-th small base station is taken to be T_aver·R; otherwise f_i^s equals the f_cache downloaded from the base-station cache plus the remaining portion downloaded from the source server during the dwell time, f_i^s = f_cache + (T_aver − f_cache/R)·r; where j = 1, 2, ..., M, R denotes the download speed at which the mobile user obtains content directly from the small base station, and r denotes the download speed at which content is obtained from the source server; f_left − f_i^s is then compared with T_aver·R, and the smaller of the two is selected as the data size to be pre-cached at the j-th small base station.
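The per-station sizing of claim 5 can be sketched as follows. The caching condition on the j-th station is not fully specified in the text above, so a simple probability threshold p_min is assumed here as a stand-in; the expression for f_i^s when the cache is smaller than T_aver·R (drain the cache at speed R, then fetch from the source at speed r for the remaining dwell time) is likewise a reconstruction from the surrounding description, and all numeric values are illustrative.

```python
def precache_sizes(p, f_left, f_cache, T_aver, R, r, p_min=0.2):
    """For each station j passing the (assumed) condition p_j >= p_min, return
    the pre-cache size min(f_left - f_s, T_aver * R), where f_s is the content
    the user can obtain during the dwell time at station j."""
    sizes = {}
    for j, pj in enumerate(p):
        if pj < p_min:                 # hypothetical stand-in for the caching condition
            continue
        if f_cache[j] >= T_aver * R:   # the cache alone covers the whole dwell time
            f_s = T_aver * R
        else:                          # drain cache at R, then fetch from source at r
            f_s = f_cache[j] + (T_aver - f_cache[j] / R) * r
        # pre-cache the smaller of what remains and what fits in the dwell time
        sizes[j] = min(f_left - f_s, T_aver * R)
    return sizes

# Illustrative numbers: 3 stations, 100-unit file remaining, 5 s dwell, R=10, r=2.
print(precache_sizes([0.6, 0.3, 0.1], f_left=100, f_cache=[50, 10, 0],
                     T_aver=5, R=10, r=2))   # → {0: 50, 1: 50}
```

Station 2 is skipped by the threshold; stations 0 and 1 are both capped at T_aver·R = 50 units, the most the user could consume during the dwell time.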
CN201911045208.4A 2019-10-30 2019-10-30 Pre-caching method based on track prediction Pending CN110602653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911045208.4A CN110602653A (en) 2019-10-30 2019-10-30 Pre-caching method based on track prediction


Publications (1)

Publication Number Publication Date
CN110602653A true CN110602653A (en) 2019-12-20

Family

ID=68852161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911045208.4A Pending CN110602653A (en) 2019-10-30 2019-10-30 Pre-caching method based on track prediction

Country Status (1)

Country Link
CN (1) CN110602653A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343147A (en) * 2020-02-05 2020-06-26 北京中科研究院 Network attack detection device and method based on deep learning
CN113242574A (en) * 2021-04-30 2021-08-10 平安科技(深圳)有限公司 Load balancing method, system, computer equipment and readable storage medium
EP4084530A1 (en) * 2021-04-30 2022-11-02 Koninklijke Philips N.V. System and method for efficient upload or download of transmission data over mobile access devices
WO2022229454A1 (en) * 2021-04-30 2022-11-03 Koninklijke Philips N.V. System and method for efficient upload or download of transmission data over mobile access devices
CN117459901A (en) * 2023-12-26 2024-01-26 深圳市彩生活网络服务有限公司 Cloud platform data intelligent management system and method based on positioning technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131182A (en) * 2016-07-12 2016-11-16 重庆邮电大学 A kind of cooperation caching method based on Popularity prediction in name data network
CN106304147A (en) * 2016-07-25 2017-01-04 北京航空航天大学 A kind of cooperation caching method based on traffic infrastructure under car networked environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨崇旭 et al., "A Mobility-Aware Caching Strategy with Discrete Distribution", Journal of Chinese Computer Systems (《小型微型计算机系统》) *
陈正勇 et al., "Mobility-Aware Pre-caching Strategy under a Deep Learning Framework", Journal of Chinese Computer Systems (《小型微型计算机系统》) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220