CN112911717A - Method for transmitting MDS (Maximum Distance Separable) coded data packets of a fronthaul network

Info

Publication number
CN112911717A
Authority
CN
China
Prior art keywords: file, transmission, network, user, time
Prior art date
Legal status
Granted
Application number
CN202110167807.4A
Other languages: Chinese (zh)
Other versions: CN112911717B (en)
Inventor
刘越
周一青
崔新雨
刘玲
石晶林
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN202110167807.4A
Publication of CN112911717A
Application granted
Publication of CN112911717B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00 Local resource management
    • H04W 72/04 Wireless resource allocation
    • H04W 72/044 Wireless resource allocation based on the type of the allocated resource
    • H04W 72/0446 Resources in time domain, e.g. slots or frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/06 Testing, supervising or monitoring using simulated traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/0231 Traffic management, e.g. flow control or congestion control based on communication conditions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method for transmitting MDS (Maximum Distance Separable) encoded data packets over a fronthaul network, where the fronthaul network is located in a distributed Fog-RAN (fog radio access network) architecture. The network architecture comprises a centralized BBU (baseband unit) resource pool, distributed RRHs (remote radio heads) with caching capability, and the fronthaul network between the BBU pool and the RRHs. The BBU pool transmits data to the RRHs through the fronthaul network, and the RRHs transmit the data to users through the access network. Each RRH can cache M files. The network file library contains a number of files; each file is divided into n subfiles of size S/n, and each subfile is encoded into a data packet of size S/n using an MDS code, where S is the file size and n is a positive integer. The method comprises: according to the popularity p_f of each file W_f, determining the number of packets m_f of W_f cached at each RRH, with m_f proportional to popularity, where f ∈ F and the set F = {1, 2, ..., F}. According to embodiments of the invention, the transmission bandwidth requirement of the fronthaul network can be reduced.

Description

Method for transmitting MDS (Maximum Distance Separable) coded data packets of a fronthaul network
Technical Field
The invention relates to wireless communication systems, and in particular to MDS coded caching with asynchronous user requests.
Background
When the centralized C-RAN (centralized radio access network) architecture is endowed with RRH (remote radio head) caching capability, the Fog-RAN (fog radio access network) architecture is formed, which reduces the fronthaul bandwidth requirement by fusing storage and communication [TS16]. In the Fog-RAN architecture, the limited RRH cache capacity is one of the factors limiting the cache hit rate. To reduce cost, the RRH cache capacity is typically on the order of TB, while the file library occupies space on the order of PB (1 PB = 1024 TB). Assuming the probability that a user requests a file follows the file popularity distribution, the cache hit rate remains low even when the popularity distribution is strongly biased (i.e. very few files have very high popularity) [SGS18]. Therefore, how to use the limited RRH cache space to improve the cache hit rate is a core challenge in reducing the fronthaul transmission bandwidth requirement of the Fog-RAN architecture. Distributed caching is an effective way to address the limited RRH cache capacity: files in the library are divided into several subfiles cached at different RRHs, multiple RRHs jointly transmit different subfiles to a user, and the user receives them by successive interference cancellation [JC18]. When the user successfully receives all the subfiles making up the requested file, the requested file can be recovered; the user request then hits the RRH cache, reducing the fronthaul transmission bandwidth requirement. However, distributed caching places a very strict requirement on subfile transmission: the user must successfully receive every subfile to recover the requested file. To address this problem, researchers have combined coding and caching and proposed MDS coded caching [BGL15]. In MDS coded caching, the subfiles are encoded into data packets, the number of encoded packets is larger than the number of subfiles, and a user can recover the requested file by receiving any specified number of packets. With MDS coded caching, a user only needs to care about the number of packets received, not their specific contents, which relaxes the requirement on the packet transmission process.
In the existing research on MDS coded caching, [BGL15] assumes that the encoded packets of each file are cached at multiple small base stations without overlap, and reduces the backhaul traffic between the macro base station and the small base stations in a heterogeneous network by optimizing, under the cache capacity constraint of each small base station, the number of packets cached for files of different popularity. Based on [BGL15], [OG18] considers user mobility: the user moves along certain paths with certain probabilities, and if by the maximum transmission delay the user has not collected the packets needed to recover the original file, the remaining packets are fetched from the macro base station, increasing the backhaul traffic. However, both [BGL15] and [OG18] assume that wireless transmission in the network is an error-free link and do not consider the influence of wireless transmission performance on the system. In a distributed caching scenario, a user needs to receive different packets from multiple small base stations, and if several small base stations transmit packets to the user simultaneously on the same frequency, serious interference arises. Unlike [BGL15] and [OG18], [XT17] and [KHC19] consider the wireless channel transmission of MDS encoded packets, and study the influence on backhaul traffic of, respectively, non-overlapping caching of equal numbers of packets at the RRHs and probabilistic caching at the RRHs. However, [XT17] and [KHC19] only discuss the case of a single user receiving packets in the network and cannot be generalized to the case of multiple users requesting files.
The above studies consider the transmission of MDS encoded packets over the access network between RRHs and users. In contrast, [WLL19] and [LWZ17] propose methods for wireless transmission of MDS encoded packets over the fronthaul network in a Fog-RAN architecture. In the cached-file transmission stage, the BBU (baseband unit) first transmits packets to the RRHs by wireless fronthaul transmission. Since wireless transmission has a multicast property, every RRH can receive the packets transmitted by the BBU. Each RRH then combines the received packets with its cached packets to recover the original file, and several RRHs form a service cluster that jointly transmits the requested file to multiple users over the access network. However, [WLL19] and [LWZ17] both assume that multiple users issue their file requests at the same time. In a real system, the times at which multiple users request files are asynchronous and random, so the methods of [WLL19] and [LWZ17] are no longer applicable.
Reference documents:
[TS16] Tandon R, Simeone O. Harnessing cloud and edge synergies: toward an information theory of fog radio access networks[J]. IEEE Communications Magazine, 2016, 54(8): 44-50.
[SGS18] Sermpezis P, Giannakas T, Spyropoulos T, Vigneri L. Soft cache hits: improving performance through recommendation and delivery of related content[J]. IEEE Journal on Selected Areas in Communications, 2018, 36(6): 1300-1313.
[JC18] Jiang D, Cui Y. Partition-based caching in large-scale SIC-enabled wireless networks[J]. IEEE Transactions on Wireless Communications, 2018, 17(3): 1660-1675.
[BGL15] Bioglio V, Gabry F, Land I. Optimizing MDS codes for caching at the edge[C]. IEEE Globecom 2015, 2015: 1-6.
[OG18] Ozfatura E, Gunduz D. Mobility and popularity-aware coded small-cell caching[J]. IEEE Communications Letters, 2018, 22(2): 288-291.
[XT17] Xu X, Tao M. Modeling, analysis, and optimization of coded caching in small-cell networks[J]. IEEE Transactions on Communications, 2017, 65(8): 3415-3428.
[KHC19] Ko D, Hong B, Choi W. Probabilistic caching based on maximum distance separable code in a user-centric clustered cache-aided wireless network[J]. IEEE Transactions on Wireless Communications, 2019, 18(3): 1792-1804.
[WLL19] Wu X, Li Q, Leung Victor C M, Ching P C. Joint fronthaul multicast and cooperative beamforming for cache-enabled cloud-based small cell networks: An MDS codes-aided approach[J]. IEEE Transactions on Wireless Communications, 2019, 18(10): 4970-4982.
[LWZ17] Liao J, Wong K K, Zhang Y, Zheng Z, Yang K. Coding, multicast, and cooperation for cache-enabled heterogeneous small cell networks[J]. IEEE Transactions on Wireless Communications, 2017, 16(10): 6838-6853.
Disclosure of the Invention
The present invention is directed to the above problems. According to a first aspect of the present invention, a method for transmitting MDS encoded data packets over a fronthaul network is provided, where the fronthaul network is located in a distributed Fog-RAN network architecture; the network architecture comprises a centralized BBU resource pool, distributed remote radio heads (RRHs) with caching capability, and the fronthaul network between the BBU pool and the RRHs; the BBU pool transmits data to the RRHs through the fronthaul network, and the RRHs transmit the data to users through the access network; each RRH can cache M files; the network file library contains a number of files, each file is divided into n subfiles of size S/n, and each subfile is encoded into a data packet of size S/n using an MDS code, where S is the file size and n is a positive integer. The method comprises:
Step 100: according to the popularity p_f of each file W_f, determining the number of packets m_f of W_f cached at each RRH, with m_f proportional to popularity, where f ∈ F and the set F = {1, 2, ..., F}.
In an embodiment of the present invention, there are U users in the network, forming a user set U = {1, 2, ..., U}, and the method further comprises:
Step 200: receiving the request for file W_f issued by user u_{f,i}, and determining the response time slot g_{f,i} according to the popularity of file W_f, such that the delay from request to response is larger for high-popularity files than for low-popularity files, wherein the user set U is divided into F subsets U_1, U_2, ..., U_F according to the files requested by the users, and the i-th user in subset U_f is denoted u_{f,i}, i = 1, 2, ..., |U_f|;
Step 300: according to m_f and g_{f,i} of each file W_f, calculating the bandwidth ratio ρ_f(g) allocated to the data packets of file W_f transmitted by the fronthaul network at time T(g), where T(g) = g·T_S, g = 1, 2, ..., G.
In one embodiment of the present invention, the step 100 comprises:
Step 110: for each file f ∈ F in the file library, calculating the theoretical value m̂_f = p_f·n·M of m_f, and calculating the correction value m̃_f of m̂_f according to the following formula:
m̃_f = min(⌈m̂_f⌉, n − 1),
where ⌈·⌉ denotes the ceiling operation.
In one embodiment of the present invention, step 100 further comprises:
Step 120: when Σ_{f∈F} m̃_f ≠ n·M, performing a secondary correction on the correction values m̃_f according to the relationship between Σ_{f∈F} m̃_f and n·M, so that Σ_{f∈F} m̃_f = n·M:
when Σ_{f∈F} m̃_f > n·M, the correction values m̃_f are reduced towards zero in order of file popularity from low to high (f = F, F−1, ..., 2, 1) until Σ_{f∈F} m̃_f = n·M;
when Σ_{f∈F} m̃_f < n·M, the correction values m̃_f are increased towards n − 1 in order of file popularity from high to low (f = 1, 2, ..., F−1, F) until Σ_{f∈F} m̃_f = n·M;
Step 130: letting m_f = m̃_f.
In one embodiment of the present invention, the step 200 comprises:
Step 210: when the file requested by user u_{f,i} is one of the higher-popularity files 1, 2, ..., F/2, the latest response time of the user request is the maximum value of the latest response time; when the file requested by user u_{f,i} is one of the lower-popularity files F/2+1, F/2+2, ..., F, the latest response time of the user request is the minimum value of the latest response time.
In one embodiment of the present invention, step 200 further includes sequentially determining, in order of increasing response time T(g), g = 1, 2, ..., G, the user requests that can be satisfied at each response time T(g).
In one embodiment of the invention, the method comprises the following steps:
Step 221: when a response time T(g) of user requests arrives, searching for the user requests whose latest response time is T(g); any such request that has not yet been satisfied is a priority request;
Step 222: determining the search range at time T(g), the search range being the user requests that can be satisfied at time T(g), and determining the user requests in the search range other than the priority requests as non-priority requests;
Step 223: for a single priority request, calculating the maximum allowable fronthaul transmission delay and the corresponding minimum required fronthaul transmission rate;
Step 224: for multiple priority requests for the same file, determining the maximum allowable fronthaul transmission delay as the minimum of the maximum allowable fronthaul transmission delays of these same-file requests, and calculating the corresponding minimum transmission rate.
In one embodiment of the present invention, the method further comprises:
Step 225: if the minimum value of the maximum allowable transmission delay determined in step 224 does not satisfy the constraint that, for a file transmitted later in the access network, the start time of its access network transmission must not be earlier than the end time of the access network transmission of the file transmitted earlier, reducing the minimum value of the maximum allowable transmission delay until the constraint is satisfied, calculating the corresponding minimum transmission rate, and determining the ratio ρ_f(g) of the bandwidth allocated to the n − m_f data packets of the file transmitted by the fronthaul network at time T(g) to the total fronthaul bandwidth.
According to a second aspect of the present invention, a computer-readable storage medium is provided, in which one or more computer programs are stored which, when executed, are adapted to implement the method of fronthaul network MDS encoded packet transmission of the present invention.
According to a third aspect of the invention there is provided a computing system comprising: a storage device, and one or more processors; wherein the storage means is adapted to store one or more computer programs which, when executed by the processor, are adapted to implement the method of the invention for fronthaul network MDS encoded packet transmission.
Compared with the prior art, the method has the advantages that the bandwidth requirement of the fronthaul network can be reduced, and the performance evaluation result shows that under the condition of different popularity bias coefficients, compared with the method that different popularity files cache the same number of data packets in the RRH and the system response time is fixed as the time slot end time, the method can reduce the transmission bandwidth requirement of the fronthaul network by 68.91%.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 shows a schematic diagram of a Fog-RAN network architecture employing a wireless fronthaul network according to an embodiment of the invention;
figure 2 illustrates a schematic diagram of an MDS encoding process according to an embodiment of the invention;
FIG. 3 is a diagram illustrating buffering and transmission of MDS encoded data packets, according to an embodiment of the invention;
FIG. 4 shows a comparison of fronthaul network bandwidth requirements under different popularity bias coefficients;
fig. 5 shows a comparison of fronthaul network bandwidth requirements for different user distribution densities.
Detailed Description
In order to solve the problems in the background art, the inventor of the present application has developed an MDS encoding cache and content transmission method for unknown asynchronous user request information through research.
As shown in fig. 1, the Fog-RAN network architecture with edge caching consists of three parts: a centralized BBU resource pool, distributed remote radio heads (RRHs) with caching capability, and the fronthaul network between the BBU pool and the RRHs. The invention considers wireless fronthaul transmission: the BBU pool transmits fronthaul data to the RRHs wirelessly, and the RRHs transmit the data to the users through the access network. Transmitting a file to a user through the Fog-RAN therefore comprises two stages, fronthaul transmission and access network transmission. The method assumes that frequency division multiplexing is used in the fronthaul transmission stage, so that several files can be transmitted simultaneously, and that time division multiplexing is used in the access network transmission stage to avoid interference, so that only one file is transmitted at a time.
In the file caching and transmission optimization of the invention, the case of multiple users is considered, in particular the case in which several users request the same file. The invention divides the file transmission period into G time slots, each slot itself being a time interval; after receiving a user's request for a file, the system responds to the request, and the decision to respond to requests is made only at the end of each time slot.
In the fronthaul transmission stage, under the assumptions of the invention, the RRHs receive the MDS packets required for file recovery at the same time and recover the file at the same time. Because the RRH cache is limited, the invention assumes that an RRH transmits the file to the user through the access network immediately after recovering it. However, in access network transmission the transmission conditions of the users differ, so the times at which they receive the same file differ, and different files cannot be transmitted in the access network simultaneously; therefore, in access network transmission, the transmission slot length of a file equals the maximum access network transmission delay among the users requesting that file, and only after the access network transmission for these users has finished can the next file be transmitted in the access network.
The invention aims to reduce the fronthaul transmission bandwidth requirement. First, since the RRH cache is limited, the files with high popularity that users request more frequently are cached at the RRHs; for these files the amount of data transmitted from the BBU to the RRHs can be reduced, thereby reducing the fronthaul transmission bandwidth requirement.
Secondly, for files with high popularity, the later the system responds to user requests, the more user requests can potentially be satisfied by a single transmission and the fewer file transmissions are needed, which reduces the fronthaul transmission bandwidth requirement; of course, the response must not be so late that the allowed service delay is exceeded. Suppose there are F files in the file library, each of size S bits. Define the set F = {1, 2, ..., F} as the set of files, and denote the f-th file in the library by W_f (f ∈ F). In the file library, the frequency with which a file is accessed by users is defined as the popularity of the file. Assuming the popularity of the files in the library obeys a Zipf distribution, the popularity p_f of file W_f is expressed as
p_f = f^(−β) / Σ_{j=1}^{F} j^(−β).
In the Zipf distribution expression, β is the popularity bias coefficient (β ≥ 0). The larger β is, the more biased the file popularity distribution is and the more frequently the files of higher popularity are accessed. From the Zipf expression, the popularity distribution of the F files in the library satisfies p_1 > p_2 > ... > p_F and Σ_{f=1}^{F} p_f = 1.
For example, with 3 files, F = {1, 2, 3} and β = 1, the popularities are
p_1 = 1/(1+(1/2)+(1/3)) = 1/(11/6) = 6/11,
p_2 = (1/2)/(1+(1/2)+(1/3)) = (1/2)/(11/6) = 3/11,
p_3 = (1/3)/(1+(1/2)+(1/3)) = (1/3)/(11/6) = 2/11.
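The Zipf expression is easy to check numerically; the short Python sketch below is an illustration added here and reproduces the F = 3, β = 1 example.

```python
# Zipf popularity p_f = f^(-beta) / sum_j j^(-beta), reproducing the 3-file example.
def zipf_popularity(F, beta):
    weights = [f ** (-beta) for f in range(1, F + 1)]
    total = sum(weights)
    return [w / total for w in weights]

print(zipf_popularity(3, 1.0))  # [0.5454..., 0.2727..., 0.1818...] = [6/11, 3/11, 2/11]
```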
Assume there are R RRHs in the network, forming the set R = {1, 2, ..., R}; each RRH r can cache M files (r ∈ R), and the cache capacity is smaller than that required to hold all files in the library (M < F). Meanwhile, suppose there are U users in the network, forming the user set U = {1, 2, ..., U}. The workflow of the Fog-RAN architecture is divided into two stages: the cache file placement stage and the cache file transmission stage. The cache file transmission stage comprises three processes, namely the users issuing file requests, fronthaul transmission and access network transmission, which are described in detail below.
In the cache file placement stage, the BBU performs MDS encoding on each file in the file library and caches the encoded packets at the RRHs. For each file W_f of size S bits in the library, the MDS encoding flow is shown in fig. 2. First, the file is divided into n subfiles, each of size S/n. Each subfile is then encoded into a data packet, each packet also of size S/n. The encoded data packets are placed in the RRH caches; the number of packets cached at each RRH is m_f, and the packet contents cached by different RRHs are different. By the property of MDS codes, an RRH can recover file W_f by receiving any n data packets. Since the RRH cache capacity is limited, the invention assumes m_f < n. Thus, to recover file W_f, besides the m_f cached packets, an RRH also needs to receive n − m_f data packets, which are transmitted wirelessly over the fronthaul network, as described in more detail below.
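As an illustration of the MDS property relied on here (any n of the coded packets suffice to recover the file), the following Python sketch encodes n subfile symbols into a larger number of packets with a Reed-Solomon-style polynomial code over a small prime field. This is an added toy example under assumed parameters, not the encoding specified by the patent.

```python
# Toy MDS (Reed-Solomon-style) code over the prime field GF(257): n subfile
# symbols are encoded into more packets, and any n packets recover the file.
P = 257  # small prime modulus, chosen only for illustration

def mds_encode(subfiles, n_packets):
    """Evaluate the polynomial with coefficients `subfiles` at points 1..n_packets."""
    return [(x, sum(s * pow(x, j, P) for j, s in enumerate(subfiles)) % P)
            for x in range(1, n_packets + 1)]

def mds_decode(received, k):
    """Recover the k subfile symbols from any k received (point, value) packets."""
    pts = received[:k]
    # Solve the k x k Vandermonde system by Gaussian elimination mod P.
    A = [[pow(x, j, P) for j in range(k)] + [v] for x, v in pts]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)          # modular inverse via Fermat
        A[col] = [a * inv % P for a in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
    return [row[k] for row in A]

subfiles = [10, 20, 30, 40]            # n = 4 subfile symbols of one file
packets = mds_encode(subfiles, 7)      # 7 coded packets (e.g. 4 cached, 3 via fronthaul)
print(mds_decode(packets[2:6], 4))     # any 4 packets recover [10, 20, 30, 40]
```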
The cache file transmission stage operates at peak traffic. In this stage, users first issue file requests. The file transmission period in the network is divided into G time slots, each of length T_S, and the set G = {1, 2, ..., G} denotes the set of all slots. Within the period G·T_S, the U users request files from the library according to the file popularity distribution: the higher a file's popularity, the more users request it. Suppose each user issues a file request only once within the period G·T_S. Considering the F files in the library, the user set U is divided into F subsets U_1, U_2, ..., U_F according to the files requested; subset U_f contains |U_f| users (f ∈ F), all of whom request file W_f. The i-th user in set U_f is denoted u_{f,i} (i = 1, 2, ..., |U_f|).
The invention considers the situation of asynchronous multi-user requests: user u_{f,i} issues a request for file W_f at a random time within the period G·T_S. Define t_req(f,i) as the time at which user u_{f,i} requests file W_f. After user u_{f,i} issues the file request at time t_req(f,i), the system responds to the request of user u_{f,i} at a later time, and file W_f is transmitted to user u_{f,i}. To simplify the analysis, the system responds to user requests only at the end times of the slots; assume the system responds to the request of user u_{f,i} at the end time of slot g_{f,i}, i.e. at time g_{f,i}·T_S.
The process of the system transmitting the file to the user can be divided into two stages, namely a fronthaul network transmission stage of BBU transmission to RRH and an access network transmission stage of RRH transmission to the user.
In the fronthaul wireless transmission stage, by the property of MDS coding, to recover file W_f an RRH should receive through the fronthaul network the remaining n − m_f data packets whose contents differ from the m_f cached packets. Considering the multicast property of wireless fronthaul transmission, each RRH can receive the packets transmitted by the BBU. As shown in fig. 3, when the remaining packets required by the R RRHs are the same, the number of packets transmitted over the fronthaul network is minimal, namely n − m_f. To reduce the fronthaul transmission bandwidth requirement, the invention assumes that the number of packets of file W_f transmitted over the fronthaul network is n − m_f.
When a user issues a file request, the BBU transmits wirelessly over the fronthaul network, to all R RRHs, the remaining n − m_f data packets needed to recover file W_f. Considering that the fronthaul network may transmit packets belonging to several files at the same time, and to avoid interference between the packets of different files, it is assumed that the fronthaul network transmits the packets of different files simultaneously by frequency division multiplexing. Define ρ_f(g) as the ratio of the transmission bandwidth allocated at time T(g) = g·T_S to the n − m_f packets of file W_f transmitted over the fronthaul network to the total fronthaul bandwidth, 0 ≤ ρ_f(g) ≤ 1; ρ_f(g) = 0 indicates that the fronthaul network does not transmit packets of file W_f at time T(g). Suppose the maximum fronthaul transmission rate is C, due to the limited bandwidth. Thus, the transmission rate of the n − m_f packets of file W_f over the fronthaul network is ρ_f(g)·C, and at time T(g) the fronthaul transmission delay of the n − m_f packets of file W_f is
D_F(g, f) = (n − m_f)·(S/n) / (ρ_f(g)·C).    (2)
Define the 0-1 variable x_f(g) indicating whether the fronthaul network transmits, at time T(g), the n − m_f packets needed to recover file W_f: x_f(g) = 1 if ρ_f(g) > 0, and x_f(g) = 0 otherwise.
Thus, the fronthaul transmission bandwidth requirement Q over the period 0 ≤ t ≤ G·T_S, given in (4), accumulates the fronthaul bandwidth required each time the system responds to user requests. Define N_g as the number of files transmitted by the fronthaul network at time T(g) = g·T_S; in terms of the indicator x_f(g), N_g is expressed as
N_g = Σ_{f∈F} x_f(g).
Define the set F_g = {g(1), g(2), ..., g(N_g)} as the set of the N_g files transmitted by the fronthaul network at time T(g); F_g is a subset of F. Denote by U_g(θ) the set of users requesting file W_g(θ) within the period G·T_S (θ = 1, 2, ..., N_g), and define M_g,g(θ) as the set of positions, within U_g(θ), of the users served by the transmission of file W_g(θ) at time T(g). Thus, the users served by the transmission of file W_g(θ) at time T(g) are the users of U_g(θ) whose positions belong to M_g,g(θ).
For example, suppose that at time T(1) = T_S the fronthaul network transmits the packets of 3 files W_3, W_5 and W_F (F > 5), the numbers of packets being n − m_3, n − m_5 and n − m_F; then N_1 = 3. The set F_g = {g(1), g(2), g(3)} denotes the set of 3 files transmitted by the fronthaul network at time T(g) (g = 1), where g(1) is file W_3, g(2) is file W_5 and g(3) is file W_F; F_g is thus a subset of F. In this example, when g = 1, N_g = 3, g(1) = 3, g(2) = 5 and g(3) = F. The sets U_g(1) = U_3, U_g(2) = U_5 and U_g(3) = U_F denote the sets of users requesting files W_3, W_5 and W_F within the period G·T_S. Note that within G·T_S each of the U users issues only one file request, so the user sets requesting different files are disjoint. Suppose U_3 = {1, 4, 6}, i.e. users 1, 4 and 6 request file 3 within the period G·T_S, and suppose U_5 = {3, 5, 9} and U_F = {2, 8, 12}.
Because at time T(g) = g·T_S (g = 1) we have N_g = 3, θ can take the values θ = 1, 2, 3. The set M_g,g(1) gives the positions, in set U_3, of the users served by the transmission of file W_3 at time T(g). Suppose M_g,g(1) = {2, 3}; then the users served by the transmission of file W_3 at time T(g) occupy positions 2 and 3 of U_3. Since U_3 = {1, 4, 6}, the served users are user 4 and user 6. Similarly, suppose M_g,g(2) = {1, 3} and M_g,g(3) = {3}; then the users served by the transmission of file W_5 are users 3 and 9, and the user served by the transmission of file W_F is user 12.
As can be seen from (2), the n − m_f packets of file W_f all have the same fronthaul transmission delay. Therefore, at time T(g) + D_F(g, f), all RRHs have received the n − m_f packets of file W_f and recover file W_f by combining them with the m_f cached packets. However, for different files the number of packets n − m_f and the transmission rate ρ_f(g)·C differ, so the fronthaul transmission delays differ and the times at which the RRHs recover different files also differ. Considering that the RRH cache capacity is limited and cannot additionally hold complete files from the library, the invention assumes that an RRH transmits a file to the user over the access network immediately after recovering it. Because all RRHs recover the same file at the same time, several RRHs can transmit the same file to the user simultaneously during access network transmission, which increases the file transmission rate. Suppose the C_max RRHs closest to a user form a service cluster that jointly transmits the same file to the user, with different RRHs using the same time-frequency resources. To avoid the interference that would arise if several RRHs in the access network transmitted different files at the same time, the invention assumes that the access network transmits different files by time division multiplexing. Because the wireless channel conditions of the service clusters of different users differ, the access network transmission delays differ; in access network transmission, the slot length of a file transmission equals the maximum access network transmission delay among the users served by that file.
Define Φ_{f,i} as the set of RRHs in the service cluster of user u_{f,i}, namely the C_max RRHs closest to user u_{f,i}. The invention considers both large-scale and small-scale fading of the channel between an RRH and a user. Define P as the transmit power of each RRH, D_{r,f,i} as the distance between RRH r and user u_{f,i}, α as the path loss coefficient, h_{r,f,i} as the small-scale fading channel between RRH r and user u_{f,i}, which follows a complex Gaussian distribution with mean 0 and variance 1, and σ as the variance of the zero-mean white Gaussian noise. The signal-to-noise ratio SINR_{f,i} with which user u_{f,i} receives file W_f during access network transmission is determined by the joint transmission of the serving cluster Φ_{f,i}, the transmit power P, the distances D_{r,f,i}, the path loss coefficient α, the small-scale fading h_{r,f,i} and the noise variance σ. Setting the transmission bandwidth between the RRHs and the users to W, the access network transmission delay with which the service cluster transmits file W_f to user u_{f,i} is D_R(f, i). Since the file transmitted in the access network has size S bits, D_R(f, i) is expressed as
D_R(f, i) = S / (W·log2(1 + SINR_{f,i})).
In access network transmission, the slot length of a file transmission equals the maximum access network transmission delay among the users it serves. Thus, for a file W_g(θ) transmitted by the fronthaul network at time T(g), the length of its access network transmission slot is
D_g,g(θ) = max_{i ∈ M_g,g(θ)} D_R(g(θ), i).
Combining the processes of the user issuing the file request and the file transmission, the service delay D_{f,i} with which user u_{f,i} obtains the requested file W_f consists of three parts: the delay g_{f,i}·T_S − t_req(f,i) with which the system responds to the user request, the fronthaul transmission delay D_F(g, f) and the access network transmission delay D_R(f, i), i.e.
D_{f,i} = (g_{f,i}·T_S − t_req(f,i)) + D_F(g_{f,i}, f) + D_R(f, i).    (8)
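The delay bookkeeping in (2) and (8) can be written down directly; the following Python sketch is an added illustration using the values of Table 1 for S, n, C and W, with the SINR, m_f and ρ values assumed for the example.

```python
import math

S = 100e6          # file size in bits (Table 1)
n = 10             # number of subfiles
C = 500e6          # fronthaul transmission rate in bit/s
W = 20e6           # access bandwidth in Hz (system bandwidth B in Table 1)

def fronthaul_delay(m_f, rho_fg):
    """D_F(g, f) = (n - m_f) * (S / n) / (rho_f(g) * C), as in (2)."""
    return (n - m_f) * (S / n) / (rho_fg * C)

def access_delay(sinr_fi):
    """D_R(f, i) = S / (W * log2(1 + SINR_{f,i}))."""
    return S / (W * math.log2(1 + sinr_fi))

def service_delay(t_req, T_g, m_f, rho_fg, sinr_fi):
    """D_{f,i} = (T(g) - t_req) + D_F(g, f) + D_R(f, i), as in (8)."""
    return (T_g - t_req) + fronthaul_delay(m_f, rho_fg) + access_delay(sinr_fi)

# e.g. a request at t = 0.8 s answered at the slot end T(g) = 1.5 s:
print(service_delay(t_req=0.8, T_g=1.5, m_f=6, rho_fg=0.4, sinr_fi=15.0))
```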
The invention considers a Fog-RAN network architecture with distributed MDS coded caching and addresses the problem of reducing the fronthaul transmission bandwidth requirement when several users issue asynchronous random file requests within a time period. The constraints of the optimization problem are stated below. In the cache file placement stage, the m_f MDS-encoded packets of each file W_f are placed in the cache of each RRH, and the cache capacity required for these packets must not exceed the cache capacity of each RRH. At the same time, by the property of MDS codes an RRH needs n packets to recover any original file W_f, so the number m_f of packets cached at an RRH must be smaller than n. Therefore, the constraint of the cache placement stage is
0 ≤ m_f < n, m_f ∈ Z≥0, Σ_{f∈F} m_f·(S/n) ≤ M·S,    (9)
where Z≥0 denotes the set of non-negative integers. At time T(g) = g·T_S, when ρ_f(g) > 0, the fronthaul network wirelessly transmits the n − m_f packets of file W_f at rate ρ_f(g)·C. The total fronthaul transmission rate over all F files must not exceed the fronthaul transmission rate C, so the fronthaul transmission constraint is
Σ_{f∈F} ρ_f(g)·C ≤ C, i.e. Σ_{f∈F} ρ_f(g) ≤ 1.    (10)
In access network transmission, different files are transmitted by time division multiplexing. Therefore, for a file whose access network transmission comes later, the start time of its access network transmission must not be earlier than the end time of the access network transmission of the file that comes earlier. For ease of discussion, assume that at time T(g) the fronthaul transmission delays of the N_g files satisfy D_F(g, g(1)) < D_F(g, g(2)) < ... < D_F(g, g(N_g)).
Considering that the larger the fronthaul transmission delay of a file, the later its access network transmission starts, the constraint of the access network transmission stage is
D_F(g, g(θ+1)) ≥ D_F(g, g(θ)) + D_g,g(θ), θ = 1, 2, ..., N_g − 1.    (11)
For example, suppose D_F(g, g(1)) = 2 s and D_F(g, g(2)) = 5 s. The fronthaul transmission time of g(1) is D_F(g, g(1)) = 2 s and its access network transmission time is D_g,g(1); since file transmission in the access network is time division multiplexed, g(2) can be transmitted in the access network only after g(1), so D_F(g, g(2)) and D_F(g, g(1)) must satisfy the constraint (11).
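A small added sketch of this ordering check, using the reading of (11) given above (the exact form of (11) appears only as an image in the original publication) and the 2 s / 5 s example:

```python
# Each file's access-network transmission must not start before the previous
# file's access-network transmission has finished (reading of constraint (11)).
def access_order_ok(d_fronthaul, d_access):
    """Files listed in increasing order of fronthaul delay; d_access[k] is the
    access-network slot length of the k-th file."""
    return all(d_fronthaul[k + 1] >= d_fronthaul[k] + d_access[k]
               for k in range(len(d_fronthaul) - 1))

print(access_order_ok([2.0, 5.0], [2.5, 1.0]))  # True:  5.0 >= 2.0 + 2.5
print(access_order_ok([2.0, 5.0], [3.5, 1.0]))  # False: 5.0 <  2.0 + 3.5
```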
Considering the whole process from the user issuing the file request to receiving the file, the service delay D_{f,i} must not exceed the maximum allowed service delay D_max:
D_{f,i} ≤ D_max.    (12)
The invention optimizes the number m_f of data packets placed at the RRHs in the cache file placement stage, the response slot g_{f,i} at which the fronthaul network serves the request of user u_{f,i} for file W_f in the cache file transmission stage, and the bandwidth allocation ratio ρ_f(g) of file W_f in the fronthaul network at time T(g), so as to minimize the fronthaul transmission bandwidth requirement Q. The optimization problem P0 is expressed as
P0: min over {m_f, g_{f,i}, ρ_f(g)} of Q
s.t. (9)~(12).
The optimization variables consist of the continuous variables ρ_f(g) and the discrete variables m_f and g_{f,i}; the problem is a mixed integer nonlinear optimization problem whose optimal solution cannot be found in polynomial time, and the invention proposes a low-complexity heuristic algorithm to solve it. Considering the workflow of the Fog-RAN architecture, in the cache file placement stage the optimization of the number of placed packets only needs to consider the distribution of file popularity (i.e. the access frequency of the files in the cache file transmission stage) and is independent of the file request times. In the cache file transmission stage, once users have issued file requests, the fronthaul transmission times and the fronthaul bandwidth allocation can be optimized according to the times at which the users issued their requests. Therefore, the invention solves the optimization problem in two steps: first the variables m_f are optimized, then the variables g_{f,i} and ρ_f(g).
First, the optimization of the number of packets m_f in the cache file placement stage is discussed. As can be seen from (9), the numbers of packets of the F files of different popularity cached by each RRH should satisfy Σ_{f∈F} m_f ≤ n·M. Meanwhile, from the expression of the fronthaul transmission bandwidth requirement Q in problem P0, the more packets m_f the RRHs cache, the smaller the fronthaul transmission bandwidth requirement. Therefore, to reduce the fronthaul bandwidth requirement, the RRH cache space should be used to the greatest extent, i.e. Σ_{f∈F} m_f = n·M. On this basis, the higher the popularity of a file, the higher its access frequency in the cache file transmission stage and the more often the fronthaul network transmits its packets. Therefore, to reduce the fronthaul bandwidth requirement, the RRHs should cache more packets of the files with higher popularity. Noting that the F files in the library obey the Zipf distribution, with popularities satisfying p_1 > p_2 > ... > p_F and Σ_{f∈F} p_f = 1, the theoretical value of m_f should be
m̂_f = p_f·n·M.
However, m̂_f may fail to satisfy 0 ≤ m_f < n and m_f ∈ Z≥0 in (9); the theoretical value m̂_f is therefore corrected to the correction value
m̃_f = min(⌈m̂_f⌉, n − 1),    (13)
where ⌈·⌉ denotes the ceiling operation. By (13), m̃_f is a non-negative integer with m̃_f ≤ n − 1. However, also from (13), when m̂_f ≥ n − 1 the correction value is m̃_f = n − 1, and when m̂_f < n − 1 it is m̃_f = ⌈m̂_f⌉; it is therefore possible that Σ_{f∈F} m̃_f ≠ n·M, and the correction values m̃_f require a secondary correction. To summarize, for the cache file placement stage, the optimization of the number m_f of packets of each file cached at the RRHs comprises the following steps:
Step 1: for each file f ∈ F in the file library, calculate the theoretical value m̂_f = p_f·n·M of m_f, and calculate the correction value m̃_f according to (13);
Step 2: when Σ_{f∈F} m̃_f ≠ n·M, perform a secondary correction on m̃_f according to the relationship between Σ_{f∈F} m̃_f and n·M, so that Σ_{f∈F} m̃_f = n·M:
① when Σ_{f∈F} m̃_f > n·M, using the correction values as the numbers of cached packets would exceed the RRH cache capacity. Considering that files of low popularity are accessed less often, in the secondary correction the correction values m̃_f are reduced towards zero in order of file popularity from low to high (f = F, F−1, ..., 2, 1) until Σ_{f∈F} m̃_f = n·M;
② when Σ_{f∈F} m̃_f < n·M, using the correction values as the numbers of cached packets would not make maximum use of the RRH cache capacity. Considering that files of high popularity are accessed more often, in the secondary correction the correction values m̃_f are increased towards n − 1 in order of file popularity from high to low (f = 1, 2, ..., F−1, F) until Σ_{f∈F} m̃_f = n·M;
Step 3: assign m_f = m̃_f.
For example, consider 3 files with popularities 1/(1+1/2+1/3), (1/2)/(1+1/2+1/3) and (1/3)/(1+1/2+1/3), n = 10 and M = 2. The theoretical values m̂_f of the three files, computed as m̂_f = p_f·n·M, are 120/11, 60/11 and 40/11 respectively. According to equation (13), the correction values m̃_f are 9, 6 and 4, whose sum 19 is less than n·M = 20; according to Step 2, the correction value of the second file is adjusted from 6 to 7.
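The placement procedure of Steps 1-3 is straightforward to prototype. The following Python sketch is an added illustration (the one-packet-at-a-time reading of the secondary correction is an assumption) and reproduces the worked example above.

```python
import math

def zipf_popularity(F, beta):
    weights = [f ** (-beta) for f in range(1, F + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def place_packets(p, n, M):
    """Steps 1-3: correction value (13), then secondary correction so sum(m) = n*M."""
    F = len(p)
    budget = n * M
    m = [min(math.ceil(pf * n * M), n - 1) for pf in p]   # equation (13)
    f = F - 1
    while sum(m) > budget:        # shrink, lowest popularity first (f = F, ..., 1)
        if m[f] > 0:
            m[f] -= 1
        else:
            f -= 1
    f = 0
    while sum(m) < budget:        # grow, highest popularity first (f = 1, ..., F)
        if m[f] < n - 1:
            m[f] += 1
        else:
            f += 1
    return m

p = zipf_popularity(3, beta=1)       # [6/11, 3/11, 2/11]
print(place_packets(p, n=10, M=2))   # -> [9, 7, 4], matching the worked example
```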
Next, the optimization of the response slot g_{f,i} of the request of user u_{f,i} and of the bandwidth ratio ρ_f(g) allocated at time T(g) to the packets of file W_f transmitted by the fronthaul network is discussed for the cache file transmission stage. The invention considers the situation in which the user request information is unknown, i.e. in the cache file transmission stage the system only learns the content of the requested file and the request time after the user has issued the request. Suppose G·T_S > D_max; noting the constraint (12) on the maximum allowed service delay of the users, the system cannot respond to the file requests of all U users with a single response at the end of the G-th slot. As shown by the expression (4) of the fronthaul transmission bandwidth requirement Q, within the period 0 ≤ t ≤ G·T_S the requirement Q accumulates the fronthaul bandwidth required each time the system responds to user requests and is related to the number of transmissions of files of different popularity: the fewer the file transmissions, the smaller the fronthaul bandwidth requirement. Considering that the service delay D_{f,i} of a user requesting a file is tolerated within a certain range, the later the system responds to user requests, the more user requests may be satisfied by one response and the fewer file transmissions are needed. Therefore, to reduce the fronthaul transmission bandwidth requirement, the system's response to user requests should be appropriately delayed, so that the file transmitted by the system at the end of a certain slot satisfies the requests of more users requesting that file, the number of transmissions of different files is reduced, and multicast transmission opportunities are created.
Considering that the response times are the end times of the slots and that the user request information is unknown, the user requests that can be satisfied at the end of each slot should be determined in turn in increasing order of time (g = 1, 2, ..., G). The range of user request times that may be satisfied at a certain response time is defined as the search range of that response time. Suppose D_max > D_R(f, i); from (8) and (12), a user request time t_req(f,i) lying within the search range of response time T(g) should satisfy
T(g) − t_req(f,i) + D_F(g, f) + D_R(f, i) ≤ D_max.    (14a)
In (14a), T(g) is the determined response time, D_R(f, i) is the access network transmission delay of the request, determined by the positions of the RRHs and the user in the network and the wireless channel conditions, and D_max is the maximum allowed service delay. Thus, in (14a), only D_F(g, f) is a variable related to the optimization variables m_f and ρ_f(g). In order to satisfy more user requests at a certain response time, create multicast transmission opportunities and reduce the fronthaul bandwidth requirement, the search range should be enlarged, i.e. the left-hand side of (14a) should be made as small as possible. Considering that when n = m_f, or when sufficient fronthaul bandwidth is allocated, the variable D_F(g, f) can be made arbitrarily small, D_F(g, f) is set to zero, which yields
T(g) − t_req(f,i) + D_R(f, i) ≤ D_max.    (14)
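In code form, the search-range test (14) is a one-line predicate; the sketch below is an added illustration using Table 1's D_max = 3 s and assumed request parameters.

```python
# Search-range test (14): a request issued at t_req with access delay d_r can
# still be served at response time T_g once D_F is driven towards zero.
def in_search_range(T_g, t_req, d_r, D_max=3.0):
    return T_g - t_req + d_r <= D_max

print(in_search_range(T_g=2.5, t_req=0.3, d_r=0.4))  # True  (2.6 <= 3.0)
print(in_search_range(T_g=3.5, t_req=0.3, d_r=0.4))  # False (3.6 >  3.0)
```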
Within the search range of a response time, multicast transmission opportunities should be created as far as possible to reduce the fronthaul transmission bandwidth requirement. Considering that there may be several user requests in the search range, and that because of the limited fronthaul transmission rate and the maximum allowed transmission delay constraint not all user requests in the search range can necessarily be satisfied at a given response time, some user requests must be satisfied first; the user requests that are satisfied first are defined as priority requests.
Considering that, when the user request information is unknown, the times and contents of the user requests become clear gradually as time passes, the latest response time of a user request can be determined from the request information that is already known. When its latest response time arrives, the user request is a priority request and must be satisfied at that response time. For a user request issued at time t_req(f,i), the minimum value of the latest response time is the slot end time closest to the request time, expressed as
⌈t_req(f,i) / T_S⌉ · T_S.    (15)
Noting D_max > D_R(f, i), from (8) and (12) the maximum value of the latest response time is expressed as
⌊(t_req(f,i) + D_max − D_R(f, i)) / T_S⌋ · T_S,    (16)
where ⌊·⌋ denotes the floor operation. Two facts are considered. On the one hand, the F files in the library have different popularity: the higher the popularity, the more user requests within G·T_S and the more multicast transmission opportunities. On the other hand, from the optimization of the number of cached packets in the cache file placement stage, the higher the file popularity, the more packets the RRHs cache and the fewer packets the fronthaul network transmits, so for the same fronthaul transmission delay the required fronthaul transmission rate is lower. Therefore, when the popularity of the file requested by a user is high, the response delay of the high-popularity file should be increased and the maximum allowed fronthaul transmission delay reduced, creating multicast transmission opportunities; that is, the latest response time of the request takes the maximum value, given in (16). When the popularity of the file requested by a user is low, the response delay of the low-popularity file should be reduced and the maximum allowed fronthaul transmission delay increased, lowering the required fronthaul wireless transmission rate; that is, the latest response time of the request takes the minimum value, given in (15).
The variable g is introduced belowf,iAnd
Figure BDA0002938062070000167
the overall idea of the specific optimization method is as follows: assume that at response time T (g) g · TSWithin a search range (14), each user request
Figure BDA0002938062070000168
All the response times are
Figure BDA0002938062070000169
Determining variables under constraints
Figure BDA00029380620700001610
The priority requests in the search range should be met preferentially, and other non-priority requests in the search range are considered after all the priority requests are met. The method comprises the following specific steps:
step 1: requesting content information of a file by a user, wherein when the file requested by the user is a file 1,2,.. and F/2 with high popularity, the latest response time requested by the user is the maximum value in the (16), and when the file requested by the user is a file F/2+1, F/2+2,.. and F with low popularity, the latest response time requested by the user is the minimum value in the (15);
assuming that the number of files in the file library is F ═ 16, the more popular files are 1,2,.., 8 (i.e., 1,2,.., F/2), and the less popular files are 9,10,11,.., 16 (i.e., F/2+1, F/2+2,.., F).
Step 2: t is more than or equal to 0 and less than or equal to G.TSWithin the time zone, the response times increase in order of (response time T (g) ═ g · T)SG) determining in turn each response time t (G) that can be satisfied until all user requests are satisfied:
step 2.1: when the response time T (g) of the user request comes (the response time of the user request is the end time of each time slot), searching the user request taking the response time T (g) as the latest response time, and when the user request is not met, the user request is a priority request;
step 2.2: according to the search range of the response time obtained in the step (14), among the user requests which are not met in the search range, the user requests except the priority request are non-priority requests;
step 2.3: consider a single priority request, let in equation (8)
Figure BDA0002938062070000171
Further, from the formula (12), there are the followingEquation (17), calculating the maximum allowable transmission delay of the forwarding network for each priority request by (17)
Figure BDA0002938062070000172
Further calculating by (18) a minimum required transmission rate requirement for the fronthaul network
Figure BDA0002938062070000173
Step 2.4: on the basis of step 2.3, considering the priority requests of a plurality of requests for the same file, the maximum allowable transmission delay of the fronthaul network should be the minimum value of the maximum allowable transmission delays of a plurality of requests for the same file. Therefore, in the priority request, the file W is transmittedg(θ)Maximum allowed transmission delay D of fronthaul networkF(g,g(θ),i)maxAnd minimum transmission rate requirement CF(g,g(θ),i)minShould be rewritten on the basis of (17) and (18) as (19) and (20), respectively
Figure BDA0002938062070000174
Figure BDA0002938062070000175
Step 2.5: on the basis of step 2.4, consider the priority requests for different files. To satisfy constraint (11), the maximum allowable fronthaul transmission delay D_F(g, g(θ), i)_max should be further rewritten, on the basis of (19), as (21). In (21), min(A, B − D_g,g(θ)) denotes the smaller of A and B − D_g,g(θ), where A and B are the two quantities defined by the accompanying expressions. The minimum fronthaul transmission rate requirement may still be determined from (20); the optimization variable is therefore given by (22).
To satisfy more priority requests, D_F(g, g(θ), i)_max should be increased, as indicated by (20)-(22), so that A < B holds in (21). That is, in going from step 2.4 to step 2.5, when the transmission order of several files in the access network is determined, a file whose maximum allowable fronthaul transmission delay (19) is larger is given a larger fronthaul transmission delay and a later access-network transmission start time.
Step 2.6: after all priority requests are satisfied, the non-priority requests that can form multicast transmission opportunities (i.e., whose file is requested more than once) are responded to according to the method of steps 2.3-2.5. When B ≤ D_g,g(θ), or when constraint (11) or (12) cannot be satisfied, the determination of the user requests that can be satisfied at this response time ends; the response time of these user requests is g_f,i, and the remaining unsatisfied user requests within the search range will be satisfied at a later response time.
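The response procedure of steps 2.1-2.6 can be summarized by the following minimal Python sketch. It is illustrative only and not part of the claimed method: the quantities given by equations (17)-(22) and the checks of constraints (11)-(12) are represented by placeholder callables (`max_fh_delay`, `feasible`) supplied by the caller, and delays are counted in whole slots for simplicity.

```python
# Hypothetical sketch of the greedy response loop in step 2 (steps 2.1-2.6).
# Equations (17)-(22) and constraints (11)-(12) are not reproduced here;
# they are represented by the placeholder callables passed in by the caller.
from collections import defaultdict

def respond(requests, G, search_range, max_fh_delay, feasible):
    """requests: list of dicts with keys 'file', 'latest_slot', 'served'.
    search_range(g): indices of requests searchable at slot g (from (14)).
    max_fh_delay(req): per-request maximum fronthaul delay, cf. (17)/(19).
    feasible(file, delay, g): True if constraints (11)-(12) hold."""
    for g in range(1, G + 1):                      # step 2: slot by slot
        # Step 2.1: unserved requests whose latest response time is T(g).
        priority = [r for r in requests
                    if not r['served'] and r['latest_slot'] == g]
        prio_ids = {id(r) for r in priority}
        # Step 2.2: other unserved requests inside the search range.
        others = [requests[i] for i in search_range(g)
                  if not requests[i]['served']
                  and id(requests[i]) not in prio_ids]
        # Steps 2.3-2.4: group priority requests by file; the fronthaul delay
        # of a file is the minimum over the requests for that file.
        by_file = defaultdict(list)
        for r in priority:
            by_file[r['file']].append(r)
        for f, group in by_file.items():
            delay = min(max_fh_delay(r) for r in group)
            # Step 2.5: shrink the delay until the constraints are met.
            while delay > 0 and not feasible(f, delay, g):
                delay -= 1
            if delay > 0:
                for r in group:
                    r['served'] = True             # one multicast at T(g)
        # Step 2.6: serve non-priority requests that share a file with an
        # already scheduled multicast while the constraints stay feasible.
        for r in others:
            if r['file'] in by_file and feasible(r['file'], 1, g):
                r['served'] = True
    return requests
```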
TABLE 1 MDS coded caching simulation parameters

| System parameter | Value | System parameter | Value |
| Number of files F | 12 | Fronthaul transmission rate C | 500 Mbps |
| Time slot length T_S | 0.5 s | Path loss coefficient α | 4 |
| Number of time slots G | 40 | RRH transmission power P | 30 dBm |
| File size S | 100 Mbit | System bandwidth B | 20 MHz |
| Number of subfiles n | 10 | Noise power density σ | −174 dBm/Hz |
| RRH cache capacity M | 2 | Service cluster size C_max | 5 |
| User distribution density λ_U | 120/km² | RRH distribution density λ_R | 40/km² |
| Popularity bias coefficient β | 1 | Maximum service delay D_max | 3 s |
The inventors carried out simulation experiments on the invention; the experimental parameters are listed in Table 1. A square region of 1 km × 1 km is considered, with the users and the RRHs distributed according to Poisson point processes of density λ_U and λ_R, respectively. Within the time period 0 ≤ t ≤ G·T_S, U users send asynchronous random requests according to the popularity distribution p_1, p_2, ..., p_F of the F files in the file library; the higher a file's popularity, the larger the number of users requesting it (a sketch of this setup is given after the list of methods below). The optimization-problem solution methods considered in the simulation are the following:
NRI-OP (no request information-optimal placement): the invention provides a method for solving an optimization problem under the condition that user request information is unknown;
FRT-OP (fixed response time-optimal placement): in the cache-file transmission stage, the system responds to the user requests of each time slot at the end time of that time slot.
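As a rough outline of the simulation setting described above, the following Python sketch draws the users and RRHs from Poisson point processes over a 1 km × 1 km square and generates asynchronous requests. It assumes the popularity distribution is Zipf-like with bias coefficient β, consistent with the "popularity bias coefficient" of Table 1; that functional form is an assumption of this sketch rather than a statement taken from the patent text.

```python
# Hypothetical sketch of the simulation setup using the Table 1 parameters.
import numpy as np

rng = np.random.default_rng(0)

AREA = 1.0                     # km^2, 1 km x 1 km square region
LAMBDA_U, LAMBDA_R = 120, 40   # users / RRHs per km^2
F, G, T_S = 12, 40, 0.5        # files, slots, slot length in seconds
BETA = 1.0                     # popularity bias coefficient

# Poisson point processes: draw the counts, then place points uniformly.
num_users = rng.poisson(LAMBDA_U * AREA)
num_rrhs = rng.poisson(LAMBDA_R * AREA)
users = rng.uniform(0.0, 1.0, size=(num_users, 2))   # (x, y) in km
rrhs = rng.uniform(0.0, 1.0, size=(num_rrhs, 2))

# Assumed Zipf-like popularity: p_f proportional to f^(-beta).
ranks = np.arange(1, F + 1)
popularity = ranks ** (-BETA)
popularity /= popularity.sum()

# Asynchronous random requests over 0 <= t <= G*T_S: each user requests one
# file drawn from the popularity distribution at a uniformly random time.
request_files = rng.choice(F, size=num_users, p=popularity) + 1
request_times = rng.uniform(0.0, G * T_S, size=num_users)
print(f"{num_users} users, {num_rrhs} RRHs, first 5 requests:",
      list(zip(request_files[:5], np.round(request_times[:5], 2))))
```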
In both methods, the placement method provided by the invention is used in the cache-file placement stage: the higher a file's popularity, the larger the number of its data packets placed at each RRH. The simulation also considers a method NOP (no optimal placement) that places data packets uniformly in the cache-file placement stage, so that files of different popularity have the same number of data packets placed at each RRH. Since the total number of data packets cached at an RRH over the F files is n·M, the NOP method places m_f = n·M/F packets of each file.
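The contrast between the OP placement (packet counts proportional to popularity, corrected so that the per-RRH total matches the cache size, in the spirit of steps 110-130) and the uniform NOP placement can be sketched as follows. The exact correction formulas of the patent are given only as equation images above, so the rounding-and-trimming rule below is an illustrative approximation, not the claimed procedure.

```python
# Hypothetical sketch: per-file packet counts cached at each RRH.
# OP:  m_f proportional to popularity, rounded up, then corrected so that
#      the per-RRH total equals n * M packets.
# NOP: uniform placement, m_f = n * M / F for every file.
import math

def op_placement(popularity, n, M):
    budget = n * M
    m = [math.ceil(p * budget) for p in popularity]       # proportional + ceil
    # Approximate secondary correction: trim the least popular files first,
    # or top up the most popular files, until the budget is met exactly.
    while sum(m) > budget:
        for f in range(len(m) - 1, -1, -1):                # low popularity first
            if m[f] > 0 and sum(m) > budget:
                m[f] -= 1
    while sum(m) < budget:
        for f in range(len(m)):                            # high popularity first
            if m[f] < n - 1 and sum(m) < budget:
                m[f] += 1
    return m

def nop_placement(F, n, M):
    return [n * M // F] * F

pop = [0.3, 0.25, 0.2, 0.15, 0.1]          # toy popularity values, sum to 1
print(op_placement(pop, n=10, M=2))         # [6, 5, 4, 3, 2]
print(nop_placement(F=5, n=10, M=2))        # [4, 4, 4, 4, 4]
```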
In the simulation, the experimental parameters in Table 1 are used first, and Fig. 4 shows, for the different methods, how the fronthaul transmission bandwidth requirement Q varies with the popularity bias coefficient β. The Q values of all methods decrease as β increases. Note that the fronthaul transmission bandwidth requirement depends on the number of transmissions of files of different popularity: the fewer the transmissions, the smaller the bandwidth requirement. For files of low popularity, the number of user requests is small, so there are few multicast transmission opportunities and the number of file transmissions is approximately equal to the number of user requests. For files of higher popularity, the number of user requests is larger, multicast transmission opportunities are more frequent, and the number of file transmissions is smaller than the number of user requests. As the popularity bias coefficient β increases, more users request the same high-popularity files, the number of file transmissions decreases, and the fronthaul transmission bandwidth requirement decreases. In the OP methods, more data packets of the higher-popularity files are cached at the RRHs and fewer data packets are transmitted over the fronthaul network, so the performance is better than that of the NOP method, and the gap between the two gradually widens as β increases.
In addition, Fig. 4 shows that, for the same optimization method in the cache-file placement stage, the FRT method in the cache-file transmission stage has a larger fronthaul transmission bandwidth requirement than NRI. In FRT the system can only respond, at the end time of each time slot, to the user requests of that slot, so the probability that several requests for the same file are satisfied together is low, multicast transmission opportunities are few, the number of transmissions of the different files increases, and the fronthaul transmission bandwidth requirement increases. FRT can be regarded as the method in which, with the user request information unknown, the latest response time of every user request is the end time of its own time slot (i.e., the minimum latest response time in (15)). In NRI the user request information is likewise unknown, but compared with FRT a later latest response time is set for the higher-popularity files and an earlier one for the lower-popularity files, which increases the multicast transmission opportunities of the higher-popularity files, reduces their number of transmissions, and reduces the fronthaul transmission bandwidth requirement. Compared with the FRT-NOP method, in which the response time is fixed to the slot end time and the data packets of files of different popularity are placed uniformly, the NRI-OP method provided by the invention reduces the fronthaul transmission bandwidth requirement by 68.91%.
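The dependence of the fronthaul bandwidth requirement on the number of file transmissions, discussed above, can be illustrated with a small counting sketch: if all requests for the same file that share a response time are served by a single multicast, a popular file needs far fewer fronthaul transmissions than it has requests, whereas for an unpopular file the two counts are nearly equal. The numbers below are made up for illustration and do not reproduce the patent's bandwidth formulas.

```python
# Hypothetical sketch: count fronthaul transmissions when requests for the
# same file that fall in the same response slot are merged into one multicast.
from collections import defaultdict

def transmission_count(requests):
    """requests: iterable of (file_id, response_slot) pairs."""
    multicast_groups = defaultdict(set)
    for file_id, slot in requests:
        multicast_groups[file_id].add(slot)
    # One fronthaul transmission per (file, response slot) group.
    return sum(len(slots) for slots in multicast_groups.values())

# Popular file 1: 6 requests merged into 2 response slots -> 2 transmissions.
# Unpopular file 9: 2 requests in 2 different slots -> 2 transmissions.
reqs = [(1, 4), (1, 4), (1, 4), (1, 8), (1, 8), (1, 8), (9, 3), (9, 7)]
print(transmission_count(reqs))   # 4, versus 8 purely unicast transmissions
```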
Next, with the simulation parameters of Table 1, the user distribution density λ_U is gradually increased, and Fig. 5 shows how the fronthaul transmission bandwidth requirement Q of the different methods varies with λ_U. The Q values of all methods increase with λ_U. This is because, as λ_U increases, the number of user requests within the period of G time slots grows, the numbers of transmissions of files of different popularity grow, and the fronthaul transmission bandwidth requirement grows. On the other hand, as the number of user requests increases, the multicast transmission opportunities also increase. For the two cache-file transmission-stage methods NRI and FRT, the probability that different users requesting the same file are satisfied at the same response time, creating a multicast transmission opportunity, is larger for NRI than for FRT. Therefore, the performance gap between the two gradually widens as λ_U increases.
The previous description is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Moreover, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for transmitting MDS (maximum distance separable) coded data packets over a fronthaul network, wherein the fronthaul network is located in a distributed Fog-RAN (fog radio access network) architecture, the network architecture comprises a BBU (baseband unit) centralized resource pool, distributed RRHs (remote radio heads) with a caching function, and the fronthaul network between the BBU pool and the RRHs; the BBU pool transmits data to the RRHs through the fronthaul network, and the RRHs transmit the data to the users through the access network; each RRH can cache M files; the network file library comprises a plurality of files, each file is divided into n sub-files, the size of each sub-file is S/n, and each sub-file is encoded into a data packet using the MDS code, the size of each data packet being S/n, where S is the file size and n is a positive integer; the method comprises:
Step 100: according to the popularity p_f of each file W_f, determining the number m_f of data packets of W_f cached at each RRH, m_f being proportional to the popularity, where f ∈ F and the set F = {1, 2, ..., F}.
2. The method of claim 1, wherein there are U users in the network, forming a user set U = {1, 2, ..., U}, the transmission process comprises G time slots, and the system responds to the users' file requests only at the end time of each time slot, the method further comprising:
Step 200: receiving the request for W_f issued by user u_f,i, and determining the response time g_f,i according to the popularity of file W_f, the delay from request to response of a high-popularity file being larger than that of a low-popularity file, wherein the user set U is divided into F subsets U_1, U_2, ..., U_F according to the files requested by the users, and the i-th user in set U_f is denoted u_f,i, i = 1, 2, ..., |U_f|;
Step 300: according to m_f and g_f,i of each file W_f, calculating the bandwidth allocated at time T(g) to the fronthaul-network transmission of the data packets of file W_f, where T(g) = g·T_S, g = 1, 2, ..., G.
3. The method of claim 1, wherein step 100 comprises:
Step 110: for each file f ∈ F in the file library, calculating the theoretical value of m_f, and calculating the corrected value of that theoretical value according to the corresponding formula, in which ⌈·⌉ denotes the ceiling operation.
4. The method of claim 3, wherein step 100 further comprises:
Step 120: when the sum of the corrected values over all files is not equal to n·M, performing a secondary correction according to the relationship between that sum and n·M so that the sum equals n·M: when the sum is greater than n·M, the corrected values are reduced to zero in turn, in order of file popularity from low to high (f = F, F−1, ..., 2, 1), until the sum equals n·M; when the sum is less than n·M, the corrected values are increased to n−1 in turn, in order of file popularity from high to low (f = 1, 2, ..., F), until the sum equals n·M;
Step 130: letting m_f equal the corrected value so obtained.
5. The method of claim 2, wherein step 200 comprises:
Step 210: when the file requested by user u_f,i is one of the higher-popularity files 1, 2, ..., F/2, the latest response time of the user request is the maximum value of the latest response time; when the file requested by user u_f,i is one of the lower-popularity files F/2+1, F/2+2, ..., F, the latest response time of the user request is the minimum value of the latest response time.
6. The method of claim 2, wherein step 200 further comprises determining in turn, in order of increasing response time, each response time T(g), g = 1, 2, ..., G, until all user requests are satisfied.
7. The method of claim 6, comprising:
Step 221: when the response time T(g) arrives, searching for the user requests whose latest response time is T(g); each such request that has not yet been satisfied is a priority request;
Step 222: determining the search range at time T(g), i.e. the user requests that can be satisfied at time T(g), and determining the user requests in the search range other than the priority requests as non-priority requests;
Step 223: for a single priority request, calculating the maximum allowable transmission delay of the fronthaul network and the corresponding minimum transmission rate required of the fronthaul network;
Step 224: for a plurality of priority requests requesting the same file, determining the maximum allowable fronthaul transmission delay as the minimum of the maximum allowable transmission delays of those requests, and calculating the corresponding minimum transmission rate.
8. The method of claim 7, further comprising:
Step 225: if the minimum value of the maximum allowable transmission delay determined in step 224 does not satisfy the constraint that, for a file with a later access-network transmission slot, the start time of its access-network transmission must not be earlier than the end time of the access-network transmission of a file with an earlier transmission slot, reducing that minimum value until the constraint is satisfied and calculating the corresponding minimum transmission rate; and, at time T(g), defining the transmission bandwidth of the n − m_f data packets of the file transmitted over the fronthaul network as a proportion of the total fronthaul bandwidth.
9. A computer-readable storage medium, in which one or more computer programs are stored, which when executed, are for implementing the method of any one of claims 1-8.
10. A computing system, comprising:
a storage device, and one or more processors;
wherein the storage means is for storing one or more computer programs which, when executed by the processor, are for implementing the method of any one of claims 1-8.
CN202110167807.4A 2021-02-07 2021-02-07 Transmission method for MDS (data packet System) encoded data packet of forwarding network Active CN112911717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110167807.4A CN112911717B (en) 2021-02-07 2021-02-07 Transmission method for MDS (data packet System) encoded data packet of forwarding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110167807.4A CN112911717B (en) 2021-02-07 2021-02-07 Transmission method for MDS (data packet System) encoded data packet of forwarding network

Publications (2)

Publication Number Publication Date
CN112911717A true CN112911717A (en) 2021-06-04
CN112911717B CN112911717B (en) 2023-04-25

Family

ID=76124030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110167807.4A Active CN112911717B (en) 2021-02-07 2021-02-07 Transmission method for MDS (data packet System) encoded data packet of forwarding network

Country Status (1)

Country Link
CN (1) CN112911717B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180213442A1 (en) * 2015-08-30 2018-07-26 Lg Electronics Inc. Cluster-based collaborative transmission method in wireless communication system and apparatus therefor
CN109218747A (en) * 2018-09-21 2019-01-15 北京邮电大学 Video traffic classification caching method in super-intensive heterogeneous network based on user mobility
CN109617991A (en) * 2018-12-29 2019-04-12 东南大学 Based on value function approximate super-intensive heterogeneous network small station coding cooperative caching method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIONGWEI WU等: "Joint Fronthaul Multicast and Cooperative Beamforming for Cache-Enabled Cloud-Based Small Cell Networks: An MDS Codes-Aided Approach", 《IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS》 *
YUE LIU等: "Delay Aware Flow Scheduling for Time Sensitive Fronthaul Networks in Centralized Radio Access Network", 《IEEE TRANSACTIONS ON COMMUNICATIONS》 *

Also Published As

Publication number Publication date
CN112911717B (en) 2023-04-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant