CN109561504B - URLLC and eMBB resource multiplexing method based on deep reinforcement learning - Google Patents

URLLC and eMBB resource multiplexing method based on deep reinforcement learning

Info

Publication number
CN109561504B
CN109561504B
Authority
CN
China
Prior art keywords
urllc
embb
slot
mini
information
Prior art date
Legal status
Active
Application number
CN201811383001.3A
Other languages
Chinese (zh)
Other versions
CN109561504A (en)
Inventor
赵中原
李阳
王君
高慧慧
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201811383001.3A
Publication of CN109561504A
Application granted
Publication of CN109561504B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/04Wireless resource allocation
    • H04W72/044Wireless resource allocation based on the type of the allocated resource
    • H04W72/0446Resources in time domain, e.g. slots or frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/04Wireless resource allocation
    • H04W72/044Wireless resource allocation based on the type of the allocated resource
    • H04W72/0453Resources in frequency domain, e.g. a carrier in FDMA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/04Wireless resource allocation
    • H04W72/044Wireless resource allocation based on the type of the allocated resource
    • H04W72/0473Wireless resource allocation based on the type of the allocated resource the resource being transmission power
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/50Allocation or scheduling criteria for wireless resources
    • H04W72/53Allocation or scheduling criteria for wireless resources based on regulatory allocation policies
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a resource multiplexing method of URLLC and eMBB based on deep reinforcement learning, which comprises the following steps: collecting data packet information, channel information and queue information of URLLC and eMBB of M mini-slots as training data; establishing a URLLC and eMBB resource multiplexing model based on deep reinforcement learning, and training model parameters by using the training data; performing performance evaluation on the trained model until the performance requirement is met; collecting the current mini-slot URLLC and eMBB data packet information, channel information and queue information, inputting the collected information into the trained model, and obtaining a resource multiplexing decision result; and according to the resource multiplexing decision result, carrying out resource allocation on the eMBB and URLLC data packets of the current mini-slot. The method achieves reasonable allocation and utilization of time-frequency resources and power while meeting the transmission requirements of the eMBB and URLLC data packets.

Description

URLLC and eMBB resource multiplexing method based on deep reinforcement learning
Technical Field
The invention relates to the technical field of wireless communication, in particular to a resource multiplexing method of URLLC and eMBB based on deep reinforcement learning.
Background
In order to meet the requirements of different scenario services on delay, reliability, mobility and the like in the future, in 2015 the ITU formally defined the three major scenarios of the future 5G network: enhanced mobile broadband (eMBB), massive machine type communication (mMTC), and ultra-reliable low-latency communication (URLLC). The eMBB scenario builds on existing mobile broadband services and aims to further improve user-experience performance, pursuing the ultimate person-to-person communication experience. mMTC and URLLC are both application scenarios of the Internet of Things, but with different emphases: mMTC mainly concerns information interaction between people and things, while URLLC mainly reflects communication requirements between things. One of the important objectives of the 5G NR (New Radio, new air interface) design is to enable services of different models in the three scenarios to be effectively multiplexed on the same frequency band.
The URLLC/eMBB scenario is currently the most urgently needed 5G NR scenario: the eMBB service is taken as the basic requirement, and the URLLC service should coexist with the eMBB service while preserving the eMBB spectrum efficiency as much as possible. To meet the low-latency requirement of URLLC, one approach is to use a 60 kHz subcarrier spacing, which reduces the slot length to 1/4 of that of LTE; to shorten the transmission time further, URLLC takes 4 symbols as one micro slot (mini-slot), reducing the transmission duration to 1/14 of an LTE slot. In order to save resources and improve spectrum efficiency, the base station may allocate resources already allocated to the eMBB service to randomly arriving URLLC traffic. Such a dynamic resource multiplexing method avoids resource waste to the maximum extent, but it can also cause demodulation failure of eMBB service data and trigger additional HARQ feedback. Therefore, how to allocate the eMBB and URLLC services within limited resources and achieve efficient utilization of the resources is an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a URLLC and eMBB resource multiplexing method based on deep reinforcement learning, which can realize reasonable allocation and utilization of time-frequency resources and power while meeting the transmission requirements of eMBB and URLLC data packets.
In order to achieve the above object, the present invention provides a resource multiplexing method for URLLC and eMBB based on deep reinforcement learning, which includes:
collecting data packet information, channel information and queue information of URLLC and eMBB of M micro slots (mini-slots) as training data; M is a natural number;
establishing a URLLC and eMBB resource multiplexing model based on deep reinforcement learning, and training model parameters by using the training data;
performing performance evaluation on the trained model until the performance requirement is met;
collecting current mini-slot URLLC and eMBB data packet information, channel information and queue information, inputting the collected information into the trained model, and obtaining a resource multiplexing decision result;
and according to the resource multiplexing decision result, carrying out resource allocation on the eMBB and URLLC data packets of the current mini-slot.
In summary, the invention provides a resource multiplexing method of URLLC and eMBB based on deep reinforcement learning, which trains on the eMBB and URLLC data packet information, channel information and queue information by a deep reinforcement learning method to obtain a decision result for the resources multiplexed by the eMBB and URLLC data packets, reasonably allocates the multiplexed resources according to the decision result, and effectively solves the problem of power and time-frequency resource waste.
Drawings
Fig. 1 is a schematic diagram of a frame structure and a multiplexing mode for multiplexing eMBB and URLLC time-frequency resources according to the present invention.
Fig. 2 is a flowchart illustrating a resource multiplexing method of URLLC and eMBB based on deep reinforcement learning according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The core idea of the invention is as follows: first, data packet information, channel information and queue information of URLLC and eMBB are collected as training data; then a URLLC and eMBB resource multiplexing model based on deep reinforcement learning is established, and the training data are used to train the model and update the model parameters θ. Performance evaluation is then performed on the obtained deep-reinforcement-learning URLLC and eMBB resource multiplexing model; if the URLLC reliability requirement is met and the eMBB data packets have a low retransmission rate, the training process is finished; if the performance requirements cannot be met, the model continues to be trained until the loss function converges. Next, the current mini-slot URLLC and eMBB data packet information, channel information and queue information are collected and input into the trained deep reinforcement learning model to obtain a resource multiplexing decision result. Finally, resources are allocated to the eMBB and URLLC data packets according to the resource multiplexing decision result, so that efficient utilization of the limited multiplexed resources is realized and the problem of power and time-frequency resource waste is effectively solved.
Referring to fig. 1, a frame structure and a multiplexing method for multiplexing the eMBB and the URLLC according to the present invention are specifically described.
Specifically, in order to meet the low-latency requirement of URLLC, a 60 kHz subcarrier spacing is used to reduce the slot length to 1/4 of that of LTE; to shorten the transmission time further, URLLC takes 4 symbols as one mini-slot, reducing the transmission duration to 1/14 of an LTE TTI, and transmits with one mini-slot as one TTI. In order to save resources and improve spectrum efficiency, the base station may allocate resources already allocated to the eMBB service to randomly arriving URLLC traffic. A dynamic scheduling method is adopted: downlink DCI signaling PI (Pre-Indication) is configured to immediately inform the user which eMBB service data have been preempted by URLLC service data, and the system informs the eMBB user through RRC sublayer signaling to periodically detect the PI, so as to complete correct demodulation of the preempted eMBB resources. Full utilization of time-frequency resources is thus realized.
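For concreteness, the slot-length ratios quoted above can be verified numerically, assuming LTE's reference numerology of a 1 ms, 14-symbol TTI at 15 kHz subcarrier spacing:

```latex
% Slot-length comparison, assuming LTE's 1 ms, 14-symbol TTI at 15 kHz subcarrier spacing.
\begin{align*}
T_{\text{slot}}^{60\,\text{kHz}} &= \frac{15\,\text{kHz}}{60\,\text{kHz}} \times 1\,\text{ms} = 0.25\,\text{ms} = \tfrac{1}{4}\, T_{\text{TTI}}^{\text{LTE}},\\
T_{\text{mini-slot}} &= \frac{4\ \text{symbols}}{14\ \text{symbols}} \times 0.25\,\text{ms} \approx 0.071\,\text{ms} = \tfrac{1}{14}\, T_{\text{TTI}}^{\text{LTE}}.
\end{align*}
```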
Fig. 2 is a flowchart illustrating a URLLC and eMBB resource multiplexing method based on deep reinforcement learning according to the present invention.
Step 1, collecting data packet information, channel information and queue information of URLLC and eMBB of M micro slots (mini-slots) as training data; M is a natural number;
step 101, taking the kth mini-slot in M as an example, obtaining downlink channel gain g of different subcarriers through Channel Quality Indicator (CQI) information periodically uploaded by UEk=[g1,g2,…,gi]Wherein i is the number of sub-carriers in the mini-slot; and obtaining eMBB data packet bit number Rk eMBit number R of URLLC data packetk UReMBB packet queue length Qk eMURLLC packet queue length Qk UR,k∈M;
Step 102, packaging the obtained information into a state vector sk=[Rk eM,Rk UR,gk,Qk eM,Qk UR]As training data.
Step 2, establishing a URLLC and eMBB resource multiplexing model based on deep reinforcement learning, and training model parameters by using the training data;
step 201, establishing a resource multiplexing model of URLLC and eMBB based on deep reinforcement learning, which comprises the following specific steps:
(1) Set the action vector a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}], where P^{eM} denotes the transmit power allocated to the eMBB data packet in the current mini-slot transmission time, P^{UR} denotes the transmit power allocated to the URLLC data packet in the current mini-slot transmission time, n^{eM} denotes the number of subcarriers allocated to the eMBB data packet in the current mini-slot transmission time, and n^{UR} denotes the number of subcarriers allocated to the URLLC data packet in the current mini-slot transmission time; and initialize the eMBB packet queue length Q^{eM} and the URLLC packet queue length Q^{UR} to zero;
(2) Construct two identical neural networks, eval and next. The eval neural network is used to obtain the action-value function Q of the current state and to select the action vector a; the next neural network computes the target action value Q_target from the maximum action value max_a Q' of the next state, which is then used to update the eval neural network parameters;
(3) Set the parameters of the eval neural network, C = [n, n_h, n_in, n_out, θ, activate]; n denotes the number of hidden layers of the neural network, n_h = [n_h1, n_h2, ..., n_hn] denotes the number of neurons in each hidden layer, n_in denotes the number of input-layer neurons and equals the length of the state vector s, n_out denotes the number of output-layer neurons and equals the number of all possible values of the action vector a, θ = [weight, bias], where weight denotes the weights and is randomly initialized in the range 0–w, bias denotes the biases and is initialized to b, and activate denotes the activation function, for which the rectified linear unit (ReLU) is adopted;
(4) Initialize the parameters C of the next neural network.
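As an illustration of this eval/next construction, the following is a minimal sketch in PyTorch; the hidden-layer sizes, the number of candidate actions, and the initialization bounds w and b are hypothetical placeholders rather than values taken from the invention.

```python
# Sketch only: a DQN-style eval/next (target) network pair as described in step 201.
# Layer sizes, action count and the bounds w, b are hypothetical placeholders.
import torch
import torch.nn as nn

def build_network(n_in, n_hidden, n_out, w=0.1, b=0.0):
    """Fully connected network with ReLU activations, as in parameter C."""
    layers, prev = [], n_in
    for n_h in n_hidden:                          # n_hidden = [n_h1, ..., n_hn]
        linear = nn.Linear(prev, n_h)
        nn.init.uniform_(linear.weight, 0.0, w)   # weights randomly initialized in 0~w
        nn.init.constant_(linear.bias, b)         # biases initialized to b
        layers += [linear, nn.ReLU()]
        prev = n_h
    layers.append(nn.Linear(prev, n_out))         # one output per possible action value
    return nn.Sequential(*layers)

state_dim = 5      # length of s_k = [R_eM, R_UR, g_k, Q_eM, Q_UR]; per-subcarrier gains would be flattened in practice
num_actions = 64   # number of all possible values of a = [P_eM, P_UR, n_eM, n_UR] (placeholder)

eval_net = build_network(state_dim, [128, 128], num_actions)
next_net = build_network(state_dim, [128, 128], num_actions)
next_net.load_state_dict(eval_net.state_dict())   # step (4): initialize the next-network parameters
```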
Step 202, the method for training the model parameters by using the training data includes:
A. (1) Input the state vector of the kth mini-slot, s_k = [R_k^{eM}, R_k^{UR}, g_k, Q_k^{eM}, Q_k^{UR}], into the eval neural network;
(2) Select the action vector a_k;
Specifically, the action vector a_k can be selected in two ways. One is to set a small probability ε and, with probability ε, randomly select an action a_k from the action pool. Alternatively, with probability (1 − ε), select from the eval neural network the action a_k that satisfies a_k = argmax_{a_k} Q(s_k, a_k, θ). The action a_k has a number of possible values; for each value of a_k the corresponding Q(s_k, a_k, θ) value is obtained, and the a_k corresponding to the largest Q(s_k, a_k, θ) value is selected. The detailed calculation of the Q(s_k, a_k, θ) value is given in (3) below.
(3) Calculate the obtained reward r_k and the action-value function Q according to the action vector a_k;
(3.1) According to the action vector a_k, the obtained reward r_k is calculated as follows:
According to the selected action a_k = [P_k^{eM}, P_k^{UR}, n_k^{eM}, n_k^{UR}], the signal-to-noise ratio corresponding to transmitting only URLLC data on the ith subcarrier can be calculated [equation image].
For the ith subcarrier, if only eMBB data is transmitted, the corresponding signal-to-noise ratio is [equation image].
For the ith subcarrier, if multiplexed data is transmitted on it, the corresponding signal-to-noise ratios are [equation images].
Therefore, the error rate of URLLC data packet transmission on the ith subcarrier is [equation image], where Q_gauss denotes the Gaussian Q function and V denotes the channel dispersion. The signal-to-noise ratio used here is selected according to whether the ith subcarrier transmits only URLLC data packets or transmits multiplexed data.
From the per-subcarrier error rates, the transmission error rate ε_k^{UR} of the kth mini-slot URLLC data packet is obtained [equation image], as is the transmission rate of the kth mini-slot URLLC data packet on the ith subcarrier [equation image].
From these, the throughput of the URLLC data packet in the current mini-slot is obtained [equation image], where T denotes the time-domain length of a mini-slot.
From the URLLC throughput and s_k, the number of discarded URLLC packet bits of the kth mini-slot is obtained [equation image], with the maximum URLLC packet queue length set to H^{UR}.
Likewise, the throughput of the eMBB data packet in the current mini-slot is obtained [equation image], where n_k denotes the number of subcarriers occupied by the multiplexing of eMBB and URLLC and σ² is the Gaussian noise power.
From the eMBB throughput and s_k, the number of discarded eMBB packet bits of the kth mini-slot is obtained [equation image], with the maximum eMBB packet queue length set to H^{eM}.
According to ε_k^{UR}, a_k, the discarded URLLC bits and the discarded eMBB bits, the reward r_k is obtained [equation image], where ω_1 to ω_5 are all constants.
Here ε_k^{UR} denotes the URLLC packet transmission error rate and must be compared with ε_error before its contribution to the reward is taken: one value is taken when the URLLC packet transmission error rate in the kth mini-slot is greater than ε_error, and another when it is smaller than ε_error [equation images]. In the context of the invention, ε_error is 10^{-5}.
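The reward expressions above are given as images in the original; as a rough illustration of the structure they describe, the following sketch assembles a reward of the same general form from assumed standard ingredients (per-subcarrier SNR, a Gaussian-Q-style error term, Shannon-rate throughput, discarded bits, and an ω-weighted combination). All formulas, symbols and weights in this sketch are assumptions, not the invention's exact expressions.

```python
# Illustrative sketch only: reward structure of step (3.1) under assumed formulas.
# The SNR, error-rate, throughput and omega-weighted combination below are placeholders;
# the exact expressions are given as images in the original document.
import math

def reward(a_k, g, sigma2, R_UR, R_eM, Q_UR, Q_eM, T=7.1e-5, eps_error=1e-5,
           omega=(1.0, 1.0, 1.0, 1.0, 1.0)):
    P_eM, P_UR, n_eM, n_UR = a_k
    thr_UR = thr_eM = 0.0          # per-mini-slot throughputs (units absorbed into the placeholders)
    err_UR = 0.0                   # worst-case URLLC error rate over the used subcarriers
    for i, g_i in enumerate(g):
        if i < n_UR:               # assumed layout: first n_UR subcarriers carry URLLC
            snr = P_UR * g_i / (P_eM * g_i + sigma2) if i < n_eM else P_UR * g_i / sigma2
            thr_UR += T * math.log2(1.0 + snr)
            err_UR = max(err_UR, 0.5 * math.erfc(math.sqrt(snr / 2.0)))  # crude stand-in for Q_gauss(.)
        if i < n_eM:               # assumed layout: first n_eM subcarriers carry eMBB
            snr_e = P_eM * g_i / (P_UR * g_i + sigma2) if i < n_UR else P_eM * g_i / sigma2
            thr_eM += T * math.log2(1.0 + snr_e)

    drop_UR = max(0.0, R_UR + Q_UR - thr_UR)   # bits that cannot be served are discarded
    drop_eM = max(0.0, R_eM + Q_eM - thr_eM)
    reliability_penalty = 1.0 if err_UR > eps_error else 0.0  # comparison against eps_error = 1e-5

    w1, w2, w3, w4, w5 = omega
    return w1 * thr_UR + w2 * thr_eM - w3 * drop_UR - w4 * drop_eM - w5 * reliability_penalty
```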
(3.2) According to the Bellman equation, in state s_k with action a_k taken, the reward r_k obtained by taking action a_k is added to the Q value of the next state to obtain the expected value, and the action-value function is calculated as Q(s_k, a_k) = E[r_k + λ·max_{a_{k+1}} Q(s_{k+1}, a_{k+1})], where λ is the loss (discount) factor.
Since the Q value of the current state depends on the Q value of the next state, an iterative approach can be taken to solve the Markov decision problem via the Bellman equation.
(4) Obtain the next arriving state vector s_{k+1};
Specifically, s_{k+1} is obtained in the same way as s_k in step 1 and is not described again here.
(5) Store (s_k, a_k, r_k, s_{k+1}) as a sample;
Typically, a number of samples are stored in a memory unit for subsequent training of the model.
(6) Input s_{k+1} into the next neural network to obtain the maximum action value max_{a_{k+1}} Q';
(7) From max_{a_{k+1}} Q' and r_k, obtain Q_target = r_k + γ·max_{a_{k+1}} Q'(s_{k+1}, a_{k+1}; θ'), where γ denotes the discount factor and θ' denotes the parameters of the current next neural network;
(8) Randomly take F samples from the memory unit and obtain the Q_target and the action-value function Q of each sample, F being a natural number;
(9) Substitute the Q_target and action-value function Q of each sample into Loss(θ) = E[(Q_target − Q(s, a; θ))²] to obtain the loss function Loss(θ), where θ denotes the parameters of the current eval neural network;
(10) Use gradient descent to compute the gradient ∂Loss(θ)/∂θ and update the parameters θ of the eval neural network along the direction of steepest descent;
B. Take different values of k and repeat step A; every I updates of the eval neural network parameters, update the parameters of the next neural network once so that θ' = θ; I is a natural number greater than 1;
C. Take different values of k, repeat A to B, and continue training the model until the loss function converges.
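Steps A to C together form a standard DQN training loop with experience replay and a periodically synchronized target (next) network. The following sketch illustrates that loop; the environment-interaction function env_step, the replay-memory capacity, and the hyperparameter values (γ, ε, F, I, learning rate) are illustrative assumptions.

```python
# Sketch of the training loop in step 202 (DQN with experience replay and a target network).
# env_step(), the replay capacity and the hyperparameters are illustrative assumptions.
import random
from collections import deque
import torch
import torch.nn as nn

def train(eval_net, next_net, env_step, s0, num_actions,
          iterations=10000, gamma=0.9, epsilon=0.05, F=32, I=100, lr=1e-3):
    memory = deque(maxlen=10000)                         # memory unit for samples (s_k, a_k, r_k, s_{k+1})
    optimizer = torch.optim.Adam(eval_net.parameters(), lr=lr)
    s_k = torch.as_tensor(s0, dtype=torch.float32)
    for k in range(iterations):
        # step A(2): epsilon-greedy action selection from the eval network
        if random.random() < epsilon:
            a_k = random.randrange(num_actions)
        else:
            with torch.no_grad():
                a_k = int(torch.argmax(eval_net(s_k)).item())
        r_k, s_next = env_step(a_k)                      # steps A(3)-(4): reward and next state
        s_next = torch.as_tensor(s_next, dtype=torch.float32)
        memory.append((s_k, a_k, r_k, s_next))           # step A(5): store the sample
        s_k = s_next

        if len(memory) >= F:
            batch = random.sample(memory, F)             # step A(8): draw F samples
            states = torch.stack([b[0] for b in batch])
            actions = torch.tensor([b[1] for b in batch])
            rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            next_states = torch.stack([b[3] for b in batch])
            with torch.no_grad():                        # steps A(6)-(7): Q_target from the next network
                q_target = rewards + gamma * next_net(next_states).max(dim=1).values
            q_eval = eval_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q_eval, q_target)   # step A(9): Loss(theta)
            optimizer.zero_grad()
            loss.backward()                              # step A(10): gradient descent on theta
            optimizer.step()

        if k % I == 0:                                   # step B: sync next-network parameters, theta' = theta
            next_net.load_state_dict(eval_net.state_dict())
```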
Step 3, performing performance evaluation on the trained model until the performance requirement is met;
(1) Input the obtained training data s_k = [R_k^{eM}, R_k^{UR}, g_k, Q_k^{eM}, Q_k^{UR}] into the trained model to obtain a_k = [P_k^{eM}, P_k^{UR}, n_k^{eM}, n_k^{UR}], k ∈ M;
(2) Count the numbers of eMBB and URLLC data packets sent by the base station within a predetermined time period, denoted p_EM and p_UR respectively, and obtain, from the information reported by the UE to the base station within that period, the numbers of URLLC and eMBB data packet transmission errors, p_ur and p_em; from p_UR and p_ur obtain the URLLC transmission error rate p_e = p_ur / p_UR, and from p_EM and p_em obtain the eMBB retransmission rate p_re = p_em / p_EM.
(3) Judge p_e and p_re: if p_e < k_e, where k_e denotes the URLLC data packet transmission error rate requirement in the specific scenario, and p_re < k_re, where k_re denotes the eMBB data packet retransmission rate requirement in the specific scenario, then the performance evaluation process is completed; otherwise, continue to train the model until the performance requirement is met.
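A small sketch of this evaluation check follows; the function name and the default thresholds are illustrative assumptions (only the 10^{-5} URLLC requirement is taken from the text above).

```python
# Sketch of the performance check in step 3: compare measured error/retransmission
# rates against the scenario requirements k_e and k_re (default values are illustrative).
def performance_ok(p_UR_sent, p_ur_errors, p_EM_sent, p_em_errors, k_e=1e-5, k_re=0.1):
    p_e = p_ur_errors / p_UR_sent       # URLLC transmission error rate
    p_re = p_em_errors / p_EM_sent      # eMBB retransmission rate
    return p_e < k_e and p_re < k_re    # keep training the model until both requirements hold
```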
Step 4, collecting URLLC and eMBB data packet information, channel information and queue information of the current mini-slot, inputting the collected information into the trained model, and obtaining a resource multiplexing decision result;
Specifically, the collected data of the current mini-slot, s = [R^{eM}, R^{UR}, g, Q^{eM}, Q^{UR}], are input into the trained model to obtain a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}]. The acquisition of s follows step 1 and is not described again here.
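At run time, the decision is a greedy forward pass through the trained eval network, with the index of the largest output decoded back into the tuple (P^{eM}, P^{UR}, n^{eM}, n^{UR}). A sketch follows, assuming the discrete action set is enumerated as a list of such tuples; the action table itself is a hypothetical construct.

```python
# Sketch of step 4: map the current mini-slot state to a resource multiplexing decision.
# action_table is an assumed enumeration of all candidate (P_eM, P_UR, n_eM, n_UR) tuples,
# matching the output dimension of the trained eval network.
import torch

def decide(eval_net, s, action_table):
    with torch.no_grad():
        q_values = eval_net(torch.as_tensor(s, dtype=torch.float32))
    idx = int(torch.argmax(q_values).item())
    return action_table[idx]            # a = [P_eM, P_UR, n_eM, n_UR] for the current mini-slot
```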
And step 5, performing resource allocation on the eMBB and URLLC data packets of the current mini-slot according to the resource multiplexing decision result.
Specifically, according to the obtained resource multiplexing decision result of the current mini-slot, a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}], the radio network controller RNC indicates, through the radio resource control RRC sublayer, the power levels P^{UR} and P^{eM} allocated to the URLLC and eMBB data packets, the numbers of subcarriers n^{UR} and n^{eM} allocated to the URLLC and eMBB data packets, and the location information of the allocated subcarriers.
Further, the system informs the eMBB user in real time of the information that the eMBB is preempted by the URLLC (namely the position information of the subcarriers multiplexed by the eMBB and the URLLC) by configuring downlink DCI signaling PI (Pre-Indication), and informs the eMBB user through RRC sublayer signaling to periodically detect the PI, so as to complete correct demodulation of the preempted eMBB resources. As can be seen from the frame structure of fig. 1, each mini-slot includes 4 symbol lengths in the time domain. As can be seen from the time-frequency resource multiplexing manner in fig. 1, the light-colored pattern indicates the subcarrier positions on each mini-slot where only eMBB data are transmitted, and the dark-colored pattern indicates the subcarrier positions on each mini-slot where eMBB and URLLC are multiplexed. In this way, reasonable allocation of the URLLC and eMBB data packet services over time-frequency resources and power is realized, and efficient utilization of the limited multiplexed resources is achieved.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A resource multiplexing method of ultra-reliable low-delay URLLC and enhanced mobile broadband eMBB based on deep reinforcement learning is characterized by comprising the following steps:
collecting data packet information, channel information and queue information of URLLC and eMBB of M micro slots (mini-slots) as training data; M is a natural number;
establishing a URLLC and eMBB resource multiplexing model based on deep reinforcement learning, and training model parameters by using the training data;
performing performance evaluation on the trained model until the performance requirement is met;
collecting current mini-slot URLLC and eMBB data packet information, channel information and queue information, inputting the collected information into the trained model, and obtaining a resource multiplexing decision result;
and according to the resource multiplexing decision result, carrying out resource allocation on the eMBB and URLLC data packets of the current mini-slot.
2. The method of claim 1, wherein the collecting packet information, channel information, and queue information for M mini-slot URLLC and eMBB as training data comprises:
for the kth mini-slot in M, acquiring the downlink channel gains of different subcarriers, g_k = [g_1, g_2, …, g_i], where i is the number of subcarriers in a mini-slot; and obtaining the eMBB data packet bit number R_k^{eM}, the URLLC data packet bit number R_k^{UR}, the eMBB packet queue length Q_k^{eM}, and the URLLC packet queue length Q_k^{UR}, k ∈ M;
packaging the obtained information into a state vector s_k = [R_k^{eM}, R_k^{UR}, g_k, Q_k^{eM}, Q_k^{UR}] as training data.
3. The method of claim 2, wherein the establishing the deep reinforcement learning based URLLC and eMBB resource reuse model comprises:
setting an action vector a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}], where P^{eM} denotes the transmit power allocated to the eMBB data packet in the current mini-slot transmission time, P^{UR} denotes the transmit power allocated to the URLLC data packet in the current mini-slot transmission time, n^{eM} denotes the number of subcarriers allocated to the eMBB data packet in the current mini-slot transmission time, and n^{UR} denotes the number of subcarriers allocated to the URLLC data packet in the current mini-slot transmission time; and initializing the eMBB packet queue length Q^{eM} and the URLLC packet queue length Q^{UR} to zero;
constructing two identical neural networks, eval and next, wherein the eval neural network is used to obtain the action-value function Q of the current state and to select the action vector a, and the next neural network computes the target action value Q_target from the maximum action value max_a Q' of the next state, so as to complete the updating of the eval neural network parameters;
setting the parameters of the eval neural network, C = [n, n_h, n_in, n_out, θ, activate]; n denotes the number of hidden layers of the neural network, n_h = [n_h1, n_h2, ..., n_hn] denotes the number of neurons in each hidden layer, n_in denotes the number of input-layer neurons and equals the length of the state vector s, n_out denotes the number of output-layer neurons and equals the number of all possible values of the action vector a, θ = [weight, bias], where weight denotes the weights and is randomly initialized in the range 0–w, bias denotes the biases and is initialized to b, and activate denotes the activation function, for which the rectified linear unit is adopted;
initializing the parameters C of the next neural network.
4. The method of claim 3, wherein the method of training model parameters using the training data comprises:
A. inputting the state vector of the kth mini-slot, s_k = [R_k^{eM}, R_k^{UR}, g_k, Q_k^{eM}, Q_k^{UR}], into the eval neural network;
selecting the action vector a_k;
calculating the obtained reward r_k and the action-value function Q according to the action vector a_k;
obtaining the next arriving state vector s_{k+1};
storing (s_k, a_k, r_k, s_{k+1}) as a sample;
inputting s_{k+1} into the next neural network to obtain the maximum action value max_{a_{k+1}} Q';
from max_{a_{k+1}} Q' and r_k, obtaining Q_target = r_k + γ·max_{a_{k+1}} Q'(s_{k+1}, a_{k+1}; θ'), where γ denotes the discount factor and θ' denotes the parameters of the current next neural network;
randomly taking out F samples and obtaining the Q_target and the action-value function Q of each sample, F being a natural number;
substituting the Q_target and action-value function Q of each sample into Loss(θ) = E[(Q_target − Q(s, a; θ))²] to obtain the loss function Loss(θ), where θ denotes the parameters of the current eval neural network;
using gradient descent to compute the gradient ∂Loss(θ)/∂θ and updating the parameters θ of the eval neural network along the direction of steepest descent;
B. taking different values of k and repeating step A; every I updates of the eval neural network parameters, updating the parameters of the next neural network once so that θ' = θ; I is a natural number greater than 1;
C. taking different values of k, repeating A to B, and continuing to train the model until the loss function converges.
5. The method of claim 4, wherein selecting the action vector a_k comprises:
setting a probability ε and, with probability ε, randomly selecting an action a_k from the action pool, or, with probability (1 − ε), selecting from the eval neural network the action a_k that satisfies a_k = argmax_{a_k} Q(s_k, a_k, θ).
6. The method of claim 4, wherein calculating the obtained reward r_k according to the action vector a_k comprises:
according to a_k = [P_k^{eM}, P_k^{UR}, n_k^{eM}, n_k^{UR}], obtaining the signal-to-noise ratio corresponding to transmitting the URLLC data packet on the ith subcarrier [equation image];
according to a_k and this signal-to-noise ratio, obtaining the error rate of the kth mini-slot URLLC data packet transmission on the ith subcarrier [equation image], wherein Q_gauss denotes the Gaussian Q function and V denotes the channel dispersion;
obtaining therefrom the transmission error rate ε_k^{UR} of the kth mini-slot URLLC data packet [equation image] and the transmission rate of the kth mini-slot URLLC data packet on the ith subcarrier [equation image];
obtaining from these the throughput of the URLLC data packet in the current mini-slot [equation image], wherein T denotes the time-domain length of a mini-slot;
according to the URLLC throughput and s_k, obtaining the number of discarded URLLC packet bits of the kth mini-slot [equation image], the maximum URLLC packet queue length being set to H^{UR};
obtaining the throughput of the eMBB data packet in the current mini-slot [equation image], wherein n_k denotes the number of subcarriers occupied by the multiplexing of eMBB and URLLC and σ² is the Gaussian noise power;
according to the eMBB throughput and s_k, obtaining the number of discarded eMBB packet bits of the kth mini-slot [equation image], the maximum eMBB packet queue length being set to H^{eM};
according to ε_k^{UR}, a_k, the discarded URLLC bits and the discarded eMBB bits, obtaining the reward r_k [equation image], wherein ω_1 to ω_5 are all constants.
7. The method of claim 6, wherein, according to the Bellman equation, in state s_k with action a_k taken, the reward r_k obtained by taking action a_k is added to the Q value of the next state to obtain the expected value, and the action-value function is calculated as Q(s_k, a_k) = E[r_k + λ·max_{a_{k+1}} Q(s_{k+1}, a_{k+1})], where λ is the loss (discount) factor.
8. The method of claim 7, wherein performing a performance assessment on the trained model until a performance requirement is met comprises:
inputting the obtained training data s_k = [R_k^{eM}, R_k^{UR}, g_k, Q_k^{eM}, Q_k^{UR}] into the trained model to obtain a_k = [P_k^{eM}, P_k^{UR}, n_k^{eM}, n_k^{UR}], k ∈ M;
counting the numbers of eMBB and URLLC data packets sent by the base station within a predetermined time period, denoted p_EM and p_UR respectively, and obtaining, from the information reported by the UE to the base station within that period, the numbers of URLLC and eMBB data packet transmission errors, p_ur and p_em; obtaining from p_UR and p_ur the URLLC transmission error rate p_e = p_ur / p_UR, and from p_EM and p_em the eMBB retransmission rate p_re = p_em / p_EM;
judging p_e and p_re: if p_e < k_e, where k_e denotes the URLLC data packet transmission error rate requirement in the specific scenario, and p_re < k_re, where k_re denotes the eMBB data packet retransmission rate requirement in the specific scenario, the performance evaluation process is completed; otherwise, continuing to train the model until the performance requirement is met.
9. The method of claim 7, wherein the collecting URLLC and eMBB packet information, channel information, and queue information for a current mini-slot, inputting the collected information into the trained model, and obtaining a resource reuse decision result comprises:
collecting the data of the current mini-slot, s = [R^{eM}, R^{UR}, g, Q^{eM}, Q^{UR}], and inputting it into the trained model to obtain a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}].
10. The method of claim 9, wherein the allocating resources for eMBB and URLLC packets of a current mini-slot according to the resource multiplexing decision result comprises:
according to the obtained resource multiplexing decision result of the current mini-slot, a = [P^{eM}, P^{UR}, n^{eM}, n^{UR}], the radio network controller RNC indicates, through the radio resource control RRC sublayer, the power levels P^{UR} and P^{eM} allocated to the URLLC and eMBB data packets, the numbers of subcarriers n^{UR} and n^{eM} allocated to the URLLC and eMBB data packets, and the location information of the allocated subcarriers.
CN201811383001.3A 2018-11-20 2018-11-20 URLLC and eMBB resource multiplexing method based on deep reinforcement learning Active CN109561504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811383001.3A CN109561504B (en) 2018-11-20 2018-11-20 URLLC and eMBB resource multiplexing method based on deep reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811383001.3A CN109561504B (en) 2018-11-20 2018-11-20 URLLC and eMBB resource multiplexing method based on deep reinforcement learning

Publications (2)

Publication Number Publication Date
CN109561504A CN109561504A (en) 2019-04-02
CN109561504B (en) 2020-09-01

Family

ID=65866817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811383001.3A Active CN109561504B (en) URLLC and eMBB resource multiplexing method based on deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN109561504B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182644B (en) * 2019-12-24 2022-02-08 北京邮电大学 Joint retransmission URLLC resource scheduling method based on deep reinforcement learning
CN111556572B (en) * 2020-04-21 2022-06-07 北京邮电大学 Spectrum resource and computing resource joint allocation method based on reinforcement learning
CN113099460B (en) * 2021-03-10 2023-03-28 西安交通大学 Reservation-based URLLC (Universal resource reservation control) hybrid multiple access transmission optimization method and system during eMBB (enhanced multimedia broadcast/multicast service) coexistence
CN113453236B (en) * 2021-06-25 2022-06-21 西南科技大学 Frequency resource allocation method for URLLC and eMBB mixed service
CN113747450B (en) * 2021-07-27 2022-12-09 清华大学 Service deployment method and device in mobile network and electronic equipment
CN113691350B (en) * 2021-08-13 2023-06-20 北京遥感设备研究所 Combined scheduling method and system of eMBB and URLLC
CN114143816A (en) * 2021-12-20 2022-03-04 国网河南省电力公司信息通信公司 Dynamic 5G network resource scheduling method based on power service quality guarantee
CN115439479B (en) * 2022-11-09 2023-02-03 北京航空航天大学 Academic image multiplexing detection method based on reinforcement learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108633004A (en) * 2017-03-17 2018-10-09 工业和信息化部电信研究院 URLLC business occupies eMBB service resources and indicates channel indicating means

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108811115B (en) * 2017-05-05 2021-07-23 北京紫光展锐通信技术有限公司 Method and device for seizing and processing eMBB service data, base station and user equipment
CN108632861B (en) * 2018-04-17 2021-06-18 浙江工业大学 Mobile edge calculation shunting decision method based on deep reinforcement learning
CN108712755B (en) * 2018-05-18 2021-02-26 浙江工业大学 Non-orthogonal access uplink transmission time optimization method based on deep reinforcement learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108633004A (en) * 2017-03-17 2018-10-09 工业和信息化部电信研究院 URLLC business occupies eMBB service resources and indicates channel indicating means

Also Published As

Publication number Publication date
CN109561504A (en) 2019-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant