CN114980324A - Slice-oriented low-delay wireless resource scheduling method and system - Google Patents

Slice-oriented low-delay wireless resource scheduling method and system Download PDF

Info

Publication number
CN114980324A
Authority
CN
China
Prior art keywords
user
resource
allocation
resource scheduling
request information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210379968.4A
Other languages
Chinese (zh)
Other versions
CN114980324B (en)
Inventor
刘铭
桂振文
谢伟坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 7 Research Institute
Original Assignee
CETC 7 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 7 Research Institute filed Critical CETC 7 Research Institute
Priority to CN202210379968.4A priority Critical patent/CN114980324B/en
Publication of CN114980324A publication Critical patent/CN114980324A/en
Application granted granted Critical
Publication of CN114980324B publication Critical patent/CN114980324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/20Control channels or signalling for resource management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/04Wireless resource allocation
    • H04W72/044Wireless resource allocation based on the type of the allocated resource
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/50Allocation or scheduling criteria for wireless resources
    • H04W72/53Allocation or scheduling criteria for wireless resources based on regulatory allocation policies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/50Allocation or scheduling criteria for wireless resources
    • H04W72/54Allocation or scheduling criteria for wireless resources based on quality criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W72/00Local resource management
    • H04W72/50Allocation or scheduling criteria for wireless resources
    • H04W72/56Allocation or scheduling criteria for wireless resources based on priority criteria
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a slice-oriented low-delay wireless resource scheduling method and system, wherein the method comprises the following steps: receiving resource scheduling request information sent by a physical-world user; acquiring the instantaneous transmission rate of the user based on the currently received resource scheduling request information; constructing a digital twin simulation environment for user resource allocation with the available computing resources; in the digital twin simulation environment, calculating the priority of each user on each resource block by combining the user's instantaneous transmission rate, the available computing resources and the user's scheduling request information, and preliminarily evaluating the allocation decision for the resource blocks; based on the user's historical allocation data, optimizing the preliminarily evaluated allocation decision through a deep deterministic policy gradient model; and completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.

Description

Slice-oriented low-delay wireless resource scheduling method and system
Technical Field
The invention relates to the technical field of 5G wireless resource allocation, in particular to a slice-oriented low-delay wireless resource scheduling method and system.
Background
In a traditional wireless resource allocation scenario, a user sends resource request information to the cellular network, and the base station divides the available resource blocks fairly according to the number of users and sends the corresponding resource blocks to each user. However, the characteristics of individual users are often ignored, so high-priority users may be assigned resource blocks that cannot support their needs. In addition, the time that a user's pending data stays in the queue on the base station side directly affects the user's delay; when this time is long, the user's requirements are very likely not to be met. To account for user delay, existing research has established multi-objective optimization models, but such models cannot accurately determine the exact resource demand of each user.
In view of the above discussion, existing solutions cannot accurately allocate the resources required by a user's resource scheduling request.
Disclosure of Invention
To address the defects and shortcomings of the prior art, the invention provides a slice-oriented low-delay wireless resource scheduling method and system, which use a deep deterministic policy gradient algorithm to allocate resources accurately and thereby satisfy users' low-delay requirements.
In order to achieve the purpose of the invention, the technical scheme is as follows:
a slice-oriented low-delay wireless resource scheduling method comprises the following steps:
receiving resource scheduling request information sent by a physical world user;
acquiring the instantaneous transmission rate of a user based on the currently received resource scheduling request information;
constructing a digital twin simulation environment for user resource allocation through the existing available computing resources;
in a digital twin simulation environment, calculating the priority of each user on each resource block by combining the instantaneous transmission rate of the user, the existing available calculation resources and the scheduling request information of the user, and preliminarily evaluating the allocation decision of the resource blocks;
based on historical allocation data of a user, optimizing the allocation decision of the preliminarily evaluated resource blocks through a deep deterministic policy gradient model;
and completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.
Further, the priority R_i of each user i on each resource block is calculated and expressed as:
[priority formula given only as an image in the original publication]
wherein ω_1, ω_2, ω_3, ω_4 denote weight coefficients satisfying ω_1 + ω_2 + ω_3 + ω_4 = 1; γ_i(t) denotes the signal-to-noise ratio of user i at time t; r_i(t) denotes the instantaneous transmission rate of user i at time t; RA_i(t) denotes the average transmission rate of user i over a period of time before time t; C_i(t) denotes the queue buffer time of user i at time t; D_i(t) denotes the amount of data that user i needs to transmit at time t.
Still further, the deep deterministic policy gradient model comprises an Actor neural network and a Critic neural network;
the current resource scheduling request information is taken as observation information and defined as S_i, and the historical allocation data are put into the constructed replay memory; the current data S_i are input into the Actor neural network to obtain the resource allocation decision a_i, and the corresponding reward value is calculated by the given priority formula.
Further, the current resource scheduling request information S_i is input into the Actor neural network for iterative training; after multiple iterations, the reward with the memory discount taken into account can be rewritten as:
R = Σ_{i=t}^{T} γ^{i−t} R_i(s_i, a_i)
wherein R_i(s, a) denotes the reward obtained by user i; γ^{i−t} denotes the discount factor, a fixed value (e.g., set to 0.999); T denotes the time scale.
Still further, based on the obtained resource block allocation strategy a_i, a behavior-value function is established to express the expected return obtained by adopting the resource block allocation strategy a_i; the behavior-value function is expressed as:
Q^π(s_i, a_i) = E_π[ R | s_i, a_i ]
wherein E_π[·] is the expectation function.
Further, based on the behavior-value function expressing the expected return obtained by adopting the resource block allocation strategy a_i, the maximum expected return is obtained by constructing a loss function, which is expressed as:
L(θ^Q) = E[ (Y − Q^π(s_i, a_i; θ^Q))^2 ]
wherein θ^Q denotes the parameters of the function Q^π, Y denotes the true demand return of user i, and E[·] is the expectation function.
Preferably, after the resource block allocation to the user is completed, the allocated resource block is deleted from the resource block list.
Further, after the allocated resource blocks are deleted from the resource block list,
judging whether the resource block list is empty; if it is empty, the allocation process is finished;
if the resource block list is not empty, the allocation strategy continues to be executed and is sent to the user in the physical world, thereby meeting the user's low-delay requirement.
Preferably, the received resource scheduling request information is analyzed, whether the past resource demand information of the user exists in the cache of the base station side is judged, and if the past resource demand information of the user exists, the past resource demand information is added into the resource scheduling request information of the user;
the resource scheduling request information comprises the position of the user, the channel quality information transmitted by the user and the waiting time of the data of the user in the queue at the base station side.
A computer system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the slice-oriented low-latency radio resource scheduling method.
The invention has the following beneficial effects:
1. The invention provides a slice-oriented low-delay wireless resource scheduling method that effectively guarantees users' low-delay requirements and can accurately satisfy the delay requirements of different users.
2. The problem of unreasonable resource allocation is overcome: a digital twin system mapping physical entities to virtual entities is constructed on the base station side, and the user's current resource request is accurately simulated by learning from historical allocation data.
3. With the user's current resource request obtained, the deep deterministic policy gradient model takes the priority as the reward to accurately guide the base station side toward an accurate resource allocation scheme.
4. The method is generally applicable to wireless resource allocation in 5G network applications.
Drawings
Fig. 1 is a block diagram of the steps of a slice-oriented low-latency radio resource scheduling method according to embodiment 1.
Fig. 2 is a flowchart of steps of a slice-oriented low-latency radio resource scheduling method according to embodiment 1.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
The terminology used in this embodiment is explained as follows:
1. User: an end user connected to the same cellular network.
2. Slice: a cellular network is cut into a plurality of virtual end-to-end networks; each network is logically independent, and the failure of any one network does not affect the other virtual networks.
3. Mapping: the correspondence between the physical entity and the virtual entity of the digital twin.
4. Digital twin: one or more mutually dependent digital mapping systems.
Example 1
The slice-oriented low-latency radio resource scheduling method provided in this embodiment is mainly applied to fifth generation mobile communication technology (5G) scenarios. Existing slicing technology can allocate resources to users in a cellular network according to the number of users and their different priorities. The widely used proportional fair (PF) algorithm establishes a priority formula so that users with high demands are preferentially given the corresponding resource blocks, thereby meeting the resource requirements of high-priority users. However, besides each user's priority, the user's delay requirement cannot be ignored; a user's delay mainly consists of the waiting delay in the buffer and the delay of sending the request. The conventional PF algorithm does not allocate resources according to users' delay requirements: even when some users need more resources and have stricter delay requirements, the PF algorithm, based on fairness, still distributes resources to users equally. To address these problems, this embodiment designs an intelligent slicing technology based on a Digital Twin (DT) to achieve low-latency radio resource scheduling. With the method of this embodiment, the cellular network can train a deep deterministic policy gradient model using historical allocation data combined with the users' delay demand characteristics. Unlike traditional resource allocation algorithms, the deep deterministic policy gradient model can provide different numbers of resource blocks to users with different delay requirements. The method of this embodiment can significantly reduce the delay of radio resource allocation and improves the low-latency performance of fifth generation mobile communication technology.
As shown in fig. 1 and fig. 2, the present embodiment provides a slice-oriented low-latency radio resource scheduling method, where the method includes the following steps:
S1: receiving resource scheduling request information sent by a physical-world user;
S2: acquiring the instantaneous transmission rate of the user based on the currently received resource scheduling request information;
S3: constructing a digital twin simulation environment for user resource allocation by means of the available computing resources, where the digital twin is located on the base station side, i.e., the digital twin simulation environment is constructed on the base station side;
S4: in the digital twin simulation environment, calculating the priority of each user on each resource block by combining the user's instantaneous transmission rate, the available computing resources and the user's scheduling request information, and obtaining the allocation decision for the resource blocks from a preliminary evaluation of the priorities; following the principle that a high-priority user has its resource request satisfied first, the allocation decision is formed by ranking the calculated priorities on each resource block from high to low;
S5: based on the user's historical allocation data, optimizing the preliminarily evaluated allocation decision through a deep deterministic policy gradient model;
S6: completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.
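To make the flow of steps S1 to S6 concrete, the following Python sketch strings them together. Every helper here (receive_requests, instant_rate, build_twin_env, rank_by_priority, refine_with_ddpg, apply_allocation) is a stand-in stub introduced only for illustration and is not an interface defined by this embodiment.

def receive_requests():                      # S1: requests from physical-world users
    return [{"user": "u1", "data_bits": 1200}, {"user": "u2", "data_bits": 400}]

def instant_rate(request):                   # S2: instantaneous transmission rate per request
    return 5.0

def build_twin_env(requests, rates):         # S3: digital twin built on the base-station side
    return {"requests": requests, "rates": rates, "history": []}

def rank_by_priority(env):                   # S4: preliminary decision from per-block priorities
    return sorted(env["requests"], key=lambda r: env["rates"][r["user"]], reverse=True)

def refine_with_ddpg(preliminary, history):  # S5: DDPG refinement trained on historical data
    return preliminary                       # stub: returns the preliminary decision unchanged

def apply_allocation(decision):              # S6: map the decision back to the physical world
    print("allocation order:", [r["user"] for r in decision])

requests = receive_requests()
rates = {r["user"]: instant_rate(r) for r in requests}
env = build_twin_env(requests, rates)
apply_allocation(refine_with_ddpg(rank_by_priority(env), env["history"]))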
In this embodiment, the instantaneous transmission rate of the user is obtained from the current resource scheduling request information because, if the rate at which the user's data flows into the base station side is too high, the base station side may be unable to obtain complete user information effectively, resulting in overflow of the buffered data on the base station side.
In a specific embodiment, the priority R_i of each user i on each resource block is calculated and expressed as:
[priority formula given only as an image in the original publication]
wherein ω_1, ω_2, ω_3, ω_4 denote weight coefficients satisfying ω_1 + ω_2 + ω_3 + ω_4 = 1; γ_i(t) denotes the signal-to-noise ratio of user i at time t; r_i(t) denotes the instantaneous transmission rate of user i at time t; RA_i(t) denotes the average transmission rate of user i over a period of time before time t; C_i(t) denotes the queue buffer time of user i at time t; D_i(t) denotes the amount of data that user i needs to transmit at time t.
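As an illustration of how such a priority could be evaluated and used to rank users on a resource block, the following Python sketch combines the variables listed above in a weighted sum. Since the exact formula is published only as an image, the particular combination below (including the PF-style ratio r_i(t)/RA_i(t)) is an assumed form, not the patented formula.

def priority(w, snr, inst_rate, avg_rate, buffer_time, data_volume):
    # Assumed priority form: weighted combination of the variables defined above.
    w1, w2, w3, w4 = w
    assert abs(w1 + w2 + w3 + w4 - 1.0) < 1e-9, "weights must sum to 1"
    return (w1 * snr                                 # gamma_i(t)
            + w2 * inst_rate / max(avg_rate, 1e-9)   # r_i(t) / RA_i(t), PF-style term
            + w3 * buffer_time                       # C_i(t): favours long-waiting users
            + w4 * data_volume)                      # D_i(t): favours large backlogs

# Example: rank three users on one resource block by descending priority.
weights = (0.25, 0.25, 0.25, 0.25)
users = {"u1": (12.0, 8.0, 6.0, 3.0, 1.5),
         "u2": (9.0, 5.0, 7.0, 9.0, 4.0),
         "u3": (15.0, 10.0, 9.0, 1.0, 0.5)}
ranking = sorted(users, key=lambda u: priority(weights, *users[u]), reverse=True)
print(ranking)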
In a specific embodiment, the deep deterministic policy gradient (DDPG) model includes an Actor neural network and a Critic neural network; the Critic network is used to evaluate the user resource allocation decision at the current moment. The current resource scheduling request information is taken as observation information and defined as S_i, and the historical allocation data are put into the constructed replay memory; the current resource scheduling request information S_i is input into the Actor neural network to obtain the resource allocation decision a_i, and the corresponding reward value is calculated by the given priority formula.
In this embodiment, the replay memory implements experience replay, which is a standard component of DDPG reinforcement learning; putting historical data into the replay memory reduces the strong correlation between samples drawn during training, a correlation that would otherwise prevent good training results.
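A minimal sketch of such a replay memory is given below: transitions (S_i, a_i, reward, next state) are stored in a bounded buffer and sampled uniformly at random, which is what breaks the correlation between consecutive scheduling decisions during training. The class name and capacity are illustrative choices.

import random
from collections import deque

class ReplayMemory:
    """Bounded experience-replay buffer of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation of stored transitions.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

memory = ReplayMemory(capacity=1000)
memory.push(state=[0.2, 0.7], action=[1, 0], reward=0.8, next_state=[0.3, 0.6])
print(memory.sample(batch_size=1))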
Specifically, the allocation decision for the preliminarily evaluated resource blocks is optimized through the deep deterministic policy gradient model: the historical allocation data are used as training information and evaluated with an actor-critic network framework, where the actor corresponds to the preliminarily obtained resource allocation decision of step S4. The evaluation process is expressed as:
[evaluation formula given only as an image in the original publication]
wherein θ^Q denotes the network training parameters, Q denotes the evaluation function of the DDPG, μ denotes the parameter of Q, E denotes the expectation function, and X = r_i + γ Q^{μ'}(s_i, a_i)|_{a' = μ'(s_i)}; a_i represents the action space of user i, and s_i represents the state space of user i. γ is the discount factor in reinforcement learning, a denotes the set of a_i, s denotes the set of s_i, and μ denotes the set of μ_i. In this embodiment, a prime (') marks the next output, distinguishing it from the current one; for example, a' represents the next output of a.
In a specific embodiment, the current resource scheduling request information S_i is input into the Actor neural network for iterative training; after multiple iterations, the reward with the memory discount taken into account can be rewritten as:
R = Σ_{i=t}^{T} γ^{i−t} R_i(S_i, a_i)
wherein R_i(S_i, a_i) denotes the reward obtained by user i; γ^{i−t} denotes the discount factor, a fixed value (e.g., set to 0.999); T denotes the time scale.
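The discounted reward can be computed directly once the per-step rewards are known; the short Python sketch below sums rewards from the current step to the horizon with discount factor 0.999, matching the fixed value mentioned above.

def discounted_return(rewards, gamma=0.999):
    """rewards[k] is the reward k steps after time t; returns sum_k gamma**k * rewards[k]."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

print(discounted_return([1.0, 0.5, 0.25]))  # 1.0 + 0.999*0.5 + 0.999**2 * 0.25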
In a specific embodiment, based on the obtained resource block allocation strategy a_i, a behavior-value function is established to express the expected return obtained by adopting the resource block allocation strategy a_i, expressed as:
Q^π(s_i, a_i) = E_π[ R | s_i, a_i ]
wherein E_π[·] is the expectation function.
In a specific embodiment, based on the behavior-value function expressing the expected return obtained by adopting the resource block allocation strategy a_i, the maximum expected return is obtained by constructing a loss function, which is expressed as:
L(θ^Q) = E[ (Y − Q^π(s_i, a_i; θ^Q))^2 ]
wherein θ^Q denotes the parameters of the function Q^π, Y denotes the true demand return of user i, and E[·] is the expectation function.
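The following PyTorch sketch illustrates one critic update driven by such a loss: the critic Q(s, a; θ^Q) is regressed onto a target return Y, here formed in the standard DDPG way as r + γ·Q'(s', μ'(s')). The network sizes, the use of separate target networks, and the random example batch are illustrative assumptions rather than details fixed by this embodiment.

import torch
import torch.nn as nn

state_dim, action_dim, gamma = 8, 4, 0.999

# Critic Q(s, a; theta_Q) plus target critic/actor used only to form the target return Y.
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
target_critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
target_actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3)

def critic_update(s, a, r, s_next):
    with torch.no_grad():
        a_next = target_actor(s_next)                                        # mu'(s')
        y = r + gamma * target_critic(torch.cat([s_next, a_next], dim=-1))   # target return Y
    q = critic(torch.cat([s, a], dim=-1))                                    # Q(s, a; theta_Q)
    loss = ((y - q) ** 2).mean()                                             # L(theta_Q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example update on a random batch of 32 transitions.
s, a = torch.randn(32, state_dim), torch.randn(32, action_dim)
r, s_next = torch.randn(32, 1), torch.randn(32, state_dim)
print(critic_update(s, a, r, s_next))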
In a specific embodiment, after the resource block allocation to the user is completed, the allocated resource block is deleted from the resource block list.
In a specific embodiment, after the allocated resource blocks are removed from the resource block list,
it is judged whether the resource block list is empty; if it is empty, the allocation process is finished;
if the resource block list is not empty, the allocation strategy continues to be executed and is sent to the user in the physical world, thereby meeting the user's low-delay requirement.
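A small sketch of this allocation loop, under the assumption that the decision is represented as a map from resource blocks to users, is given below; send_to_user stands in for the step that maps the decision back to the physical world.

def allocate_all(resource_blocks, decisions, send_to_user):
    """decisions maps each resource block to the user chosen for it."""
    remaining = list(resource_blocks)
    while remaining:                     # loop until the resource block list is empty
        rb = remaining.pop(0)            # delete the allocated block from the list
        send_to_user(decisions[rb], rb)  # execute the decision for the physical-world user
    # list empty: the allocation process is finished

blocks = ["rb0", "rb1", "rb2"]
decision = {"rb0": "u2", "rb1": "u1", "rb2": "u3"}
allocate_all(blocks, decision, lambda user, rb: print("send", rb, "to", user))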
In a specific embodiment, the received resource scheduling request information is analyzed, whether the past resource demand information of the user exists in a cache of a base station side is judged, and if the past resource demand information of the user exists, the past resource demand information is added into the resource scheduling request information of the user;
the resource scheduling request information comprises the position of the user, the channel quality information transmitted by the user and the waiting time of the data of the user in the queue at the base station side.
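For illustration, the request information described above could be carried in a structure like the following; the class name and the optional field for cached past demand are assumptions made here for clarity.

from dataclasses import dataclass, field

@dataclass
class ResourceRequest:
    user_id: str
    position: tuple                     # position of the user
    channel_quality: float              # channel quality information of the user's transmission
    queue_wait_time: float              # waiting time of the user's data in the base-station queue
    past_demand: list = field(default_factory=list)  # past resource demand found in the base-station cache

request = ResourceRequest(user_id="u1", position=(120.5, 30.2),
                          channel_quality=0.87, queue_wait_time=4.0)
print(request)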
The method of this embodiment is mainly used on the base station side of a 5G cellular network: the method acquires the users' request requirements, schedules and cuts the resource blocks reasonably, and thereby meets the users' low-delay requirements.
Example 2
The embodiment also provides a computer system, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method steps implemented by the processor are as follows:
S1: receiving resource scheduling request information sent by a physical-world user;
S2: acquiring the instantaneous transmission rate of the user based on the currently received resource scheduling request information;
S3: constructing a digital twin simulation environment for user resource allocation with the existing available computing resources;
S4: in the digital twin simulation environment, calculating the priority of each user on each resource block by combining the user's instantaneous transmission rate, the available computing resources and the user's scheduling request information, and obtaining the allocation decision for the resource blocks from a preliminary evaluation of the priorities; following the principle that a high-priority user has its resource request satisfied first, the allocation decision is formed by ranking the calculated priorities on each resource block from high to low;
S5: based on the user's historical allocation data, optimizing the preliminarily evaluated allocation decision through a deep deterministic policy gradient model;
S6: completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the bus connecting together various circuits of the memory and the processor or processors. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
Example 3
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method steps of:
S1: receiving resource scheduling request information sent by a physical-world user;
S2: acquiring the instantaneous transmission rate of the user based on the currently received resource scheduling request information;
S3: constructing a digital twin simulation environment for user resource allocation with the existing available computing resources, where the currently available computing resources refer to the available resources of the central processing unit (CPU) of the base station;
S4: in the digital twin simulation environment, calculating the priority of each user on each resource block by combining the user's instantaneous transmission rate, the available computing resources and the user's scheduling request information, and obtaining the allocation decision for the resource blocks from a preliminary evaluation of the priorities; following the principle that a high-priority user has its resource request satisfied first, the allocation decision is formed by ranking the calculated priorities on each resource block from high to low;
S5: based on the user's historical allocation data, optimizing the preliminarily evaluated allocation decision through a deep deterministic policy gradient model;
S6: completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the related hardware, where the program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A slice-oriented low-delay wireless resource scheduling method is characterized in that: the method comprises the following steps:
receiving resource scheduling request information sent by a physical world user;
acquiring the instantaneous transmission rate of a user based on the currently received resource scheduling request information;
constructing a digital twin simulation environment for user resource allocation through the existing available computing resources;
in a digital twin simulation environment, calculating the priority of each user on each resource block by combining the instantaneous transmission rate of the user, the available calculation resources and the scheduling request information of the user, and preliminarily evaluating the allocation decision of the resource blocks;
based on historical allocation data of a user, optimizing the allocation decision of the preliminarily evaluated resource blocks through a deep deterministic policy gradient model;
and completing the resource block allocation to the user according to the optimized allocation decision, and mapping the allocation decision to the physical world.
2. The slice-oriented low-latency radio resource scheduling method of claim 1, wherein: the priority R_i of each user i on each resource block is calculated and expressed as:
[priority formula given only as an image in the original publication]
wherein ω_1, ω_2, ω_3, ω_4 denote weight coefficients satisfying ω_1 + ω_2 + ω_3 + ω_4 = 1; γ_i(t) denotes the signal-to-noise ratio of user i at time t; r_i(t) denotes the instantaneous transmission rate of user i at time t; RA_i(t) denotes the average transmission rate of user i over a period of time before time t; C_i(t) denotes the queue buffer time of user i at time t; D_i(t) denotes the amount of data that user i needs to transmit at time t.
3. The slice-oriented low-latency radio resource scheduling method of claim 2, wherein: the deep deterministic policy gradient model comprises an Actor neural network and a Critic neural network;
the current resource scheduling request information is taken as observation information and defined as S_i, and the historical allocation data are put into the constructed replay memory; the current resource scheduling request information S_i is input into the Actor neural network to obtain the resource allocation decision a_i, and the corresponding reward value is calculated by the given priority formula.
4. The slice-oriented low-latency radio resource scheduling method of claim 3, wherein: the current resource scheduling request information S_i is input into the Actor neural network for iterative training; after multiple iterations, the reward with the memory discount taken into account can be rewritten as:
R = Σ_{i=t}^{T} γ^{i−t} R_i(s_i, a_i)
wherein R_i(s, a) denotes the reward obtained by user i; γ^{i−t} denotes the discount factor; T denotes the time scale.
5. The slice-oriented low-latency radio resource scheduling method of claim 4, wherein: based on the obtained resource block allocation strategy a_i, a behavior-value function is established to express the expected return obtained by adopting the resource block allocation strategy a_i; the behavior-value function is expressed as:
Q^π(s_i, a_i) = E_π[ R | s_i, a_i ]
wherein E_π[·] is the expectation function.
6. The slice-oriented low-latency radio resource scheduling method of claim 5, wherein: based on the behavior-value function expressing the expected return obtained by adopting the resource block allocation strategy a_i, the maximum expected return is obtained by constructing a loss function, which is expressed as:
L(θ^Q) = E[ (Y − Q^π(s_i, a_i; θ^Q))^2 ]
wherein θ^Q denotes the parameters of the function Q^π, Y denotes the true demand return of user i, and E[·] is the expectation function.
7. The method for scheduling of slice-oriented low-latency radio resources of claim 1, wherein: and after the resource block allocation to the user is finished, deleting the allocated resource block from the resource block list.
8. The slice-oriented low-latency radio resource scheduling method of claim 7, wherein: after removing the allocated resource blocks from the resource block list,
judging whether the resource block list is empty; if it is empty, the allocation process is finished;
if the resource block list is not empty, the allocation strategy continues to be executed and is sent to the user in the physical world, thereby meeting the user's low-delay requirement.
9. The slice-oriented low-latency radio resource scheduling method of claim 1, wherein: analyzing the received resource scheduling request information, judging whether the past resource demand information of the user exists in a cache of the base station side, and if so, adding the past resource demand information into the resource scheduling request information of the user;
the resource scheduling request information comprises the position of the user, the channel quality information transmitted by the user and the waiting time of the data of the user in the queue at the base station side.
10. A computer system, characterized by: the method comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the slice-oriented low-latency radio resource scheduling method according to any one of claims 1 to 9.
CN202210379968.4A 2022-04-12 2022-04-12 Slice-oriented low-delay wireless resource scheduling method and system Active CN114980324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210379968.4A CN114980324B (en) 2022-04-12 2022-04-12 Slice-oriented low-delay wireless resource scheduling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210379968.4A CN114980324B (en) 2022-04-12 2022-04-12 Slice-oriented low-delay wireless resource scheduling method and system

Publications (2)

Publication Number Publication Date
CN114980324A true CN114980324A (en) 2022-08-30
CN114980324B CN114980324B (en) 2024-09-06

Family

ID=82977728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210379968.4A Active CN114980324B (en) 2022-04-12 2022-04-12 Slice-oriented low-delay wireless resource scheduling method and system

Country Status (1)

Country Link
CN (1) CN114980324B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761194A (en) * 2023-08-15 2023-09-15 甘肃省公安厅 Police affair cooperative communication optimization system and method in wireless communication network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112543508A (en) * 2020-12-17 2021-03-23 国网安徽省电力有限公司信息通信分公司 Wireless resource allocation method and network architecture for 5G network slice
CN114237917A (en) * 2022-02-25 2022-03-25 南京信息工程大学 Unmanned aerial vehicle auxiliary edge calculation method for power grid inspection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112543508A (en) * 2020-12-17 2021-03-23 国网安徽省电力有限公司信息通信分公司 Wireless resource allocation method and network architecture for 5G network slice
CN114237917A (en) * 2022-02-25 2022-03-25 南京信息工程大学 Unmanned aerial vehicle auxiliary edge calculation method for power grid inspection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yawen Chen et al., "Reinforcement Learning Meets Wireless Networks: A Layering Perspective", IEEE Internet of Things Journal, 21 September 2020 (2020-09-21) *
Yueyue Dai, Ke Zhang, Sabita Maharjan, Yan Zhang, "Deep Reinforcement Learning for Stochastic Computation Offloading in Digital Twin Networks", IEEE Transactions on Industrial Informatics, vol. 17, no. 7, 31 December 2020 (2020-12-31) *
Zhouyou Gu et al., "Knowledge-Assisted Deep Reinforcement Learning in 5G Scheduler Design: From Theoretical Framework to Implementation", IEEE Journal on Selected Areas in Communications, 10 May 2021 (2021-05-10) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761194A (en) * 2023-08-15 2023-09-15 甘肃省公安厅 Police affair cooperative communication optimization system and method in wireless communication network
CN116761194B (en) * 2023-08-15 2023-11-03 甘肃省公安厅 Police affair cooperative communication optimization system and method in wireless communication network

Also Published As

Publication number Publication date
CN114980324B (en) 2024-09-06

Similar Documents

Publication Publication Date Title
CN112416554B (en) Task migration method and device, electronic equipment and storage medium
CN110275758B (en) Intelligent migration method for virtual network function
CN107743100B (en) Online adaptive network slice virtual resource allocation method based on service prediction
CN111835827A (en) Internet of things edge computing task unloading method and system
CN107948083B (en) SDN data center congestion control method based on reinforcement learning
CN109862610A (en) A kind of D2D subscriber resource distribution method based on deeply study DDPG algorithm
CN111918339B (en) AR task unloading and resource allocation method based on reinforcement learning in mobile edge network
CN107708152B (en) Task unloading method of heterogeneous cellular network
CN112422644A (en) Method and system for unloading computing tasks, electronic device and storage medium
CN114938381B (en) D2D-MEC unloading method based on deep reinforcement learning
CN115629865B (en) Deep learning inference task scheduling method based on edge calculation
CN113992524A (en) Network slice optimization processing method and system
CN112867066A (en) Edge calculation migration method based on 5G multi-cell deep reinforcement learning
CN111614754A (en) Fog-calculation-oriented cost-efficiency optimized dynamic self-adaptive task scheduling method
CN114980324A (en) Slice-oriented low-delay wireless resource scheduling method and system
WO2024011376A1 (en) Task scheduling method and device for artificial intelligence (ai) network function service
CN116916386A (en) Large model auxiliary edge task unloading method considering user competition and load
CN115189910A (en) Network digital twin-based deliberate attack survivability evaluation method
CN115379508A (en) Carrier management method, resource allocation method and related equipment
CN116166406B (en) Personalized edge unloading scheduling method, model training method and system
CN116801314A (en) Network slice resource allocation method based on near-end policy optimization
CN107528914B (en) Resource requisition scheduling method for data fragmentation
CN115361453A (en) Load fair unloading and transferring method for edge service network
CN116501483A (en) Vehicle edge calculation task scheduling method based on multi-agent reinforcement learning
CN105049872A (en) Cellular network vehicular mobile subscriber video service buffer management method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant