CN114760639A - Resource unit allocation method, device, equipment and storage medium - Google Patents
- Publication number: CN114760639A (application number CN202210332282.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04W16/22 — Traffic simulation tools or models (H04W: wireless communication networks; H04W16/00: network planning)
- G06N3/045 — Combinations of networks (G06N: computing arrangements based on specific computational models; G06N3/04: neural network architecture)
- G06N3/08 — Learning methods (G06N3/02: neural networks)
- H04W24/06 — Testing, supervising or monitoring using simulated traffic (H04W24/00: supervisory, monitoring or testing arrangements)
- H04W72/53 — Allocation or scheduling criteria for wireless resources based on regulatory allocation policies (H04W72/00: local resource management)
Abstract
The invention discloses a resource unit allocation method, a device, equipment and a storage medium, wherein the method comprises the following steps: collecting a plurality of pieces of state information from each client to obtain a data set; training a network model based on the data set to obtain a distribution model, wherein the distribution model comprises a first target network model and a second target network model; allocating resource units based on the first target network model, the second target network model and target state information to obtain an allocation result; and allocating the resource units based on the allocation result. The technical scheme of the invention adopts dual networks to allocate the resource units: the first target network model extracts data- and performance-related characteristics of the clients, and the second target network model evaluates the final performance in combination with the current channel environment, so that a more reasonable resource unit allocation mode is obtained.
Description
Technical Field
The present invention relates to the field of wireless communication technologies, and in particular, to a method, an apparatus, a device, and a storage medium for allocating resource units.
Background
The 802.11ax protocol introduced Orthogonal Frequency Division Multiple Access (OFDMA), in which the smallest time-frequency unit is the Resource Unit (RU): channels of various bandwidths are further subdivided into multiple RUs. In earlier protocols, a given channel could only be allocated to one user, so in dense usage scenarios other users had to wait and contend for the channel until the previous user's transmission finished.
After OFDMA is introduced, a channel is divided into a plurality of RUs distributed to different users for simultaneous transmission, which improves the communication experience in multi-user scenarios. As the protocols have evolved, higher communication frequency bands and larger channel bandwidths are allowed, and therefore the RU allocation algorithm needs to consider more and more factors.
The currently common allocation scheme is to maximize an evaluation index based on characteristics such as the number of packets sent by each client, Channel State Information (CSI), or remaining time: an evaluation function is designed for the use scenario, all possible allocation schemes are traversed, the index gain of each is calculated, and the scheme that maximizes the gain is selected. However, the evaluation function adopted by this scheme generally considers only a single index, such as throughput or packet remaining time, and fails to consider other communication quality indexes such as Quality of Service (QoS). In addition, with the introduction of large bandwidths of 160 MHz and even 320 MHz, the number of allocable RUs increases significantly, so the number of candidate allocation schemes grows sharply and the algorithm complexity of full traversal rises accordingly, which is impractical in a real-time system with limited computational resources.
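The growth of the traversal search space can be illustrated with a short sketch. This is a deliberately simplified model — it assumes each RU may be assigned independently to any one client and ignores RU-size and adjacency constraints, so the numbers are illustrative only:

```python
from itertools import product

def count_allocations(num_clients: int, num_rus: int) -> int:
    # Simplified model: each RU is independently assigned to any one of the
    # clients, so a full traversal must cover num_clients ** num_rus schemes.
    return num_clients ** num_rus

# Exhaustive enumeration agrees with the closed form for a small case.
enumerated = sum(1 for _ in product(range(4), repeat=5))  # 4 clients, 5 RUs
print(count_allocations(4, 5), enumerated)  # 1024 1024
```

Even under these simplifying assumptions, moving from the handful of RUs in a 20 MHz channel to the RU counts enabled by 160 MHz or 320 MHz bandwidths multiplies the search space by orders of magnitude, which is what motivates replacing full traversal with a learned model.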
Disclosure of Invention
The present invention is directed to solving, at least in part, one of the technical problems in the related art. Therefore, an object of the present invention is to provide a resource unit allocation method, apparatus, device and storage medium.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions:
a method of resource unit allocation, comprising:
collecting a plurality of state information of each client to obtain a data set of each client;
training a network model based on the data set to obtain a distribution model; wherein the distribution model comprises a first target network model and a second target network model;
allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result;
and allocating the resource unit based on the allocation result.
Optionally, the obtaining a data set based on a plurality of state information of each client includes:
acquiring first state information, second state information and third state information of each client;
obtaining a data set based on the first state information, the second state information, and the third state information.
Optionally, the obtaining a data set based on the first state information, the second state information, and the third state information includes:
acquiring each scene corresponding to the first state information, the second state information and the third state information;
labeling each scene to obtain an allocation label;
obtaining the data set based on the assigned label.
Optionally, the labeling each scene to obtain an allocation label includes:
obtaining a target distribution mode of each scene based on the test; or
And calculating based on the distribution gain index and the distribution gain of each distribution mode to obtain the target distribution mode of each scene.
Optionally, obtaining the distribution gain index of each client at each time includes:
acquiring a first parameter of each client at each moment; the first parameter is positively correlated with the distribution gain indicator; acquiring a second parameter of each client at each moment; the second parameter is inversely related to the distribution gain indicator;
and calculating based on the first parameter and the second parameter to obtain the distribution gain index of each client at each moment.
Optionally, training the network model based on the data set to obtain an allocation model includes:
preprocessing the data set to obtain a target data set;
inputting the first state information and the second state information into a first network model, and training the first network model to obtain a first target network model; outputting an initial result based on the first target network model;
inputting the initial result and the third state information into a second network model, and training the second network model to obtain a second target network model;
obtaining the allocation model based on the first target network model and the second target network model.
Optionally, allocating resource units based on the first target network model, the second target network model, and the target state information to obtain an allocation result, including:
acquiring target state information; wherein the target state information comprises first target state information, second target state information and third target state information;
processing the first target state information and the second target state information based on the first target network model, and outputting a first result;
And processing the first result and the third target state information based on the second target network model, and outputting the distribution result.
An embodiment of the present invention further provides a resource unit allocation apparatus, including:
the acquisition module is used for acquiring a plurality of state information of each client to obtain a data set for each client;
the training module is used for training a network model based on the data set to obtain an allocation model; wherein the distribution model comprises a first target network model and a second target network model;
the processing module is used for allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result;
and the allocation module is used for allocating the resource units based on the allocation result.
Embodiments of the present invention also provide an electronic device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method as described above when executing the computer program.
Embodiments of the present invention also provide a computer-readable storage medium comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method as described above.
The embodiment of the invention has the following technical effects:
in the technical scheme of the invention, 1) double networks are adopted to distribute the resource units; the first target network model is used for obtaining data and performance related characteristics of the client, the second target network model is used for evaluating final performance in combination with the current channel environment, and the first target network model and the second target network model are combined to enable the network to learn related characteristics in stages more easily, so that a more reasonable resource unit distribution mode is obtained.
2) Besides the combination of actual scene acquisition, the data set can be enhanced through a related wireless simulation platform, so that the difficulty in large-scale acquisition is avoided, and the complexity of data acquisition is reduced.
3) The reference index of the distribution gain is increased, the parameters such as time delay, QoS (quality of service), service fairness and the like are comprehensively considered, the service quality of the client and the wireless channel condition are considered, the relevant weight can be adjusted according to different emphasis points of actual scenes, and the operation requirement of a real-time system can be better met.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
Fig. 1 is a flowchart illustrating a resource unit allocation method according to an embodiment of the present invention;
FIG. 2 is an example of a flow of a resource unit allocation method provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a resource unit allocation apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
To facilitate understanding of the embodiments by those skilled in the art, some terms are explained:
(1) BSR: buffer Status Report, Buffer information Report; in the embodiment of the invention, the cache information report sent by each client is represented.
(2) BQR: basic quality Review, available bandwidth query; the available bandwidth query for each client is represented in embodiments of the present invention.
(3) RSSI: received Signal Strength Indication.
(4) CQI: a channel quality indication.
(5) NS-3: a brand new network simulator.
(6) SDR: software Defined Radio, Software Radio.
At present, schemes that fully traverse the candidate allocations to select a target option consider only one specific evaluation index and do not involve indexes such as QoS or delay parameters introduced in later protocols; moreover, with the introduction of ultra-large bandwidths, the number of candidate allocation schemes increases markedly and the algorithm complexity rises, so the operation requirements of a real-time system cannot be met well.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
an embodiment of the present invention provides a resource unit allocation system, including:
the system comprises a data acquisition unit, a model trainer, a wireless communication simulation platform and distribution equipment;
the data acquisition device, the model trainer, the wireless communication simulation platform and the distribution equipment realize data interaction based on a network;
specifically, a data acquisition unit acquires a plurality of state information of a plurality of clients in a plurality of scenes to obtain a data set;
in addition, in order to simulate more topologies or use scenarios, additional topologies or use scenarios can be constructed on the wireless communication simulation platform, and the data collector gathers a plurality of pieces of state information under each constructed scenario, yielding a larger effective data set;
then, the data collector inputs the data set into the model trainer; the model trainer trains a first network model based on the first state information and the second state information in the data set and outputs an initial result, and then trains a second network model based on the initial result and the third state information;
the above steps are repeated multiple times to obtain a first target network model and a second target network model respectively;
after the first target network model and the second target network model are obtained, they can be deployed on the allocation device, so that target state information is processed by the deployed first target network model and second target network model, an allocation result is output, and finally the resource units are allocated based on the allocation result.
As shown in fig. 1, an embodiment of the present invention provides a resource unit allocation method, which is applied to the above system, and includes:
step S1: collecting a plurality of state information of each client to obtain a data set of each client;
specifically, the obtaining a data set based on a plurality of state information of each client includes:
Acquiring first state information, second state information and third state information of each client;
obtaining a data set based on the first state information, the second state information, and the third state information.
For example, in an embodiment of the present invention, the first state information may be data state information; the second state information may be performance state information; the third state information may be channel state information;
wherein, 1) the data state information includes, but is not limited to, the buffered data volume of each client (such as BSR), QoS, remaining lifetime information, etc.;
2) the performance status information includes, but is not limited to, the sending rate of each client, the available sub-channels (e.g., BQR), and the signal strength (e.g., RSSI of the last packet), etc.;
3) channel state information includes, but is not limited to, channel quality per client (e.g., CQI or CSI per client), available channel resources or interference, etc.;
in an actual application scenario, a data set is obtained based on data state information, performance state information, and channel state information of a plurality of clients under a plurality of scenarios.
It should be noted that the data set includes, but is not limited to, data state information, performance state information, and channel state information, and in an actual application scenario, other state information may also be collected based on actual requirements, which is not limited in this embodiment of the present invention.
According to an optional embodiment of the invention, more topologies or use scenes can be simulated through the wireless communication simulation platform to reduce the difficulty of acquiring the data set or enhance the data set;
for example: through an NS-3 wireless communication emulation platform or other wireless SDR, etc.
According to the embodiment of the invention, the data set can be enhanced through a related wireless simulation platform besides being combined with actual scene acquisition, so that the difficulty of a large amount of acquisition work is avoided, and the complexity of data acquisition is reduced.
In an optional embodiment of the invention, a plurality of evaluation indexes are combined in the acquisition process of the data set, and compared with the prior technical scheme, the operation requirement of a real-time system can be met;
specifically, the obtaining a data set based on the first state information, the second state information, and the third state information includes:
acquiring each scene corresponding to the first state information, the second state information and the third state information;
labeling each scene to obtain an allocation label;
obtaining the data set based on the assigned label.
In the embodiment of the invention, in order to subsequently train the first network model and the second network model in a supervised-learning manner, allocation labels are set according to the points of concern in the actual application scenario; specifically, each scenario in the data set is provided with a corresponding allocation label, which serves as the supervision label for training the first network model and the second network model;
the allocation labels need to be given in advance when the data set is obtained.
In an actual application scenario, there may be multiple labeling modes, and the embodiment of the present invention is described by taking two labeling modes as examples:
specifically, the labeling each of the scenes to obtain an allocation label includes:
obtaining a target distribution mode of each scene based on the test; or
And calculating based on the distribution gain index and the distribution gain of each distribution mode to obtain the target distribution mode of each scene.
In an actual application scenario, when the plurality of pieces of state information of multiple clients in multiple scenarios are collected, the target allocation mode of the resource units in each scenario is determined by testing, and the test result is recorded;
for example, the allocation mode that yields the highest overall throughput performance or the highest overall allocation fairness is recorded as the allocation label;
alternatively, the target allocation mode of each scenario is calculated with an efficient allocation algorithm, and the calculation result is taken as the allocation label of each scenario.
In the embodiment of the present invention, the test mode includes, but is not limited to, the above two test methods.
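The first labeling mode above amounts to picking, per scenario, the candidate allocation that maximizes the recorded metric. A minimal sketch — the function and variable names are illustrative, not from the patent:

```python
def label_scene(candidate_allocations, metric):
    """Return the allocation with the highest measured metric
    (e.g. overall throughput or fairness) as the supervision label."""
    return max(candidate_allocations, key=metric)

# Toy example: three candidate RU assignments scored by a fake throughput table.
throughput = {"A": 120.0, "B": 95.5, "C": 150.2}
label = label_scene(["A", "B", "C"], throughput.get)
print(label)  # C
```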
In an optional embodiment of the present invention, obtaining the allocation label by testing in an actual scenario can be operationally cumbersome, and the target allocation mode obtained by an existing algorithm may only be a local optimum with limited practical reference value; a scenario may also have its own points of emphasis, for example paying more attention to optimizing QoS and real-time traffic performance.
To improve on the test-based method, the embodiment of the present invention therefore provides another way of obtaining the allocation label: the target allocation mode of each scenario is calculated based on the distribution gain index and the distribution gain of each allocation mode.
Further, obtaining a distribution gain index of each client at each time comprises:
acquiring a first parameter of each client at each moment; the first parameter is positively correlated with the distribution gain indicator; wherein the first parameter comprises a quality of service, a channel quality, and a transmission rate;
acquiring a second parameter of each client at each moment; the second parameter is inversely related to the distribution gain indicator; wherein the second parameters comprise a time window, a historical average rate and a time delay parameter;
And calculating based on the first parameter and the second parameter to obtain the distribution gain index of each client at each moment.
For example, the distribution gain index of each client at each moment can be calculated by a formula (presented only as an image in the original publication) over the following quantities:
wherein S_i(n) represents the distribution gain index of the i-th client at the n-th moment; δ_i represents the QoS level; t_c is a time window and R_i is the historical average rate of the i-th client before moment n; T_i is the delay parameter or remaining lifetime; C_i^j is the channel quality on the j-th RU; r_i is the transmission rate; i, c and j are positive integers, and n is greater than 0.
Based on the formula, it can be seen that the embodiment of the invention treats resource unit allocation as requiring both fairness and a certain level of service quality, avoiding situations where some clients never obtain service or QoS traffic is delayed; the distribution gain is positively correlated with the QoS level, and negatively correlated with the service duration and with the remaining lifetime or delay parameters.
For example, the distribution gain index of each client at each time can also be calculated by a weighted variant of the formula (likewise presented only as an image in the original publication), wherein α, β, γ and λ are the weights of the respective terms.
In an actual application scenario, any one of α, β, γ, and λ may be adjusted in real time based on actual needs to change the weight of each item corresponding to α, β, γ, and λ in the above formula.
For example, if a given deployment cares more about fairness, the weight β can be increased so that the term weighted by β carries a larger proportion, and the allocation label is obtained on that basis.
The embodiment of the invention increases the reference index of distribution gain, comprehensively considers the parameters such as time delay, QoS, service fairness and the like, gives consideration to the client service quality and the wireless channel condition, can adjust the relevant weight according to different emphasis points of actual scenes, and can better meet the operation requirement of a real-time system.
Specifically, the distribution gains obtained by different distribution modes can be completely traversed on a data set by self, so that the target distribution mode of the resource unit under each scene is obtained; in addition, the embodiment of the invention can realize the adjustment of alpha, beta, gamma and lambda to give consideration to the fairness and certain service quality in the resource unit allocation process, thereby better meeting the operation requirement of a real-time system.
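As a concrete illustration of the weighted gain index, the sketch below implements one plausible proportional-fair-style combination of the quantities defined above. The exact formula appears only as an image in the source, so the arithmetic here is an assumption; only the stated correlations (positive with δ_i, C_i^j and r_i; negative with R_i, t_c and T_i) are taken from the text:

```python
def distribution_gain(delta, c_ij, r, r_hist, t_delay, t_window,
                      alpha=1.0, beta=1.0, gamma=1.0, lam=1.0):
    """Illustrative gain index S_i(n) for one client.

    Assumed form (NOT the patent's exact formula): positively weighted
    QoS level, per-RU channel quality and instantaneous rate, divided by
    fairness/delay terms (windowed historical average rate and the
    delay / remaining-lifetime parameter)."""
    positive = alpha * delta + gamma * c_ij + lam * r
    negative = beta * r_hist * t_window + t_delay
    return positive / max(negative, 1e-9)

base = distribution_gain(delta=2.0, c_ij=0.8, r=50.0, r_hist=30.0,
                         t_delay=5.0, t_window=1.0)
# Raising the historical average rate lowers the gain (fairness term).
lower = distribution_gain(delta=2.0, c_ij=0.8, r=50.0, r_hist=60.0,
                          t_delay=5.0, t_window=1.0)
print(base > lower)  # True
```

Tuning α, β, γ and λ shifts the balance between QoS, fairness and channel quality in exactly the way the text describes for adjusting scenario emphasis.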
Step S2: training a network model based on the data set to obtain a distribution model; wherein the distribution model comprises a first target network model and a second target network model;
specifically, the training of the network model based on the data set to obtain the distribution model includes:
preprocessing the data set to obtain a target data set;
inputting the first state information and the second state information into a first network model, and training the first network model to obtain a first target network model; outputting an initial result based on the first target network model;
inputting the initial result and the third state information into a second network model, and training the second network model to obtain a second target network model;
obtaining the allocation model based on the first target network model and the second target network model.
According to the embodiment of the invention, the first network model and the second network model are trained and optimized in a supervised learning mode based on the data set, so that a final distribution model is obtained.
In order to make the obtained distribution model more accurate, the data set is first normalized before the first network model and the second network model are trained;
for example, the data set may be normalized using an existing standard (z-score) normalization procedure or otherwise.
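As one common choice for this preprocessing step, a minimal per-feature z-score normalization can be sketched as follows (pure Python; the patent does not prescribe a specific procedure):

```python
def z_score(column):
    # Shift to zero mean and scale to unit standard deviation; a constant
    # column (std == 0) is mapped to all zeros to avoid division by zero.
    mean = sum(column) / len(column)
    std = (sum((x - mean) ** 2 for x in column) / len(column)) ** 0.5
    if std == 0:
        return [0.0 for _ in column]
    return [(x - mean) / std for x in column]

normalized = z_score([10.0, 20.0, 30.0, 40.0])
print(normalized)
```

Each state-information feature (buffered data volume, rate, CQI, …) would be normalized independently so that no single feature dominates training by virtue of its scale.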
Specifically, data state information and performance state information are input into a first network model, and are processed by the first network model, and gain characteristics of data and performance are output; the first network model can be a feature extraction network model, and performs feature fusion and feature extraction on data state information and performance state information so as to output gain features of data and performance;
the data output by the first network model, the gain characteristics of the performance, and the channel state information are then input to the second network model.
Wherein, the channel state information comprises the channel quality, the available channel resource, the interference and other relevant parameters;
further, since the channel state information is frequency-dependent, and a convolutional neural network can effectively extract data features with specific positional information (ordered by fixed frequency) at a much lower computational cost than a fully connected network, the embodiment of the present invention adopts a convolutional neural network as the second network model;
in addition, in the process of training the first network model and the second network model, the first input channel carries the data and performance gain characteristics output by the first network model, the second input channel carries the channel state information in fixed frequency ordering, and the output channel is the training allocation result.
And repeating the training process, and training the first network model and the second network model for multiple times based on a large amount of data sets to further obtain a first target network model and a second target network model and finally obtain an allocation model.
In the training process, based on a supervised learning mode, accuracy judgment is carried out on the training distribution result output by the second network model through the distribution labels, an accuracy threshold can be set, and when the accuracy of the training distribution result reaches the accuracy threshold, the first network model and the second network model corresponding to the current training distribution result can be determined as the first target network model and the second target network model.
In addition, a training frequency threshold value can be set, and when the training frequency reaches the training frequency threshold value, the training of the first network model and the second network model is stopped, so that the first target network model and the second target network model can be obtained.
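The two stopping rules just described — an accuracy threshold on the supervised allocation labels, and a cap on the number of training rounds — can be sketched as a generic loop. The function names here are illustrative stand-ins, not from the patent:

```python
def train_with_stopping(run_one_epoch, eval_accuracy,
                        acc_threshold=0.95, max_epochs=100):
    """Run training epochs until the label accuracy reaches the threshold
    or the training-round cap is hit. Returns (epochs_run, converged)."""
    for epoch in range(1, max_epochs + 1):
        run_one_epoch()
        if eval_accuracy() >= acc_threshold:
            return epoch, True
    return max_epochs, False

# Toy stand-in: accuracy improves by 0.2 per epoch.
state = {"acc": 0.0}
def fake_epoch():
    state["acc"] = min(1.0, state["acc"] + 0.2)

epochs, converged = train_with_stopping(fake_epoch, lambda: state["acc"])
print(epochs, converged)  # 5 True
```

In the patent's setting, `run_one_epoch` would update both network models jointly and `eval_accuracy` would compare the second model's training allocation results against the allocation labels.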
It should be noted that the second network model according to the embodiment of the present invention is illustrated as a convolutional neural network, but is not limited to a convolutional neural network.
Step S3: allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result;
Specifically, allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result includes:
acquiring target state information; wherein the target state information comprises first target state information, second target state information and third target state information;
processing the first target state information and the second target state information based on the first target network model, and outputting a first result;
and processing the first result and the third target state information based on the second target network model, and outputting the allocation result.
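The two-stage inference above can be illustrated with plain callables standing in for the trained models. The stand-ins, weights, and shapes here are hypothetical, chosen only to make the flow concrete:

```python
def allocate(first_model, second_model, data_state, perf_state, csi):
    """Two-stage inference: the first target network turns data/performance
    state into features; the second combines those features with channel
    state information and scores candidate allocations."""
    features = first_model(data_state, perf_state)   # first result
    scores = second_model(features, csi)             # one score per allocation
    # Pick the allocation with the highest evaluated performance.
    return max(range(len(scores)), key=scores.__getitem__)

# Toy stand-ins: feature = elementwise sum; score = sum(feature) * w + mean CSI.
toy_first = lambda d, p: [di + pi for di, pi in zip(d, p)]
toy_second = lambda f, csi: [sum(f) * w + sum(csi) / len(csi)
                             for w in (0.1, 0.5, 0.2)]
choice = allocate(toy_first, toy_second, [1, 2], [3, 4], [0.5, 1.5])
```

In a real deployment the two callables would be the trained first and second target network models rather than these lambdas.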
When each round of OFDMA resource unit allocation starts, the first target state information, the second target state information and the third target state information are acquired from each client operating in the current scenario.
Specifically, the first target state information is the data state information of each client in the current scenario; the second target state information is the performance state information of each client in the current scenario; and the third target state information is the channel state information of each client in the current scenario.
In an actual application scenario, when each round of OFDMA resource unit allocation starts, the data state information, performance state information and channel state information of each client in the current scenario are collected. The data state information and the performance state information are input into the first target network model, which fuses and extracts their features and outputs feature data related to data and performance;
this feature data and the channel state information are then input into the second target network model, which evaluates the final performance based on the data- and performance-related feature data and the channel state information, and outputs the resource unit allocation result.
The obtained first target network model and second target network model are deployed in practice so that the allocation model can be invoked subsequently.
Step S4: and allocating the resource unit based on the allocation result.
In an actual application scenario, after the allocation result is output by the second target network model, the allocation result is executed and the resource units are allocated.
In the embodiment of the invention, resource units are allocated using two networks: the first target network model extracts the data- and performance-related features of the clients, and the second target network model evaluates the final performance in combination with the current channel environment. Combining the two models lets the network learn the relevant features in stages more easily, yielding a more reasonable resource unit allocation.
The above embodiments of the present invention can be implemented as follows:
step S201: collecting a plurality of state information of each client to obtain a data set;
step S202: obtaining allocation labels, for example based on a test method or on the distribution gain index;
step S203: constructing two feature extraction networks that extract the data/performance features and the channel features respectively, and training and optimizing them;
step S204: deploying the networks: the two networks optimized by the training in step S203 are deployed;
step S205: in each round of allocation, the state information of the relevant clients is collected and passed through the two networks to obtain the allocation result.
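The five steps above can be sketched as a single orchestration function. Every callable here is a hypothetical stand-in, since the embodiment names the stages but not their interfaces:

```python
def run_pipeline(collect, label, train, deploy, rounds):
    """Skeleton of steps S201-S205 (all arguments are assumed interfaces)."""
    dataset = [(state, label(state)) for state in collect()]  # S201 + S202
    first_net, second_net = train(dataset)                    # S203
    allocator = deploy(first_net, second_net)                 # S204
    return [allocator(state) for state in rounds()]           # S205

# Toy usage with trivial stand-ins for every stage.
results = run_pipeline(
    collect=lambda: [1, 2, 3],
    label=lambda s: s % 2,
    train=lambda ds: (len(ds), sum(lbl for _, lbl in ds)),
    deploy=lambda a, b: (lambda s: s + a + b),
    rounds=lambda: [10, 20],
)
```

The point of the skeleton is only the data flow: labelled data is produced once (S201-S202), the two networks are trained and deployed once (S203-S204), and the deployed pair is then invoked per allocation round (S205).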
An embodiment of the present invention further provides a resource unit allocation apparatus 300, including:
An obtaining module 301, configured to collect multiple pieces of state information of each client to obtain a data set;
a training module 302, configured to train a network model based on the data set to obtain an allocation model; wherein the distribution model comprises a first target network model and a second target network model;
a processing module 303, configured to allocate resource units based on the first target network model, the second target network model, and the target state information to obtain an allocation result;
an allocating module 304, configured to allocate the resource unit based on the allocation result.
Optionally, the obtaining a data set based on a plurality of state information of each client includes:
acquiring first state information, second state information and third state information of each client;
obtaining a data set based on the first state information, the second state information, and the third state information.
Optionally, the obtaining a data set based on the first state information, the second state information, and the third state information includes:
acquiring each scene corresponding to the first state information, the second state information and the third state information;
labeling each scene to obtain a distribution label;
obtaining the data set based on the assigned label.
Optionally, the labeling each of the scenes to obtain an allocation label includes:
obtaining a target distribution mode of each scene based on the test; or
calculating based on the distribution gain index and the distribution gain of each distribution mode to obtain the target distribution mode of each scene.
Optionally, obtaining the distribution gain index of each client at each time includes:
acquiring a first parameter of each client at each moment, the first parameter being positively correlated with the distribution gain index;
acquiring a second parameter of each client at each moment, the second parameter being inversely correlated with the distribution gain index;
and calculating based on the first parameter and the second parameter to obtain the distribution gain index of each client at each moment.
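The patent only requires the distribution gain index to rise with the first parameter and fall with the second; a simple ratio is one monotonic choice satisfying both constraints (an assumption, as is the labeling-by-maximum-gain step sketched below):

```python
def distribution_gain_index(first_param, second_param):
    """One monotonic choice for the index: positively correlated with
    first_param, inversely correlated with second_param."""
    return first_param / second_param

def pick_target_mode(modes, gains_per_mode):
    """Label a scene with the distribution mode whose total gain across
    clients is largest."""
    totals = {m: sum(gains_per_mode[m]) for m in modes}
    return max(totals, key=totals.get)

# Toy scene: two modes, two clients, each with (first_param, second_param).
gains = {
    "mode_a": [distribution_gain_index(6.0, 2.0),   # 3.0
               distribution_gain_index(4.0, 4.0)],  # 1.0
    "mode_b": [distribution_gain_index(8.0, 2.0),   # 4.0
               distribution_gain_index(2.0, 1.0)],  # 2.0
}
label = pick_target_mode(["mode_a", "mode_b"], gains)
```

The mode and parameter names are illustrative; any index satisfying the two correlation requirements, and any aggregation over clients, would fit the claim language equally well.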
Optionally, the training a network model based on the data set to obtain an allocation model includes:
preprocessing the data set to obtain a target data set;
inputting the first state information and the second state information into a first network model, and training the first network model to obtain a first target network model; outputting an initial result based on the first target network model;
inputting the initial result and the third state information into a second network model, and training the second network model to obtain a second target network model;
obtaining the allocation model based on the first target network model and the second target network model.
Optionally, allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result, including:
acquiring target state information; wherein the target state information comprises first target state information, second target state information and third target state information;
processing the first target state information and the second target state information based on the first target network model, and outputting a first result;
and processing the first result and the third target state information based on the second target network model, and outputting the allocation result.
Embodiments of the present invention also provide an electronic device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method as described above when executing the computer program.
Embodiments of the present invention also provide a computer-readable storage medium comprising a stored computer program, wherein the computer program, when executed, controls an apparatus on which the computer-readable storage medium is located to perform the method as described above.
In addition, other structures and functions of the apparatus according to the embodiment of the present invention are known to those skilled in the art and, to reduce redundancy, are not described here.
It should be noted that the logic and/or steps shown in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, but are not intended to indicate or imply that the device or element so referred to must have a particular orientation, be constructed in a particular orientation, and be operated in a particular manner, and are not to be construed as limiting the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, denote fixed, detachable, or integral connections; mechanical or electrical connections; direct connections or indirect connections through intervening media; or internal communication between two elements, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediate. Also, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. A method for resource unit allocation, comprising:
collecting a plurality of state information of each client to obtain a data set;
training a network model based on the data set to obtain a distribution model; wherein the distribution model comprises a first target network model and a second target network model;
allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result;
allocating the resource units based on the allocation result.
2. The method of claim 1, wherein obtaining the data set based on the plurality of state information of each client comprises:
acquiring first state information, second state information and third state information of each client;
obtaining a data set based on the first state information, the second state information, and the third state information.
3. The method of claim 2, wherein obtaining a data set based on the first state information, the second state information, and the third state information comprises:
acquiring each scene corresponding to the first state information, the second state information and the third state information;
labeling each scene to obtain a distribution label;
obtaining the data set based on the assigned label.
4. The method of claim 3, wherein said labeling each of said scenes to obtain an assigned label comprises:
obtaining a target distribution mode of each scene based on the test; or
calculating based on the distribution gain index and the distribution gain of each distribution mode to obtain the target distribution mode of each scene.
5. The method of claim 4, wherein obtaining the distribution gain indicator for each client at each time comprises:
acquiring a first parameter of each client at each moment; the first parameter is positively correlated with the distribution gain indicator;
acquiring a second parameter of each client at each moment; the second parameter is inversely related to the distribution gain indicator; and calculating based on the first parameter and the second parameter to obtain the distribution gain index of each client at each moment.
6. The method of claim 2, wherein training a network model based on the dataset to obtain an assignment model comprises:
preprocessing the data set to obtain a target data set;
inputting the first state information and the second state information into a first network model, and training the first network model to obtain a first target network model; outputting an initial result based on the first target network model;
inputting the initial result and the third state information into a second network model, and training the second network model to obtain a second target network model;
obtaining the allocation model based on the first target network model and the second target network model.
7. The method of claim 1, wherein allocating resource units based on the first and second target network models and target state information to obtain an allocation result comprises:
acquiring target state information; wherein the target state information comprises first target state information, second target state information, and third target state information;
processing the first target state information and the second target state information based on the first target network model, and outputting a first result;
and processing the first result and the third target state information based on the second target network model, and outputting the allocation result.
8. An apparatus for resource unit allocation, comprising:
the acquisition module is used for collecting a plurality of state information of each client to obtain a data set;
the training module is used for training a network model based on the data set to obtain an allocation model; wherein the distribution model comprises a first target network model and a second target network model;
the processing module is used for allocating resource units based on the first target network model, the second target network model and the target state information to obtain an allocation result;
and the allocation module is used for allocating the resource units based on the allocation result.
9. An electronic device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210332282.XA CN114760639A (en) | 2022-03-30 | 2022-03-30 | Resource unit allocation method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114760639A true CN114760639A (en) | 2022-07-15 |
Family
ID=82328327
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190294995A1 (en) * | 2018-03-21 | 2019-09-26 | Telefonica, S.A. | Method and system for training and validating machine learning in network environments |
CN111249724A (en) * | 2018-12-03 | 2020-06-09 | 索尼互动娱乐有限责任公司 | Machine learning driven resource allocation |
US20210362049A1 (en) * | 2018-12-03 | 2021-11-25 | Sony Interactive Entertainment LLC | Machine learning driven resource allocation |
CN111866953A (en) * | 2019-04-26 | 2020-10-30 | 中国移动通信有限公司研究院 | Network resource allocation method, device and storage medium |
US20210051677A1 (en) * | 2019-08-14 | 2021-02-18 | Huawei Technologies Co., Ltd. | Radio frequency resource allocation method, apparatus, device and system, and storage medium |
CN111062495A (en) * | 2019-11-28 | 2020-04-24 | 深圳市华尊科技股份有限公司 | Machine learning method and related device |
WO2021139881A1 (en) * | 2020-01-08 | 2021-07-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods for intelligent resource allocation based on throttling of user equipment traffic and related apparatus |
WO2021114625A1 (en) * | 2020-05-28 | 2021-06-17 | 平安科技(深圳)有限公司 | Network structure construction method and apparatus for use in multi-task scenario |
CN113971459A (en) * | 2020-07-24 | 2022-01-25 | 阿里巴巴集团控股有限公司 | Training method and device of classification network model and electronic equipment |
WO2022028793A1 (en) * | 2020-08-07 | 2022-02-10 | Nokia Technologies Oy | Instantiation, training, and/or evaluation of machine learning models |
CN114021770A (en) * | 2021-09-14 | 2022-02-08 | 北京邮电大学 | Network resource optimization method and device, electronic equipment and storage medium |
CN114202062A (en) * | 2021-12-13 | 2022-03-18 | 中国科学院计算机网络信息中心 | Network model training method, client and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||