CN115243284A - LoRa wireless resource allocation method, device and storage medium - Google Patents

LoRa wireless resource allocation method, device and storage medium

Info

Publication number
CN115243284A
Authority
CN
China
Prior art keywords
node
data
information
gateway
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210614167.1A
Other languages
Chinese (zh)
Inventor
宁磊
钟瀚
陈勇
李蒙
曹建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Technology University filed Critical Shenzhen Technology University
Priority to CN202210614167.1A priority Critical patent/CN115243284A/en
Publication of CN115243284A publication Critical patent/CN115243284A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/69Spread spectrum techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/69Spread spectrum techniques
    • H04B2001/6912Spread spectrum techniques using chirp
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application discloses a LoRa wireless resource allocation method, device and storage medium, wherein the method comprises the following steps: acquiring node information and channel information forwarded by a gateway; obtaining a corresponding adjustment strategy through a neural network according to the node information and the channel information; setting node parameter values according to the adjustment strategy, sending them to the gateway, and generating a corresponding reward value and next state according to a preset reward function; placing the reward value and the next state into an experience pool; and training and updating the neural network according to the data in the experience pool. This reduces the collision rate of data in the LoRa channel, reduces transmission delay, and increases the number of Internet-of-Things devices that a single gateway can serve.

Description

LoRa wireless resource allocation method, device and storage medium
Technical Field
The application relates to the field of internet of things, in particular to a LoRa wireless resource allocation method, a LoRa wireless resource allocation device and a storage medium.
Background
LoRa is a spread spectrum modulation technique, also known as chirp modulation. LoRa offers long transmission distance, strong interference resistance, low power consumption, and on-demand networking. It is already widely applied in the market, for example in animal and plant tracking in agriculture and animal husbandry, and in environmental monitoring of forests and buildings. LoRaWAN is a communication protocol and system architecture promoted by the LoRa Alliance; its network mainly comprises terminal nodes, gateways, a network server, and application servers.
In the related art, the ALOHA mechanism is used for communication between LoRa terminal nodes and the gateway: a node transmits whenever it has data to send, which can cause collisions and thus frame corruption. Because the broadcast channel provides feedback, a sender can perform collision detection while transmitting, learning whether a frame was corrupted by comparing the received data with the data in its buffer. Other senders follow the same procedure. If a sender learns that a frame was corrupted (i.e., a collision was detected), it waits a random time before retransmitting. As more and more uplink messages are transmitted, the number of collided packets detected in the channel grows sharply, which is detrimental to services with delay requirements.
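The random-wait retransmission described above can be sketched as follows. This is an illustrative binary-exponential variant; pure ALOHA only requires *some* random delay, and the function name and slot length are assumptions, not part of any LoRa specification.

```python
import random

def aloha_backoff_delay(collision_count, slot_s=1.0, max_exp=5,
                        rng=random.Random(0)):
    """Random delay before retransmitting a collided frame.

    After the n-th detected collision, wait a random number of slots in
    [0, 2**min(n, max_exp) - 1].  Illustrative only: pure ALOHA merely
    requires some random wait before retransmission.
    """
    window = 2 ** min(collision_count, max_exp)
    return rng.randrange(window) * slot_s

# Repeated collisions widen the backoff window, spreading retransmissions out.
delays = [aloha_backoff_delay(n) for n in range(1, 6)]
```

As the text notes, this per-sender randomization delays delivery; with many nodes the expected wait grows, which motivates the centrally coordinated allocation of the disclosure.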
Therefore, the above technical problems of the related art need to be solved.
Disclosure of Invention
The present application is directed to solving the technical problems in the related art. Therefore, embodiments of the present application provide a method, an apparatus, and a storage medium for allocating LoRa wireless resources, which can reduce the data collision rate and the communication delay, thereby increasing the number of nodes that can access a gateway.
According to an aspect of the embodiments of the present application, there is provided a LoRa wireless resource allocation method, including:
acquiring node information and channel information forwarded by a gateway;
obtaining a corresponding adjustment strategy through a neural network according to the node information and the channel information;
setting a node parameter value according to the adjustment strategy, sending the node parameter value to a gateway, and generating a corresponding reward value and a next state according to a preset reward function;
placing the reward value and the next state into an experience pool;
and training and updating the neural network according to the data in the experience pool.
In one embodiment, before obtaining the node information and the channel information forwarded by the gateway, the method further includes:
detecting whether data is collided in a channel;
and if the data is not collided in the channel, the data sent by the receiving node is forwarded to the server.
In one embodiment, after the server receives the data, the method further includes:
and analyzing the data to obtain node information and channel information.
In one embodiment, after the gateway receives the node parameter value, the method further includes:
and sending an MAC command to the node according to the node parameter value, and adjusting the parameter setting of the node.
In one embodiment, the obtaining, by a neural network, a corresponding adjustment policy according to the node information and the channel information includes:
and obtaining an adjusting action through a neural network according to the node information and the channel information, wherein the adjusting action is an action of dynamically adjusting the spreading factor and the bandwidth and determining whether to switch the channel by the node.
In one embodiment, generating the corresponding reward value and the next state according to a preset reward function includes: recording the environment state information, the correspondingly generated action, the new environment state, and the reward value produced by the reward function after the adjustment.
In one embodiment, the method further comprises:
putting the reward value and the next state into an experience pool as one interaction;
and if the interaction times reach preset times, taking out data from the experience pool to train and update the neural network.
According to an aspect of the embodiments of the present application, there is provided an LoRa wireless resource allocation apparatus, the apparatus including:
the first module is used for acquiring node information and channel information forwarded by the gateway;
a second module, configured to obtain a corresponding adjustment policy through a neural network according to the node information and the channel information;
the third module is used for setting a node parameter value according to the adjustment strategy, sending the node parameter value to the gateway and generating a corresponding reward value and a next state according to a preset reward function;
a fourth module for placing the reward value and the next state into an experience pool;
and the fifth module is used for training and updating the neural network according to the data in the experience pool.
According to an aspect of the embodiments of the present application, there is provided an LoRa wireless resource allocation apparatus, the apparatus including:
at least one processor;
at least one memory for storing at least one program;
at least one of the programs, when executed by at least one of the processors, implements a LoRa wireless resource allocation method as described in the foregoing embodiments.
According to an aspect of the embodiments of the present application, there is provided a storage medium storing a program executable by a processor, wherein the program, when executed, implements a LoRa wireless resource allocation method as described in the foregoing embodiments.
The LoRa wireless resource allocation method, device and storage medium of the present application have the following advantages: the method acquires node information and channel information forwarded by a gateway; obtains a corresponding adjustment strategy through a neural network according to the node information and the channel information; sets node parameter values according to the adjustment strategy, sends them to the gateway, and generates a corresponding reward value and next state according to a preset reward function; places the reward value and the next state into an experience pool; and trains and updates the neural network according to the data in the experience pool. This reduces the collision rate of data in the LoRa channel, reduces transmission delay, and increases the number of Internet-of-Things devices that a single gateway can serve.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a method for allocating LoRa radio resources according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an implementation of a method for allocating LoRa radio resources according to an embodiment of the present disclosure;
fig. 3 is a communication optimization flowchart of a LoRa radio resource allocation method according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an LoRa radio resource allocation apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of another LoRa radio resource allocation apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
LoRa is a spread spectrum modulation technique, also known as chirp modulation. LoRa offers long transmission distance, strong interference resistance, low power consumption, and on-demand networking. It is already widely applied in the market, for example in animal and plant tracking in agriculture and animal husbandry, and in environmental monitoring of forests and buildings. LoRaWAN is a communication protocol and system architecture promoted by the LoRa Alliance; its network mainly comprises terminal nodes, gateways, a network server, and application servers.
In the related art, the ALOHA mechanism is used for communication between LoRa terminal nodes and the gateway: a node transmits whenever it has data to send, which can cause collisions and thus frame corruption. Because the broadcast channel provides feedback, a sender can perform collision detection while transmitting, learning whether a frame was corrupted by comparing the received data with the data in its buffer. Other senders follow the same procedure. If a sender learns that a frame was corrupted (i.e., a collision was detected), it waits a random time before retransmitting. As more and more uplink messages are transmitted, the number of collided packets detected in the channel grows sharply, which is detrimental to services with delay requirements.
In LoRaWAN, four main parameters affect data collision: bandwidth, spreading factor, time, and power. The choice of bandwidth and spreading factor affects a node's data rate and transmission distance: the larger the spreading factor, the lower the rate but the longer the range. LoRaWAN therefore mainly tries to maximize each node's transmission rate while keeping it reachable, always tending to choose parameters that are optimal for a single node. With few nodes, the collision rate of this strategy stays within an allowable range; but as the number of nodes grows, the strategy, being per-node optimal rather than globally optimal, easily leaves one channel congested while others sit idle. The industry has proposed some improvements for this situation, such as randomly switching channel on the next retransmission after a collision; however, random channel switching cannot guarantee that the new channel will be collision-free in a massive-connection scenario.
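The rate/range trade-off above follows from the standard LoRa bit-rate relation Rb = SF · (BW / 2^SF) · CR; a small sketch (the function name is ours, the formula is the conventional one):

```python
def lora_bit_rate(sf, bw_hz, cr_denom=5):
    """Nominal LoRa PHY bit rate in bit/s for spreading factor `sf`,
    bandwidth `bw_hz`, and coding rate 4/`cr_denom` (4/5 by default):

        Rb = SF * (BW / 2**SF) * (4 / cr_denom)

    Each increment of SF roughly halves the rate but extends range.
    """
    return sf * (bw_hz / 2 ** sf) * (4 / cr_denom)

# At 125 kHz with coding rate 4/5:
rate_sf7 = lora_bit_rate(7, 125_000)    # ~5469 bit/s
rate_sf12 = lora_bit_rate(12, 125_000)  # ~293 bit/s
```

The roughly 18x rate gap between SF7 and SF12 is why per-node-optimal parameter choices concentrate traffic and airtime so unevenly across channels.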
In order to solve the above problem, the present application proposes a method, an apparatus and a storage medium for allocating LoRa wireless resources.
First, the possible vocabularies appearing in the present specification are explained as follows:
LoRa: a physical layer modulation technology of linear frequency modulation spread spectrum has the characteristics of low power consumption, long transmission distance and strong anti-interference capability.
LoRaWAN: a suite of LoRa-based networking technologies introduced by the LoRa Alliance; it is currently the dominant standard for LoRa networks.
MAC layer: the medium access control layer, responsible for handling transmission and reception on the physical medium.
Deep reinforcement learning: deep learning has strong perception capability but limited decision-making capability, while reinforcement learning has decision-making capability but is comparatively weak at perception. Combining the two makes their strengths complementary and provides a solution to the perception-and-decision problem in complex systems.
Experience pool: its main function is to overcome the correlation and non-stationary distribution of experience data. It works by training on random samples of past state transitions (experiences). Advantages include reusing each sample multiple times, giving high data utilization.
Fig. 1 is a flowchart of an LoRa radio resource allocation method according to an embodiment of the present disclosure, and as shown in fig. 1, the LoRa radio resource allocation method according to the present disclosure includes:
s101, acquiring node information and channel information forwarded by a gateway.
Optionally, before acquiring the node information and the channel information forwarded by the gateway in step S101, the method further includes: detecting whether data is collided in a channel; and if the data is not collided in the channel, receiving the data sent by the node and forwarding the data to the server.
It should be noted that the AI algorithm is designed and embedded in the network server, the LoRa gateway acts as a forwarder for the nodes, and the LoRa nodes send uplink data based on the ALOHA mechanism; only when the data does not collide in the channel can the LoRa gateway successfully receive the data sent by a node and forward it to the network server. After receiving the data, the server parses it to obtain node information and channel information.
And S102, obtaining a corresponding adjustment strategy through a neural network according to the node information and the channel information.
In step S102, obtaining a corresponding adjustment policy through a neural network according to the node information and the channel information may specifically include: obtaining an adjustment action through the neural network according to the node information and the channel information, wherein the adjustment action dynamically adjusts the node's spreading factor and bandwidth and determines whether the node switches channel.
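One straightforward encoding of such an action enumerates the Cartesian product of spreading factors, bandwidths, and a channel-switch flag. The patent does not fix these value sets, so the ones below are illustrative (SF 7-12 and 125/250/500 kHz are the usual LoRa options):

```python
from itertools import product

# Illustrative discrete action space; the disclosure does not fix these sets.
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
BANDWIDTHS_KHZ = [125, 250, 500]
SWITCH_CHANNEL = [False, True]

ACTIONS = list(product(SPREADING_FACTORS, BANDWIDTHS_KHZ, SWITCH_CHANNEL))

def decode_action(index):
    """Map a neural-network output index to an (sf, bw_khz, switch) tuple."""
    return ACTIONS[index]
```

With these sets, the network's output layer would have 36 logits, one per combined (SF, BW, switch) choice.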
S103, setting a node parameter value according to the adjustment strategy, sending the node parameter value to a gateway, and generating a corresponding reward value and a next state according to a preset reward function.
Optionally, when the node parameter value is sent to the gateway and the gateway receives it, the gateway sends a MAC command to the node according to the node parameter value to adjust the node's parameter settings. Specifically, the gateway sends the MAC command when the node's receiving window is open, adjusting the channel parameter settings of each node so as to maximize the utilization of each channel resource and reduce the data collision rate.
In step S103, the corresponding reward value and next state are generated according to a preset reward function and include: the environment state information, the correspondingly generated action, the new environment state, and the reward value produced by the reward function after the adjustment.
And S104, putting the reward value and the next state into an experience pool.
And S105, training and updating the neural network according to the data in the experience pool.
Optionally, the process of putting the reward value and the next state into the experience pool is taken as one interaction; and if the interaction times reach preset times, taking out data from the experience pool to train and update the neural network.
Fig. 2 is a flowchart of an implementation of an LoRa radio resource allocation method according to an embodiment of the present disclosure, and as shown in fig. 2, when the method is applied to a specific flowchart, an operation process of the radio resource allocation method provided by the present disclosure is as follows:
(1) The network server embedded with the AI agent acquires node information and service information from the gateway as the environment state S.
(2) According to the environment state S, the agent generates an action a following its learned policy, dynamically adjusting the node's spreading factor and bandwidth and deciding whether to switch channel.
(3) The adjustment in step (2) changes the communication condition of the channel; the new communication environment state is called S'. The adjusted communication condition is evaluated by an artificially designed reward function, yielding a reward value R that scores the adjustment.
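The disclosure leaves the exact reward function unspecified. As one hedged example, R could reward a drop in the measured channel collision rate while lightly penalizing time-on-air; the function and its weight below are our illustration, not the patented function.

```python
def reward(collision_rate_before, collision_rate_after, airtime_s,
           airtime_weight=0.1):
    """Illustrative reward for an adjustment: positive when the channel
    collision rate drops, with a small penalty on time-on-air so the agent
    does not favor needlessly slow SF/BW settings.  The exact reward
    function in the disclosure is not published; weights are arbitrary."""
    improvement = collision_rate_before - collision_rate_after
    return improvement - airtime_weight * airtime_s
```

Any function with this shape (rewarding lower collision rate and lower delay) matches the stated goals of reducing collisions and transmission delay.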
(4) Steps (1)-(3) are called one interaction between the agent and the environment. The generated data comprise the environment state S, the correspondingly generated action a, the new environment state S', and the reward R produced by the reward function. These four parameters are stored in the experience pool as one interaction; when the number of interactions reaches the set number, data are taken from the experience pool to update and optimize the policy.
It should be added that the parameters of the specific states in step (1) are designed as shown in table 1 below:
TABLE 1 parameter design Table for specific states
(Table 1 appears as an image in the original publication; its contents are not reproduced here.)
It should be added that the specific format of the data of the experience pool in the above step (4) is designed as the following table 2:
TABLE 2 data specific Format design Table for experience pool
(Table 2 appears as an image in the original publication; its contents are not reproduced here.)
In addition, data are extracted from the experience pool in step (4) by random sampling; this breaks the correlation between samples and prevents the training from getting stuck in a local optimum. Because the size of the experience pool is limited, when it is full the oldest data are overwritten according to a FIFO (first-in, first-out) policy, keeping the most recent data.
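The FIFO-overwrite and random-sampling behavior described here maps directly onto a bounded deque; the capacity and the transition layout below are illustrative choices, not values from the disclosure.

```python
import random
from collections import deque

class ExperiencePool:
    """Bounded replay buffer: FIFO overwrite when full, uniform random
    sampling to break correlation between consecutive transitions."""

    def __init__(self, capacity=10_000):
        # deque with maxlen silently drops the oldest entry when full (FIFO).
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size, rng=random.Random(0)):
        # Uniform random mini-batch of past (S, a, R, S') transitions.
        return rng.sample(list(self.buffer), batch_size)
```

The seeded `rng` default is only for reproducibility in this sketch; in training one would use an unseeded generator.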
This application achieves distributed optimal resource allocation in LoRaWAN. The algorithms used in mainstream LoRaWAN are based on a per-node optimal transmission strategy, considering local optimality but not global optimality. Through a deep reinforcement learning algorithm, this application embeds an agent in the network server on top of LoRaWAN's original communication protocol, retains the ALOHA mechanism, and dynamically adjusts nodes through their receive windows, greatly reducing the collision rate between nodes and improving resource utilization. At the same time, the number of devices a single gateway can serve within an allowable collision rate also increases substantially. The method mainly performs deep reinforcement learning at the LoRaWAN network server, carrying out initial setup and subsequent dynamic adjustment of the four parameters that affect the collision rate. By embedding the reinforcement learning algorithm in the network server and dynamically adjusting the allocation strategy, the static allocation problem of traditional scheduling algorithms is solved.
Fig. 3 is a communication optimization flowchart of a LoRa wireless resource allocation method according to an embodiment of the present disclosure. As shown in Fig. 3, the AI algorithm is embedded in the network server, the LoRa gateway acts as a forwarder for the nodes, and the LoRa nodes send uplink data based on the ALOHA mechanism. Only when the data does not collide in the channel can the LoRa gateway successfully receive the data sent by a node and forward it to the network server, which parses the data after receiving it; the data also includes parameter information for some channels. The AI algorithm produces a corresponding adjustment strategy according to the node information and channel information; the strategy is forwarded through the gateway, which sends a MAC command when the node's receiving window opens, thereby adjusting each node's channel parameter settings so as to maximize channel resource utilization and reduce the data collision rate.
Fig. 4 is a schematic diagram of an LoRa radio resource allocation apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the LoRa radio resource allocation apparatus according to the present disclosure includes:
a first module 401, configured to obtain node information and channel information forwarded by a gateway;
a second module 402, configured to obtain a corresponding adjustment policy through a neural network according to the node information and the channel information;
a third module 403, configured to set a node parameter value according to the adjustment policy, send the node parameter value to the gateway, and generate a corresponding reward value and a next state according to a preset reward function;
a fourth module 404, configured to place the reward value and the next state into an experience pool;
a fifth module 405, configured to train and update the neural network according to the data in the experience pool.
Fig. 5 is a schematic diagram of another LoRa radio resource allocation apparatus according to an embodiment of the present application, and as shown in fig. 5, the LoRa radio resource allocation apparatus according to the present application includes:
at least one processor 501;
at least one memory 502, said memory 502 for storing at least one program;
when at least one of the programs is executed by at least one of the processors 501, a method for allocating LoRa radio resources according to the previous embodiments is implemented.
The contents in the method embodiments are all applicable to the device embodiments, the functions specifically implemented by the device embodiments are the same as those in the method embodiments, and the beneficial effects achieved by the device embodiments are also the same as those achieved by the method embodiments.
Furthermore, the present application also provides a storage medium, where the storage medium stores a program executable by a processor, and the program executable by the processor is executed by the processor to implement a LoRa wireless resource allocation method according to the foregoing embodiment.
Similarly, the contents in the foregoing method embodiments are all applicable to this storage medium embodiment, the functions specifically implemented by this storage medium embodiment are the same as those in the foregoing method embodiments, and the advantageous effects achieved by this storage medium embodiment are also the same as those achieved by the foregoing method embodiments.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present application is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion regarding the actual implementation of each module is not necessary for an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the application, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: numerous changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A LoRa wireless resource allocation method, the method comprising:
acquiring node information and channel information forwarded by a gateway;
obtaining a corresponding adjustment strategy through a neural network according to the node information and the channel information;
setting a node parameter value according to the adjustment strategy, sending the node parameter value to the gateway, and generating a corresponding reward value and a next state according to a preset reward function;
placing the reward value and the next state into an experience pool; and
training and updating the neural network according to the data in the experience pool.
2. The LoRa radio resource allocation method according to claim 1, wherein before obtaining the node information and the channel information forwarded by the gateway, the method further comprises:
detecting whether a data collision occurs in a channel; and
if no data collision occurs in the channel, receiving data sent by a node and forwarding the data to a server.
3. The method of claim 2, wherein after the server receives the data, the method further comprises:
analyzing the data to obtain the node information and the channel information.
4. The method of claim 1, wherein after the gateway receives the node parameter value, the method further comprises:
sending a MAC command to the node according to the node parameter value, so as to adjust the parameter setting of the node.
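The patent does not name which MAC command carries the parameter update to the node; in standard LoRaWAN, the LinkADRReq command (CID 0x03) is the usual vehicle for adjusting a node's data rate, transmit power, and channel mask. The sketch below encodes such a command per the LoRaWAN specification; its use here as the carrier of the patent's parameter values is an assumption, not something the claims state.

```python
def encode_link_adr_req(data_rate, tx_power, ch_mask, nb_trans=1, ch_mask_cntl=0):
    """Encode a LoRaWAN LinkADRReq MAC command (CID 0x03).

    Assumed carrier for the node parameter update of claim 4; the patent
    itself does not specify the command. Fields follow the LoRaWAN spec:
    DataRate in the upper nibble, TXPower in the lower nibble, a 16-bit
    little-endian channel mask, then a redundancy byte.
    """
    cid = 0x03
    dr_txp = ((data_rate & 0x0F) << 4) | (tx_power & 0x0F)
    redundancy = ((ch_mask_cntl & 0x07) << 4) | (nb_trans & 0x0F)
    return bytes([cid, dr_txp,
                  ch_mask & 0xFF, (ch_mask >> 8) & 0xFF,  # ChMask, little-endian
                  redundancy])

# e.g. data rate 5 (SF7/125 kHz in EU868), TX power index 2, channels 0-2 enabled
cmd = encode_link_adr_req(data_rate=5, tx_power=2, ch_mask=0b0111)
# 5-byte MAC command: 03 52 07 00 01 (hex)
```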
5. The method of claim 1, wherein the obtaining the corresponding adjustment policy through a neural network according to the node information and the channel information comprises:
obtaining an adjustment action through the neural network according to the node information and the channel information, wherein the adjustment action dynamically adjusts a spreading factor and a bandwidth of the node and decides whether to switch a channel.
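The action of claim 5 combines three discrete choices, so the neural network's output can be a single index into their Cartesian product. A minimal sketch of that action space follows; the concrete value sets (SF 7-12, bandwidths 125/250/500 kHz) are typical LoRa parameters assumed for illustration, not values taken from the patent.

```python
from itertools import product

# Hypothetical discrete action space: each action fixes a spreading
# factor, a bandwidth, and a channel-switch decision for one node.
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]   # typical LoRa SF range
BANDWIDTHS_KHZ = [125, 250, 500]            # typical LoRa bandwidths
SWITCH_CHANNEL = [False, True]

ACTIONS = list(product(SPREADING_FACTORS, BANDWIDTHS_KHZ, SWITCH_CHANNEL))

def decode_action(index):
    """Map a neural-network output index to concrete node settings."""
    sf, bw, switch = ACTIONS[index]
    return {"spreading_factor": sf, "bandwidth_khz": bw, "switch_channel": switch}

print(len(ACTIONS))  # 6 * 3 * 2 = 36 discrete actions
```

With this encoding, the network needs only one output head of 36 logits rather than three separate heads.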
6. The LoRa radio resource allocation method according to claim 1, wherein the generating of the corresponding reward value and the next state according to a preset reward function comprises: obtaining environment state information, a correspondingly generated action, a new environment state, and a reward value generated by the adjusted reward function.
7. The method of claim 1, wherein the method further comprises:
placing the reward value and the next state into the experience pool as one interaction; and
if the number of interactions reaches a preset number, taking data out of the experience pool to train and update the neural network.
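Claims 1 and 7 together describe a standard experience-replay loop: each gateway interaction is stored as a transition, and training begins only once enough interactions have accumulated. A minimal replay-buffer sketch of that mechanism is below; the capacity, trigger count, and batch size are illustrative assumptions, not values from the patent.

```python
import random
from collections import deque

class ExperiencePool:
    """Replay-buffer sketch of claims 1 and 7: each gateway interaction
    is stored as a (state, action, reward, next_state) tuple; training
    of the neural network is triggered only after a preset number of
    interactions. Capacity and trigger count are assumed values."""

    def __init__(self, capacity=10_000, train_after=64):
        self.buffer = deque(maxlen=capacity)   # old transitions are evicted
        self.train_after = train_after
        self.interactions = 0

    def put(self, state, action, reward, next_state):
        # One interaction = one stored transition (claim 7).
        self.buffer.append((state, action, reward, next_state))
        self.interactions += 1

    def ready_to_train(self):
        # Train/update the network once the interaction count
        # reaches the preset number.
        return self.interactions >= self.train_after

    def sample(self, batch_size=32):
        # Uniform random minibatch for the training update.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

pool = ExperiencePool(train_after=2)
pool.put("s0", 0, 1.0, "s1")
pool.put("s1", 3, -0.5, "s2")
print(pool.ready_to_train())  # True
```

Sampling uniformly from the pool breaks the temporal correlation between consecutive transitions, which is the usual motivation for experience replay in deep reinforcement learning.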
8. An apparatus for assigning LoRa radio resources, the apparatus comprising:
a first module for acquiring node information and channel information forwarded by the gateway;
a second module for obtaining a corresponding adjustment strategy through a neural network according to the node information and the channel information;
a third module for setting a node parameter value according to the adjustment strategy, sending the node parameter value to the gateway, and generating a corresponding reward value and a next state according to a preset reward function;
a fourth module for placing the reward value and the next state into an experience pool; and
a fifth module for training and updating the neural network according to the data in the experience pool.
9. An apparatus for allocating LoRa radio resources, the apparatus comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, implements the LoRa radio resource allocation method according to any one of claims 1-7.
10. A storage medium storing a program executable by a processor, wherein the program, when executed by the processor, implements the LoRa radio resource allocation method according to any one of claims 1-7.
CN202210614167.1A 2022-06-01 2022-06-01 LoRa wireless resource allocation method, device and storage medium Pending CN115243284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210614167.1A CN115243284A (en) 2022-06-01 2022-06-01 LoRa wireless resource allocation method, device and storage medium


Publications (1)

Publication Number Publication Date
CN115243284A 2022-10-25

Family

ID=83669936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210614167.1A Pending CN115243284A (en) 2022-06-01 2022-06-01 LoRa wireless resource allocation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115243284A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination