CN110380776A - Internet of Things system data collection method based on an unmanned aerial vehicle - Google Patents
Internet of Things system data collection method based on an unmanned aerial vehicle
- Publication number
- CN110380776A (application number CN201910777808.3A)
- Authority
- CN
- China
- Prior art keywords
- network
- action
- training
- base station
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/14—Relay systems
- H04B7/15—Active relay systems
- H04B7/185—Space-based or airborne stations; Stations for satellite systems
- H04B7/18502—Airborne stations
- H04B7/18506—Communications with or from aircraft, i.e. aeronautical mobile service
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W52/00—Power management, e.g. TPC [Transmission Power Control], power saving or power classes
- H04W52/04—TPC
- H04W52/18—TPC being performed according to specific parameters
- H04W52/26—TPC being performed according to specific parameters using transmission rate or quality of service QoS [Quality of Service]
- H04W52/267—TPC being performed according to specific parameters using transmission rate or quality of service QoS [Quality of Service] taking into account the information rate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W52/00—Power management, e.g. TPC [Transmission Power Control], power saving or power classes
- H04W52/04—TPC
- H04W52/30—TPC using constraints in the total amount of available transmission power
- H04W52/34—TPC management, i.e. sharing limited amount of power among users or channels or data types, e.g. cell loading
- H04W52/346—TPC management, i.e. sharing limited amount of power among users or channels or data types, e.g. cell loading distributing total power among users or channels
Abstract
The invention belongs to the field of wireless communication technology and relates to a data collection method for an Internet of Things (IoT) system based on an unmanned aerial vehicle (UAV). In the invention, a UAV performs data collection and controls the uplink transmission process of IoT nodes, and the system energy efficiency is optimized to improve the endurance of the IoT system. In the proposed scheme, the UAV device does not need real-time network information when making decisions; instead, it extracts useful information from historical information and predicts the current network environment, so that the long-term energy efficiency of all IoT nodes in the system is maximized.
Description
Technical Field
The invention belongs to the technical field of wireless communication, and relates to an Internet of Things system data collection method based on an unmanned aerial vehicle.
Background
With the development of the Internet of Things (IoT), both the number of IoT devices and the volume of data they transmit are growing exponentially, which places higher demands on data collection. In addition, IoT devices are usually energy-limited and cannot perform long-distance data transmission. There is therefore a need for an efficient, flexible, and low-cost data collection method for IoT systems. Unmanned aerial vehicles (UAVs) are considered a viable solution: unlike traditional data collection devices fixed on the ground, UAVs can be deployed dynamically in the air. This means a UAV device can move quickly to a data hotspot while performing data collection, without being limited by the terrain.
In addition, a UAV can improve the channel gain between itself and a ground node by adjusting their distance during data collection, so that the ground node achieves a higher transmission rate per unit of transmission power, improving the overall performance of the IoT system. UAVs can therefore serve as an efficient data collection solution in IoT systems.
Disclosure of Invention
To address the rapid growth in the number of IoT devices and the resulting difficulty of data collection, the invention uses a UAV to collect data and control the uplink transmission process of the IoT nodes, optimizing the energy efficiency of the system to improve the endurance of the IoT system. The invention focuses on a UAV-based IoT system: in the model shown in Fig. 1, a UAV serves as a mobile base station collecting data from multiple ground nodes, i.e., multiple ground nodes simultaneously perform uplink transmission to the UAV base station. The invention designs a transmission protocol for the data collection process: in each time slot, the UAV device allocates each IoT node to a corresponding channel for transmission, and multiple nodes allocated to the same channel transmit in Time Division Multiple Access (TDMA) mode.
In the present invention, node energy efficiency is defined as the ratio of the transmission rate realized by a node to the transmission power used for transmission. The invention aims to improve the long-term energy efficiency of the IoT system: the UAV device allocates the transmission channels and transmission powers of the IoT nodes so that each node can transmit more data per unit of power, thereby improving the energy efficiency and endurance of the IoT nodes.
By means of deep reinforcement learning (DRL), the UAV device does not need to collect global network information in real time; it learns the pattern of network environment changes from historical observations of the network environment and predicts those changes, then makes the corresponding channel and power allocation decisions. Compared with conventional methods, the proposed scheme effectively reduces information overhead. In the present invention, the frame structure for uplink transmission consists of decision, transmission, and training processes. Two neural networks are used simultaneously for decision making: an action network and an evaluation network. The action network computes probabilities over the decision space to obtain the policy to execute, while the evaluation network judges the quality of the selected policy. The evaluation network helps the action network converge, so the designed control scheme can overcome the convergence difficulty of a high-dimensional decision space.
Furthermore, the invention adopts online learning: the UAV device learns from the short-term experience gained through its interaction with the IoT nodes. The invention avoids the experience replay memory required by conventional deep reinforcement learning, so the UAV device does not need to store a large amount of historical network information; it only records a small amount of interaction data in each training interval, effectively reducing its storage overhead. In addition, once the method has converged, the required control decision is obtained directly by feeding the current network information into the neural network.
The advantage of the invention is that the UAV device does not need real-time network information when making decisions; instead, it extracts useful information from historical information and predicts the current network environment, so that the long-term energy efficiency of all IoT nodes in the system is maximized.
Drawings
Fig. 1 shows a system model of the internet of things of the unmanned aerial vehicle in the invention.
Fig. 2 shows an uplink transmission frame structure in the present invention.
Fig. 3 shows the deep reinforcement learning decision and information interaction flow in the present invention.
Fig. 4 shows the performance comparison of the data collection control method based on deep reinforcement learning proposed by the present invention with other data collection control schemes.
Detailed Description
The following detailed description of specific embodiments of the invention is provided in connection with the accompanying drawings.
Fig. 1 shows the UAV-based IoT system model of the invention, in which a mobile UAV base station collects data from ground IoT nodes and reasonably allocates the transmission channels and transmission powers of the nodes. As shown, one UAV base station is deployed in the air and M IoT nodes are randomly distributed throughout the area. Each IoT node has a single antenna and always has data to transmit to the UAV base station. The UAV base station operates on a preset flight trajectory and, in each time slot, allocates K orthogonal channels of equal bandwidth to the ground IoT nodes for uplink data transmission. In this example, the number of channels is smaller than the number of IoT nodes, i.e., K < M. Each node can be assigned to only one channel for transmission in each time slot.
In the invention, each air-to-ground channel between the UAV and a node consists of two components, a line-of-sight (LOS) path and a non-line-of-sight (NLOS) path. The proportion coefficient of each component in the channel gain is determined by the elevation angle σ_i between the UAV and the ground node and by the system environment, and the two proportion coefficients sum to 1. An example of the LOS ratio can be expressed as:
wherein a, b, c, d, e represent the corresponding environmental parameters.
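The expression itself appears only as a figure in the original filing and is not reproduced in this text. As a hedged reconstruction (the mapping of the parameters a, b, c, d, e onto the coefficients is an assumption), one widely used five-parameter elevation-angle LOS model, due to Holis and Pechac, takes the form:

```latex
% Hedged sketch: a five-parameter elevation-angle LOS-ratio model in \sigma_i.
% The exact expression in the patent figure is not reproduced here.
P_{\mathrm{LOS}}(\sigma_i) = a - \frac{a - b}{1 + \left( \dfrac{\sigma_i - c}{d} \right)^{e}},
\qquad P_{\mathrm{NLOS}}(\sigma_i) = 1 - P_{\mathrm{LOS}}(\sigma_i)
```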
Both the LOS and NLOS paths comprise two parts, large-scale fading and small-scale fading. The large-scale fading is determined by the distance between the UAV and the ground node, while the small-scale fading remains constant within one frame but varies from frame to frame. The small-scale fading follows a Rician (Rice) distribution on the LOS path and a Rayleigh distribution on the NLOS path. The specific channel gain can be expressed as:
where f denotes the carrier frequency and v denotes the speed of light; μ_LOS and μ_NLOS denote the antenna gains of the respective components, and the small-scale fading terms follow the Rician and Rayleigh distributions on the LOS and NLOS paths, respectively.
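The gain expression was likewise not captured in this text. A hedged sketch consistent with the components stated above (free-space large-scale fading, antenna gains μ_LOS and μ_NLOS, Rician and Rayleigh small-scale terms), with d_{i,t} denoting the UAV-to-node distance, is:

```latex
% Hedged sketch, assuming free-space large-scale fading over UAV-node distance d_{i,t}:
g_{i,t} = \left( \frac{v}{4\pi f\, d_{i,t}} \right)^{2}
\left[\, P_{\mathrm{LOS}}\,\mu_{\mathrm{LOS}}\,\lvert \tilde h^{\mathrm{Rice}}_{i,t} \rvert^{2}
 + \bigl(1 - P_{\mathrm{LOS}}\bigr)\,\mu_{\mathrm{NLOS}}\,\lvert \tilde h^{\mathrm{Ray}}_{i,t} \rvert^{2} \right]
```

where the small-scale terms follow Rician and Rayleigh distributions as stated above.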
In the invention, the transmission power of node i on channel k in time slot t is P_{i,k,t}, and the corresponding signal-to-noise ratio can be expressed as

γ_{i,k,t} = P_{i,k,t} · g_{i,t} / σ²

where g_{i,t} is the channel gain defined above and σ² is the noise power.
In the invention, the multiple IoT nodes accessing the same channel transmit data to the UAV device in Time Division Multiple Access (TDMA) mode: each time slot is divided into N_{k,t} sub-slots, where N_{k,t} is the number of users accessing channel k in slot t, and each node is allocated one sub-slot for transmission. Meanwhile, in each time slot the UAV device allocates the transmission powers of all nodes, so the transmission rate c_{i,k,t} realized by node i on channel k in time slot t can be expressed as

c_{i,k,t} = (W / N_{k,t}) · log₂(1 + γ_{i,k,t})

where W denotes the bandwidth of each channel.
The energy efficiency ρ_{i,k,t} of IoT node i on channel k in time slot t is defined as the ratio of the realized transmission rate c_{i,k,t} to the transmission power P_{i,k,t} used, i.e.

ρ_{i,k,t} = c_{i,k,t} / P_{i,k,t}
The aim of the invention is to maximize the minimum energy efficiency η_t of the IoT nodes, which can be expressed as

max over {I_{i,k,t}, P_{i,k,t}} of η_t, where η_t = min_i Σ_k I_{i,k,t} · ρ_{i,k,t}

Here η_t is the minimum energy efficiency realized over all nodes at time t, and I_{i,k,t} indicates whether node i is allocated to channel k in time slot t: the value 1 means node i is allocated to channel k, and 0 means it is not.
Fig. 2 shows the frame structure of uplink transmission in the invention. An online learning mode is adopted: the established neural networks are trained with the interaction data of n time slots, i.e., training is performed once every n slots. If the recording start time is denoted t_start, training takes place at time t = t_start + n. The frame structure at a training time is defined as the training frame structure, and the frame structure at a non-training time as the ordinary frame structure. The ordinary frame structure comprises a decision phase and a transmission phase: the UAV base station first uses the established neural networks to obtain the current control decision, and the IoT nodes then transmit data to the UAV base station according to the corresponding decision information. The training frame structure comprises a decision phase, a transmission phase, and a training phase; it differs from the ordinary frame structure in that the training phase uses the recorded trajectory <s(t_start), a(t_start), r(t_start), …, s(t)> to train the action network and the evaluation network. The recorded interaction data are cleared after training completes and are recorded anew from the next time slot. After the neural networks converge, training is no longer needed, so only the ordinary frame structure is used thereafter.
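A minimal runnable sketch of this frame schedule follows; all function names, the training interval n, and the horizon are illustrative assumptions, not the patent's notation:

```python
import random

# Illustrative stand-ins for the patent's decision and transmission phases.
def observe_state():            # UAV observation of the network (assumed shape)
    return [random.random() for _ in range(4)]

def act(state):                 # decision phase: would query the action network
    return random.randrange(8)  # index into the channel/power decision space

def transmit(action):           # transmission phase: reward = current min energy efficiency
    return random.random(), observe_state()

def train_a2c(traj, bootstrap_state):   # training phase: sketched separately below
    pass

n, num_slots = 32, 320          # training interval and horizon (assumed values)
trajectory, s = [], observe_state()
for t in range(num_slots):
    a = act(s)
    r, s_next = transmit(a)
    trajectory.append((s, a, r))        # record the interaction track
    if (t + 1) % n == 0:                # training frame: extra training phase
        train_a2c(trajectory, s_next)
        trajectory.clear()              # cache is cleared after each training
    s = s_next
```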
Fig. 3 shows the deep reinforcement learning decision and information interaction flow in the invention. The system mainly consists of two parts: the UAV base station and the ground IoT nodes. The UAV base station acts as the decision maker, while all IoT nodes together can be regarded as the environment. The UAV establishes two neural networks, called the action network and the evaluation network. At the start of each time slot, the UAV observes a state s(t) from the environment, feeds it into the action network to obtain a probability for each decision (action), and selects an action a(t) from the action space according to these probabilities. The UAV's decision consists of two parts: the transmission channels and the transmission powers of the IoT nodes. After the selected action is executed, the UAV base station obtains an immediate reward r(t) characterizing the current benefit of the selected decision, together with a new state s(t+1). After each interaction, the UAV device records the interaction trajectory and trains the neural networks once every n time slots.
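A minimal PyTorch-style sketch of the two fully connected networks and the probabilistic action selection described above; the layer widths, state dimension, and decision-space size are assumed values, not taken from the patent:

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 20, 64   # assumed sizes of the state and joint decision space

# Action network pi(a|s; theta): outputs a probability per channel/power decision.
actor = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS), nn.Softmax(dim=-1),
)

# Evaluation network V(s; theta_v): scores the input state to judge the chosen action.
critic = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
# Note: nn.Linear parameters are randomly initialized by default, matching the
# patent's statement that both fully connected networks start from random weights.

def act(state: torch.Tensor) -> int:
    """Sample an action a(t) according to the action network's output probabilities."""
    probs = actor(state)
    return torch.distributions.Categorical(probs).sample().item()
```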
The method adopts deep reinforcement learning to make the allocation decision of transmission channels and transmission power, and specifically comprises the following steps:
at the beginning of each frame, the drone base station will acquire a corresponding state by observing the environment. The state s (t) of the unmanned aerial vehicle base station mainly comprises 4 parts, namely the channel number k of the last time slot accessi,t-1Channel gain of last time slot nodeTransmission rate realized by each node in last time slotAnd the number N of users of each channel in the last time slotk,t-1I.e. by
The action a(t) made by the UAV base station is

a(t) = {I_{i,k,t}, P_{i,k,t}}

i.e., the transmission channel allocation indicators and transmission powers of all nodes.
After performing the selected action, the UAV base station obtains an immediate reward r(t) and a new state s(t+1) corresponding to the next time instant. In this patent, the reward function is set to the minimum energy efficiency currently achieved by all IoT nodes, i.e.

r(t) = η_t
After obtaining the reward and the new state, the UAV base station combines the current state s(t), the selected action a(t), the reward r(t) representing the benefit obtained after the action is executed, and the new state s(t+1) into an interaction tuple <s(t), a(t), r(t), s(t+1)>, and records it in the interaction trajectory cache.
The UAV base station establishes two neural networks in the initialization stage. One is called the action network π(a|s; θ), where θ denotes the action network parameters; it is responsible for outputting the probability of each action for the current input state and for selecting an action (i.e., a transmission channel and transmission power allocation decision) according to these probabilities. The other is called the evaluation network V(s; θ_v), where θ_v denotes the evaluation network parameters; it is responsible for estimating the value of the current input state and computing the temporal-difference error r(t) + γV(s(t+1); θ_v) − V(s(t); θ_v), where γ ∈ (0, 1] is the discount factor representing the influence of the future on the current moment. Since r(t) is obtained by the UAV base station being in state s(t) and executing action a(t), the evaluation network can evaluate the quality of the selected action and assist the action network in converging. Both neural networks are fully connected networks, and their parameters are initialized randomly.
During training, the UAV device uses the interaction data recorded in the interaction trajectory cache over the n consecutive time slots to form a complete interaction trajectory <s(t_start), a(t_start), r(t_start), …, s(t)> as training data. First, the temporal-difference errors corresponding to the n time slots are computed from the interaction trajectory using the evaluation network; then, using the obtained temporal-difference errors and the interaction trajectory, the action network and the evaluation network are trained with stochastic gradient descent, updating the parameters θ and θ_v.
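A hedged PyTorch sketch of this training phase: one-step temporal-difference errors are computed over the recorded trajectory, then both networks are updated by stochastic gradient descent as the patent specifies; the discount factor value and the optimizer objects are assumptions:

```python
import torch

def train_a2c(trajectory, bootstrap_state, actor, critic,
              opt_actor, opt_critic, gamma=0.9):
    """One training phase over n recorded slots (hedged sketch, assumed gamma)."""
    states = torch.stack([s for s, _, _ in trajectory] + [bootstrap_state])
    actions = torch.tensor([a for _, a, _ in trajectory])
    rewards = torch.tensor([r for _, _, r in trajectory])

    values = critic(states).squeeze(-1)                 # V(s; theta_v) for all steps
    # Temporal-difference error per slot: r(t) + gamma*V(s(t+1)) - V(s(t)).
    td_err = rewards + gamma * values[1:].detach() - values[:-1]

    log_probs = torch.distributions.Categorical(actor(states[:-1])).log_prob(actions)
    actor_loss = -(log_probs * td_err.detach()).mean()  # policy gradient weighted by TD error
    critic_loss = td_err.pow(2).mean()                  # evaluation network regression loss

    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
```

Here `actor` and `critic` would be the fully connected networks sketched earlier, and `opt_actor`, `opt_critic` could be `torch.optim.SGD` instances over their respective parameters, matching the stochastic gradient descent the patent specifies.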
Because the evaluation network is used only to assist in training the action network, it is shut down once the action network converges; the transmission channel and transmission power allocation decisions for data collection control are then obtained from the trained action network alone.
Fig. 4 compares the performance of the proposed deep-reinforcement-learning control scheme with other control schemes. For comparison, the figure shows the performance of three baseline schemes: an optimal scheme, a deep-Q-network-based scheme, and a random scheme. The optimal solution is obtained by a search algorithm under the assumption that global network information is known, and can be regarded as an upper performance bound. As Fig. 4 shows, after the algorithm has been trained for some time and reaches convergence, the minimum energy-efficiency performance it achieves gradually approaches the optimal performance and is far superior to the other two methods, demonstrating the superiority of the method in improving node energy efficiency in IoT systems.
Claims (1)
1. An Internet of Things system data collection method based on an unmanned aerial vehicle (UAV), which uses a mobile UAV base station to collect data from ground IoT nodes and allocates the transmission channels and transmission powers of the nodes, characterized in that node energy efficiency is defined as the ratio of the transmission rate realized by a node to the transmission power used for transmission, and the minimum energy efficiency η_t of the IoT nodes is maximized by establishing the target model:

max over {I_{i,k,t}, P_{i,k,t}} of η_t, where η_t = min_i Σ_k I_{i,k,t} · ρ_{i,k,t}

wherein η_t is the minimum energy efficiency realized over all nodes at time t; I_{i,k,t} indicates whether node i is allocated to channel k in time slot t, the value 1 meaning node i is allocated to channel k and 0 meaning it is not; c_{i,k,t} is the transmission rate of node i on channel k in time slot t; P_{i,k,t} is the transmission power of node i on channel k in time slot t; and the energy efficiency of node i on channel k in time slot t is ρ_{i,k,t} = c_{i,k,t} / P_{i,k,t};
The method adopts deep reinforcement learning to make the allocation decision of transmission channels and transmission power, and specifically comprises the following steps:

at the beginning of each frame, the UAV base station obtains the corresponding state s(t) by observing the environment, where s(t) mainly comprises 4 parts, namely the channel index k_{i,t-1} accessed in the previous slot, the channel gain g_{i,t-1} of each node in the previous slot, the transmission rate c_{i,t-1} realized by each node in the previous slot, and the number of users N_{k,t-1} of each channel in the previous slot, i.e.

s(t) = {k_{i,t-1}, g_{i,t-1}, c_{i,t-1}, N_{k,t-1}};

the action a(t) made by the UAV base station is

a(t) = {I_{i,k,t}, P_{i,k,t}}, i.e., the transmission channel allocation indicators and transmission powers of all nodes;
after performing the selected action, the UAV base station obtains an immediate reward r(t) and the new state s(t+1) corresponding to the next time instant, with the reward function set to the minimum energy efficiency currently achieved by all IoT nodes, i.e.

r(t) = η_t
after obtaining the reward and the new state, the UAV base station combines the current state s(t), the selected action a(t), the reward r(t) representing the benefit obtained after the action, and the new state s(t+1) into an interaction tuple <s(t), a(t), r(t), s(t+1)> and records it in the interaction trajectory cache;
the unmanned aerial vehicle base station establishes two neural networks in an initialization stage, wherein one neural network is defined as an action network pi (a | s; theta), and theta is an action neural network parameter and is responsible for outputting probability values of corresponding actions according to a current input state and selecting the actions to execute according to the probability; the other is defined as the evaluation network V (s; theta)v),θvIs used for evaluating network parameters, and is responsible for estimating current input state and calculating time difference error r (t) + gamma V (s (t + 1); thetav)-V(s(t);θv) Wherein γ ∈ (0, 1)]The discount coefficient represents the influence of the future on the current moment, r (t) is obtained by the fact that the unmanned aerial vehicle base station is in a state s (t) and executes an action a (t), and the evaluation network is used for evaluating the quality of the selected action and assisting the action network to converge; the two neural networks are all fully connected networks, and the parameters of the two neural networks are initialized randomly;
training the established neural network by using interactive data at n moments in an online learning manner, namely training every n moments, and defining the recording starting moment as tstartWhen t is equal to tstartTraining at + n time; defining the frame structure of the training time as trainingThe frame structure at the non-training moment is a common frame structure, the common frame structure comprises a decision stage and a transmission stage, namely, the unmanned aerial vehicle base station firstly utilizes the established neural network to obtain the current control decision, and then the IoT node transmits data to the unmanned aerial vehicle base station according to the corresponding decision information; the training frame structure comprises a decision phase, a transmission phase and a training phase, and is distinguished from the ordinary frame structure in that the training phase utilizes recorded records<s(tstart),a(tstart),r(tstart),…,s(t)>For training the action neural network and the judgment neural network and updating the parameters theta and theta of the neural networkvAfter the training is finished, the recorded interactive information is cleared, and the recorded interactive information is recorded again at a new moment;
when training is carried out, the unmanned aerial vehicle equipment forms a complete interactive track by using the interactive data recorded in the interactive track cache at the continuous n moments as training data of the judgment neural network<s(tstart),a(tstart),r(tstart),…,s(t)>Firstly, calculating by using an interactive track and a judgment neural network to obtain time difference errors corresponding to n momentsThen, the action network and the judgment neural network are trained by using the random gradient descent algorithm by using the obtained time difference error and the interactive track, and parameters theta and theta of the action network and the judgment neural network arevUpdating is carried out;
the judging network is used for assisting the action network to train, when the action network reaches convergence, the judging network is closed, and the distribution decision of the transmission channel and the transmission power controlled by the user data collection is only obtained by the trained action network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910777808.3A CN110380776B (en) | 2019-08-22 | 2019-08-22 | Internet of things system data collection method based on unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910777808.3A CN110380776B (en) | 2019-08-22 | 2019-08-22 | Internet of things system data collection method based on unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110380776A true CN110380776A (en) | 2019-10-25 |
CN110380776B CN110380776B (en) | 2021-05-14 |
Family
ID=68260301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910777808.3A Active CN110380776B (en) | 2019-08-22 | 2019-08-22 | Internet of things system data collection method based on unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110380776B (en) |
- 2019-08-22: Application CN201910777808.3A filed; granted as CN110380776B (status: Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170041873A1 (en) * | 2015-08-05 | 2017-02-09 | Samsung Electronics Co., Ltd | Apparatus and method for power saving for cellular internet of things devices |
CN108353081A (*) | 2015-09-28 | 2018-07-31 | 13部门有限公司 | UAV intrusion detection and countermeasures |
CN106961684A (en) * | 2017-03-24 | 2017-07-18 | 厦门大学 | The cognitive radio null tone two dimension meaning interference method against the enemy learnt based on deeply |
CN108243431A (*) | 2017-08-28 | 2018-07-03 | 南京邮电大学 | Power allocation algorithm for UAV relay systems based on an energy-efficiency-optimal criterion |
US20190205736A1 (en) * | 2017-12-29 | 2019-07-04 | Intel Corporation | Compute optimization mechanism for deep neural networks |
CN108337024A (en) * | 2018-02-06 | 2018-07-27 | 重庆邮电大学 | A kind of extensive mimo system efficiency optimization method based on energy acquisition |
CN109511134A (en) * | 2018-10-23 | 2019-03-22 | 郑州航空工业管理学院 | Based on the unmanned plane auxiliary radio communication system load shunt method that efficiency is optimal |
CN109445462A (en) * | 2018-11-30 | 2019-03-08 | 电子科技大学 | A kind of unmanned plane robust paths planning method under uncertain condition |
CN109474980A (en) * | 2018-12-14 | 2019-03-15 | 北京科技大学 | A kind of wireless network resource distribution method based on depth enhancing study |
CN109743099A (en) * | 2019-01-10 | 2019-05-10 | 深圳市简智联信息科技有限公司 | Mobile edge calculations system and its resource allocation methods |
CN110012547A (en) * | 2019-04-12 | 2019-07-12 | 电子科技大学 | A kind of method of user-association in symbiosis network |
Non-Patent Citations (6)
Title |
---|
3GPP: "3rd Generation Partnership Project", 3GPP TR 22.891 V1.3.1 *
GHAITH HATTAB: "Spectrum Sharing Protocols based on", 2018 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN) *
XUAN QI: "Enabling Deep Learning on IoT Edge: Approaches and Evaluation", 2018 Third ACM/IEEE Symposium on Edge Computing *
YING-CHANG LIANG: "Energy-Efficient UAV Backscatter", 2019 IEEE International Conference on Communications (ICC) *
YING-CHANG LIANG: "Optimal Power Allocation for Fading Channels in", 2008 IEEE International Conference on Communications *
WANG Danyang: "Research on multi-transmission-power sensing and identification technology in cognitive networks", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111031513A (en) * | 2019-12-02 | 2020-04-17 | 北京邮电大学 | Multi-unmanned-aerial-vehicle-assisted Internet-of-things communication method and system |
CN111031513B (en) * | 2019-12-02 | 2020-12-15 | 北京邮电大学 | Multi-unmanned-aerial-vehicle-assisted Internet-of-things communication method and system |
CN111629383A (en) * | 2020-05-09 | 2020-09-04 | 清华大学 | Channel prediction method and device for pre-deployment of mobile air base station |
CN111698717A (en) * | 2020-05-26 | 2020-09-22 | 清华大学 | Network transmission parameter selection method, device, equipment and storage medium |
CN112601291A (en) * | 2020-12-09 | 2021-04-02 | 广州技象科技有限公司 | Low-conflict access method, device, system and storage medium based on channel detection |
CN113259946A (en) * | 2021-01-14 | 2021-08-13 | 西安交通大学 | Ground-to-air full coverage power control and protocol design method based on centralized array antenna |
CN113194446A (en) * | 2021-04-21 | 2021-07-30 | 北京航空航天大学 | Unmanned aerial vehicle auxiliary machine communication method |
CN113194446B (en) * | 2021-04-21 | 2022-03-15 | 北京航空航天大学 | Unmanned aerial vehicle auxiliary machine communication method |
CN114222251A (en) * | 2021-11-30 | 2022-03-22 | 中山大学·深圳 | Adaptive network forming and track optimizing method for multiple unmanned aerial vehicles |
CN114222251B (en) * | 2021-11-30 | 2024-06-28 | 中山大学·深圳 | Self-adaptive network forming and track optimizing method for multiple unmanned aerial vehicles |
Also Published As
Publication number | Publication date |
---|---|
CN110380776B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110380776B (en) | Internet of things system data collection method based on unmanned aerial vehicle | |
CN109743210B (en) | Unmanned aerial vehicle network multi-user access control method based on deep reinforcement learning | |
CN110488861A (en) | Unmanned plane track optimizing method, device and unmanned plane based on deeply study | |
CN114025330B (en) | Air-ground cooperative self-organizing network data transmission method | |
CN113162679A (en) | DDPG algorithm-based IRS (inter-Range instrumentation System) auxiliary unmanned aerial vehicle communication joint optimization method | |
CN111526592B (en) | Non-cooperative multi-agent power control method used in wireless interference channel | |
CN113055078B (en) | Effective information age determination method and unmanned aerial vehicle flight trajectory optimization method | |
CN114142908B (en) | Multi-unmanned aerial vehicle communication resource allocation method for coverage reconnaissance task | |
Donevski et al. | Federated learning with a drone orchestrator: Path planning for minimized staleness | |
CN115499921A (en) | Three-dimensional trajectory design and resource scheduling optimization method for complex unmanned aerial vehicle network | |
Bayerlein et al. | Learning to rest: A Q-learning approach to flying base station trajectory design with landing spots | |
CN116582871B (en) | Unmanned aerial vehicle cluster federal learning model optimization method based on topology optimization | |
CN113406965A (en) | Unmanned aerial vehicle energy consumption optimization method based on reinforcement learning | |
CN113255218A (en) | Unmanned aerial vehicle autonomous navigation and resource scheduling method of wireless self-powered communication network | |
CN115119174A (en) | Unmanned aerial vehicle autonomous deployment method based on energy consumption optimization in irrigation area scene | |
Cui et al. | Model-free based automated trajectory optimization for UAVs toward data transmission | |
CN113382060A (en) | Unmanned aerial vehicle track optimization method and system in Internet of things data collection | |
CN115412156B (en) | Urban monitoring-oriented satellite energy-carrying Internet of things resource optimal allocation method | |
CN117193351A (en) | Online trajectory planning method of air-ground collaborative unmanned aerial vehicle for distributed model training | |
CN116074974A (en) | Multi-unmanned aerial vehicle group channel access control method under layered architecture | |
Lyu et al. | Movement and communication co-design in multi-UAV enabled wireless systems via DRL | |
CN116321237A (en) | Unmanned aerial vehicle auxiliary internet of vehicles data collection method based on deep reinforcement learning | |
CN116009590A (en) | Unmanned aerial vehicle network distributed track planning method, system, equipment and medium | |
CN112383893B (en) | Time-sharing-based wireless power transmission method for chargeable sensing network | |
CN115278905A (en) | Multi-node communication opportunity determination method for unmanned aerial vehicle network transmission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |