CN111752304A - Unmanned aerial vehicle data acquisition method and related equipment - Google Patents
- Publication number
- CN111752304A (application number CN202010584082.4A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- option
- sensor
- state information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides an unmanned aerial vehicle data acquisition method based on the Option-DQN algorithm, and related equipment. The method comprises the following steps: (a) acquiring current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data already collected from each sensor in a sensor network, the current position of the unmanned aerial vehicle, and the remaining power of the unmanned aerial vehicle; (b) inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option in an option set, wherein the options in the option set comprise collecting the data of each sensor in the sensor network, return-journey charging, and ending the task; (c) determining the most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set; (d) acquiring the strategy corresponding to the most preferred option, and controlling the unmanned aerial vehicle to execute the strategy; (e) judging whether the most preferred option is ending the task, and if it is not, returning to step (a). The invention ensures that the data acquisition time of the unmanned aerial vehicle is the shortest while also ensuring that the unmanned aerial vehicle can be charged in time.
Description
Technical Field
The invention relates to the communication technology, in particular to an unmanned aerial vehicle data acquisition method and related equipment.
Background
The application of unmanned aerial vehicle (drone) technology in wireless communication has become increasingly widespread in recent years. A drone is highly flexible and maneuverable, and can serve as a mobile aerial base station to assist ground base stations in communication, for example helping remote areas achieve communication coverage. In addition, information transmission between the drone and ground users is almost unobstructed and can be assumed to take place over a line-of-sight channel. Therefore, the throughput and coverage of a communication network served by a drone base station can be effectively improved.
A drone can also assist a sensor network in data acquisition. In a traditional sensor network, data acquisition between nodes is realized in a multi-hop manner: one node transmits data to another node, and in this way the data of all nodes converge at a single node called the fusion center. This acquisition mode has the problem that each sensor must not only transmit its own data but also relay the data of other nodes, so the nodes drain their power too quickly, and multi-hop communication links are unstable. When a drone is used to assist the sensor network with data acquisition, these problems are avoided: a ground sensor can transmit its data directly to a nearby drone, greatly improving transmission efficiency.
However, the power of a drone is limited, and it must be recharged in time if its power runs low during a task. At present, most work solves the problems in the drone data acquisition scenario with conventional mathematical methods and assumes that the drone's power is unlimited, which obviously does not match reality. There is currently no method that lets the drone consider path planning and charging simultaneously during data acquisition.
Disclosure of Invention
In view of the above, there is a need to provide a data acquisition method, device, computer device and storage medium for an unmanned aerial vehicle, which can ensure that the time for the unmanned aerial vehicle to acquire data from a sensor network is shortest, and meanwhile, ensure that the unmanned aerial vehicle can be charged in time.
A first aspect of the application provides a method for data acquisition by an unmanned aerial vehicle, the method comprising:
(a) acquiring current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data acquired by each sensor in a sensor network, the current position of the unmanned aerial vehicle and the residual electric quantity of the unmanned aerial vehicle;
(b) inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option in an option set, wherein the options in the option set comprise collecting the data of each sensor in the sensor network, return-journey charging, and ending the task;
(c) determining a most preferred option according to the probability of each option in the option set selected by the unmanned aerial vehicle;
(d) acquiring the strategy corresponding to the most preferred option, and controlling the unmanned aerial vehicle to execute the strategy;
(e) judging whether the most preferred option is ending the task, and if it is not, returning to step (a).
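Steps (a) through (e) above can be sketched as a simple control loop. This is a minimal illustration; the names get_state, value_net, and execute are hypothetical stand-ins for the patent's components, and greedy selection is used in place of the full ε-greedy rule:

```python
def run_episode(get_state, value_net, execute, end_option):
    """Loop steps (a)-(e): observe state, score options, execute the most
    preferred option's strategy, and stop once the 'end task' option is chosen."""
    chosen = []
    while True:
        state = get_state()                                     # (a) current state
        probs = value_net(state)                                # (b) option probabilities
        option = max(range(len(probs)), key=probs.__getitem__)  # (c) most preferred option
        execute(option)                                         # (d) run its strategy
        chosen.append(option)
        if option == end_option:                                # (e) end of task?
            return chosen
```

With a stub value network, the loop keeps collecting until the "end task" option becomes the most probable one.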
In another possible implementation manner, the value function neural network includes an input layer, a hidden layer, and an output layer, where the hidden layer includes a first fully-connected layer and a second fully-connected layer, and the output of the first fully-connected layer is:

h_1 = ReLU(W_1 s_t + b_1)

where W_1 and b_1 are respectively the weight parameter and bias parameter of the first fully-connected layer, and ReLU is the linear rectification function;

the output of the second fully-connected layer is:

h_2 = ReLU(W_2 h_1 + b_2)

where W_2 and b_2 are respectively the weight parameter and bias parameter of the second fully-connected layer;

the output of the output layer is:

q = softmax(W_3 h_2 + b_3)

where W_3 and b_3 are respectively the weight parameter and bias parameter of the output layer, and softmax is the normalized exponential function.
In another possible implementation manner, before the inputting of the state information into the value function neural network, the method further includes:

training the value function neural network with training samples randomly drawn from a training sample set D, where the kth training sample is d_k = (s_k, o_k, r_k, s_{k+1}), s_k is the state information of the unmanned aerial vehicle before training, o_k is the most preferred option under s_k, r_k is the total instant reward obtained after the unmanned aerial vehicle executes o_k, and s_{k+1} is the state information after executing o_k;

using training sample d_k, the loss function for training the value function neural network is:

L(θ) = E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ))²]

where E denotes the expectation, γ is a discount factor, θ denotes all parameters of the value function neural network, Q_op denotes the value function neural network, and Q̂_op denotes the target network of the value function neural network.
In another possible implementation manner, the update rule for θ is:

θ_new = θ_old − α ∇_θ L(θ_old)

where α is the learning rate, and θ_new and θ_old respectively denote the parameters of the value function neural network after and before the update; the gradient ∇_θ L(θ) of the loss function L(θ) is:

∇_θ L(θ) = −2 E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ)) ∇_θ Q_op(s_k, o_k; θ)]
in another possible implementation, the overall instant prize isIncluding electric power rewardsCollecting rewardsAnd path rewards
In another possible implementation, the power reward r_e is calculated according to a formula in which N_e, N_c, and N_l are all negative constants and l_k is the distance flown by the unmanned aerial vehicle while executing o_k.
In another possible implementation manner, the determining of the most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set includes:

generating a random number between 0 and 1;

judging whether the random number is smaller than a preset constant between 0 and 1;

if the random number is smaller than the constant, randomly selecting one option from the option set as the most preferred option;

and if the random number is not smaller than the constant, selecting the option with the largest probability from the option set as the most preferred option.
A second aspect of the application provides an unmanned aerial vehicle data acquisition device, the device includes:
the acquisition module is used for acquiring the current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data acquired by each sensor in the sensor network, the current position of the unmanned aerial vehicle and the residual electric quantity of the unmanned aerial vehicle;
the planning module is used for inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option in an option set, wherein the options in the option set comprise collecting the data of each sensor in the sensor network, return-journey charging, and ending the task;
a determining module for determining a most preferred option according to a probability of each option in the set of options being selected by the drone;
the execution module is used for acquiring the strategy corresponding to the most preferred item and controlling the unmanned aerial vehicle to execute the strategy;
and the judging module is used for judging whether the optimal option is the task ending.
In another possible implementation manner, the value function neural network includes an input layer, a hidden layer, and an output layer, where the hidden layer includes a first fully-connected layer and a second fully-connected layer, and the output of the first fully-connected layer is:

h_1 = ReLU(W_1 s_t + b_1)

where W_1 and b_1 are respectively the weight parameter and bias parameter of the first fully-connected layer, and ReLU is the linear rectification function;

the output of the second fully-connected layer is:

h_2 = ReLU(W_2 h_1 + b_2)

where W_2 and b_2 are respectively the weight parameter and bias parameter of the second fully-connected layer;

the output of the output layer is:

q = softmax(W_3 h_2 + b_3)

where W_3 and b_3 are respectively the weight parameter and bias parameter of the output layer, and softmax is the normalized exponential function.
In another possible implementation manner, the apparatus further includes:
a training module for, prior to the inputting of the state information into the value function neural network, training the value function neural network with training samples randomly drawn from a training sample set D, where the kth training sample is d_k = (s_k, o_k, r_k, s_{k+1}), s_k is the state information of the unmanned aerial vehicle before training, o_k is the most preferred option under s_k, r_k is the total instant reward obtained after the unmanned aerial vehicle executes o_k, and s_{k+1} is the state information after executing o_k;

using training sample d_k, the loss function for training the value function neural network is:

L(θ) = E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ))²]

where E denotes the expectation, γ is a discount factor, θ denotes all parameters of the value function neural network, Q_op denotes the value function neural network, and Q̂_op denotes the target network of the value function neural network.
In another possible implementation manner, the update rule for θ is:

θ_new = θ_old − α ∇_θ L(θ_old)

where α is the learning rate, and θ_new and θ_old respectively denote the parameters of the value function neural network after and before the update; the gradient ∇_θ L(θ) of the loss function L(θ) is:

∇_θ L(θ) = −2 E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ)) ∇_θ Q_op(s_k, o_k; θ)]
in another possible implementation, the overall instant prize isIncluding electric power rewardsCollecting rewardsAnd path rewards
In another possible implementation, the power reward r_e is calculated according to a formula in which N_e, N_c, and N_l are all negative constants and l_k is the distance flown by the unmanned aerial vehicle while executing o_k.
In another possible implementation manner, the determining of the most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set includes:

generating a random number between 0 and 1;

judging whether the random number is smaller than a preset constant between 0 and 1;

if the random number is smaller than the constant, randomly selecting one option from the option set as the most preferred option;

and if the random number is not smaller than the constant, selecting the option with the largest probability from the option set as the most preferred option.
A third aspect of the application provides a computer device comprising a processor for implementing the drone data acquisition method when executing a computer program stored in a memory.
A fourth aspect of the present application provides a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the drone data acquisition method.
According to the technical scheme, the invention provides a data acquisition solution with autonomous charging and path planning for the unmanned aerial vehicle. The invention adopts an Option-DQN-based hierarchical reinforcement learning algorithm to ensure that the unmanned aerial vehicle finds the optimal path, so that the data acquisition time is the shortest, while also ensuring that the unmanned aerial vehicle can judge when to charge according to its own state and complete the charging action.
Different from the traditional method, the scheme can deal with complex scene changes, such as increase of the number of sensors on the ground, limited electric quantity of the unmanned aerial vehicle and the like. The method is simple to implement, low in complexity and obvious in practical application value.
Drawings
Fig. 1 is a flowchart of a data acquisition method for an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 2 is a structural diagram of the data acquisition device of the unmanned aerial vehicle provided by the embodiment of the invention.
Fig. 3 is a schematic diagram of a computer device provided by an embodiment of the present invention.
Fig. 4 is a comparison graph of the period return of the proposed Option-DQN algorithm and the conventional DQN algorithm.
Fig. 5 is a diagram of a flight path of an unmanned aerial vehicle for data acquisition of the unmanned aerial vehicle according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; the described embodiments are merely a subset of the embodiments of the present invention, rather than all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the unmanned aerial vehicle data acquisition method is applied to one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
Example one
Fig. 1 is a flowchart of a data acquisition method for an unmanned aerial vehicle according to an embodiment of the present invention. The unmanned aerial vehicle data acquisition method is applied to computer equipment. The data acquisition method of the unmanned aerial vehicle controls the rechargeable unmanned aerial vehicle to acquire data of the sensor network, so that the shortest data acquisition time of the unmanned aerial vehicle is ensured, and meanwhile, the unmanned aerial vehicle can be charged in time.
As shown in fig. 1, the data acquisition method for the unmanned aerial vehicle includes:
101, acquiring current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data collected by each sensor in the sensor network, the current position of the unmanned aerial vehicle and the residual capacity of the unmanned aerial vehicle.
The current state information of the unmanned aerial vehicle can be recorded as s_t = (cr_t, p_t, e_t), where cr_t = (cr_t^1, cr_t^2, …, cr_t^N) is the vector of percentages of data already collected from each sensor in the sensor network, N is the number of sensors in the sensor network, p_t = (x_t, y_t, z_t) is the current position of the unmanned aerial vehicle, and e_t is the remaining power of the unmanned aerial vehicle.
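The state tuple s_t above can be illustrated concretely. A minimal sketch, assuming the state is flattened into a single list of numbers (the field layout here is an illustrative assumption, not the patent's exact encoding):

```python
def make_state(collected_pct, position, battery):
    """Build s_t = (cr_t, p_t, e_t): per-sensor collection percentages,
    drone position (x, y, z), and remaining power, flattened to one vector."""
    return list(collected_pct) + list(position) + [battery]

# Example: 3 sensors, half of the first sensor's data collected,
# drone at height 50 with 80% power remaining.
s_t = make_state([0.5, 0.0, 0.0], (10.0, 20.0, 50.0), 0.8)
```

For N sensors the resulting vector has N + 3 + 1 entries, matching the three components of the state described above.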
In one embodiment, the height z_t of the unmanned aerial vehicle is a constant H, i.e., z_t = H.
The unmanned aerial vehicle starts from the initial position, and data acquisition is carried out on the sensors in the sensor network one by one. And in the data acquisition process, if the electric quantity is insufficient, the unmanned aerial vehicle returns to the charging station to be fully charged and continues to acquire data, and when the data acquisition of all the sensors in the sensor network is finished, the unmanned aerial vehicle returns to the initial position.
In one embodiment, the starting position of the drone is a charging station. The unmanned aerial vehicle starts from the charging station, if the electric quantity is insufficient in the data acquisition process, the unmanned aerial vehicle returns to the charging station to be fully charged and then continues to acquire data, and when the data acquisition of all the sensors in the sensor network is finished, the unmanned aerial vehicle returns to the charging station.
The sensor network comprises a plurality of sensors deployed on the ground; their positions are randomly distributed, and the amount of data carried by each sensor differs. Therefore, the time the unmanned aerial vehicle spends hovering over each sensor during data acquisition also differs.
102, inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option (Option) in an option set, wherein the options in the option set comprise collecting the data of each sensor in the sensor network, return-journey charging, and ending the task.
The option set can be recorded as O = {o_{s,1}, o_{s,2}, …, o_{s,N}, o_c, o_p}, where o_{s,1} is collecting the data of the first sensor in the sensor network, o_{s,2} is collecting the data of the second sensor in the sensor network, …, o_{s,N} is collecting the data of the Nth sensor in the sensor network, o_c is return-journey charging, and o_p is ending the task. The number of options in the option set O is therefore N + 2.
Each option in the option set O is a triplet <I_o, π_o, β_o>, where I_o is the set of state information in which the option can be selected (i.e., the states of the unmanned aerial vehicle in which that option is available), π_o is the strategy of the option, and β_o is the termination condition of the option. In an embodiment, the options selectable by the unmanned aerial vehicle in any state (any state information) form the whole option set O.
In one embodiment, the strategy π_o corresponding to each option is a predefined strategy, and the termination condition β_o of each option is the completion of all actions defined by π_o. Specifically, for the option o_{s,i} of collecting sensor data, the strategy is to fly in a straight line from the current position to the ith sensor and collect that sensor's data, exiting o_{s,i} when collection is complete. For return-journey charging o_c, the strategy is to fly in a straight line to the charging station and charge, exiting o_c when the battery is full. For ending the task o_p, the strategy is to fly straight back to the charging station and notify the user that the task is complete, then exit o_p. It is understood that each option may correspond to other strategies.
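The option triple <I_o, π_o, β_o> and the predefined strategies above can be sketched in code. Everything below (the class layout, the string descriptions of strategies and termination conditions) is an illustrative assumption, not the patent's data structure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    initiation: str              # I_o: states in which the option may be selected
    strategy: Callable[[], str]  # pi_o: the option's predefined strategy
    terminates: str              # beta_o: when all actions of pi_o are finished

def collect_option(i):
    """Option o_{s,i}: fly straight to sensor i and collect its data."""
    return Option("any state",
                  lambda: f"fly straight to sensor {i}, collect data",
                  f"sensor {i} fully collected")

N = 3  # illustrative number of sensors
option_set = [collect_option(i) for i in range(1, N + 1)]
option_set.append(Option("any state",
                         lambda: "fly straight to charging station, charge",
                         "battery full"))                     # o_c
option_set.append(Option("any state",
                         lambda: "fly back to charging station, notify user",
                         "task ended"))                       # o_p
```

The option set then holds N + 2 entries, and every option's initiation set is the whole state space, matching the embodiment described above.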
The value function neural network is a pre-trained neural network.
In one embodiment, the value function neural network comprises an input layer, a hidden layer, and an output layer, where the hidden layer comprises a first fully-connected layer and a second fully-connected layer, and the output of the first fully-connected layer is:

h_1 = ReLU(W_1 s_t + b_1)

where W_1 and b_1 are respectively the weight parameter and bias parameter of the first fully-connected layer, and ReLU is the linear rectification function;

the output of the second fully-connected layer is:

h_2 = ReLU(W_2 h_1 + b_2)

where W_2 and b_2 are respectively the weight parameter and bias parameter of the second fully-connected layer;

the output of the output layer is:

q = softmax(W_3 h_2 + b_3)

where W_3 and b_3 are respectively the weight parameter and bias parameter of the output layer, and softmax is the normalized exponential function.
The input of the second fully-connected layer is the output of the first fully-connected layer. The first fully-connected layer may consist of 1024 neurons, and the second fully-connected layer of 300 neurons.
The input of the output layer is the output of the second fully-connected layer. The output of the output layer is an (N + 2)-dimensional vector containing the probability of the unmanned aerial vehicle selecting each option j in the option set, j = 1, …, N + 2.
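The forward pass described above (two ReLU fully-connected layers followed by a softmax output) can be sketched with NumPy. The random weights below are stand-ins for trained parameters; the hidden-layer sizes follow the description, and the state dimension assumes the flattened state layout:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 4                    # number of sensors (illustrative)
STATE_DIM = N + 3 + 1    # collection percentages + (x, y, z) + remaining power
H1, H2, OUT = 1024, 300, N + 2

# Randomly initialized parameters stand in for the trained network's weights.
W1, b1 = 0.01 * rng.standard_normal((H1, STATE_DIM)), np.zeros(H1)
W2, b2 = 0.01 * rng.standard_normal((H2, H1)), np.zeros(H2)
W3, b3 = 0.01 * rng.standard_normal((OUT, H2)), np.zeros(OUT)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    z = np.exp(x - x.max())  # shift by the max for numerical stability
    return z / z.sum()

def option_probabilities(s):
    h1 = relu(W1 @ s + b1)        # first fully-connected layer
    h2 = relu(W2 @ h1 + b2)       # second fully-connected layer
    return softmax(W3 @ h2 + b3)  # output layer: one probability per option

probs = option_probabilities(rng.standard_normal(STATE_DIM))
```

The output is always a length-(N + 2) probability vector: non-negative entries summing to 1, one per option.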
It will be appreciated that other network architectures may be used for the value function neural network.
In an embodiment, prior to the inputting of the state information into the value function neural network, the method further comprises:

training the value function neural network with training samples randomly drawn from a training sample set D, where the kth training sample is d_k = (s_k, o_k, r_k, s_{k+1}), s_k is the state information of the unmanned aerial vehicle before training, o_k is the most preferred option under s_k, r_k is the total instant reward obtained after the unmanned aerial vehicle executes o_k, and s_{k+1} is the state information after executing o_k;

using training sample d_k, the loss function for training the value function neural network is:

L(θ) = E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ))²]

where E denotes the expectation, γ is a discount factor, θ denotes all parameters of the value function neural network, Q_op denotes the value function neural network, and Q̂_op denotes the target network of the value function neural network.
In one embodiment, the update rule for θ is:

θ_new = θ_old − α ∇_θ L(θ_old)

where α is the learning rate, and θ_new and θ_old respectively denote the parameters of the value function neural network after and before the update; the gradient ∇_θ L(θ) of the loss function L(θ) is:

∇_θ L(θ) = −2 E[(r_k + γ max_{o'} Q̂_op(s_{k+1}, o') − Q_op(s_k, o_k; θ)) ∇_θ Q_op(s_k, o_k; θ)]
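On a single sampled transition (s_k, o_k, r_k, s_{k+1}), the squared TD loss reduces to simple arithmetic. An illustrative sketch with made-up Q-values (the numbers below are assumptions for demonstration only):

```python
GAMMA = 0.9  # discount factor gamma (illustrative value)

def td_loss(q_value, reward, target_q_next):
    """Squared TD error for one sample:
    (r_k + gamma * max_o' Qhat_op(s_{k+1}, o') - Q_op(s_k, o_k))^2."""
    target = reward + GAMMA * max(target_q_next)  # bootstrapped target
    return (target - q_value) ** 2

# Q_op(s_k, o_k) = 1.0, reward r_k = 0.5, target network Q-values for s_{k+1}.
loss = td_loss(q_value=1.0, reward=0.5, target_q_next=[0.2, 1.5, 0.3])
```

Here the target is 0.5 + 0.9 · 1.5 = 1.85, so the loss on this sample is (1.85 − 1.0)².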
in an embodiment, the target network is updated in a soft update manner, that is, after a certain period, the target network is updated by using the parameter synthesis of the original target network and the current function neural network, and the update rule is as follows:
θtarget,new=αθtarget,old+(1-α)θ
where α is the update rate, α ∈ [0, 1], and θ_target,new and θ_target,old respectively denote the parameters of the target network after and before the update. Using soft updates for the target network can increase the robustness of neural network training.
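The soft-update rule θ_target,new = α θ_target,old + (1 − α) θ can be illustrated elementwise over plain parameter lists:

```python
def soft_update(target_params, online_params, alpha):
    """Blend the target network's parameters toward the online network's:
    theta_target_new = alpha * theta_target_old + (1 - alpha) * theta."""
    return [alpha * t + (1.0 - alpha) * p
            for t, p in zip(target_params, online_params)]

updated = soft_update([1.0, 2.0], [3.0, 4.0], alpha=0.5)
```

With α close to 1 the target network changes slowly, which is the source of the training robustness mentioned above.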
In one embodiment, the total instant reward r_k includes a power reward r_e, a collection reward r_c, and a path reward r_l.
The power reward r_e is used to penalize the situation in which the unmanned aerial vehicle runs out of power while executing the strategy corresponding to an option.
The collection reward r_c is used to penalize the unmanned aerial vehicle for repeatedly selecting the option of a sensor whose collection has already completed.
The path reward r_l is used to guide the unmanned aerial vehicle to learn to fly the shortest possible path while collecting the sensors' data.
Here N_e, N_c, and N_l are all negative constants, and l_k is the distance flown by the unmanned aerial vehicle while executing o_k.
103, determining a most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set.
In an embodiment, the determining of the most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set includes:

determining the most preferred option from the option set by the ε-greedy algorithm according to the probability of the unmanned aerial vehicle selecting each option in the option set.

Specifically, determining the most preferred option from the option set by the ε-greedy algorithm comprises:

generating a random number between 0 and 1;

judging whether the random number is smaller than a preset constant ε between 0 and 1;

if the random number is smaller than ε, randomly selecting one option from the option set as the most preferred option;

and if the random number is not smaller than ε, selecting the option with the largest probability from the option set as the most preferred option.
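The ε-greedy selection above can be sketched directly. The injectable rand and choice arguments are illustrative hooks for deterministic testing, not part of the patent's method:

```python
import random

def epsilon_greedy(probs, eps, rand=random.random, choice=random.randrange):
    """With probability eps pick a random option (explore); otherwise pick
    the option with the largest probability (exploit)."""
    if rand() < eps:                     # random number smaller than epsilon
        return choice(len(probs))        # explore: any option
    return max(range(len(probs)), key=probs.__getitem__)  # exploit

best = epsilon_greedy([0.1, 0.7, 0.2], eps=0.0)  # eps = 0: pure exploitation
```

With eps = 0 this always returns the highest-probability option; with eps = 1 it always explores.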
And 104, acquiring a strategy corresponding to the most preferred item, and controlling the unmanned aerial vehicle to execute the strategy.
Controlling the drone to execute the policy is controlling the drone to execute a sequence of actions specified by the policy.
For example, if the most preferred option is collecting the data of the ith sensor, the strategy for collecting the data of the ith sensor is acquired, and the unmanned aerial vehicle is controlled to fly in a straight line from its current position to the ith sensor and collect that sensor's data; when collection is complete, the unmanned aerial vehicle exits o_{s,i}.

As another example, if the most preferred option is return-journey charging o_c, the strategy for return-journey charging o_c is acquired, and according to this strategy the unmanned aerial vehicle is controlled to fly in a straight line to the charging station and charge; when the battery is full, the unmanned aerial vehicle exits o_c.

As another example, if the most preferred option is ending the task o_p, the strategy for ending the task o_p is acquired, and according to this strategy the unmanned aerial vehicle is controlled to fly straight back to the charging station and notify the user that the task has ended, then exit o_p.
105, judging whether the most preferred option is ending the task, and if it is not, returning to 101.

For example, if the most preferred option is collecting the data of the ith sensor, the most preferred option is not ending the task, so the process returns to 101.

If the most preferred option is ending the task, the process ends.
This embodiment provides a data acquisition solution with autonomous charging and path planning for the unmanned aerial vehicle. The scheme adopts a hierarchical reinforcement learning algorithm based on Option-Deep Q-Network (Option-DQN) to ensure that the unmanned aerial vehicle finds the optimal path, so that the data acquisition time is the shortest, while also ensuring that the unmanned aerial vehicle can judge when to charge according to its own state and complete the charging action.
Fig. 4 is a comparison graph of the period return of the proposed Option-DQN algorithm and the conventional DQN algorithm.
In fig. 4, the abscissa is the number of training cycles and the ordinate is the cumulative total instant reward. Compared with the traditional DQN algorithm, the periodic return of the proposed Option-DQN algorithm rises more rapidly and converges quickly. The periodic return of the DQN algorithm oscillates noticeably and has large variance, and its final periodic return is clearly lower than that of the Option-DQN algorithm. The proposed Option-DQN algorithm directly learns a "high-level" strategy and can therefore grasp the meaning of the scene more quickly than the traditional DQN algorithm, making it more effective; the traditional DQN algorithm selects only basic actions at each step and lacks overall consideration, for example often turning aside to collect one sensor along its way, resulting in lower collection efficiency.
Fig. 5 is a diagram of the flight path of the unmanned aerial vehicle during data acquisition according to the present invention. The unmanned aerial vehicle starts from the starting point, traverses each sensor once, and finally returns to the end point, returning to the charging station once along the way to charge. Over the whole trajectory the unmanned aerial vehicle selects 22 options in total and uses 162 time units.
Example two
Fig. 2 is a structural diagram of the unmanned aerial vehicle data acquisition device provided in the second embodiment of the present invention. The unmanned aerial vehicle data acquisition device 20 is applied to computer equipment. The unmanned aerial vehicle data acquisition device 20 controls a rechargeable unmanned aerial vehicle to collect data from a sensor network, ensuring that the data acquisition time of the unmanned aerial vehicle is the shortest while also ensuring that the unmanned aerial vehicle can be charged in time.
As shown in Fig. 2, the unmanned aerial vehicle data acquisition apparatus 20 may include an obtaining module 201, a planning module 202, a determining module 203, an executing module 204, and a judging module 205.
The acquisition module 201 is configured to acquire current state information of the unmanned aerial vehicle, where the state information includes a percentage of data acquired by each sensor in the sensor network, a current position of the unmanned aerial vehicle, and a remaining power of the unmanned aerial vehicle.
The current state information of the unmanned aerial vehicle can be recorded as s_t = (cr_t, p_t, e_t), where cr_t = (cr_t^1, cr_t^2, …, cr_t^N) is the percentage of data already collected from each sensor in the sensor network, N is the number of sensors in the sensor network, p_t = (x_t, y_t, z_t) is the current position of the drone, and e_t is the remaining power of the unmanned aerial vehicle.
In one embodiment, the flight altitude z_t of the drone is a constant H, i.e., z_t = H.
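The state tuple s_t = (cr_t, p_t, e_t) described above can be sketched as a small data structure. This is an illustrative sketch only; the class name `UAVState` and the values of N and H are assumptions, not part of the patent.

```python
# Illustrative sketch of the UAV state s_t = (cr_t, p_t, e_t).
# N, H and the class name are assumed for illustration.
from dataclasses import dataclass
from typing import List, Tuple

N = 4       # number of sensors in the network (assumed)
H = 100.0   # fixed flight altitude, z_t = H (assumed value)

@dataclass
class UAVState:
    cr: List[float]                   # cr_t: fraction of data collected per sensor, in [0, 1]
    p: Tuple[float, float, float]     # p_t = (x_t, y_t, z_t): current position
    e: float                          # e_t: remaining battery energy

    def as_vector(self) -> List[float]:
        # Flatten into the (N + 4)-dimensional vector fed to the value network
        return [*self.cr, *self.p, self.e]

s0 = UAVState(cr=[0.0] * N, p=(0.0, 0.0, H), e=1.0)
assert len(s0.as_vector()) == N + 4
```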
The unmanned aerial vehicle starts from the initial position and collects data from the sensors in the sensor network one by one. If its power runs low during data acquisition, the unmanned aerial vehicle returns to the charging station, charges fully, and then continues; when data acquisition from all sensors in the sensor network is finished, the unmanned aerial vehicle returns to the initial position.
In one embodiment, the starting position of the drone is a charging station. The unmanned aerial vehicle starts from the charging station, if the electric quantity is insufficient in the data acquisition process, the unmanned aerial vehicle returns to the charging station to be fully charged and then continues to acquire data, and when the data acquisition of all the sensors in the sensor network is finished, the unmanned aerial vehicle returns to the charging station.
The sensor network comprises a plurality of sensors deployed on the ground; the positions of the sensors are randomly distributed, and the amount of data carried by each sensor differs. Therefore, the time the unmanned aerial vehicle stays at each sensor during data acquisition also differs.
The planning module 202 is configured to input the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each Option in an option set, where the options in the option set include collecting data from each sensor in the sensor network, returning to charge, and ending the task.
The option set can be recorded as O = {o_{s,1}, o_{s,2}, …, o_{s,N}, o_c, o_p}, where o_{s,1} is collecting data from the first sensor in the sensor network, o_{s,2} is collecting data from the second sensor, …, o_{s,N} is collecting data from the N-th sensor, o_c is returning to charge, and o_p is ending the task. The option set O thus contains N + 2 options.
Each option in the option set O is a triple <I_o, π_o, β_o>, where I_o is the set of states in which the option may be selected (i.e., the unmanned aerial vehicle may select the option when its state information belongs to I_o). In an embodiment, the options selectable by the drone in any state (any state information) are the whole option set O, so I_o covers all states. π_o is the policy of the option, and β_o is the termination condition of the option.
In one embodiment, the policy π_o of each option is predefined, and the termination condition β_o of each option is the completion of all actions defined by π_o. Specifically, for the option o_{s,i} of collecting sensor data, the policy is to fly in a straight line from the current position to the i-th sensor and collect its data; when collection is complete, o_{s,i} exits. For the return-to-charge option o_c, the policy is to fly in a straight line to the charging station and charge; when the battery is full, o_c exits. For the end-task option o_p, the policy is to fly in a straight line back to the charging station and inform the user that the task is finished, after which o_p exits. It is understood that each option may correspond to other policies.
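The option triple <I_o, π_o, β_o> described above can be sketched as a policy function plus a termination test. This is a minimal illustration; all names (`Option`, `collect_sensor`, the state dictionary keys) are assumptions, and the low-level action strings merely stand for the straight-line flight and collection actions.

```python
# Minimal sketch of an option <I_o, pi_o, beta_o>; names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Option:
    name: str
    policy: Callable[[Dict], str]        # pi_o: maps state -> low-level action
    terminated: Callable[[Dict], bool]   # beta_o: True when the option should exit

def collect_sensor(i: int) -> Option:
    # o_{s,i}: fly straight to sensor i and collect until its data is complete
    return Option(
        name=f"collect_{i}",
        policy=lambda s, i=i: "collect" if s.get("at_sensor") == i else f"fly_to_sensor_{i}",
        terminated=lambda s, i=i: s["cr"][i] >= 1.0,
    )

def option_set(n: int) -> List[Option]:
    # O = {o_{s,1}, ..., o_{s,N}, o_c, o_p}: N + 2 options in total
    o_c = Option("charge", lambda s: "fly_to_station_and_charge", lambda s: s["e"] >= 1.0)
    o_p = Option("end", lambda s: "fly_to_station", lambda s: s.get("at_station", False))
    return [collect_sensor(i) for i in range(n)] + [o_c, o_p]

opts = option_set(5)
assert len(opts) == 5 + 2
```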
The value function neural network is a pre-trained neural network.
In one embodiment, the value function neural network comprises an input layer, a hidden layer, and an output layer, the hidden layer comprising a first fully-connected layer and a second fully-connected layer. The output of the first fully-connected layer is:

h_1 = ReLU(W_1 s_t + b_1)

where W_1 and b_1 are respectively the weight parameter and bias parameter of the first fully-connected layer, and ReLU is the linear rectification function;
the output of the second fully connected layer is:
wherein W2And b2Respectively, a weight parameter and a deviation parameter of the second fully-connected layer;
the output of the output layer is:
wherein W3And b3Respectively, a weight parameter and a bias parameter of the output layer, softmax being a normalized exponential function.
The input to the second fully-connected layer is the output of the first fully-connected layer. The first fully-connected layer may consist of 1024 neurons and the second fully-connected layer of 300 neurons.
The input of the output layer is the output of the second fully-connected layer. The output of the output layer is an (N + 2)-dimensional vector comprising the probability of the drone selecting each option in the option set O.
It will be appreciated that other network architectures may be used for the value function neural network.
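The two-layer architecture above can be sketched in NumPy for illustration. The layer sizes (1024 and 300 neurons, N + 2 outputs) follow the text; the random weights, the value of N, and all function names are assumptions (an untrained stand-in, not the patented network).

```python
# Sketch of the value-function network: two ReLU fully-connected layers
# (1024 and 300 units) and a softmax output over the N + 2 options.
# Weights are random here; sizes follow the text, N is assumed.
import numpy as np

rng = np.random.default_rng(0)
N = 4                                      # number of sensors (assumed)
in_dim, h1_dim, h2_dim, out_dim = N + 4, 1024, 300, N + 2

W1, b1 = rng.normal(0, 0.01, (h1_dim, in_dim)), np.zeros(h1_dim)
W2, b2 = rng.normal(0, 0.01, (h2_dim, h1_dim)), np.zeros(h2_dim)
W3, b3 = rng.normal(0, 0.01, (out_dim, h2_dim)), np.zeros(out_dim)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def option_probs(s):
    h1 = relu(W1 @ s + b1)         # first fully-connected layer
    h2 = relu(W2 @ h1 + b2)        # second fully-connected layer
    return softmax(W3 @ h2 + b3)   # probability of selecting each option

p = option_probs(np.zeros(in_dim))
assert p.shape == (N + 2,)
```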
In an embodiment, the unmanned aerial vehicle data acquisition device 20 further includes:
a training module, configured to train the value function neural network with training samples randomly extracted from a training sample set D. The k-th training sample of D is d_k = (s_k, o_k, r_k, s_{k+1}), where s_k is the state information of the unmanned aerial vehicle before the option is executed, o_k is the most preferred option under state s_k, r_k is the total instantaneous reward obtained by the unmanned aerial vehicle after executing o_k, and s_{k+1} is the state information after o_k is executed;
using the training sample d_k, the loss function for training the value function neural network is:

L(θ) = E[(r_k + γ·max_{o'} Q_op(s_{k+1}, o'; θ_target) − Q_op(s_k, o_k; θ))²]

where E[·] denotes the expectation, γ is the discount factor, θ denotes all parameters of the value function neural network, Q_op denotes the value function neural network, and θ_target denotes the parameters of the target network of the value function neural network.
In one embodiment, the update rule for θ is:

θ_new = θ_old − α·∇_θ L(θ)

where α is the learning rate, θ_new and θ_old respectively denote the parameters of the value function neural network after and before the update, and the gradient of the loss function is:

∇_θ L(θ) = −2·E[(r_k + γ·max_{o'} Q_op(s_{k+1}, o'; θ_target) − Q_op(s_k, o_k; θ))·∇_θ Q_op(s_k, o_k; θ)]
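The temporal-difference loss and gradient step can be illustrated with a tiny tabular stand-in for the network (one sample, no replay buffer). All sizes and values are toy assumptions; with a table, the gradient step reduces to moving Q(s, o) toward the bootstrapped target.

```python
# Toy one-sample TD update: loss (target - Q)^2, step Q <- Q + alpha * (target - Q).
# A tabular Q stands in for the value-function network; sizes are illustrative.
import numpy as np

gamma, alpha = 0.95, 0.1
Q = np.zeros((3, 2))           # Q(s, o): 3 states, 2 options (toy)
Q_target = np.zeros((3, 2))    # target network's table

def td_update(s, o, r, s_next):
    target = r + gamma * Q_target[s_next].max()   # bootstrapped target
    td_err = target - Q[s, o]
    Q[s, o] += alpha * td_err                     # gradient step on (target - Q)^2
    return td_err ** 2                            # squared-error loss

loss = td_update(s=0, o=1, r=1.0, s_next=2)
assert loss == 1.0   # first update: target = 1.0 and Q was 0
```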
in an embodiment, the target network is updated in a soft update manner, that is, after a certain period, the target network is updated by using the parameter synthesis of the original target network and the current function neural network, and the update rule is as follows:
θtarget,new=αθtarget,old+(1-α)θ
wherein α is the update rate, and α∈ [0,1 ]],θtarget,newAnd thetatarget,oldRespectively representing target networksUpdated parameters and parameters before updating. The robustness of neural network training can be increased by adopting a soft update mode for the target network.
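The soft (Polyak-style) update above is a one-liner; a minimal sketch with NumPy arrays standing in for the network parameters (the value of α here is an illustrative assumption):

```python
# Soft target-network update: theta_target <- alpha * theta_target_old + (1 - alpha) * theta.
import numpy as np

def soft_update(theta_target, theta, alpha=0.9):
    # alpha close to 1 keeps the target network changing slowly
    return alpha * theta_target + (1.0 - alpha) * theta

t = np.zeros(3)          # stand-in target-network parameters
online = np.ones(3)      # stand-in current-network parameters
t = soft_update(t, online, alpha=0.9)
assert np.allclose(t, 0.1)
```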
In one embodiment, the total instantaneous reward r_k includes a power reward r_e, a collection reward r_c, and a path reward r_l.
The power reward penalizes the case where the unmanned aerial vehicle runs out of power while executing the policy corresponding to an option.
The collection reward penalizes the unmanned aerial vehicle for repeatedly selecting the option of a sensor whose acquisition has already completed.
The path reward guides the unmanned aerial vehicle to fly along the shortest possible path when collecting the sensor data.
where N_e, N_c, and N_l are all negative constants, and l_k is the distance flown by the unmanned aerial vehicle during the execution of o_k.
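The three reward terms can be sketched as follows. The exact formulas are not recoverable from the text, so this assumes a plausible form (fixed penalties N_e and N_c when the corresponding event occurs, and a path penalty proportional to the distance flown, r_l = N_l·l_k); the constant values are illustrative only.

```python
# Illustrative total reward r_k = r_e + r_c + r_l.
# The forms and constants below are assumptions, not the patent's exact formulas;
# N_e, N_c, N_l are negative as stated in the text.
N_e, N_c, N_l = -10.0, -5.0, -0.01

def total_reward(ran_out_of_power: bool, reselected_done_sensor: bool, dist_flown: float) -> float:
    r_e = N_e if ran_out_of_power else 0.0         # power penalty
    r_c = N_c if reselected_done_sensor else 0.0   # wasted-collection penalty
    r_l = N_l * dist_flown                         # longer flights are penalized
    return r_e + r_c + r_l

# A 100-unit flight with no penalty events costs only the path term.
assert abs(total_reward(False, False, 100.0) + 1.0) < 1e-9
```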
A determining module 203, configured to determine a most preferred option according to a probability that the drone selects each option in the option set.
In an embodiment, the determining a most preferred option according to the probability of the drone selecting each option in the option set includes:
determining the most preferred option from the option set by an ε-greedy algorithm according to the probability of each option in the option set being selected by the drone.
Specifically, determining the most preferred option from the option set by the ε-greedy algorithm comprises:
generating a random number between 0 and 1;
judging whether the random number is smaller than ε, a constant between 0 and 1;
if the random number is smaller than ε, randomly selecting one option from the option set as the most preferred option;
and if the random number is not smaller than ε, selecting the option with the maximum probability from the option set as the most preferred option.
The executing module 204 is configured to acquire the policy corresponding to the most preferred option and control the unmanned aerial vehicle to execute the policy.
Controlling the drone to execute the policy is controlling the drone to execute a sequence of actions specified by the policy.
For example, if the most preferred option is to collect data of the i-th sensor, the policy for collecting data of the i-th sensor is acquired, and the unmanned aerial vehicle is controlled to fly in a straight line from its current position to the i-th sensor and collect its data; when the collection is complete, o_{s,i} exits.
As another example, if the most preferred option is the return-to-charge option o_c, the policy of o_c is acquired, and according to it the unmanned aerial vehicle is controlled to fly in a straight line to the charging station and charge; when the battery is full, o_c exits.
As another example, if the most preferred option is the end-task option o_p, the policy of o_p is acquired, and according to it the unmanned aerial vehicle is controlled to fly in a straight line back to the charging station and inform the user that the task has ended, after which o_p exits.
The judging module 205 is configured to judge whether the most preferred option is the end task; if not, the obtaining module 201 obtains the current state information of the unmanned aerial vehicle again.
For example, if the most preferred option is to collect data of the i-th sensor, which is not the end task, the obtaining module 201 obtains the current state information of the unmanned aerial vehicle again.
If the most preferred option is the end task, data acquisition is finished.
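The interaction of the five modules can be sketched as a simple control loop: observe the state, score the options, pick the best one, execute its policy, and repeat until the end-task option is chosen. All names below are illustrative stand-ins for the modules of Fig. 2, not the claimed apparatus.

```python
# Sketch of the module 201-205 control loop; names are illustrative.
def run_mission(get_state, score_options, select, execute, end_option):
    steps = 0
    while True:
        s = get_state()              # obtaining module 201
        probs = score_options(s)     # planning module 202
        option = select(probs)       # determining module 203
        execute(option, s)           # executing module 204
        steps += 1
        if option == end_option:     # judging module 205
            return steps

# Toy run: the "selector" just replays a fixed plan of two collections then "end".
plan = iter(["collect_0", "collect_1", "end"])
n = run_mission(
    get_state=lambda: {},
    score_options=lambda s: None,
    select=lambda p: next(plan),
    execute=lambda o, s: None,
    end_option="end",
)
assert n == 3
```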
The second embodiment provides a data acquisition solution with autonomous charging and path planning for an unmanned aerial vehicle. The solution adopts a hierarchical reinforcement learning algorithm based on Option-DQN to ensure that the unmanned aerial vehicle finds the optimal path, so that the data acquisition time is shortest, while the unmanned aerial vehicle can decide, according to its own state, when to charge and complete the charging action.
Embodiment Three
The present embodiment provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the above unmanned aerial vehicle data acquisition method are implemented, for example, 101-105 shown in Fig. 1:
101, acquiring current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data acquired by each sensor in a sensor network, the current position of the unmanned aerial vehicle and the residual electric quantity of the unmanned aerial vehicle;
102, inputting the state information into a value function neural network to obtain the probability of each option in an option set selected by the unmanned aerial vehicle, wherein the options in the option set comprise data acquisition of each sensor in a sensor network, return journey charging and task ending;
103, determining a most preferred option according to the probability of the unmanned aerial vehicle selecting each option in the option set;
104, acquiring a strategy corresponding to the most preferred item, and controlling the unmanned aerial vehicle to execute the strategy;
and 105, judging whether the most preferred option is the end task; if not, returning to 101.
Alternatively, the computer program, when executed by the processor, implements the functions of the modules in the above apparatus embodiment, such as modules 201-205 in Fig. 2:
an obtaining module 201, configured to obtain current state information of the unmanned aerial vehicle, where the state information includes a percentage of data acquired by each sensor in the sensor network, a current position of the unmanned aerial vehicle, and a remaining power of the unmanned aerial vehicle;
the planning module 202 is configured to input the state information into a value function neural network to obtain a probability that the unmanned aerial vehicle selects each option in an option set, where the options in the option set include data acquisition of each sensor in the sensor network, return journey charging, and task completion;
a determining module 203, configured to determine a most preferred option according to a probability that the drone selects each option in the option set;
an executing module 204, configured to acquire the policy corresponding to the most preferred option and control the unmanned aerial vehicle to execute the policy;
a judging module 205, configured to judge whether the most preferred option is the end task.
Embodiment Four
Fig. 3 is a schematic diagram of a computer device according to a fourth embodiment of the present invention. The computer device 30 comprises a memory 301, a processor 302, and a computer program 303, such as an unmanned aerial vehicle data acquisition program, stored in the memory 301 and executable on the processor 302. The processor 302, when executing the computer program 303, implements the steps in the above unmanned aerial vehicle data acquisition method embodiment, such as 101-105 shown in Fig. 1. Alternatively, the computer program, when executed by the processor, implements the functions of the modules in the above apparatus embodiment, such as modules 201-205 in Fig. 2.
Illustratively, the computer program 303 may be partitioned into one or more modules that are stored in the memory 301 and executed by the processor 302 to perform the present method. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 303 in the computer device 30.
The computer device 30 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. Those skilled in the art will appreciate that Fig. 3 is merely an example of the computer device 30 and does not constitute a limitation of the computer device 30, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the computer device 30 may also include input and output devices, network access devices, buses, etc.
The Processor 302 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor 302 may be any conventional processor or the like, the processor 302 being the control center for the computer device 30 and connecting the various parts of the overall computer device 30 using various interfaces and lines.
The memory 301 may be used to store the computer program 303, and the processor 302 may implement various functions of the computer device 30 by running or executing the computer program or module stored in the memory 301 and calling data stored in the memory 301. The memory 301 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the computer device 30. Further, the memory 301 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The modules integrated by the computer device 30 may be stored in a storage medium if they are implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a storage medium and executed by a processor, to instruct related hardware to implement the steps of the embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
The integrated module implemented in the form of a software functional module may be stored in a storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned. Furthermore, it is to be understood that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. A plurality of modules or means recited in the system claims may also be implemented by one module or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A data acquisition method for an unmanned aerial vehicle, the method comprising:
(a) acquiring current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data acquired by each sensor in a sensor network, the current position of the unmanned aerial vehicle and the residual electric quantity of the unmanned aerial vehicle;
(b) inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option in an option set, wherein the options in the option set comprise collecting data of each sensor in the sensor network, returning to charge, and ending the task;
(c) determining a most preferred option according to the probability of each option in the option set selected by the unmanned aerial vehicle;
(d) acquiring the policy corresponding to the most preferred option, and controlling the unmanned aerial vehicle to execute the policy;
(e) judging whether the most preferred option is the end task; if not, returning to step (a).
2. The unmanned aerial vehicle data acquisition method of claim 1, wherein the value function neural network comprises an input layer, a hidden layer, and an output layer, the hidden layer comprises a first fully-connected layer and a second fully-connected layer, and the output of the first fully-connected layer is:

h_1 = ReLU(W_1 s_t + b_1)

wherein W_1 and b_1 are respectively a weight parameter and a bias parameter of the first fully-connected layer, and ReLU is a linear rectification function;

the output of the second fully-connected layer is:

h_2 = ReLU(W_2 h_1 + b_2)

wherein W_2 and b_2 are respectively a weight parameter and a bias parameter of the second fully-connected layer;

the output of the output layer is:

P = softmax(W_3 h_2 + b_3)

wherein W_3 and b_3 are respectively a weight parameter and a bias parameter of the output layer, and softmax is a normalized exponential function.
3. The unmanned aerial vehicle data acquisition method of claim 1, wherein before the inputting the state information into a value function neural network, the method further comprises:

training the value function neural network with training samples randomly extracted from a training sample set D, wherein the k-th training sample of D is d_k = (s_k, o_k, r_k, s_{k+1}), s_k is the state information of the unmanned aerial vehicle before the option is executed, o_k is the most preferred option under state s_k, r_k is the total instantaneous reward obtained by the unmanned aerial vehicle after executing o_k, and s_{k+1} is the state information after o_k is executed;

using the training sample d_k, the loss function for training the value function neural network is:

L(θ) = E[(r_k + γ·max_{o'} Q_op(s_{k+1}, o'; θ_target) − Q_op(s_k, o_k; θ))²]
4. The unmanned aerial vehicle data acquisition method of claim 3, wherein the update rule of θ is:

θ_new = θ_old − α·∇_θ L(θ)

wherein α is the learning rate, θ_new and θ_old respectively represent the parameters of the value function neural network after and before the update, and ∇_θ L(θ) is the gradient of the loss function.
6. The unmanned aerial vehicle data acquisition method of claim 5, wherein the power reward is calculated according to the following formula:
7. The unmanned aerial vehicle data acquisition method of claim 1, wherein the determining a most preferred option according to the probability of the drone selecting each option in the set of options comprises:
generating a random number between 0 and 1;
judging whether the random number is smaller than ε, a constant between 0 and 1;
if the random number is smaller than ε, randomly selecting one option from the option set as the most preferred option;
and if the random number is not smaller than ε, selecting the option with the maximum probability from the option set as the most preferred option.
8. An unmanned aerial vehicle data acquisition device, its characterized in that, the device includes:
the acquisition module is used for acquiring the current state information of the unmanned aerial vehicle, wherein the state information comprises the percentage of data acquired by each sensor in the sensor network, the current position of the unmanned aerial vehicle and the residual electric quantity of the unmanned aerial vehicle;
the planning module is used for inputting the state information into a value function neural network to obtain the probability of the unmanned aerial vehicle selecting each option in an option set, wherein the options in the option set comprise collecting data of each sensor in the sensor network, returning to charge, and ending the task;
a determining module for determining a most preferred option according to a probability of each option in the set of options being selected by the drone;
the execution module is used for acquiring the policy corresponding to the most preferred option and controlling the unmanned aerial vehicle to execute the policy;
and the judging module is used for judging whether the most preferred option is the end task.
9. A computer device, characterized in that the computer device comprises a processor for executing a computer program stored in a memory to implement the drone data acquisition method of any one of claims 1 to 7.
10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the drone data acquisition method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010584082.4A CN111752304B (en) | 2020-06-23 | 2020-06-23 | Unmanned aerial vehicle data acquisition method and related equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111752304A true CN111752304A (en) | 2020-10-09 |
CN111752304B CN111752304B (en) | 2022-10-14 |
Family
ID=72676678
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010584082.4A Active CN111752304B (en) | 2020-06-23 | 2020-06-23 | Unmanned aerial vehicle data acquisition method and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111752304B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101650201A (en) * | 2008-08-13 | 2010-02-17 | 中国科学院自动化研究所 | System and method for ground information acquisition |
CN109583665A (en) * | 2018-12-26 | 2019-04-05 | 武汉烽火凯卓科技有限公司 | A kind of unmanned plane charging tasks dispatching method in wireless sensor network |
CN110324805A (en) * | 2019-07-03 | 2019-10-11 | 东南大学 | A kind of radio sensor network data collection method of unmanned plane auxiliary |
CN110329101A (en) * | 2019-05-30 | 2019-10-15 | 成都尚德铁科智能科技有限公司 | A kind of wireless sensing system based on integrated wireless electrical transmission and unmanned plane |
CN110488861A (en) * | 2019-07-30 | 2019-11-22 | 北京邮电大学 | Unmanned plane track optimizing method, device and unmanned plane based on deeply study |
CN110856134A (en) * | 2019-10-16 | 2020-02-28 | 东南大学 | Large-scale wireless sensor network data collection method based on unmanned aerial vehicle |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113360276A (en) * | 2021-04-15 | 2021-09-07 | 北京航空航天大学 | Unmanned aerial vehicle system task planning method and device based on health state |
CN113360276B (en) * | 2021-04-15 | 2022-09-27 | 北京航空航天大学 | Unmanned aerial vehicle system task planning method and device based on health state |
CN113433967A (en) * | 2021-06-07 | 2021-09-24 | 北京邮电大学 | Chargeable unmanned aerial vehicle path planning method and system |
CN113283013A (en) * | 2021-06-10 | 2021-08-20 | 北京邮电大学 | Multi-unmanned aerial vehicle charging and task scheduling method based on deep reinforcement learning |
CN113283013B (en) * | 2021-06-10 | 2022-07-19 | 北京邮电大学 | Multi-unmanned aerial vehicle charging and task scheduling method based on deep reinforcement learning |
CN114237281A (en) * | 2021-11-26 | 2022-03-25 | 国网北京市电力公司 | Control method and device for unmanned aerial vehicle inspection and inspection system |
CN114237281B (en) * | 2021-11-26 | 2023-11-21 | 国网北京市电力公司 | Unmanned aerial vehicle inspection control method, unmanned aerial vehicle inspection control device and inspection system |
Also Published As
Publication number | Publication date |
---|---|
CN111752304B (en) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111752304B (en) | Unmanned aerial vehicle data acquisition method and related equipment | |
Liu et al. | Energy-efficient UAV crowdsensing with multiple charging stations by deep learning | |
CN112580801B (en) | Reinforcement learning training method and decision-making method based on reinforcement learning | |
CN110458663A (en) | Vehicle recommendation method, device, equipment and storage medium | |
WO2019071909A1 (en) | Automatic driving system and method based on relative-entropy deep inverse reinforcement learning | |
CN108458716A (en) | Electric vehicle charging navigation method based on charging pile dynamic occupancy prediction | |
CN110736478A (en) | Unmanned aerial vehicle assisted mobile cloud-aware path planning and task allocation scheme | |
CN111090899B (en) | Spatial layout design method for urban building | |
CN109726676B (en) | Planning method for automatic driving system | |
CN114261400A (en) | Automatic driving decision-making method, device, equipment and storage medium | |
CN116345578B (en) | Micro-grid operation optimization scheduling method based on depth deterministic strategy gradient | |
CN115951587B (en) | Automatic driving control method, device, equipment, medium and automatic driving vehicle | |
Zhang et al. | cgail: Conditional generative adversarial imitation learning—an application in taxi drivers’ strategy learning | |
Xiao et al. | Vehicle trajectory interpolation based on ensemble transfer regression | |
Tagliaferri et al. | A real-time strategy-decision program for sailing yacht races | |
Wang et al. | Human-drone collaborative spatial crowdsourcing by memory-augmented and distributed multi-agent deep reinforcement learning | |
CN114519433A (en) | Multi-agent reinforcement learning and strategy execution method and computer equipment | |
CN113619604A (en) | Integrated decision and control method and device for automatic driving automobile and storage medium | |
CN111259526B (en) | Cluster recovery path planning method, device, equipment and readable storage medium | |
CN116167254A (en) | Multidimensional city simulation deduction method and system based on city big data | |
CN116259175A (en) | Vehicle speed recommendation method and device for diversified dynamic signal lamp modes | |
Arbabi et al. | Planning for autonomous driving via interaction-aware probabilistic action policies | |
CN115330556A (en) | Training method and device for information adjustment model of charging station and product | |
CN115016540A (en) | Multi-unmanned aerial vehicle disaster situation detection method and system | |
Luo et al. | Deployment optimization for shared e-mobility systems with multi-agent deep neural search |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||