CN110807230A - Method for optimizing robustness of topology structure of Internet of things through autonomous learning - Google Patents

Method for optimizing robustness of topology structure of Internet of things through autonomous learning

Info

Publication number
CN110807230A
Authority
CN
China
Prior art keywords
network
action
strategy
learning
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911036835.1A
Other languages
Chinese (zh)
Other versions
CN110807230B (en)
Inventor
邱铁
陈宁
李克秋
周晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201911036835.1A priority Critical patent/CN110807230B/en
Publication of CN110807230A publication Critical patent/CN110807230A/en
Application granted granted Critical
Publication of CN110807230B publication Critical patent/CN110807230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for optimizing the robustness of an Internet of things topology through autonomous learning, comprising the following steps. Step 1: initialize the Internet of things topology. Step 2: compress the topology. Step 3: initialize the autonomous learning model; according to the characteristics of deep learning and reinforcement learning, a deep deterministic learning strategy model is constructed to train the Internet of things topology. Step 4: train and test the model. Step 5: repeat step 4 within each independent experiment, and repeat steps 1, 2, 3 and 4 across multiple independent experiments, up to a maximum number of iterations. In this process, a maximum iteration count is set; each independent experiment keeps its best result, and the average over the repeated experiments is taken as the final result. The invention remarkably improves the ability of the initial topology to resist attacks; the robustness of the network topology is optimized through autonomous learning, guaranteeing highly reliable data transmission.

Description

Method for optimizing robustness of topology structure of Internet of things through autonomous learning
Technical Field
The invention relates to the technical field of Internet of things networking, and in particular to a method for optimizing the robustness of an Internet of things topology.
Background
The Internet of things is an important component of smart city networks: it connects large numbers of device nodes to provide high-quality services. However, the connected device nodes face threats such as random device failures, malicious human damage, natural disasters, and energy exhaustion, any of which can disable part of the network's nodes and thereby bring down the whole network. Given the wide range of Internet of things application scenarios, ensuring that a large-scale network maintains high-quality data communication even when some of its nodes fail is a problem of significant research interest.
In traditional Internet of things topology optimization, nodes are usually deployed at fixed locations and have limited communication ranges, and the network topology is initialized according to a scale-free network model. Among topology optimization strategies, to the best of our knowledge, most research adopts greedy edge-swap strategies or evolutionary algorithms to optimize the robustness of the network topology, so that the whole network gains a very high capability to resist attacks. For example, the paper "Robustness optimization scheme with multi-population co-evolution for scale-free wireless sensor networks" proposes a multi-population genetic algorithm to escape local optima and obtain a globally optimal network topology; however, the time cost of optimizing a single topology is large, and the algorithm cannot accumulate optimization experience across runs, so it must be restarted every time it is used. Other researchers have used neural network models to represent the learning behavior before and after topology optimization, reducing the optimization time. Therefore, in Internet of things topology optimization, a self-learning topology optimization strategy can improve network topology robustness, remove the upper limit on the optimization target value, and accumulate the experience of each learning episode to guide subsequent optimization.
Disclosure of Invention
In order to solve the problems in the prior art, the invention aims to provide a method for optimizing the robustness of the topology structure of the internet of things through autonomous learning.
The invention discloses a method for optimizing the robustness of a topological structure of an Internet of things through autonomous learning, which comprises the following steps:
step 1: initializing the Internet of things topology, namely randomly deploying nodes according to the rules of a scale-free network model and an edge density parameter M, and fixing their geographic positions, wherein the edge density parameter is set to M = 2;
step 2: compressing the topology, namely removing redundant information for nodes outside the communication range, keeping only the in-range node connection relations in the form of an adjacency matrix, compressing the storage space of the network topology, and taking the compressed network topology as the environment space S, where S is a row vector that changes as the network topology state changes;
step 3: initializing the autonomous learning model, namely constructing a deep deterministic learning strategy model, according to the characteristics of deep learning and reinforcement learning, to train the Internet of things topology: a deep deterministic Q-learning network model is adopted to fit the action selection strategy π and the network optimization strategy Q, the continuous action space is mapped into a discrete action space, and the target optimization function O and the update rules of the whole training model are designed; wherein:
the action selection policy π is defined by equation (1):
a_t = π(s_t | θ)  (1)
where a_t denotes the deterministic action taken, s_t denotes the current network topology state, and θ denotes the parameters of the action network;
the network optimization policy Q is defined by equation (2):
Q(s_t, a_t) = E(r(s_t, a_t) + γQ(s_{t+1}, π(s_{t+1})))  (2)
where r(s_t, a_t) denotes the immediate return of the current action a_t on the current network state s_t, γ denotes a discount factor that accumulates learning experience, and Q(s_{t+1}, π(s_{t+1})) denotes the future return value of taking an action in the next network state, so the effect Q(s_t, a_t) of the current action on the current network state combines the immediate return value and the future return value; E(·) denotes the expected value, accumulating the effects of previous selections over a series of action selection strategies;
the objective function O of the autonomous learning model is defined by equation (3) according to the above description:
O(θ) = E(r_1 + γr_2 + γ²r_3 + ... | π(·, θ))  (3)
where r_i denotes the effect of each action on the environment, i.e., the return value, γ denotes the discount factor accumulating learning experience, π(·, θ) denotes the action selection strategy, θ denotes the parameters of the action strategy network, and E denotes the average expected value;
the update rule of the network is defined by equation (4):
L(θ^Q) = (1/N) Σ_i (T_i − Q(s_i, a_i | θ^Q))²  (4)
where T_i denotes the target expectation value, defined by equation (5):
T_i = r_i + γQ'(s_{i+1}, π'(s_{i+1} | θ^{π'}) | θ^{Q'})  (5)
where Q' and π' denote the target networks of the optimization strategy and the action selection strategy, used to calculate the error of the whole autonomous learning model;
step 4: training and testing the model, namely: in the training stage, a discrete action a is obtained through random exploration by the action selection neural network model, the strategy optimization neural network model evaluates the effect of the action on the current environment, previous learning experience is accumulated, and the whole network model is updated to finally obtain the optimal result; in the testing stage, the sample data are tested to obtain the test result; wherein:
the output of the discrete action is defined by equation (6):
d = MAP(a)  (6)
where MAP denotes the mapping from the continuous action space to the discrete action space, and a is defined by equation (7):
a = π(s) = π(s | θ) + N  (7)
where N denotes random sampling noise used to explore more effective actions in the action space;
the action selection strategy network is updated in the direction that maximizes the value of the strategy selection network, so that the selected actions maximize that value;
the update rule of the target network is defined as formula (10);
θ^{Q'} ← τθ^Q + (1−τ)θ^{Q'}
θ^{π'} ← τθ^π + (1−τ)θ^{π'}  (10)
wherein τ represents the update rate of the target network;
step 5: step 4 is repeated within each independent experiment, and steps 1, 2, 3 and 4 are repeated across a plurality of independent experiments, until the maximum number of iterations is reached;
in this process, the maximum number of iterations is set, the best result is selected within each independent experiment, the experiment is repeated multiple times, and the average value is taken as the result of the experiment.
The positive technical effects obtained by the invention comprise:
(1) the method uses a deep reinforcement learning neural network model to design a strategy for optimizing the robustness of the Internet of things topology through autonomous learning, which remarkably improves the ability of the initial topology to resist attacks;
(2) the invention uses the state representation of the Internet of things topology, the mapping of the discrete action space, and the scale-free and compression characteristics of the network to autonomously learn to optimize the robustness of the network topology, thereby guaranteeing highly reliable data transmission.
Drawings
FIG. 1 is an overall flow chart of a method for optimizing robustness of a topology structure of the Internet of things through autonomous learning according to the invention;
FIG. 2 is a schematic diagram of a mapping relationship between continuous and discrete actions of an autonomous learning optimization model;
FIG. 3 is a schematic diagram of the topology compression model of the Internet of things.
Detailed Description
The specific embodiments, structures, features and effects of the node deployment strategy designed according to the present invention are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method comprehensively considers the mapping between a large-scale continuous action space and a discrete action space, the compression scheme of the network topology, and the connection relations of nodes; it enhances the autonomous learning behavior of the whole network while effectively improving network robustness, balances the node connection distribution, and guarantees high-quality communication capability. The method specifically comprises the following steps:
Step 1: initialize the Internet of things topology. Nodes are randomly deployed according to the rules of the scale-free network model and the edge density parameter M, and their geographic positions are fixed. Most nodes have small degree and a few nodes have large degree, which describes real-world Internet of things topologies to the greatest extent. Each node has the same attributes.
The edge density parameter is set to M = 2, indicating that the number of edges in the network is twice the number of nodes.
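For illustration only, a minimal Python sketch of this initialization follows; the networkx library, the node count of 100, and the 100 × 100 deployment field are assumptions, as the patent fixes none of them.

```python
# Sketch of step 1. Assumptions: networkx's Barabasi-Albert generator realizes
# the scale-free rule with M = 2; node count and field size are placeholders.
import random
import networkx as nx

def init_topology(n_nodes=100, m_edges=2, field=100.0, seed=0):
    """Scale-free topology with M = 2 and fixed random geographic positions."""
    random.seed(seed)
    g = nx.barabasi_albert_graph(n_nodes, m_edges, seed=seed)
    # Fix each node at a random position; positions never change afterwards.
    pos = {v: (random.uniform(0, field), random.uniform(0, field)) for v in g}
    nx.set_node_attributes(g, pos, "pos")
    return g
```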
Step 2: compress the topology. Unlike the plain adjacency-matrix representation of the network topology, the invention removes redundant information for nodes outside the communication range, keeps only the in-range node connection relations in adjacency-matrix form, compresses the storage space of the network topology, and takes the compressed network topology as the environment space S.
The environment space S is a row vector that changes as the network topology state changes.
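A sketch of this compression, under the assumption that "outside the communication range" means dropping links longer than a range parameter R and that S is the flattened upper triangle of the pruned adjacency matrix:

```python
# Sketch of step 2: prune out-of-range links, then flatten the upper triangle
# of the adjacency matrix into the row-vector environment state S.
import math
import numpy as np
import networkx as nx

def compress_topology(g, comm_range=30.0):
    pos = nx.get_node_attributes(g, "pos")
    n = g.number_of_nodes()
    adj = np.zeros((n, n), dtype=np.float32)
    for u, v in g.edges():
        if math.dist(pos[u], pos[v]) <= comm_range:  # keep in-range links only
            adj[u, v] = adj[v, u] = 1.0
    iu = np.triu_indices(n, k=1)        # upper triangle, diagonal excluded
    return adj[iu]                      # row vector S; tracks the topology state
```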
Step 3: initialize the autonomous learning model. According to the characteristics of deep learning and reinforcement learning, a deep deterministic learning strategy model is constructed to train the Internet of things topology. A deep deterministic Q-learning network model is adopted to fit the action selection strategy π and the network optimization strategy Q, the continuous action space is mapped into a discrete action space, and the target optimization function O and the update rules of the whole training model are designed.
Wherein the action selection policy pi is defined by equation (1).
a_t = π(s_t | θ)  (1)
Where a_t denotes the deterministic action taken, s_t denotes the current network topology state, and θ denotes the parameters of the action network. The current network topology state s_t is passed through the action strategy function π to obtain a deterministic action, which directly operates on the current network topology.
The network optimization strategy Q, defined by equation (2), measures the effect of the selected action on the environment space.
Q(s_t, a_t) = E(r(s_t, a_t) + γQ(s_{t+1}, π(s_{t+1})))  (2)
Where r(s_t, a_t) denotes the immediate return of the current action a_t on the current network state s_t, and γ is a discount factor that accumulates learning experience. Q(s_{t+1}, π(s_{t+1})) denotes the future return value of taking an action in the next network state, so the effect Q(s_t, a_t) of the current action on the current network state combines the immediate return value and the future return value; E(·) denotes the expected value, accumulating the effects of previous selections over a series of action selection strategies.
According to the above description, the objective function O of the autonomous learning model can be defined as equation (3):
O(θ) = E(r_1 + γr_2 + γ²r_3 + ... | π(·, θ))  (3)
Where r_i denotes the effect of each action on the environment, i.e., the return value. γ denotes the discount factor accumulating learning experience. π(·, θ) denotes the action selection strategy, and θ denotes the parameters of the action strategy network. E denotes the average expected value, serving as the objective function of the whole autonomous learning model.
Wherein the update rule of the network is defined by equation (4).
L(θ^Q) = (1/N) Σ_i (T_i − Q(s_i, a_i | θ^Q))²  (4)
Where T_i denotes the target expectation value, defined by equation (5).
T_i = r_i + γQ'(s_{i+1}, π'(s_{i+1} | θ^{π'}) | θ^{Q'})  (5)
Where Q' and π' denote the target networks of the optimization strategy and the action selection strategy, used to calculate the error of the whole autonomous learning model.
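The patent specifies a deep deterministic Q-learning model but no concrete architecture. The following PyTorch sketch (the framework, hidden sizes, and dimensions are all assumptions) shows one way to realize the strategy networks π and Q and their target copies π' and Q':

```python
# Hedged sketch of the four networks of step 3: actor pi, critic Q, and their
# target copies pi', Q'. Layer sizes and dimensions are assumptions.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):                 # action selection strategy pi(s | theta)
    def __init__(self, state_dim, action_dim=1, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())  # continuous action in [-1, 1]

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):                # network optimization strategy Q(s, a)
    def __init__(self, state_dim, action_dim=1, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

state_dim = 100 * 99 // 2               # flattened upper triangle for 100 nodes
actor, critic = Actor(state_dim), Critic(state_dim)
actor_target = copy.deepcopy(actor)     # pi'
critic_target = copy.deepcopy(critic)   # Q'
```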
Step 4: train and test the model. In the training stage, a discrete action a is obtained through random exploration by the action selection neural network model, the strategy optimization neural network model evaluates the effect of the action on the current environment, previous learning experience is accumulated, and the whole network model is updated to finally obtain the optimal result. In the testing stage, the sample data are tested to obtain the test result.
Wherein the output of the discrete action is defined by equation (6).
d = MAP(a)  (6)
Where MAP denotes the mapping from the continuous action space to the discrete action space, and a is defined by equation (7):
a = π(s) = π(s | θ) + N  (7)
Where N denotes random sampling noise used to explore more effective actions in the action space.
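Equations (6)-(7) could be realized as below; the Gaussian form of N and the uniform binning used for MAP are assumptions, since the patent fixes neither.

```python
# Sketch of equations (6)-(7): perturb the deterministic action with sampling
# noise N, then MAP the continuous value onto a discrete action index.
import numpy as np
import torch

def select_discrete_action(actor, state, n_discrete, noise_std=0.1):
    with torch.no_grad():
        a = actor(torch.as_tensor(state).float()).item()   # pi(s | theta)
    a += np.random.normal(0.0, noise_std)                  # + N, equation (7)
    a = float(np.clip(a, -1.0, 1.0))
    # MAP, equation (6): uniform binning of [-1, 1] onto {0, ..., n_discrete - 1},
    # e.g. an index into a set of candidate edge rewirings.
    return min(int((a + 1.0) / 2.0 * n_discrete), n_discrete - 1)
```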
After the effect of the action on the current environment, i.e., the return value, is obtained, the return value is stored in a memory; the previous learning experience is then reused in subsequent optimization learning, accelerating the convergence of the autonomous learning model. This rule is defined by equation (8).
(s_t, a_t, r_t, s_{t+1}) → D  (8)
Where D denotes the memory of the network model, which stores the current network state, the action taken, the immediate return value, and the next network state.
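A minimal sketch of the memory D of equation (8); the capacity and uniform sampling are assumptions.

```python
# Sketch of the replay memory D from equation (8).
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))   # (s_t, a_t, r_t, s_t+1) -> D

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```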
In the stage of updating the autonomous learning optimization model, the action selection strategy network updating rule is defined by the formula (9).
∇π = E[∇_a Q(s, a) ∇_θ π(s)]  (9)
That is, the action selection strategy network is updated in the direction that maximizes the value of the strategy selection network, so that the selected actions maximize that value.
Wherein, the update rule of the target network is defined as formula (10).
θ^{Q'} ← τθ^Q + (1−τ)θ^{Q'}
θ^{π'} ← τθ^π + (1−τ)θ^{π'}  (10)
In the formula, τ represents the update rate of the target network.
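Putting equations (4), (5), (9) and (10) together, one model update could look like the sketch below; the optimizers, the γ and τ values, and the assumption that the batch is pre-collated into arrays (s, a, r, s_next) are not fixed by the patent, and the critic loss follows the mean-squared-error form reconstructed as equation (4).

```python
# Hedged sketch of one model update: critic loss of equations (4)-(5), actor
# gradient of equation (9), soft target update of equation (10).
import torch

def update_step(batch, actor, critic, actor_t, critic_t,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    s, a, r, s_next = (torch.as_tensor(x).float() for x in batch)
    a = a.reshape(len(a), -1)                     # shape (batch, action_dim)
    # T_i = r_i + gamma * Q'(s_i+1, pi'(s_i+1 | theta_pi') | theta_Q'), eq. (5)
    with torch.no_grad():
        target = r + gamma * critic_t(s_next, actor_t(s_next)).squeeze(-1)
    # Equation (4): mean squared error between Q(s_i, a_i) and the target T_i.
    critic_loss = ((critic(s, a).squeeze(-1) - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Equation (9): ascend grad_a Q(s, a) * grad_theta pi(s) by minimizing -Q.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Equation (10): theta' <- tau * theta + (1 - tau) * theta'.
    for t_net, net in ((actor_t, actor), (critic_t, critic)):
        for p_t, p in zip(t_net.parameters(), net.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```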
Step 5: step 4 is repeated within each independent experiment, and steps 1, 2, 3 and 4 are repeated across a plurality of independent experiments, up to the maximum number of iterations. In this process, the maximum number of iterations is set and the best result is kept within each independent experiment; the experiment is repeated several times and the average value is taken as the final result.
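The outer loop of step 5 then reduces to the sketch below; `train_and_evaluate` is a hypothetical helper standing in for one pass of step 4, and the run counts are assumptions.

```python
# Sketch of step 5: several independent experiments, each keeping its best
# robustness value; the final result is the average over experiments.
def run_experiments(n_runs=10, max_iterations=500):
    best_per_run = []
    for run in range(n_runs):
        topology = init_topology(seed=run)             # steps 1-3: fresh start
        best = float("-inf")
        for _ in range(max_iterations):                # step 4, repeated
            robustness = train_and_evaluate(topology)  # hypothetical helper
            best = max(best, robustness)               # keep the best result
        best_per_run.append(best)
    return sum(best_per_run) / len(best_per_run)       # average as final result
```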

Claims (1)

1. A method for optimizing the robustness of a topological structure of the Internet of things through autonomous learning is characterized by comprising the following steps:
step 1: initializing the Internet of things topology, namely randomly deploying nodes according to the rules of a scale-free network model and an edge density parameter M, and fixing their geographic positions, wherein the edge density parameter is set to M = 2;
step 2: compressing the topology, namely removing redundant information for nodes outside the communication range, keeping only the in-range node connection relations in the form of an adjacency matrix, compressing the storage space of the network topology, and taking the compressed network topology as the environment space S, where S is a row vector that changes as the network topology state changes;
step 3: initializing the autonomous learning model, namely constructing a deep deterministic learning strategy model, according to the characteristics of deep learning and reinforcement learning, to train the Internet of things topology: a deep deterministic Q-learning network model is adopted to fit the action selection strategy π and the network optimization strategy Q, the continuous action space is mapped into a discrete action space, and the target optimization function O and the update rules of the whole training model are designed; wherein:
the action selection policy π is defined by equation (1):
a_t = π(s_t | θ)  (1)
where a_t denotes the deterministic action taken, s_t denotes the current network topology state, and θ denotes the parameters of the action network;
the network optimization policy Q is defined by equation (2):
Q(s_t, a_t) = E(r(s_t, a_t) + γQ(s_{t+1}, π(s_{t+1})))  (2)
where r(s_t, a_t) denotes the immediate return of the current action a_t on the current network state s_t, γ denotes a discount factor that accumulates learning experience, and Q(s_{t+1}, π(s_{t+1})) denotes the future return value of taking an action in the next network state, so the effect Q(s_t, a_t) of the current action on the current network state combines the immediate return value and the future return value; E(·) denotes the expected value, accumulating the effects of previous selections over a series of action selection strategies;
the objective function O of the autonomous learning model is defined by equation (3) according to the above description:
O(θ) = E(r_1 + γr_2 + γ²r_3 + ... | π(·, θ))  (3)
where r_i denotes the effect of each action on the environment, i.e., the return value, γ denotes the discount factor accumulating learning experience, π(·, θ) denotes the action selection strategy, θ denotes the parameters of the action strategy network, and E denotes the average expected value;
the update rule of the network is defined by equation (4):
L(θ^Q) = (1/N) Σ_i (T_i − Q(s_i, a_i | θ^Q))²  (4)
where T_i denotes the target expectation value, defined by equation (5):
T_i = r_i + γQ'(s_{i+1}, π'(s_{i+1} | θ^{π'}) | θ^{Q'})  (5)
where Q' and π' denote the target networks of the optimization strategy and the action selection strategy, used to calculate the error of the whole autonomous learning model;
step 4: training and testing the model, namely: in the training stage, a discrete action a is obtained through random exploration by the action selection neural network model, the strategy optimization neural network model evaluates the effect of the action on the current environment, previous learning experience is accumulated, and the whole network model is updated to finally obtain the optimal result; in the testing stage, the sample data are tested to obtain the test result; wherein:
the output of the discrete action is defined by equation (6):
d = MAP(a)  (6)
where MAP denotes the mapping from the continuous action space to the discrete action space, and a is defined by equation (7):
a = π(s) = π(s | θ) + N  (7)
where N denotes random sampling noise used to explore more effective actions in the action space, and s denotes the current network state;
the action selection strategy network is updated in the direction that maximizes the value of the strategy selection network, so that the selected actions maximize that value;
the update rule of the target network is defined as formula (10);
θ^{Q'} ← τθ^Q + (1−τ)θ^{Q'}
θ^{π'} ← τθ^π + (1−τ)θ^{π'}  (10)
wherein τ represents the update rate of the target network;
step 5: step 4 is repeated within each independent experiment, and steps 1, 2, 3 and 4 are repeated across a plurality of independent experiments, until the maximum number of iterations is reached;
in this process, the maximum number of iterations is set, the best result is selected within each independent experiment, the experiment is repeated multiple times, and the average value is taken as the result of the experiment.
CN201911036835.1A 2019-10-29 2019-10-29 Method for autonomously learning and optimizing topological structure robustness of Internet of things Active CN110807230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911036835.1A CN110807230B (en) 2019-10-29 2019-10-29 Method for autonomously learning and optimizing topological structure robustness of Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911036835.1A CN110807230B (en) 2019-10-29 2019-10-29 Method for autonomously learning and optimizing topological structure robustness of Internet of things

Publications (2)

Publication Number Publication Date
CN110807230A (en) 2020-02-18
CN110807230B CN110807230B (en) 2024-03-12

Family

ID=69489419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911036835.1A Active CN110807230B (en) 2019-10-29 2019-10-29 Method for autonomously learning and optimizing topological structure robustness of Internet of things

Country Status (1)

Country Link
CN (1) CN110807230B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111929641A (en) * 2020-06-19 2020-11-13 天津大学 Rapid indoor fingerprint positioning method based on width learning
CN111935724A (en) * 2020-07-06 2020-11-13 天津大学 Wireless sensor network topology optimization method based on asynchronous deep reinforcement learning
CN113422695A (en) * 2021-06-17 2021-09-21 天津大学 Optimization method for improving robustness of topological structure of Internet of things
CN113435567A (en) * 2021-06-25 2021-09-24 广东技术师范大学 Intelligent topology reconstruction method based on flow prediction, electronic equipment and storage medium
CN113923123A (en) * 2021-09-24 2022-01-11 天津大学 Underwater wireless sensor network topology control method based on deep reinforcement learning
CN114567563A (en) * 2022-03-31 2022-05-31 北京邮电大学 Network topology model training method, network topology reconstruction method and device
CN115225509A (en) * 2022-07-07 2022-10-21 天津大学 Internet of things topological structure generation method based on neural evolution

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060178918A1 (en) * 1999-11-22 2006-08-10 Accenture Llp Technology sharing during demand and supply planning in a network-based supply chain environment
CN102789593A (en) * 2012-06-18 2012-11-21 北京大学 Intrusion detection method based on incremental GHSOM (Growing Hierarchical Self-organizing Maps) neural network
CN102868972A (en) * 2012-09-05 2013-01-09 河海大学常州校区 Internet of things (IoT) error sensor node location method based on improved Q learning algorithm
CN103490413A (en) * 2013-09-27 2014-01-01 华南理工大学 Intelligent electricity generation control method based on intelligent body equalization algorithm
US20190166005A1 (en) * 2017-11-27 2019-05-30 Massachusetts Institute Of Technology Methods and Apparatus for Communication Network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060178918A1 (en) * 1999-11-22 2006-08-10 Accenture Llp Technology sharing during demand and supply planning in a network-based supply chain environment
CN102789593A (en) * 2012-06-18 2012-11-21 北京大学 Intrusion detection method based on incremental GHSOM (Growing Hierarchical Self-organizing Maps) neural network
CN102868972A (en) * 2012-09-05 2013-01-09 河海大学常州校区 Internet of things (IoT) error sensor node location method based on improved Q learning algorithm
CN103490413A (en) * 2013-09-27 2014-01-01 华南理工大学 Intelligent electricity generation control method based on intelligent body equalization algorithm
US20190166005A1 (en) * 2017-11-27 2019-05-30 Massachusetts Institute Of Technology Methods and Apparatus for Communication Network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
冯涛; 李洪涛; 袁占亭; 马建峰: "Topological robustness analysis method for the Bernoulli node network model", Acta Electronica Sinica, vol. 39, no. 7
王超; 王芷阳; 沈聪: "Research on self-organization of wireless networks based on reinforcement learning", Journal of University of Science and Technology of China, no. 12
董晓东; 郭志强; 陈胜; 周晓波; 齐恒; 李克秋: "Controller placement algorithm for SDN network virtualization platforms", Telecommunications Science, no. 004

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111929641B (en) * 2020-06-19 2022-08-09 天津大学 Rapid indoor fingerprint positioning method based on width learning
CN111929641A (en) * 2020-06-19 2020-11-13 天津大学 Rapid indoor fingerprint positioning method based on width learning
CN111935724A (en) * 2020-07-06 2020-11-13 天津大学 Wireless sensor network topology optimization method based on asynchronous deep reinforcement learning
CN111935724B (en) * 2020-07-06 2022-05-03 天津大学 Wireless sensor network topology optimization method based on asynchronous deep reinforcement learning
CN113422695A (en) * 2021-06-17 2021-09-21 天津大学 Optimization method for improving robustness of topological structure of Internet of things
CN113435567A (en) * 2021-06-25 2021-09-24 广东技术师范大学 Intelligent topology reconstruction method based on flow prediction, electronic equipment and storage medium
CN113435567B (en) * 2021-06-25 2023-07-07 广东技术师范大学 Intelligent topology reconstruction method based on flow prediction, electronic equipment and storage medium
CN113923123B (en) * 2021-09-24 2023-06-09 天津大学 Underwater wireless sensor network topology control method based on deep reinforcement learning
CN113923123A (en) * 2021-09-24 2022-01-11 天津大学 Underwater wireless sensor network topology control method based on deep reinforcement learning
CN114567563A (en) * 2022-03-31 2022-05-31 北京邮电大学 Network topology model training method, network topology reconstruction method and device
CN114567563B (en) * 2022-03-31 2024-04-12 北京邮电大学 Training method of network topology model, and reconstruction method and device of network topology
CN115225509A (en) * 2022-07-07 2022-10-21 天津大学 Internet of things topological structure generation method based on neural evolution
CN115225509B (en) * 2022-07-07 2023-09-22 天津大学 Internet of things topological structure generation method based on neural evolution

Also Published As

Publication number Publication date
CN110807230B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN110807230A (en) Method for optimizing robustness of topology structure of Internet of things through autonomous learning
CN111935724B (en) Wireless sensor network topology optimization method based on asynchronous deep reinforcement learning
CN113469325B (en) Hierarchical federation learning method for edge aggregation interval self-adaptive control, computer equipment and storage medium
CN112086958B (en) Power transmission network extension planning method based on multi-step backtracking reinforcement learning algorithm
CN110267292A (en) Cellular network method for predicting based on Three dimensional convolution neural network
JP2019204458A (en) Electric power demand prediction system, learning device, and electric power demand prediction method
CN103905246A (en) Link prediction method based on grouping genetic algorithm
CN112598150A (en) Method for improving fire detection effect based on federal learning in intelligent power plant
CN101706888A (en) Method for predicting travel time
CN115866621A (en) Wireless sensor network coverage method based on whale algorithm
CN115983374A (en) Cable partial discharge database sample expansion method based on optimized SA-CACGAN
CN113382060B (en) Unmanned aerial vehicle track optimization method and system in Internet of things data collection
Shahbazi et al. Density-based clustering and performance enhancement of aeronautical ad hoc networks
CN115396366B (en) Distributed intelligent routing method based on graph attention network
Baxevani et al. Very short-term spatio-temporal wind power prediction using a censored Gaussian field
CN114861917A (en) Knowledge graph inference model, system and inference method for Bayesian small sample learning
CN115395502A (en) Photovoltaic power station power prediction method and system
CN114372418A (en) Wind power space-time situation description model establishing method
Han et al. Improved FOA-ESN method using opposition-based learning mechanism for the network traffic prediction with multiple steps
CN110826244B (en) Conjugated gradient cellular automaton method for simulating influence of rail transit on urban growth
Pouladi et al. Optimum localization of wind turbine sites using opposition based ant colony optimization
Wang et al. Research on the difficulty of mobile node deployment’s self-play in wireless Ad Hoc networks based on deep reinforcement learning
CN116070714B (en) Cloud edge cooperative training method and system based on federal learning and neural architecture search
Shu et al. Link prediction based on 3D convolutional neural network
CN117650834B (en) Space-time flow prediction method of space-time integrated network based on knowledge distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant