CN114090238A - Edge node load prediction method and device - Google Patents
Edge node load prediction method and device
- Publication number
- CN114090238A (application CN202111276213.3A)
- Authority
- CN
- China
- Prior art keywords
- edge node
- edge
- node
- graph
- load
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
- G06N3/04—Architecture, e.g. interconnection topology (neural networks)
- G06N3/084—Backpropagation, e.g. using gradient descent (neural network learning methods)
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
- H04L41/147—Network analysis or design for predicting network behaviour
Abstract
The invention discloses an edge node load prediction method and device. The method comprises: determining the adjacent edge nodes of a current edge node according to an edge node topological relation graph, and collecting the characteristics of the current edge node and the characteristics of the adjacent edge nodes; and performing weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes with a trained graph neural network model, so as to calculate a predicted load value of the current edge node for a future preset time period. The edge node load prediction method and device provided by the invention comprehensively consider the characteristics of the edge node and the characteristics of its adjacent edge nodes, and can therefore provide an accurate load prediction result even for a complex topological structure with many edge nodes.
Description
Technical Field
The invention relates to a method and a device for predicting the load of an edge node, and belongs to the technical field of communications.
Background
Existing edge node load prediction techniques mainly model the loads of edge nodes in different regions and different time periods with machine learning, time series and similar models. They do not take into account that adjacent edge nodes influence one another and that edge node loads are spatially correlated, so the load prediction accuracy for edge nodes is low.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art and to provide an edge node load prediction method and device that take the influence of adjacent edge nodes into account and can therefore provide a more accurate edge node load prediction result.
In order to achieve the above object, the invention adopts the following technical solution:
according to an aspect of the present invention, there is provided an edge node load prediction method, including:
determining adjacent edge nodes of the current edge node according to the edge node topological relation graph, and collecting the characteristics of the current edge node and the characteristics of the adjacent edge nodes;
and performing weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes by adopting the trained graph neural network model to calculate and obtain a predicted load value of the current edge node in a future preset time period.
In an embodiment of the invention, the characteristics include resource characteristics, historical load values and task request characteristics of the edge nodes.
In an embodiment of the invention, the historical load values comprise a yesterday historical load value, a historical average load value over the last week, a historical average load value over the last month, and a historical average load value over the last year.
In an embodiment of the invention, the resource characteristics include the number of CPU cores, the total amount of memory, the total amount of bandwidth, and the number of servers of the edge node.
In an embodiment of the invention, the task request characteristics include the total number of user requests yesterday and the resource characteristics of the user requests.
In an embodiment of the present invention, the training method of the graph neural network model includes:
collecting the characteristics of each edge node in the edge node topological relation graph;
learning, according to the characteristics of each edge node and the characteristics of its adjacent edge nodes, the relationship between edge node characteristics and load by means of a forward propagation algorithm, so as to calculate, with the graph neural network model, a predicted load value of each edge node for a specified historical time period;
and, for each edge node, calculating the MSE loss between the predicted load value of the graph neural network model and the actual load value of that edge node in the specified historical time period, so as to optimize the parameters of the graph neural network model.
In an embodiment of the present invention, the method for constructing the edge node topological relation graph includes:
according to the structural information and the network delay information among the edge nodes, constructing an undirected graph among the edge nodes belonging to the same region through edge connection, and simultaneously extracting the characteristics of each edge node;
wherein the weight of an edge in an undirected graph is defined as the network delay between edge nodes.
In the embodiment of the invention, when the number of the edge nodes exceeds the preset upper limit value, the scale of the edge node topological relation graph is limited by deleting the long-tail edge nodes and setting the edge weight threshold value.
In an embodiment of the invention, the weights used in the weighted summation are determined by the edge weights in the edge node topological relation graph and by the coefficient matrix of each layer of the graph neural network model.
According to another aspect of the present invention, there is provided an edge node load prediction apparatus, including:
a feature acquisition unit, configured to determine the adjacent edge nodes of the current edge node according to the edge node topological relation graph, and to collect the characteristics of the current edge node and the characteristics of the adjacent edge nodes;
a prediction unit, configured to perform weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes with the trained graph neural network model, so as to calculate a predicted load value of the current edge node for a future preset time period.
Compared with the prior art, the invention has the following beneficial effects:
according to the edge node load prediction method and device provided by the invention, the characteristics of the edge node and the characteristics of the adjacent edge node are comprehensively considered, and even for a complex topological structure with multiple edge nodes, a more accurate load prediction result can be provided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings of the embodiments will be briefly described below, it being understood that the drawings described below relate only to some embodiments of the present invention and are not limiting thereof, wherein:
FIG. 1 is an exemplary flow diagram of a method for edge node load prediction according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a graph neural network model according to an embodiment of the present invention;
FIG. 3 is a flow diagram of a method of training a graph neural network model according to an embodiment of the present invention;
fig. 4 is an exemplary block diagram of an edge node topology network in accordance with an embodiment of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, also belong to the scope of protection of the invention.
The first embodiment is as follows:
fig. 1 shows an exemplary flowchart of an edge node load prediction method according to an embodiment of the present invention.
As shown in fig. 1, in the first step, the neighboring edge node of the current edge node is determined according to the edge node topological relation graph, and the features of the current edge node and the features of the neighboring edge node are collected.
Fig. 4 shows an exemplary structure of an edge node topology network according to an embodiment of the present invention. The edge node topological relation graph is an undirected graph, and in some embodiments it may be constructed as follows:
according to the structural information and the network delay information among the edge nodes, constructing an undirected graph among the edge nodes belonging to the same region through edge connection, and simultaneously extracting the characteristics of each edge node;
wherein the weight of an edge in an undirected graph is defined as the network delay between edge nodes.
In the edge node topological relation graph, edge nodes connected by an edge are adjacent edge nodes of each other. Adjacent edge nodes are usually close to each other, for example, they belong to the same geographic area.
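As an illustration of this construction, the following is a minimal sketch (not the patented implementation; the node count, delays and region assignments are invented for the example) that represents the edge node topological relation graph as a weighted undirected adjacency matrix whose edge weights are the network delays between edge nodes of the same region:

```python
# Sketch: build a weighted undirected adjacency matrix for the edge node topology graph.
# Edge weight = network delay between two edge nodes; only nodes of the same region are connected.
import numpy as np

def build_topology_graph(delays_ms, same_region):
    """delays_ms: n x n symmetric matrix of pairwise network delays (ms);
    same_region: n x n boolean matrix, True if two nodes belong to the same region."""
    n = delays_ms.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if same_region[i, j]:
                adj[i, j] = adj[j, i] = delays_ms[i, j]   # undirected edge, weight = delay
    return adj

# Hypothetical example: three edge nodes in one region, one node in another region.
delays = np.array([[0, 5, 8, 40],
                   [5, 0, 6, 42],
                   [8, 6, 0, 45],
                   [40, 42, 45, 0]], dtype=float)
region = np.ones((4, 4), dtype=bool)
region[3, :3] = region[:3, 3] = False                     # node 3 belongs to a different region
A = build_topology_graph(delays, region)
neighbors_of_0 = np.nonzero(A[0])[0]                      # adjacent edge nodes of node 0
```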
In some embodiments, the characteristics of each edge node mainly fall into the following categories (a sketch assembling them into a feature vector follows the list):
a) resource characteristics: the number of CPU cores, the total amount of memory and the total amount of bandwidth of the edge node; the number of servers of the edge node, etc.;
- b) historical load: yesterday's historical load; historical average load over the last week; historical average load over the last month; historical average load over the last year, etc.;
c) task request feature: yesterday's total number of user requests, and resource characteristics (such as CPU, memory, IO, bandwidth, etc.) of the user requests.
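A minimal sketch of assembling these three categories into a single per-node feature vector is given below; the field names, units and flat ordering are assumptions for illustration, since the patent does not prescribe a concrete encoding:

```python
# Sketch: flatten resource, historical-load and task-request features of one edge node
# into a numeric vector (one row of the graph neural network input matrix X).
import numpy as np

def node_feature_vector(node):
    return np.array([
        # a) resource features
        node["cpu_cores"], node["memory_gb"], node["bandwidth_gbps"], node["server_count"],
        # b) historical load
        node["load_yesterday"], node["load_avg_last_week"],
        node["load_avg_last_month"], node["load_avg_last_year"],
        # c) task request features
        node["requests_yesterday"], node["req_cpu"], node["req_memory"],
        node["req_io"], node["req_bandwidth"],
    ], dtype=float)

# Hypothetical edge node.
example_node = {
    "cpu_cores": 64, "memory_gb": 256, "bandwidth_gbps": 10, "server_count": 8,
    "load_yesterday": 0.62, "load_avg_last_week": 0.58,
    "load_avg_last_month": 0.55, "load_avg_last_year": 0.50,
    "requests_yesterday": 120000, "req_cpu": 0.4, "req_memory": 0.3,
    "req_io": 0.2, "req_bandwidth": 0.1,
}
x = node_feature_vector(example_node)
```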
In the second step, the trained graph neural network model is adopted to carry out weighted summation on the features of the current edge node and the features of the adjacent edge nodes so as to calculate and obtain the predicted load value of the current edge node in a future preset time period.
Referring to fig. 3, the training process of the graph neural network model, namely learning the relationship between edge node features and load from historical loads and thereby determining the parameters of the graph neural network model, may specifically include the following steps:
s201: collecting the characteristics of each edge node in the edge node topological relation graph;
s202: learning, according to the characteristics of each edge node and the characteristics of its adjacent edge nodes, the relationship between edge node characteristics and load by means of a forward propagation algorithm, so as to calculate, with the graph neural network model, a predicted load value of each edge node for a specified historical time period;
FIG. 2 is a schematic diagram of a graph neural network model according to an embodiment of the present invention. Each node X1-X4 of the input layer represents an edge node; the predicted load information Y1-Y4 of each node is output through forward propagation of the graph neural network, and the output load information is compared with the real load information Z1-Z4, so that the loss function can be calculated as:

Loss = (1/N) * Σ_i (Y_i - Z_i)^2

where Z_i is the real load data of edge node i, Y_i is the predicted load data of edge node i output by forward propagation through the graph neural network, and N is the number of edge nodes; finally, the parameters of the network are updated by gradient backpropagation.
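A minimal PyTorch-style sketch of this training step is shown below; the framework, the optimizer and the `model(X, A)` interface are assumptions, the patent only specifies comparing the predicted loads Y with the real loads Z via the MSE loss and updating the parameters by gradient backpropagation:

```python
# Sketch: one training step of the graph neural network (assumed PyTorch interface).
import torch

def training_step(model, optimizer, X, A, Z):
    Y = model(X, A)                    # predicted load of every edge node (forward propagation)
    loss = torch.mean((Y - Z) ** 2)    # MSE loss: (1/N) * sum_i (Y_i - Z_i)^2
    optimizer.zero_grad()
    loss.backward()                    # gradient backpropagation
    optimizer.step()                   # update the graph neural network parameters
    return loss.item()
```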
The specific forward propagation process is as follows:
In the forward propagation calculation, for an edge node, the output of the next layer is obtained by weighted summation of the current-layer input features of the edge node itself and of its adjacent edge nodes, and the weights are determined by the edge weights and by the coefficient matrix of each layer of the graph neural network model. It is worth noting that, unlike conventional methods, the graph neural network model does not learn the relationship between edge node features and load from a node's own features alone; it also comprehensively considers the resource, load, request and other features of the adjacent edge nodes of the current edge node, and on that basis gives the load prediction result of each edge node. The forward propagation calculation can be expressed by the following formula:
H_l = σ(A H_{l-1} W_{l-1})
in the formula: hlHidden state output for the l hidden layer; a is an adjacency matrix composed of a topology structure of edge nodes, Hl-1Hidden states output by the (l-1) th hidden layer, wherein the first hidden state is an initial input X, and the last hidden state is an output Z of the neural network; wl-1The parameters of the neural network corresponding to the l-1 th hidden layer are also parameters that need to be updated by inverse gradient propagation.
S203: for each edge node, calculating the MSE (mean-square error) loss between the predicted load value of the graph neural network model and the actual load value of that edge node in the specified historical time period, so as to optimize the parameters of the graph neural network model.
It should be noted that, in some embodiments, the graph neural network model supports graphs with millions of nodes and millions of edges; for even larger graph computations, when the number of edge nodes is excessive, the scale of the edge node graph may be limited by deleting long-tail edge nodes (those with a small request volume) and setting edge weight thresholds.
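A minimal sketch of such pruning is given below; the request-volume cutoff and the delay threshold are illustrative assumptions:

```python
# Sketch: limit the scale of the edge node graph by deleting long-tail edge nodes
# (small request volume) and removing edges whose weight (network delay) exceeds a threshold.
import numpy as np

def prune_graph(A, request_counts, min_requests=1000, max_delay_ms=50.0):
    keep = request_counts >= min_requests          # drop long-tail edge nodes
    A_pruned = A[np.ix_(keep, keep)].copy()
    A_pruned[A_pruned > max_delay_ms] = 0.0        # apply the edge weight threshold
    return A_pruned, np.flatnonzero(keep)          # pruned adjacency matrix and kept node indices
```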
The edge node load prediction method provided by the embodiment of the invention comprehensively considers the characteristics of the edge node and the characteristics of the adjacent edge nodes, and can still provide a more accurate load prediction result even for a complex topological structure with multiple edge nodes.
Example two:
the present embodiment provides an edge node load prediction apparatus, which may be used to implement the method in the first embodiment, and specifically includes:
a feature acquisition unit, configured to determine the adjacent edge nodes of the current edge node according to the edge node topological relation graph, and to collect the characteristics of the current edge node and the characteristics of the adjacent edge nodes;
a prediction unit, configured to perform weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes with the trained graph neural network model, so as to calculate a predicted load value of the current edge node for a future preset time period.
It should be noted that, for the way in which the feature acquisition unit and the prediction unit implement the corresponding method steps, reference may be made to the first embodiment; details are not repeated here.
Example three:
A terminal comprising a processor and a memory coupled to the processor, wherein the memory stores program instructions for performing the method of embodiment one;
the processor is configured to execute the program instructions stored in the memory to control the execution of the method of embodiment one.
Example four:
A storage medium storing program instructions executable by a processor to perform the method of embodiment one.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (10)
1. An edge node load prediction method, the method comprising:
determining adjacent edge nodes of the current edge node according to the edge node topological relation graph, and collecting the characteristics of the current edge node and the characteristics of the adjacent edge nodes;
and performing weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes by adopting the trained graph neural network model to calculate and obtain a predicted load value of the current edge node in a future preset time period.
2. The edge node load prediction method of claim 1 wherein the characteristics include resource characteristics, historical load values, and task request characteristics of the edge node.
3. The edge node load prediction method of claim 2, wherein the historical load values comprise a yesterday historical load value, a historical average load value over the last week, a historical average load value over the last month, and a historical average load value over the last year.
4. The edge node load prediction method of claim 2, wherein the resource characteristics comprise a number of CPU cores, a total amount of memory, a total amount of bandwidth of the edge node, and a number of servers of the edge node.
5. The edge node load prediction method of claim 2, wherein the task request characteristics comprise yesterday user request total and resource characteristics of user requests.
6. The edge node load prediction method of claim 1, wherein the training method of the graph neural network model comprises:
collecting the characteristics of each edge node in the edge node topological relation graph;
according to the characteristics of each edge node and the characteristics of the adjacent edge nodes of each edge node, learning the relationship between the characteristics of the edge nodes and the load by adopting a forward propagation algorithm, and thus calculating and acquiring a predicted load value of each edge node in a specified historical time period based on a graph neural network model;
and calculating the MSE loss based on the predicted load value of the graph neural network model and the actual load value of the corresponding edge node in a specified historical time period according to each edge node so as to optimize the parameters of the graph neural network model.
7. The edge node load prediction method according to claim 1, wherein the method for constructing the edge node topological relation graph comprises:
according to the structural information and the network delay information among the edge nodes, constructing an undirected graph among the edge nodes belonging to the same region through edge connection, and simultaneously extracting the characteristics of each edge node;
wherein the weight of an edge in an undirected graph is defined as the network delay between edge nodes.
8. The edge node load prediction method according to claim 7, wherein when the number of edge nodes exceeds a predetermined upper limit value, the size of the edge node topological relation graph is limited by deleting long-tailed edge nodes and setting edge weight thresholds.
9. The edge node load prediction method according to claim 1, wherein the weight of the weighted sum is determined by the weight of the edge in the edge node topological relation graph and a coefficient matrix of a graph neural network model of each layer.
10. An edge node load prediction apparatus, the apparatus comprising:
a feature acquisition unit, configured to determine the adjacent edge nodes of the current edge node according to the edge node topological relation graph, and to collect the characteristics of the current edge node and the characteristics of the adjacent edge nodes;
a prediction unit, configured to perform weighted summation on the characteristics of the current edge node and the characteristics of the adjacent edge nodes with the trained graph neural network model, so as to calculate a predicted load value of the current edge node for a future preset time period.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111276213.3A CN114090238A (en) | 2021-10-29 | 2021-10-29 | Edge node load prediction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114090238A (en) | 2022-02-25 |
Family
ID=80298319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111276213.3A (Pending) | Edge node load prediction method and device | 2021-10-29 | 2021-10-29 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114090238A (en) |
- 2021-10-29: application CN202111276213.3A filed in CN, published as CN114090238A; legal status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180157161A1 (en) * | 2016-12-01 | 2018-06-07 | Lam Research Corporation | Design layout pattern proximity correction through fast edge placement error prediction |
CN110445866A (en) * | 2019-08-12 | 2019-11-12 | 南京工业大学 | Task migration and cooperative load balancing method in mobile edge computing environment |
CN111538567A (en) * | 2020-04-26 | 2020-08-14 | 国网江苏省电力有限公司信息通信分公司 | Method and equipment for deploying virtual network function chain on edge equipment |
CN112732442A (en) * | 2021-01-11 | 2021-04-30 | 重庆大学 | Distributed model for edge computing load balancing and solving method thereof |
CN112910710A (en) * | 2021-02-08 | 2021-06-04 | 清华大学 | Network flow space-time prediction method and device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
郭倩彤 (GUO Qiantong): "Research on an Energy-Saving Management Mechanism for Edge Data Centers Based on Load Prediction", China Master's Theses Full-text Database (Electronic Journal), 15 March 2020 (2020-03-15) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |