CN114077482B - Intelligent computing optimization method for industrial intelligent manufacturing edge - Google Patents

Intelligent computing optimization method for industrial intelligent manufacturing edge

Info

Publication number
CN114077482B
CN114077482B (application number CN202010829583.4A)
Authority
CN
China
Prior art keywords
edge
model
exit
state
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010829583.4A
Other languages
Chinese (zh)
Other versions
CN114077482A (en)
Inventor
于诗矛
宋纯贺
徐文想
武婷婷
刘硕
曾鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS
Priority to CN202010829583.4A
Publication of CN114077482A
Application granted
Publication of CN114077482B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources

Abstract

The invention relates to an intelligent computing optimization method for the industrial intelligent manufacturing edge. An industrial intelligent manufacturing edge computing model is designed and divided into an edge resource perception model, an edge resource and task scheduling model, and an intelligent edge computing model. The edge intelligent computing model is based on deep learning: a joint loss function over the exit nodes of the neural network is computed to obtain an autonomously scale-adaptive network model with cloud-edge coordination. In the online optimization stage, for a given network bandwidth, the overall running time on the server and the edge device is calculated, and if it is smaller than the required delay an exit point is selected online. The method reduces the scale of the neural network so that a shallow deep learning model can be deployed on the edge gateway device to pre-judge the device state, while intermediate results are sent to the cloud to complete the final computation.

Description

Intelligent computing optimization method for industrial intelligent manufacturing edge
Technical Field
The invention relates to the field of industrial intelligent manufacturing and edge computing, and in particular to an intelligent computing optimization method for the industrial intelligent manufacturing edge.
Background
With the push of cloud services and the pull of the Internet of Things, the edge of the network is transitioning from a data consumer to both a data producer and a data consumer. Data is increasingly generated at the network edge, so processing it at the edge is more efficient. Micro data centers, cloudlets, and fog computing have been proposed and applied in many fields, because cloud computing is not always efficient for processing data that is generated at the network edge. With the Internet of Things we will enter the post-cloud era: a large amount of data will be generated in daily life, and many applications will be deployed at the edge to consume it. According to the Cisco Global Cloud Index forecast, data generated by people, machines, and things was expected to reach 500 zettabytes in 2019, while global data center IP traffic would reach only 10.4 zettabytes. About 45% of the data created by the Internet of Things will be stored, processed, analyzed, and acted upon near or at the network edge. By 2020, 50 billion things were expected to be connected to the Internet. Some Internet of Things applications require very short response times, some involve private data, and some generate large amounts of data that place a heavy load on the network; cloud computing alone is not efficient enough to support these applications.
Traditional programming models are not well suited to edge computing. Most devices in edge computing are heterogeneous computing platforms, the runtime environment and data differ from device to device, and the resources of edge devices are relatively limited, which makes deploying user applications in an edge computing scenario difficult. A task in edge computing can be migrated to a different edge device for execution; that is, task migration is a necessary condition for processing data on edge devices. The data in the edge computing model is to some extent distributable, so the computing, storage, and communication resources required to process it must also be distributable. Only when the edge computing system has the resources required for data processing and computation can the edge device process the data.
Disclosure of Invention
In order to solve the technical problems, the invention provides an intelligent computing optimization method for industrial intelligent manufacturing edges.
The invention adopts the following technical scheme: an intelligent computing optimization method for industrial intelligent manufacturing edges comprises the following steps:
collecting state data of equipment in industrial production, and transmitting the state data to a server through an edge gateway;
establishing an edge computing model with exit nodes in a server, and deploying the edge computing model into a gateway;
when a computing task arrives, the edge gateway selects an exit node of the edge computing model according to the input, identifies the device state acquired in real time online, and sends the identification result to the server, thereby optimizing the edge computation.
Establishing the edge computing model with exit nodes comprises the following steps:
the device state is pooled to obtain a feature result, and the feature result is input into the neural network as the input variable;
in the training phase, a loss function is attached to each exit, and each exit is determined by the accuracy achievable at its depth; with x as the input variable, the objective function at the nth exit point is

$z_n = f_n(x; \theta)$

where $f_n(x; \theta)$ denotes the output of the neural network from the entry point to the nth exit branch, i.e. the recognition result $z_n$; θ denotes the network parameters, including weights and biases;
the network output is then passed through a softmax activation function to obtain the state discrimination result $\hat{y}$, representing the normal or abnormal classification of the device:

$\hat{y} = \mathrm{softmax}(z) = \frac{\exp(z)}{\sum_{c \in C} \exp(z_c)}$

where C is the set of all possible device states c; the model loss function $L_n$ at the nth exit node is then

$L_n(\hat{y}, y; \theta) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c$

where y is the ground-truth value for the model output, i.e. the true normal or abnormal state of the device; the weighted sum of the exit loss functions is then minimized as the training optimization problem to obtain the trained model.
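As a concrete illustration of the softmax output and the per-exit loss described above, the following small NumPy sketch computes $\hat{y}$ and $L_n$ for one exit; the logit values, the binary normal/abnormal label, and the function names are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def softmax(z):
    """Softmax activation: turns exit logits z into state probabilities y_hat."""
    ez = np.exp(z - np.max(z))        # subtract max for numerical stability
    return ez / ez.sum()

def exit_loss(y_hat, y_onehot):
    """Cross-entropy loss L_n at one exit, averaged over the |C| states
    (the 1/|C| normalization follows the reconstructed loss above)."""
    return -np.sum(y_onehot * np.log(y_hat)) / len(y_onehot)

z_n = np.array([2.0, -1.0])           # logits at the nth exit (illustrative)
y = np.array([1.0, 0.0])              # ground truth: device state "normal"
y_hat = softmax(z_n)                  # state discrimination result
print(y_hat, exit_loss(y_hat, y))
```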
The device state is processed by the max pooling method to obtain a feature result, specifically:

$\hat{e} = \max_{i,j} e_{ij}$

where $e_{ij}$ is the element in the ith row and jth column of the state matrix, the state matrix is obtained from the collected device state, m is the number of device state feature inputs, and $\hat{e}$ is the largest element, taken as the feature result.
The device state is processed by the average pooling method to obtain a feature result, specifically:
the average pool takes the mean of the elements of the state matrix as the feature result:

$\bar{e} = \frac{1}{m} \sum_{i,j} e_{ij}$

where $e_{ij}$ is the element in the ith row and jth column of the state matrix, the state matrix is obtained from the collected device state, m is the number of device state feature inputs, and $\bar{e}$ is the averaged result.
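As an illustration of the two pooling options above, this minimal NumPy sketch computes both feature results from a device state matrix; the matrix contents, shape, and function names are illustrative assumptions, not data from the patent.

```python
import numpy as np

def max_pool_feature(state_matrix: np.ndarray) -> float:
    """Max pooling: take the largest element of the device state matrix."""
    return float(np.max(state_matrix))

def avg_pool_feature(state_matrix: np.ndarray) -> float:
    """Average pooling: take the mean of the state matrix elements,
    which smooths out noisy readings from individual inputs."""
    return float(np.mean(state_matrix))

# Illustrative device state matrix: rows = sampling instants,
# columns = state quantities (e.g. voltage, power, noise).
E = np.array([[220.1, 35.2, 0.8],
              [219.7, 36.0, 1.1],
              [220.4, 34.8, 0.9]])

print(max_pool_feature(E))   # feature from max pooling
print(avg_pool_feature(E))   # feature from average pooling
```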
The joint loss function L is as follows:
$L = \sum_{n=1}^{N} \beta_n L_n$

where N is the total number of model exit points and $\beta_n$ is the weight associated with each exit.
After training, the estimated probability of a sample at an exit-point classifier is measured with entropy, defined as

$e(\hat{y}) = -\sum_{c \in C} \hat{y}_c \log \hat{y}_c$

where the state discrimination result $\hat{y}$ obtained from the activation function contains the computed probabilities of all possible outcomes, and C is the set of all possible device states c. The fast decision algorithm is:
(1) for n = 1...N, where n is the nth exit node, compute the entropy e of $\hat{y}$ at exit n;
(2) if e < $T_n$, return n; otherwise repeat (1) for the next exit;
where x is the input sample, $T_n$ is the threshold that decides whether to exit at the nth exit point, and N is the total number of exit points.
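The fast decision procedure can be sketched in a few lines of Python. The entropy and threshold comparison follow the two steps above; the probability values and thresholds $T_n$ used here are illustrative assumptions.

```python
import math

def entropy(probs):
    """Entropy of a softmax output; low entropy means a confident prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def fast_exit(exit_probs, thresholds):
    """Return the index (1-based) of the first exit whose prediction entropy
    falls below its threshold T_n; fall back to the last exit otherwise.

    exit_probs: list of softmax outputs, one per exit node (shallow to deep).
    thresholds: list of per-exit entropy thresholds T_n (assumed values).
    """
    for n, (probs, t_n) in enumerate(zip(exit_probs, thresholds), start=1):
        if entropy(probs) < t_n:
            return n
    return len(exit_probs)

# Illustrative two-state (normal/abnormal) outputs at three exits.
exit_probs = [[0.55, 0.45], [0.92, 0.08], [0.99, 0.01]]
thresholds = [0.3, 0.3, 0.7]              # assumed thresholds, not from the patent
print(fast_exit(exit_probs, thresholds))  # exits at the second branch
```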
In the online optimization stage, the edge computing model first runs up to an exit on the edge gateway, and the network computation of the remaining nodes is then completed on the server;
at the nth exit node, $ED_n$ is the runtime of the exit node on the edge gateway, $ES_n$ is the runtime on the server, $Z_n$ is the amount of output data of layer n of the model, and Input is the amount of input data of the model; at bandwidth B the overall runtime is estimated as

$A = ED_n + \frac{Z_n}{B} + ES_n$

if A is smaller than the target delay, an exit node within the target delay is directly selected as the exit point n;
if A is greater than or equal to the target delay, the exit point n obtained in the training stage is kept unchanged.
The invention has the following beneficial effects and advantages:
the invention designs an intelligent computing optimization method for industrial intelligent manufacturing edges, wherein an intelligent computing model design is suitable for joint model training and inference strategies based on deep learning of edge testing equipment, a shallow network quick exit is designed, edge server exit nodes are reasonably selected, network complexity is reduced, and cloud edge collaborative data optimization algorithm is established. The intelligent algorithm optimization method based on the computing resources of the edge equipment solves the problems of real-time performance and reliability of an edge intelligent system, reduces energy consumption, network bandwidth requirements and possibility of information leakage.
Drawings
FIG. 1 is a block diagram of the overall architecture of the present invention;
FIG. 2 is a flow chart of the intelligent edge computing algorithm of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to the appended drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The invention may be embodied in many other forms than described herein and similarly modified by those skilled in the art without departing from the spirit or scope of the invention, which is therefore not limited to the specific embodiments disclosed below.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In industrial production, various edge devices (such as robotic arms, cameras, and sensors) generate large amounts of data. The conventional cloud computing approach suffers from high network latency, and transmitting and storing this volume of data also imposes a heavy computing load. The invention designs a method that performs intelligent computation at the device edge; its framework is shown in FIG. 1. The edge perception layer collects state data of equipment in industrial production and transmits it to the edge gateway, and an edge intelligent computing model deployed in the edge intelligent gateway realizes intelligent perception and intelligent decision-making. The edge task scheduling model mainly plans task scheduling on the edge server side and offloads computing tasks from the edge-side devices to the edge server.
The edge task scheduling model can be realized by the following modes:
collecting state data and equipment information of equipment in industrial production, and transmitting the state data and the equipment information to an edge gateway;
the edge server acquires the device information through data transmission with the edge gateway, and plans task scheduling according to the device information and the data transmission state to obtain the task amount offloaded to each edge gateway by all terminal devices;
and distributing tasks to the terminal equipment according to the task quantity of each edge gateway.
The device information includes: the number of CPU cycles required by the terminal device to compute one bit of the offloaded task, and the effective capacitance coefficient of the terminal device.
The data transmission state includes: the transmission rate of the terminal device to the edge gateway, the bandwidth of the device to edge gateway transmission system, the transmission power of the terminal device to the edge gateway, the channel gain, the average noise power of the terminal device to the edge gateway.
The task scheduling planning method comprises the following steps:
the total calculation task amount of the terminal equipment is expressed as L, the task amount of the terminal equipment for local calculation is expressed by L, and O is used i Indicating the task quantity allocated to the ith edge gateway, wherein N is the total number of the edge gateways; thus, the amount of computational tasks satisfies the following constraints:
when the terminal equipment executes the local calculation task l in the whole time block T, C is used for representing the CPU period number required by the terminal equipment for calculating the unit bit unloading task, so that the energy consumption E locally calculated by the terminal equipment is as follows:
wherein k represents an effective capacitance coefficient, l represents a task amount of the terminal equipment for local calculation depending on a CPU structure of the terminal equipment, and T is local calculation time;
when the terminal device chooses to offload tasks to the edge gateway, the terminal device to edge gateway transmission rate r i The method comprises the following steps:
wherein i is the ith edge gateway, N is the total number of edge gateways, B is the bandwidth of the device-to-edge gateway transmission system, and P i Representing the transmission power from the terminal equipment to the edge gateway; h i =di -k Represents the channel gain, d i Representing the distance from the terminal equipment to the edge gateway, k is a fading factor, N i Representing the average noise power of the terminal device to the edge gateway;
task amount O for offloading terminal equipment to edge gateway i Expressed as:
τ i the time slot of the terminal equipment is processed for the ith edge gateway, and N is the total number of the edge gateways.
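The scheduling quantities above can be illustrated with a short Python sketch. The closed-form expressions used here (the cubic local-energy model, the Shannon-type rate, and $O_i = r_i \tau_i$) follow the reconstructions given above, and every numeric value is an illustrative assumption rather than data from the patent.

```python
import math

def local_energy(k, C, l, T):
    """Energy for computing l bits locally in time block T, with C CPU cycles
    per bit and effective capacitance coefficient k: E = k * (C*l)**3 / T**2.
    (Standard dynamic-power model; assumed to match the patent's formula.)"""
    return k * (C * l) ** 3 / T ** 2

def uplink_rate(B, P_i, d_i, kappa, N_i):
    """Shannon-type rate from the terminal device to the i-th edge gateway,
    with channel gain h_i = d_i**(-kappa); the exact rate formula is assumed."""
    h_i = d_i ** (-kappa)
    return B * math.log2(1.0 + P_i * h_i / N_i)

def offloaded_amount(r_i, tau_i):
    """Task amount offloaded in the gateway's time slot: O_i = r_i * tau_i."""
    return r_i * tau_i

# Illustrative numbers only (not taken from the patent).
E = local_energy(k=1e-28, C=1000, l=2e6, T=0.5)                  # joules
r = uplink_rate(B=10e6, P_i=0.5, d_i=20.0, kappa=3.0, N_i=1e-9)  # bit/s
O = offloaded_amount(r, tau_i=0.1)                               # bits
print(E, r, O)
```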
The edge intelligent computing optimization model targets computation-intensive intelligent algorithms, such as deep learning models, that are difficult to distribute and optimize, and adapts them to the limited computing resources on the edge side. A method for assessing the computing resources of edge devices is established, and on this basis an intelligent algorithm optimization method based on edge device computing resources is designed, which addresses the real-time performance and reliability of the edge intelligent system and reduces energy consumption, network bandwidth requirements, and the possibility of information leakage. This property can be used for inference and decision-making over cross-regional data.
The model is based on a convolutional neural network. A neural network with exit nodes is obtained through the designed training method so that a reduced-scale network can run on the edge gateway. During network training, device state operating data is used as the input, and different pooling techniques are used to obtain state features. The max pooling method takes the maximum element of the matrix as the feature result:

$\hat{e} = \max_{i,j} e_{ij}$

where m is the number of device state feature inputs, $e_{ij}$ is the element in the ith row and jth column of the state matrix, the state matrix is an aggregated representation of the state quantities obtained by elementary pooling of the device state inputs, such as voltage, power, and noise, and $\hat{e}$ is the largest element.
The average pooling method takes the mean of the elements of the state matrix as the feature result:

$\bar{e} = \frac{1}{m} \sum_{i,j} e_{ij}$

where m is the number of device state feature inputs and $e_{ij}$ is the element in the ith row and jth column of the state matrix, with the same meaning as above. $\bar{e}$ is the averaged result; average pooling can reduce the noise contributed by certain terminal device inputs. $\hat{e}$ and $\bar{e}$ are two alternative feature results, i.e. two methods that can be chosen in practice, and the chosen feature result is then fed into the network to obtain the final training result.
During the training phase, a loss function is attached to each exit so that the whole neural network can be trained jointly, and each exit is determined by the accuracy achievable at its depth. With x the model input variable, i.e. the pooled state feature result, the objective function designed at each exit point can be written as

$z_n = f_n(x; \theta)$

where $f_n(x; \theta)$ denotes the output of the neural network from the entry point to the nth exit branch, i.e. the recognition result $z_n$, and θ denotes the network parameters such as weights and biases. The network output is then passed through a softmax activation function to obtain the state discrimination result

$\hat{y} = \mathrm{softmax}(z) = \frac{\exp(z)}{\sum_{c \in C} \exp(z_c)}$

where C is the set of all possible states, after which the model loss function $L_n$ at the nth exit node can be calculated as

$L_n(\hat{y}, y; \theta) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c$

where y is the ground-truth value for the model output. The weighted sum of the exit loss functions is then minimized as the training optimization problem, with the joint loss function

$L = \sum_{n=1}^{N} \beta_n L_n$

where N is the total number of exit points and $\beta_n$ is the weight associated with each exit.
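A minimal PyTorch sketch of joint training with per-exit losses is given below. The toy two-exit fully connected architecture, layer sizes, synthetic data, and exit weights $\beta_n$ are illustrative assumptions and do not reproduce the patent's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchyStateNet(nn.Module):
    """Tiny early-exit network over pooled device-state features.
    The layer sizes and two-exit layout are illustrative assumptions."""
    def __init__(self, in_dim=8, n_states=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.exit1 = nn.Linear(32, n_states)          # shallow exit (edge gateway)
        self.block2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.exit2 = nn.Linear(64, n_states)          # final exit (server)

    def forward(self, x):
        h1 = self.block1(x)
        z1 = self.exit1(h1)
        h2 = self.block2(h1)
        z2 = self.exit2(h2)
        return [z1, z2]                               # logits z_n at each exit

def joint_loss(exit_logits, y, betas):
    """L = sum_n beta_n * L_n, with cross-entropy at each exit."""
    return sum(b * F.cross_entropy(z, y) for b, z in zip(betas, exit_logits))

model = BranchyStateNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 8)                 # pooled state features (synthetic)
y = torch.randint(0, 2, (16,))         # normal / abnormal labels (synthetic)
for _ in range(10):                    # brief training loop
    opt.zero_grad()
    loss = joint_loss(model(x), y, betas=[0.5, 1.0])  # assumed exit weights
    loss.backward()
    opt.step()
```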
After training, this module can classify samples at shallow layers of the network for fast inference. If the classifier at a branch exit labels the test sample correctly with high confidence, the sample exits early and the prediction is returned. Entropy is used to measure the prediction confidence of a sample at an exit-point classifier and is defined as

$e(\hat{y}) = -\sum_{c \in C} \hat{y}_c \log \hat{y}_c$

where the state discrimination result $\hat{y}$ obtained from the activation function contains the computed probabilities of all possible outcomes, and C is the set of all possible outcomes. The fast decision algorithm is:
(1) for n = 1...N, where n is the nth exit node, compute the entropy e of $\hat{y}$ at exit n;
(2) if e < $T_n$, return n; otherwise repeat (1) for the next exit.
Here x is the input sample, $T_n$ is the threshold that decides whether to exit at the nth exit point, and N is the total number of exit points.
In the online optimization stage, the edge intelligent computing module receives the delay requirement from the mobile device and then searches for the optimal exit point of the trained model. The model first runs up to an exit on the edge node, after which the network computation of the remaining nodes is completed on the server. At the nth exit node, $ED_n$ is the runtime of each node on the device, $ES_n$ is the runtime on the server, $Z_n$ is the amount of output data of the nth layer, and Input is the amount of input data of the model. For a given bandwidth B, the overall runtime is estimated as

$A = ED_n + \frac{Z_n}{B} + ES_n$

If A is less than the target delay, the exit point n is selected accordingly. If A is greater than or equal to the target delay, the exit point n obtained in the training stage is kept unchanged.
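The online exit-point selection can be sketched as follows, using the runtime estimate described above; the per-exit timings, output sizes, and bandwidth are illustrative assumptions.

```python
def select_exit_point(ED, ES, Z, B, target_delay, trained_exit):
    """Pick the earliest exit whose estimated end-to-end runtime
    A_n = ED_n + Z_n / B + ES_n meets the target delay; otherwise keep the
    exit point obtained during training."""
    for n in range(len(ED)):
        A = ED[n] + Z[n] / B + ES[n]
        if A < target_delay:
            return n + 1                      # 1-based exit index
    return trained_exit

# Illustrative per-exit gateway/server runtimes (s), intermediate output
# sizes (bits), and link bandwidth (bit/s); not values from the patent.
ED = [0.010, 0.018, 0.030]
ES = [0.020, 0.012, 0.000]
Z  = [4.0e5, 1.5e5, 2.0e3]
print(select_exit_point(ED, ES, Z, B=1.0e7, target_delay=0.06, trained_exit=3))
```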
The optimized neural network model is deployed in the edge intelligent device (gateway) and in the server respectively. When an intelligent computing task arises, the edge gateway selects a suitable exit node according to the input, runs the shallow deep learning model to identify the device state, and sends the intermediate result to the server side to complete the final computation.
The above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, which is intended to be covered by the claims.

Claims (6)

1. The intelligent computing and optimizing method for the industrial intelligent manufacturing edge is characterized by comprising the following steps of:
collecting state data of equipment in industrial production, and transmitting the state data to a server through an edge gateway;
establishing an edge computing model with exit nodes in a server, and deploying the edge computing model into a gateway;
the establishing the edge computing model with the exit node comprises the following steps:
the equipment state is subjected to pooling to obtain a characteristic result, and the characteristic result is used as an input variable to be input into a neural network;
in the training phase, combining a loss function with each outlet, wherein each outlet is determined by the accuracy of the depth; x is the input variable, and an objective function is designed at each exit point as follows:
is a function representing the output of the neural network from the entry point to the nth exit branch, i.e., z, as a recognition result; θ represents a network parameter, including weight and bias;
then the network output result is used for obtaining a state discrimination result for representing normal or abnormal classification of the equipment through a softmax activation function
C is the set of all possible device states C, the calculated model loss function L at the nth exit node n The following are provided:
y is a true value output by the model, namely a true normal or abnormal state of the equipment; then training the weighted sum minimization of each outlet loss function as an optimization problem to obtain a trained model;
when a computing task exists, the edge gateway selects an exit node through the input of an edge computing model, performs online identification on the equipment state acquired in real time, and sends an identification result to a server to realize the optimization of edge computing.
2. The intelligent computing and optimizing method for industrial intelligent manufacturing edges according to claim 1, wherein the device state is processed by the max pooling method to obtain a feature result, specifically:

$\hat{e} = \max_{i,j} e_{ij}$

where $e_{ij}$ is the element in the ith row and jth column of the state matrix, the state matrix is obtained from the collected device state, m is the number of device state feature inputs, and $\hat{e}$ is the largest element.
3. The intelligent computing and optimizing method for industrial intelligent manufacturing edges according to claim 1, wherein the device state is processed by the average pooling method to obtain a feature result, specifically:
the average pool takes the mean of the elements of the state matrix as the feature result:

$\bar{e} = \frac{1}{m} \sum_{i,j} e_{ij}$

where $e_{ij}$ is the element in the ith row and jth column of the state matrix, the state matrix is obtained from the collected device state, m is the number of device state feature inputs, and $\bar{e}$ is the averaged result.
4. The intelligent computing and optimizing method for industrial intelligent manufacturing edges according to claim 1, wherein the joint loss function L is as follows:
$L = \sum_{n=1}^{N} \beta_n L_n$

where N is the total number of model exit points and $\beta_n$ is the weight associated with each exit.
5. The intelligent computing and optimizing method for industrial intelligent manufacturing edges according to claim 1, wherein after training, the estimated probability of the sample at the exit-point classifier is measured with entropy, defined as

$e(\hat{y}) = -\sum_{c \in C} \hat{y}_c \log \hat{y}_c$

where the state discrimination result $\hat{y}$ obtained from the activation function contains the computed probabilities of all possible outcomes, and C is the set of all possible device states c; the fast decision algorithm is:
(1) for n = 1...N, where n is the nth exit node, compute the entropy e of $\hat{y}$ at exit n;
(2) if e < $T_n$, return n; otherwise repeat (1);
where x is the input sample, $T_n$ is the threshold that decides whether to exit at the nth exit point, and N is the total number of exit points.
6. The method for intelligent computing optimization of industrial intelligent manufacturing edge according to claim 1, wherein in the online optimization stage, an edge computing model is first exited on an edge gateway, and then network computation of the remaining nodes is completed on a server;
at the nth exit node, $ED_n$ is the runtime of the exit node on the edge gateway, $ES_n$ is the runtime on the server, $Z_n$ is the amount of output data of layer n of the model, and Input is the amount of input data of the model; at bandwidth B the overall runtime is estimated as

$A = ED_n + \frac{Z_n}{B} + ES_n$

if A is smaller than the target delay, an exit node within the target delay is directly selected as the exit point n;
if A is greater than or equal to the target delay, the exit point n obtained in the training stage is kept unchanged.
CN202010829583.4A 2020-08-18 2020-08-18 Intelligent computing optimization method for industrial intelligent manufacturing edge Active CN114077482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010829583.4A CN114077482B (en) 2020-08-18 2020-08-18 Intelligent computing optimization method for industrial intelligent manufacturing edge

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010829583.4A CN114077482B (en) 2020-08-18 2020-08-18 Intelligent computing optimization method for industrial intelligent manufacturing edge

Publications (2)

Publication Number Publication Date
CN114077482A CN114077482A (en) 2022-02-22
CN114077482B (en) 2024-04-16

Family

ID=80281280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010829583.4A Active CN114077482B (en) 2020-08-18 2020-08-18 Intelligent computing optimization method for industrial intelligent manufacturing edge

Country Status (1)

Country Link
CN (1) CN114077482B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115983390B (en) * 2022-12-02 2023-09-26 上海科技大学 Edge intelligent reasoning method and system based on multi-antenna aerial calculation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961132A (en) * 2017-12-22 2019-07-02 英特尔公司 System and method for learning the structure of depth convolutional neural networks
CN110347500A (en) * 2019-06-18 2019-10-18 东南大学 For the task discharging method towards deep learning application in edge calculations environment
CN110995858A (en) * 2019-12-17 2020-04-10 大连理工大学 Edge network request scheduling decision method based on deep Q network
CN111149141A (en) * 2017-09-04 2020-05-12 Nng软件开发和商业有限责任公司 Method and apparatus for collecting and using sensor data from a vehicle
CN111445026A (en) * 2020-03-16 2020-07-24 东南大学 Deep neural network multi-path reasoning acceleration method for edge intelligent application

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11734568B2 (en) * 2018-02-14 2023-08-22 Google Llc Systems and methods for modification of neural networks based on estimated edge utility
JP7174243B2 (en) * 2018-12-21 2022-11-17 富士通株式会社 Information processing device, neural network program, neural network processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111149141A (en) * 2017-09-04 2020-05-12 Nng软件开发和商业有限责任公司 Method and apparatus for collecting and using sensor data from a vehicle
CN109961132A (en) * 2017-12-22 2019-07-02 英特尔公司 System and method for learning the structure of depth convolutional neural networks
CN110347500A (en) * 2019-06-18 2019-10-18 东南大学 For the task discharging method towards deep learning application in edge calculations environment
CN110995858A (en) * 2019-12-17 2020-04-10 大连理工大学 Edge network request scheduling decision method based on deep Q network
CN111445026A (en) * 2020-03-16 2020-07-24 东南大学 Deep neural network multi-path reasoning acceleration method for edge intelligent application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shrinkage prediction and process parameter optimization for selective laser sintering; 贺可太; 刘硕; 陈哲涵; 杨智; Polymer Materials Science and Engineering; 2018-07-06 (Issue 06); full text *

Also Published As

Publication number Publication date
CN114077482A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
Yu et al. Intelligent edge: Leveraging deep imitation learning for mobile edge computation offloading
CN110851782A (en) Network flow prediction method based on lightweight spatiotemporal deep learning model
CN112835715B (en) Method and device for determining task unloading strategy of unmanned aerial vehicle based on reinforcement learning
Callegaro et al. Optimal edge computing for infrastructure-assisted UAV systems
CN110601777B (en) Method for estimating satellite-ground downlink co-channel interference under low-orbit mobile satellite constellation
Elsherbiny et al. 4G LTE network throughput modelling and prediction
CN110213784B (en) Flow prediction method and device
CN112995343B (en) Edge node calculation unloading method with performance and demand matching capability
CN111343650A (en) Urban scale wireless service flow prediction method based on cross-domain data and loss resistance
CN114385272B (en) Ocean task oriented online adaptive computing unloading method and system
CN114077482B (en) Intelligent computing optimization method for industrial intelligent manufacturing edge
Zhang et al. Latency prediction for delay-sensitive v2x applications in mobile cloud/edge computing systems
CN109375999A (en) A kind of MEC Random Task moving method based on Bayesian network
CN114936708A (en) Fault diagnosis optimization method based on edge cloud collaborative task unloading and electronic equipment
CN113961204A (en) Vehicle networking computing unloading method and system based on multi-target reinforcement learning
CN117436485A (en) Multi-exit point end-edge-cloud cooperative system and method based on trade-off time delay and precision
Al-Tahmeesschi et al. Applying deep neural networks for duty cycle estimation
Shimonishi et al. Energy optimization of distributed video processing system using genetic algorithm with bayesian attractor model
Crispim et al. Prediction of the solar radiation evolution using computational intelligence techniques and cloudiness indices
CN117062025A (en) Energy-saving combined computing unloading and resource allocation method for Internet of vehicles
Phung et al. A prediction based autoscaling in serverless computing
Shaodong et al. Multi-step reinforcement learning-based offloading for vehicle edge computing
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
Zhang et al. An Adaptive Resource Allocation Approach Based on User Demand Forecasting for E-Healthcare Systems
Sun et al. Semantic-driven computation offloading and resource allocation for UAV-assisted monitoring system in vehicular networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant