CN116166341A - Static cloud edge collaborative architecture function calculation unloading method based on deep learning - Google Patents


Info

Publication number
CN116166341A
CN116166341A
Authority
CN
China
Prior art keywords
computing
input data
calculation
sub
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310450930.6A
Other languages
Chinese (zh)
Inventor
李忠博
李少南
谢永强
张凯
齐锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Systems Engineering of PLA Academy of Military Sciences
Original Assignee
Institute of Systems Engineering of PLA Academy of Military Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Systems Engineering of PLA Academy of Military Sciences filed Critical Institute of Systems Engineering of PLA Academy of Military Sciences
Priority to CN202310450930.6A
Publication of CN116166341A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44594 Unloading
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a static cloud edge collaborative architecture function calculation unloading method based on deep learning, and relates to the technical field of data processing. According to the cloud edge cooperative function computing and unloading strategy, cloud edge computing resources are fused, and the execution efficiency of tasks and the resource utilization rate of cloud edge nodes are improved.

Description

Static cloud edge collaborative architecture function calculation unloading method based on deep learning
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a static cloud edge collaborative architecture function calculation unloading method based on deep learning.
Background
With the development of cloud computing, edge computing, artificial intelligence and 5G communication in recent years, the deployment of large numbers of cloud edge infrastructures has enabled intelligent applications such as the Internet of Vehicles, intelligent conferencing and intelligent security, providing nearby computing, storage and other resource support for the application and research of intelligent algorithms on user terminals. Cloud edge resources should therefore be utilized reasonably and efficiently to accelerate the intelligent algorithms of user terminals. Cloud edge collaborative computation offloading is an important and actively researched problem in the fields of cloud computing and edge computing, and plays a significant role in combination with technologies such as distributed computing, blockchain, SDN and 5G communication. Traditional cloud edge collaborative computation offloading methods include heuristic-based offloading methods and offloading methods based on intelligent learning algorithms. Current cloud edge collaborative computation offloading policies still mainly offload whole tasks: the main objective is to obtain better computing performance by transferring a computation task that requires a large amount of computing resources to some node at the cloud edge. For a single task, the cloud edge resources that can be utilized are therefore limited, and the computing advantage of cloud edge collaboration cannot be fully exploited. Moreover, when multiple computing tasks are offloaded, different tasks may contain the same computing processes; in traditional computation offloading methods these identical processes are repeatedly deployed with dedicated resources for each task, which causes a great deal of resource waste.
Disclosure of Invention
In order to solve the technical problems, the invention provides a static cloud edge collaborative architecture function calculation unloading method based on deep learning.
The method comprises the following steps: step S1, dividing a calculation task generated by unmanned aerial vehicle equipment into N sub-calculation tasks according to the function dependency relationship of the calculation task, wherein the sub-calculation tasks are function-based calculation tasks; step S2, determining, from M computing nodes, L computing nodes for computing the N sub-calculation tasks, wherein the M computing nodes comprise a cloud computing center, a plurality of synchronous satellite edge computing nodes and a plurality of ground base station edge computing nodes, and L is less than or equal to N; and step S3, sending the N sub-calculation tasks to the determined L computing nodes so as to execute the calculation process and return the calculation results, the unmanned aerial vehicle integrating the received calculation results.
In the step S1, a control flow graph of the computing task is obtained, and the computing task is divided into N sub-computing tasks based on a function basic block based on a function dependency relationship in the control flow graph.
Wherein in said step S2, said L computing nodes for computing said N sub-computing tasks are determined based on a deep neural network model comprising 1 input layer, 5 convolution layers, 2 fully connected layers and 1 output layer, wherein: the parameters of the input layer are 128×128×8; each of the first 4 convolution layers contains two 3×3 convolution kernels with a convolution step size of 1, and their parameters are 126×126×16, 124×124×24, 122×122×24 and 120×120×24, respectively; the last convolution layer contains one 103×103 convolution kernel, and its parameters are 16×16; the parameters of the fully connected layers are 1×256 and 1×128, respectively; the number of neurons of the output layer is N, with a maximum of 128.
In the step S2, the bandwidth B_ij between any two computing nodes of the M computing nodes, the computing power C_i of each of the M computing nodes, the data amount D_k of the calculation result of each of the N sub-calculation tasks, and the computation amount W_k required for each sub-calculation task are acquired to determine the input data of the deep neural network model, wherein 1≤i≤M, 1≤j≤M, and 1≤k≤N.
Wherein in the step S2, the input data of the deep neural network model includes first input data and second input data, wherein: the data amounts D_k of the calculation results form a first matrix with dimension 1×N, and the bandwidths B_ij between any two computing nodes form a second matrix with dimension M×M; each element in the first matrix is divided by all elements in the second matrix to obtain N matrices with dimension M×M as the first input data. The computation amounts W_k required by each sub-calculation task form a third matrix with dimension 1×N, and the computing powers C_i of the computing nodes form a fourth matrix with dimension 1×M; each element in the third matrix is divided by all elements in the fourth matrix to obtain N matrices with dimension 1×M, which are spliced into a matrix with dimension N×M as a transition matrix; all elements in the transition matrix are then subtracted from each element in the third matrix to obtain M matrices with dimension N×M as the second input data.
In the step S2, the first input data and the second input data are transposed and spliced to form three-dimensional data of M×M×2N as input data of the deep neural network model.
Wherein in said step S2, said deep neural network model is pre-trained such that it can determine matching computational nodes for several sets of training input data.
In summary, the technical scheme provided by the invention realizes the fusion of cloud edge computing resources by researching the cloud edge cooperative function computing and unloading strategy, improves the execution efficiency of tasks, breaks through the traditional cloud edge cooperative computing and unloading mode, and further improves the resource utilization rate of cloud edge nodes.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings which are required in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a static cloud edge collaborative architecture function computation offload method according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention discloses a static cloud edge collaborative architecture function computation offloading method, which studies the computation offloading problem from another perspective than the prior art, namely function computation offloading under static cloud edge collaboration. Based on a serverless computing platform, a computing task is split into a plurality of functions with input-output relations among them; different functions are deployed on different cloud edge nodes, and the final result of the task is obtained through data transmission among the functions.
FIG. 1 is a flowchart of a static cloud edge collaborative architecture function computation offloading method according to an embodiment of the invention. As shown in fig. 1, the method comprises: step S1, dividing a calculation task generated by unmanned aerial vehicle equipment into N sub-calculation tasks according to the function dependency relationship of the calculation task, wherein the sub-calculation tasks are function-based calculation tasks; step S2, determining, from M computing nodes, L computing nodes for computing the N sub-calculation tasks, wherein the M computing nodes comprise a cloud computing center, a plurality of synchronous satellite edge computing nodes and a plurality of ground base station edge computing nodes, and L is less than or equal to N; and step S3, sending the N sub-calculation tasks to the determined L computing nodes so as to execute the calculation process and return the calculation results, the unmanned aerial vehicle integrating the received calculation results.
The deep-learning-based function-level cloud edge collaborative computation offloading method solves the problem of offloading functions under a static cloud edge collaborative architecture; the static cloud edge architecture is shown in the figure. The cloud edge architecture consists of a cloud data center, four edge computing nodes attached to base stations, two edge computing nodes attached to geostationary satellites, and an unmanned aerial vehicle. The geostationary satellites, edge computing nodes and cloud data center provide different computing power and bandwidth capabilities, and the unmanned aerial vehicle generates computing tasks. A task on the unmanned aerial vehicle needs to be split into a plurality of functions, and an optimal node is selected for deploying each function, so as to obtain the shortest task calculation time delay.
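The deployment objective above (choose one node per function so that the end-to-end task delay is shortest) can be sketched with a brute-force search on a toy instance. The latency model below (compute time W_k/C plus transfer time of the result back to a source node, ignoring inter-function transfers) and all variable names are illustrative assumptions, not the patent's exact formulation:

```python
from itertools import product

def task_latency(assign, W, D, C, B, source=0):
    """Simplified end-to-end latency for one assignment of functions
    to nodes: function k runs on node assign[k] and its result of
    size D[k] is sent back toward the source node."""
    total = 0.0
    for k, node in enumerate(assign):
        total += W[k] / C[node]          # compute time of function k
        total += D[k] / B[node][source]  # transfer time of its result
    return total

def best_assignment(W, D, C, B):
    """Brute-force the optimal node per function on a toy instance."""
    M, N = len(C), len(W)
    return min(product(range(M), repeat=N),
               key=lambda a: task_latency(a, W, D, C, B))
```

For realistic M and N the search space grows as M^N, which is why the patent replaces such enumeration with a trained neural network that emits the offloading strategy directly.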
In some embodiments, in the step S1, a control flow graph of the computing task is obtained, and the computing task is divided into N sub-computing tasks based on basic blocks of functions based on functional dependencies in the control flow graph.
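The splitting step above (function basic blocks ordered by the dependencies in the control flow graph) can be sketched with Python's standard `graphlib`; the dependency relation and function names below are hypothetical examples, not taken from the patent:

```python
from graphlib import TopologicalSorter

# Hypothetical function dependency relation extracted from a control
# flow graph: each key consumes the outputs of the functions it maps to.
deps = {
    "preprocess": [],
    "detect":     ["preprocess"],
    "track":      ["preprocess"],
    "fuse":       ["detect", "track"],
}

def split_into_subtasks(deps):
    """Order the function-based sub-tasks so that every function is
    dispatched only after the functions it depends on."""
    return list(TopologicalSorter(deps).static_order())

order = split_into_subtasks(deps)
```

Each name in `order` would then correspond to one of the N function-based sub-calculation tasks dispatched to the selected nodes in step S3.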
In some embodiments, in the step S2, the L computing nodes for computing the N sub-computing tasks are determined based on a deep neural network model comprising 1 input layer, 5 convolution layers, 2 fully connected layers and 1 output layer, wherein: the parameters of the input layer are 128×128×8; each of the first 4 convolution layers contains two 3×3 convolution kernels with a convolution step size of 1, and their parameters are 126×126×16, 124×124×24, 122×122×24 and 120×120×24, respectively; the last convolution layer contains one 103×103 convolution kernel, and its parameters are 16×16; the parameters of the fully connected layers are 1×256 and 1×128, respectively; the number of neurons of the output layer is N, with a maximum of 128.
In some embodiments, in the step S2, the bandwidth B_ij between any two computing nodes of the M computing nodes, the computing power C_i of each of the M computing nodes, the data amount D_k of the calculation result of each of the N sub-calculation tasks, and the computation amount W_k required for each sub-calculation task are acquired to determine the input data of the deep neural network model, wherein 1≤i≤M, 1≤j≤M, and 1≤k≤N.
In some embodiments, in the step S2, the input data of the deep neural network model includes first input data and second input data, wherein: the data amounts D_k of the calculation results form a first matrix with dimension 1×N, and the bandwidths B_ij between any two computing nodes form a second matrix with dimension M×M; each element in the first matrix is divided by all elements in the second matrix to obtain N matrices with dimension M×M as the first input data. The computation amounts W_k required by each sub-calculation task form a third matrix with dimension 1×N, and the computing powers C_i of the computing nodes form a fourth matrix with dimension 1×M; each element in the third matrix is divided by all elements in the fourth matrix to obtain N matrices with dimension 1×M, which are spliced into a matrix with dimension N×M as a transition matrix; all elements in the transition matrix are then subtracted from each element in the third matrix to obtain M matrices with dimension N×M as the second input data.
In some embodiments, in the step S2, the first input data and the second input data are transposed and spliced to form three-dimensional data of M×M×2N as input data of the deep neural network model.
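A minimal NumPy sketch of the two input tensors and the final M×M×2N splice described above. The exact element-wise layout is one plausible reading of the text; only the stated shapes (N matrices of M×M for the first input, M matrices of N×M for the second, a combined M×M×2N tensor) are taken from the source:

```python
import numpy as np

def build_model_input(B, C, D, W):
    """B: (M, M) bandwidths, C: (M,) computing powers,
    D: (N,) result sizes, W: (N,) computation amounts."""
    M, N = len(C), len(D)
    # First input: divide each result size D[k] by the whole bandwidth
    # matrix -> N matrices of dimension M x M.
    first = D[:, None, None] / B[None, :, :]        # (N, M, M)
    # Transition matrix: W[k] / C[j] -> N row vectors of 1 x M,
    # spliced into one N x M matrix.
    T = W[:, None] / C[None, :]                     # (N, M)
    # Second input: differences of the transition matrix taken against
    # each node i as base point -> M matrices of dimension N x M.
    second = T[None, :, :] - T.T[:, :, None]        # (M, N, M)
    # Transpose and splice both into the network's M x M x 2N input.
    x = np.concatenate([np.transpose(first, (1, 2, 0)),
                        np.transpose(second, (0, 2, 1))], axis=-1)
    assert x.shape == (M, M, 2 * N)
    return x
```

With M = 128 and N = 4 this yields exactly the 128×128×8 input layer described for the network below.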
In some embodiments, in the step S2, the deep neural network model is pre-trained so that it can determine matching compute nodes for sets of training input data.
Step S2 mainly comprises an input data processing process and a model construction process. Data processing supplies input data to the built model, and the data it produces must clearly characterize the quantitative parameters of the environment. Model construction builds a deep learning structure suitable for the above scenario, including input, convolution, fully connected and output layers; through this model, the data can be mapped directly to the offloading strategy of the functions. Finally, the model is made to converge through training: a freshly built model has not yet learned anything and cannot generate an offloading strategy, so iterative learning on a certain amount of data is required before the trained model has practical value.
The quantitative parameters include: the number of nodes M, the bandwidth B_ij between nodes, the computing power C_i of each node, the number N of functions obtained by task splitting, the output data size D_k of each function, and the computation amount W_k required to complete each function.
The input data of the neural network comprises two sets of data. The first input data is a three-dimensional M×M×N matrix obtained by dividing each element of the output-data-size matrix D_k of the functions by the bandwidth matrix B_ij between nodes. The second input data is obtained by dividing each element of the required-computation matrix W_k of the functions by the computing-power matrix C_i of the nodes and then, taking the first node as a base point, computing the difference between all nodes and that node, generating three-dimensional M×M×N second input data. Finally, the first input data and the second input data are combined to form the input data of the neural network, which is three-dimensional data of M×M×2N.
The neural network structure mainly comprises 1 input layer, 5 convolution layers, 2 fully connected layers and 1 output layer. The input is 128×128×8, which supports offloading-strategy generation for up to 128 cloud edge nodes and up to 4 task functions. Each of the first four convolution layers uses two 3×3 convolution kernels with a convolution step size of 1, and their outputs are 126×126×16, 124×124×24, 122×122×24 and 120×120×24, respectively. The fifth convolution layer uses a 103×103 kernel with step size 1, so its output is 16×16. The fully connected layers are 1×256 and 1×128, respectively. The last layer is the output layer; its 4 output neurons represent the deployment nodes of the 4 functions, each taking a value in [1, 128].
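The stated feature-map sizes of the first four layers follow directly from stride-1 "valid" convolution arithmetic: each 3×3 kernel shrinks a spatial dimension by 2. A quick check:

```python
def valid_conv_size(n, k, stride=1):
    """Spatial size after a 'valid' (no-padding) convolution."""
    return (n - k) // stride + 1

# Four successive 3x3 stride-1 layers applied to the 128x128 input:
sizes = [128]
for _ in range(4):
    sizes.append(valid_conv_size(sizes[-1], 3))
# sizes now lists 128, 126, 124, 122, 120, matching the stated
# 126x126, 124x124, 122x122 and 120x120 feature maps.
```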
It can be seen that the above method studies the computation offloading problem from another perspective, namely function computation offloading under static cloud edge collaboration. Based on a serverless computing platform, a computing task is split into a plurality of functions with input-output relations among them; different functions are deployed on different cloud edge nodes, and the final result of the task is obtained through data transmission among the functions. The method can quickly generate a function offloading strategy in a static cloud edge environment, obtains performance very close to the theoretical optimum at low complexity, and scales down to different numbers of nodes, terminals and functions.
It should be noted that the technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but any combination that contains no contradiction should be regarded as within the scope of this description. The above examples merely represent a few embodiments of the present application, described in some detail, and are not to be construed as limiting the scope of the invention. Those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, all of which fall within its scope of protection. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (7)

1. The method for calculating and unloading the static cloud edge collaborative architecture function based on the deep learning is characterized by comprising the following steps:
step S1, dividing a calculation task into N sub-calculation tasks according to a function dependency relationship of the calculation task generated by unmanned aerial vehicle equipment, wherein the sub-calculation tasks are function-based calculation tasks;
s2, determining L computing nodes for computing the N sub-computing tasks from M computing nodes, wherein the M computing nodes comprise a cloud computing center, a plurality of synchronous satellite edge computing nodes and a plurality of ground base station edge computing nodes, and L is less than or equal to N;
and step S3, the N sub-calculation tasks are sent to the determined L calculation nodes so as to execute a calculation process and return a calculation result, and the unmanned aerial vehicle integrates the received calculation result.
2. The method for computing and unloading a static cloud edge collaborative architecture function based on deep learning according to claim 1, wherein in the step S1, a control flow graph of the computing task is obtained, and the computing task is divided into N sub-computing tasks based on a basic block of a function based on a function dependency relationship in the control flow graph.
3. The method according to claim 2, wherein in the step S2, the L computing nodes for computing the N sub-computing tasks are determined based on a deep neural network model, the deep neural network model including 1 input layer, 5 convolution layers, 2 full connection layers, and 1 output layer, wherein:
the parameters of the input layer are 128×128×8;
each of the first 4 convolution layers contains two 3×3 convolution kernels with a convolution step size of 1, and their parameters are 126×126×16, 124×124×24, 122×122×24 and 120×120×24, respectively; the last convolution layer contains one 103×103 convolution kernel, and its parameters are 16×16;
the parameters of the full connection are 1×256 and 1×128 respectively;
the number of neurons of the output layer is N, and the maximum value of the number of neurons is 128.
4. The deep learning-based static cloud edge collaborative architecture function computing and offloading method according to claim 3, wherein in the step S2, the bandwidth B_ij between any two computing nodes of the M computing nodes, the computing power C_i of each of the M computing nodes, the data amount D_k of the calculation result of each of the N sub-calculation tasks, and the computation amount W_k required for each sub-calculation task are acquired to determine the input data of the deep neural network model, wherein 1≤i≤M, 1≤j≤M, and 1≤k≤N.
5. The method according to claim 4, wherein in the step S2, the input data of the deep neural network model includes first input data and second input data; wherein:
the data amounts D_k of the calculation results form a first matrix with dimension 1×N, and the bandwidths B_ij between any two computing nodes form a second matrix with dimension M×M; each element in the first matrix is divided by all elements in the second matrix to obtain N matrices with dimension M×M as the first input data;
the computation amounts W_k required by each sub-calculation task form a third matrix with dimension 1×N, and the computing powers C_i of the computing nodes form a fourth matrix with dimension 1×M; each element in the third matrix is divided by all elements in the fourth matrix to obtain N matrices with dimension 1×M, which are spliced into a matrix with dimension N×M as a transition matrix; all elements in the transition matrix are then subtracted from each element in the third matrix to obtain M matrices with dimension N×M as the second input data.
6. The method according to claim 5, wherein in the step S2, the first input data and the second input data are transposed and spliced to form three-dimensional data of M×M×2N as the input data of the deep neural network model.
7. The method according to claim 6, wherein in step S2, the deep neural network model is trained in advance to determine matched computing nodes for several sets of training input data.
CN202310450930.6A 2023-04-25 2023-04-25 Static cloud edge collaborative architecture function calculation unloading method based on deep learning Pending CN116166341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310450930.6A CN116166341A (en) 2023-04-25 2023-04-25 Static cloud edge collaborative architecture function calculation unloading method based on deep learning

Publications (1)

Publication Number Publication Date
CN116166341A true CN116166341A (en) 2023-05-26

Family

ID=86411766

Country Status (1)

Country Link
CN (1) CN116166341A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062677A (en) * 2018-07-10 2018-12-21 中国人民解放军国防科技大学 Unmanned aerial vehicle system calculation migration method
CN111176820A (en) * 2019-12-31 2020-05-19 中科院计算技术研究所大数据研究院 Deep neural network-based edge computing task allocation method and device
CN111600296A (en) * 2020-01-09 2020-08-28 浙江中新电力工程建设有限公司自动化分公司 Power load prediction system based on edge calculation and prediction method thereof
CN112711427A (en) * 2019-10-24 2021-04-27 华为技术有限公司 Method and device for acquiring mirror image file
CN113315669A (en) * 2021-07-28 2021-08-27 江苏电力信息技术有限公司 Cloud edge cooperation-based throughput optimization machine learning inference task deployment method
CN115759237A (en) * 2022-10-21 2023-03-07 国网天津市电力公司 End-to-end deep neural network model compression and heterogeneous conversion system and method
CN115767637A (en) * 2022-12-21 2023-03-07 西北工业大学 Cloud computing network resource optimal allocation method based on opportunistic access


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230526