CN112685176A - Resource-constrained edge computing method for improving DDNN (distributed neural network) - Google Patents


Info

Publication number
CN112685176A
CN112685176A
Authority
CN
China
Prior art keywords
neural network
edge
network
layer
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011559400.8A
Other languages
Chinese (zh)
Inventor
杨力平
孙思思
辛锐
张鹏飞
王新颖
韩桂楠
王兆辉
康之曾
刘云龙
刘明硕
王俊卿
尹晓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
North China Electric Power University
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
North China Electric Power University
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, North China Electric Power University, Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202011559400.8A priority Critical patent/CN112685176A/en
Publication of CN112685176A publication Critical patent/CN112685176A/en
Pending legal-status Critical Current

Abstract

The invention discloses a resource-constrained edge computing method based on an improved DDNN, comprising the following steps: the terminal device collects raw data and evaluates it with the device-layer sub-neural network, obtaining feature data and an evaluation result; an entropy value is computed from the evaluation result, and if the entropy is below a threshold the evaluation result is returned, otherwise the feature data are uploaded to the edge device. After the edge device receives the feature data, it evaluates them with the edge-layer sub-neural network, obtaining new feature data and a new evaluation result; an entropy value is then computed from the new evaluation result, and if the entropy is below a threshold the evaluation result is returned, otherwise the new feature data are uploaded to the cloud. After the cloud receives the new feature data, it evaluates them with the cloud-layer sub-neural network, obtains the evaluation result, and execution ends. The method allows the shallow part of the neural network to perform fast, localized inference on the edge and terminal devices, and has the advantages of low communication cost, high accuracy, and strong security.

Description

Resource-constrained edge computing method for improving DDNN (distributed neural network)
Technical Field
The invention relates to the technical field of edge computing, in particular to a resource-constrained edge computing method for improving DDNN.
Background
Edge computing refers to an open platform near the object or data source, at the edge of the network, that integrates networking, computing, storage, and application core capabilities. The edge computing model offloads part of the workload of a cloud computing center to edge devices, and offers advantages such as handling large, timely, and diverse data with low latency and high efficiency. Deep learning is a product of machine learning at a certain stage of its development; in recent years, deep learning has attracted wide attention across society because it is no longer confined to academia and has made major breakthroughs and found wide application in industry, particularly in power systems. For deep learning applications at the edge, because the computing resources of mobile terminals are relatively limited while deep learning models are complex and computationally heavy, an end-cloud architecture has been proposed for model training and inference: the end (edge device) provides the model input and the cloud (remote data center) performs the computation. However, as problem complexity grows, this architecture finds it increasingly difficult to meet users' requirements for real-time analysis.
In 2017, Teerapittayanon et al., building on their earlier deep network structure BranchyNet, proposed the DDNN (Distributed Deep Neural Network) model. The edge-computing-oriented DDNN maps portions of a single DNN onto distributed heterogeneous devices, including terminal devices, the edge, and the cloud. A classifier is placed at each of the device-side, edge-side, and cloud-side exits of the DDNN, and different layers of the DDNN are mapped onto the heterogeneous devices to realize 'cloud-edge-end' cooperative computing. This addresses the limited computing capability of edge devices: the DDNN allows deep neural network inference to be performed in the cloud, while fast, localized inference can be performed on the edge and terminal devices using the shallow part of the neural network.
Disclosure of Invention
The invention aims to provide a resource-constrained edge computing method based on an improved DDNN, which performs resource-constrained edge computing with an improved DDNN model, allows the edge and terminal devices to use the shallow part of the neural network for fast, localized inference, addresses the limited computing capability of edge devices, simplifies the computing process while preserving the accuracy of the results, and has the advantages of low communication cost, high accuracy, and strong security.
In order to achieve the purpose, the invention provides the following scheme:
A resource-constrained edge computing method based on an improved DDNN (Distributed Deep Neural Network) is used for resource-constrained edge computing, wherein the improved DDNN comprises a device-layer sub-neural network, an edge-layer sub-neural network, and a cloud-layer sub-neural network, and the method comprises the following steps:
s1, the terminal equipment collects the original data, and the equipment layer sub-neural network is used for evaluation to obtain characteristic data and an evaluation result;
s2, the terminal device obtains an entropy value according to the evaluation result, if the entropy value is lower than the threshold value, the execution is finished, the evaluation result is returned, otherwise the characteristic data are uploaded to the edge device;
s3, after the edge device receives the feature data, the edge device uses the edge layer sub-neural network to evaluate, and new feature data and an evaluation result are obtained;
s4, the edge device obtains an entropy value according to the new evaluation result, if the entropy value is lower than a threshold value, the execution is finished, the evaluation result is returned, and otherwise, new characteristic data are uploaded to the cloud end;
and S5, after the cloud receives the new characteristic data, evaluating by using the sub-neural network of the cloud layer to obtain an evaluation result, and finishing execution.
Furthermore, the device layer sub-neural network, the edge layer sub-neural network and the cloud layer sub-neural network are provided with network outlets, each network outlet is provided with a classifier, and different layers are mapped to each heterogeneous device to achieve cloud-edge-end cooperative computing.
The sample information entropy of an exit point is defined as:

$$\eta(x) = -\sum_{c \in C} x_c \log x_c$$

where $x$ denotes the probability vector predicted for the input sample and $C$ denotes the set of labels.
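As an illustrative sketch (not part of the patent text), the exit-point entropy can be computed directly from a classifier's probability vector; the function name `exit_entropy` is our own:

```python
import math

def exit_entropy(probs):
    """Sample information entropy of an exit point:
    eta(x) = -sum_{c in C} x_c * log(x_c),
    where probs is the exit classifier's probability vector over the labels C."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)
```

A confident vector such as `[0.97, 0.01, 0.01, 0.01]` yields a much lower entropy than the uniform `[0.25, 0.25, 0.25, 0.25]`, whose entropy is `log 4`, so comparing the entropy against a threshold decides whether to exit early.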
Further, in the improved DDNN model, BNNs (binary neural networks) and eBNNs (embedded binary neural networks) are used to accommodate the terminal devices, which are trained jointly with the neural networks in the edge devices and the cloud; the network exit is placed at the physical boundary, i.e., between the last neural network layer of the terminal device and the first neural network layer of the next higher level.
Further, the aggregation method of the improved DDNN model is as follows:
maximum pooling: aggregating the input vectors by obtaining a maximum value for each component;
$$\hat{v}_j = \max_{1 \le i \le n} v_{ij}$$
average pooling: aggregating the input vectors by taking an average of each component;
$$\hat{v}_j = \frac{1}{n} \sum_{i=1}^{n} v_{ij}$$
where $n$ denotes the number of input vectors and $v_{ij}$ denotes the $j$th component of the $i$th input vector.
Concatenation: the input vectors are concatenated together, retaining all information useful to the cloud, which expands the dimension of the output vector; an additional linear layer is added to map the output vector back to the same dimension as the input vectors.
Further, the training method of the improved DDNN model is as follows:
During training, the loss function values at the network exits of the device-layer, edge-layer, and cloud-layer sub-neural networks are combined during backpropagation so that the whole network is trained jointly. DDNN training is thus formulated as a joint optimization problem that minimizes the weighted sum of the loss functions of the network exits, using the softmax cross-entropy loss as the optimization target, wherein:
The loss function is:

$$L(\hat{y}, y) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c$$

where $y$ denotes the true (one-hot) label of the sample, $\hat{y}$ denotes the estimate of the sample, $C$ denotes the set of labels, $y_c$ denotes the true-label component for class $c$ in $C$, and $\hat{y}_c$ denotes the estimated value for class $c$ in $C$.
The prediction output vector is computed by the softmax function:

$$\hat{y}_c = \frac{\exp(z_c)}{\sum_{c' \in C} \exp(z_{c'})}$$

where $z$ denotes the final output of the network layer and $z_c$ denotes the final output of the network layer for class $c$ in $C$.
The final output of the network layer is:

$$z_n = f_{\mathrm{exit}_n}(x; \theta)$$

where $x$ denotes the input sample, $\theta$ denotes the parameters of the network along the path (such as weights and biases), and $f_{\mathrm{exit}_n}$ denotes the computation performed on a sample from the input of the neural network to the $n$th exit.
The optimization objective (loss function) of the whole network is:

$$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_n, y; \theta)$$

where $y$ denotes the true label of the sample, $\hat{y}$ denotes the estimate of the sample, $\theta$ denotes the network parameters (such as weights and biases), $N$ denotes the number of classification exits, $w_n$ denotes the weight of each exit, and $\hat{y}_n$ denotes the estimate at the $n$th exit.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. The resource-constrained edge computing method of the improved DDNN applies the improved DDNN model to perform resource-constrained edge computing, combining the small neural network model (few parameters) on the terminal device with the large neural network model (many parameters) in the cloud. The small model rapidly performs initial feature extraction and, if its output is trustworthy, completes the classification task directly; otherwise the data processed by the terminal are transmitted to the large cloud model for subsequent feature processing and other operations. Because the data transmitted to the cloud have already been processed by the terminal device, the risk of sensitive-information leakage is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a resource constrained edge calculation method for improving DDNN according to the present invention;
fig. 2 is an architecture of the improved DDNN of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a resource-constrained edge computing method of an improved DDNN, which is used for carrying out resource-constrained edge computing based on an improved DDNN model, allows the edge and terminal equipment to use a shallow part of a neural network for fast and localized reasoning, and has the advantages of low communication cost, high precision and strong safety.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the resource-constrained edge computing method for an improved DDNN provided by the present invention performs resource-constrained edge computing by applying an improved DDNN model, where the improved DDNN model includes a device layer sub-neural network, an edge layer sub-neural network, and a cloud layer sub-neural network, and the method includes the following steps:
s1, the terminal equipment collects the original data, and the equipment layer sub-neural network is used for evaluation to obtain characteristic data and an evaluation result;
s2, the terminal device obtains an entropy value according to the evaluation result, if the entropy value is lower than the threshold value, the execution is finished, the evaluation result is returned, otherwise the characteristic data are uploaded to the edge device;
s3, after the edge device receives the feature data, the edge device uses the edge layer sub-neural network to evaluate, and new feature data and an evaluation result are obtained;
s4, the edge device obtains an entropy value according to the new evaluation result, if the entropy value is lower than a threshold value, the execution is finished, the evaluation result is returned, and otherwise, new characteristic data are uploaded to the cloud end;
and S5, after the cloud receives the new characteristic data, evaluating by using the sub-neural network of the cloud layer to obtain an evaluation result, and finishing execution.
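The S1–S5 flow can be sketched as a small cascade (our own illustration, not part of the patent; `device_net`, `edge_net`, and `cloud_net` are placeholders for the three sub-neural networks, each returning a `(features, probabilities)` pair):

```python
import math

def entropy(probs):
    # Information entropy used as the exit confidence measure.
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def ddnn_infer(raw, device_net, edge_net, cloud_net, threshold):
    """Cascade inference: accept a stage's result when its entropy is
    below the threshold, otherwise forward the features upward."""
    feats, probs = device_net(raw)       # S1: device-layer evaluation
    if entropy(probs) < threshold:       # S2: confident -> answer locally
        return "device", probs
    feats, probs = edge_net(feats)       # S3: edge-layer evaluation
    if entropy(probs) < threshold:       # S4: confident -> answer at the edge
        return "edge", probs
    _, probs = cloud_net(feats)          # S5: the cloud always answers
    return "cloud", probs
```

With an uncertain device-layer output and a confident edge-layer output, the call exits at the edge and never contacts the cloud.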
The architecture of the improved DDNN is shown in fig. 2, wherein the device layer sub-neural network, the edge layer sub-neural network, and the cloud layer sub-neural network are all provided with network outlets, each network outlet is provided with a classifier, and different layers are mapped to each heterogeneous device to realize 'cloud-edge-end' cooperative computing.
In the improved DDNN model, BNNs (binary neural networks) and eBNNs (embedded binary neural networks) are used to accommodate the terminal devices, which are trained jointly with the neural networks in the edge devices and the cloud; the network exit is placed at the physical boundary, i.e., between the last neural network layer of the terminal device and the first neural network layer of the next higher level. Samples that can already be classified exit locally, achieving lower response latency and saving the communication to the next physical boundary.
The multi-level network exits and classifiers address the limited computing capability of edge devices: deep neural network inference can be performed in the cloud, while fast, localized inference can be performed on the edge and terminal devices using the shallow part of the neural network. The exit classifiers, similar to a cascade classifier, are cascaded and adaptive. A multi-exit neural network has adaptive learning capability: it can automatically decide which samples need more layers of inference and which can be judged quickly. This adaptivity helps reduce inference time and the consumption of CPU computing resources. The adaptive mechanism of the DDNN means the forward-propagation structure of the network is not fixed and can be used to construct the neural network model dynamically, much as a decision tree selects the most informative features using information-theoretic methods: given enough samples to train the DDNN, the more information is obtained, the clearer it becomes at which DNN exit a given input sample should be output.
The aggregation method of the improved DDNN model is as follows:
maximum pooling: aggregating the input vectors by obtaining a maximum value for each component;
$$\hat{v}_j = \max_{1 \le i \le n} v_{ij}$$
average pooling: aggregating the input vectors by taking an average of each component;
$$\hat{v}_j = \frac{1}{n} \sum_{i=1}^{n} v_{ij}$$
where $n$ denotes the number of input vectors and $v_{ij}$ denotes the $j$th component of the $i$th input vector.
Connecting: connecting the input vectors together, and keeping all information useful for the cloud, which expands the dimension of the output vector; an additional linear layer is added to map the output vector back to the same dimension as the input vector.
The training method of the improved DDNN model is as follows:
During training, the loss function values at the network exits of the device-layer, edge-layer, and cloud-layer sub-neural networks are combined during backpropagation so that the whole network is trained jointly and the exits at different levels achieve satisfactory accuracy relative to their depth. DDNN training is thus formulated as a joint optimization problem that minimizes the weighted sum of the loss functions of the network exits, using the softmax cross-entropy loss as the optimization target, wherein:
The loss function is:

$$L(\hat{y}, y) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c$$

where $y$ denotes the true (one-hot) label of the sample, $\hat{y}$ denotes the estimate of the sample, $C$ denotes the set of labels, $y_c$ denotes the true-label component for class $c$ in $C$, and $\hat{y}_c$ denotes the estimated value for class $c$ in $C$.
The prediction output vector is computed by the softmax function:

$$\hat{y}_c = \frac{\exp(z_c)}{\sum_{c' \in C} \exp(z_{c'})}$$

where $z$ denotes the final output of the network layer and $z_c$ denotes the final output of the network layer for class $c$ in $C$.
The final output of the network layer is:

$$z_n = f_{\mathrm{exit}_n}(x; \theta)$$

where $x$ denotes the input sample, $\theta$ denotes the parameters of the network along the path (such as weights and biases), and $f_{\mathrm{exit}_n}$ denotes the computation performed on a sample from the input of the neural network to the $n$th exit.
The optimization objective (loss function) of the whole network is:

$$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_n, y; \theta)$$

where $y$ denotes the true label of the sample, $\hat{y}$ denotes the estimate of the sample, $\theta$ denotes the network parameters (such as weights and biases), $N$ denotes the number of classification exits, $w_n$ denotes the weight of each exit, and $\hat{y}_n$ denotes the estimate at the $n$th exit.
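The weighted joint objective can be sketched as follows (our own illustration; `cross_entropy` and `joint_loss` are not names from the patent):

```python
import math

def cross_entropy(y_true, y_hat, eps=1e-12):
    # Softmax cross-entropy loss over the label set C:
    # L(y_hat, y) = -(1/|C|) * sum_c y_c * log(y_hat_c)
    return -sum(t * math.log(h + eps) for t, h in zip(y_true, y_hat)) / len(y_true)

def joint_loss(y_true, exit_predictions, exit_weights):
    # Optimization target: the weighted sum of the per-exit losses,
    # sum_{n=1..N} w_n * L(y_hat_n, y).
    return sum(w * cross_entropy(y_true, y_hat)
               for w, y_hat in zip(exit_weights, exit_predictions))
```

Minimizing this sum trains all exits jointly, so shallow exits stay accurate for easy samples while deep exits handle hard ones.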
The original deep neural network is divided into a backbone and several sub-network branches. One may choose to train parts of it first, a method called hierarchical training: first the whole backbone network is trained, and once the backbone is trained, the branch parts of each sub-network (such as the fully connected layers in the device layer) are trained against the fixed backbone model. The specific process is as follows:
(1) Train the backbone network: sample data pass through the device-layer backbone, device-layer convolution, edge-layer backbone, and cloud layer;
(2) Save the backbone network;
(3) Load the backbone network and train the device-layer branch: sample data pass through the device-layer backbone, device-layer convolution, and device-layer branch; during this training the device-layer backbone and device-layer convolution only evaluate the samples and are not updated;
(4) Save the device-layer branch network;
(5) Load the backbone network and train the edge-layer branch: sample data pass through the device-layer backbone, device-layer convolution, edge-layer backbone, and edge-layer branch; the device-layer backbone, device-layer convolution, and edge-layer backbone only evaluate the samples and are not updated.
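The five training steps can be sketched as follows (our own illustration; `Module` and `train_step` stand in for a real framework's modules and optimizer step):

```python
class Module:
    """Placeholder for a trainable network section."""
    def __init__(self):
        self.updates = 0
        self.frozen = False

def train_step(modules, batch):
    # Placeholder optimizer step: frozen modules only evaluate and are
    # never updated; trainable modules accumulate one update per batch.
    for m in modules:
        if not m.frozen:
            m.updates += 1

def train_hierarchically(backbone, device_branch, edge_branch, data):
    # (1) Train the backbone end to end (device trunk, device conv,
    #     edge trunk, cloud layer).
    for batch in data:
        train_step([backbone], batch)
    # (2) "Save" the backbone, then freeze it so later passes only evaluate it.
    backbone.frozen = True
    # (3)-(4) Train the device-layer branch against the fixed backbone.
    for batch in data:
        train_step([backbone, device_branch], batch)
    # (5) Train the edge-layer branch against the fixed backbone.
    for batch in data:
        train_step([backbone, edge_branch], batch)
```

In a real framework the freeze would be implemented by disabling gradient updates for the backbone parameters while the branch losses are optimized.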
The DDNN maps sections of the trained DNN onto the cloud, the edge devices, and the terminal devices. Samples that the local network can classify with high confidence exit at the device after device-side inference, without sending any information to the cloud. For samples it cannot handle, the intermediate DNN output (up to the local exit point) is sent to the cloud, where further processing and the final classification decision are performed. The DDNN can also make classification decisions over multiple geographically distributed terminal devices: each performs its own local computation, and their outputs are aggregated at the local exit points. Since the entire DDNN is jointly trained across all terminal devices and exit points, the network automatically aggregates the inputs to maximize classification accuracy. By using an edge layer in the distributed computing hierarchy between the terminal devices and the cloud, the DDNN obtains outputs from the terminal devices, aggregates and classifies them if possible, and passes intermediate outputs to the cloud if more processing is required.
In the DDNN, inference is performed in several stages, using a set of preset exit thresholds T as confidence measures for sample prediction. At a given exit point, if the predictor is not confident in its result, the system falls back to the next higher-level exit point, until the last exit point is reached, where the classification task is always performed.
The sample information entropy of an exit point is defined as:

$$\eta(x) = -\sum_{c \in C} x_c \log x_c$$

where $x$ denotes the probability vector predicted for the input sample and $C$ denotes the set of labels.
Since most machine learning models are trained in a supervised fashion, handling a new environment without enough labeled data is very challenging, and there is also the problem of distribution differences between the training set and the test set. Transfer learning uses existing labeled data to handle unknown situations. Adversarial domain adaptation is an important research direction of transfer learning: it allows a model trained on a labeled source domain to be applied to a target domain. A Cycle-consistent Conditional Adversarial Transfer Networks (3CATN) optimization method may be used to address the similarity between the source and target domains.
3CATN enforces consistency between the source and target domains using adversarial training, i.e., the adversarial network is trained by learning the cross-covariance of the extracted features and the classifier predictions so as to capture the multimodal structure of the data. Domain-invariant features are composed of the same components and thus can represent each other: two feature-migration networks are trained, one migrating features from the source domain to the target domain and the other from the target domain to the source domain, and a cycle-consistency loss is computed from the outputs of the two networks. The number of exit points and the structure of each exit point can be customized for the specific task; different exit points share a backbone network, and after each exit point additional convolutional layers or domain-adaptation layers can be added as needed. Joint training yields a network model that has multiple exit points and simultaneously satisfies feature migration.
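The cycle-consistency term can be sketched as follows (our own illustration; the two migration functions stand in for the learned feature-migration networks):

```python
def cycle_consistency_loss(source_feats, src_to_tgt, tgt_to_src):
    """Migrate source-domain features to the target domain and back,
    then penalize the mean absolute deviation from the originals."""
    reconstructed = tgt_to_src(src_to_tgt(source_feats))
    return sum(abs(a - b) for a, b in zip(source_feats, reconstructed)) / len(source_feats)
```

When the two migration networks are true inverses of each other, the loss is zero; any mismatch in the round trip is penalized.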
In summary, the resource-constrained edge computing method of the improved DDNN applies the improved DDNN model to perform resource-constrained edge computing, combining the small neural network model (few parameters) on the terminal device with the large neural network model (many parameters) in the cloud. The small model rapidly performs initial feature extraction and, if its output is trustworthy, completes the classification task directly; otherwise the data processed by the terminal are transmitted to the large cloud model for subsequent feature processing and other operations. Because the data transmitted to the cloud have already been processed by the terminal device, the risk of sensitive-information leakage is reduced.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (5)

1. A resource-constrained edge computing method of an improved DDNN is characterized in that an improved DDNN model is applied to perform resource-constrained edge computing, the improved DDNN model comprises a device layer sub-neural network, an edge layer sub-neural network and a cloud layer sub-neural network, and the method comprises the following steps:
s1, the terminal equipment collects the original data, and the equipment layer sub-neural network is used for evaluation to obtain characteristic data and an evaluation result;
s2, the terminal device obtains an entropy value according to the evaluation result, if the entropy value is lower than the threshold value, the execution is finished, the evaluation result is returned, otherwise the characteristic data are uploaded to the edge device;
s3, after the edge device receives the feature data, the edge device uses the edge layer sub-neural network to evaluate, and new feature data and an evaluation result are obtained;
s4, the edge device obtains an entropy value according to the new evaluation result, if the entropy value is lower than a threshold value, the execution is finished, the evaluation result is returned, and otherwise, new characteristic data are uploaded to the cloud end;
and S5, after the cloud receives the new characteristic data, evaluating by using the sub-neural network of the cloud layer to obtain an evaluation result, and finishing execution.
2. The method of claim 1, wherein the device layer sub-neural network, the edge layer sub-neural network, and the cloud layer sub-neural network are each provided with a network outlet, each network outlet is provided with a classifier, and different layers are mapped into each heterogeneous device to achieve "cloud-edge-end" cooperative computing.
The sample information entropy of an exit point is defined as:

$$\eta(x) = -\sum_{c \in C} x_c \log x_c$$

where $x$ denotes the probability vector predicted for the input sample and $C$ denotes the set of labels.
3. The method of claim 2, wherein in the improved DDNN model, BNNs and eBNNs are used to accommodate the terminal devices, which are trained jointly with the neural networks in the edge devices and the cloud; the network exit is placed at the physical boundary, i.e., between the last neural network layer of the terminal device and the first neural network layer of the next higher level.
4. The method of claim 3, wherein the aggregation method of the improved DDNN model is as follows:
maximum pooling: aggregating the input vectors by obtaining a maximum value for each component;
$$\hat{v}_j = \max_{1 \le i \le n} v_{ij}$$
average pooling: aggregating the input vectors by taking an average of each component;
$$\hat{v}_j = \frac{1}{n} \sum_{i=1}^{n} v_{ij}$$
where $n$ denotes the number of input vectors and $v_{ij}$ denotes the $j$th component of the $i$th input vector;
concatenation: the input vectors are concatenated together, retaining all information useful to the cloud, which expands the dimension of the output vector; an additional linear layer is added to map the output vector back to the same dimension as the input vectors.
5. The method of claim 2, wherein the training method of the improved DDNN model is as follows:
during training, the loss function values at the network exits of the device-layer, edge-layer, and cloud-layer sub-neural networks are combined during backpropagation so that the whole network is trained jointly; DDNN training is thus formulated as a joint optimization problem that minimizes the weighted sum of the loss functions of the network exits, using the softmax cross-entropy loss as the optimization target, wherein:
the loss function is:

$$L(\hat{y}, y) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c$$

where $y$ denotes the true (one-hot) label of the sample, $\hat{y}$ denotes the estimate of the sample, $C$ denotes the set of labels, $y_c$ denotes the true-label component for class $c$ in $C$, and $\hat{y}_c$ denotes the estimated value for class $c$ in $C$;
the prediction output vector is computed by the softmax function:

$$\hat{y}_c = \frac{\exp(z_c)}{\sum_{c' \in C} \exp(z_{c'})}$$

where $z$ denotes the final output of the network layer and $z_c$ denotes the final output of the network layer for class $c$ in $C$;
and the final output of the network layer is:

$$z_n = f_{\mathrm{exit}_n}(x; \theta)$$

where $x$ denotes the input sample, $\theta$ denotes the parameters of the network along the path (such as weights and biases), and $f_{\mathrm{exit}_n}$ denotes the computation performed on a sample from the input of the neural network to the $n$th exit;
the optimization objective (loss function) of the whole network is:

$$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_n, y; \theta)$$

where $y$ denotes the true label of the sample, $\hat{y}$ denotes the estimate of the sample, $\theta$ denotes the network parameters (such as weights and biases), $N$ denotes the number of classification exits, $w_n$ denotes the weight of each exit, and $\hat{y}_n$ denotes the estimate at the $n$th exit.
CN202011559400.8A 2020-12-25 2020-12-25 Resource-constrained edge computing method for improving DDNN (distributed neural network) Pending CN112685176A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011559400.8A CN112685176A (en) 2020-12-25 2020-12-25 Resource-constrained edge computing method for improving DDNN (distributed neural network)

Publications (1)

Publication Number Publication Date
CN112685176A (en) 2021-04-20

Family

ID=75453216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011559400.8A Pending CN112685176A (en) 2020-12-25 2020-12-25 Resource-constrained edge computing method for improving DDNN (distributed neural network)

Country Status (1)

Country Link
CN (1) CN112685176A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543829A (en) * 2018-10-15 2019-03-29 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Method and system for hybrid deployment of deep learning neural network on terminal and cloud
CN110738309A (en) * 2019-09-27 2020-01-31 华中科技大学 DDNN training method and DDNN-based multi-view target identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SURAT TEERAPITTAYANON: "Distributed Deep Neural Networks over the Cloud, the Edge and End Devices", 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), pages 1-12 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114662661A (en) * 2022-03-22 2022-06-24 东南大学 Method for accelerating multi-outlet DNN reasoning of heterogeneous processor under edge calculation
CN114662661B (en) * 2022-03-22 2024-04-16 东南大学 Method for accelerating multi-outlet DNN reasoning of heterogeneous processor under edge computing

Similar Documents

Publication Publication Date Title
Chen et al. DNNOff: offloading DNN-based intelligent IoT applications in mobile edge computing
Yu et al. Intelligent edge: Leveraging deep imitation learning for mobile edge computation offloading
CN113067873B (en) Edge cloud collaborative optimization method based on deep reinforcement learning
CN112181666A (en) Method, system, equipment and readable storage medium for equipment evaluation and federal learning importance aggregation based on edge intelligence
Quang et al. Multi-domain non-cooperative VNF-FG embedding: A deep reinforcement learning approach
CN109753751A (en) A kind of MEC Random Task moving method based on machine learning
CN112486690A (en) Edge computing resource allocation method suitable for industrial Internet of things
CN112995343B (en) Edge node calculation unloading method with performance and demand matching capability
CN111723910A (en) Method and device for constructing multi-task learning model, electronic equipment and storage medium
CN116596095B (en) Training method and device of carbon emission prediction model based on machine learning
Kim et al. A deep learning approach to vnf resource prediction using correlation between vnfs
CN114936708A (en) Fault diagnosis optimization method based on edge cloud collaborative task unloading and electronic equipment
CN113971090B (en) Layered federal learning method and device of distributed deep neural network
CN112685176A (en) Resource-constrained edge computing method for improving DDNN (distributed neural network)
CN101226521A (en) Machine learning method for ambiguity data object estimation modeling
CN116882708B (en) Steel process flow control method and device based on digital twin and related equipment
CN117436485A (en) Multi-exit point end-edge-cloud cooperative system and method based on trade-off time delay and precision
CN114781598A (en) Fault prediction method based on hierarchical neural network distributed training
CN113065641B (en) Neural network model training method and device, electronic equipment and storage medium
CN113747500B (en) High-energy-efficiency low-delay workflow application migration method based on generation of countermeasure network in complex heterogeneous mobile edge calculation
CN114077482B (en) Intelligent computing optimization method for industrial intelligent manufacturing edge
CN115587616A (en) Network model training method and device, storage medium and computer equipment
CN114662658A (en) On-chip optical network hot spot prediction method based on LSTM neural network
Huang et al. Enhanced experience replay generation for efficient reinforcement learning
Lee et al. Application of end-to-end deep learning in wireless communications systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210420