CN114564882A - Construction and application of edge deep learning simulator based on discrete events - Google Patents
- Publication number
- CN114564882A
- Authority
- CN
- China
- Prior art keywords
- edge
- deep learning
- module
- simulation
- constructing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5072—Grid computing
- G06F9/544—Buffers; Shared memory; Pipes
- G06N20/00—Machine learning
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
A method for constructing a discrete-event-based edge deep learning simulator comprises constructing an in-network cache module in a simulation module, the in-network cache module being used to cache data at edge nodes and provide data support for deep learning. The invention integrates a deep learning framework into the edge simulation environment to realize distributed learning in the edge environment. The edge deep learning simulator can be used for federated learning or edge ensemble learning.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence and deep learning, relates to edge computing and in particular to edge learning simulation, and discloses a discrete-event-based edge deep learning simulator.
Background
Distributed machine learning is one of the most active research fields of machine learning. In particular, with the rise of the concept of "big data", data has grown explosively, ushering in a brand-new big-data era. Traditional machine learning focuses on the speed of processing data on a single machine, but massive data storage and computation are far beyond what a single machine can do; the limits of hardware support make big-data processing on a single machine extremely laborious, so deploying the computation model across multiple, heterogeneous machines is the necessary solution. The goal of distributed machine learning is to distribute tasks with huge data and computation loads across multiple machines, thereby improving the speed and scalability of data computation and reducing the time consumed by tasks.
With the advent of distributed learning, it has rapidly evolved in edge computing. Meanwhile, new network protocols and algorithms keep emerging as computer and network communication technologies develop, and such research would be easier if a simulator could interact with a deep learning framework. The uncontrollable and volatile nature of network protocols, however, makes validating new network solutions very difficult. Three methods have been created to address this problem: analytical methods, network simulation, and experimental networks. Network simulation, as the intermediate stage between the analytical method and the experimental network, is flexible and low-cost and can preliminarily verify and realize a new protocol. Yet combining network simulation with distributed learning still faces numerous problems and challenges in actual training and testing. Currently there are mainstream simulation frameworks for edge devices, such as MatLab and ns3. With the rapid growth of data and the development of deep learning, these frameworks can no longer meet existing needs. MatLab makes it simple to design algorithms, run code, and work with embedded systems, but it does not support systematic simulation methods such as discrete events, so the validity of edge learning performance verification cannot be guaranteed. ns3, for its part, is a network simulator and does not touch the field of deep learning.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the invention aims to provide an edge deep learning simulator based on discrete events, which integrates a deep learning framework into an edge simulation environment to realize distributed learning in the edge environment.
In order to achieve the purpose, the invention adopts the technical scheme that:
the method for constructing the edge deep learning simulator based on the discrete events comprises the following steps of:
step 1, an in-network cache module is constructed in a simulation module, and the in-network cache module is used for caching data at edge nodes and providing data support for deep learning;
step 2, constructing an edge simulation environment based on deep learning so that edge nodes support the deep learning, and the steps are as follows:
step 2.1, constructing a deep learning module in the simulation module;
step 2.2, creating conditions for the combined compilation of the deep learning module and the simulation module;
step 2.3, performing deep learning by using the deep learning module in the simulated edge learning.
In one embodiment, the simulation module includes an application submodule, a topology, and a node; the application program submodule is used for acquiring an application program and realizing a corresponding target by using resources controlled by system software; the node is used for simulating edge equipment; the topology is used to build communications between edge devices.
In one embodiment, the intra-network cache module takes the data packets sent between the edge nodes as a cache unit based on an LRU cache algorithm.
In one embodiment, the in-network cache module is constructed in the simulation module as follows:
in the simulation module, an LRU cache module based on an LRU cache algorithm is constructed, and the LRU cache module is added into the edge node.
In one embodiment, the LRU cache module is constructed by:
under the core directory of the simulation module, a user-defined new module is created by using a command for creating the new module, the new module is named as LRU, and the LRU caching algorithm is placed at the corresponding position of the directory of the new module;
the configuration method of the LRU cache module comprises the following steps:
a wscript file in the LRU cache module registers the source code contained in the module and declares the other modules on which it depends;
the LRU caching module is added into the edge simulation environment by the following method:
all modules in the edge simulation environment are configured and compiled using commands in the edge simulation environment to add the LRU cache module into the core module of the edge simulation environment.
In one embodiment, the intra-network cache module is used as a member variable of a Node class and is added to each edge Node.
In one embodiment, in the step 2.1, the method for constructing the deep learning module is as follows:
a deep learning library is added in the simulation module to provide model support for deep learning, and the deep learning library is composed of a plurality of deep neural network models, such as MLP, AlexNet, LeNet and the like.
In one embodiment, said step 2.2, importing the eigen library for linear operation, and providing support for the matrix operation of the deep learning module.
In one embodiment, the step 2.3 is to directly call the existing deep neural network model in the deep learning library; and for a deep neural network model which does not exist in the deep learning library, designing and realizing a model independent of a third-party framework, importing the existing deep learning library, and calling a corresponding file.
The discrete event based edge deep learning simulator may be used for edge learning such as federal learning or edge ensemble learning.
Compared with the prior art, the invention provides a precondition for intelligent edge simulation verification. A deep learning environment similar to a real TCP/IP edge network can be constructed on the simulator, and the transmission, the caching and the processing of massive training data are simulated.
Drawings
FIG. 1 is a schematic view of the present invention.
Fig. 2 is a topology structural diagram of an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
To facilitate understanding of the technical aspects of the present invention, the following explanation is made on the concept related to the present invention.
Discrete events: events that occur at discrete, non-continuous points in time.
Edge: the side of the network close to the source of things or data.
Edge device: the concrete form of an edge node; a device placed at the edge, physically located between the cloud and the terminal but closer to the terminal (i.e., device) side, such as a laptop or mobile phone.
Edge node: the abstraction of an edge device; all edge devices are abstracted as edge nodes.
Deep learning: a new research direction in the field of machine learning (ML), introduced to bring machine learning closer to its original goal, artificial intelligence (AI).
Simulator: software that emulates the functions of a hardware processor and the programs of an instruction system, so that a computer or other platform (handheld computer, mobile phone) can run software built for other platforms; in this invention it is used to create the edge learning environment.
Simulation module: comprises a plurality of submodules, such as the topology, application, and in-network cache submodules.
In-network cache module: simulates the caching of edge nodes.
Edge simulation environment: the environment for simulating learning at the edge.
Deep learning module (deep learning library): the models mainly used for deep learning, including AlexNet, LeNet, BP, and MLP.
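The discrete-event concept above is, at its core, a time-ordered event queue whose clock jumps from one event timestamp to the next. A minimal sketch of such a scheduler follows; the class and method names are illustrative stand-ins, not ns3's actual API:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Minimal discrete-event scheduler: events fire at discrete points in
// simulated time, always in timestamp order.
struct Event {
    double time;                    // simulated time at which the event fires
    std::function<void()> action;   // work to perform
    bool operator>(const Event& o) const { return time > o.time; }
};

class Scheduler {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
    double now_ = 0.0;
public:
    void Schedule(double t, std::function<void()> a) { queue_.push({t, std::move(a)}); }
    double Now() const { return now_; }
    void Run() {                    // process events until none remain
        while (!queue_.empty()) {
            Event e = queue_.top();
            queue_.pop();
            now_ = e.time;          // the simulated clock jumps discretely
            e.action();
        }
    }
};
```

Events can be scheduled in any order; the scheduler always executes the earliest pending timestamp first, which is what makes the simulation reproducible.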
As shown in FIG. 1, the present invention is a method for constructing a discrete-event-based edge deep learning simulator (sim4edgeDL), comprising the following steps:
step 1, an in-network cache module is constructed in a simulation module.
The existing simulation module mainly comprises an application program submodule, a topology, a node and the like, wherein the application program submodule is used for acquiring an application program and realizing a corresponding target by using resources controlled by system software; the node is used for simulating the edge device; the topology is then used to build up the communication between the edge devices. By setting a network and a topology, the simulation module can realize scene simulation of a deep learning environment, and the simulation scene comprises data transmission, data caching and network topology under the simulation condition.
According to the invention, the in-network cache module is additionally arranged in the simulation module and is used for caching data at the edge node, so that data support is provided for deep learning.
In the in-network cache module, the data packets sent between edge nodes are taken as the cache unit. Constructing the module first requires a cache algorithm; the LRU (least recently used) cache algorithm is used. The construction process is as follows:
In the simulation module, an LRU cache module based on the LRU cache algorithm is constructed and added to the edge nodes, thereby realizing the storage function of a node. Specifically, under the core directory of the simulation module, a user-defined new module is created with the module-creation command, named LRU, and the LRU cache algorithm is placed at the corresponding position in the new module's directory. The core directory contains the directories of the basic modules for constructing the simulation (the in-network cache module, the application submodule, and so on); each module contains folders such as model, test, and examples, plus a wscript file. The code implementing the LRU is put in model, and the test code is put in test.
Like the other submodules, the LRU cache module is a submodule of the simulation, and the files it contains are model, test, examples, and wscript. The model folder stores the code implementing the LRU cache, the test folder stores the code that tests the LRU cache, examples may store an example, and wscript is the configuration file.
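The LRU strategy that the model code realizes can be sketched as follows: a cache keyed by packet identifier that evicts the least recently used entry when full. The class, key, and payload types here are illustrative stand-ins (a real ns3 module would wrap packets rather than strings):

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// LRU cache with packets (id -> payload) as the cache unit.
class LruCache {
    std::size_t capacity_;
    std::list<std::pair<uint32_t, std::string>> items_;  // front = most recent
    std::unordered_map<uint32_t,
                       std::list<std::pair<uint32_t, std::string>>::iterator> index_;
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    void Put(uint32_t id, const std::string& payload) {
        if (auto it = index_.find(id); it != index_.end()) items_.erase(it->second);
        items_.emplace_front(id, payload);
        index_[id] = items_.begin();
        if (items_.size() > capacity_) {          // evict least recently used
            index_.erase(items_.back().first);
            items_.pop_back();
        }
    }

    bool Get(uint32_t id, std::string& out) {
        auto it = index_.find(id);
        if (it == index_.end()) return false;
        items_.splice(items_.begin(), items_, it->second);  // mark as most recent
        out = it->second->second;
        return true;
    }
};
```

The list keeps recency order while the hash map gives O(1) lookup; `splice` moves a hit to the front without invalidating the stored iterator.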
The LRU cache module is configured by:
The structure of the wscript file in the LRU cache module is fixed; it registers the source code contained in the LRU cache module and references other modules in the simulation module, adding whichever modules the LRU cache module needs to depend on. The LRU cache module of the invention depends on parts of the simulation module, such as data packets, but since the submodules of the simulation module are all subsumed under core, only the core module needs to be added.
The LRU cache module is added into the edge simulation environment by the following method:
all modules in the edge simulation environment are configured and compiled using commands in the edge simulation environment to add the LRU cache module into the core module of the edge simulation environment. The core module comprises a basic module for constructing simulation and comprises sub-modules of topology, application and the like.
The LRU cache module may use the ns3 namespace or a custom namespace; an ns3 header must be included when writing the corresponding code in order to reference the module. This completes the process of creating a cache module in ns3; it only remains to install the cache on the edge nodes.
The edge node class is defined in the Node class module, so the LRU cache module is added as a member variable of the Node class, and an edge node realizes the LRU cache by calling this variable.
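The member-variable arrangement can be illustrated with a stand-in Node class; the real class is ns3's Node, and the names and cache type below are hypothetical simplifications:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <utility>

// Placeholder for the LRU module installed on each node.
struct PacketCache {
    std::unordered_map<uint32_t, std::string> store;
    void Cache(uint32_t id, std::string p) { store[id] = std::move(p); }
    bool Has(uint32_t id) const { return store.count(id) != 0; }
};

// The cache is owned as a member variable, so every edge node created in
// the topology carries its own in-network cache.
class Node {
    uint32_t id_;
    PacketCache cache_;   // cache added as a member variable of the Node class
public:
    explicit Node(uint32_t id) : id_(id) {}
    uint32_t GetId() const { return id_; }
    PacketCache& GetCache() { return cache_; }
};
```

Because each Node owns its cache, data cached at one edge node is not visible at another, matching the per-node storage the description calls for.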
Step 2, constructing an edge simulation environment based on deep learning so that edge nodes support the deep learning, and the steps are as follows:
and 2.1, constructing a deep learning module in the simulation module.
The deep learning module can process approximation, classification and prediction problems by designing a neural network model. The deep learning module is constructed by adding new dependencies in a configuration file of the simulation module, for example, a deep learning library can be added in the simulation module to provide model support for deep learning, and the deep learning library is composed of a plurality of deep neural network models, such as MLP, AlexNet, LeNet, and the like. That is, the required deep learning library is included, including the header file and the source file.
And 2.2, creating conditions for the combined compilation of the deep learning module and the simulation module.
Illustratively, the eigen library is imported into the deep learning module; eigen is a C++ template library for linear algebra that supports matrix and vector operations, numerical analysis, and related algorithms. Because the deep learning module involves a large number of matrix operations, the eigen template library is needed.
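As one example of the matrix operations that motivate the eigen dependency, a fully connected layer y = ReLU(Wx + b) can be hand-rolled as below; in the actual module eigen's matrix types would replace the nested loops, and the dimensions and values here are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One dense layer with ReLU activation: y_i = max(0, sum_j W[i][j]*x[j] + b[i]).
std::vector<double> DenseRelu(const std::vector<std::vector<double>>& W,
                              const std::vector<double>& x,
                              const std::vector<double>& b) {
    std::vector<double> y(W.size(), 0.0);
    for (std::size_t i = 0; i < W.size(); ++i) {
        for (std::size_t j = 0; j < x.size(); ++j) y[i] += W[i][j] * x[j];
        y[i] = std::max(0.0, y[i] + b[i]);   // ReLU activation
    }
    return y;
}
```

Stacking such layers yields the MLP-style forward pass the deep learning library provides; eigen makes the same computation a one-liner over matrix objects.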
And 2.3, performing deep learning by using a deep learning module in the simulated edge learning.
This step covers two cases: deep neural network models that already exist in the deep learning library, and models that do not. Because the library's deep.h file contains all of its core code header files, an existing deep neural network model can be called directly. For a model that does not exist in the library, a model independent of any third-party framework must be designed and implemented, the existing deep learning library imported, and the corresponding deep learning code called when the model is used to realize the required function. Illustratively, third-party frameworks refer to tensorflow and the like; "independent" means code written directly in C or C++ without using these frameworks. In addition, variables contained in both the simulation module and the deep learning module, such as Vector, need to be distinguished when used; when the deep learning module's Vector is used, the deep learning namespace may be prefixed.
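The Vector name clash described above can be reproduced in miniature: when both sides define a type named Vector, each use must be qualified with the right namespace. The namespace contents below are hypothetical simplifications of the two modules:

```cpp
#include <cassert>

// Both the simulation side and the deep learning side define a Vector,
// so unqualified use would be ambiguous.
namespace ns3  { struct Vector { double x, y, z; }; }  // geometric vector
namespace deep { struct Vector { int rows; }; }        // numeric column vector

double UseBoth() {
    ns3::Vector pos{1.0, 2.0, 3.0};   // explicit qualification picks the type
    deep::Vector v{4};
    return pos.x + v.rows;
}
```

Prefixing the deep learning namespace, as the description suggests, resolves the ambiguity without renaming either module's types.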
sim4edgeDL supports discrete simulation and allows a user to execute deep learning tasks in an edge simulation scenario, making deep learning research in an edge network environment more convenient to carry out. It can therefore be used for federated learning and edge ensemble learning.
In one embodiment of the present invention, edge ensemble learning is performed, comprising: (1) constructing an edge simulation environment required by an experiment, wherein the edge simulation environment comprises the establishment of a network topological structure and communication, and the topological structure is shown in figure 2; (2) building a neural network; (3) ensemble learning is performed in an edge simulation environment.
More specifically:
step (1), constructing a topological structure, wherein the topological structure comprises a remote data center, a gateway node, two edge computing nodes and four terminal devices, the remote data center, the gateway node, the two edge computing nodes and the four terminal devices are connected through a gigabit link, and the point-to-point transmission is used as a data transmission mode between the remote data center and the gateway node. Peer-to-Peer (Peer-to-Peer, P2P) is also known as Peer-to-Peer networking technology, and is a new technology for networks that relies on the computing power and bandwidth of participants in the network, rather than aggregating the dependencies on a few servers.
Step (2), training a deep learning model with the discrete-event-based edge learning simulator. Taking the addition of a new model as an example, a model independent of any third-party framework must be designed, and the deep learning module's interface used to implement other types of neural network models. The following is the specific method for adding a new model, LeNet, to the deep learning module.
Importing the LeNet source code into the deep learning module requires the following steps:
1) Write the LeNet source code into two files, lenet.c and lenet.h, following the coding conventions of the deep learning module.
2) Include the LeNet header file lenet.h in the deep.h file, thereby introducing the new neural network model into the deep learning module.
3) When using LeNet, only the header file deep.h needs to be referenced in the program.
Step (3), ensemble learning, which can be subdivided into the following steps:
1) Generate nodes: create the required nodes with the Create() function of the NodeContainer class.
2) Install network devices: realize the physical connection of the nodes through point-to-point channels and set the data transmission rate.
3) Install the protocol stack on the nodes, mainly configuring an IP address for each node.
4) Set the senders and receivers: each sender needs a receiver; according to the created topology, the receivers are the edge computing devices and the senders are the terminal devices.
5) Set up the training of the neural network and package it into an application; the settings include the termination condition of training, the training strategy adopted, and so on.
6) Install the application on the nodes participating in training, initialize the cache size for each node, and store the data set into the caches in stripes.
7) After the above configuration is completed, deep learning can be performed in the edge simulation environment. In this embodiment the nodes comprise four terminal devices, two edge computing devices, and one data center. The terminal devices cache different data; before the model is trained, the edge computing devices must obtain the training data from the terminal devices' caches. The two edge computing devices are used for model training. Each edge computing device trains with a LeNet model; after the sub-models converge, n differentiated sub-models h1, …, hn are obtained, their parameters are sent to the parameter server using sockets, and the ensemble learning model is constructed from the n sub-models.
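The final aggregation step can be sketched as below, with each converged sub-model scoring an input and the ensemble combining the scores. Simple output averaging stands in for the combination rule, which the description does not fix, and the sub-models are represented as plain functions:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Ensemble prediction over n sub-models h_1..h_n: each sub-model scores
// the input, and the ensemble returns the average score.
double EnsemblePredict(const std::vector<std::function<double(double)>>& models,
                       double input) {
    double sum = 0.0;
    for (const auto& h : models) sum += h(input);
    return sum / static_cast<double>(models.size());
}
```

In the embodiment the sub-models would be the LeNet instances trained on different edge computing devices; any other combination rule (weighted voting, stacking) would slot into the same loop.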
By building the deep learning module in the edge simulation environment, the invention realizes sim4edgeDL and achieves deep learning in the edge simulation environment. This discrete-event-based edge deep learning simulation platform can construct a deep learning environment similar to a real TCP/IP edge network, simulate the transmission, caching, and processing of massive training data, and provide a precondition for edge intelligent simulation verification.
While the invention has been described in detail with reference to specific embodiments thereof, it will be understood that the invention is not limited to the details of construction and the embodiments set forth herein. For a person skilled in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the spirit of the invention and the scope of protection defined by the claims, which shall be regarded as belonging to the scope of protection of the invention.
Claims (10)
1. The method for constructing the edge deep learning simulator based on the discrete event is based on an edge node architecture and is characterized by comprising the following steps of:
step 1, an in-network cache module is constructed in a simulation module, and the in-network cache module is used for caching data for edge nodes and providing data support for deep learning;
step 2, constructing an edge simulation environment based on deep learning so that edge nodes support the deep learning, and the steps are as follows:
step 2.1, a deep learning module is constructed in the simulation module;
step 2.2, creating conditions for the combined compilation of the deep learning module and the simulation module;
step 2.3, performing deep learning by using the deep learning module in the simulated edge learning.
2. The method for constructing the discrete event based edge deep learning simulator according to claim 1, wherein the simulation module comprises an application program sub-module, a topology and a node; the application program submodule is used for acquiring an application program and realizing a corresponding target by using resources controlled by system software; the node is used for simulating edge equipment; the topology is used to build communications between edge devices.
3. The method for constructing the edge deep learning simulator based on discrete events according to claim 1, wherein the in-network cache module uses the data packets sent between the edge nodes as a cache unit based on an LRU cache algorithm.
4. The method for constructing the discrete event-based edge deep learning simulator according to claim 1, wherein the in-network cache module is constructed in the simulation module as follows:
in the simulation module, an LRU cache module based on an LRU cache algorithm is constructed and added into the edge node.
5. The method for constructing the discrete event based edge deep learning simulator according to claim 4, wherein the LRU cache module is constructed by:
under the core directory of the simulation module, a user-defined new module is created by using a command for creating the new module, the new module is named as LRU, and the LRU caching algorithm is placed at the corresponding position of the directory of the new module;
the configuration method of the LRU cache module comprises the following steps:
a wscript file in the LRU cache module registers the source code contained in the module and declares the other modules on which it depends;
the LRU caching module is added into the edge simulation environment by the following method:
all modules in the edge simulation environment are configured and compiled using commands in the edge simulation environment to add the LRU cache module into the core module of the edge simulation environment.
6. The method for constructing the discrete event-based edge deep learning simulator according to claim 1, wherein the intra-network cache module is added to each edge Node as a member variable of a Node class.
7. The method for constructing the discrete event-based edge deep learning simulator according to claim 1, wherein in the step 2.1, the method for constructing the deep learning module is:
and adding a deep learning library in the simulation module to provide model support for deep learning, wherein the deep learning library consists of a plurality of deep neural network models.
8. The method for constructing the edge deep learning simulator based on discrete events according to claim 1, wherein in the step 2.2, an eigen library for linear operation is imported to provide support for matrix operation of a deep learning module.
9. The method for constructing the edge deep learning simulator based on the discrete events according to claim 1, wherein in the step 2.3, the deep neural network model which already exists in the deep learning library is directly called; and for a deep neural network model which does not exist in the deep learning library, designing and realizing a model independent of a third-party framework, importing the existing deep learning library, and calling the deep learning library when in use.
10. Use of the discrete event based edge deep learning simulator of claim 1 for edge learning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210110235.0A CN114564882A (en) | 2022-01-29 | 2022-01-29 | Construction and application of edge deep learning simulator based on discrete events |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114564882A true CN114564882A (en) | 2022-05-31 |
Family
ID=81714782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210110235.0A Pending CN114564882A (en) | 2022-01-29 | 2022-01-29 | Construction and application of edge deep learning simulator based on discrete events |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114564882A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040078524A1 (en) * | 2002-10-16 | 2004-04-22 | Robinson John T. | Reconfigurable cache controller for nonuniform memory access computer systems |
WO2018208939A1 (en) * | 2017-05-09 | 2018-11-15 | Neurala, Inc. | Systems and methods to enable continual, memory-bounded learning in artificial intelligence and deep learning continuously operating applications across networked compute edges |
CN109509159A (en) * | 2018-11-20 | 2019-03-22 | 湖南湖工电气有限公司 | A kind of end-to-end restored method of UAV Fuzzy image based on deep learning |
CN111901392A (en) * | 2020-07-06 | 2020-11-06 | 北京邮电大学 | Mobile edge computing-oriented content deployment and distribution method and system |
US20210064802A1 (en) * | 2018-09-06 | 2021-03-04 | Terrafuse, Inc. | Method and System for Increasing the Resolution of Physical Gridded Data |
CN112989712A (en) * | 2021-04-27 | 2021-06-18 | 浙大城市学院 | Aeroengine fault diagnosis method based on 5G edge calculation and deep learning |
CN113987705A (en) * | 2021-10-25 | 2022-01-28 | 中国科学院重庆绿色智能技术研究院 | Automobile covering part rebound prediction method based on deep learning |
- 2022-01-29: Application filed in CN as CN202210110235.0A; status Pending
Non-Patent Citations (3)
Title |
---|
YANA QIN, DANYE WU, ZHIWEI XU, JIE TIAN: "Adaptive In-network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge", COMPUTER SCIENCE, pages 1 - 16 *
ZHU Jiang; WANG Tingting; SONG Yonghui; LIU Yali: "Transmission scheduling scheme based on deep Q-learning in wireless networks", Journal on Communications, no. 04, pages 39 - 48 *
QIN Yana: "Research on intelligent in-network cache scheduling mechanisms for edge ensemble learning", CNKI Outstanding Master's Theses Full-text Database, pages 21 - 57 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117170889A (en) * | 2023-11-01 | 2023-12-05 | 沐曦集成电路(上海)有限公司 | Heterogeneous non-blocking data packet synchronous processing system |
CN117170889B (en) * | 2023-11-01 | 2024-01-23 | 沐曦集成电路(上海)有限公司 | Heterogeneous non-blocking data packet synchronous processing system |
CN117938957A (en) * | 2024-03-22 | 2024-04-26 | 精为技术(天津)有限公司 | Edge cache optimization method based on federal deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Riley et al. | A generic framework for parallelization of network simulations | |
García et al. | Planetsim: A new overlay network simulation framework | |
CN114564882A (en) | Construction and application of edge deep learning simulator based on discrete events | |
Amoretti et al. | DEUS: a discrete event universal simulator | |
CN108306804A (en) | A kind of Ethercat main station controllers and its communication means and system | |
Surati et al. | A survey of simulators for P2P overlay networks with a case study of the P2P tree overlay using an event-driven simulator | |
CN110413595A (en) | A kind of data migration method and relevant apparatus applied to distributed data base | |
US10248324B2 (en) | Oblivious parallel random access machine system and methods | |
CN114422010B (en) | Protocol testing method of satellite communication simulation platform based on network virtualization | |
Nayak et al. | Computer Network simulation using NS2 | |
van Ditmarsch et al. | Reachability and expectation in gossiping | |
CN110442753A (en) | A kind of chart database auto-creating method and device based on OPC UA | |
Qiu et al. | Iterative learning control for multi‐agent systems with noninstantaneous impulsive consensus tracking | |
CN114629767A (en) | Power dispatching network simulation method and device, computer equipment and storage medium | |
CN112199154A (en) | Distributed collaborative sampling central optimization-based reinforcement learning training system and method | |
CN110138589A (en) | A kind of underwater sensor network visual simulation system based on Linux | |
Hine et al. | Scalable emulation of enterprise systems | |
Salem et al. | Mobile ad-hoc network simulators, a survey and comparisons | |
Diamantopoulos et al. | Symbchainsim: A novel simulation tool for dynamic and adaptive blockchain management and its trilemma tradeoff | |
Agosti et al. | P2pam: a framework for peer-to-peer architectural modeling based on peersim | |
CN114764389A (en) | Heterogeneous simulation test platform of joint learning system | |
CN113992520A (en) | Virtual network resource deployment method and system | |
Dias et al. | BrowserCloud. js-A federated community cloud served by a P2P overlay network on top of the web platform | |
Ciraci et al. | An evaluation of the network simulators in large-scale distributed simulations | |
Aldahir | Evaluation of the performance of WebGPU in a cluster of WEB-browsers for scientific computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |