CN111459665A - Distributed edge computing system and distributed edge computing method - Google Patents
Distributed edge computing system and distributed edge computing method
- Publication number
- CN111459665A (application number CN202010206865.9A)
- Authority
- CN
- China
- Prior art keywords
- computing
- node
- module
- task
- distributed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
The invention discloses a distributed edge computing system and a distributed edge computing method. A master node and computing nodes located in the same local area network adopt a distributed design; computing-node state information is collected in order to allocate computing resources; the data to be analyzed are preprocessed and partitioned according to the available computing resources; the task management module distributes analysis tasks to the node management module of each computing node, and the node management module of each computing node starts its operation module to execute the computation; after the computation is finished, the analysis results are transmitted back to the master node, where they are aggregated, collated, and output to the user. The advantages of the invention are that the computing capacity of the parallel computing units can be dynamically expanded by adding computing nodes, and that using the operation modules on multiple computing nodes in a distributed design improves the ability of the parallel computing units to perform complex analyses, provides decentralization, multiple interfaces and other functions, saves transmission time and cost, reduces data latency, and enhances data security.
Description
Technical Field
The invention relates to a distributed edge computing system and a distributed edge computing method, and belongs to the field of edge computing.
Background
In recent years, with the rapid development of 5G and the industrial internet, emerging services have an urgent need for edge computing, mainly in three respects: latency, bandwidth, and security, across many vertical industries. As is well known, edge computing pushes data processing from the cloud to the edge, closer to the data and the applications, which reduces system response latency, saves network bandwidth, and protects data security, and can therefore fully satisfy the requirements of application scenarios in vertical industries such as smart manufacturing, smart cities, live game streaming, and the internet of vehicles.
In the field of edge computing, current terminal products based on the ARM architecture can only perform simple algorithmic analysis on a single image; complex analysis of continuous images either cannot be carried out on the device or cannot return results in near real time. With the development of internet-of-things technology, the need to perform edge computing at the communication front end of the platform is increasingly urgent. Developers want to select different analysis algorithms and application software at the front end and have structured computation results returned, so that the value of the data collected at the front end is fully exploited, network transmission pressure is greatly relieved, data latency is reduced, and data security is enhanced.
At present, edge computing is mainly implemented on internet-of-things devices, which have low computing efficiency and cannot invoke software and algorithms across multiple operating systems, so their application value is limited.
Disclosure of Invention
To solve the technical problem that edge computing cannot perform complex analysis, or cannot obtain the result of a complex analysis in near real time, the invention provides a distributed edge computing system and a distributed edge computing method.
The distributed edge computing system comprises a peripheral interface, a network module, a power supply module, parallel computing units, at least one master node and at least one computing node. The peripheral interface is connected to a parallel computing unit to form an operating system module; the power supply module is connected to the parallel computing units and the network module and supplies power to them; the master node and the computing nodes are distributed over the parallel computing units, and within the same local area network all master node modules reside on at least one parallel computing unit and all computing node modules reside on at least one parallel computing unit; the network module connects all the parallel computing units through the local area network;
the master node includes:
1. and the resource management module is used for collecting node state information uploaded by the computing node management module in distributed edge computing, wherein the node state information comprises a node memory state, a CPU state, a GPU state and running task information. The sub-nodes respectively acquire the memory state, the CPU state and the GPU state, and transmit the information to the main node by using a communication protocol.
2. A task management module, configured to issue analysis tasks to the computing nodes. The scheduling mechanism may employ speculative execution scheduling, fair scheduling, capacity scheduling, or proportional data allocation:
1) Speculative execution scheduling: if a task is running slower than the average, the master node starts a new task that re-runs the slow task; whichever of the original task and the new task finishes first is kept, and the slower one is shut down (a sketch of this policy is given after this list of strategies).
2) Fair scheduling: resources are distributed evenly among the users.
3) Capacity scheduling: with the queue as the smallest unit, each queue sets a lower and an upper limit on resource usage. When one queue has idle resources, they can be shared with another queue and are returned once that queue has finished using them.
4) Proportional data allocation: various application programs are first run as tests on each computing node and the time each node takes to run the task is recorded; these times are converted into relative processing rates, and data blocks of equal size are re-combined and re-divided according to each node's computing rate, so that the work executed by each node is proportional to its processing rate and the movement of data between nodes is reduced.
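The patent describes speculative execution only at the policy level. The sketch below is a minimal illustration of that policy using standard Java concurrency utilities; the one-second slowness threshold and the task body are assumptions, not values from the patent.

```java
import java.util.concurrent.*;

// Minimal sketch of speculative execution scheduling: the master launches a task,
// and if it has not finished within a threshold, launches a duplicate; whichever
// attempt completes first is kept and the other is cancelled.
public class SpeculativeExecution {

    static String runWithSpeculation(ExecutorService pool, Callable<String> task)
            throws Exception {
        CompletionService<String> race = new ExecutorCompletionService<>(pool);
        Future<String> original = race.submit(task);

        // Treat anything slower than one second as a "slow" attempt (assumed threshold).
        Future<String> done = race.poll(1, TimeUnit.SECONDS);
        Future<String> backup = null;
        if (done == null) {
            backup = race.submit(task);   // speculative duplicate of the slow task
            done = race.take();           // the first attempt to finish wins
        }
        // Shut down the losing attempt, as described for the policy above.
        if (done != original) original.cancel(true);
        if (backup != null && done != backup) backup.cancel(true);
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> slowTask = () -> { Thread.sleep(3000); return "analysis result"; };
        System.out.println(runWithSpeculation(pool, slowTask));
        pool.shutdownNow();
    }
}
```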
3. A data preprocessing module, configured to preprocess and segment the task data. An incoming task is divided into a number of independent task blocks, which the master node then distributes to the computing nodes so that they can be processed in a fully parallel manner.
4. A result aggregation module, configured to collect the analysis results of the operation modules and sort the result set by sequence number.
5. An output module, configured to output the analysis result set to the application layer.
The computing node includes:
1. A node management module, configured to collect the state information of the computing node, including the node memory state, CPU state, GPU state and running-task information; and to receive computation tasks, start the operation module and pass the specific computation parameters to it.
2. An operation module, configured to execute the analysis algorithm itself. The algorithm is chosen according to the application scenario. Each computing node can run a task on its own; when the computational load of a task exceeds the computing power of a single computing node, a parallel distributed computing method is adopted and the computing power of several computing nodes is mobilized to complete the task jointly.
Furthermore, there are at least two operating system modules; their number depends on the number of operating systems running on the device, and they are connected in a parallel structure. The parallel computing units within the operating system modules and the parallel computing units connected in parallel with them communicate with one another through the network module, and the parallel computing units within the operating system modules are mainly responsible for task distribution and cooperative task processing.
Furthermore, each parallel computing unit is a parallel computer based on a combination of a CPU and a GPU and equipped with memory. To provide better remote communication, at least one of the parallel computing units preferably supports access to a 4G/5G mobile network or satellite communication.
Further, the network module is a wired network card module with an RJ45 port, a wireless network card module, or both; the network module can be connected directly to internet-of-things devices and communicates with them over the network.
Furthermore, the peripheral interfaces comprise two or more of TYPE-C, USB, Ethernet, Micro SD, DisplayPort, UART and other interfaces, with the interface configuration chosen according to user requirements. The TYPE-C interface can drive an external touch screen; the DisplayPort connects an external display and supports dual-screen output; the Micro SD card expands the storage capacity of the device; USB connects general peripherals such as a mouse and keyboard; the UART interface supports a serial communication protocol and is used to receive and process data from internet-of-things devices.
The distributed edge computing method comprises:
Step one, collecting the state information of the computing nodes and allocating computing resources;
Step two, preprocessing and partitioning the data to be analyzed according to the computing resources;
Step three, starting the task management module, which distributes the analysis tasks to the node management module of each computing node; the node management module of each computing node starts the operation module to execute the computation;
Step four, after the computation is complete, transmitting the analysis results back to the master node for result aggregation and collation, and outputting the collated analysis result set to the user.
The invention has the following beneficial effects: using the operation modules on multiple computing nodes in a distributed design improves the ability of the parallel computing units to perform complex analyses, and the results of those analyses can be obtained in near real time; with the distributed edge computing system, the computing capacity of the parallel computing units can be dynamically expanded by adding computing nodes. While meeting the computing requirements, the system can be flexibly deployed in environments where network connectivity or power supply is limited; it provides decentralization, multiple interfaces and other functions, saves data transmission time and cost, reduces data latency, and enhances data security.
Drawings
FIG. 1 is a block diagram of a distributed edge computing system according to one embodiment of the invention;
FIG. 2 is a component connection diagram of a distributed edge computing system according to one embodiment of the invention;
FIG. 3 is a flow diagram of a distributed edge computing method according to one embodiment of the invention.
Detailed Description
The invention discloses a novel distributed edge computing system and a distributed edge computing method.
The distributed edge computing system comprises a peripheral interface, a network module, a power supply module, parallel computing units, at least one master node and at least one computing node. The peripheral interface is connected to a parallel computing unit to form an operating system module; the power supply module is connected to the parallel computing units and the network module and supplies power to them; the master node and the computing nodes are distributed over the parallel computing units, and within the same local area network all master node modules reside on at least one parallel computing unit and all computing node modules reside on at least one parallel computing unit; the network module connects all the parallel computing units through the local area network. The master node includes:
1. and the resource management module is used for collecting node state information uploaded by the computing node management module in distributed edge computing, wherein the node state information comprises a node memory state, a CPU state, a GPU state and running task information. The sub-nodes respectively acquire the memory state, the CPU state and the GPU state, and transmit the information to the main node by using a communication protocol.
2. A task management module, configured to issue analysis tasks to the computing nodes. The scheduling mechanism may employ speculative execution scheduling, fair scheduling, capacity scheduling, or proportional data allocation.
1) Speculative execution scheduling: if a task is running slower than the average, the master node starts a new task that re-runs the slow task; whichever of the original task and the new task finishes first is kept, and the slower one is shut down.
2) Fair scheduling: resources are distributed evenly among the users. For example, if 10 tasks are to be executed by 3 nodes, computing node 1 is allocated tasks 1-4, computing node 2 tasks 5-7, and computing node 3 tasks 8-10.
3) Capacity scheduling: with the queue as the smallest unit, each queue sets a lower and an upper limit on resource usage. When one queue has idle resources, they can be shared with another queue and are returned once that queue has finished using them.
4) Proportional data allocation: various application programs are first run as tests on each computing node and the time each node takes to run the task is recorded; these times are converted into relative processing rates, and data blocks of equal size are re-combined and re-divided according to each node's computing rate, so that the work executed by each node is proportional to its processing rate and the movement of data between nodes is reduced. For example, if child node 1 has three times the capacity of nodes 2 and 3 and there are 10 equivalent tasks, the first node gets tasks 1-6, the second node tasks 7-8 and the third node tasks 9-10. Compared with a round-robin strategy, this saves allocation time. A minimal allocation sketch follows.
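As a minimal sketch of the proportional data allocation described above (node identifiers and the measured relative rates are assumed values, not taken from the patent), equally sized task blocks are split in proportion to each node's processing rate:

```java
import java.util.*;

// Sketch of proportional data allocation: each node receives a share of the
// equally sized task blocks proportional to its measured processing rate.
public class ProportionalAllocator {

    static Map<String, List<Integer>> allocate(List<Integer> taskIds,
                                               Map<String, Double> rates) {
        double totalRate = rates.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, List<Integer>> plan = new LinkedHashMap<>();
        int cursor = 0;
        int remaining = taskIds.size();
        int nodesLeft = rates.size();
        for (Map.Entry<String, Double> e : rates.entrySet()) {
            // The last node takes whatever is left so every task is assigned exactly once.
            int share = (--nodesLeft == 0)
                    ? remaining
                    : (int) Math.round(taskIds.size() * e.getValue() / totalRate);
            share = Math.min(share, remaining);
            plan.put(e.getKey(), new ArrayList<>(taskIds.subList(cursor, cursor + share)));
            cursor += share;
            remaining -= share;
        }
        return plan;
    }

    public static void main(String[] args) {
        List<Integer> tasks = new ArrayList<>();
        for (int i = 1; i <= 10; i++) tasks.add(i);
        // Relative processing rates measured in a prior test run (assumed values):
        Map<String, Double> rates = new LinkedHashMap<>();
        rates.put("node-1", 3.0);
        rates.put("node-2", 1.0);
        rates.put("node-3", 1.0);
        System.out.println(allocate(tasks, rates));
    }
}
```

With the assumed 3:1:1 rates, the ten tasks are split 6/2/2 across the three nodes, matching the example given above.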
3. A data preprocessing module, configured to preprocess and segment the task data. An incoming task is divided into a number of independent task blocks, which the master node then distributes to the computing nodes so that they can be processed in a fully parallel manner. Taking video analysis as an example, the master node separates the video into images using a frame-extraction technique (such as JavaCV), distributes the extracted frames to the computing nodes, and each computing node processes the images it receives and feeds the results back to the master node. The master node is responsible for scheduling and monitoring the tasks and for re-executing tasks that fail; a frame-extraction sketch is given below.
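The patent names JavaCV only as one possible frame-extraction technique. The following hedged sketch uses JavaCV's FFmpegFrameGrabber to split a video into sequence-numbered frames; the input path and the dispatchToComputeNode() placeholder are assumptions about how frames would be handed to the computing nodes.

```java
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;

// Sketch: split a video into individual frames with JavaCV so the master node
// can distribute them to computing nodes.
public class FrameSplitter {
    public static void main(String[] args) throws Exception {
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("input.mp4"); // assumed path
        grabber.start();
        int sequence = 0;
        Frame frame;
        while ((frame = grabber.grabImage()) != null) {
            // Tag each frame with a sequence number so the result aggregation
            // module can reorder the partial results later.
            dispatchToComputeNode(sequence++, frame);
        }
        grabber.stop();
        grabber.release();
    }

    // Placeholder: in the real system this would send the frame (or a block of
    // frames) to a computing node over the local area network.
    static void dispatchToComputeNode(int sequence, Frame frame) {
        System.out.println("frame " + sequence + ": " + frame.imageWidth + "x" + frame.imageHeight);
    }
}
```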
4. A result aggregation module, configured to collect the analysis results of each operation module and sort the result set by sequence number, as in the sketch below.
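A minimal sketch of this aggregation step, assuming a simple partial-result structure (the PartialResult record is not defined in the patent):

```java
import java.util.*;

// Sketch: the result aggregation module collects partial results that arrive
// out of order from the computing nodes and sorts them by sequence number.
public class ResultAggregator {
    record PartialResult(int sequence, String payload) {}

    public static void main(String[] args) {
        List<PartialResult> results = new ArrayList<>(List.of(
                new PartialResult(2, "frame 2: no object"),
                new PartialResult(0, "frame 0: person detected"),
                new PartialResult(1, "frame 1: person detected")));

        // Restore the original order before handing the set to the output module.
        results.sort(Comparator.comparingInt(PartialResult::sequence));
        results.forEach(r -> System.out.println(r.payload()));
    }
}
```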
5. An output module, configured to output the analysis result set to the application layer (for example, over a WebSocket).
As for the computing node, it includes:
1. A node management module, configured to collect the state information of the computing node, including the node memory state, CPU state, GPU state and running-task information; and to receive computation tasks, start the operation module and pass the specific computation parameters to it.
For example, when the operating platform is an Android system: the first step obtains the currently available running memory of Android by retrieving the system ActivityManager through the Context and then calling its getMemoryInfo() function; the second step obtains the total running memory of Android by reading the /proc/meminfo file of the Android system; the third step obtains the Android system version (the result can be read directly) through Build.VERSION.RELEASE; the fourth step checks whether the CPU instruction set supports 64-bit (the result can be read directly) through Build.CPU_ABI; the fifth step obtains the usable ROM of the device with the Android StatFs class, where statFs.getBlockSize() and statFs.getBlockCount() give the total ROM size. A sketch of these calls follows.
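A hedged sketch of these Android calls is given below. The StatFs path and the way /proc/meminfo is read are assumptions, and Build.CPU_ABI is deprecated on recent Android versions, where Build.SUPPORTED_64_BIT_ABIS provides the same information.

```java
import android.app.ActivityManager;
import android.content.Context;
import android.os.Build;
import android.os.Environment;
import android.os.StatFs;

import java.io.BufferedReader;
import java.io.FileReader;

// Sketch of the node state collection described above for an Android node.
// How the values are reported back to the master node is left open here.
public class AndroidNodeState {

    // Step 1: currently available running memory via ActivityManager.getMemoryInfo().
    static long availableMemory(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);
        return info.availMem;
    }

    // Step 2: total running memory by reading /proc/meminfo (first line is "MemTotal").
    static String totalMemory() throws Exception {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/meminfo"))) {
            return reader.readLine();
        }
    }

    // Step 3: Android system version, read directly.
    static String systemVersion() {
        return Build.VERSION.RELEASE;
    }

    // Step 4: CPU instruction set, used to check 64-bit support, read directly.
    static String cpuAbi() {
        return Build.CPU_ABI; // on newer devices, Build.SUPPORTED_64_BIT_ABIS is preferred
    }

    // Step 5: total and available ROM (internal storage) via StatFs block counts.
    static long[] romSizes() {
        StatFs statFs = new StatFs(Environment.getDataDirectory().getPath());
        long blockSize = statFs.getBlockSizeLong();
        long total = blockSize * statFs.getBlockCountLong();
        long available = blockSize * statFs.getAvailableBlocksLong();
        return new long[] {total, available};
    }
}
```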
2. An operation module, configured to execute the analysis algorithm itself. The algorithm is chosen according to the application scenario. Each computing node can run a task on its own; when the computational load of a task exceeds the computing power of a single computing node, a parallel distributed computing method is adopted and the computing power of several computing nodes is mobilized to complete the task jointly.
Furthermore, each parallel computing unit is a parallel computer based on a combination of a CPU and a GPU and equipped with memory. To provide better remote communication, at least one of the parallel computing units preferably supports access to a 4G/5G mobile network or satellite communication.
Further, the network module is a wired network card module with an RJ45 port, a wireless network card module, or both; the network module can be connected directly to internet-of-things devices and communicates with them over the network.
Furthermore, the peripheral interfaces comprise two or more of TYPE-C, USB, Ethernet, Micro SD, DisplayPort, UART and other interfaces, with the interface configuration chosen according to user requirements. The TYPE-C interface can drive an external touch screen; the DisplayPort connects an external display and supports dual-screen output; the Micro SD card expands the storage capacity of the device; USB connects general peripherals such as a mouse and keyboard; the UART interface supports a serial communication protocol and is used to receive and process data from internet-of-things devices.
The distributed edge computing method comprises the following steps:
Step one, collecting the state information of the computing nodes and allocating computing resources;
Step two, preprocessing and partitioning the data to be analyzed according to the computing resources;
Step three, starting the task management module, which distributes the analysis tasks to the node management module of each computing node; the node management module of each computing node starts the operation module to execute the computation;
Step four, after the computation is complete, transmitting the analysis results back to the master node for result aggregation and collation, and outputting the collated analysis result set to the user.
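Putting the four steps together, the following sketch shows the control flow from the master node's point of view; every type and method name (NodeStatus, TaskBlock, the round-robin placeholder policy, and the simulated local computation) is an illustrative stub rather than an identifier from the patent.

```java
import java.util.*;

// End-to-end sketch of the distributed edge computing method as seen by the
// master node. Everything here is an illustrative stub.
public class MasterNodeFlow {
    record NodeStatus(String nodeId, double relativeRate) {}
    record TaskBlock(int sequence, String data) {}
    record Result(int sequence, String payload) {}

    public static void main(String[] args) {
        // Step one: collect computing-node state and allocate computing resources.
        List<NodeStatus> nodes = List.of(
                new NodeStatus("node-1", 3.0),
                new NodeStatus("node-2", 1.0));

        // Step two: preprocess and partition the data to be analyzed.
        List<TaskBlock> blocks = new ArrayList<>();
        for (int i = 0; i < 8; i++) blocks.add(new TaskBlock(i, "block-" + i));

        // Step three: distribute blocks to the node management modules, which start
        // their operation modules (simulated here by computing locally).
        List<Result> results = new ArrayList<>();
        for (TaskBlock b : blocks) {
            NodeStatus target = nodes.get(b.sequence() % nodes.size()); // placeholder policy
            results.add(new Result(b.sequence(), target.nodeId() + " analysed " + b.data()));
        }

        // Step four: aggregate, sort by sequence number, and output to the user.
        results.sort(Comparator.comparingInt(Result::sequence));
        results.forEach(r -> System.out.println(r.payload()));
    }
}
```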
The present invention is not limited to the embodiments described above. It will be apparent to a person skilled in the art that modifications and variations of the above embodiments are possible without departing from the scope of protection of the invention and the appended claims; the embodiments are given by way of illustration only and are not intended to limit the invention in any way.
Claims (13)
1. A distributed edge computing system, comprising a peripheral interface, a network module, a power supply module, parallel computing units, at least one master node and at least one computing node, wherein
the peripheral interface is connected with the parallel computing unit to form an operating system module;
the power supply module is respectively connected with the parallel computing unit and the network module and is responsible for supplying power;
the master node and the computing nodes are distributed over the parallel computing units, and within the same local area network all master node modules are distributed on at least one parallel computing unit and all computing node modules are distributed on at least one parallel computing unit;
the network module connects all the parallel computing units through a local area network;
the master node includes:
the resource management module is used for collecting node state information uploaded by the computing node management module in distributed edge computing;
the task management module is used for issuing analysis tasks to the computing nodes;
the data preprocessing module is used for preprocessing and segmenting the execution task data;
the result aggregation module is used for collecting the analysis results of each operation module and sorting the result sets according to the sequence numbers;
the output module is used for outputting the analysis result set to the application layer;
the computing node includes:
the node management module is used for collecting the state information of the computing node, receiving the computing task, starting the computing module and transmitting the specific computing parameters to the computing module;
and the operation module is used for specifically executing the analysis algorithm.
2. The distributed edge computing system of claim 1, the node state information comprising node memory state, CPU state, GPU state, running task information; the child nodes respectively acquire the memory state, the CPU state and the GPU state, and transmit the information to the master node by using a communication protocol.
3. The distributed edge computing system of claim 1, wherein the task management module issues analysis tasks through a scheduling mechanism that may employ speculative execution scheduling, fair scheduling, capacity scheduling, or proportional data allocation.
4. The distributed edge computing system of claim 3, wherein the speculative execution scheduling policy is that, when a task runs slower than the average, the master node starts a new task to rerun the slow task, and whichever of the original task and the new task finishes first is kept while the slower one is shut down.
5. The distributed edge computing system of claim 3, wherein the fair scheduling policy is to evenly allocate resources to each user.
6. The distributed edge computing system of claim 3, wherein the capacity scheduling policy takes the queue as the smallest unit; when one queue has idle resources, they can be shared with another queue for use and are returned after use.
7. The distributed edge computing system of claim 3, wherein the proportional data allocation policy is to measure in advance the time each node spends running the task, convert that time into a relative processing rate, and re-combine and re-divide equal-sized data blocks according to each node's computing rate, so that the work executed by each node is proportional to its processing rate.
8. The distributed edge computing system of claim 1, wherein the data preprocessing module, when processing the data, splits an incoming task into a plurality of independent task blocks, which the master node then distributes to the computing nodes so that they are processed in a fully parallel manner.
9. The distributed edge computing system of claim 1, wherein, when executing the analysis algorithm, the operation module employs the algorithm corresponding to the application scenario; each computing node can run a task independently, and when the computational load of a task exceeds the computing power of a single computing node, a parallel distributed computing method mobilizes the computing power of several computing nodes to complete the task jointly.
10. A distributed edge computing system according to claim 1, wherein: the number of the operating system modules is more than or equal to two, the number of the operating system modules depends on the number of operating systems running on the device, and the operating system modules are connected in a parallel structure.
11. A distributed edge computing system according to claim 1, wherein: the parallel computing unit is a parallel computer which is based on the combination of a CPU and a GPU and is provided with a memory.
12. A distributed edge computing system according to claim 1, wherein: the network module is any one or two of a wired network card module or a wireless network card module with RJ 45; the network module can be directly connected with the Internet of things equipment and is communicated through a network.
13. A distributed edge computing method applied to the distributed edge computing system of any one of the preceding claims, comprising:
step one, collecting the state information of the computing nodes and allocating computing resources;
step two, preprocessing and partitioning the data to be analyzed according to the computing resources;
step three, starting the task management module, which distributes the analysis tasks to the node management module of each computing node, the node management module of each computing node starting the operation module to execute the computation;
step four, after the computation is complete, transmitting the analysis results back to the master node for result aggregation and collation, and outputting the collated analysis result set to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010206865.9A CN111459665A (en) | 2020-03-27 | 2020-03-27 | Distributed edge computing system and distributed edge computing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010206865.9A CN111459665A (en) | 2020-03-27 | 2020-03-27 | Distributed edge computing system and distributed edge computing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111459665A true CN111459665A (en) | 2020-07-28 |
Family
ID=71685652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010206865.9A Pending CN111459665A (en) | 2020-03-27 | 2020-03-27 | Distributed edge computing system and distributed edge computing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111459665A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103870338A (en) * | 2014-03-05 | 2014-06-18 | 国家电网公司 | Distributive parallel computing platform and method based on CPU (central processing unit) core management |
CN109491790A (en) * | 2018-11-02 | 2019-03-19 | 中山大学 | Industrial Internet of Things edge calculations resource allocation methods and system based on container |
CN109542457A (en) * | 2018-11-21 | 2019-03-29 | 四川长虹电器股份有限公司 | A kind of system and method for the Distributed Application distribution deployment of edge calculations network |
CN109889575A (en) * | 2019-01-15 | 2019-06-14 | 北京航空航天大学 | Cooperated computing plateform system and method under a kind of peripheral surroundings |
CN110008015A (en) * | 2019-04-09 | 2019-07-12 | 中国科学技术大学 | The online task for having bandwidth to limit in edge calculations system assigns dispatching method |
CN110765064A (en) * | 2019-10-18 | 2020-02-07 | 山东浪潮人工智能研究院有限公司 | Edge-end image processing system and method of heterogeneous computing architecture |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111823456A (en) * | 2020-08-10 | 2020-10-27 | 国电联合动力技术(连云港)有限公司 | Wind power blade manufacturing data acquisition control system and method based on edge calculation |
CN112099950A (en) * | 2020-09-15 | 2020-12-18 | 重庆电政信息科技有限公司 | Image preprocessing optimization method based on edge image processing system |
CN112099950B (en) * | 2020-09-15 | 2024-09-24 | 重庆电政信息科技有限公司 | Image preprocessing optimization method based on edge image processing system |
CN112101266A (en) * | 2020-09-25 | 2020-12-18 | 重庆电政信息科技有限公司 | Multi-ARM-based distributed inference method for action recognition model |
CN112306689A (en) * | 2020-11-02 | 2021-02-02 | 时代云英(深圳)科技有限公司 | Edge calculation system and method |
CN112637294A (en) * | 2020-12-15 | 2021-04-09 | 安徽长泰信息安全服务有限公司 | Distributed edge computing system |
CN112671896A (en) * | 2020-12-22 | 2021-04-16 | 上海上实龙创智能科技股份有限公司 | Agricultural management method, equipment and system |
CN113326122B (en) * | 2021-03-02 | 2024-03-22 | 东南大学 | Wireless distributed computing system and resource allocation method |
CN113326122A (en) * | 2021-03-02 | 2021-08-31 | 东南大学 | Wireless distributed computing system and resource allocation method |
CN112988710A (en) * | 2021-03-18 | 2021-06-18 | 成都青云之上信息科技有限公司 | Big data processing method and system |
CN113556390A (en) * | 2021-07-15 | 2021-10-26 | 深圳市高德信通信股份有限公司 | Distributed edge computing system |
CN114356511A (en) * | 2021-08-16 | 2022-04-15 | 中电长城网际系统应用有限公司 | Task allocation method and system |
CN114338661B (en) * | 2021-08-27 | 2024-05-03 | 南京曦光信息科技研究院有限公司 | Distributed edge data center system based on optical packet switching and application |
CN114338661A (en) * | 2021-08-27 | 2022-04-12 | 南京曦光信息科技研究院有限公司 | Distributed edge data center system based on optical packet switching and application |
CN114697197A (en) * | 2022-03-22 | 2022-07-01 | 支付宝(杭州)信息技术有限公司 | Edge computing apparatus and method |
CN115426363A (en) * | 2022-08-29 | 2022-12-02 | 广东鑫光智能系统有限公司 | Data acquisition method and terminal for intelligent plate processing factory |
WO2024120300A1 (en) * | 2022-12-09 | 2024-06-13 | 华为技术有限公司 | Communication method, communication apparatus, communication system, medium, chip and program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111459665A (en) | Distributed edge computing system and distributed edge computing method | |
CN107018175B (en) | Scheduling method and device of mobile cloud computing platform | |
CN111614769B (en) | Behavior intelligent analysis engine system of deep learning technology and control method | |
CN101652750B (en) | Data processing device, distributed processing system and data processing method | |
CN111506434B (en) | Task processing method and device and computer readable storage medium | |
CN110389843A (en) | A kind of business scheduling method, device, equipment and readable storage medium storing program for executing | |
CN111880911A (en) | Task load scheduling method, device and equipment and readable storage medium | |
CN112003797B (en) | Method, system, terminal and storage medium for improving performance of virtualized DPDK network | |
CN110187960A (en) | A kind of distributed resource scheduling method and device | |
CN102945185B (en) | Task scheduling method and device | |
CN102970244A (en) | Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance | |
CN114416352A (en) | Computing resource allocation method and device, electronic equipment and storage medium | |
CN105740085A (en) | Fault tolerance processing method and device | |
CN109614228B (en) | Comprehensive monitoring front-end system based on dynamic load balancing mode and working method | |
CN111324424A (en) | Virtual machine deployment method, device, server and storage medium | |
CN112488907A (en) | Data processing method and system | |
CN111193802A (en) | Dynamic resource allocation method, system, terminal and storage medium based on user group | |
CN117149665B (en) | Continuous integration method, control device, continuous integration system, and storage medium | |
CN111541646A (en) | Method for enhancing security service access capability of cipher machine | |
CN103299298A (en) | Service processing method and system | |
CN109639599B (en) | Network resource scheduling method and system, storage medium and scheduling device | |
CN116755829A (en) | Method for generating host PCIe topological structure and method for distributing container resources | |
CN105573204A (en) | Multi-processor digital audio frequency matrix control device and method | |
CN115114005A (en) | Service scheduling control method, device, equipment and computer readable storage medium | |
CN112817761A (en) | Energy-saving method for enhancing cloud computing environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200728 |