CN112671852A - Internet of Things device computing offloading framework based on an in-network computing platform

Internet of Things device computing offloading framework based on an in-network computing platform

Info

Publication number
CN112671852A
Authority
CN
China
Prior art keywords
computing
internet
service
inpie
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011470288.0A
Other languages
Chinese (zh)
Inventor
姚海鹏
潘辉江
买天乐
马斌
忻向军
张尼
刘韵洁
童炉
李韵聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tibet Gaochi Science And Technology Information Industry Group Co ltd
Beijing University of Posts and Telecommunications
Original Assignee
Tibet Gaochi Science And Technology Information Industry Group Co ltd
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tibet Gaochi Science And Technology Information Industry Group Co ltd and Beijing University of Posts and Telecommunications
Priority to CN202011470288.0A
Publication of CN112671852A
Legal status: Pending

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an Internet of Things (IoT) device computing offloading architecture based on an in-network computing platform. The architecture comprises an INPIE Service Cluster, an ME Cluster and a Remote Cloud: the INPIE Service Cluster is the entity that provides the in-network computing offloading service, the ME Cluster is the entity that provides the MEC computing offloading service, and the Remote Cloud is the entity that provides the cloud computing offloading service. Because current programmable data plane devices inherently offer low latency and high throughput and process data packets in a pipelined manner, the architecture achieves lower service response latency than a traditional server (host) based computing offloading architecture. The improved design enables programmable data plane devices to assist IoT devices in computing offloading and to be integrated with the MEC and cloud computing architectures, thereby improving service latency and throughput and strengthening the ability to guarantee real-time services.

Description

Internet of Things device computing offloading framework based on an in-network computing platform
Technical Field
The invention relates to the technical field of Internet of Things device computing, and in particular to an Internet of Things device computing offloading architecture based on an in-network computing platform.
Background
Cloud computing offloading moves the computing tasks of Internet of Things devices to a cloud server. By relying on the high-density computing and storage capacity of the data center, it alleviates the problem that Internet of Things devices have limited computing and storage resources and struggle to handle computation-intensive tasks, while also supporting data sharing among multiple Internet of Things devices and various cloud services. Mobile/multi-access edge computing (MEC) offloading deploys storage and computing capability at the edge of the network and provides computing offloading and data sharing services for Internet of Things devices. Compared with cloud computing offloading, the edge computing server is usually deployed near the wireless access base station, so transmission delay is shorter, network bandwidth utilization is higher, and service responses are more timely.
In cloud computing offloading, Internet of Things device data must traverse a long link to reach the cloud server, so service response times are long, network bandwidth consumption is high, and real-time services are difficult to handle. Edge computing offloading alleviates part of the problems of cloud computing offloading, but its performance is still challenged by the exponentially growing number of Internet of Things devices: edge computing still provides computing offloading services on general-purpose x86 servers, so latency and throughput are not fundamentally improved.
Disclosure of Invention
The invention aims to address the following defects in the prior art: in cloud computing offloading, Internet of Things device data must traverse a long link to reach the cloud server, service response times are long, network bandwidth consumption is high, and real-time services are difficult to handle; edge computing offloading alleviates part of these problems, but its performance is still challenged by the exponential growth of Internet of Things devices, because edge computing still provides computing offloading services on general-purpose x86 servers and does not fundamentally improve latency and throughput. To this end, the invention provides an Internet of Things device computing offloading architecture based on an in-network computing platform.
To achieve the above purpose, the invention adopts the following technical scheme:
An Internet of Things device computing offloading architecture based on an in-network computing platform comprises an INPIE Service Cluster, an ME Cluster and a Remote Cloud, wherein the INPIE Service Cluster is the entity that provides the in-network computing offloading service, the ME Cluster is the entity that provides the MEC computing offloading service, and the Remote Cloud is the entity that provides the cloud computing offloading service.
Preferably, the INPIE Service Cluster comprises an INPIE Orchestrator service scheduler and INPIE Target in-network computing devices; the INPIE Orchestrator performs service division and task planning on computing offloading tasks according to their task requirements, and the INPIE Target is subsequently modified and configured to adapt to and provide the in-network computing offloading service.
Preferably, the INPIE Orchestrator is also responsible for cooperating with the ME Orchestrator to determine the role of each offloading module in an offloading task.
Preferably, the INPIE Orchestrator has a centralized scheduling characteristic and is typically co-deployed with the SDN controller in an SDN environment.
Preferably, the INPIE Target whose bearer entity is a programmable switch is the INPIE Switch Target, and the INPIE Target whose bearer entity is a programmable smart NIC is the INPIE SmartNIC Target.
Preferably, the INPIE Switch Target is deployed at the position of a traditional network switch and replaces it; in addition to the data storage and forwarding functions of the original switch, it implements computing offloading functions such as data aggregation, in-network storage and system control, thereby providing services for Internet of Things devices.
Preferably, the INPIE SmartNIC Target is deployed on the Internet of Things device side to replace the traditional network card; in addition to the I/O functions of the original network card, it implements data encryption/decryption, data encoding/decoding and some general-purpose computing offloading functions.
Preferably, the interaction protocol is as follows:
the interaction protocol is used for communication among the Internet of Things devices, the MEC server and the INPIE;
the protocol is an overlay protocol over IPv6, carried above the transport layer;
the protocol has two types, which serve different computing offloading services and can both be correctly parsed by the INPIE Target;
the Version field carries the protocol version number, the Operation field indicates the operation required by the computing offloading, and the Load Data field carries the specific operation data.
Compared with the prior art, the invention has the beneficial effects that:
according to the technical scheme, the traditional calculation of the Internet of things equipment is improved, the programmable data plane equipment assists the Internet of things equipment to better calculate and unload due to the improved calculation of the Internet of things equipment, and meanwhile, the programmable data plane equipment can be fused with the MEC and the cloud calculation framework, so that the effects of improving the time delay and the throughput performance of services and improving the guarantee capability of real-time services are achieved.
Drawings
Fig. 1 is a system schematic diagram of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention;
Fig. 2 is a system flow diagram of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention;
Fig. 3 is a schematic diagram of the internal processing logic of the in-network computing function module INPIE Target of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention;
Fig. 4 is a schematic diagram of the composition of the interaction protocol of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention;
Fig. 5 is a schematic diagram of the parsing flow of the interaction protocol in the INPIE Target of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention;
Fig. 6 is a schematic diagram comparing the latency performance when the INPIE Target of the Internet of Things device computing offloading architecture based on an in-network computing platform according to the present invention provides LQR control computing offloading for an Internet of Things device (vehicle).
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted", "connected", "secured" and the like are to be construed broadly and may, for example, denote a fixed connection, a detachable connection, or an integral connection; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; or the interaction between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
Referring to figs. 1-5, an Internet of Things device computing offloading architecture based on an in-network computing platform comprises an INPIE Service Cluster, an ME Cluster and a Remote Cloud, where the INPIE Service Cluster is the entity that provides the in-network computing offloading service, the ME Cluster is the entity that provides the MEC computing offloading service, and the Remote Cloud is the entity that provides the cloud computing offloading service.
The INPIE Service Cluster comprises an INPIE Orchestrator service scheduler and INPIE Target in-network computing devices. The INPIE Orchestrator performs service division and task planning on computing offloading tasks according to their task requirements, and the INPIE Target is subsequently modified and configured to adapt to and provide the in-network computing offloading service. The INPIE Orchestrator is also responsible for cooperating with the ME Orchestrator to determine the role of each offloading module in an offloading task.
The INPIE Orchestrator has a centralized scheduling characteristic and is typically co-deployed with the SDN controller in an SDN environment. A programmable switch serves as the bearer entity of the INPIE Switch Target, and a programmable smart NIC serves as the bearer entity of the INPIE SmartNIC Target. The INPIE Switch Target is deployed at the position of a traditional network switch and replaces it; in addition to the data storage and forwarding functions of the original switch, it implements computing offloading functions such as data aggregation, in-network storage and system control, thereby providing services for Internet of Things devices. The INPIE SmartNIC Target is deployed on the Internet of Things device side to replace the traditional network card; in addition to the I/O functions of the original network card, it implements data encryption/decryption, data encoding/decoding and some general-purpose computing offloading functions.
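The patent does not specify the Orchestrator's planning logic in code; the following Python sketch is purely an illustrative assumption of how an INPIE Orchestrator might assign an offloading task to the INPIE Target, the ME Cluster or the Remote Cloud based on the task's latency requirement and the in-network functions listed above. All identifiers (OffloadTask, plan_offload, the thresholds) are hypothetical.
```python
# Hypothetical sketch of INPIE Orchestrator task planning, assuming a simple
# latency/function-based policy; the patent does not define this logic.
from dataclasses import dataclass

# Functions the patent attributes to the in-network targets.
INPIE_FUNCTIONS = {
    "data_aggregation", "in_network_storage", "system_control",   # Switch Target
    "encryption", "decryption", "encoding", "decoding",           # SmartNIC Target
}

@dataclass
class OffloadTask:
    name: str
    required_function: str
    max_latency_ms: float        # real-time requirement of the IoT service
    compute_intensity: float     # abstract measure of required computation

def plan_offload(task: OffloadTask) -> str:
    """Return the entity assumed to serve the task: INPIE, MEC or cloud."""
    # Prefer the in-network targets for supported, latency-critical operations.
    if task.required_function in INPIE_FUNCTIONS and task.max_latency_ms < 10:
        return "INPIE Target"
    # Moderately demanding or latency-sensitive tasks go to the edge (MEC).
    if task.max_latency_ms < 100 or task.compute_intensity < 1e3:
        return "ME Cluster"
    # Everything else is offloaded to the remote cloud.
    return "Remote Cloud"

if __name__ == "__main__":
    lqr = OffloadTask("vehicle LQR control", "system_control", 5.0, 10.0)
    print(plan_offload(lqr))   # -> "INPIE Target"
```
In practice the cooperation with the ME Orchestrator described above would refine such a decision, for example by splitting one task across the INPIE Target and the MEC server.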
In the Internet of Things device computing offloading architecture based on an in-network computing platform, the interaction protocol is as follows:
the interaction protocol is used for communication among the Internet of Things devices, the MEC server and the INPIE;
the protocol is an overlay protocol over IPv6, carried above the transport layer;
the protocol has two types, which serve different computing offloading services and can both be correctly parsed by the INPIE Target;
the Version field carries the protocol version number, the Operation field indicates the operation required by the computing offloading, and the Load Data field carries the specific operation data.
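The patent names only three fields (Version, Operation, Load Data) and states that the protocol is an overlay above the transport layer over IPv6; the field widths and the use of UDP in this Python sketch are assumptions for illustration, not part of the disclosure.
```python
# Minimal sketch of the interaction protocol message, assuming (not specified in
# the patent) a 1-byte Version, a 1-byte Operation code and a raw Load Data blob
# carried inside a UDP datagram over IPv6.
import struct

HEADER_FMT = "!BB"                    # Version (uint8), Operation (uint8)

def pack_message(version: int, operation: int, load_data: bytes) -> bytes:
    """Serialize one offloading request as it would travel inside the overlay."""
    return struct.pack(HEADER_FMT, version, operation) + load_data

def parse_message(payload: bytes) -> tuple[int, int, bytes]:
    """Parse a message the way an INPIE Target might, before acting on it."""
    version, operation = struct.unpack_from(HEADER_FMT, payload)
    return version, operation, payload[struct.calcsize(HEADER_FMT):]

# Example: a hypothetical aggregation request carrying a sensor reading.
msg = pack_message(version=1, operation=0x01, load_data=b"\x00\x2a")
print(parse_message(msg))             # (1, 1, b'\x00*')
```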
Referring to fig. 6, the red line shows the latency performance of the INPIE architecture and the blue (green) line shows the latency performance of conventional MEC computing offloading. It can be seen that INPIE has a very obvious advantage in reducing service latency, improving latency performance by nearly a factor of 10 at best. Because the improvement in network bandwidth utilization depends directly on the aggregated data rate, it is not suited to quantitative analysis here; intuitively, however, the INPIE Target forwards data only after aggregating it, which significantly reduces the traffic in the network and improves bandwidth utilization.
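The traffic reduction from aggregate-then-forward can be illustrated with a minimal sketch (an illustrative assumption, not taken from the patent): a window of raw IoT readings is reduced to a single summary message inside the switch, so upstream traffic shrinks roughly by the window size.
```python
# Illustrative sketch: forwarding one aggregate instead of N raw readings cuts
# upstream traffic roughly by the aggregation window size (header overhead aside).
def aggregate_window(readings: list[float]) -> tuple[float, int]:
    """Combine a window of readings into (mean, count), as an INPIE Switch
    Target might do before forwarding toward the MEC server or the cloud."""
    return sum(readings) / len(readings), len(readings)

window = [21.0, 21.5, 22.1, 21.8]          # four raw IoT messages
summary = aggregate_window(window)          # one forwarded message
print(summary, f"traffic reduced ~{len(window)}x")
```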
The invention provides an Internet of Things computing offloading architecture, INPIE, based on an in-network computing platform: programmable data plane devices assist Internet of Things devices in computing offloading, and the architecture can be integrated with the MEC and cloud computing architectures. Using the INPIE architecture improves service latency and throughput and strengthens the guarantee of real-time services; because current programmable data plane devices inherently offer low latency and high throughput, the service response latency is lower than that of a traditional server (host) based computing offloading architecture.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solutions and the inventive concept of the present invention, shall fall within the protection scope of the present invention.

Claims (8)

1. An Internet of Things device computing offloading architecture based on an in-network computing platform, comprising an INPIE Service Cluster, an ME Cluster and a Remote Cloud, characterized in that the INPIE Service Cluster is the entity that provides the in-network computing offloading service, the ME Cluster is the entity that provides the MEC computing offloading service, and the Remote Cloud is the entity that provides the cloud computing offloading service.
2. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 1, wherein the INPIE Service Cluster comprises an INPIE Orchestrator service scheduler and INPIE Target in-network computing devices, the INPIE Orchestrator performs service division and task planning on computing offloading tasks according to their task requirements, and the INPIE Target is subsequently modified and configured to adapt to and provide the in-network computing offloading service.
3. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 2, wherein the INPIE Orchestrator is further responsible for cooperating with the ME Orchestrator to determine the role of each offloading module in an offloading task.
4. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 3, wherein the INPIE Orchestrator has a centralized scheduling characteristic and is typically co-deployed with the SDN controller in an SDN environment.
5. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 4, wherein the INPIE Target whose bearer entity is a programmable switch is the INPIE Switch Target, and the INPIE Target whose bearer entity is a programmable smart NIC is the INPIE SmartNIC Target.
6. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 5, wherein the INPIE Switch Target is deployed at the position of a traditional network switch and replaces it, and, in addition to the data storage and forwarding functions of the original switch, implements computing offloading functions such as data aggregation, in-network storage and system control, thereby providing services for the Internet of Things devices.
7. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 6, wherein the INPIE SmartNIC Target is deployed on the Internet of Things device side to replace the traditional network card and, in addition to the I/O functions of the original network card, implements data encryption/decryption, data encoding/decoding and some general-purpose computing offloading functions.
8. The Internet of Things device computing offloading architecture based on an in-network computing platform of claim 1, wherein the interaction protocol is as follows:
the interaction protocol is used for communication among the Internet of Things devices, the MEC server and the INPIE;
the protocol is an overlay protocol over IPv6, carried above the transport layer;
the protocol has two types, which serve different computing offloading services and can both be correctly parsed by the INPIE Target;
the Version field carries the protocol version number, the Operation field indicates the operation required by the computing offloading, and the Load Data field carries the specific operation data.
CN202011470288.0A 2020-12-14 2020-12-14 Internet of Things device computing offloading framework based on in-network computing platform Pending CN112671852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011470288.0A CN112671852A (en) 2020-12-14 2020-12-14 Internet of Things device computing offloading framework based on in-network computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011470288.0A CN112671852A (en) 2020-12-14 2020-12-14 Internet of Things device computing offloading framework based on in-network computing platform

Publications (1)

Publication Number Publication Date
CN112671852A true CN112671852A (en) 2021-04-16

Family

ID=75405865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011470288.0A Pending CN112671852A (en) 2020-12-14 2020-12-14 Internet of things equipment computing unloading framework based on in-network computing platform

Country Status (1)

Country Link
CN (1) CN112671852A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200174888A1 (en) * 2018-11-29 2020-06-04 International Business Machines Corporation Autonomous self-healing stateless microservice nodes
CN110035410A (en) * 2019-03-07 2019-07-19 中南大学 Federated resource distribution and the method and system of unloading are calculated in a kind of vehicle-mounted edge network of software definition
CN111245651A (en) * 2020-01-08 2020-06-05 上海交通大学 Task unloading method based on power control and resource allocation
CN111641681A (en) * 2020-05-11 2020-09-08 国家电网有限公司 Internet of things service unloading decision method based on edge calculation and deep reinforcement learning
CN111901145A (en) * 2020-06-23 2020-11-06 国网江苏省电力有限公司南京供电分公司 Power Internet of things heterogeneous shared resource allocation system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘惠文 等: "《基于软件定义网络技术实现人工智能网络体系架构》", 《信息技术与网络安全》 *

Similar Documents

Publication Publication Date Title
Rost et al. Benefits and challenges of virtualization in 5G radio access networks
US9253015B2 (en) Transparent proxy architecture for multi-path data connections
CN108901046A (en) Cotasking unloading algorithm and system design scheme towards mobile edge calculations
CN104640223B (en) A kind of method reporting BSR, base station and terminal
US10206131B2 (en) System and method for programmable native analytics in 5G mobile networks
CN110545307B (en) Edge computing platform, calling method and computer readable storage medium
US8170043B2 (en) System and method of communication protocols in communication systems
EP4024763A1 (en) Network congestion control method, node, system and storage medium
US20230292387A1 (en) Method and device for jointly serving user equipment by wireless access network nodes
US11588751B2 (en) Combined network and computation slicing for latency critical edge computing applications
US20230090504A1 (en) Method, Apparatus and Device for Negotiating Traffic-to-link Mapping Configuration and Storage Medium
US11695626B2 (en) Method and apparatus for offloading hardware to software package
CN111698707A (en) MEC-based 5G small base station communication management method
Zhang et al. Testbed design and performance emulation in fog radio access networks
CN112671852A (en) Internet of Things device computing offloading framework based on in-network computing platform
US20230103816A1 (en) Hybrid Cloud Cellular Network Routing
US20220150898A1 (en) Method and apparatus for allocating gpu to software package
CN113784372A (en) Joint optimization method for terminal multi-service model
Ham et al. Survey on 6g system for ai-native services
Singh et al. NexGen S-MPTCP: Next Generation Smart Multipath TCP Controller
US20230362865A1 (en) Method and device for obtaining network data server information in wireless communication system
US20230319646A1 (en) Monitoring for application ai/ml-based services and operations
Bansal Introduction to 5g and beyond
WO2024065136A1 (en) Control method and apparatus thereof
US20230239741A1 (en) Method and apparatus for service of ultra-reliable and low-latency communication in a mobile communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210416)