CN114691299A - Serverless-based edge computing resource management system - Google Patents

Serverless-based edge computing resource management system

Info

Publication number
CN114691299A
CN114691299A
Authority
CN
China
Prior art keywords
edge
application
serverless
server
node
Prior art date
Legal status
Pending
Application number
CN202210281215.XA
Other languages
Chinese (zh)
Inventor
陈正伟
生铮
张东海
王刚
高传集
Current Assignee
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202210281215.XA priority Critical patent/CN114691299A/en
Publication of CN114691299A publication Critical patent/CN114691299A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45575Starting, stopping, suspending or resuming virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45587Isolation or security of virtual machine instances

Abstract

The invention particularly relates to a Serverless-based edge computing resource management system, which comprises: a Center Serverless, used for managing Serverless applications and triggers and for creating and executing Serverless applications at the center node; an Edge Serverless, used for executing Serverless applications on the edge side, monitoring node resource conditions, and transferring requests to the center node when resources are insufficient; edge event triggers, used for acquiring edge events and sending event information to the message queue once a trigger condition is reached; Kata Containers, used for strongly isolating user applications so that they do not interfere with one another at runtime; and a multi-architecture Runtime environment, which automatically selects the matching Runtime from among Runtimes of different architectures, making the architecture imperceptible at the user layer. The Serverless-based edge computing resource management system can save edge device resources, realize edge node autonomy, improve resource utilization, automatically adapt to multiple CPU architectures, improve security, and facilitate Serverless application deployment.

Description

Serverless-based edge computing resource management system
Technical Field
The invention relates to the technical field of cloud computing and edge computing, in particular to an edge computing resource management system based on Serverless.
Background
Edge computing is a distributed computing architecture that moves the computation of applications, data, and services from the central node of the network to the logical edge nodes of the network for processing. The advantages of edge computing are clear:
1) Edge computing decomposes large services that were originally handled entirely by the central node into smaller, more manageable parts and distributes them to edge nodes for processing.
2) Edge nodes are closer to user terminal devices, so data can be processed and transmitted faster and latency is reduced.
Because the computers used in current edge computing scenarios usually have relatively low specifications in terms of CPU, memory, and disk, applications must execute efficiently. If more than a certain number of business programs are run in an edge computing scenario, a resource shortage results.
Based on the above situation, the invention provides an edge computing resource management system based on Serverless.
Disclosure of Invention
To remedy the deficiencies of the prior art, the invention provides a simple and efficient Serverless-based edge computing resource management system.
The invention is realized by the following technical scheme:
A Serverless-based edge computing resource management system, characterized in that it comprises:
a Center Serverless (Serverless management center), for managing Serverless applications and edge event triggers, and for creating and executing Serverless applications at the center node;
an Edge Serverless (edge-side Serverless framework), for executing Serverless applications on the edge side, monitoring the node resource status, and transferring requests to the center node when resources are insufficient;
an edge event trigger, for acquiring edge events and sending event information to the message queue once a trigger condition is reached;
a Kata Container, for strongly isolating user applications so that they do not interfere with each other at runtime;
a multi-architecture Runtime environment, which automatically selects the matching Runtime from among Runtimes of different architectures; the Runtime runs the user application code, so the architecture is imperceptible at the user layer.
A Serverless application creates an instance for execution only when it is triggered; when not triggered, the number of instances of the Serverless application is 0, so when there is no actual business or operation requirement the Serverless application occupies no resources.
The Center Serverless is the management program of the whole Serverless platform and has the following functions:
(1) Managing Serverless applications, and creating, modifying, and deleting edge event triggers;
A user writes the corresponding function execution code for a specific edge Serverless application, placing the processing logic in that code;
A Serverless application is then created through the Center Serverless, and after uploading the code the user configures the application's name, memory usage, and/or timeout information;
After the application is created, the user configures an edge event trigger for it; the triggering mode is either timed triggering or triggering upon receipt of a specific message;
The Center Serverless transmits the configuration of the user-created application and its edge event trigger to the Edge Serverless on each node;
(2) Executing requests sent by edge nodes, and managing the central resource pool;
When the resources of an edge node are not enough to execute a Serverless application, the Edge Serverless sends the event request to the center node, that is, to the Center Serverless;
After receiving the event request from the edge node, the Center Serverless judges whether the resources in the central resource pool can support creating and running the corresponding Serverless application; if the resources in the central resource pool are insufficient, an error message indicating insufficient Edge Serverless resources is returned; if the resources of the central resource pool are sufficient, part of the CPU and memory resources are requested from the central resource pool, and a Serverless application instance is then created to execute the request;
(3) Building and upgrading a user's Serverless application;
When a user creates a Serverless application, only the business code file needs to be uploaded; the Center Serverless is responsible for configuring the corresponding Runtime to run the user code, and the user does not need to pay attention to the details of compilation, building, and deployment;
When the user needs to upgrade the application, only the updated code file needs to be provided; the Center Serverless is responsible for updating the user's application and running it with the latest code logic.
The Edge Serverless is deployed on each edge node; it receives the event information sent by the edge event triggers on that node and establishes a websocket connection to communicate with the Center Serverless. Its specific functions are as follows:
(1) Traffic caching and elastic application scaling;
When a Serverless application is idle, the number of its instances is reduced to 0; when a request triggers the Serverless application, the Edge Serverless performs a cold start and scales the application instances from 0 to 1;
During the cold start, before the Serverless application's instance count reaches 1, requests accessing the Serverless application cannot be delivered, so those requests are cached in the Edge Serverless program; once the application has scaled to 1 instance, the Edge Serverless sends the cached requests to the application instance.
Similarly, when there are many requests accessing the Serverless application and the pressure on the current instances is too high, the Edge Serverless increases the number of instances of the Serverless application to cope with the burst traffic;
When the Serverless application has had no traffic for a period of time (the duration is preset by the user as needed), the Edge Serverless reduces the number of instances of the Serverless application to 0, thereby saving resources of the edge platform.
(2) Sending events to the center node for processing when node resources are insufficient;
The Edge Serverless monitors the resource condition of the edge node on which it is located; when an edge event arrives and the remaining resources of the node are not enough to start the Serverless application, the Edge Serverless sends the event information to the Center Serverless through the websocket connection, and the Center Serverless requests resources to execute it;
(3) Edge autonomy;
This mode of interaction with the Center Serverless preserves the autonomy of the Edge Serverless: when the network cannot reach the center node, the Edge Serverless can still receive event information and execute the Serverless application on its own node.
The edge event trigger is a device that triggers a Serverless application once a specified condition is reached (acting as a producer, it sends an event message to the message queue); edge event triggers can be either logical triggers or triggers on edge devices.
The Kata Container is a lightweight security container that provides secure container operation through a lightweight virtual machine.
The Kata Container security container achieves resource isolation between containers by creating a lightweight virtual machine in which the containers run; each container runs with a dedicated kernel, providing isolation of network, I/O, and memory, and the isolation is enforced by hardware through the VT virtualization extensions.
While providing security, the Kata Container security container still offers high performance, so that a user's running Serverless application neither affects other functions nor is affected by other Serverless applications.
When the edge devices have different CPU architectures, adaptation is performed through the multi-architecture Runtime: the device running the Serverless application uses the Runtime environment corresponding to its own architecture to run the user's code.
The overall architecture and operation flow of the Serverless-based edge computing resource management system disclosed by the invention are as follows:
The Center Serverless is deployed at the center node, and an Edge Serverless is deployed at each edge node; when the Edge Serverless starts, it opens a websocket long-connection request to the Center Serverless and, at the same time, monitors the resource usage of its node;
A user writes code for the respective business scenario, uploads the code, and creates a Serverless application; after the application is created successfully, the user adds a trigger and configures the trigger condition or trigger scenario of the Serverless application; the Center Serverless distributes the user's Serverless application and the edge event trigger configuration to the corresponding Edge Serverless;
When the triggering requirement of an edge event trigger is met, the trigger encapsulates the event information and sends it to the Edge Serverless of that edge node; the Edge Serverless scales the Serverless application from 0 to 1 instance and, once scaling is complete, delivers the event message to the Serverless application for processing;
When the edge node's resources are insufficient, the Edge Serverless sends the incoming event information to the Center Serverless through the websocket connection, and the Center Serverless creates and executes a Serverless application on the center node;
When the Serverless application receives no trigger events within a certain time, the Edge Serverless reduces the number of instances to 0, thereby saving the resources of the edge device.
The invention has the following beneficial effects: the Serverless-based edge computing resource management system can save edge device resources, realize edge node autonomy, improve resource utilization, automatically adapt to multiple CPU architectures, improve security, and facilitate Serverless application deployment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an edge computing resource management system based on Serverless according to the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the embodiment of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Serverless does not really mean that no server is required. Fundamentally, Serverless is a framework aimed at improving agility by reducing total cost: a user can build an application without considering the underlying server or CPU architecture. In short, function compute is an abstraction that hides the details of the underlying computer and packages them into a model; within this model, only the core code of the program needs to be provided to the Serverless service, which then runs and responds according to different events.
Therefore, edge computing and Serverless are combined: the low cost and high elasticity of Serverless, together with its event-driven model, can empower edge computing.
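By way of illustration only, the following minimal sketch (in Python) shows the kind of core code a user supplies under this model; the function name handler and the event fields are assumptions made for this example, not an interface defined by the invention.

def handler(event, context):
    """User-supplied core code: the platform invokes it in response to an event."""
    device_id = event.get("device_id")
    payload = event.get("payload", b"")
    # Only the business logic is written here; the underlying server,
    # CPU architecture and deployment details are handled by the platform.
    return {"device": device_id, "bytes_processed": len(payload)}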
The Serverless-based edge computing resource management system comprises:
a Center Serverless (Serverless management center), for managing Serverless applications and edge event triggers, and for creating and executing Serverless applications at the center node;
an Edge Serverless (edge-side Serverless framework), for executing Serverless applications on the edge side, monitoring the node resource status, and transferring requests to the center node when resources are insufficient;
an edge event trigger, for acquiring edge events and sending event information to the message queue once a trigger condition is reached;
a Kata Container, for strongly isolating user applications so that they do not interfere with each other at runtime;
a multi-architecture Runtime environment, which automatically selects the matching Runtime from among Runtimes of different architectures; the Runtime runs the user application code, so the architecture is imperceptible at the user layer.
A Serverless application creates an instance for execution only when it is triggered; when not triggered, the number of instances of the Serverless application is 0, so the application occupies no resources when there is no actual business or operation requirement. Because Serverless applications are executed on demand, edge device resources are saved and more applications can be deployed and run on an edge platform of the same specification.
The Center Serverless is the management program of the whole Serverless platform and has the following functions:
(1) Managing Serverless applications, and creating, modifying, and deleting edge event triggers (a configuration sketch follows this functional list);
A user writes the corresponding function execution code for a specific edge Serverless application, placing the processing logic in that code;
A Serverless application is then created through the Center Serverless, and after uploading the code the user configures the application's name, memory usage, and/or timeout information;
After the application is created, the user configures an edge event trigger for it; the triggering mode is either timed triggering or triggering upon receipt of a specific message;
The Center Serverless transmits the configuration of the user-created application and its edge event trigger to the Edge Serverless on each node;
(2) Executing requests sent by edge nodes, and managing the central resource pool;
When the resources of an edge node are not enough to execute a Serverless application, the Edge Serverless sends the event request to the center node, that is, to the Center Serverless;
After receiving the event request from the edge node, the Center Serverless judges whether the resources in the central resource pool can support creating and running the corresponding Serverless application; if the resources in the central resource pool are insufficient, an error message indicating insufficient Edge Serverless resources is returned; if the resources of the central resource pool are sufficient, part of the CPU and memory resources are requested from the central resource pool, and a Serverless application instance is then created to execute the request;
(3) Building and upgrading a user's Serverless application;
When a user creates a Serverless application, only the business code file needs to be uploaded; the Center Serverless is responsible for configuring the corresponding Runtime to run the user code, and the user does not need to pay attention to the details of compilation, building, and deployment;
When the user needs to upgrade the application, only the updated code file needs to be provided; the Center Serverless is responsible for updating the user's application and running it with the latest code logic.
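As referenced above, the following hedged sketch (in Python) illustrates one possible shape of the Center Serverless management interface: creating an application with a name, memory limit, and timeout, attaching a timed or message trigger, and serving a request forwarded from an edge node against the central resource pool. The names create_application, add_trigger, ResourcePool, and handle_edge_request, as well as the concrete numbers, are assumptions for illustration rather than the actual interface.

class ResourcePool:
    """Central resource pool of CPU (millicores) and memory (MB)."""
    def __init__(self, cpu_millicores, memory_mb):
        self.cpu = cpu_millicores
        self.mem = memory_mb

    def try_reserve(self, cpu, mem):
        """Reserve CPU/memory from the central pool if enough remains."""
        if self.cpu >= cpu and self.mem >= mem:
            self.cpu -= cpu
            self.mem -= mem
            return True
        return False


def create_application(name, code_file, memory_mb=128, timeout_s=30):
    """Register a Serverless application; the platform picks the matching Runtime."""
    return {"name": name, "code": code_file,
            "memory_mb": memory_mb, "timeout_s": timeout_s}


def add_trigger(app, kind, **cfg):
    """Attach an edge event trigger; kind is 'timer' or 'message'."""
    app.setdefault("triggers", []).append({"kind": kind, **cfg})


def handle_edge_request(pool, app, event):
    """Run a request forwarded by an edge node against the central pool."""
    if not pool.try_reserve(cpu=100, mem=app["memory_mb"]):
        return {"error": "insufficient Edge Serverless resources"}
    # ... create a Serverless application instance and execute the event ...
    return {"status": "executed on center node", "event": event}


# Example: an application triggered once per hour by a timer trigger.
app = create_application("compress-video", "compress.py", memory_mb=256)
add_trigger(app, "timer", schedule="hourly")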
The Edge Serverless is deployed on each edge node; it receives the event information sent by the edge event triggers on that node and establishes a websocket connection to communicate with the Center Serverless. Its specific functions are as follows:
(1) Traffic caching and elastic application scaling (a simplified sketch of this behaviour follows this list);
When a Serverless application is idle, the number of its instances is reduced to 0; when a request triggers the Serverless application, the Edge Serverless performs a cold start and scales the application instances from 0 to 1;
During the cold start, before the Serverless application's instance count reaches 1, requests accessing the Serverless application cannot be delivered, so those requests are cached in the Edge Serverless program; once the application has scaled to 1 instance, the Edge Serverless sends the cached requests to the application instance.
Similarly, when there are many requests accessing the Serverless application and the pressure on the current instances is too high, the Edge Serverless increases the number of instances of the Serverless application to cope with the burst traffic;
When the Serverless application has had no traffic for a period of time (the duration is preset by the user as needed), the Edge Serverless reduces the number of instances of the Serverless application to 0, thereby saving resources of the edge platform.
(2) Sending events to the center node for processing when node resources are insufficient;
The Edge Serverless monitors the resource condition of the edge node on which it is located; when an edge event arrives and the remaining resources of the node are not enough to start the Serverless application, the Edge Serverless sends the event information to the Center Serverless through the websocket connection, and the Center Serverless requests resources to execute it;
(3) Edge autonomy;
This mode of interaction with the Center Serverless preserves the autonomy of the Edge Serverless: when the network cannot reach the center node, the Edge Serverless can still receive event information and execute the Serverless application on its own node.
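The sketch referenced in item (1) above follows. It is a simplified, single-threaded illustration (in Python) of the Edge Serverless behaviour: caching requests during a cold start, scaling the application between 0 and 1 instances, scaling back to 0 after an idle period, and handing an event to the Center Serverless when the node lacks resources. The class and method names, the idle timeout, and the stubbed resource check are assumptions made for this example.

import time
from collections import deque


class EdgeServerless:
    def __init__(self, send_to_center, start_instance, idle_timeout_s=300):
        self.send_to_center = send_to_center   # callable: forward event over the websocket
        self.start_instance = start_instance   # callable: launch one application instance
        self.idle_timeout_s = idle_timeout_s
        self.instances = 0
        self.pending = deque()                  # requests cached during the cold start
        self.last_request = time.monotonic()

    def node_has_resources(self):
        """Check whether the edge node can start another instance (stubbed here)."""
        return True  # in practice this would inspect the node's CPU and memory

    def on_event(self, event):
        self.last_request = time.monotonic()
        if self.instances == 0:
            if not self.node_has_resources():
                # The node cannot host the instance: hand the event to the center node.
                return self.send_to_center(event)
            self.pending.append(event)          # cache the request during the cold start
            self.start_instance()
            self.instances = 1
            while self.pending:                 # flush cached requests to the new instance
                self.dispatch(self.pending.popleft())
        else:
            self.dispatch(event)

    def dispatch(self, event):
        """Send the event to a running application instance (stubbed here)."""
        print("dispatching", event)

    def reap_idle(self):
        """Scale the application back to 0 instances after an idle period."""
        if self.instances and time.monotonic() - self.last_request > self.idle_timeout_s:
            self.instances = 0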
The edge event trigger is a device that triggers a Serverless application once a specified condition is reached (acting as a producer, it sends an event message to the message queue); edge event triggers may be either logical triggers or triggers on edge devices.
A logical trigger may be a software-type trigger such as a timer event trigger or an HTTP event trigger. For example, if a camera is used as an edge device and a timer trigger firing once an hour is configured, this trigger will periodically trigger a Serverless application to compress and send the recording information for that period.
A device trigger refers to a device running at the edge that triggers a Serverless application when a certain condition is reached or a specified event occurs. For example, a water-level detector trigger sends an event message to the message queue when the water level reaches a certain height, triggering a particular Serverless application to process it; similarly, a temperature trigger triggers a Serverless application to handle the current problem when the temperature sensor detects that the temperature has reached a certain threshold.
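For illustration, the following sketch (in Python) shows a water-level device trigger of the kind just described, acting as a producer that places an event message on a message queue once a threshold is reached; the queue object, the threshold value, and the sensor callable are assumptions for this example.

import json
import queue

message_queue = queue.Queue()   # stand-in for the platform's message queue


def water_level_trigger(read_level, threshold_cm=80.0):
    """Publish an event when the measured water level reaches the threshold."""
    level = read_level()
    if level >= threshold_cm:
        event = {"type": "water_level_alarm", "level_cm": level}
        message_queue.put(json.dumps(event))   # this message triggers the Serverless app


# Example with a fake sensor reading.
water_level_trigger(lambda: 92.5)
print(message_queue.get())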
The Kata Container is a lightweight security container that provides secure container operation through a lightweight virtual machine.
The Kata Container security container achieves resource isolation between containers by creating a lightweight virtual machine in which the containers run; each container runs with a dedicated kernel, providing isolation of network, I/O, and memory, and the isolation is enforced by hardware through the VT virtualization extensions.
While providing security, the Kata Container security container still offers high performance, so that a user's running Serverless application neither affects other functions nor is affected by other Serverless applications.
When a user creates a Serverless application, only the written function code needs to be uploaded, and the code is run in the runtime environment of the corresponding language. For example, if the user uploads code written in Python, the Serverless framework selects a Python-language Runtime environment to run the application's code. When the edge devices have different CPU architectures, for example some edge devices use x86-architecture CPUs and others use MIPS-architecture CPUs, adaptation is performed through the multi-architecture Runtime: the device running the Serverless application uses the Runtime environment corresponding to its own architecture to run the user's code.
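The following sketch (in Python) illustrates, under stated assumptions, how a framework of this kind might select a Runtime: the language of the user's code and the CPU architecture reported by the node choose a runtime image, and the instance is marked to run under the Kata Containers runtime for the kernel-level isolation described above. The RUNTIMES table, the image names, and the field names are assumptions, not part of the invention's actual configuration.

import platform

RUNTIMES = {
    ("python", "x86_64"):  "runtime/python3:x86_64",
    ("python", "aarch64"): "runtime/python3:aarch64",
    ("python", "mips64"):  "runtime/python3:mips64",
    ("node",   "x86_64"):  "runtime/node18:x86_64",
}


def select_runtime(language):
    """Return the runtime image matching this node's own CPU architecture."""
    arch = platform.machine()          # e.g. 'x86_64', 'aarch64', 'mips64'
    try:
        return RUNTIMES[(language, arch)]
    except KeyError:
        raise RuntimeError(f"no {language} runtime built for {arch}")


def instance_spec(app_name, language):
    """Describe one application instance; isolation comes from the Kata runtime."""
    return {
        "name": app_name,
        "image": select_runtime(language),
        "runtime_class": "kata",       # strong, hardware-enforced isolation
    }


print(instance_spec("compress-video", "python"))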
The overall architecture and operation flow of the Serverless-based edge computing resource management system are as follows:
The Center Serverless is deployed at the center node, and an Edge Serverless is deployed at each edge node; when the Edge Serverless starts, it opens a websocket long-connection request to the Center Serverless and, at the same time, monitors the resource usage of its node (a start-up sketch follows this flow description);
A user writes code for the respective business scenario, uploads the code, and creates a Serverless application; after the application is created successfully, the user adds a trigger and configures the trigger condition or trigger scenario of the Serverless application; the Center Serverless distributes the user's Serverless application and the edge event trigger configuration to the corresponding Edge Serverless;
When the triggering requirement of an edge event trigger is met, the trigger encapsulates the event information and sends it to the Edge Serverless of that edge node; the Edge Serverless scales the Serverless application from 0 to 1 instance and, once scaling is complete, delivers the event message to the Serverless application for processing;
When the edge node's resources are insufficient, the Edge Serverless sends the incoming event information to the Center Serverless through the websocket connection, and the Center Serverless creates and executes a Serverless application on the center node;
When the Serverless application receives no trigger events within a certain time, the Edge Serverless reduces the number of instances to 0, thereby saving the resources of the edge device.
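The start-up sketch referenced in the flow above follows. It is an illustrative (not authoritative) Python sketch of an Edge Serverless process opening a long-lived websocket connection to the Center Serverless while monitoring node resource usage in parallel; the URL and the use of the third-party psutil and websockets packages are assumptions made for this example.

import asyncio
import psutil
import websockets

CENTER_URL = "ws://center-serverless.example:8080/edge"   # hypothetical address


async def monitor_node(interval_s=10):
    """Periodically sample the CPU and memory usage of this edge node."""
    while True:
        cpu = psutil.cpu_percent(interval=None)
        mem = psutil.virtual_memory().percent
        print(f"node usage: cpu={cpu}% mem={mem}%")
        await asyncio.sleep(interval_s)


async def edge_main():
    # Long-lived websocket connection to the Center Serverless.
    async with websockets.connect(CENTER_URL) as center:
        await center.send("edge node registered")
        await asyncio.gather(monitor_node(), center.wait_closed())


# asyncio.run(edge_main())   # would run until the connection closes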
Compared with the prior art, the Serverless-based edge computing resource management system has the following characteristics:
a) Saving edge device resources and improving resource utilization
The number of instances of a Serverless application is reduced to 0 when the application is not being called, so it occupies no resources; an instance is pulled up only when an event arrives. When the same event is triggered by an edge device, the event information can be processed by the same Serverless application without starting multiple instances on the edge node, which improves resource utilization.
b) Automatic adaptation to multiple CPU architectures
When a user creates a Serverless application, the architecture information of the edge devices does not need to be considered, because the Serverless framework provides Runtime environments for multiple architectures for adaptation. The user only needs to write and upload the business function code, without considering the underlying CPU architecture.
c) Secure operating environment
Serverless applications run in the Kata Container security container; each application runs with a separate kernel, so isolation between applications reaches the kernel level and users' applications can be executed securely without interfering with each other.
d) Edge node autonomy
An Edge Serverless service is deployed on each edge node. When an edge node loses its connection to the center node, the Edge Serverless can still receive event requests according to the existing Serverless application and trigger configurations on the node, and create the Serverless application for execution.
e) Convenient Serverless application deployment
When a user deploys or upgrades an application, only the code file needs to be provided; the edge Serverless management program provides a matching code runtime to run it. The user does not need to be concerned with code deployment, operation, or maintenance.
The above-described embodiment is only one specific embodiment of the present invention, and general changes and substitutions by those skilled in the art within the technical scope of the present invention are included in the protection scope of the present invention.

Claims (8)

1. A Serverless-based edge computing resource management system, characterized in that it comprises:
a Center Serverless, for managing Serverless applications and edge event triggers, and for creating and executing Serverless applications at the center node;
an Edge Serverless, for executing Serverless applications on the edge side, monitoring the node resource status, and transferring requests to the center node when resources are insufficient;
an edge event trigger, for acquiring edge events and sending event information to the message queue once a trigger condition is reached;
a Kata Container, for strongly isolating user applications so that they do not interfere with each other at runtime;
a multi-architecture Runtime environment, which automatically selects the matching Runtime from among Runtimes of different architectures; the Runtime runs the user application code, so the architecture is imperceptible at the user layer.
2. The Serverless-based edge computing resource management system according to claim 1, wherein the Center Serverless is the management program of the whole Serverless platform, and its functions are as follows:
(1) Managing Serverless applications, and creating, modifying, and deleting edge event triggers;
A user writes the corresponding function execution code for a specific edge Serverless application, placing the processing logic in that code;
A Serverless application is then created through the Center Serverless, and after uploading the code the user configures the application's name, memory usage, and/or timeout information;
After the application is created, the user configures an edge event trigger for it; the triggering mode is either timed triggering or triggering upon receipt of a specific message;
The Center Serverless transmits the configuration of the user-created application and its edge event trigger to the Edge Serverless on each node;
(2) Executing requests sent by edge nodes, and managing the central resource pool;
When the resources of an edge node are not enough to execute a Serverless application, the Edge Serverless sends the event request to the center node, that is, to the Center Serverless;
After receiving the event request from the edge node, the Center Serverless judges whether the resources in the central resource pool can support creating and running the corresponding Serverless application; if the resources in the central resource pool are insufficient, an error message indicating insufficient Edge Serverless resources is returned; if the resources of the central resource pool are sufficient, part of the CPU and memory resources are requested from the central resource pool, and a Serverless application instance is then created to execute the request;
(3) Building and upgrading a user's Serverless application;
When a user creates a Serverless application, only the business code file needs to be uploaded; the Center Serverless is responsible for configuring the corresponding Runtime to run the user code, and the user does not need to pay attention to the details of compilation, building, and deployment;
When the user needs to upgrade the application, only the updated code file needs to be provided; the Center Serverless is responsible for updating the user's application and running it with the latest code logic.
3. The Serverless-based edge computing resource management system according to claim 1, wherein the Edge Serverless is deployed on each edge node, receives the event information sent by the edge event triggers on that node, and establishes a websocket connection to communicate with the Center Serverless; its specific functions are as follows:
(1) Traffic caching and elastic application scaling;
When there are many requests accessing the Serverless application and the pressure on the current instances is too high, the Edge Serverless increases the number of instances of the Serverless application to cope with the increased traffic;
When the Serverless application has had no traffic for a period of time, the Edge Serverless reduces the number of instances of the Serverless application to 0 so as to save the resources of the edge platform;
(2) Sending events to the center node for processing when node resources are insufficient;
The Edge Serverless monitors the resource condition of the edge node on which it is located; when an edge event arrives and the remaining resources of the node are not enough to start the Serverless application, the Edge Serverless sends the event information to the Center Serverless through the websocket connection, and the Center Serverless requests resources to execute it;
(3) Edge autonomy;
This mode of interaction with the Center Serverless preserves the autonomy of the Edge Serverless: when the network cannot reach the center node, the Edge Serverless can still receive event information and execute the Serverless application on its own node.
4. The Serverless-based edge computing resource management system according to claim 1 or 3, wherein the Serverless application creates an instance for execution when triggered; when not triggered, the number of instances of the Serverless application is 0; when there is no actual business or operation requirement, the Serverless application occupies no resources;
When the Serverless application is idle, the number of its instances is reduced to 0; when a request triggers the Serverless application, the Edge Serverless performs a cold start and scales the application instances from 0 to 1;
During the cold start, before the Serverless application's instance count reaches 1, requests accessing the Serverless application cannot be delivered, so those requests are cached in the Edge Serverless program; once the application has scaled to 1 instance, the Edge Serverless sends the cached requests to the application instance.
5. The Serverless-based edge computing resource management system according to claim 1 or 2, wherein the edge event trigger is a device that triggers a Serverless application once a specified condition is reached; the edge event triggers may be either logical triggers or triggers on edge devices.
6. The Serverless-based edge computing resource management system according to claim 1, wherein the Kata Container is a lightweight security container that provides secure container operation through a lightweight virtual machine;
The Kata Container security container achieves resource isolation between containers by creating a lightweight virtual machine in which the containers run; each container runs with a dedicated kernel, providing isolation of network, I/O, and memory, and the isolation is enforced by hardware through the VT virtualization extensions;
While providing security, the Kata Container security container still offers high performance, so that a user's running Serverless application neither affects other functions nor is affected by other Serverless applications.
7. The Serverless-based edge computing resource management system according to claim 1, wherein, when the edge devices have different CPU architectures, adaptation is performed through the multi-architecture Runtime, and the device running the Serverless application uses the Runtime environment corresponding to its own architecture to run the user's code.
8. The Serverless-based edge computing resource management system according to claim 1, wherein the overall architecture and operation flow are as follows:
The Center Serverless is deployed at the center node, and an Edge Serverless is deployed at each edge node; when the Edge Serverless starts, it opens a websocket long-connection request to the Center Serverless and, at the same time, monitors the resource usage of the node on which it is located;
A user writes code for the respective business scenario, uploads the code, and creates a Serverless application; after the application is created successfully, the user adds a trigger and configures the trigger condition or trigger scenario of the Serverless application; the Center Serverless distributes the user's Serverless application and the edge event trigger configuration to the corresponding Edge Serverless;
When the triggering requirement of an edge event trigger is met, the trigger encapsulates the event information and sends it to the Edge Serverless of that edge node; the Edge Serverless scales the Serverless application from 0 to 1 instance and, once scaling is complete, delivers the event message to the Serverless application for processing;
When the edge node's resources are insufficient, the Edge Serverless sends the incoming event information to the Center Serverless through the websocket connection, and the Center Serverless creates and executes a Serverless application on the center node;
When the Serverless application receives no trigger events within a certain time, the Edge Serverless reduces the number of instances to 0, thereby saving the resources of the edge device.
CN202210281215.XA 2022-03-22 2022-03-22 Serverless-based edge computing resource management system Pending CN114691299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210281215.XA CN114691299A (en) 2022-03-22 2022-03-22 Serverless-based edge computing resource management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210281215.XA CN114691299A (en) 2022-03-22 2022-03-22 Serverless-based edge computing resource management system

Publications (1)

Publication Number Publication Date
CN114691299A true CN114691299A (en) 2022-07-01

Family

ID=82139776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210281215.XA Pending CN114691299A (en) 2022-03-22 2022-03-22 Serverless-based edge computing resource management system

Country Status (1)

Country Link
CN (1) CN114691299A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116887357A (en) * 2023-09-08 2023-10-13 山东海博科技信息系统股份有限公司 Computing platform management system based on artificial intelligence
CN116887357B (en) * 2023-09-08 2023-12-19 山东海博科技信息系统股份有限公司 Computing platform management system based on artificial intelligence

Similar Documents

Publication Publication Date Title
US11556366B2 (en) Container login method, apparatus, and storage medium
US6697972B1 (en) Method for monitoring fault of operating system and application program
CN103475677B (en) The method, apparatus and system of dummy node are managed in a kind of PaaS cloud platforms
CN105357296A (en) Elastic caching system based on Docker cloud platform
CN107544783B (en) Data updating method, device and system
CN110515748B (en) Message processing method and related device
CN103414579A (en) Cross-platform monitoring system applicable to cloud computing and monitoring method thereof
CN102868736A (en) Design and implementation method of cloud computing monitoring framework, and cloud computing processing equipment
CN103986762A (en) Process state detection method and device
CN107911467B (en) Service operation management system and method for scripted operation
CN113656142B (en) Container group pod-based processing method, related system and storage medium
CN112783672B (en) Remote procedure call processing method and system
JP7161560B2 (en) Artificial intelligence development platform management method, device, medium
WO2021043124A1 (en) Kbroker distributed operating system, storage medium, and electronic device
CN114691299A (en) Serverless-based edge computing resource management system
CN112698838A (en) Multi-cloud container deployment system and container deployment method thereof
CN114565502A (en) GPU resource management method, scheduling method, device, electronic equipment and storage medium
CN101951327B (en) iSCSI network system and network fault detection method
CN104657240B (en) The Failure Control method and device of more kernel operating systems
CN114615268B (en) Service network, monitoring node, container node and equipment based on Kubernetes cluster
CN108667920B (en) Service flow acceleration system and method for fog computing environment
CN116501469A (en) Control method of high-performance computing cluster, electronic equipment and storage medium
CN106550002A (en) A kind of paas clouds mandatory system and method
US10579431B2 (en) Systems and methods for distributed management of computing resources
CN115426361A (en) Distributed client packaging method and device, main server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination