CN117519842A - Local deployment method of Serverless function flow - Google Patents


Info

Publication number
CN117519842A
Authority
CN
China
Prior art keywords
function
serverless
docker
service
developer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311462881.4A
Other languages
Chinese (zh)
Inventor
马骏
谢东烨
曹春
Current Assignee
Nanjing University
Original Assignee
Nanjing University
Priority date
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202311462881.4A priority Critical patent/CN117519842A/en
Publication of CN117519842A publication Critical patent/CN117519842A/en
Pending legal-status Critical Current


Classifications

    • G06F9/449 Object-oriented method invocation or resolution
    • G06F8/63 Image based installation; Cloning; Build to order
    • G06F9/45504 Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances

Abstract

The invention discloses a local deployment method for a Serverless function flow, comprising a method for maintaining a plurality of Serverless functions locally and a method for co-deploying the external services that the deployed functions depend on. The method deploys a plurality of Serverless functions and a Master Docker to the local running environment via Docker-compose; if a Serverless function depends on external services, those services are likewise deployed to the local environment through Docker-compose, so that the developer can execute and monitor the Serverless functions locally. The invention applies to multiple Serverless cloud computing platforms: it provides a way to deploy the function-orchestration services of several Serverless cloud platforms locally, helps developers chain together Serverless function services from different platforms, and offers a unified function-flow execution interface for testing the collaboration of a developer's multiple Serverless functions.

Description

Local deployment method of Serverless function flow
Technical Field
The invention relates to a method for locally deploying and running Serverless functions written by a developer, and belongs to the field of computer technology.
Background
With the development of cloud computing and cloud-native technology, the Serverless computing model is increasingly used for service deployment. Serverless is an embodiment of FaaS (Function as a Service): functions are deployed as services on a cloud platform, which exposes interfaces for developers to call. In this way resources can be allocated elastically, enabling a lower-cost mode of business operation. Serverless platforms of considerable scale already exist, such as Alibaba Cloud and Tencent Cloud.
On the one hand, as business requirements grow, the features developers need to implement become more complex, so developers split them into multiple functions and complete the business logic through the cooperation of those functions; on the other hand, as function-orchestration technology matures, the Serverless function-flow calling pattern is increasingly applied to scenarios where one capability requires several functions to run cooperatively.
Before deploying a function to a cloud platform, a developer usually needs to debug it locally to discover business-logic problems in time. Existing public cloud platforms such as Alibaba Cloud and Tencent Cloud provide relatively complete local deployment and debugging features, supporting local simulation of the online running environment.
However, the local-deployment capabilities of the public cloud platforms in common use remain imperfect, mainly in the following respects:
Lack of a local development and testing environment. In the Serverless services provided by existing public cloud platforms, function flows cannot be simulated locally. When developing Serverless function flows, developers typically have to deploy them to the cloud for testing and debugging. This wastes time and resources and reduces development efficiency, because every iteration and debugging cycle must wait for cloud deployment and execution. This problem motivates the attempt to simulate the execution of a Serverless function flow in a local environment.
Complex multi-function workflow debugging. Multi-function workflows are a common scenario in Serverless function flows, but with cloud deployment, debugging complex workflows is difficult: the developer must track and analyze interactions between functions in a cloud environment, which often requires additional tools and time. One motivation of the present invention is to provide a more efficient way to debug a multi-function workflow locally.
Management of external dependencies. Serverless function flows typically rely on external services such as databases, message queues, and caches. Managing and coordinating these external dependencies becomes complex when deploying in the cloud. The conventional solution is to deploy the external services on a local or cloud platform and have the locally deployed Serverless functions access them. However, once a fault is found, the Serverless functions have already modified the external services to some extent, so the context in which the fault occurred is hard to reproduce. One motivation of the present invention is to provide a way to co-deploy and manage these external dependencies while running the function flow locally, ensuring their availability and consistency.
Strong platform dependence. In the Serverless services provided by existing public cloud platforms, the deployment and execution of function flows are tightly coupled to each platform: the function-orchestration service of one platform cannot call function resources on another cloud platform. As a result, when a developer's functions are deployed across different cloud platforms, no orchestration service can arrange them through a unified interface.
Disclosure of Invention
Purpose of the invention: addressing the problems and shortcomings of the prior art, the invention provides a local deployment method for a Serverless function flow. It aims to solve a series of problems in the development and testing of Serverless function flows, including local development, multi-function workflow debugging, external dependency management, and a platform-independent implementation. By providing a way to simulate the execution of a Serverless function flow locally, the invention seeks to improve development efficiency, reduce cost, and give developers greater control and flexibility.
Technical scheme: a local deployment method for a Serverless function flow. When a piece of business requires several functions to cooperate, a function-orchestration tool arranges the execution flow of those functions and exposes a unified API; when the API is called, the functions are triggered to work together and the flow's execution result is returned. The method comprises a way to maintain a plurality of Serverless functions locally and a way to co-deploy the external services those functions depend on. The concrete implementation is: 1) the Serverless functions written by the developer are each deployed locally via Docker-compose, every Serverless function being packaged in a Docker image so that each function's service is isolated; 2) a Master Docker provides execution-state management for the Serverless function flow and running-state management for each Serverless function.
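As a rough illustration of this layout, the functions, the Master Docker, and a dependent external service could be declared together in a docker-compose file. All service names, image names, ports, and paths below are hypothetical sketches, not taken from the patent:

```yaml
version: "3.8"
services:
  master:                                   # Master Docker: unified flow API and state manager
    image: serverless-local/master          # hypothetical image name
    ports:
      - "8080:8080"                         # the unified function-flow entry point
    volumes:
      - ./flow.yaml:/app/flow.yaml          # flow definition kept as a configuration file
  func-split:                               # one container per Serverless function
    image: serverless-local/runtime-python  # pre-built runtime image (hypothetical)
    volumes:
      - ./functions/split:/app/function     # path mapping of the developer's code
  mysql:                                    # external service co-deployed with the flow
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Compose places all declared services on one internal virtual network, so a function container can reach the Master Docker or the database by service name.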
Each Serverless function is deployed in a Docker container and implements the business logic written by the developer; the Master Docker is responsible for maintaining and managing the Serverless functions and the execution flow of the Serverless function flow. The Master Docker and the Serverless functions are described in detail below.
The Master Docker mainly maintains the following:
Service addresses of the functions. When the function flow is deployed, the service address of each function is registered with the Master Docker node. When the execution of one function needs to call the service of another, the Master Docker supplies the callee's service address, which solves the problem that function service addresses change frequently in local deployment.
Definition of the function flow. The execution flow between functions, defined by the developer, is maintained by the Master Docker node in the form of a configuration file; the definition describes the execution order of the functions.
Execution state of the function flow. During execution, the Master Docker abstracts each function into an execution node, which allows it to maintain the execution state of the whole flow.
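The three pieces of state above can be sketched as a small registry object; class and method names here are illustrative, not the patent's actual implementation:

```python
class MasterState:
    """Minimal sketch of the state a Master Docker node maintains."""

    def __init__(self):
        self.addresses = {}     # function name -> service address, re-registered on deploy
        self.flow = []          # ordered step names from the flow configuration file
        self.exec_state = {}    # function name -> "pending" | "running" | "done"

    def register(self, name, address):
        # Called when a function container comes up; this is what absorbs the
        # problem of local service addresses changing between deployments.
        self.addresses[name] = address
        self.exec_state[name] = "pending"

    def resolve(self, name):
        # A function asks the Master for the address of a function it calls.
        return self.addresses[name]

    def load_flow(self, steps):
        # e.g. ["executor", "split", "count", "sort"]
        self.flow = list(steps)
```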
The Serverless function is implemented mainly as follows:
The developer's function is packaged in a Docker container and submitted to the virtual running environment through path mapping.
In the virtual running environment a pre-written server is started; it receives external HTTP calls, forwards the call parameters to the invoked function, obtains the function's return value, packages it into an HTTP response message, and returns it to the developer.
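A minimal sketch of such an in-container wrapper server, using Python's standard library; the handler function and port are illustrative assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handler(payload):
    """Stand-in for the developer's Serverless function, loaded via path mapping."""
    return {"echo": payload}

class FunctionServer(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the call parameters from the HTTP request body...
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # ...forward them to the wrapped function...
        result = handler(body)
        # ...and package the return value into an HTTP response message.
        data = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# In the container this server would be kept alive between calls, e.g.:
#   HTTPServer(("0.0.0.0", 9000), FunctionServer).serve_forever()
```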
A Serverless function also faces dependency problems when deployed: it needs certain libraries in order to run. In the method of this application, some commonly used dependency libraries are deployed into the Docker image in advance, by directory mapping, when the image is built. If the developer needs libraries not provided in the original Docker image, those dependencies are installed locally before the Docker container starts and then deployed into the container by directory mapping; at the same time the installed dependencies are cached locally, so the next run avoids the installation time and the developer gets a better experience. If the developer depends on operating-system libraries, the method can install or upgrade the underlying OS libraries via an installation script when the Docker container runs, which affects Docker startup efficiency to some extent.
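The install-once-then-map caching idea can be sketched as follows; the cache layout, the install command in the comment, and the container site-packages path are all illustrative assumptions:

```python
import os

def dependency_mounts(deps, cache_root):
    """Compute the volume mappings for a function's extra dependencies.

    Each library is installed once into a per-library cache directory, then
    mapped into the container as a volume so later runs skip the install.
    """
    mounts, installed_now = [], []
    for dep in deps:
        cache_dir = os.path.join(cache_root, dep)
        if not os.path.isdir(cache_dir):
            # First run only: install into the local cache, e.g.
            #   pip install --target <cache_dir> <dep>
            os.makedirs(cache_dir)
            installed_now.append(dep)
        # Directory mapping: reuse the cached copy inside the container.
        mounts.append("-v %s:/usr/local/lib/python/site-packages/%s" % (cache_dir, dep))
    return mounts, installed_now
```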
Beneficial effects: compared with the prior art, the invention has the following advantages:
The functions are deployed through Docker-compose and managed uniformly by the Master Docker node, so the execution of the function flow can be monitored and information such as the flow's execution state, each function's execution result, each function's invocation mode, and the flow's execution time can be obtained. The Master Docker also serves as a communication channel between functions: it receives one function's call request to another, forwards the request to the callee, and returns the callee's execution result to the caller.
On the Master Docker node, information such as each Serverless function's execution state and deployment state can be obtained, so the state of the locally deployed Serverless functions can be monitored. When a function fails to run because of an unexpected situation, the developer can troubleshoot it by querying the Master Docker's log information; under this model the developer does not need to go into individual functions to check how they are running.
On the Master Docker node, the input and output of each Serverless function can be obtained, which makes it convenient for the developer to check the correctness of the deployed function flow's business logic. Because the execution flow of each Serverless function is simulated, each function reports its execution result to the Master Docker node, which stores the results for the developer to query.
An external-service co-deployment method for Serverless functions is provided. While executing, a function usually depends on external services such as the database service MySQL, the message-queue service RabbitMQ, or the key-value database Redis. In local deployment, the developer also wants these services deployed together with the function, so as to obtain a complete local execution of the function. The method comprises the following key components:
external service definition. The developer may define external services on which the function depends, such as databases (e.g., mySQL), message queues, cache services, etc. The definition of these external services includes service type, connection parameters and configuration information.
Service containerization. For external services, the system may containerize them for deployment in the form of a Docker container in the native operating environment. This helps manage and isolate external services.
Local service coordination. To ensure proper coordination of external services with the Serverless function, the system provides service coordination and communication mechanisms to enable the function to access external services and implement the corresponding dependencies.
And (5) automatic configuration. The system may automatically configure external services including initializing database table structures, setting connection parameters, etc. This helps to reduce the need for manual configuration.
To make it easier for developers to configure external services, the invention further adapts the original container images of the external services. The main adaptations are:
Configuration information of external services. Using an external service generally requires initializing a corresponding data model, and sometimes initializing basic data; the invention supports manual configuration of external services by the developer so as to simulate any extreme online scenario.
Operation logs of external services. Developers typically need to review the access operations on external services while running a function locally.
The invention provides an environment fully consistent with the previous run of the function, which helps the developer simulate part of the online environment and reproduce the scenario in which a fault occurred.
An integrated local deployment solution is provided, which ensures that the Serverless functions and the external services they depend on can work together in the local environment.
Drawings
FIG. 1 is a diagram of the overall architecture of a method according to an embodiment of the present invention.
FIG. 2 is a diagram of the overall architecture of a single function implementation of an embodiment of the present invention.
Detailed Description
The present invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope; modifications of equivalent form made by those skilled in the art after reading the invention fall within the scope defined by the appended claims.
A local deployment method of a Serverless function flow comprises the following steps: 1) the Serverless functions written by the developer are each deployed locally via Docker-compose, every Serverless function being packaged in a Docker image to isolate its service; 2) a Master Docker provides execution-state management for the Serverless function flow and running-state management for each Serverless function.
In step 1), when multiple functions need to be co-deployed to jointly provide an external service, Docker-compose is used to define the service names, manage the ports the Serverless functions expose, and implement point-to-point communication between the Serverless functions over an internal virtual network.
A Serverless function is a function written by the developer to handle a specific piece of business, deployed as a service on a cloud platform.
The virtual running environment is a virtual environment that supports the running of the developer's Serverless functions. It covers two aspects. (1) Virtualization of the service: the developer's Serverless function is run in a Docker container together with a server; the server keeps the function service online and exposes its interface externally, thereby isolating different function services so that each Serverless function runs in its own sandbox even though they share the same host. (2) Virtualization of the runtime: some commonly used dependency libraries are deployed into the Docker image in advance, by directory mapping, when the image is built; if the developer needs libraries the original Docker image does not provide, those dependencies are installed locally before the container starts and then mapped into the container as directories; if the developer depends on operating-system libraries, the underlying OS libraries can be installed or upgraded via an installation script when the container runs. Through this runtime virtualization the developer only has to declare the runtime environment the Serverless function depends on, and on this basis the invention provides a unified programming interface over the runtime environments of different Serverless functions.
The invention offers developers a Docker-based way to deploy Serverless functions, which in essence builds a virtual running environment. The process comprises: (1) Docker images for different runtimes are provided by pre-installing the compilers and interpreters required to run code; the images cover the NodeJS, Python, Go, and Java runtime environments, and when the developer specifies a Serverless function's runtime, the corresponding Docker image is selected; (2) when a Serverless function is deployed, the developer's function path is mapped to a path inside the Docker container, uploading the function into the container; a server running in the container points its request-handling function at the developer's Serverless function, realizing Serverless invocation of the developer's code; (3) a deployment method for underlying libraries is provided: before the container starts, the code packages required for deployment are installed in the local environment and mapped to the corresponding container paths, so that code running in the container can call the packages the developer provides.
Each Serverless function runs in an independent virtual environment, and when the developer deploys it locally, the function's URL interface is exposed externally. An HTTP request from a client triggers the developer's Serverless function; the client's request body is forwarded as the function's input, and the function's result is returned to the client as an HTTP message via the Master Docker. After execution, the Serverless function does not destroy the Docker container it runs in, but keeps listening for the next client call.
In step 2), the Master Docker serves as the unified API of the function flow and provides its entry-point invocation. The content it maintains includes: (1) service addresses of the functions: the Master Docker manages the URL of each Serverless function, abstracting function communication at the service level; (2) definition of the function flow: the Master Docker maintains the flow defined by the developer and, following a functional-programming view, generates the static graph corresponding to the flow definition; (3) execution state of the function flow: combining the static graph in (2), the Master Docker abstracts each Serverless function into an execution node on the graph and monitors the deployment state and execution state of the flow. Given these functional requirements, building the Master Docker component involves:
and monitoring the server. The Master Docker is used as a unified API interface of the function flow, and a server is operated to monitor the request of the client;
service forwarding functions. The Master Docker registers the service address of each Serverless function as the own route, establishes the mapping from the route to the service address of the Serverless function, and forwards the request to the corresponding Serverless function through the corresponding route when the monitoring server monitors the request call of the client;
the result is an aggregate function. When the Master Docker executes the function flow, a plurality of Serverless functions are called, the returned results are respectively called and aggregated into a dictionary, the dictionary describes the input and the output of each function flow execution step, and the aggregated results are returned to a developer by the Master Docker in the above result aggregation mode so as to obtain a single step execution result in the function flow execution process.
In step 2), the Master Docker generates the static graph corresponding to a developer-defined function flow by applying functional-programming ideas: (1) it treats every Serverless function involved in the flow as a pure function in the functional-programming sense, with no side effects and no state; (2) it treats the step definitions of the flow as pipelines, connecting several Serverless functions in series so that data flows from one function's output to the next function's input; (3) taking pure functions as the graph's nodes and pipelines as its edges, it generates the static graph corresponding to the Serverless function-flow definition.
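A minimal sketch of this graph construction (illustrative, not the patent's implementation): functions become nodes, consecutive pipeline steps become directed edges:

```python
def flow_static_graph(steps):
    """Build a static graph from an ordered list of flow steps: each
    Serverless function is a node (viewed as a pure function) and each
    pipeline step is a directed edge from one function's output to the
    next function's input."""
    nodes = list(dict.fromkeys(steps))   # unique function names, order preserved
    edges = [(steps[i], steps[i + 1]) for i in range(len(steps) - 1)]
    return {"nodes": nodes, "edges": edges}
```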
In the external-service co-deployment method for a Serverless function flow, the external services the flow needs at runtime are deployed, as Docker containers via Docker-compose, together with the deployment of the Serverless function flow.
An external service is a data-processing service the Serverless function depends on at runtime; the developer can deploy it together with the Serverless function so as to obtain a complete function execution flow.
External services have: (1) observability — based on the service's original Docker image, the invention provides a way to collect its operation log, storing the Serverless functions' access records in the external service's Docker container as logs so that the developer can monitor the service's access state; (2) ephemerality — the data held by the external service is temporary, so the developer can reproduce a running scenario; (3) configurability — the external services provided by the invention allow the developer to define the service's initial state and initial data, so as to simulate various online running scenarios.
A method for building the external-service image: a server runs on top of the service's original official image, configured as follows: (1) the server monitors the external service's operation logs, including but not limited to the binary log, query log, and error log; concretely, a monitoring process created in the external service's container, sharing the same network namespace, accesses the operation log through the image's default log path; (2) the server obtains real-time operation-log data over a network or socket connection to the external service, e.g., by connecting to the service inside the container with a TCP socket and using the service's built-in communication protocol; (3) the server can react to change events in the operation log with actions such as logging, alerting, or forwarding; in one embodiment the server contains a custom event-handling module that responds to specific data-change events, writing them to a log file or passing the event data to an external system.
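The core step of such a monitor — picking up only the log lines appended since the last read — can be sketched with a simple file-based stand-in (the patent describes socket-based collection; this simplification is an assumption for illustration):

```python
def read_new_log_lines(path, offset):
    """Return the log lines appended since the previous byte offset,
    together with the new offset to resume from on the next poll."""
    with open(path, "r") as f:
        f.seek(offset)
        lines = [line.rstrip("\n") for line in f]
        return lines, f.tell()
```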
The external-service co-deployment method comprises: (1) definition and configuration of external services. The developer first defines the external services the function flow depends on, such as database services (e.g., MySQL, PostgreSQL), message-queue services (e.g., Kafka, RocketMQ), and cache services (e.g., Redis). For each external service the developer provides: i) the service type, so the function flow knows how to connect to and use it; ii) connection parameters, including host address, port number, user name, and password, so the function flow can access the service; iii) configuration information, i.e., other configuration parameters as needed. (2) Containerization of external services. Each external service is containerized so that it runs in the local environment; the invention realizes this with the container-orchestration tool Docker-compose, which ensures the external services work alongside the function-flow containers, helps isolate them, and simplifies deployment and management.
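The service definition in (1) might be written as a small structure like the following; all field names and values are illustrative assumptions:

```python
# Sketch of an external-service definition with the three parts listed above.
EXTERNAL_SERVICES = [
    {
        "type": "mysql",                       # i) service type
        "connection": {                        # ii) connection parameters
            "host": "127.0.0.1",
            "port": 3306,
            "user": "dev",
            "password": "dev",
        },
        "config": {"init_sql": "schema.sql"},  # iii) other configuration
    },
]

def connection_url(svc):
    """Render a service definition as a connection URL a function could consume."""
    c = svc["connection"]
    return "%s://%s@%s:%d" % (svc["type"], c["user"], c["host"], c["port"])
```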
The following example, based on WordCount, describes how a Serverless function flow may be deployed locally. WordCount is a classic big-data MapReduce task: a long text is segmented, the segments are sent to multiple word counters for counting, and finally a single function merges all the partial results. The example illustrates an application of the Serverless computing model; in it, we describe how to run the WordCount function flow locally, including its external dependencies.
In the first step, the functions are defined. We define a WordCount function flow comprising the following functions:
executor: text retrieval function. This function is responsible for retrieving the text.
split: text segmentation function. This function splits the text data into words.
count: word count function. This function counts the number of occurrences of each word.
sort: the result is output as a function. This function will order the results output by each count function. And outputs the sorting result to an external database
In the second step, the flow is deployed locally. The functions that implement the processing logic are now ready, and each can be deployed locally. This comprises the following sub-steps.
Define the function flow. In this example, the executor function runs first to retrieve the text; the split function then divides the text into several sub-texts, which are passed to multiple count functions to process the words of each part; finally, the sort function performs a unified sorting step.
Define the external services. In this example, execution of the Serverless function flow relies on an external MySQL database service; the developer needs to provide the database's access address and its table schema.
Deploy the function flow. In this example, four already-written Serverless functions and the MySQL database service need to be deployed.
In the third step, the WordCount function flow is executed in the local runtime environment. By observing the log output of the Master Docker, the execution status of each Serverless function in the current function-flow instance can be observed.
In the fourth step, debugging and monitoring. A developer may use a local debugging tool (e.g., VSCode or IDEA) to monitor execution of the function flow, set breakpoints, and inspect function state. This helps debug and optimize the function flow.
This example demonstrates how a Serverless function flow, including the external services it depends on, can be deployed locally. It provides a powerful tool for developing and testing Serverless function flows locally while reducing the cost and complexity of deploying and testing in the cloud.

Claims (9)

1. A method for locally deploying a Serverless function flow, comprising: a method for maintaining a plurality of Serverless functions locally and a method for cooperatively deploying the external services on which the functions depend, implemented as follows: 1) a plurality of Serverless functions are deployed locally through the Docker-Compose technology, and each Serverless function is encapsulated in a Docker image so as to isolate the service of each function; 2) a Master Docker provides execution-state management for the Serverless function flow and running-state management for each Serverless function.
2. The method for locally deploying a Serverless function flow according to claim 1, wherein in 1), each Serverless function runs in an independent virtual running environment; when a plurality of functions need to be cooperatively deployed to jointly provide a service externally, the name of the externally provided service is defined through the Docker-Compose technology, the externally exposed ports of the Serverless functions are managed, and point-to-point communication between the Serverless functions is realized by forming an internal virtual network;
the Serverless function refers to a function written by a developer to process business logic, which is deployed as a service on a cloud platform.
3. The method for locally deploying a Serverless function flow according to claim 2, wherein the virtual running environment refers to a virtual environment supporting the running of Serverless functions; the virtual environment covers two aspects: (1) virtualization of the service: the Serverless function runs through Docker; a server runs in the Docker container, keeps the function service online, and exposes the interface of the function service, thereby isolating different function services, so that on the same host a developer only needs to provide the Serverless function; (2) virtualization of the runtime: frequently used dependency libraries are pre-deployed into the Docker image by directory mapping when the image is built; if a dependency library required by the developer is not provided in the Docker image, that dependency is installed locally before the Docker container starts and is then deployed into the container by directory mapping; if the libraries the developer depends on require operating-system libraries, the underlying operating-system libraries are installed or upgraded when the Docker container runs by means of an installation script.
4. The method of claim 2, wherein the process of building the runtime virtual environment comprises: (1) providing Docker images for different running modes by pre-installing the compilers and interpreters required for code execution, the images respectively comprising a NodeJS running environment, a Python running environment, a Go running environment, and a Java running environment; when a developer designates the running environment of a Serverless function, the Docker image corresponding to that environment is selected; (2) when a Serverless function is deployed, the developer's Serverless function path is address-mapped to a Docker container path and the function is uploaded into the Docker container; a server runs in the container, and its request-handling function points to the Serverless function provided by the developer, thereby realizing Serverless invocation of the developer's function; (3) providing a deployment method for underlying libraries: before the container starts, the code packages required for deployment are installed in the local environment, the local code packages are mapped to the corresponding container addresses by code mapping, and the code packages provided by the developer can then be called once running in the container;
each Serverless function runs in an independent virtual environment, and when a developer deploys a Serverless function locally, the URL interface of the function is exposed; an HTTP request from a client starts the developer's Serverless function by invocation, the client's request body is forwarded as the function's input parameter, and the result of the function execution is returned to the client as an HTTP message through the Master Docker; after execution, the Serverless function does not destroy the Docker container it runs in, but continues to listen for the next client call.
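The per-function server behavior described above, namely pointing the request handler at the developer's function, forwarding the request body as the input parameter, and continuing to listen after each call, can be sketched with Python's standard library; the `/app/handler.py` path and the `handle` entry-point name are assumptions made for the sketch.

```python
import importlib.util
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_function(path="/app/handler.py", name="handle"):
    """Load the developer's Serverless function from the mapped directory.

    The path and entry-point name are assumed conventions for this sketch.
    """
    spec = importlib.util.spec_from_file_location("handler", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return getattr(mod, name)

def make_handler(func):
    """Build a request handler whose POST logic points at `func`."""
    class FunctionHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Forward the client's request body as the function's input parameter.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            result = func(json.loads(body or b"{}"))
            payload = json.dumps(result).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
    return FunctionHandler

def serve(func, port=9000):
    # serve_forever keeps the container alive after each call,
    # listening for the next client invocation.
    HTTPServer(("0.0.0.0", port), make_handler(func)).serve_forever()
```

A container would call `serve(load_function())` at startup; the function result travels back to the client as an HTTP message.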
5. The method for locally deploying a Serverless function flow according to claim 1, wherein in 2), the Master Docker serves as the unified API interface of the function flow and provides the entry-point calling mode of the function flow; the content it maintains comprises: (1) the service address of each function: the Master Docker manages the URL address of each Serverless function, realizing the abstraction of function communication at the function-service level; (2) the function flow definition: the Master Docker maintains the function flow defined by the developer and, following the functional-programming idea, generates the static function-flow graph corresponding to the definition; (3) the execution state of the function flow: combined with the static graph described in (2), the Master Docker abstracts each Serverless function into an execution node on the graph and monitors the deployment state and execution state of the function flow; the components constituting the Master Docker comprise:
a listening server: the Master Docker, as the unified API interface of the function flow, runs a server to listen for client requests;
a service forwarding function: the Master Docker registers the service address of each Serverless function as one of its own routes, establishing a mapping from routes to Serverless function service addresses; when the listening server detects a client request, the request is forwarded to the corresponding Serverless function through the matching route;
a result aggregation function: when the Master Docker executes the function flow, it calls a plurality of Serverless functions and aggregates their respective return values into a dictionary describing the input and output of each execution step; the Master Docker returns the aggregated result to the developer, who thus obtains the single-step execution results of the function-flow execution.
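A minimal Python sketch of the route registration, request forwarding, and result aggregation described above; the route-table shape and the `run_flow` helper are assumptions made for illustration.

```python
import json
import urllib.request

ROUTES = {}  # route name -> service address of the Serverless function

def register(route, service_address):
    """Register a Serverless function's service address as a Master route."""
    ROUTES[route] = service_address

def forward(route, payload):
    """Forward a request to the Serverless function behind `route`."""
    req = urllib.request.Request(
        ROUTES[route], data=json.dumps(payload).encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def run_flow(steps, initial_input, call=forward):
    """Call each step in order, aggregating per-step input/output.

    Returns the final result plus a dictionary that records the input
    and output of every execution step, mirroring the result aggregation
    the Master Docker returns to the developer.
    """
    trace, data = {}, initial_input
    for route in steps:
        output = call(route, data)
        trace[route] = {"input": data, "output": output}
        data = output
    return data, trace
```

The `call` parameter is injectable so the aggregation logic can be exercised without live containers.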
6. The method for locally deploying a Serverless function flow according to claim 5, wherein in 2), the process by which the Master Docker generates the static function-flow graph from the developer-defined function flow, following the functional-programming idea, comprises: (1) the Master Docker regards the Serverless functions involved in the developer-defined function flow as pure functions; (2) the Master Docker regards the step definitions of the function flow as a pipeline, concatenating multiple Serverless functions so that data flows from the output of one Serverless function to the input of another; (3) taking the Serverless functions as the nodes of the graph and the pipelines as its edges, the static graph corresponding to the Serverless function-flow definition is generated.
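The graph generation described above can be sketched as follows, assuming a pipeline is simply an ordered list of function names: functions become nodes, and each pipeline adjacency becomes a directed edge from one function's output to the next function's input.

```python
def build_static_graph(pipelines):
    """Build a {nodes, edges} static graph from a list of pipelines.

    Each pipeline is assumed to be a list of function names in data-flow
    order; this representation is an illustration, not the patent's
    actual internal format.
    """
    nodes, edges = set(), set()
    for pipeline in pipelines:
        nodes.update(pipeline)
        # Data flows from the output of one function to the input of the next.
        edges.update(zip(pipeline, pipeline[1:]))
    return {"nodes": sorted(nodes), "edges": sorted(edges)}
```

For the WordCount example, one pipeline per parallel counter yields a fan-out/fan-in graph around the split and sort nodes.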
7. The method for locally deploying a Serverless function flow according to claim 1, wherein the external services of the Serverless function flow are cooperatively deployed, as Docker containers, together with the Serverless function flow through the Docker-Compose technology;
the external service refers to a data-processing service that the Serverless functions depend on at runtime; the developer deploys the Serverless functions together with the external services to obtain a complete function execution flow;
the external service: access records of the Serverless functions to the external service are stored as logs in the external service's Docker container so that the developer can monitor its access state; the data processed by the external service is temporary so that the developer can reproduce the running scenario; the external service allows the developer to define the service start-up state and the service start-up data so as to simulate various online running scenarios.
8. The method for locally deploying a Serverless function flow according to claim 1, wherein the method for constructing the external service image runs a server on the original official image of the external service, the server being configured as follows: (1) the server monitors the operation logs of the external service, including the binary log, the query log, and the error log; a monitoring process is created in the same network namespace as the external service container to access the operation log, and the monitoring process acquires the log file through the default log path of the external service image; (2) the server acquires real-time operation log data through a network or socket connection with the external service; the server connects to the external service container over a TCP socket and acquires the real-time log data using the service's built-in communication protocol; (3) the server can perform specific operations in response to change events in the operation log; the server contains a custom event-handling module that responds to specific data-change events, including logging the events to a log file or transmitting the event data to an external system.
9. The method of claim 1, wherein the steps of the external service co-deployment method comprise: (1) definition and configuration of external services: first, the developer defines the external services on which the function flow depends; for each external service, the following must be provided:
i) the service type, specifying the type of the external service;
ii) connection parameters, including host address, port number, user name, and password;
iii) configuration information, providing other configuration parameters as required;
(2) external service containerization: each external service is containerized, implemented using the container orchestration tool Docker-Compose.
CN202311462881.4A 2023-11-06 2023-11-06 Local deployment method of Serverless function flow Pending CN117519842A (en)

Publications (1)

Publication Number Publication Date
CN117519842A true CN117519842A (en) 2024-02-06

Family

ID=89744888



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination