CN116521334A - CI/CD task execution method - Google Patents


Info

Publication number
CN116521334A
Authority
CN
China
Prior art keywords: target, pipeline, description, code, determining
Prior art date
Legal status
Pending
Application number
CN202310329666.0A
Other languages
Chinese (zh)
Inventor
邹建列
王仁达
李楠轩
王子健
袁坤
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202310329666.0A
Publication of CN116521334A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3867 - Concurrent instruction execution using instruction pipelines
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources to service a request

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a CI/CD task execution method in the technical field of cloud computing, comprising the following steps: receiving pipeline description parameters of the CI/CD and determining the association relation between the pipeline description parameters and a trigger component; when the code repository generates an event, determining the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; sending the second description parameter to the target user so that the target user can generate a running environment for the target pipeline on a cloud platform using the second description parameter; and sending the target subtasks in the target pipeline to the target user so that the target user executes the target subtasks in the running environment. The method can greatly reduce resource consumption and avoid resource waste, can support CI/CD task delivery with code repositories of various forms, and can achieve data isolation among users, ensuring data security.

Description

CI/CD task execution method
Technical Field
The application relates to the field of cloud computing, in particular to a CI/CD task execution method.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Serverless is a model of cloud computing. Built on platform as a service (PaaS), serverless operation provides a micro-architecture in which end clients do not need to deploy, configure or manage server resources; the server resources required to run code are provided by a cloud platform. The Serverless architecture processes incoming requests by automatically running a sufficient number of service instances, and is characterized by high scalability, workflow-driven operation, pay-per-use billing, and the like.
In recent years, a large number of individual and enterprise-level users have put Serverless products into actual production. In practice, large enterprise-level users reuse their existing in-house delivery processes: with a certain amount of secondary development, they can implement CI/CD (a practice of frequently delivering applications to customers by introducing automation into the application development stages) and deliver code to the Serverless product. Small and medium-sized enterprises and individual users, however, have no spare effort to invest in this work and mostly resort to manual testing and deployment through the Serverless console. Existing solutions that support users in deploying components provided by commercial vendors on their own infrastructure require the users to keep a cluster running at all times, which wastes idle computing resources.
Disclosure of Invention
The embodiments of the application provide a CI/CD task execution method that, based on a Serverless architecture, can execute CI/CD tasks in a user-controllable and trusted execution environment with low consumption of computing resources, thereby improving the user's code delivery efficiency.
According to an aspect of the present application, there is provided a CI/CD task execution method, which is applied to a Serverless architecture, the method including: receiving pipeline description parameters of the CI/CD, and determining the association relation between the pipeline description parameters and a trigger component; the pipeline description parameters comprise a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline, and the second description parameter is used for describing the resources required to execute the pipeline; when the code repository generates an event, determining the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; sending the second description parameter to a target user, so that the target user can generate a running environment for the target pipeline on a cloud platform using the second description parameter; and sending the target subtasks in the target pipeline to the target user so that the target user executes the target subtasks in the running environment.
According to another aspect of the present application, there is also provided a CI/CD task execution method, which is applied to a Serverless architecture, the method including: receiving a second description parameter sent by a provider to a target user, and generating a running environment for the target pipeline on a cloud platform using the second description parameter; wherein the second description parameter is used for describing the resources required to execute the pipeline; the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; and executing the target subtask in the running environment when the target subtask of the target pipeline is received.
According to another aspect of the present application, there is also provided an electronic apparatus including: a processor; and a memory storing a program, wherein the program comprises instructions that when executed by the processor cause the processor to perform a method according to the above.
According to another aspect of the present application, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method steps according to the above.
According to another aspect of the present application, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the above-mentioned method steps.
In the embodiments of the application, pipeline description parameters of the CI/CD are received, and the association relation between the pipeline description parameters and a trigger component is determined; the pipeline description parameters comprise a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline; when the code repository generates an event, the target trigger component corresponding to the event is determined according to a pre-configured correspondence, and the target pipeline corresponding to the target trigger component is determined according to the association relation; the second description parameter is sent to a target user so that the target user generates a running environment for the target pipeline on a cloud platform using the second description parameter; and the target subtasks in the target pipeline are sent to the target user so that the target user executes the target subtasks in the running environment. The application uses the Serverless architecture to process CI/CD pipeline tasks, so resource consumption is greatly reduced when no task is executing and resource waste is avoided. Because the trigger component triggers the target pipeline when the code repository generates an event, CI/CD task delivery can be supported with code repositories of various forms. In addition, the second description parameter allows the target user to customize the running environment of the CI/CD task, achieving data isolation among users and ensuring data security.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 illustrates a first flowchart of a CI/CD task execution method according to an embodiment of the application;
FIG. 2 illustrates a second flowchart of a CI/CD task execution method according to an embodiment of the application;
FIG. 3 shows a Serverless CI/CD task execution engine architecture flow diagram;
FIG. 4 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Considering that the usage scenarios of Serverless are relatively fixed, most users can in fact reuse the same set of delivery processes. Small and medium-sized users therefore should not have to implement this complex but well-determined process again, nor pay a large cost for it and consume more resources. If the Serverless platform can provide the necessary infrastructure to help users complete this part of the work, small and medium-sized users can be freed from it and their code delivery cost can be reduced.
To address the above problems, a first existing solution supports customers in deploying components provided by a commercial vendor on their own infrastructure. The customer can trigger deployment by placing the corresponding CI/CD description file in the code and submitting the code. This solution requires the customer to keep a cluster running at all times and to bear the cost of idle computing resources. For small and medium-sized users, this means paying not only a certain licensing cost but also the cost of the cluster's idle resources. Without a multi-tenant scheme, sharing the cluster with other users is risky, so small and medium-sized users cannot reduce cost by sharing a cluster, and the product provider cannot directly pool users to provide the service.
In a second existing solution, the user only needs to submit a description file of the CI/CD process to the code repository, and the solution executes the CI/CD process each time the user submits it. Although this solution is also Serverless from the customer's point of view, its product form is bound to the code repository: when using a GitLab code repository (an online code repository based on Git, a version control system used to store and version code), code repositories on GitHub are not supported, and when using a GitHub code repository, code repositories on GitLab are likewise not supported. In addition, in this solution tasks are images running in the provider's cluster; only user-defined runtime images are supported, custom network and container configuration is not, and without adequate observability the user cannot see the resource occupation and efficiency of task runs.
Against this background, the application provides a CI/CD task execution method. The scheme is a Serverless multi-tenant scheme: the user does not need to operate and maintain a cluster or pay for the cluster's idle time, and the product provider does not need to pay for the limitations of a cluster. The scheme can be applied to code repositories of any form and is triggered solely by a trigger component. The scheme is built on a Serverless cluster, and the user can customize the function that runs a task, so that the task runs in a customized network environment and can use cloud resources not exposed during the build process, which is more flexible than merely defining the container image of the running task.
The following is a description of terms involved in the present application.
Serverless: a low-cost computing architecture in which no servers need to be managed.
CI/CD: continuous integration and continuous delivery, a DevOps technique that helps customers build and deploy quickly and frequently.
Management and control component one: responsible for providing pipeline-related APIs (Application Programming Interfaces).
Management and control component two: responsible for orchestrating pipeline execution.
Work component: responsible for the actual build and deployment work.
In this embodiment, a CI/CD task execution method is provided, fig. 1 is a flowchart of a CI/CD task execution method according to an embodiment of the present application, and method steps involved in fig. 1 are described below.
First, it should be noted that Serverless is an event-driven computing model: the user only needs to write and upload code and does not need to care about server management and maintenance. In the Serverless model, the cloud service provider automatically manages and allocates computing resources, and users pay only for the computing resources actually used, without purchasing and maintaining servers in advance. The method therefore builds a pipeline system on the Serverless architecture; that is, the components or program products used to perform the following steps S102-S108 and their related steps are Serverless, and the user is not concerned with the management and maintenance of servers.
Step S102, receiving pipeline description parameters of the CI/CD, and determining the association relation between the pipeline description parameters and a trigger component; the pipeline description parameters comprise a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources needed to execute the pipeline.
In this step, the pipeline description parameters are submitted by the user and describe the pipeline system to be built. The pipeline system is used to execute CI/CD tasks, and each CI/CD task may be a pipeline. Each set of pipeline description parameters may describe one pipeline, so when a set of CI/CD pipeline description parameters is received, that set of parameters is bound to a trigger component to establish the association relation between the two.
The trigger component can be configured according to actual requirements and is used to trigger a pipeline. For example, a Webhook (an HTTP (Hypertext Transfer Protocol)-based callback) may be configured to implement lightweight, event-driven communication between two Application Programming Interfaces (APIs). The name Webhook is a simple combination of "web" (indicating HTTP-based communication) and "hook" (a programming mechanism that allows applications to intercept calls or other events of interest). A Webhook hooks an event that occurs on the server application and prompts the server to send a payload to the client over the web. A Webhook receives calls only when certain conditions are met, for example when a connected external system updates its data. Webhooks are better suited to smaller data requests and lighter tasks than to acting as a primary data transfer service; they are typically not used to request data on a regular basis and are triggered only when new data is available.
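For illustration only, the following minimal sketch (in Python, not part of the claimed scheme) shows how an HTTP callback of this kind could be received and filtered so that only qualifying repository events are passed on to the trigger logic; the port, the payload fields and the handled event names are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Events we choose to react to; everything else is ignored (hypothetical names).
HANDLED_EVENTS = {"push", "merge_request"}


class WebhookReceiver(BaseHTTPRequestHandler):
    """Receives HTTP callbacks posted by a code repository."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        event_type = payload.get("event_type", "")
        if event_type in HANDLED_EVENTS:
            # Hand the event over to the trigger logic (defined elsewhere).
            print(f"event received: {event_type} on {payload.get('repository')}")
        # Acknowledge the callback regardless, so the repository does not retry.
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookReceiver).serve_forever()
```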
Each set of pipeline description parameters comprises a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline. One pipeline can be composed of a plurality of subtasks, and execution of the pipeline is realized by executing the subtasks one after another. The resources needed when the pipeline is executed include storage resources, computing resources, and network resources, which are described by the second description parameter.
In this step, the second description parameter is submitted together with the rest of the pipeline description parameters, and it can be used to customize the execution environment of the pipeline, which reduces debugging difficulty.
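As a non-limiting illustration, the sketch below models one possible shape of the pipeline description parameters; the field names (subtasks, depends_on, vpc_id, memory_mb, container_image and so on) are assumptions for this sketch and not a format defined by the application.

```python
from dataclasses import dataclass, field


@dataclass
class FirstDescriptionParameter:
    """Describes one subtask of the pipeline and its dependencies."""
    name: str
    command: str
    depends_on: list[str] = field(default_factory=list)


@dataclass
class SecondDescriptionParameter:
    """Describes the resources needed to execute the pipeline."""
    vpc_id: str            # network resource (assumed field)
    memory_mb: int         # computing resource (assumed field)
    storage_bucket: str    # storage resource (assumed field)
    container_image: str   # custom runtime image (assumed field)


@dataclass
class PipelineDescription:
    pipeline_id: str
    subtasks: list[FirstDescriptionParameter]
    resources: SecondDescriptionParameter


# Example: a build-then-deploy pipeline, bound later to a trigger component.
example = PipelineDescription(
    pipeline_id="demo-pipeline",
    subtasks=[
        FirstDescriptionParameter("build", "make build"),
        FirstDescriptionParameter("deploy", "make deploy", depends_on=["build"]),
    ],
    resources=SecondDescriptionParameter(
        vpc_id="vpc-123", memory_mb=1024,
        storage_bucket="artifacts", container_image="builder:latest",
    ),
)
```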
Step S104, when the code repository generates an event, determining the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation.
In this step, the code repository may be used to store code, receive code newly submitted by users, and maintain the code already stored in it. The form, permissions, etc. of the code repository can be selected according to actual requirements; the embodiments of the invention place no particular limitation on this.
A code repository generating an event refers to an operation on the code occurring in the code repository, such as a code submission or a code move. In the attributes of the code repository, the correspondence between trigger components and events is pre-configured. When the code repository generates an event, the target trigger component corresponding to the event is determined according to the pre-configured correspondence, and the target pipeline corresponding to the target trigger component is determined according to the association relation. The target pipeline is the pipeline, determined from the plurality of pipelines, that is currently to be processed.
In this step, the pipeline is triggered automatically by the trigger component, so CI/CD task delivery can be supported with code repositories of various forms.
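Purely as an illustration, the lookup performed in this step can be pictured as two dictionary resolutions, as in the sketch below; the repository names, event keys and identifiers are assumptions, and in the real scheme the pre-configured correspondence lives in the code repository's attributes while the association relation is held by the provider.

```python
# Pre-configured correspondence between repository events and trigger components
# (configured in the code repository's attributes; keys are illustrative).
EVENT_TO_TRIGGER = {
    ("demo-repo", "push"): "trigger-webhook-1",
    ("demo-repo", "merge_request"): "trigger-webhook-2",
}

# Association relation recorded when the pipeline description parameters were received.
TRIGGER_TO_PIPELINE = {
    "trigger-webhook-1": "demo-pipeline",
    "trigger-webhook-2": "release-pipeline",
}


def resolve_target_pipeline(repository: str, event_type: str) -> str | None:
    """Determine the target trigger component, then the target pipeline."""
    target_trigger = EVENT_TO_TRIGGER.get((repository, event_type))
    if target_trigger is None:
        return None  # no trigger component configured for this event
    return TRIGGER_TO_PIPELINE.get(target_trigger)


print(resolve_target_pipeline("demo-repo", "push"))  # -> demo-pipeline
```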
And step S106, the second description parameters are sent to a target user, so that the target user generates the running environment of the target pipeline on a cloud platform by using the second description parameters.
In this step, it should be noted that the method may be applied to a product provider, which may provide services to a plurality of users. The provider and the users hold different permissions for their accounts on the cloud platform, and permissions also differ between users. The second description parameter is sent to the target user, and the target user uses it, within the target user's own account permissions, to customize the running environment of the target pipeline on the cloud platform; that is, the second description parameter is sent to the target user so that the target user generates the running environment of the target pipeline on the cloud platform using the second description parameter. The target user is a user who wants to use the service provided by the product provider and may be any user corresponding to the provider on the cloud platform. In this step, data isolation between users can be achieved and data security is ensured.
It should be noted that the cloud platform is a service platform based on cloud computing technology and provides a series of cloud computing services, including computing, storage, network, security and other services. The cloud platform can help enterprises and individuals quickly build and deploy applications and improves their reliability, scalability and security. The cloud platform can also provide elastic computing, pay-per-use, automatic management and other features, helping users reduce IT (Information Technology) costs and improve the utilization of IT resources.
It should further be noted that the running environment may include any information needed for the work components that execute the target pipeline, such as network configuration information, container configuration information, and image configuration information.
Step S108, sending the target subtasks in the target pipeline to the target user so that the target user executes the target subtasks in the running environment.
In this step, after the target user has built, on the cloud platform, the running environment required to execute the target pipeline, the provider sends one or more executable subtasks of the pipeline to the target user as target subtasks; that is, the target subtasks in the target pipeline are sent to the target user so that the target user executes them in the running environment. The target user grants authorization to the cloud platform, so the target subtasks can be executed, based on the customized running environment in the cloud platform, in the order in which they are received until all subtasks in the target pipeline are completed and execution of the target pipeline is finished. It should be noted that an executable subtask is a subtask that has no dependencies on other subtasks or whose dependencies have all been completed.
It should be noted that Serverless is the technical architecture adopted by the cloud platform; a user may register an account on the cloud platform and perform data processing under that account. The construction of the pipeline running environment and the execution of the target subtasks are completed by the cloud platform in the respective accounts according to each user's authorization.
In this step, the provider and the user only need to pay for the resources consumed while the pipeline is running, so the idle-time cost is reduced to zero, resource consumption is greatly reduced, and resource waste is avoided.
In the embodiment of the application, pipeline description parameters of the CI/CD are received, and the association relation between the pipeline description parameters and a trigger component is determined; the pipeline description parameters comprise a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline; when the code repository generates an event, the target trigger component corresponding to the event is determined according to a pre-configured correspondence, and the target pipeline corresponding to the target trigger component is determined according to the association relation; the second description parameter is sent to a target user so that the target user generates a running environment for the target pipeline on a cloud platform using the second description parameter; and the target subtasks in the target pipeline are sent to the target user so that the target user executes the target subtasks in the running environment. The application uses the Serverless architecture to process CI/CD pipeline tasks, so resource consumption is greatly reduced when no task is executing and resource waste is avoided. Because the trigger component triggers the target pipeline when the code repository generates an event, CI/CD task delivery can be supported with code repositories of various forms. In addition, the second description parameter allows the target user to customize the running environment of the CI/CD task, achieving data isolation among users and ensuring data security.
In a possible implementation manner, determining the target pipeline corresponding to the target trigger component according to the association relation may be performed according to the following steps: determining a target pipeline description parameter corresponding to the target trigger component according to the association relation; and determining a plurality of subtasks of the target pipeline and the dependency relationship among the plurality of subtasks according to the target pipeline description parameters and the events corresponding to the target trigger components.
In the embodiment of the invention, after the target trigger component corresponding to the event generated in the code repository is determined from the plurality of trigger components based on the correspondence configured in the code repository, the pipeline description parameters bound to the target trigger component are taken as the target pipeline description parameters. According to the target pipeline description parameters and the event corresponding to the target trigger component, the code to be processed can be determined; the code is split into multiple groups of sub-code based on the target pipeline description parameters, and each group of sub-code is taken as a subtask, thereby obtaining a plurality of subtasks.
Since dependencies exist between the groups of sub-code, the dependency relationships among the plurality of subtasks can be determined according to the target pipeline description parameters. The plurality of subtasks together with the dependency relationships among them are taken as the target pipeline.
Through this step, code submitted to the code repository is converted into pipeline tasks, so that CI/CD is realized on the Serverless architecture.
In one possible implementation, the sending of the target subtasks in the target pipeline to the target user may be performed as follows: determining the subtask processing sequence of the target pipeline according to the first description parameter; and determining a target subtask according to the subtask processing sequence, and sending the target subtask to the target user.
In the embodiment of the invention, the first description parameter is used for describing the subtasks of the pipeline and may specifically include the function each subtask is to implement, its input information, its output information, the dependency relationships among the subtasks, and the like. The pipeline is analyzed based on the first description parameter, the tasks that can be executed are found, and the subtask processing order of the target pipeline is determined. In this step, executable tasks are sent to the target users for execution, which guarantees that the subtasks of each user are processed under that user's own permissions, ensures data security, and achieves data isolation in a multi-tenant scenario.
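For illustration only, one plausible way to derive the processing order from the subtask dependencies described by the first description parameter is a topological walk, as sketched below; the subtask structure is a simplified assumption.

```python
from collections import deque


def processing_order(subtasks: dict[str, list[str]]) -> list[str]:
    """Return subtask names in an order that respects their dependencies.

    `subtasks` maps each subtask name to the names it depends on
    (a simplified stand-in for the first description parameter).
    """
    remaining = {name: set(deps) for name, deps in subtasks.items()}
    ready = deque(name for name, deps in remaining.items() if not deps)
    order: list[str] = []
    while ready:
        current = ready.popleft()
        order.append(current)
        for name, deps in remaining.items():
            if current in deps:
                deps.remove(current)
                if not deps and name not in order and name not in ready:
                    ready.append(name)
    if len(order) != len(subtasks):
        raise ValueError("cyclic dependency in pipeline description")
    return order


print(processing_order({"build": [], "test": ["build"], "deploy": ["test"]}))
# -> ['build', 'test', 'deploy']
```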
In one possible implementation, the code repository generating an event comprises: processing code through a GitLab code repository and/or a GitHub code repository, wherein processing the code includes submitting code and/or adjusting code.
In the embodiment of the invention, a user can process code through a GitLab code repository and/or a GitHub code repository; the two forms of code repository can be used simultaneously or alternatively. It should be noted that, in addition to the two types of code repository mentioned above, other types of code repository may be adopted; this may be determined according to user requirements and is not specifically limited in the embodiments of the invention.
It should be noted that GitHub is currently one of the larger code hosting platforms, while a GitLab code repository has a complete management interface and permission control. When selecting a code repository, GitLab is the better choice from the perspective of code privacy, while GitHub may be better suited to hosting the code of open-source projects.
The ways of processing code may include submitting code and/or adjusting code: the user may submit new code to both types of code repository and may also adjust code already present in the repository. Ways of adjusting code include, for example, modifying the code content, changing the storage location of the code, merging code, branching code, and the like. It should be noted that the ways of processing code include, but are not limited to, those above and may be selected according to actual needs; the embodiments of the invention place no particular limitation on this. The specific operations of the various ways of processing code can be taken as different events, and the correspondences between the events and the trigger components are set through the attribute parameters of the code repository.
In this step, because the scheme triggers the pipeline automatically through the trigger component, triggering of the pipeline is independent of the form of the code repository, so code repositories of different forms can be supported.
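By way of a non-limiting example, because triggering goes through the trigger component rather than through any repository-specific integration, payloads from different repository forms only need to be normalized into one internal event shape, as in the sketch below; the payload fields shown are simplified assumptions, and the real GitLab and GitHub payloads carry many more fields and differ in detail.

```python
def normalize_event(source: str, payload: dict) -> dict:
    """Map repository-specific webhook payloads onto one internal event shape.

    The field names used here are simplified assumptions, not the exact
    schemas published by GitLab or GitHub.
    """
    if source == "gitlab":
        return {
            "repository": payload.get("project", {}).get("path_with_namespace"),
            "event_type": "push" if payload.get("object_kind") == "push" else "other",
            "ref": payload.get("ref"),
        }
    if source == "github":
        return {
            "repository": payload.get("repository", {}).get("full_name"),
            "event_type": payload.get("event_type", "push"),
            "ref": payload.get("ref"),
        }
    raise ValueError(f"unsupported repository form: {source}")


print(normalize_event("gitlab", {"object_kind": "push",
                                 "project": {"path_with_namespace": "team/demo"},
                                 "ref": "refs/heads/main"}))
```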
In one possible implementation, the scheme may further perform the following steps: persisting the pipeline description parameters; and receiving and recording the execution state and execution result of the target pipeline.
In the embodiment of the invention, the pipeline description parameters can be persisted through an RDS (Relational Database Service) based on MySQL (a relational database management system) or the like. The specific persistence approach can be selected according to actual requirements and is not specifically limited in the embodiments of the invention.
In the embodiment of the invention, after the user completes the subtasks of the target pipeline, the user returns the task execution state and execution result to the provider, and the provider records the execution state and execution result of the target pipeline, giving the user observability into the resource occupation and efficiency of task runs. The execution state may include to be executed, executing, executed, and the like, and the execution result may include execution success, execution failure, and the like; these may be set according to actual requirements and are not specifically limited in the embodiments of the invention.
After the pipeline task is successfully executed, data in the form of binary files, approval information, notification and the like can be obtained.
In addition, it should be noted that RDS is an online database service that is ready-to-use, stable, reliable, and flexible. The system has multiple safety protection measures and a perfect performance supervision system, and provides professional database backup, recovery and optimization schemes. MySQL is a relational database management system that keeps data in different tables rather than placing all data in one large warehouse, which increases speed and flexibility.
In these steps, the pipeline description parameters are persisted to support troubleshooting, and the execution states and results returned by users are recorded, providing complete observability.
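For illustration only, the sketch below uses SQLite from the Python standard library as a stand-in for the MySQL-based RDS mentioned above; the table layout and column names are assumptions.

```python
import json
import sqlite3

conn = sqlite3.connect("pipelines.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS pipeline_description (
    pipeline_id TEXT PRIMARY KEY,
    description_json TEXT NOT NULL          -- persisted pipeline description parameters
);
CREATE TABLE IF NOT EXISTS pipeline_run (
    run_id INTEGER PRIMARY KEY AUTOINCREMENT,
    pipeline_id TEXT NOT NULL,
    state TEXT NOT NULL,                    -- to be executed / executing / executed
    result TEXT                             -- execution success / execution failure
);
""")


def persist_description(pipeline_id: str, description: dict) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO pipeline_description VALUES (?, ?)",
        (pipeline_id, json.dumps(description)),
    )
    conn.commit()


def record_run(pipeline_id: str, state: str, result: str | None = None) -> None:
    conn.execute(
        "INSERT INTO pipeline_run (pipeline_id, state, result) VALUES (?, ?, ?)",
        (pipeline_id, state, result),
    )
    conn.commit()


persist_description("demo-pipeline", {"subtasks": ["build", "deploy"]})
record_run("demo-pipeline", "executed", "execution success")
```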
Referring to the Serverless CI/CD task execution engine architecture flow diagram of FIG. 3, the implementation of the method is described in one embodiment.
In FIG. 3, management and control component one, management and control component two and the work component are all components in Serverless form; based on these three components, the CI/CD task execution method provided by the embodiment of the invention can be implemented. Management and control component one is responsible for recording and persisting the pipeline and task descriptions as well as the task execution states and results. Management and control component two is responsible for executing specific pipelines and tasks.
The user submits descriptions of the CI/CD pipeline and its tasks in the form of templates to management and control component one, which persists the templates to the database. Management and control component one binds the corresponding pipeline template to the trigger component Webhook. When a code submission occurs in the code repository, the corresponding pipeline is triggered through the Webhook. Management and control component one submits the code submission information and the pipeline template information to a message queue and, through a trigger of the message queue, prompts management and control component two to start executing the pipeline.
Management and control component two analyzes the pipeline definition, finds the tasks that can be executed, and triggers the work component to execute them. When the work component returns a result, management and control component two continues to analyze and submit executable tasks to the message queue until no executable tasks remain.
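As a non-limiting illustration, the orchestration loop of management and control component two can be summarized by the sketch below; an in-process queue stands in for the message queue, and the call to the work component is a placeholder.

```python
import queue


def run_work_component(task: str) -> str:
    """Placeholder for the work component doing the actual build/deploy work."""
    print(f"executing {task}")
    return "success"


def orchestrate(subtasks: dict[str, list[str]]) -> dict[str, str]:
    """Management-and-control-component-two style loop: keep submitting
    executable tasks until none remain (simplified, single-process sketch)."""
    pending = {name: set(deps) for name, deps in subtasks.items()}
    results: dict[str, str] = {}
    ready: "queue.Queue[str]" = queue.Queue()

    for name, deps in pending.items():
        if not deps:
            ready.put(name)

    while len(results) < len(subtasks):
        task = ready.get()                        # analogous to a message-queue trigger
        results[task] = run_work_component(task)  # work component returns a result
        for name, deps in pending.items():
            if task in deps:                      # unblock dependents
                deps.remove(task)
                if not deps and name not in results:
                    ready.put(name)
    return results


print(orchestrate({"build": [], "test": ["build"], "deploy": ["test"]}))
```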
The product provider and the users of the task execution engine only need to pay for the resources consumed while pipelines are running, which effectively reduces idle cost. The task execution engine supports customers in customizing the work component, deploying it under their own accounts, and customizing everything required for the work component to run through the Serverless product. In addition, the management and control components can be separated from the work component, with the work component deployed under the customer's account, so that each user effectively shares the product provider's cost and global resource usage is optimized.
By adopting an execution engine on the Serverless architecture, an out-of-the-box CI/CD solution can be realized functionally while the flexibility of a customized flow is preserved; in terms of cost, the idle cost can be reduced to zero and part of the engine execution cost is transferred to the user. The method and the device can freely provide CI/CD capability to users, deliver customers' functions and applications to the corresponding Serverless products, and improve the stability of the Serverless products.
It should be noted that, in this scheme, all components can be deployed under the customer's cloud account; the deployment scheme can be selected according to actual requirements.
The application provides a CI/CD task execution method that builds a pipeline system on a fully Serverless architecture, reduces the idle-time cost to zero, and does not require the user to pay extra resource costs for CI/CD. The method can be used in multi-tenant scenarios, frees small and medium-sized users, changes the current situation of manual delivery, and delivers code to Serverless products at extremely low cost.
The above description is from the perspective of the Serverless product provider; the scheme is now explained from the user side. FIG. 2 is a second flowchart of a CI/CD task execution method according to an embodiment of the present application, and the steps involved in FIG. 2 are described below.
Step S202, receiving a second description parameter sent by a provider, and generating a running environment for the target pipeline on a cloud platform using the second description parameter; wherein the second description parameter is used for describing the resources required to execute the pipeline; the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation.
In this step, after the user logs in to the cloud account registered on the cloud platform, the user may receive the second description parameter sent by the provider. The running environment may include any information needed for the work components that execute the target pipeline. The user generates the running environment of the target pipeline on the cloud platform using the second description parameter within the user's own account permissions, which achieves data isolation between users and ensures data security.
The second description parameter is used for describing the resources required to execute the pipeline. The provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to the pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation.
Each set of pipeline description parameters comprises a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline. One pipeline can be composed of a plurality of subtasks, and execution of the pipeline is realized by executing the subtasks one after another. In this step, the second description parameter is submitted together with the pipeline description parameters and can be used to customize the execution environment of the pipeline, which reduces debugging difficulty.
A code repository generating an event refers to a case where an operation on the code, such as a code submission or a code move, occurs in the code repository. In the attributes of the code repository, the correspondence between trigger components and events is pre-configured. When the code repository generates an event, the target trigger component corresponding to the event is determined according to the pre-configured correspondence, and the target pipeline corresponding to the target trigger component is determined according to the association relation. In this step, the pipeline is triggered by the trigger component, so CI/CD task delivery can be supported with code repositories of various forms.
Step S204, when a target subtask of the target pipeline is received, executing the target subtask in the running environment.
In this step, after the running environment of the target pipeline has been generated, when the user receives a target subtask of the target pipeline sent by the provider, the target subtask is executed in the running environment, where the target subtask is one or more executable subtasks of the pipeline. An executable subtask is a subtask that has no dependencies on other subtasks or whose dependencies have all been completed. By executing the target subtasks one by one, execution of the target pipeline is completed.
In this step, the provider and the user only need to pay for the resources consumed while the pipeline is running, so the idle-time cost is reduced to zero, resource consumption is greatly reduced, and resource waste is avoided.
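Purely as an illustration, the handling of a received target subtask on the user side can be pictured as in the sketch below; how the subtask arrives, the command shape and the environment fields are assumptions.

```python
import os
import subprocess


def execute_target_subtask(subtask: dict, environment: dict) -> dict:
    """Run one target subtask inside the previously generated running environment
    and report state and result back to the provider (simplified sketch)."""
    # `environment` would carry the network/container/image configuration
    # generated from the second description parameter.
    completed = subprocess.run(
        subtask["command"],
        shell=True,
        capture_output=True,
        text=True,
        env={**os.environ, **environment.get("env_vars", {})},
    )
    return {
        "subtask": subtask["name"],
        "state": "executed",
        "result": "execution success" if completed.returncode == 0 else "execution failure",
        "log": completed.stdout[-1000:],  # tail of the output for observability
    }


print(execute_target_subtask({"name": "build", "command": "echo building"},
                             {"env_vars": {"STAGE": "ci"}}))
```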
In the embodiment of the invention, a second description parameter sent by a provider is received, and a running environment for the target pipeline is generated on a cloud platform using the second description parameter; the second description parameter is used for describing the resources required to execute the pipeline; the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; and the target subtask of the target pipeline is executed in the running environment when it is received. The application uses the Serverless architecture to process CI/CD pipeline tasks, so resource consumption is greatly reduced when no task is executing and resource waste is avoided. Because the trigger component triggers the target pipeline when the code repository generates an event, CI/CD task delivery can be supported with code repositories of various forms. In addition, the second description parameter allows the target user to customize the running environment of the CI/CD task, achieving data isolation among users and ensuring data security.
In one possible implementation, the second description parameter includes network resource parameters, computing resource parameters, and storage resource parameters; generating the running environment of the target pipeline on the cloud platform using the second description parameter can be performed as follows: generating one or more of network configuration information, container configuration information, and image configuration information according to the second description parameter.
In the embodiment of the invention, the resources needed when the pipeline is executed include storage resources, computing resources, and network resources, and the resources needed to execute the pipeline can be described by the second description parameter. The running environment may include any information needed for the work components that execute the target pipeline, such as network configuration information, container configuration information, and image configuration information; therefore, one or more of the network configuration information, container configuration information, and image configuration information are generated according to the second description parameter.
In this step, based on the second description parameter, custom network and container configuration by the customer is supported, and the user can customize everything about how the task is executed. In actual research and development, if a problem arises with the customer's custom runtime image, the verification loop is greatly shortened and the efficiency of troubleshooting is improved.
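For illustration only, a minimal sketch of turning the second description parameter into configuration for the running environment is given below; every field name is an assumption, and a real implementation would call the cloud platform's own APIs rather than return plain dictionaries.

```python
def build_running_environment(second_param: dict) -> dict:
    """Derive network, container and image configuration from the
    second description parameter (field names are illustrative)."""
    network_config = {
        "vpc_id": second_param["network"]["vpc_id"],
        "security_group": second_param["network"].get("security_group", "default"),
    }
    container_config = {
        "cpu": second_param["compute"]["cpu"],
        "memory_mb": second_param["compute"]["memory_mb"],
        "mounts": second_param.get("storage", {}).get("mounts", []),
    }
    image_config = {
        "image": second_param["compute"].get("image", "builder:latest"),
    }
    return {
        "network": network_config,
        "container": container_config,
        "image": image_config,
    }


print(build_running_environment({
    "network": {"vpc_id": "vpc-123"},
    "compute": {"cpu": 1, "memory_mb": 1024, "image": "custom-builder:1.0"},
    "storage": {"mounts": ["/workspace"]},
}))
```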
In one possible embodiment, the following steps may also be performed: sending an execution result obtained by executing the target pipeline to a target receiver of the Serverless architecture; the target receiver is used for realizing a target function by utilizing the Serverless architecture.
In this step, the execution result obtained by the pipeline execution of the scheme, such as binary files, messages, etc., is sent to the target receiver of the Serverless architecture, so as to implement delivering the code to the Serverless product at extremely low cost. The target receiver may be a Serverless product based on a Serverless architecture.
A Serverless product is a program product that can be used to realize a target function, and the target function can be selected according to actual requirements. For example, a Serverless product may be used to implement a fully managed, event-driven computing service: with such a service, the user does not need to manage infrastructure such as servers but simply writes and uploads code, and the service prepares computing resources for the user, runs the user's code in an elastic and reliable manner, and provides log querying, performance monitoring, alerting, and other functions.
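As a non-limiting illustration, delivery of an execution result to such a receiver could look like the sketch below; the endpoint URL and the payload fields are purely hypothetical.

```python
import json
import urllib.request


def deliver_result(receiver_url: str, result: dict) -> int:
    """Send the pipeline execution result (artifact reference, status, messages)
    to a target receiver of the Serverless architecture (hypothetical endpoint)."""
    request = urllib.request.Request(
        receiver_url,
        data=json.dumps(result).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # HTTP status returned by the receiver


# Usage (the URL does not exist; shown only to illustrate the call shape):
# deliver_result("https://receiver.example.com/deploy",
#                {"pipeline": "demo-pipeline", "artifact": "app-v1.0.bin",
#                 "result": "execution success"})
```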
The present disclosure also provides a CI/CD task execution apparatus, which is applied to a Serverless architecture, comprising: a receiving module, configured to receive pipeline description parameters of the CI/CD and determine the association relation between the pipeline description parameters and a trigger component, the pipeline description parameters comprising a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline; a trigger module, configured to determine, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and to determine the target pipeline corresponding to the target trigger component according to the association relation; a sending module, configured to send the second description parameter to a target user so that the target user generates a running environment for the target pipeline on a cloud platform using the second description parameter; and a task module, configured to send the target subtasks in the target pipeline to the target user so that the target user executes the target subtasks in the running environment.
In the embodiment of the application, pipeline description parameters of the CI/CD are received, and the association relation between the pipeline description parameters and a trigger component is determined; the pipeline description parameters comprise a first description parameter and a second description parameter, wherein the first description parameter is used for describing the subtasks of the pipeline and the second description parameter is used for describing the resources required to execute the pipeline; when the code repository generates an event, the target trigger component corresponding to the event is determined according to a pre-configured correspondence, and the target pipeline corresponding to the target trigger component is determined according to the association relation; the second description parameter is sent to a target user so that the target user generates a running environment for the target pipeline on a cloud platform using the second description parameter; and the target subtasks in the target pipeline are sent to the target user so that the target user executes the target subtasks in the running environment. The application uses the Serverless architecture to process CI/CD pipeline tasks, so resource consumption is greatly reduced when no task is executing and resource waste is avoided. Because the trigger component triggers the target pipeline when the code repository generates an event, CI/CD task delivery can be supported with code repositories of various forms. In addition, the second description parameter allows the target user to customize the running environment of the CI/CD task, achieving data isolation among users and ensuring data security.
The system or the device is used for realizing the functions of the method in the above embodiment, and each module in the system or the device corresponds to each step in the method, which has been described in the method, and will not be described herein.
Optionally, determining, according to the association relation, a target pipeline corresponding to the target trigger component, including: determining a target pipeline description parameter corresponding to the target trigger component according to the association relation; and determining a plurality of subtasks of the target pipeline and the dependency relationship among the plurality of subtasks according to the target pipeline description parameters and the events corresponding to the target trigger components.
Optionally, sending the target subtasks in the target pipeline to the target user includes: determining the subtask processing sequence of the target pipeline according to the first description parameter; and determining a target subtask according to the subtask processing sequence, and sending the target subtask to the target user.
Optionally, the code repository generating an event comprises: processing code through a GitLab code repository and/or a GitHub code repository, wherein processing the code includes submitting code and/or adjusting code.
Optionally, the method further comprises: persisting the pipeline description parameters; and receiving and recording the execution state and execution result of the target pipeline.
The application provides a CI/CD task execution apparatus, which is applied to a Serverless architecture, comprising: an environment module, configured to receive a second description parameter sent by a provider and to generate a running environment for the target pipeline on a cloud platform using the second description parameter, wherein the second description parameter is used for describing the resources required to execute the pipeline, and the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; and an execution module, configured to execute the target subtask in the running environment when the target subtask of the target pipeline is received.
In the embodiment of the invention, a second description parameter sent by a provider is received, and a running environment for the target pipeline is generated on a cloud platform using the second description parameter; the second description parameter is used for describing the resources required to execute the pipeline; the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining, when the code repository generates an event, the target trigger component corresponding to the event according to a pre-configured correspondence, and determining the target pipeline corresponding to the target trigger component according to the association relation; and the target subtask of the target pipeline is executed in the running environment when it is received. The application uses the Serverless architecture to process CI/CD pipeline tasks, so resource consumption is greatly reduced when no task is executing and resource waste is avoided. Because the trigger component triggers the target pipeline when the code repository generates an event, CI/CD task delivery can be supported with code repositories of various forms. In addition, the second description parameter allows the target user to customize the running environment of the CI/CD task, achieving data isolation among users and ensuring data security.
The system or apparatus described above is used to implement the functions of the methods in the foregoing embodiments; each module of the system or apparatus corresponds to a step of those methods that has already been described, and the description is not repeated here.
Optionally, the second description parameters include network resource parameters, computing resource parameters, and storage resource parameters; and generating the running environment of the target pipeline on the cloud platform by using the second description parameters includes: generating one or more of network configuration information, container configuration information, and container image configuration information according to the second description parameters.
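As a hedged example, the second description parameters might be expanded into the three kinds of configuration as sketched below; the concrete fields (vpc_id, cpu, memory, volume_size_gb, image) are illustrative placeholders rather than parameters defined by the application.

```python
from dataclasses import dataclass

@dataclass
class SecondDescription:
    """Illustrative second description parameter; the application only names
    the three parameter categories, not these concrete fields."""
    vpc_id: str           # network resource parameter
    cpu: str              # computing resource parameter, e.g. "2"
    memory: str           # computing resource parameter, e.g. "4Gi"
    volume_size_gb: int   # storage resource parameter
    image: str            # container image used to run subtasks

def build_environment_config(desc: SecondDescription) -> dict:
    """Expand the second description parameter into network, container, and
    container-image configuration for the running environment."""
    return {
        "network": {"vpc_id": desc.vpc_id},
        "container": {
            "resources": {"cpu": desc.cpu, "memory": desc.memory},
            "volume_size_gb": desc.volume_size_gb,
        },
        "image": {"reference": desc.image, "pull_policy": "IfNotPresent"},
    }

config = build_environment_config(
    SecondDescription(vpc_id="vpc-123", cpu="2", memory="4Gi", volume_size_gb=20,
                      image="registry.example.com/ci-runner:latest")
)
```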
Optionally, after the running environment executes the target subtask, the method further includes: sending an execution result obtained by executing the target pipeline to a target receiver of the Serverless architecture, wherein the target receiver is used for implementing a target function by using the Serverless architecture.
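A minimal sketch of delivering the execution result to such a receiver, assuming the receiver exposes an HTTP endpoint (for example the HTTP trigger of a serverless function) and using only the Python standard library; the payload shape and URL are assumptions.

```python
import json
import urllib.request

def notify_receiver(receiver_url: str, run_id: str, result: dict) -> int:
    """POST the pipeline execution result to the target receiver, e.g. the HTTP
    trigger of a serverless function; returns the HTTP status code."""
    body = json.dumps({"run_id": run_id, "result": result}).encode("utf-8")
    request = urllib.request.Request(
        receiver_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example call (the endpoint is an assumption):
# notify_receiver("https://fn.example.com/on-pipeline-done", "run-001",
#                 {"state": "succeeded", "artifacts": ["app-1.0.0.tar.gz"]})
```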
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Referring to fig. 4, a block diagram of an electronic device 400, which may serve as a server or a user device of the present disclosure, will now be described; it is an example of a hardware device that may be applied to aspects of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the electronic device 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in the electronic device 400 are connected to the I/O interface 405, including: an input unit 406, an output unit 407, the storage unit 408, and a communication unit 409. The input unit 406 may be any type of device capable of inputting information to the electronic device 400; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 407 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 408 may include, but is not limited to, magnetic disks and optical disks. The communication unit 409 allows the electronic device 400 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 401 may be any of various general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above. For example, in some embodiments, the foregoing methods may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 400 via the ROM 402 and/or the communication unit 409. In some embodiments, the computing unit 401 may be configured to perform the aforementioned methods by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (for example, a CRT (cathode ray tube) or an LCD (liquid crystal display)) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a user and a server. The user and the server are typically remote from each other and typically interact through a communication network. The relationship of user and server arises by virtue of computer programs running on the respective computers and having a user-server relationship to each other.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (11)

1. A CI/CD task execution method, wherein the method is applied to a Serverless architecture, the method comprising:
receiving pipeline description parameters of the CI/CD, and determining the association relation between the pipeline description parameters and a trigger component; wherein the pipeline description parameters comprise a first description parameter and a second description parameter, the first description parameter is used for describing subtasks of the pipeline, and the second description parameter is used for describing resources required to execute the pipeline;
when a code repository generates an event, determining a target trigger component corresponding to the event according to a pre-configured correspondence, and determining a target pipeline corresponding to the target trigger component according to the association relation;
sending the second description parameters to a target user, so that the target user generates a running environment of the target pipeline on a cloud platform by using the second description parameters;
and sending the target subtasks in the target pipeline to the target user so that the target user executes the target subtasks in the running environment.
2. The method of claim 1, wherein determining a target pipeline corresponding to the target trigger component according to the association relationship comprises:
determining a target pipeline description parameter corresponding to the target trigger component according to the association relation;
and determining a plurality of subtasks of the target pipeline and the dependency relationship among the plurality of subtasks according to the target pipeline description parameter and the event corresponding to the target trigger component.
3. The method of claim 1, wherein sending a target subtask in the target pipeline to the target user comprises:
determining the subtask processing sequence of the target pipeline according to the first description parameter;
and determining a target subtask according to the subtask processing sequence, and sending the target subtask to the target user.
4. The method of claim 1, wherein the code repository generating an event comprises:
processing code through a GitLab code repository and/or a GitHub code repository, wherein processing the code comprises committing code and/or adjusting code.
5. The method of any of claims 1-4, further comprising:
persisting the pipeline description parameters;
and receiving and recording the execution state and the execution result of the target pipeline.
6. A CI/CD task execution method, wherein the method is applied to a Serverless architecture, the method comprising:
receiving a second description parameter sent by a provider, and generating a running environment of a target pipeline on a cloud platform by using the second description parameter; wherein the second description parameter is used for describing resources required to execute the pipeline; the provider is used for receiving the pipeline description parameters of the CI/CD, determining the association relation between the pipeline description parameters and the trigger components, determining a target trigger component corresponding to an event according to a preset correspondence when the code repository generates the event, and determining the target pipeline corresponding to the target trigger component according to the association relation;
and executing the target subtask in the running environment when the target subtask of the target pipeline is received.
7. The method of claim 6, wherein the second description parameters include network resource parameters, computing resource parameters, and storage resource parameters; and generating the running environment of the target pipeline on the cloud platform by using the second description parameters comprises:
and generating one or more of network configuration information, container configuration information, and container image configuration information according to the second description parameters.
8. The method of claim 6 or 7, further comprising, after the running environment executes the target subtask:
sending an execution result obtained by executing the target pipeline to a target receiver of the Serverless architecture, wherein the target receiver is used for implementing a target function by using the Serverless architecture.
9. An electronic device, comprising:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method steps according to any of claims 1-8.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method steps of any one of claims 1-8.
11. A computer program product, wherein the computer program product comprises a computer program which, when executed by a processor, implements the method steps of any of claims 1-8.
CN202310329666.0A 2023-03-29 2023-03-29 CI/CD task execution method Pending CN116521334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310329666.0A CN116521334A (en) 2023-03-29 2023-03-29 CI/CD task execution method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310329666.0A CN116521334A (en) 2023-03-29 2023-03-29 CI/CD task execution method

Publications (1)

Publication Number Publication Date
CN116521334A true CN116521334A (en) 2023-08-01

Family

ID=87405540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310329666.0A Pending CN116521334A (en) 2023-03-29 2023-03-29 CI/CD task execution method

Country Status (1)

Country Link
CN (1) CN116521334A (en)

Similar Documents

Publication Publication Date Title
US10878355B2 (en) Systems and methods for incident queue assignment and prioritization
US10637796B2 (en) Linking instances within a cloud computing environment
US8843621B2 (en) Event prediction and preemptive action identification in a networked computing environment
JP2019522846A (en) Resource allocation for database provisioning
US11194572B2 (en) Managing external feeds in an event-based computing system
US10394538B2 (en) Optimizing service deployment in a distributed computing environment
WO2021243589A1 (en) Prioritizing sequential application tasks
CN111212388B (en) Method, system and electronic equipment for batch short message sending management
US10831575B2 (en) Invoking enhanced plug-ins and creating workflows having a series of enhanced plug-ins
US9760441B2 (en) Restoration of consistent regions within a streaming environment
US10877805B2 (en) Optimization of memory usage by integration flows
CN113191889A (en) Wind control configuration method, configuration system, electronic device and readable storage medium
CN113472638B (en) Edge gateway control method, system, device, electronic equipment and storage medium
CN116521334A (en) CI/CD task execution method
US20220276901A1 (en) Batch processing management
US11277300B2 (en) Method and apparatus for outputting information
CN115250276A (en) Distributed system and data processing method and device
CN112613955A (en) Order processing method and device, electronic equipment and storage medium
CN115484149B (en) Network switching method, network switching device, electronic equipment and storage medium
US11206190B1 (en) Using an artificial intelligence based system to guide user dialogs in designing computing system architectures
CN110262756B (en) Method and device for caching data
CN116775307A (en) Service processing method, device, equipment and storage medium
CN114756329A (en) Business process simulation method and device, electronic equipment and readable storage medium
CN115586959A (en) Resource allocation method, device, electronic equipment and storage medium
CN116955362A (en) Task processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination