CN116909588A - Automatic deployment method, device, equipment and storage medium for enterprise-level service bus - Google Patents

Automatic deployment method, device, equipment and storage medium for enterprise-level service bus Download PDF

Info

Publication number
CN116909588A
CN116909588A CN202310898212.5A CN202310898212A CN116909588A CN 116909588 A CN116909588 A CN 116909588A CN 202310898212 A CN202310898212 A CN 202310898212A CN 116909588 A CN116909588 A CN 116909588A
Authority
CN
China
Prior art keywords
deployment
container
file
server
target node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310898212.5A
Other languages
Chinese (zh)
Inventor
田家成
姜尚志
杨大龙
高中纤
张燕燕
陈丽
谢静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN202310898212.5A priority Critical patent/CN116909588A/en
Publication of CN116909588A publication Critical patent/CN116909588A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/61Installation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management

Abstract

The application provides an automatic deployment method, device, equipment and storage medium for an enterprise-level service bus, relates to the technical field of computers, and is suitable for a centralized configuration management system comprising a server side and a client side. The method comprises the steps that a server side obtains a plurality of installation packages configured by a basic environment and determines a first configuration file of automatic installation; the server side installs a master-slave node basic environment of the container arranging platform for the client side based on the first configuration file so as to realize the automatic establishment of a related cluster of the container arranging platform; the server side selects a target node from the slave nodes of the container arrangement platform and configures an internet protocol address of the target node to a second configuration file; the server performs containerized deployment of the target deployment object in the target node based on the second configuration file so as to realize automatic deployment of the heavy-weight enterprise-level service bus with complex relevance, thereby reducing operation and maintenance cost.

Description

Automatic deployment method, device, equipment and storage medium for enterprise-level service bus
Technical Field
The present application relates to the field of computer technologies, and in particular, to an enterprise-level service bus automation deployment method, apparatus, device, and storage medium.
Background
With the rapid development of the information age, micro-service, distributed and cloud computing promote the explosive growth of the scale of the information system, the enterprise-level system construction mode of 'platform+application' is gradually applied to large enterprises and institutions, and the complexity of platform deployment and maintenance difficulty become barriers for the construction of enterprise user information systems. In order to adapt to the construction, deployment and maintenance of enterprise-level systems, a plurality of enterprises and institutions use Puppet (centralized configuration management system) to automatically manage in the deployment and maintenance process of the informatization systems, so that configuration files are managed, user timing tasks are executed, software deployment and system maintenance are realized, the burden of operation and maintenance personnel in the aspects of repeatability and batch operation is reduced, and part of problems of operation and maintenance management of the enterprise-level systems in a distributed environment are solved.
In the related art, centralized configuration management based on Puppet is a specific problem automation scheme which is in batch and repeated in the vertical field, but deployment scenes such as unfixed operating system, uneven system setting, nonstandard application scene and the like are usually faced in the deployment process of an enterprise-level service bus, when the deployment environment changes, operation and maintenance personnel are required to continuously adapt to the operating system, adjust the deployment mode, adjust deployment parameters or adjust the deployment flow, especially when the incoming service is in peak, the operation and maintenance personnel are required to timely expand capacity according to service pressure, and when part of nodes or services are out of order, the operation and maintenance personnel are required to timely process. Along with the improvement of the complexity of the enterprise-level service bus, the operation cluster of operation and maintenance personnel is larger and larger in scale, the variety and the number of functions are more and more, and the adaptation scene is more and more complicated, so that the operation and maintenance cost is higher and higher, and the automatic deployment of the heavy-weight enterprise-level service bus with complex relevance cannot be realized by simply carrying out the batch operation of fixed instructions in the vertical field.
Disclosure of Invention
The application provides an automatic deployment method, device, equipment and storage medium for an enterprise-level service bus, which are used for realizing the automatic deployment of a heavy-weight enterprise-level service bus with complex relevance, so that the operation and maintenance cost is reduced.
In a first aspect, the present application provides an enterprise-level service bus automation deployment method, which is applicable to a centralized configuration management system, where the centralized configuration management system includes a server and a client, and the enterprise-level service bus automation deployment method includes:
the method comprises the steps that a server side obtains a plurality of installation packages configured by a basic environment and determines a first configuration file for automatically installing the plurality of installation packages, wherein the installation packages comprise installation packages of a container engine, a container arranging platform and a network plug-in, and the first configuration file is used for configuring the closing of a network firewall and the modification of system file parameters;
the server installs a master-slave node basic environment of the container arranging platform for the client based on the first configuration file so as to realize the automatic establishment of a related cluster of the container arranging platform;
the server side selects a target node from the slave nodes in the container arrangement platform and configures an internet protocol address of the target node into a second configuration file;
And the server side performs containerized deployment of the target deployment object in the target node based on the second configuration file, wherein the target deployment object comprises an application store, a database, a core gateway and a management control side.
In a possible implementation manner, the target deployment object is an application store, and the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including: the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to conduct mirror image import on the prefabricated application store package so as to start a mirror image; the method comprises the steps that a server modifies parameters associated with services of an application store into pre-established source code files, wherein each service generates a source code file, the source code files are used for making images of the services, and the services comprise a service registration center, a tool public class, a login center, front-end services and back-end services; the method comprises the steps that a service end sets a starting sequence of services; the server creates a first YAML file, automatically executes the first YAML file by using an automatic deployment mode of the centralized configuration management system to realize the deployment of the application store as a first container, wherein the first YAML file is used for creating a controller for managing and publishing in a container arranging platform, creating an indirect management container in the controller and deploying the service in the indirect management container.
In one possible implementation, the server side encapsulates the nminix into a first container.
In a possible implementation manner, the target deployment object is a database, and the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including: the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to conduct mirror image import on the prefabricated database package so as to start a mirror image; the server modifies parameters associated with the services of the database into a pre-established source code file; the server side initializes a database instance of the target node; the server creates a second YAML file, and automatically executes the second YAML file by using an automatic deployment mode of the centralized configuration management system so as to realize the deployment of the database as a second container; the server creates the user in the database instance.
In a possible implementation manner, the target deployment object is a core gateway, and the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including: the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to carry out mirror image import on the prefabricated core gateway packet so as to start a mirror image; the method comprises the steps that a server modifies parameters associated with services of a core gateway into pre-established source code files, wherein each service generates a source code file, the source code files are used for making mirror images of the services, and the services comprise a capability access layer, a core processing layer, a capability access layer, a control center and a control center platform; the method comprises the steps that a service end sets a starting sequence of services; the server creates a third YAML file, automatically executes the third YAML file by using an automatic deployment mode of the centralized configuration management system to realize the deployment of the core gateway as a third container, wherein the third YAML file is used for creating a controller for management release in a container arranging platform, creating an indirect management container in the controller and deploying the service in the indirect management container; the server side containerizes the Nginx into a third container.
In a possible implementation manner, the target deployment object is a management control end, and the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including: the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to carry out mirror image import on the pre-manufactured management control end packet so as to start a mirror image; the server modifies parameters associated with the service of the management control end into a pre-established source code file; the server creates a fourth YAML file, and the fourth YAML file is automatically executed by using an automatic deployment mode of the centralized configuration management system so as to realize that the management control end is deployed as a fourth container.
In one possible implementation, the centralized configuration management system is Puppet and the container orchestration platform is K8s.
In a second aspect, the present application provides an enterprise-level service bus automation deployment device, which is suitable for a centralized configuration management system, where the centralized configuration management system includes a server and a client, and the enterprise-level service bus automation deployment device is integrated in the server;
wherein, enterprise level service bus automation deployment device includes:
The system comprises an acquisition module, a configuration module and a configuration module, wherein the acquisition module is used for acquiring a plurality of installation packages configured by a basic environment and determining a first configuration file for automatically installing the plurality of installation packages, the installation packages comprise an installation package of a container engine, a container arrangement platform and a network plug-in, and the first configuration file is used for configuring the closing of a network firewall and the modification of system file parameters;
the installation module is used for installing a master-slave node basic environment of the container arranging platform for the client based on the first configuration file so as to realize the automatic establishment of the related clusters of the container arranging platform;
a selection module for selecting a target node from the slave nodes in the container arrangement platform and configuring an internet protocol address of the target node into a second configuration file;
and the execution module is used for carrying out containerized deployment of the target deployment object in the target node based on the second configuration file, wherein the target deployment object comprises an application store, a database, a core gateway and a management control end.
In a third aspect, the present application provides an electronic device comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for performing the method of the first aspect when executed by a processor.
In a fifth aspect, the application provides a computer program product comprising a computer program which, when executed, implements the method of the first aspect.
The enterprise-level service bus automatic deployment method, the enterprise-level service bus automatic deployment device and the storage medium are suitable for a centralized configuration management system, wherein the centralized configuration management system comprises a server side and a client side, the server side acquires a plurality of installation packages configured by a basic environment, and installs a master-slave node basic environment of a container arrangement platform for the client side so as to realize the automatic establishment of related clusters of the container arrangement platform; in addition, the server side selects a target node from the slave nodes in the container arrangement platform so as to realize container arrangement of the target arrangement object in the target node. The application realizes the double-line automatic containerized deployment of the enterprise-level service bus based on the centralized configuration management system and the containerized platform, decouples deployment logic, realizes service bus deployment scenes under different service scenes and different service orders, and self-adapts deployment scale, and can realize the rapid deployment of the enterprise-level service bus by simple containerized deployment by operation and maintenance personnel, thereby not only remarkably reducing the manual workload, but also realizing the automatic deployment and operation and maintenance management of the enterprise-level platform, and further reducing the operation and maintenance cost.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of an enterprise-level service bus automation deployment method according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of an enterprise-class service bus automated deployment method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a software architecture of a capability open platform provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a software architecture for a capacity open platform provided by an exemplary embodiment of the present application based on Puppet and K8s automated deployment;
FIG. 5 is a schematic diagram of an enterprise-class service bus automation deployment device according to an exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
It should be noted that the method, the device, the equipment and the storage medium for automatically deploying the enterprise-level service bus provided by the application can be used in the technical field of computers, such as the cloud computing field and the distributed field, and can also be used in the technical field of artificial intelligence or other related fields, and the application field of the application is not limited.
In the conventional deployment process of the informatization system, the service system usually only needs to pay attention to the database and the application system, the deployment flow is single and the operation flow is fixed, and the cluster-level deployment of the application system can be realized only by simply carrying out batch operation through the centralized configuration management system, so that the quick deployment of the informatization system is realized, and the service development is supported.
In the related art, in the deployment process of the enterprise-level service bus system, deployment scenes such as unfixed operating system, non-uniform system setting, non-standard application scene and the like are usually faced, and when the deployment environment changes, operation and maintenance personnel need to continuously adapt to the operating system, adjust the deployment mode, the deployment parameters and the deployment flow. For example: the deployment scale is changed from 12 hosts to 18 hosts, and corresponding disk setting, log library setting, access center setting, service center setting and access center setting all need to be cooperatively associated and changed, so that the cluster scale of operation and maintenance personnel operation is larger and larger, the number of function types and functions are more and the adaptation scene is more and more complex, further the cost of operation and maintenance is higher and higher, and the automatic deployment of the heavy-weight enterprise-level service bus with complex relevance cannot be realized by simply carrying out the batch operation of fixed instructions in the vertical field.
In view of the realization of complex deployment and operation and maintenance of enterprise-level platforms, the technology widely used at present is containerization, and the appearance of containerization platforms enables containerization deployment to be simple, and operations such as online service deployment, monitoring, operation and increase and decrease of machines can be managed and operated through the containerization platforms. In order to solve the problems, the embodiment of the application provides a centralized configuration management system and a container coding platform, which realizes a set of double-line automatic container deployment scheme, decouples deployment logic, and containers an application store, a database, a core gateway and a management control end, so that service bus deployment scenes and deployment scale self-adaption under different service orders are realized, and operation and maintenance personnel can realize quick deployment of an enterprise service bus through simple container deployment, thereby remarkably reducing manual workload and realizing automatic deployment and operation and maintenance management of the enterprise platform, and further reducing operation and maintenance cost.
Fig. 1 is a schematic application scenario diagram of an enterprise-level service bus automation deployment method according to an exemplary embodiment of the present application. As shown in fig. 1, the method for automatically deploying an enterprise-level service bus according to the exemplary embodiment of the present application is applied to a centralized configuration management system, where the centralized configuration management system includes a client and a server, and the number of clients may be at least two. In practical application, when related technicians such as operation and maintenance personnel need to perform enterprise-level service bus automatic deployment, a configuration client and a service end are in a connection state, and after the service end acquires a plurality of installation packages configured by a basic environment, the enterprise-level service bus automatic deployment method provided by the application is executed to complete the enterprise-level service bus automatic deployment, and a deployment result is sent to the service end, so that the related personnel can acquire the deployment result of the enterprise-level service bus.
It should be noted that, the server may be replaced by a server cluster or other computing devices with a certain computing power, and the client is typically disposed on a server with a certain computing power (for example, the server 1, the servers 2, … …, and the server N shown in fig. 1), which may be a computer, a notebook, a virtual machine, or the like.
An enterprise-level service bus automation deployment method according to an exemplary embodiment of the present application is described below with reference to fig. 2 in conjunction with the application scenario of fig. 1. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiment of the present application is not limited by the application scenario shown in fig. 1.
Fig. 2 is a flow chart of an enterprise-class service bus automation deployment method according to an exemplary embodiment of the present application. As shown in fig. 2, the method for automatically deploying an enterprise-level service bus in an embodiment of the present application includes the following steps:
step 201, a server acquires a plurality of installation packages configured by a basic environment, and determines a first configuration file for automatically installing the plurality of installation packages, wherein the installation packages comprise an installation package of a container engine, a container arranging platform and a network plug-in, and the first configuration file is used for configuring the closing of a network firewall and the modification of system file parameters.
Illustratively, the container engine may be a Docker, and the network plug-in may include kubectl and Calico.
In the embodiment of the application, as shown in fig. 1, an operation and maintenance personnel operates a server to acquire a plurality of installation packages configured by a basic environment through modes such as network downloading or advanced downloading and importing, and stores relevant information such as network firewall closing and system file parameter modification into a first configuration file.
Step 202, the server installs a master-slave node basic environment of the container arranging platform for the client based on the first configuration file so as to realize the automatic establishment of the related clusters of the container arranging platform.
In this step, as shown in fig. 1, an operation and maintenance person operates a server, issues the installation instruction of the installation package or executes a pre-written automatic installation script to a client through the server, realizes batch installation of the Docker, the container arrangement platform and the network plug-in installation package, and completes batch closing of a network firewall of the client and modification of system file parameters according to a first configuration file, thereby realizing automatic establishment of related clusters of the container arrangement platform and reducing errors caused by manual operation; furthermore, the logs of the related modification and configuration parameters can be output in an interface mode, so that operation and maintenance personnel or related technicians can acquire the environment construction progress more intuitively, man-machine interaction is reduced, and the efficiency of environment deployment is improved.
It should be noted that, the master node and the slave nodes of the container arrangement platform are both deployed in the client, one master node is in a cluster, at least one or more slave nodes are in charge of the management and control of the cluster; through a secure shell (Secure Shell Protocol, abbreviated as SSH) protocol, a one-to-many connection from a server to a client can be created, and management and control of the server to the master node and the slave node of the container orchestration platform are achieved.
Step 203, the server selects a target node from the slave nodes in the container arrangement platform, and configures an internet protocol address of the target node into a second configuration file.
Illustratively, the container orchestration platform has 12 slave nodes, and associates the internet protocol addresses of nodes 1-12 with the target object to be deployed by each node through a second profile, e.g., nodes 1-5 for deploying target object a, node 6 for deploying target object B, node 7-11 for deploying target object C, and node 12 for deploying target object D.
And 204, the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, wherein the target deployment object comprises an application store, a database, a core gateway and a management control end.
In the step, the operation and maintenance personnel operate the server, and the server executes the automatic execution script of the target deployment object in the target node based on the second configuration file, so that the containerized deployment of each target deployment object is realized. Correspondingly, the automation execution script can be pre-programmed by an operation and maintenance personnel or related technicians and deployed on the server side. Illustratively, the server side deploys the target object a at the node 1-node 5, deploys the target object B at the node 6, deploys the target object C at the node 7-node 11, and deploys the target object D at the node 12 based on the second configuration file. When the target objects have a dependency relationship with each other, the target objects need to be deployed in sequence according to a pre-designated sequence, and if the target objects have no dependency relationship, the target objects can be deployed synchronously.
In the embodiment of the application, a plurality of installation packages configured by the basic environment are acquired through the server side of the centralized configuration management system, and the master-slave node basic environment of the container arrangement platform is installed for the client side of the centralized configuration management system, so that the automatic establishment of the related clusters of the container arrangement platform is realized; in addition, the server side selects a target node from the slave nodes in the container arrangement platform, and the containerized deployment of the target deployment object is performed in the target node. By combining an automation scheme in the vertical field of a centralized configuration management system and containerization deployment of each target deployment object in a containerization platform, quick deployment of an enterprise-level service bus is realized, manual workload can be remarkably reduced, automatic deployment and operation and maintenance management of the enterprise-level platform can be realized, and automatic deployment of a heavy-weight enterprise-level service bus with complex relevance can be realized, so that operation and maintenance cost is reduced.
In some embodiments, the centralized configuration management system is Puppet, and the container arrangement platform is K8s (kubernetes, K8s for short).
Specifically, the Puppet adopts a C/S structure, all Puppet clients communicate with one or more Puppet server ends, each Puppet client is periodically connected with the Puppet server end, downloads the latest configuration file, and configures the Puppet clients strictly according to the configuration file, wherein the connection period can be set according to actual conditions. After the configuration is completed, the Puppet client sends a configuration result to the Puppet server, and an operation and maintenance person can know whether the configuration is effective or not through the configuration result received by the Puppet server. Centralized configuration management of running systems in different operating system platforms is realized through Puppet, and the burden of operation and maintenance personnel in the aspects of repeatability and batch operation is reduced.
Wherein, the K8s can be understood as an open-source container arrangement platform, which is a portable and extensible open-source platform for managing containerized workload and services, and can promote declarative configuration and automation.
In some embodiments, complex system deployments and operational dimensions are parsed, taking the enterprise-level service bus, capability open platform, as an example. Illustratively, fig. 3 is a schematic diagram of a software architecture of a capability open platform according to an exemplary embodiment of the present application. The application store is a management end of the enterprise service bus, and a visual capability quotient super-chemical mode is constructed by realizing a capability nano tube through the application store, so that the application store can be understood as a sales platform facing a user; the capability provider is an operation main body, and a capability user can order the required capability through an application store; the capability access layer, the core processing layer and the capability access layer are realization layers of capability functions; the control center and the management control end are used for managing and configuring the enterprise service bus, and are usually a visual operation interface; the database is used for storing relevant configuration, operation log, business data, report data and the like. As shown in fig. 3, some middleware is also integrated into each application module. Illustratively, the application store comprises Nginx, tomcat, zabbix and ES, the capability access layer comprises Tomacat, zabbix, nginx and SLB, the core processing layer, the capability exit layer and the control center comprise tomcat and Zabbix, and the control management end comprises Zookeeper, kafka, tomacat, zabbix and Nginx; additionally, the operating system may include Centos, rhel, kylin, and the like. Correspondingly, when the target deployment object is an application store, the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including:
The server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to conduct mirror image import on the prefabricated application store package so as to start a mirror image;
the method comprises the steps that a server modifies parameters associated with services of an application store into pre-established source code files, wherein each service generates a source code file, the source code files are used for making images of the services, and the services comprise a service registration center, a tool public class, a login center, front-end services and back-end services;
the method comprises the steps that a service end sets a starting sequence of services;
the server creates a first YAML file, automatically executes the first YAML file by using an automatic deployment mode of the centralized configuration management system to realize the deployment of the application store as a first container, wherein the first YAML file is used for creating a controller for managing and publishing in a container arranging platform, creating an indirect management container in the controller and deploying the service in the indirect management container.
The source code file refers to Dockerfile, which can be understood as a set of rules of a custom mirror image, and is composed of a plurality of instructions, and each instruction in the Dockerfile corresponds to each layer in the Docker mirror image.
The services of the application store are composed of 5 services, and because of the interdependence relationship among the services, part of the services need to be started first, and other services can be started normally, so that operation and maintenance personnel are required to operate the service end to set the starting sequence of the services. By way of example, the order of initiation of the 5 services of the application store may be set as: the system comprises a service registration center, a tool public class, a login center, a front-end service and a back-end service; the starting sequence of the front-end service and the back-end service has no specific requirement, and the front-end service and the back-end service can be started simultaneously or in sequence.
YAML (YAML Ain't a Markup Language) is understood to be an easily understood data serialization language, commonly used for configuration and management, with the file suffix being. yml or.yaml. By way of example, a first YAML file is created by a server, the server issues configuration parameters in the first YAML file to a K8s cluster, a controller for management release is created in the K8s cluster, an indirect management container is created in the controller, and services related to an application store are deployed in a plurality of indirect management containers, so that the deployment of the application store as the first container is completed. The controller may be a depoyment, the indirect management container may be a Pod, where the Pod is the smallest control unit in K8s and a service or an application program needs to be deployed in the Pod, and the Pod exists in the Pod, and one Pod may have one or more containers.
Further, in some embodiments, the server side encapsulates the nginnx into the first container. The Nginx is a WEB server (WORLD WIDE WEB, abbreviated as WEB) and can be used as a load balancing server and a reverse proxy. When a user accesses an application store through Nginx, load balancing can be achieved through the Nginx, and concurrency of the system is improved. When the container is abnormal or the service is abnormal, the system automatically starts a brand new container, so that the maintenance cost is reduced.
Based on the foregoing embodiments, in some embodiments, when the target deployment object is a database, the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including:
the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to conduct mirror image import on the prefabricated database package so as to start a mirror image;
the server modifies parameters associated with the services of the database into a pre-established source code file;
the server side initializes a database instance of the target node;
the server creates a second YAML file, and automatically executes the second YAML file by using an automatic deployment mode of the centralized configuration management system so as to realize the deployment of the database as a second container;
The server creates the user in the database instance.
The database (My Structured Query Language, mySQL for short) is a relational database management system, for example. In the enterprise-level service bus of the capability open platform, mySQL can be used for storing relevant data such as platform configuration, transaction logs, business data, report data and the like; according to a master-slave synchronization mechanism of MySQL, the read-write separation of the configuration library and the report library is realized; and realizing the high-efficiency reading and writing of the log library through a split library and split table deployment structure of MySQL. In practical applications, because the container may be stopped or deleted at any time, when the container is abnormal, the data in the container will be lost, and in order to avoid the data loss in the container, the exemplary embodiment of the present application uses the data volume mount to store the data. In addition, as MySQL belongs to a relational database, input/Output (IO) requirements are high, when a plurality of mysqls are run in one client, the IOs are accumulated, which results in an IO bottleneck and greatly reduces the read-write performance of the MySQL. Based on this, the exemplary embodiment of the present application separates the database program from the data, where the data is stored in the shared storage, and the program is stored in the container, and when the container is abnormal or MySQL service is abnormal, a new container will be automatically started, so as to improve the robustness of the database, and further reduce the operation and maintenance costs.
In some embodiments, when the target deployment object is a core gateway, the server performs containerized deployment of the target deployment object in the target node based on the second configuration file, including:
the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to carry out mirror image import on the prefabricated core gateway packet so as to start a mirror image;
the method comprises the steps that a server modifies parameters associated with services of a core gateway into pre-established source code files, wherein each service generates a source code file, the source code files are used for making mirror images of the services, and the services comprise a capability access layer, a core processing layer, a capability access layer, a control center and a control center platform;
the method comprises the steps that a service end sets a starting sequence of services;
the server creates a third YAML file, automatically executes the third YAML file by using an automatic deployment mode of the centralized configuration management system to realize the deployment of the core gateway as a third container, wherein the third YAML file is used for creating a controller for management release in a container arranging platform, creating an indirect management container in the controller and deploying the service in the indirect management container;
The server side containerizes the Nginx into a third container.
The services of the core gateway consist of 5 services, and because of the interdependence relationship among the services, part of the services need to be started first, and other services can be started normally, so that operation and maintenance personnel are required to operate the service end to set the starting sequence of the services. For example, the order of starting up the 5 services of the core gateway may be set as: the system comprises a control center, a control center table, a capability access layer, a core processing layer and a capability access layer; the control center and the control center station need to be started preferentially and then can start other services, and the starting sequence of the capability access layer, the core processing layer and the capability access layer is not specifically required, and the control center station can be started simultaneously or according to the sequence.
The capacity access layer, the core processing layer and the capacity access layer are subjected to containerization deployment by adopting K8s, so that the bearing capacity of the core gateway is improved on one hand, and the node can dynamically expand or contract according to service pressure; on the other hand, fault isolation can be performed, when part of nodes are in fault, the fault nodes are stopped in time, and the capacitor is restarted, so that the working stability of the nodes is ensured, and the high availability of the core gateway is realized. In addition, as the three layers of computing nodes adopt a containerized deployment mode, the deployment flexibility is increased, and more complex network scenes and service demands can be supported.
In addition, the control center is used as an interaction layer with three layers of computing nodes, and K8s is adopted for containerized deployment. For example, the control center can employ technologies such as a repository, a thread pool and an object pool to provide stable, reliable and efficient configuration services, and has functions such as automatic discovery of nodes, synchronization of node configuration, timing tasks and the like. For example, in order to achieve separation of management and computation, the exemplary embodiment of the present application deploys 5 services corresponding to the three-layer computing node and the control center in 5 Pod; the third YAML file is automatically executed by using the automatic deployment mode of Puppet, so that the deployment of the core gateway as a third container is realized, the Nginx is containerized into the third container, meanwhile, a layer of load balancing (Server Load Balancer, SLB for short) server is added to a capacity access layer, and when a user accesses an API interface of a capacity open platform through the SLB server, the load balancing can be realized, and the concurrency of the system is further improved.
In some embodiments, when the target deployment object is a management control end, the server end performs containerized deployment of the target deployment object in the target node based on the second configuration file, including:
the server side sends a mirror image import instruction to the target node based on the second configuration file, wherein the mirror image import instruction is used for instructing the target node to carry out mirror image import on the pre-manufactured management control end packet so as to start a mirror image;
The server modifies parameters associated with the service of the management control end into a pre-established source code file;
the server creates a fourth YAML file, and the fourth YAML file is automatically executed by using an automatic deployment mode of the centralized configuration management system so as to realize that the management control end is deployed as a fourth container.
The management control end is based on the distributed architecture design of Java platform enterprise edition (Java Enterprise Edition, java EE for short), so that an application system has platform independence, and can be deployed in any application server conforming to Java EE specifications, and based on the distributed architecture design, the management control end is used as a service and is packaged by using K8s, so that the reliability of an application program of the management control end is further improved. Through K8s, the working state of the management control end can be monitored, the fault is automatically transferred, and when the management control end fails to start, the container where the management control end is located is replaced by a new container, and the container is restarted.
Illustratively, fig. 4 is a schematic diagram of a software architecture of a capability open platform provided by an exemplary embodiment of the present application based on Puppet and K8s automated deployment. As shown in fig. 4, some middleware is integrated in each application module, similar to that in fig. 3, and will not be described again here; in addition, the container orchestration platform profile may include Configmap, deployment, service, ingress and a launch configuration of the container, etc. In practical application, the containerized deployment of the database needs to be performed first, and there is no explicit sequential requirement on the containerized deployment of the application store, the core gateway and the management control end, and the containerized deployment can be sequentially deployed according to the sequence or can be simultaneously deployed.
In summary, the present application has at least the following advantages:
1. based on a Puppet+K8s double-line automatic container deployment scheme, vertical field batch automation is realized based on Puppet, decoupling of an enterprise-level service bus architecture is transversely provided, and self-adaption of enterprise-level service bus deployment modes under different scenes is realized;
2. the configuration of the system basic environment and the configuration of the configuration file of K8s in the early stage are automatically changed through Puppet, so that the cost of manual deployment is reduced, and the accuracy of deployment is improved;
3. the K8s containerized deployment is used for all the application store, the database, the core gateway and the management control end, so that the load, the deployment as required and the automatic expansion and contraction capacity can be automatically scheduled and balanced to meet the requirement of large flow, and the functions of fault transfer and automatic recovery of container restarting are provided;
4. after containerized deployment, the reliability of the existing application is improved, K8s self-contained monitoring can be used for monitoring the working state of the application in the container, and the cost of artificial operation and maintenance is further reduced;
5. because the enterprise-level service bus is more in application and higher in system complexity, the K8s can manage and coordinate a plurality of containers, so that communication and cooperation among the containers and dependency relationship and network problems among the containers are ensured;
6. Some limiting factors of the deployment scene can be ignored through K8s, the binding requirement of the application system on the environment is reduced, and the flexibility and expandability of the application system are improved.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 5 is a schematic structural diagram of an enterprise-level service bus automation deployment device according to an exemplary embodiment of the present application, which is suitable for a centralized configuration management system, where the centralized configuration management system includes a server and a client, and the enterprise-level service bus automation deployment device is integrated in the server, or the enterprise-level service bus automation deployment device is the server. As shown in fig. 5, the enterprise-class service bus automation deployment device 50 includes an acquisition module 51, an installation module 52, a selection module 53, and an execution module 54, wherein:
an obtaining module 51, configured to obtain a plurality of installation packages configured by a base environment, and determine a first configuration file for automatically installing the plurality of installation packages, where the installation packages include an installation package of a container engine, a container arrangement platform, and a network plug-in, and the first configuration file is used to configure shutdown of a network firewall and modification of system file parameters;
The installation module 52 is configured to install, for the client, a master-slave node base environment of the container arrangement platform based on the first configuration file, so as to implement automatic establishment of a relevant cluster of the container arrangement platform;
a selection module 53 for selecting a target node from the slave nodes in the container arrangement platform and configuring an internet protocol address of the target node into a second configuration file;
and the execution module 54 is configured to perform containerized deployment of a target deployment object in the target node based on the second configuration file, where the target deployment object includes an application store, a database, a core gateway, and a management control end.
In one possible implementation, the execution module 54 may be specifically configured to: transmitting an image importing instruction to the target node based on the second configuration file, wherein the image importing instruction is used for instructing the target node to conduct image importing on the prefabricated application store package so as to start an image; modifying parameters associated with services of an application store into pre-created source code files, wherein each service generates a source code file, the source code files are used for making images of the services, and the services comprise a service registration center, a tool public class, a login center, front-end services and back-end services; setting a starting sequence of the service; the method comprises the steps of creating a first YAML file, automatically executing the first YAML file by using an automatic deployment mode of a centralized configuration management system to realize deployment of an application store as a first container, wherein the first YAML file is used for creating a controller for management release in a container arrangement platform, creating an indirect management container in the controller, and deploying services in the indirect management container.
In one possible implementation, execution module 54 may also be configured to: the nmginx is containerized into a first container.
In one possible implementation, the execution module 54 may be specifically configured to: transmitting an image importing instruction to the target node based on the second configuration file, wherein the image importing instruction is used for instructing the target node to conduct image importing on a prefabricated database packet so as to start an image; modifying parameters associated with the services of the database into a pre-established source code file; initializing a database instance of the target node; creating a second YAML file, and automatically executing the second YAML file by using an automatic deployment mode of the centralized configuration management system to realize that the database is deployed as a second container; creating users in the database instance.
In one possible implementation, the execution module 54 may be specifically configured to: transmitting an image importing instruction to the target node based on the second configuration file, wherein the image importing instruction is used for instructing the target node to conduct image importing on the prefabricated core gateway packet so as to start an image; modifying parameters associated with services of a core gateway into pre-established source code files, wherein each service generates a source code file, the source code files are used for making mirror images of the services, and the services comprise a capability access layer, a core processing layer, a capability access layer, a control center and a control center platform; setting a starting sequence of the service; creating a third YAML file, automatically executing the third YAML file by using an automatic deployment mode of a centralized configuration management system to realize the deployment of a core gateway as a third container, wherein the third YAML file is used for creating a controller for management release in a container arranging platform, creating an indirect management container in the controller and deploying services in the indirect management container; the nmginx is containerized into a third container.
In one possible implementation, the execution module 54 may be specifically configured to: transmitting an image importing instruction to the target node based on the second configuration file, wherein the image importing instruction is used for instructing the target node to conduct image importing on a management control end package which is manufactured in advance so as to start an image; modifying parameters associated with the service of the management control terminal into a pre-established source code file; and creating a fourth YAML file, and automatically executing the fourth YAML file by using an automatic deployment mode of a centralized configuration management system to realize that a management control end is deployed as a fourth container.
In one possible implementation, the centralized configuration management system is Puppet and the container orchestration platform is K8s.
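Because the centralized configuration management system in this implementation is Puppet, the first and second configuration files could, for example, be kept as Hiera data in YAML on the server side. The sketch below is only an illustration under that assumption; every key and value (the esb:: class parameters, the package names, and the documentation IP address 192.0.2.10) is a hypothetical placeholder rather than the content of the patent's configuration files.

# Hypothetical Hiera-style YAML data illustrating the two configuration files.
---
# first configuration file: basic environment installation options
esb::base::disable_firewall: true           # close the network firewall
esb::base::sysctl:
  net.bridge.bridge-nf-call-iptables: 1     # system file parameter modification
esb::base::packages:
  - docker-ce                               # container engine
  - kubeadm                                 # container orchestration platform (K8s)
  - calico                                  # network plug-in

# second configuration file: containerized deployment target
esb::deploy::target_node_ip: 192.0.2.10     # IP address of the selected target node
esb::deploy::objects:
  - app-store
  - database
  - core-gateway
  - management-control-end

A Puppet manifest or module on the server side would then read such values and drive the corresponding installation and containerized deployment steps on the client nodes.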
The enterprise-level service bus automatic deployment device provided by the embodiment of the application can execute the technical solutions shown in the foregoing method embodiments; its implementation principles and beneficial effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. As shown in fig. 6, the electronic device 60 of the present embodiment includes:
at least one processor 61; and a memory 62 communicatively coupled to the at least one processor;
wherein the memory 62 stores instructions executable by the at least one processor 61 for causing the electronic device to perform the method as described in any of the embodiments above.
Alternatively, the memory 62 may be separate from, or integrated with, the processor 61.
The memory 62 may include a high-speed random access memory (Random Access Memory, RAM for short), and may also include a non-volatile memory, such as at least one magnetic disk memory.
The processor 61 may be a central processing unit (Central Processing Unit, CPU for short), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), or one or more integrated circuits configured to implement embodiments of the present application. Specifically, when the method for automatically deploying the enterprise-level service bus described in the foregoing method embodiment is implemented, the electronic device may be, for example, an electronic device having a processing function, such as a server.
Optionally, the electronic device may also include a communication interface 63. In a specific implementation, if the communication interface 63, the memory 62, and the processor 61 are implemented independently, the communication interface 63, the memory 62, and the processor 61 may be connected to one another through a bus and communicate with one another. The bus may be an industry standard architecture (Industry Standard Architecture, ISA for short) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI for short) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA for short) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and the like, but this does not mean that there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the communication interface 63, the memory 62, and the processor 61 are implemented integrally on a single chip, the communication interface 63, the memory 62, and the processor 61 may complete communication through internal interfaces.
For the implementation principle and technical effects of the electronic device provided in this embodiment, reference may be made to the foregoing embodiments; details are not described herein again.
The embodiment of the application also provides a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and when a processor executes the computer executable instructions, the method of any of the previous embodiments is realized.
The computer readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as a static random access memory (Static Random Access Memory, SRAM for short), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read Only Memory, EEPROM for short), an erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM for short), a programmable read-only memory (Programmable Read Only Memory, PROM for short), a read-only memory (Read Only Memory, ROM for short), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Alternatively, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application specific integrated circuit. Of course, the processor and the readable storage medium may also reside as discrete components in the enterprise-level service bus automatic deployment device.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements a method as described in any of the preceding embodiments.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An automatic deployment method for an enterprise-level service bus, characterized in that the method is applied to a centralized configuration management system, the centralized configuration management system comprises a server side and a client, and the automatic deployment method for the enterprise-level service bus comprises the following steps:
the server side obtains a plurality of installation packages for basic environment configuration and determines a first configuration file for automatically installing the plurality of installation packages, wherein the plurality of installation packages comprise installation packages of a container engine, a container orchestration platform and a network plug-in, and the first configuration file is used for configuring the closing of a network firewall and the modification of system file parameters;
the server side installs a master-slave node basic environment of the container orchestration platform for the client based on the first configuration file, so as to realize automatic establishment of a cluster related to the container orchestration platform;
the server side selects a target node from the slave nodes of the container orchestration platform, and configures an internet protocol address of the target node into a second configuration file; and
the server side performs containerized deployment of a target deployment object in the target node based on the second configuration file, wherein the target deployment object comprises an application store, a database, a core gateway and a management control end.
2. The automatic deployment method for an enterprise-level service bus according to claim 1, wherein the target deployment object is an application store, and the step of the server side performing containerized deployment of the target deployment object in the target node based on the second configuration file comprises:
the server side sends an image import instruction to the target node based on the second configuration file, wherein the image import instruction is used for instructing the target node to import a pre-built application store package so as to start an image;
the server side modifies parameters associated with the services of the application store into pre-created source code files, wherein each service generates one source code file, the source code files are used for building the images of the services, and the services comprise a service registration center, a tool public class, a login center, a front-end service and a back-end service;
the server side sets a starting sequence of the services;
the server side creates a first YAML file and automatically executes the first YAML file by using the automatic deployment mode of the centralized configuration management system, so that the application store is deployed as a first container, wherein the first YAML file is used for creating a controller for release management in the container orchestration platform, creating indirectly managed containers under the controller, and deploying the services in the indirectly managed containers.
3. The automatic deployment method for an enterprise-level service bus according to claim 2, further comprising:
the server side containerizes Nginx into the first container.
4. The automatic deployment method for an enterprise-level service bus according to claim 1, wherein the target deployment object is a database, and the step of the server side performing containerized deployment of the target deployment object in the target node based on the second configuration file comprises:
the server side sends an image import instruction to the target node based on the second configuration file, wherein the image import instruction is used for instructing the target node to import a pre-built database package so as to start an image;
the server side modifies parameters associated with the services of the database into a pre-created source code file;
the server side initializes a database instance of the target node;
the server side creates a second YAML file and automatically executes the second YAML file by using the automatic deployment mode of the centralized configuration management system, so that the database is deployed as a second container;
the server side creates users in the database instance.
5. The automatic deployment method for an enterprise-level service bus according to claim 1, wherein the target deployment object is a core gateway, and the step of the server side performing containerized deployment of the target deployment object in the target node based on the second configuration file comprises:
the server side sends an image import instruction to the target node based on the second configuration file, wherein the image import instruction is used for instructing the target node to import a pre-built core gateway package so as to start an image;
the server side modifies parameters associated with the services of the core gateway into pre-created source code files, wherein each service generates one source code file, the source code files are used for building the images of the services, and the services comprise a capability access layer, a core processing layer, a capability access layer, a control center and a control center platform;
the server side sets a starting sequence of the services;
the server side creates a third YAML file and automatically executes the third YAML file by using the automatic deployment mode of the centralized configuration management system, so that the core gateway is deployed as a third container, wherein the third YAML file is used for creating a controller for release management in the container orchestration platform, creating indirectly managed containers under the controller, and deploying the services in the indirectly managed containers;
the server side containerizes Nginx into the third container.
6. The automatic deployment method for an enterprise-level service bus according to claim 1, wherein the target deployment object is a management control end, and the step of the server side performing containerized deployment of the target deployment object in the target node based on the second configuration file comprises:
the server side sends an image import instruction to the target node based on the second configuration file, wherein the image import instruction is used for instructing the target node to import a pre-built management control end package so as to start an image;
the server side modifies parameters associated with the service of the management control end into a pre-created source code file;
the server side creates a fourth YAML file and automatically executes the fourth YAML file by using the automatic deployment mode of the centralized configuration management system, so that the management control end is deployed as a fourth container.
7. The automatic deployment method for an enterprise-level service bus according to any one of claims 1 to 6, wherein the centralized configuration management system is Puppet and the container orchestration platform is K8s.
8. An enterprise-level service bus automatic deployment device, characterized in that the device is applied to a centralized configuration management system, the centralized configuration management system comprises a server side and a client, and the enterprise-level service bus automatic deployment device is integrated in the server side;
wherein the enterprise-level service bus automatic deployment device comprises:
the system comprises an acquisition module, a configuration module and a configuration module, wherein the acquisition module is used for acquiring a plurality of installation packages configured by a basic environment and determining a first configuration file for automatically installing the plurality of installation packages, the installation packages comprise an installation package of a container engine, a container arrangement platform and a network plug-in, and the first configuration file is used for configuring the closing of a network firewall and the modification of system file parameters;
The installation module is used for installing a master-slave node basic environment of the container arrangement platform for the client based on the first configuration file so as to realize the automatic establishment of the related clusters of the container arrangement platform;
a selection module for selecting a target node from the slave nodes in the container arrangement platform and configuring an internet protocol address of the target node into a second configuration file;
and the execution module is used for carrying out containerized deployment of a target deployment object in the target node based on the second configuration file, wherein the target deployment object comprises an application store, a database, a core gateway and a management control end.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1 to 7.
CN202310898212.5A 2023-07-20 2023-07-20 Automatic deployment method, device, equipment and storage medium for enterprise-level service bus Pending CN116909588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310898212.5A CN116909588A (en) 2023-07-20 2023-07-20 Automatic deployment method, device, equipment and storage medium for enterprise-level service bus


Publications (1)

Publication Number Publication Date
CN116909588A (en) 2023-10-20

Family

ID=88364405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310898212.5A Pending CN116909588A (en) 2023-07-20 2023-07-20 Automatic deployment method, device, equipment and storage medium for enterprise-level service bus

Country Status (1)

Country Link
CN (1) CN116909588A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination