CN117453339A - Distributed architecture management system, micro-service platform, device and storage medium - Google Patents

Distributed architecture management system, micro-service platform, device and storage medium

Info

Publication number
CN117453339A
CN117453339A
Authority
CN
China
Prior art keywords
module
distributed architecture
management system
proxy
pod
Prior art date
Legal status
Pending
Application number
CN202311176452.0A
Other languages
Chinese (zh)
Inventor
龙桂锋
伍朗
冯智斌
许毅
Current Assignee
Zhuhai Huafa Financial Technology Research Institute Co ltd
Original Assignee
Zhuhai Huafa Financial Technology Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Huafa Financial Technology Research Institute Co ltd filed Critical Zhuhai Huafa Financial Technology Research Institute Co ltd
Priority to CN202311176452.0A
Publication of CN117453339A


Classifications

    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 8/63 Image based installation; Cloning; Build to order
    • G06F 8/71 Version control; Configuration management
    • G06F 9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F 2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F 2009/45591 Monitoring or debugging support
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides a distributed architecture management system, a micro-service platform, a device and a storage medium. The distributed architecture management system comprises: an Engine module Docker Engine, which is responsible for the creation and management of containers; a monitoring module Kubelet, which monitors the work of Pod processes; a proxy module kube-proxy, which provides a proxy for Pods; and a log management module Fluentd, which collects, stores and queries logs; wherein a Pod is the minimum unit of the distributed architecture management system. The invention relates to the technical field of computer network communication and enables engineers to manage Docker and containers more flexibly.

Description

Distributed architecture management system, micro-service platform, device and storage medium
Technical Field
The present disclosure relates to the field of computer network communications, and in particular, to a distributed architecture management system, a micro-service platform, a device, and a storage medium.
Background
Docker is an open-source application container engine. After Docker is installed on a physical host, multiple containers can be run on top of it; the containers are isolated from each other, share the operating system of the physical host, and each container can hold and execute a different application program. Docker is a virtualization technology built at the operating-system level, which greatly simplifies application deployment steps and reduces maintenance costs.
However, applying Docker to concrete service implementations is difficult: orchestration, management and scheduling of containers are not easy.
Therefore, how to manage Docker and containers at a higher level and more flexibly is a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
In view of the foregoing, embodiments of the present disclosure provide a distributed architecture management system, a micro-service platform, a device and a storage medium that allow engineers to manage Docker and containers more flexibly and at a higher level.
In a first aspect, embodiments of the present disclosure provide a distributed architecture management system, the distributed architecture management system comprising:
an Engine module Docker Engine, which is used for the creation and management of containers;
a monitoring module Kubelet, which is used for monitoring the work of Pod processes;
a proxy module kube-proxy, which is used for providing a proxy for Pods;
a log management module Fluentd, which is used for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system.
Optionally, the distributed architecture management system further includes:
a management controller Master, which is used for the management and control of the distributed architecture management system;
at least one workload Node, wherein the management controller distributes load to the workload Nodes.
Optionally, the Master is disposed on an independent server, and the server runs the following processes:
an ingress process controller API Server, which is responsible for the ingress process of the distributed architecture management system;
an automation control center Controller Manager, which is responsible for the automated control of resource objects;
and a resource scheduling center Scheduler, which is the process responsible for resource scheduling.
Optionally, the distributed architecture management system further includes:
a maintenance module Controller, which is used for maintaining the state of the distributed architecture management system.
Optionally, the distributed architecture management system further includes:
an image management module Container, which is used for image management.
Optionally, the Pod includes a normal Pod and a static Pod.
Optionally, the distributed architecture management system further includes:
a Service, which is used for providing an external access interface for the Pods that provide the same service.
In a second aspect, embodiments of the present disclosure provide a distributed architecture management micro-service platform, wherein,
an Engine module Docker Engine is used for the creation and management of containers;
a monitoring module Kubelet is used for monitoring the work of Pod processes;
a proxy module kube-proxy is used for providing a proxy for Pods;
a log management module Fluentd is used for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system;
and the Engine module, the monitoring module, the proxy module and the log management module are carried on the distributed architecture management micro-service platform.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
an Engine module Docker Engine, which is used for the creation and management of containers;
a monitoring module Kubelet, which is used for monitoring the work of Pod processes;
a proxy module kube-proxy, which is used for providing a proxy for Pods;
a log management module Fluentd, which is used for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system;
and a memory, which is in communication connection with any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd; wherein
the memory stores instructions executable by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd, and the instructions are executed by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium storing computer instructions for execution by any one of an Engine module Docker Engine, a monitoring module Kubelet, a proxy module kube-proxy, and a log management module Fluentd.
Each of the above aspects has the following technical effects:
the environmental difference of online and offline is eliminated, and the environmental consistency standardization of the application life cycle is ensured. The developer realizes the construction of the standard development environment by using the mirror image, and the migration is carried out by encapsulating the complete environment and the mirror image of the application after the development is finished, so that the testing and operation and maintenance personnel can directly deploy the software mirror image to test and release, and the continuous integration, testing and release processes are greatly simplified. The user does not need to worry about binding by the cloud platform any more, and meanwhile, the application multi-platform hybrid deployment becomes possible. Based on the environment consistency and standardization provided by the Docker, the container mirror image can be version-controlled by using tools such as Git, and compared with the version control based on codes, the version control of the whole application running environment can be realized by you, and the container mirror image can be quickly rolled back once faults occur. Compared with the prior virtual machine mirror image, the container compression and backup speed is faster, and the mirror image starting is as fast as starting a common process. Docker has no additional overhead of a management program, shares an operating system with the bottom layer, has better performance and lower system load, can run more application instances under the same condition, and can more fully utilize system resources. Meanwhile, the system has good resource isolation and limiting capacity, can accurately allocate resources such as CPU, memory and the like to the applications, and ensures that the applications cannot be influenced mutually. Docker carries out great innovation on the basis of the original Linux container, sets a whole set of standardized configuration method for the container, packages the application and the depending running environment into mirror images, really realizes the concept of 'construction times and running everywhere', and greatly improves the cross-platform property of the container. One developer can visit Docker and install and deploy within 15 minutes, which is a leap over the history of container use. Because of its ease of use, there are more people beginning to pay attention to container technology, speeding up the pace of container standardization. The method can share the application stores with huge and practical finished products, so that the developer can freely download the micro-service components, and great convenience is provided for the developer.
The foregoing is only an overview of the technical solutions of the present disclosure. In order that the above-mentioned and other objects, features and advantages of the present disclosure may be more clearly understood and carried out, a detailed description of preferred embodiments is given below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of a distributed architecture management system according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
It should be appreciated that the following specific embodiments of the disclosure are described in order to provide a better understanding of the present disclosure, and that other advantages and effects will be apparent to those skilled in the art from the present disclosure. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Before explaining the present invention in detail, a distributed architecture management system to which the present invention relates is described so that those skilled in the art can better understand the present invention.
The invention relates to a distributed architecture solution based on container technology, namely Kubernetes, a container cluster management system open-sourced by Google (known inside Google as Borg). This Docker-based distributed architecture is mainly used to automatically deploy, scale and manage containerized applications. Kubernetes fully supports distributed systems, has complete cluster control capability, a built-in intelligent load balancer, and strong fault discovery and self-healing capability. It also provides complete management tools for development, deployment and testing, and operation-and-maintenance monitoring. The core idea of Kubernetes is that everything is centered on services. Following this idea, a system built on Kubernetes can run independently on physical machines, virtual machine clusters or the cloud, so the Service is the core with which Kubernetes builds a distributed cluster. A Service must possess the following key characteristics: it has a uniquely assigned name; it has a virtual IP and port; it can provide some remote service capability; and it can be mapped onto a set of container applications that provide this remote service capability.
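By way of a concrete, non-limiting sketch of these Service characteristics (assuming a reachable Kubernetes cluster and the official Kubernetes Python client; the name my-service, the label app: my-app and the port numbers are merely illustrative), a Service with a uniquely assigned name and a virtual port, mapped onto the set of container applications selected by a label, could be created as follows:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl-style access).
    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # A Service with a uniquely assigned name and a virtual port; it is mapped onto
    # the Pods labelled app=my-app, which actually provide the remote service capability.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="my-service"),
        spec=client.V1ServiceSpec(
            selector={"app": "my-app"},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )

    core_v1.create_namespaced_service(namespace="default", body=service)

The Service then receives a stable virtual IP, so clients can address my-service instead of any individual Pod.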
The specific embodiments of the invention are as follows:
Embodiments of the present disclosure provide a distributed architecture management system, including:
S11, an Engine module Docker Engine, which is used for creating and managing containers;
S12, a monitoring module Kubelet, which is used for monitoring the work of Pod processes;
In this embodiment, the monitoring module Kubelet is responsible for tasks such as creating, starting and stopping the containers corresponding to a Pod. It is responsible for maintaining the lifecycle of those containers while managing Volumes and the network.
S13, a proxy module kube-proxy, which is used for providing a proxy for Pods;
In this embodiment, the proxy module kube-proxy is an important component for implementing the Kubernetes Service communication and load balancing mechanisms. It is responsible for providing service discovery and load balancing inside the cluster for Services.
S14, a log management module Fluentd, which is used for collecting, storing and querying logs;
wherein the Pod is a minimum unit of the distributed architecture management system.
It will be appreciated that a Pod is the smallest unit that Kubernetes creates, schedules and manages. Pods run on Node nodes and contain a plurality of service containers, which share a network namespace, IP address and ports and can communicate with each other through localhost.
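As a minimal sketch of this Pod model (again assuming the official Kubernetes Python client; the Pod name, label, image and port are hypothetical), a Pod wrapping a single service container could be declared and submitted as follows:

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # A Pod is the smallest unit Kubernetes creates and schedules; all containers in it
    # share one network namespace and IP address, so they can talk over localhost.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "my-app"}),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]
        ),
    )

    core_v1.create_namespaced_pod(namespace="default", body=pod)

Because the label matches the selector used in the earlier Service sketch, such a Pod would be reachable through that Service's external access interface.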
Fig. 1 is a schematic diagram of a distributed architecture management system according to an embodiment of the present invention. The kernel of Docker is derived from LXC (Linux Container, hereinafter referred to as LXC), a Linux container virtualization technology. The currently popular cloud architecture is based on virtual machines, which belong to virtualization technology; container technology such as Docker is also a virtualization technology, belonging to lightweight virtualization. Docker has three main cores: the image, the container and the repository. An image is like a special file system: in addition to the files that provide the programs, libraries, resources and configuration needed at container runtime, it contains configuration parameters prepared for the runtime (for example, environment variables). An image does not contain any dynamic data, and its content does not change after it is built. Kubernetes is usually organized as a K8S cluster (Cluster), which mainly comprises two parts: a Master Node and a group of Node nodes (compute nodes). A Pod is the most basic unit of operation of Kubernetes; a Pod represents a process running in the cluster and internally encapsulates one or more tightly related containers. Besides the Pod, K8S also has the concept of a Service, which can be regarded as the external access interface of the Pods that provide the same service. Among these components, Docker is responsible for creating containers; Kubelet is primarily responsible for monitoring the Pods assigned to the Node on which it resides, including their creation, modification, monitoring and deletion; kube-proxy is primarily responsible for providing a proxy for Pod objects; and Fluentd is mainly responsible for log collection, storage and query.
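To make the image/container/repository relationship concrete, the following sketch uses the Docker SDK for Python against a local Docker daemon (the image tag and container name are illustrative, and the SDK must be installed separately):

    import docker

    # Connect to the local Docker daemon.
    docker_client = docker.from_env()

    # Pull a read-only image from a repository; the image bundles the program,
    # libraries, configuration and environment variables it needs at runtime.
    image = docker_client.images.pull("nginx", tag="1.25")

    # Start an isolated container from that image; many containers can share one image.
    container = docker_client.containers.run("nginx:1.25", name="demo-nginx", detach=True)
    print(container.status)

    # Containers are disposable; the image itself is never modified.
    container.stop()
    container.remove()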
In this embodiment, on the one hand, continuous deployment and testing are supported. The difference between online and offline environments is eliminated, and environmental consistency and standardization across the application life cycle are ensured. Developers build a standard development environment using images; after development is finished, the complete environment and the application are encapsulated in an image for migration, so that testing and operation-and-maintenance personnel can deploy the software image directly for testing and release, which greatly simplifies continuous integration, testing and release processes. In another aspect, cross-cloud-platform support is provided. Users no longer need to worry about being locked in by a cloud platform, and hybrid deployment of an application across multiple platforms becomes possible. In yet another aspect, environmental standardization and version control are provided. Based on the environmental consistency and standardization provided by Docker, container images can be version-controlled with tools such as Git; compared with code-based version control, this enables version control of the entire application runtime environment, and a container image can be rolled back quickly once a fault occurs. Compared with traditional virtual machine images, containers are faster to compress and back up, and starting an image is as fast as starting an ordinary process. In yet another aspect, high resource utilization and isolation are achieved. Docker has no additional hypervisor overhead and shares the operating system with the underlying layer, so it has better performance and lower system load; under the same conditions it can run more application instances and make fuller use of system resources. At the same time, it has good resource isolation and limiting capabilities, can accurately allocate resources such as CPU and memory to applications, and ensures that applications do not affect each other. In yet another aspect, cross-platform capability is achieved through images. Docker greatly innovates on top of the original Linux container by defining a whole set of standardized configuration methods for containers and packaging the application together with its runtime dependencies into an image, truly realizing the idea of "build once, run anywhere" and greatly improving the cross-platform nature of containers. In yet another aspect, it is easy to understand and easy to use. A developer can get started with Docker and complete installation and deployment within 15 minutes, which is a leap in the history of container use. Because of this ease of use, more people have begun to pay attention to container technology, accelerating the pace of container standardization. In yet another aspect, application image repositories are available. Huge application repositories of practical, ready-made components can be shared, so that developers can freely download micro-service components, which provides great convenience for developers.
Optionally, the distributed architecture management system further includes:
a management controller Master, which is used for the management and control of the distributed architecture management system;
In this embodiment, the Master is responsible for the management and control of the entire cluster and can be understood as the cluster control node of Kubernetes. It is responsible for the management and control of the entire distributed architecture management system, and it runs an etcd service that stores the data of all resource objects. All control commands that we execute are sent to the Master, which is responsible for the specific execution process. The Master node usually occupies a dedicated server, on which the set of critical processes described below runs.
at least one workload Node, wherein the management controller distributes load to the workload Nodes.
In this embodiment, a workload Node can be understood as follows: the other machines in the Kubernetes cluster are called Node nodes. A Node may be a physical host or a virtual machine, and each Node may be assigned some load by the Master node, so the Node is the workload node of the Kubernetes cluster. When a Node goes down, its workload is automatically transferred by the Master to other Nodes. A set of critical processes also runs on each Node node.
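As a small illustrative sketch of this division of labour (assuming the same Python client and cluster access as in the earlier examples), the Master's view of its workload Nodes can be queried through the API, for example to check which Nodes are Ready to accept load:

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Each item is a workload Node; the Master schedules Pods onto Ready Nodes and
    # reschedules their load elsewhere if a Node stops reporting as Ready.
    for node in core_v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(node.metadata.name, "Ready:", ready)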
Optionally, the Master is disposed on an independent server, and the server runs the following processes:
an ingress process controller API Server, which is responsible for the ingress process of the distributed architecture management system;
In this embodiment, the ingress process controller API Server can be understood as the Kubernetes API Server, the key service process that provides the HTTP REST interface. It is the only entry point for add, delete, modify and query operations on resources in Kubernetes, and it is the ingress process for cluster control.
an automation control center Controller Manager, which is responsible for the automated control of resource objects;
In this embodiment, the automation control center Controller Manager can be understood as the Kubernetes Controller Manager, the automation control center for all resource objects in Kubernetes.
and a resource scheduling center Scheduler, which is the process responsible for resource scheduling.
In this embodiment, the resource scheduling center Scheduler can be understood as the Kubernetes Scheduler, the process responsible for resource scheduling.
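Since the API Server is the only entry point for such operations, every query or control command ultimately becomes an HTTP REST call against it. A brief hedged sketch (same assumptions as the earlier examples) that lists Pods cluster-wide through the API Server:

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # This read, like every add/delete/modify/query operation, goes through the
    # Kubernetes API Server; the Controller Manager and Scheduler react to the
    # resulting resource state rather than being called directly.
    for pod in core_v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)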
Optionally, the distributed architecture management system further comprises:
a maintenance module Controller, which is used for maintaining the state of the distributed architecture management system.
It will be appreciated that the maintenance module Controller is the Kubernetes component used for managing the Pods owned by the cluster and keeping them in the desired state.
Optionally, the distributed architecture management system further comprises:
an image management module Container, which is used for image management.
In this embodiment, the image management module Container is responsible for image management and for the actual running of Pods and containers.
Optionally, the Pod includes a normal Pod and a static Pod.
Optionally, the distributed architecture management system further comprises:
a Service, which is used for providing an external access interface for the Pods that provide the same service.
In this embodiment, the distributed architecture management system is an automated deployment platform based on Docker and Kubernetes, abbreviated as Dock8s. First, deployment is convenient: building an environment becomes very easy, since a development environment is just the address of one or several container images plus, at most, an execution script that controls the deployment flow. The environment image and its script can even be placed in a Git project, published to the cloud, and pulled locally when needed. Second, deployment is safer. Inconsistencies between development, test and online environments are a nuisance; with container technology the development, test and production environments can be kept uniform in versions and dependencies, ensuring that code is executed in a highly consistent environment, and a unified test environment also satisfies the environment requirements of a CI pipeline. Today the demand for distributed technology and capacity expansion keeps growing; if operations staff can use container technology for environment deployment, a great deal of deployment time is saved and many errors caused by manually configuring environments are avoided. Third, isolation is good. Whether in development or production, multiple services often need to run on one machine, and the dependencies required by each service may differ; problems arise if two applications need the same dependency or their dependencies conflict. It is preferable to isolate the different services provided by different applications on the same server, and containers have a natural advantage here: each container is an isolated environment, and the service it provides can be delivered as a whole. This high cohesion allows problematic services to be separated quickly and, in complex systems, debugged and handled in time. (It should be noted that this isolation is only relative to running directly on the server; virtual machine technology offers stronger isolation.) Fourth, rollback is fast. Before containers, a rollback mechanism typically required redeploying the previous version of the application to replace the problematic current version; early on this often meant running a complete development-to-deployment process, which took a long time, and even in a Git-based workflow it meant reverting to some historical commit and redeploying. Compared with container technology these approaches are not fast enough and may introduce new problems, because they are based on new modifications. Container technology, by contrast, naturally supports rollback: every historical container or image is preserved, so replacing a container or switching to a historical image is quick and simple. Fifth, the cost is low. In the past, building an application required a new server or virtual machine; servers have high purchase and operation costs, and virtual machines occupy many unnecessary resources.
In contrast, container technology is compact and lightweight, and only the dependencies that the application requires need to be built inside the container; this is the most important reason for the rapid development of container technology. Sixth, the management cost is lower. With the continued popularity and development of container technology, the accompanying container management and orchestration technologies have developed as well. Orchestration tools such as Docker Swarm, Kubernetes and Mesos are continuously updated and iterated, which gives container technology more possibilities and more room to play in production environments. As the ecosystem matures, the cost of using and learning containers such as Docker keeps falling, and more and more developers and enterprises are choosing them.
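As one hedged sketch of the fast-rollback property in a Kubernetes setting (assuming a Deployment named my-app already exists and that web-app:1.0.3 is a previously built, known-good image tag; both names are hypothetical), rolling back is simply repointing the workload at an earlier image:

    from kubernetes import client, config

    config.load_kube_config()
    apps_v1 = client.AppsV1Api()

    # Point the Deployment's container back at an earlier image tag; Kubernetes
    # replaces the running Pods with ones created from that preserved image.
    rollback_patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": "web", "image": "web-app:1.0.3"}]
                }
            }
        }
    }
    apps_v1.patch_namespaced_deployment(
        name="my-app", namespace="default", body=rollback_patch
    )

Because every historical image remains available in the registry, such a swap takes effect in seconds, in contrast to rebuilding and redeploying from source.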
In a second aspect, embodiments of the present disclosure provide a distributed architecture management micro-service platform, wherein,
an Engine module Docker Engine is used for the creation and management of containers;
a monitoring module Kubelet is used for monitoring the work of Pod processes;
a proxy module kube-proxy is used for providing a proxy for Pods;
a log management module Fluentd is used for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system;
and the Engine module, the monitoring module, the proxy module and the log management module are carried on the distributed architecture management micro-service platform.
In this embodiment, the core network adopts a micro-service architecture: the monolithic architecture (Monolithic), once matched with containers, evolves into a micro-services architecture (Microservices), which is equivalent to splitting one all-purpose unit into N dedicated ones. Each dedicated unit is assigned to an isolated container, giving the greatest degree of flexibility.
In this embodiment, the distributed architecture management system is the automated deployment platform based on Docker and Kubernetes (Dock8s) described above, and it provides the same advantages of convenient deployment, deployment safety, good isolation, fast rollback, low cost and lower management cost that were described for the first aspect; that description is not repeated here.
In another embodiment, the invention provides an electronic device, which adopts the following technical scheme:
the electronic device includes:
an Engine module Docker Engine, which is used for the creation and management of containers;
a monitoring module Kubelet, which is used for monitoring the work of Pod processes;
a proxy module kube-proxy, which is used for providing a proxy for Pods;
a log management module Fluentd, which is used for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system;
and a memory, which is in communication connection with any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd; wherein
the memory stores instructions executable by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd, and the instructions are executed by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd.
An electronic device according to an embodiment of the present disclosure includes a memory. The memory is for storing non-transitory computer-readable instructions. In particular, the memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like.
Therefore, the electronic device has all the beneficial effects of the distributed architecture management system described above, including the convenient deployment, deployment safety, good isolation, fast rollback, low cost and lower management cost of the Docker- and Kubernetes-based automated deployment platform (Dock8s); that description is not repeated here.
It should be understood by those skilled in the art that, in order to solve the technical problem of how to obtain a good user experience effect, the present embodiment may also include well-known structures such as a communication bus, an interface, and the like, and these well-known structures are also included in the protection scope of the present disclosure.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. A schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 2 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 2, the electronic device may include a processing means (e.g., a central processing unit, a graphic processor, etc.) that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) or a program loaded from a storage means into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the electronic device are also stored. The processing device, ROM and RAM are connected to each other via a bus. An input/output (I/O) interface is also connected to the bus.
In general, the following devices may be connected to the I/O interface: input means including, for example, sensors or visual information gathering devices; output devices including, for example, display screens and the like; storage devices including, for example, magnetic tape, hard disk, etc.; a communication device. The communication means may allow the electronic device to communicate wirelessly or by wire with other devices, such as edge computing devices, to exchange data. While fig. 2 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device, or installed from a storage device, or installed from ROM.
The detailed description of the present embodiment may refer to the corresponding description in the foregoing embodiments, and will not be repeated herein.
A computer-readable storage medium according to an embodiment of the present disclosure has stored thereon non-transitory computer-readable instructions. The computer instructions are executed by any one of an Engine module Docker Engine, a monitoring module Kubelet, an agent module kube-proxy, and a log management module Fluentd.
The computer-readable storage medium described above includes, but is not limited to: optical storage media (e.g., CD-ROM and DVD), magneto-optical storage media (e.g., MO), magnetic storage media (e.g., magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (e.g., memory card), and media with built-in ROM (e.g., ROM cartridge).
Therefore, this embodiment likewise has all the advantages of the distributed architecture management system described above; the description of the Dock8s automated deployment platform and its advantages is not repeated here.
The detailed description of the present embodiment may refer to the corresponding description in the foregoing embodiments, and will not be repeated herein.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this disclosure, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions, and the block diagrams of devices, apparatuses and systems involved in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, the devices, apparatuses and systems may be connected, arranged and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
In addition, as used herein, the use of "or" in a recitation of items beginning with "at least one" indicates a disjunctive recitation, so that, for example, a recitation of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations may be made to the techniques described herein without departing from the teachings of the techniques defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A distributed architecture management system, the distributed architecture management system comprising:
an Engine module Docker Engine, for creating and managing containers;
a monitoring module Kubelet, for monitoring the operation of Pod processes;
a proxy module kube-proxy, for providing a proxy for Pods;
a log management module Fluentd, for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system.
2. The distributed architecture management system of claim 1, wherein the distributed architecture management system further comprises:
a management controller Master, for managing and controlling the distributed architecture management system;
at least one workload Node, to which the management controller Master distributes load.
3. The distributed architecture management system of claim 2, wherein the management controller Master is disposed on a separate server running the following processes:
an access process controller API Server, a process responsible for the access processing of the distributed architecture management system;
an automation control center Controller Manager, a process responsible for the automated control of resource objects;
and a resource scheduling center Scheduler, a process responsible for resource scheduling.
4. The distributed architecture management system of claim 1, wherein the distributed architecture management system further comprises:
a maintenance module Controller, for maintaining the state of the distributed architecture management system.
5. The distributed architecture management system of claim 1, wherein the distributed architecture management system further comprises:
an image management module Container, for image management.
6. The distributed architecture management system of claim 1, wherein,
the Pod includes a normal Pod and a static Pod.
7. The distributed architecture management system of claim 1, wherein the distributed architecture management system further comprises:
a server Service, for providing an external access interface for the Pods that provide the same service.
8. A distributed architecture management micro-service platform, wherein:
an Engine module Docker Engine, for creating and managing containers;
a monitoring module Kubelet, for monitoring the operation of Pod processes;
a proxy module kube-proxy, for providing a proxy for Pods;
a log management module Fluentd, for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system;
and the Engine module, the monitoring module, the proxy module and the log management module are carried on the distributed architecture management micro-service platform.
9. An electronic device, the electronic device comprising:
an Engine module Docker Engine, for creating and managing containers;
a monitoring module Kubelet, for monitoring the operation of Pod processes;
a proxy module kube-proxy, for providing a proxy for Pods;
a log management module Fluentd, for collecting, storing and querying logs;
wherein a Pod is the minimum unit of the distributed architecture management system; and
a memory in communication connection with any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd; wherein
the memory stores instructions executable by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd, and the instructions are executed by any one of the Engine module Docker Engine, the monitoring module Kubelet, the proxy module kube-proxy and the log management module Fluentd.
10. A computer-readable storage medium storing computer instructions that are executed by any one of an Engine module Docker Engine, a monitoring module Kubelet, a proxy module kube-proxy, and a log management module Fluentd.
CN202311176452.0A 2023-09-12 2023-09-12 Distributed architecture management system, micro-service platform, device and storage medium Pending CN117453339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311176452.0A CN117453339A (en) 2023-09-12 2023-09-12 Distributed architecture management system, micro-service platform, device and storage medium


Publications (1)

Publication Number Publication Date
CN117453339A true CN117453339A (en) 2024-01-26

Family

ID=89591784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311176452.0A Pending CN117453339A (en) 2023-09-12 2023-09-12 Distributed architecture management system, micro-service platform, device and storage medium

Country Status (1)

Country Link
CN (1) CN117453339A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination