Kubernetes-based container cloud architecture and interaction method among modules thereof
Technical Field
The invention relates to the field of cloud computing and container technology, and in particular to a Kubernetes-based container cloud architecture and an interaction method among its modules.
Background
With the rapid development of the mobile internet, the number of internet users and the time they spend online have grown rapidly, and website back-end architectures have evolved continuously to meet the ever-growing access demand. Server architecture design has progressed from deploying all services, such as the Web service and the database service, on a single physical server, to separating the database service from the Web service to improve server performance and security, to distributing the load across multiple servers with load balancing technology to reduce the pressure on any single server, and, most recently, to automatic scaling, in which server nodes are added and removed automatically according to the access demand and a predefined scaling policy by monitoring metrics such as CPU and memory of the existing server cluster. Compared with the traditional method of adding and deleting nodes manually, this approach responds quickly, lowers operation and maintenance costs, and offers high stability.
However, implementing an auto-scaling architecture with traditional servers and cloud computing technology raises many problems, so the auto-scaling method has not been widely adopted. First, deploying a service module onto a server takes a long time, while mobile internet applications require service modules to be iterated and brought online quickly, so frequent deployment and updating of service modules wastes a great deal of manpower and material resources. Meanwhile, under massive access requests, a traditional server or cloud computing architecture cannot start new service nodes quickly, which causes access delays and even system crashes. Because the architecture of traditional cloud computing technology is essentially based on virtual machines, its inherent defects make the application effect unsatisfactory in some specialized fields. In recent years, with the popularity of cloud computing and the rapid development of related technologies, the infrastructure of cloud computing is no longer limited to virtual machines.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a Kubernetes-based container cloud architecture that can be conveniently applied to container systems requiring high availability and scalability.
A further aim of the invention is to provide an interaction method among the modules of the Kubernetes-based container cloud architecture.
The purpose of the invention is realized by the following technical scheme:
a Kubernetes-based container cloud architecture comprises an image construction module, a data warehouse module, a load balancing module, a service discovery module, a container monitoring module, and a server module connected to each of the other five modules, wherein:
the image construction module is used for building, storing and distributing image files;
the data warehouse module is used for storing and processing the database data in the cluster;
the load balancing module is used for load balancing across the computing nodes in the Kubernetes cluster;
the service discovery module is used for acquiring information about dynamic changes of the computing nodes in the Kubernetes cluster;
and the container monitoring module is used for collecting and displaying the running state information of each computing node of the Kubernetes cluster.
The image construction module is a private Docker registry module and uses Dockerfile packaging technology and Kubernetes yaml orchestration templates.
The data warehouse module uses Hadoop nodes to process MySQL and MongoDB data.
The load balancing module adopts the Nginx-Plus proxy tool.
The service discovery module adopts an Etcd storage system.
The server module employs a CentOS operating system.
The other purpose of the invention is realized by the following technical scheme:
an interaction method between modules of a Kubernetes-based container cloud architecture comprises the following steps:
(1) code update: a developer uploads a new code image to the image construction module, and operation and maintenance personnel download the new image from the image construction module and load it for running;
(2) load balancing: a requesting end sends a request, and the load balancing module distributes the received request to a computing node according to the load balancing policy; the computing node processes the request and returns the result to the load balancing module, and the load balancing module returns the response to the requesting end;
(3) service discovery: when a new computing node is added, its information is registered with the service discovery module; when a computing node is removed, the load balancing module is first notified to remove the node, and then the node's information is removed from the service discovery module;
(4) container monitoring: the monitoring module running in each server module acts as a slave node and the container monitoring module acts as the master node; each slave node periodically sends the metric data of its server module to the master node, and the master node receives the metric data from each slave node, plots it, and displays it in the browser of the host where the system monitoring module is located;
(5) data processing: Hadoop is run in the Kubernetes cluster to process the massive MongoDB and MySQL data collected by the container applications, so that the data of the container system is usable and analyzable.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. LXC container technology has gradually earned a place in the field of cloud computing thanks to its fast startup and good performance. Docker is an open-source application container engine that lets developers package an application and its dependencies into a portable container and then release it to Linux machines. Containers use a sandbox mechanism, have no interfaces with one another, and can easily run on hosts and in data centers. Kubernetes, adopted by the invention as the engine for managing lightweight Docker containers, makes it convenient to create containers quickly and to manage their full life cycle, which further promotes the adoption of container technology. The main functions of the Kubernetes container cluster management system include packaging, instantiating and running applications with Docker; running and managing containers across hosts in a cluster; and solving communication problems between containers running on different hosts. Furthermore, Kubernetes is one of the earliest Docker-based container scheduling schemes; with its mature system architecture and strong horizontal scaling capability, it is expected to become the most successful container scheduling scheme and to serve the future field of cloud computing.
2. The invention can be conveniently applied to container systems that require high availability and scalability, and at the same time realizes the following functions: (1) monitoring the running state of the CentOS servers and the Docker containers running on them; (2) dynamically scaling the cluster out and in with Kubernetes according to the monitoring data; (3) achieving continuous integration of Docker images by building a private Docker Registry and formulating an operation management policy, so that services can be updated without restarting containers.
Drawings
Fig. 1 is a schematic structural diagram of a Kubernetes-based container cloud architecture according to the present invention.
Fig. 2 is an architecture diagram of the operation process of the image construction module of the container cloud architecture shown in Fig. 1.
Fig. 3 is an architecture diagram of the operation process of the load balancing module of the container cloud architecture shown in Fig. 1.
Fig. 4 is an architecture diagram of the operation process of the service discovery module of the container cloud architecture shown in Fig. 1.
Fig. 5 is an architecture diagram of the operation process of the container monitoring module of the container cloud architecture shown in Fig. 1.
The parts in the drawings are marked as follows: 1 - image construction module, 2 - load balancing module, 3 - service discovery module, 4 - container monitoring module.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in Figs. 1 to 5, the specific embodiments are as follows:
(I) Method for automatic image deployment
The image construction module 1 is used for providing an image construction strategy, covering the path from development to deployment and the definition of jobs; it mainly supports the automatic combination of images and source code, which is well suited to updating container images promptly when a service is updated. The image construction process is shown in Fig. 2, and the detailed steps are as follows:
(1) Code submission
Developers submit code written in various languages to a version management repository. In the method, SVN is used as the version management tool, which is suitable for guaranteeing code consistency during collaborative project development in a team.
(2) Defining jobs
Meanwhile, operation and maintenance personnel write a Docker image packaging file (Dockerfile) suitable for the project and a Kubernetes-based yaml file, and create the corresponding container operation rules in strict accordance with the Dockerfile writing rules and the yaml orchestration rules.
(3) Building image files
Following the CI/CD automatic deployment process, a container image file is built from the source code in step (1) and the Dockerfile in step (2) and pushed to the private Docker image repository.
(4) Running image files
The image file from step (3) is run in Kubernetes under a reasonable arrangement according to the yaml file from step (2); an example manifest is sketched after this subsection.
The above steps provide image storage and running services; the image repository uses the private registry installation method recommended by Docker, which solves the problem of combining job definitions with source code.
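For illustration, the Kubernetes yaml template written in step (2) and run in step (4) could resemble the following minimal sketch. The application name, registry address, port and tag are hypothetical placeholders rather than part of the invention; a real template would be adapted to the project and its Dockerfile.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                 # hypothetical application name
  labels:
    app: web-app
spec:
  replicas: 3                   # initial number of application pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        # Image built from the project Dockerfile in step (3) and pushed
        # to the private Docker registry; address and tag are examples.
        image: registry.example.com:5000/web-app:1.0.0
        ports:
        - containerPort: 8080   # port the Web service listens on
```

Applying such a file with kubectl corresponds to the running of the image described in step (4).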
(II) Container system scaling method
The load balancing module 2 is used for load balancing across the Kubernetes nodes of the dynamically changing cluster. The method is implemented with the Nginx-Plus tool, a reliable load balancing tool released by Nginx, Inc. that can be combined with a Kubernetes cluster and is particularly suitable for Web systems under heavier load. Working together with Kubernetes, it enhances the load balancing capability of Kubernetes and thereby improves the scalability of the container system. The load balancing module is shown in Fig. 3.
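As an illustration of how the computing nodes can be exposed to the load balancer, a Kubernetes Service of type NodePort could serve as the upstream target of Nginx-Plus. This is only a minimal sketch: the label and ports are hypothetical, and the Nginx-Plus upstream configuration itself is not reproduced here.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: web-app              # matches the pods of the hypothetical Deployment sketched above
  ports:
  - name: http
    port: 80                  # Service port inside the cluster
    targetPort: 8080          # container port of the Web application
    nodePort: 30080           # node port that the Nginx-Plus upstream can point at
```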
The service discovery module 3 is used for storing, in real time, the information of each node of the dynamically changing Kubernetes cluster. The method adopts the Etcd storage system to store the information of each computing node. Etcd is a highly available key/value storage system mainly used for shared configuration and service discovery. Through Etcd, the information of each Kubernetes node can be added and removed quickly and effectively. The service discovery module is shown in Fig. 4.
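For illustration, Etcd can itself be run as a container in the cluster. The following is a minimal single-member sketch; the image tag, binary path, flags and the key scheme shown in the comments are assumptions rather than a prescribed layout.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd-discovery
  labels:
    app: etcd-discovery
spec:
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.3.0          # assumed image and tag
    command:
    - /usr/local/bin/etcd                      # assumed binary path inside the image
    - --name=discovery-0
    - --data-dir=/var/lib/etcd
    - --listen-client-urls=http://0.0.0.0:2379
    - --advertise-client-urls=http://127.0.0.1:2379
    ports:
    - containerPort: 2379                      # client port polled by the load balancing module
    volumeMounts:
    - name: data
      mountPath: /var/lib/etcd
  volumes:
  - name: data
    emptyDir: {}
# Hypothetical key scheme for node registration, e.g.
#   /nodes/<node-name>  ->  <node-ip>:<node-port>
# The load balancing module periodically reads these keys and
# regenerates its upstream list accordingly.
```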
(III) Container monitoring method
The container monitoring module 4 is used for collecting, storing and displaying the container running state information of each node of the Kubernetes cluster. CentOS is the most common enterprise server operating system, and on this basis the invention implements a container monitoring module that combines Zabbix and cAdvisor. Zabbix is an enterprise-level solution that provides distributed system monitoring and network monitoring functions through a Web interface; cAdvisor is a daemon used to collect, aggregate and export container running metrics, through which various Docker performance data in the Kubernetes cluster can be obtained. Docker is supported in Zabbix, and container monitoring is achieved using Zabbix Docker Monitoring. The container monitoring system is shown in Fig. 5, and the monitoring steps are as follows:
(1) Deploy the Zabbix Server on the Kubernetes master node, monitor the master node, import the Web management interface, and distribute it to the child nodes.
(2) Deploy the Zabbix Agent on the child nodes to realize host monitoring of each child node.
(3) Start a cAdvisor container on each Kubernetes node to monitor the performance of the containers on the host (a DaemonSet sketch is given after these steps).
(4) Display the obtained monitoring data on the front end through HTTP and JSON interfaces.
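As an illustration of step (3), cAdvisor is commonly started on every node with a Kubernetes DaemonSet such as the following minimal sketch; the namespace, image tag and host paths follow the usual cAdvisor example and may need adjustment for a particular cluster.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      containers:
      - name: cadvisor
        image: google/cadvisor:v0.28.3     # assumed image tag
        ports:
        - containerPort: 8080              # HTTP/JSON metrics endpoint used in step (4)
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/docker
```

The master node can then query each node's cAdvisor endpoint on port 8080 to obtain the container metrics displayed in step (4).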
(IV) Interaction method of modules
An interaction method among the modules of the Kubernetes-based container cloud architecture comprises the following processes:
(1) Code update: one host is used as the private Docker registry server; when the service code is updated, the developer uploads a new code image to the host running the private registry module in an automated manner, and every time the system adds a new Kubernetes node, the node pulls the new service image from the private registry module and loads it for running, as shown in Fig. 2.
(2) Load balancing: the requesting end sends an HTTP request, and the load balancing module distributes the received request to a Kubernetes node according to the load balancing policy; the node then processes the request and returns the result to the load balancing module, and the load balancing module returns the response to the requesting end.
The load balancing module periodically queries the service discovery module (Etcd) and updates and reloads its configuration file according to the query result; specifically, it periodically pulls the latest Kubernetes node information from the service discovery module, as shown in Fig. 3.
(3) Service discovery, implemented with the open-source Etcd tool. When a new computing node is added, its information is registered with the service discovery module (Etcd); when a computing node is removed, the load balancing module is first notified to remove the node, and then the node's information is removed from the service discovery module, as shown in Fig. 4.
(4) Container monitoring, using an active mode in which the slave nodes report to the master node. The monitoring module running in each server module acts as a slave node, and one host running the system monitoring module acts as the master node; each slave node periodically sends the metric data of its server module, such as CPU usage and remaining memory, to the master node, and the master node receives the metric data from each slave node, plots it, and displays it in the browser of the host where the system monitoring module is located, as shown in Fig. 5.
(5) Data processing: Hadoop is run in the Kubernetes cluster to process the large volumes of MongoDB and MySQL data collected by the container applications, so that the data of the container system is usable and analyzable; a sketch of such a batch job is given below.
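As an illustration of step (5), a batch analysis could be submitted as a Kubernetes Job. This is only a sketch under assumptions: the image name, jar path and input/output paths are hypothetical, and the image is assumed to bundle the Hadoop client together with the MapReduce program that analyses the exported MongoDB and MySQL data.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mongo-mysql-analysis
spec:
  backoffLimit: 2                     # retry the batch at most twice on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: hadoop-batch
        # Hypothetical image containing the Hadoop client and the analysis program.
        image: registry.example.com:5000/hadoop-batch:latest
        command:
        - hadoop
        - jar
        - /opt/jobs/data-analysis.jar   # hypothetical MapReduce program
        - /data/input                   # hypothetical input path
        - /data/output                  # hypothetical output path
```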
(V) Deployment requirements of the method of the invention
The invention requires at least four hosts: one host serves as the host of the service discovery module, one host serves as the host of the load balancing module, one host runs the container monitoring module, one host serves as the server running the image construction module, and the remaining hosts serve as the nodes running the server modules; a sketch of one possible layout is given below. Each host runs its own service logic, and a Kubernetes-based high-availability server cluster is thus deployed.
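For illustration only, one possible host layout for the minimal deployment could be recorded as the following inventory; all hostnames and addresses are hypothetical examples.

```yaml
# Hypothetical host layout; names and addresses are examples only.
hosts:
  - name: discovery-01      # runs the service discovery module (Etcd)
    address: 192.168.1.10
  - name: lb-01             # runs the load balancing module (Nginx-Plus)
    address: 192.168.1.11
  - name: monitor-01        # runs the container monitoring module (Zabbix Server)
    address: 192.168.1.12
  - name: registry-01       # runs the image construction module (private Docker registry)
    address: 192.168.1.13
  # Additional hosts join the Kubernetes cluster as computing nodes
  # running the server module.
```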
The invention discloses a Kubernetes-based container cloud architecture and an interaction method among its modules, which can be conveniently applied to container systems that require high availability and scalability, and at the same time realizes the following functions: (1) monitoring the running state of the CentOS servers and the Docker containers running on them; (2) dynamically scaling the cluster out and in with Kubernetes according to the monitoring data (an autoscaling sketch is given below); (3) achieving continuous integration of Docker images by building a private Docker Registry and formulating an operation management policy, so that services are updated without restarting containers. These functions give the container system high concurrency and stability.
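As an illustration of function (2), monitoring-driven scaling can be expressed in Kubernetes with a HorizontalPodAutoscaler. The following minimal sketch assumes a Deployment named web-app (the hypothetical name used in the earlier sketches) and a CPU-based policy; other metrics or thresholds could be used instead.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                       # hypothetical Deployment to scale
  minReplicas: 2                        # lower bound when load is light
  maxReplicas: 10                       # upper bound under heavy load
  targetCPUUtilizationPercentage: 70    # scale out above 70% average CPU
```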
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.