CN113169952B - Container cloud management system based on blockchain technology - Google Patents


Info

Publication number
CN113169952B
CN113169952B (application CN201880097738.0A)
Authority
CN
China
Prior art keywords
application
user
node
master
management
Prior art date
Legal status
Active
Application number
CN201880097738.0A
Other languages
Chinese (zh)
Other versions
CN113169952A (en)
Inventor
韦小强
田江波
刘广德
陈奇
姚鑫
张鹏
王与实
Current Assignee
Bei Jing Lianyunjue Technology Ltd
Original Assignee
Bei Jing Lianyunjue Technology Ltd
Priority date
Filing date
Publication date
Application filed by Bei Jing Lianyunjue Technology Ltd
Publication of CN113169952A
Application granted
Publication of CN113169952B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 Network security protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords

Abstract

The invention discloses a container cloud management system based on blockchain technology, which comprises a plurality of management Node Masters and a plurality of working Nodes, wherein each management Node Master provides a cluster distributed storage database etcd. When a user deploys or deletes an application on the container cloud management system through a working Node, the management Node Master performs user signature verification and/or data consistency verification based on a blockchain consensus algorithm. The container cloud management system effectively improves system security: it gives the user management authority over their own resources, and neither the platform side nor any third party can operate the user's applications or read the user's operation records and data, so the user can conveniently use an independent public cloud platform provided by a third party without worrying about application and data security, which effectively reduces the user's cost of use.

Description

Container cloud management system based on blockchain technology
Technical Field
The invention belongs to the technical field of computer cloud computing, and particularly relates to a container cloud management system based on blockchain technology.
Background
Cloud computing technology provides convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, applications, etc.) that can be provisioned quickly with little administrative effort or interaction with the service provider.
Currently, cloud computing takes the following forms: public clouds, private clouds, and hybrid clouds, of which the public cloud is considered the primary form. A public cloud generally refers to a cloud that a third-party provider offers to users free of charge or at low cost; it can generally be used through the Internet, and its core attribute is shared resource service.
In the field of cloud computing, cloud computing platforms can be classified into three types according to their functions: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In the PaaS field, there are currently three main service schemes:
1. PaaS services in the form of an application management platform provided by an IaaS service provider for its users, which aim to let users manage their own applications more conveniently and to provide value-added services.
2. PaaS public cloud services, such as Google's GAE (Google App Engine) and Sina's SAE (Sina App Engine), which allow a user to complete the whole application life cycle of development, construction, testing, and deployment on a PaaS platform, but support only a limited set of development languages, and all applications and data reside on the PaaS public cloud platform.
3. PaaS private cloud services, for users who, in order to protect the security of the applications and data they develop, do not wish to use a PaaS public cloud platform or the application management platform provided by an IaaS service provider, and instead build their own PaaS private cloud platform, which can be developed by the users themselves or commissioned from a service provider as an implementation project.
Schemes 1-3 above all require the user to bind resources, applications, data, and the platform together for installation, deployment, operation, and maintenance. Schemes 1 and 2 in particular tightly couple resources, applications, data, and the platform, so neither can be adopted by users who are concerned about application and data security.
Although scheme 3 is the preferred option for users who value application and data security, building the cloud platform is costly: a self-built platform requires substantial investment in personnel and servers, while commissioning an external provider incurs expensive project implementation fees, so both the cost and the implementation period exceed those of schemes 1 and 2.
However, it can be expected that, with the continuous development of cloud computing technology, the PaaS public cloud platform will be accepted by more and more users thanks to its flexibility, low cost, pay-per-use billing, and other advantages; as long as the security problem of the existing PaaS public cloud platform can be solved, it is expected to become the preferred option for users deploying Internet applications.
Therefore, how to solve the security problem of the PaaS public cloud platform that users currently worry about most, and how to let users conveniently use an independent PaaS public cloud platform provided by a third party without worrying about application and data security, is the problem the invention sets out to solve.
Disclosure of Invention
The invention aims to provide a container cloud management system based on blockchain technology, which removes the tight coupling among resources, applications, data, and the platform, reduces the user's cost of use, and solves the security problems of resources, applications, and data. The security issues here include:
1. In a container cloud management platform, the problem of the user's management authority over their own resources, so that the platform side cannot operate the user's resources;
2. In a container cloud management platform, the problem of the user's operation authority over applications and data, so that the platform side, holding only minimum permissions, cannot operate the user's applications or read the user's operation records and data;
3. How to ensure that, when some management nodes of the container cloud management platform crash or are compromised, normal use of the platform and the security of the user's application services and data are not affected;
4. How to audit the operation records of applications the user deploys on the container cloud management platform.
The purpose of the invention is realized by the following technical scheme:
the invention discloses a container cloud management system based on blockchain technology, which comprises a plurality of management Node Masters, wherein each management Node Master can communicate with the other management Node Masters through a network, and each management Node Master can receive and process an access request from a working Node, the access request being a request by the working Node to join the container cloud management system. Before the working Node sends the access request to the management Node Master, the user generates a public/private key pair for it on the working Node.
Further, the access request includes the working Node registering its own information with the management Node Master; this information comprises the working Node's host name, kernel version, operating system information, and docker version. Processing the access request includes the management Node Master bringing the working Node into cluster scheduling and monitoring the working Node's state in cluster scheduling in real time.
Further, the real-time monitoring is specifically: the working Node sends its state information to the management Node Master at regular intervals, and the management Node Master writes the received state information into the etcd database and analyzes and processes it.
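As a sketch, the registration payload and periodic status report described above might look like the following; the field names, the stubbed docker version, and the heartbeat shape are assumptions of this sketch, not the actual wire format:

```python
import platform
import time

def registration_info() -> dict:
    """Build the self-information a working Node registers with the Master.

    The field set mirrors the claim text (host name, kernel version,
    operating system, docker version); the docker version is stubbed here
    because querying it would require a running docker daemon.
    """
    return {
        "hostname": platform.node(),
        "kernel": platform.release(),
        "os": platform.system(),
        "docker_version": "unknown",  # would come from `docker version`
    }

def heartbeat(node_name: str) -> dict:
    """Build the periodic status report the working Node sends to the
    Master, which the Master then writes into etcd and analyzes."""
    return {
        "node": node_name,
        "ready": True,
        "reported_at": time.time(),
    }
```

The Master side would persist each heartbeat and flag nodes whose reports stop arriving.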
Further, each of the plurality of Master management nodes provides a cluster distributed storage database etcd, and the Master management Node processing the access request of the working Node includes the Master management Node writing the self information of the working Node into the cluster distributed storage database etcd.
Further, after the access request, the Master of the management Node synchronizes the information and the state information of the accessed working Node to all the masters of other management nodes in the container cloud management system.
Preferably, after the access request, the accessed working Node may communicate with each of the plurality of management nodes masters through a network, and the communicating between the working Node and each of the plurality of management nodes masters includes the working Node monitoring an operation of the management Node masters for an application.
Preferably, the operations for an application include creating, modifying, and deleting an application: when the operation is creation, the working Node creates the application according to the creation requirement; when it is modification, the working Node modifies the application according to the modification requirement; and when it is deletion, the working Node deletes the application according to the deletion requirement. The application is Pod-based, so the operations for the application accordingly include creating, modifying, and deleting a Pod.
In the invention, a user can connect their own equipment to the container cloud management system as a working Node, and through that working Node send an application creation request to any one of the plurality of management Node Masters, the request carrying the user's public key, signature information signed with the user's private key, and the original template data used for creating the application.
Further, the management Node Master can receive and respond to the application creation request; responding includes verifying the user signature on the request with the user's public key, and if verification fails, refusing to create the application and prompting the user that application creation has failed.
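The sign-then-verify flow can be illustrated with a deliberately tiny textbook RSA example; the toy key, the payload, and the helper names are all inventions of this sketch, and a real deployment would use a proper cryptographic library with full-size keys:

```python
import hashlib

# Toy RSA key, for illustration only: real systems use 2048-bit-plus keys
# generated by a cryptographic library, never hand-picked primes like these.
P, Q = 61, 53
N = P * Q   # 3233, the public modulus
E = 17      # public exponent (the "user public key" the Master holds)
D = 413     # private exponent kept by the user; E*D % lcm(P-1, Q-1) == 1

def _digest(msg: bytes) -> int:
    # Hash the request payload, reduced modulo N to fit the toy key size.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes, priv: int = D) -> int:
    """The user signs the application-creation payload with the private key."""
    return pow(_digest(msg), priv, N)

def verify(msg: bytes, sig: int, pub: int = E) -> bool:
    """The Master checks the signature against the user's public key."""
    return pow(sig, pub, N) == _digest(msg)
```

A Master receiving a creation request would call `verify` with the public key carried in the request and refuse the operation whenever it returns `False`.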
Further, if the user signature is verified successfully, the management Node Master performs a write operation, writing the application creation request into its cluster distributed storage database etcd, wherein the write operation includes distributing the application creation request to all other management Node Masters.
Preferably, the write operation further includes verifying, with a blockchain consensus algorithm, the data consistency and signature data of all cluster distributed storage databases etcd in the container cloud management system based on the application creation request, specifically: if more than half of the etcd databases in the system pass verification, the application creation request is written into every etcd database in the system; otherwise, every etcd database refuses to write the request, and the user is prompted that application creation has failed.
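The more-than-half commit rule can be sketched as follows; the replica model and function names are illustrative stand-ins, not the actual etcd interface:

```python
def commit_if_quorum(request: dict, replicas: list) -> bool:
    """Write `request` into every etcd replica only when strictly more
    than half of the replicas verified it (data consistency plus
    signature check).

    Each replica is modeled as a dict holding a `verify` callable and a
    `store` list; this mirrors the consensus rule, not the real etcd API.
    """
    approvals = sum(1 for r in replicas if r["verify"](request))
    if approvals * 2 > len(replicas):      # strictly more than half
        for r in replicas:
            r["store"].append(request)     # commit on every replica
        return True
    return False                           # reject and prompt the user
```

With three replicas, two approvals commit the write; a single approval rejects it.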
Further, the management Node Master screens for working Nodes that meet the conditions to create the application, specifically: the management Node Master screens the working Nodes that meet the conditions according to the scheduling rules.
Further, the management node Master writes the screening result into the cluster distributed storage database etcd.
Further, the working Node acquires and processes the creation task through a Watch mechanism, where acquiring includes the working Node obtaining the user public key, the signature information signed with the user's private key, and the original template data carried by the application creation request.
Further, processing the creation task comprises: the working Node verifies the user signature on the application creation request with the user public key; if verification succeeds, it creates the application and prompts the user that creation succeeded; if verification fails, it refuses to create the application and prompts the user that creation failed.
Further, the working Node sends the creation result state and subsequent running states to the management Node Master, which monitors the state of system resources in real time, judges and processes them according to the current state, and restores the resource state to the expected state.
In addition, the user can also send an application deletion request to any one of the plurality of management nodes Master through the accessed working Node, wherein the application deletion request carries a user public key and signature information signed by the user using a private key.
Further, the management Node Master can receive and respond to the application deletion request; responding includes verifying the user signature on the request with the user public key, and if verification fails, refusing to delete the application and prompting the user that application deletion has failed.
Further, if the user signature is verified successfully, the management node Master queries a target application resource object in all cluster distributed storage databases etcd in the container cloud management system.
Preferably, if the user signature is verified successfully, the Master of the management node distributes the application deletion request to the masters of all other management nodes.
Further, data consistency and signature data are verified for all cluster distributed storage databases etcd in the container cloud management system with a blockchain consensus algorithm based on the application deletion request, specifically: if more than half of the etcd databases in the system pass verification, the management Node Master sends the application deletion request to the corresponding working Node; otherwise, the management Node Master refuses the application deletion request and prompts the user that application deletion has failed.
Further, the corresponding working Node receives and processes the application deletion request, specifically: the working Node verifies the user signature on the request with the user public key; if verification fails, it refuses to delete the application and prompts the user that deletion failed; if verification succeeds, it deletes the application.
Further, after the application is deleted, the corresponding working Node sends a deletion result to the Master, and the Master executes a write operation of writing the deletion result into the etcd.
Further, the write operation also includes all cluster distributed storage databases etcd in the container cloud management system verifying the data consistency and signature data of the deletion result with a blockchain consensus algorithm, specifically: if more than half of the etcd databases in the system pass verification, the deletion result is written into every etcd database and the user is prompted that the application was deleted successfully; otherwise, every etcd database refuses to write the deletion result, and the user is prompted that application deletion failed.
Preferably, the application to which the present invention relates is Pod-based.
Compared with the prior art, the container cloud management system based on blockchain technology provided by the invention gives the user management authority over their own resources; neither the platform side nor any third party can operate the user's applications or read the user's operation records and data, so the user can conveniently use an independent third-party container cloud management platform without worrying about application and data security, and the user's cost of use is effectively reduced.
Drawings
FIG. 1 is a schematic diagram of a cluster architecture of a prior art container cloud management system;
FIG. 2 is a business flow diagram of a prior art container cloud management system;
FIG. 3 is a service flow diagram of a container cloud management system based on blockchain technology according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and should not be taken to be limiting.
As is known, container technology is increasingly used in cloud computing. The container referred to herein is essentially a virtualization technology; it differs from a virtual machine in that a virtual machine virtualizes hardware while a container virtualizes the operating system. Generally, a container packages an application together with its execution environment, and deploying the application means deploying the whole container; because the container carries its own execution environment, deployment cannot fail due to environment changes, achieving build once, run anywhere.
In general, the existing container cloud management system is based on a container management platform for managing containers; existing container management platforms include the Kubernetes, Mesos, and Swarm container management platforms.
The Kubernetes container management platform is currently the most popular distributed-architecture solution based on container technology; it adopts a distributed architecture that divides the machines in a cluster into management Node Masters and a group of working Nodes. The main functions of the Kubernetes container management platform include: using Docker to package, instantiate, and run applications; running and managing containers across hosts in cluster mode; and solving communication between containers running on different hosts.
For convenience of description and illustration, the improved container cloud management system based on blockchain technology provided by the embodiments of the present invention will be described below based on the Kubernetes container management platform, but those skilled in the art will understand that the container cloud management system based on blockchain technology of the present invention may also be based on other types of container management platforms.
Fig. 1 shows the cluster architecture of a prior-art container cloud management system. Taking the Kubernetes container management platform as an example, in fig. 1 its cluster architecture mainly comprises a management Node Master, working Nodes, and a Storage node.
In general, the management Node Master provides: (1) an API Server, served by the kube-apiserver process, which is the cluster's API interface and the sole entry point for adding, deleting, modifying, and querying all resources; (2) a Scheduler, served by the kube-scheduler process, which is responsible for scheduling cluster resources, such as binding a Pod (the smallest management element in Kubernetes) to a working Node; (3) a Controller Manager, served by the kube-controller-manager process, which is the cluster's automatic management and control center, responsible for managing and automatically repairing working Nodes, Pod replicas, service endpoints, namespaces, service accounts, resource quotas, and the like, ensuring the cluster stays in the expected working state; (4) etcd, serving as the Storage node, a cluster distributed storage database responsible for storing persistent data.
The processes in the management Node Master implement management functions for the whole cluster, such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction, all completed automatically.
Further, the working Node provides: (1) the kubelet process, which manages Pods, containers, images, volumes, and the like, and manages the node itself; (2) the kube-proxy process, which provides network proxying and load balancing and communicates with the kube-apiserver process in the management Node Master; (3) the docker engine process, which is responsible for the node's container management work.
These processes in the working Node are responsible for the creation, starting, monitoring, restarting, and destruction of Pods, and implement a software-mode load balancer.
Fig. 2 shows the business flow of a prior-art container cloud management system. Taking the Kubernetes container management platform as an example, as shown in fig. 2, the service flow of the existing Kubernetes container management platform mainly includes:
(1) The working Node access flow is as follows:
in the access process, a working Node to be connected first starts its kubelet process service; through the automatic registration mechanism of the kubelet process service, the working Node actively registers with a management Node Master in the container cloud management system. The management Node Master receives the working Node's registration information, writes it into its etcd, and brings the successfully registered working Node into cluster scheduling.
Further, the kube-controller-manager process in the management Node Master monitors the status of the registered working Node in real time.
Further, after the kubelet process completes registration, it reports its working Node's state information to the management Node Master at regular intervals; the management Node Master writes the received state information into its etcd and carries out corresponding analysis and processing based on the working Node's state information.
Further, the kubelet process of the working Node also monitors the /registry/pods/$nodename and /registry/pods directories in the management Node Master's etcd through the API Server by means of the Watch mechanism (observation notification mechanism), so all operations on Pods are monitored by the kubelet process.
Normally, the working Node responds to the watched events as follows: (1) if a Pod newly bound to this working Node is found, it creates the Pod according to the Pod manifest; (2) if a modification to a Pod created on this working Node is found, it modifies the Pod accordingly; (3) if it finds that a Pod on this working Node needs to be deleted, it executes the corresponding Pod deletion operation.
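The three responses above amount to a small dispatch over watch events. A minimal sketch follows; the event shape and the in-memory `pods` dict are assumptions of this sketch, not the kubelet's real data structures:

```python
def handle_pod_event(event: dict, pods: dict) -> str:
    """React to a watched Pod event the way the loop above describes:
    create, modify, or delete the local Pod."""
    kind = event["type"]
    name = event["pod"]["name"]
    if kind == "ADDED":
        pods[name] = dict(event["pod"])    # (1) create per the Pod manifest
        return f"created {name}"
    if kind == "MODIFIED":
        pods[name].update(event["pod"])    # (2) apply the modification
        return f"modified {name}"
    if kind == "DELETED":
        pods.pop(name, None)               # (3) tear the Pod down
        return f"deleted {name}"
    raise ValueError(f"unexpected event type {kind!r}")
```

A kubelet-like agent would run this dispatch inside its watch loop, once per delivered event.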
(2) An application deployment process:
step S1: a user uses a client on a working Node to submit a Pod creation request (data in JSON or YAML format is supported) to the management Node Master through kubectl (a Kubernetes client that can operate Kubernetes directly) or the RESTful API interface;
step S2: the kube-apiserver process in the management Node Master receives and processes the Pod creation request submitted by the user and stores the original template data into the etcd in the management Node Master;
step S3: the kube-scheduler process in the management Node Master discovers that a new, unbound Pod has been created and tries to allocate a working Node for it;
step S4: the kube-scheduler process filters and screens the working Nodes that meet the conditions (such as the CPU and memory required by the Pod) according to the scheduling rules, selects the highest-scoring working Node based on the current running state of the screened working Nodes, binds the Pod to that working Node, and then writes the binding result into the etcd in the management Node Master;
step S5: the kubelet process on the working Node discovers and acquires the newly created Pod task through the Watch mechanism (observation notification mechanism), and then calls the Docker API (the external operation interface provided by the docker engine process) to create and start the Pod;
step S6: the kubelet process periodically reports the creation result state and subsequent running states to the management Node Master;
step S7: meanwhile, the kube-controller-manager process in the management Node Master monitors the state of the resources in the cluster in real time, judges and processes them according to the current state, and tries to restore the resource state to the expected state.
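The filter-then-score logic of step S4 can be sketched like this; the resource model and the free-resource scoring rule are simplifications invented for this sketch, whereas the real kube-scheduler combines many predicates and priority functions:

```python
def schedule_pod(pod: dict, nodes: list):
    """Filter the nodes that can satisfy the Pod's CPU/memory request,
    then bind to the highest-scoring one (here: most free resources).
    Returns the chosen node name, or None when no node qualifies."""
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]
    ]
    if not feasible:
        return None                     # no node meets the conditions
    best = max(feasible, key=lambda n: n["free_cpu"] + n["free_mem"])
    return best["name"]                 # the binding written back to etcd
```

In the flow above, the returned binding would be persisted to etcd, after which the chosen node's kubelet picks the Pod up via the Watch mechanism.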
Table 1 below shows the specific format of the data sent by kubectl in the prior art:
[Table 1 is rendered as an image in the original record and is not reproduced here.]
TABLE 1
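Since Table 1 survives only as an image, here is a hypothetical example of the kind of JSON payload a kubectl-style client submits when creating a Pod; the apiVersion/kind/metadata/spec structure follows the public Kubernetes Pod schema, while the concrete names and values are invented and are not the table's actual contents:

```python
import json

# Hypothetical Pod-creation payload; the structure follows the Kubernetes
# Pod schema, but every name and value below is illustrative.
pod_request = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-app", "labels": {"app": "demo"}},
    "spec": {
        "containers": [{
            "name": "demo",
            "image": "nginx:latest",
            "resources": {"requests": {"cpu": "500m", "memory": "128Mi"}},
        }],
    },
}

# Serialize as a client would before submitting it to the API server.
payload = json.dumps(pod_request)
```

The kube-apiserver would validate such a payload and persist the original template data into etcd, as described in step S2 above.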
(3) And (3) application deletion flow:
step S1: a user uses a client to submit a Pod deletion request to the management Node Master through kubectl or the RESTful API interface;
step S2: the kube-apiserver process in the management Node Master receives and processes the Pod deletion request submitted by the user, queries the etcd in the management Node Master for the resource object matching the request, generates a deletion task, and issues it to the kubelet process on the working Node;
step S3: the kubelet process on the working Node calls the Docker API (the external operation interface provided by the docker engine process) to delete the Pod's containers, clear the Pod's data, and release resources;
step S4: the kubelet process reports the deletion result to the kube-apiserver process in the management Node Master;
step S5: the kube-apiserver process updates the result into the etcd in the management Node Master and cleans up the resource object.
From the specific service flow of the existing container cloud management system above, it can be seen that the whole process requires tight coupling among resources, applications, data, and the platform, which carries potential security hazards: the user has no control over the management rights to their own resources or the operation rights to applications and data, and the platform side can operate the user's resources and applications and read the user's operation records and data. Moreover, if one or more management Node Masters in the container cloud management system crash or are compromised, the use and security of the whole system are affected, and in turn the application and service security of all users.
Nowadays, blockchain technology is increasingly used for its high security. Blockchain technology is a brand-new distributed infrastructure and computing approach that uses a blockchain data structure to verify and store data, a distributed node consensus algorithm to generate and update data, cryptography to secure data transmission and access, and smart contracts composed of automated script code to program and manipulate data. Blockchain technology has a pronounced decentralization characteristic and is open and autonomous, and its information cannot be tampered with. Through a consensus algorithm, it establishes trust and acquires rights among different nodes, ensuring that the data held by all nodes of a distributed cluster are identical and that agreement can be reached on a proposal. Common blockchain consensus algorithms include the Raft consensus algorithm, the Paxos consensus algorithm, Proof of Work (PoW), Proof of Stake (PoS), Delegated Proof of Stake (DPoS), and the like.
Therefore, the block chain technology can effectively improve the safety and consistency of the distributed system. Based on the knowledge, in order to further optimize the business process of the container cloud management system and improve the use safety and reliability of the system, the embodiment of the invention aims to provide an improved container cloud management system based on the block chain technology.
Specifically, the container cloud management system provided in the embodiment of the present invention includes: at least three management node Masters, each of which provides a kube-apiserver process, a kube-scheduler process, and a kube-controller-manager process and is equipped with a clustered distributed storage database etcd; and at least one worker Node, which provides a kubelet process, a kube-proxy process, and a docker engine process.
Further, in the container cloud management system based on blockchain technology according to the embodiment of the present invention, the at least three management node Masters communicate with each other through a network, and each worker Node can communicate with any one of the management node Masters through the network.
Fig. 3 shows the business process of the container cloud management system according to the embodiment of the present invention, which specifically includes:
(1) The working Node access flow is as follows:
Before a user accesses the container cloud management system with their own device as a worker Node, the user must first generate a public/private key account. The public key may be disclosed to everyone, while the private key is kept by the user and must never be leaked; once the private key is leaked, security can no longer be guaranteed.
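As an illustration of the sign/verify contract that such an account enables, the sketch below implements a toy hash-based (Lamport-style) one-time signature using only Python's standard library. The function names and the scheme itself are illustrative assumptions, not part of the patent; a real deployment would use an established algorithm such as Ed25519 or RSA, and a Lamport key must never sign more than one message.

```python
import hashlib
import secrets

def generate_account():
    """Toy one-time key pair (Lamport scheme): the private key is 256
    pairs of random 32-byte secrets; the public key is the SHA-256 hash
    of each secret, and may be disclosed to everyone."""
    priv = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pub = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in priv]
    return priv, pub

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(priv, message: bytes):
    # Reveal one secret of each pair, chosen by the message-hash bit.
    return [priv[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pub, message: bytes, signature) -> bool:
    # Each revealed secret must hash to the matching public-key value.
    return all(hashlib.sha256(sig_i).digest() == pub[i][bit]
               for i, (bit, sig_i) in enumerate(zip(_bits(message), signature)))
```

A request carrying the public key, the signature, and the original template data can then be checked by any management node Master without access to the private key.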
In the access process, each worker Node to be admitted first starts its kubelet process. By means of the kubelet process's automatic registration mechanism, the worker Node actively registers itself with one management node Master in the container cloud management system, carrying data such as its host name, kernel version, operating system information, and docker version. After the management node Master receives the worker Node's registration information, it writes the information into its etcd and brings the successfully registered worker Node into cluster scheduling.
Further, the kube-controller-manager process in the management node Master monitors the status of the registered worker Node in real time.
Further, after the kubelet process completes registration, it reports the state information of its worker Node to the management node Master at regular intervals; the management node Master likewise writes the received state information into etcd and performs the corresponding analysis and processing based on the worker Node's state.
Further, by means of the Watch mechanism (an observe-and-notify mechanism), the kubelet process in the worker Node simultaneously listens, through the API Server of the management node Master, to the /registry/pods/$nodename and /registry/pods directories in the etcd of the management node Master, so that all operations on Pods are monitored by the kubelet process.
Normally, the worker Node responds to the watched events as follows: (1) if a new Pod is found to be bound to this worker Node, the Pod is created according to the requirements of the Pod manifest; (2) if a Pod already created on this worker Node is found to have been modified, the corresponding modification is applied to it; (3) if a Pod on this worker Node needs to be deleted, the corresponding delete operation is executed to remove it.
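The three responses above amount to dispatching on the type of watch event received. A minimal sketch (the event-type strings and the `PodStore` shape are assumptions for illustration, not the system's actual API):

```python
class PodStore:
    """Toy stand-in for the kubelet's local view of the Pods bound to
    this worker Node, updated from watch events."""

    def __init__(self):
        self.pods = {}

    def handle_event(self, event_type: str, pod_name: str, spec=None):
        if event_type == "ADDED":           # (1) new Pod bound to this Node
            self.pods[pod_name] = spec      # create it per the Pod manifest
        elif event_type == "MODIFIED":      # (2) an existing Pod was changed
            if pod_name in self.pods:
                self.pods[pod_name] = spec  # apply the modification
        elif event_type == "DELETED":       # (3) the Pod should be removed
            self.pods.pop(pod_name, None)   # delete it and free its resources
```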
In particular, after any worker Node in the container cloud management system according to the embodiment of the present invention accesses the system through one management node Master via the above access process, its registration data is synchronized to all other management node Masters in the system through the kube-apiserver process and etcd of that management node Master. A worker Node therefore does not need to be tied to any specific management node Master, and can flexibly choose any management node Master in the network to access.
Furthermore, although the worker Node joins through one management node Master, once it has accessed the container cloud management system it can communicate with all other management node Masters over the network, and the user can deploy applications in the system through any management node Master based on that worker Node.
(2) An application deployment process:
step S1: when application deployment is needed, the user uses a client on a worker Node that has accessed the container cloud management system to submit a Pod creation request (data in JSON or YAML format is supported) to one management node Master, either through kubectl (Kubernetes' built-in command-line client, which can operate Kubernetes directly) or through the RESTful API. The Pod creation request carries the user's public key, signature information signed with the user's private key, and the original template data used to create the Pod;
step S2: the kube-apiserver process in the management node Master receives, processes, and parses the Pod creation request submitted by the user;
step S3: the kube-apiserver process verifies the user signature on the Pod creation request using the user's public key; if signature verification fails, creation of the Pod is refused and the user is notified that Pod creation has failed;
step S4: if the kube-apiserver process verifies the user signature successfully, it attempts a write operation that writes the Pod creation request into the etcd of the management node Master; correspondingly, etcd also distributes the Pod creation request to the etcd instances of all other management node Masters according to its internal consensus algorithm;
step S5: if the write operation succeeds, the kube-scheduler process in the management node Master discovers the newly generated, unbound Pod and tries to allocate a suitable worker Node to it;
step S6: the kube-scheduler process filters the worker Nodes that satisfy the conditions (such as the CPU, memory, and other resources required by the Pod) according to the scheduling rules, scores the filtered worker Nodes according to their current running state, selects the highest-scoring worker Node, binds the Pod to it, and then writes the binding result into the etcd of the management node Master;
step S7: the kubelet process in the worker Node discovers and acquires the new Pod creation task through the Watch mechanism (observe-and-notify);
step S8: the kubelet process receives the Pod creation task and verifies the user signature using the user's public key; if signature verification fails, it refuses to create the Pod and notifies the user that Pod creation has failed;
step S9: if the kubelet process verifies the user signature successfully, it calls the Docker API (the external operation interface provided by the docker engine process) to create and start the Pod, and notifies the user that the Pod was created successfully;
step S10: the kubelet process periodically reports the creation result state and the subsequent running state to the management node Master;
step S11: the kube-controller-manager process in the management node Master monitors the state of the resources in the cluster in real time, judges and processes them according to their current state, and tries to restore the resource state to the expected state.
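The filter-then-score scheduling of step S6 can be sketched as follows; the dictionary shapes and the free-resource scoring rule are illustrative assumptions, not the patent's exact scheduling rule:

```python
def schedule(pod_request, nodes):
    """Pick a worker Node for a Pod: filter candidates by resource fit,
    then bind to the highest-scoring one."""
    # Filtering: keep Nodes with enough free CPU and memory for the Pod.
    feasible = [n for n in nodes
                if n["cpu_free"] >= pod_request["cpu"]
                and n["mem_free"] >= pod_request["mem"]]
    if not feasible:
        return None  # no Node satisfies the request; scheduling fails

    # Scoring (assumed rule): prefer the Node with the most headroom
    # left over after the Pod's resources are subtracted.
    def score(n):
        return (n["cpu_free"] - pod_request["cpu"]) + (n["mem_free"] - pod_request["mem"])

    return max(feasible, key=score)["name"]
```

In the real system the chosen binding would then be written into etcd, and the bound worker Node's kubelet would pick up the creation task via its watch.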
Specifically, in step S4 above, when the kube-apiserver process tries to write the Pod creation request into the etcd of the management node Master, etcd performs a secondary verification of data consistency and signature data across the etcd instances in all other management node Masters in the system through the Raft consensus algorithm.
Specifically, the etcd instances in all management node Masters of the container cloud management system in the embodiment of the present invention are collectively regarded as an etcd cluster, and the secondary verification is as follows: (1) if more than 1/2 of the etcd instances in the etcd cluster pass the secondary verification, the Pod creation request is successfully written into every etcd in the etcd cluster, after which the etcd in the management node Master returns a write-success message to the kube-apiserver process and step S5 continues; (2) if 1/2 or more of the etcd instances in the etcd cluster fail the secondary verification, the consensus on the data and the operation fails, the etcd cluster refuses to write the Pod creation request sent by the kube-apiserver process, and the etcd in the management node Master returns a write-failure message to the kube-apiserver process, prompting the user that Pod creation has failed.
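The majority rule of this secondary verification reduces to a simple vote count over the etcd cluster members. A hedged sketch (in the real system the decision is made inside the Raft protocol, not by a standalone function; the boolean-vote representation is an illustrative assumption):

```python
def etcd_cluster_write(votes):
    """Decide whether a write is accepted by the etcd cluster.

    `votes` holds one boolean per etcd member: True if that member's
    secondary verification (data consistency + signature check) passed.
    Per the text, the write succeeds only when strictly more than half
    of the members pass; exactly half or fewer is a consensus failure.
    """
    passed = sum(votes)
    if passed > len(votes) / 2:
        return "write accepted"   # replicated to every etcd in the cluster
    return "write rejected"       # consensus failed; Pod creation fails
```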
Based on the above: in step S3, if the kube-apiserver process fails to verify the user signature, the Pod creation request is rejected outright; in step S4, if a write-failure message is returned to the kube-apiserver process, it means the kube-apiserver process has no right to write the Pod creation request into the etcd cluster, and in that case the kube-apiserver process likewise refuses to accept the Pod creation request.
Preferably, although step S4 uses the Raft consensus algorithm, those skilled in the art will understand that other suitable blockchain consensus algorithms can also be applied in step S4.
Table 2 below shows the specific format of the data sent by kubectl in the embodiment of the present invention:
(table content provided as an image in the original publication)
TABLE 2
(3) Application deletion process:
step S1: the user uses a client to submit a Pod deletion request to one management node Master in the container cloud management system through kubectl or the RESTful API, the Pod deletion request carrying the user's public key and signature information signed by the user using a private key;
step S2: the kube-apiserver process in the management node Master receives, processes, and parses the Pod deletion request submitted by the user;
step S3: the kube-apiserver process verifies the user signature on the Pod deletion request using the user's public key; if signature verification fails, Pod deletion is refused and the user is notified that Pod deletion has failed;
step S4: if the kube-apiserver process verifies the user signature successfully, it queries the matching resource object in the etcd of the container cloud management system and generates a Pod deletion task; the etcd cluster likewise verifies the data consistency and signature data of the Pod deletion task through the Raft consensus algorithm, specifically: (1) if more than 1/2 of the etcd instances in the etcd cluster pass the verification, a verification-success response is returned to the kube-apiserver process, which sends the Pod deletion task to the kubelet process in the corresponding worker Node; (2) if 1/2 or more of the etcd instances in the etcd cluster fail the verification, the Pod deletion task is rejected outright and the user is prompted that Pod deletion has failed;
step S5: the kubelet process in the corresponding worker Node receives the Pod deletion task and verifies the user signature again using the user's public key; if signature verification fails, it refuses to delete the corresponding Pod and notifies the user that Pod deletion has failed;
step S6: if the kubelet process verifies the user signature successfully, it calls the Docker API (the external operation interface provided by the docker engine process) to delete the corresponding Pod, cleans up the Pod's related data, and releases the resources;
step S7: the kubelet process reports the deletion result to the kube-apiserver process;
step S8: the kube-apiserver process writes the Pod deletion result into the etcd cluster, and the etcd cluster once again verifies data consistency and signature data through the Raft consensus algorithm, specifically: (1) if more than 1/2 of the etcd instances in the etcd cluster pass the verification, the etcd cluster accepts the write of the Pod deletion result, responds to the kube-apiserver process that the write succeeded, and prompts the user that Pod deletion was successful; (2) if 1/2 or more of the etcd instances in the etcd cluster fail the verification, the write of the Pod deletion result is rejected outright and the user is prompted that Pod deletion has failed.
Preferably, although step S8 uses the Raft consensus algorithm, those skilled in the art will understand that other suitable blockchain consensus algorithms can also be applied in step S8.
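The deletion flow above checks the request at three points — the kube-apiserver's signature check, the etcd cluster's consensus, and the kubelet's re-verification — and any failure aborts the operation. A compact sketch of that pipeline (all callables and shapes are illustrative stand-ins, not the system's real interfaces):

```python
def delete_pod(request, apiserver_verify, etcd_votes, kubelet_verify):
    """Trace the deletion flow's three checkpoints; any failure aborts."""
    if not apiserver_verify(request):            # step S3: apiserver checks the signature
        return "rejected: bad signature"
    if sum(etcd_votes) <= len(etcd_votes) / 2:   # step S4: Raft-style majority check
        return "rejected: consensus failed"
    if not kubelet_verify(request):              # step S5: kubelet re-checks the signature
        return "rejected: kubelet verification failed"
    return "deleted"                             # steps S6-S8: Pod removed, result recorded
```

The point of the triple check is that neither a single compromised management node Master nor a forged task reaching a worker Node can delete a user's Pod on its own.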
(4) The user log auditing function is as follows:
all the operation logs in the etcd in the container cloud management system adopt an additional mode and do not support log deletion operation, so that the individual operation logs can be checked and tracked at any time to complete related log auditing operation.
The container cloud management system provided by the present invention is improved on the basis of blockchain technology and consensus algorithms, and it effectively addresses the security of resources, applications, and data: the user holds the management authority over their own resources, the platform side cannot operate the user's applications or read the user's operation records and data, and the user can conveniently use an independent public cloud platform provided by a third party without worrying about application and data security.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (33)

1. A container cloud management system based on a blockchain, comprising a plurality of management node Masters, wherein each management node Master provides a clustered distributed storage database etcd, each management node Master communicates with the other management node Masters through a network, and each management node Master receives and processes an access request of a worker Node, the access request requesting that the worker Node join the container cloud management system; before the worker Node sends the access request to the management node Master, the user generates a public/private key account for the access request based on the worker Node; wherein:
a user can send an application creation request to any one of the management nodes Master through the accessed working Node; the application creation request carries a user public key, signature information signed by the user using a private key and original template data used for creating the application;
the management node Master can receive and respond to the application creation request, and the management node Master responds to the application creation request and performs user signature verification on the application creation request by using a user public key;
if the user signature verification fails, refusing to create the application, and prompting the user that the application creation fails; if the user signature is verified successfully, the Master executes a write-in operation of writing the application creation request into the etcd of the cluster distributed storage database;
the write operation comprises the step of distributing the application creation request to all other management nodes Master;
the write operation further includes that all cluster distributed storage databases etcd in the container cloud management system use a block chain consensus algorithm to verify data consistency and signature data based on the application creation request, where the verification specifically includes: if the number of the cluster distributed storage databases etcd passing the verification in the container cloud management system is more than half, writing the application creation request into each cluster distributed storage database etcd in the container cloud management system; otherwise, each cluster distributed storage database etcd in the container cloud management system refuses to write the application creation request, and prompts a user that the application creation fails.
2. The system according to claim 1, wherein the access request includes that the worker Node registers its own information with the Master.
3. The system according to claim 2, wherein the self information includes a host name, a kernel version, operating system information, and a docker version of the work Node.
4. The system according to claim 2 or 3, wherein the management Node Master processing the access request of the working Node includes the management Node Master writing the information of the working Node into its clustered distributed storage database etcd.
5. The system according to any of claims 1 to 4, wherein the processing of the access request of the worker Node by the management Node Master comprises the incorporation of the worker Node into a cluster schedule by the management Node Master.
6. The system according to claim 5, wherein the management Node Master monitors the status of the worker Node incorporated in the cluster scheduling in real time.
7. The block chain-based container cloud management system according to claim 6, wherein the real-time monitoring specifically comprises: the working Node sends the state information to the Master Node at regular time, and the Master Node writes the received state information into the etcd database, and analyzes and processes the state information.
8. The system according to any one of claims 1 to 7, wherein after the access request, the Master synchronizes self information and state information of the accessed working Node to all other masters in the system.
9. The system according to any of claims 1 to 8, wherein after the access request, the accessed working nodes Node can communicate with each of the plurality of management nodes Master through a network.
10. The system according to claim 9, wherein the working Node communicates with each of the plurality of management nodes masters, and the working Node monitors an operation of the management nodes masters for an application.
11. The system according to claim 10, wherein the operations for the application comprise: create applications, modify applications, and delete applications.
12. The system according to claim 11, wherein when the operation for the application is creating an application, the worker Node creates the application according to a creation requirement; when the operation aiming at the application is to modify the application, the working Node modifies the application according to modification requirements; when the operation for the application is to delete the application, the working Node deletes the application according to a deletion requirement.
13. The system according to any one of claims 10 to 12, wherein the application is Pod-based, and the operations for the application include: create Pod, modify Pod, and delete Pod.
14. The system according to claim 13, wherein the management Node Master screens qualified work nodes for creating applications.
15. The system according to claim 14, wherein the management Node Master filters the eligible work nodes Node to create an application specifically as: and the management Node Master screens the working nodes meeting the conditions according to the scheduling rules.
16. The system according to claim 15, wherein the management node Master writes the filtering result into its cluster distributed storage database etcd.
17. The system according to any of claims 14 to 16, wherein the worker Node acquires and processes the create application through a Watch mechanism.
18. The system according to claim 17, wherein the obtaining the creation application comprises: the working Node obtains the user public key carried by the application creation request, the signature information signed by the user using the private key, and the original template data used for creating the application.
19. The system according to claim 18, wherein the processing the create application comprises: the working Node uses the user public key to verify the user signature based on the application creation request, if the user signature is verified successfully, the application is created, and the user application is prompted to be created successfully; and if the user signature verification fails, refusing to create the application, and prompting the user that the application is failed to create.
20. The system according to claim 19, wherein after the application is successfully created, the worker Node periodically sends a creation result status and a subsequent operation status to the Master.
21. The system according to any one of claims 1 to 20, wherein the management node Master is capable of monitoring the state of system resources in real time, judging the processing according to the current state, and restoring the resource state to an expected state.
22. The system according to any of claims 1 to 21, wherein a user can send an application deletion request to any of the plurality of management nodes Master through the working Node to which the user is connected.
23. The system according to claim 22, wherein the application deletion request carries the user's public key and signature information signed by the user using a private key.
24. The system according to claim 23, wherein the management node Master is capable of receiving and responding to the application deletion request.
25. The system according to claim 24, wherein the management node Master responds to the application deletion request by performing user signature verification on the application deletion request using a user public key, and if the user signature verification fails, refusing to delete the application and prompting the user that the application deletion fails.
26. The system according to claim 25, wherein if the user signature is successfully verified, the Master queries a target application resource object in all clustered distributed storage databases etcd in the system.
27. The system according to claim 26, wherein if the user signature is successfully verified, the management node Master distributes the application deletion request to all other management node masters.
28. The system according to claim 27, wherein all clustered distributed storage databases etcd in the system perform verification of data consistency and signature data based on the application deletion request by using a block chain consensus algorithm, and the verification specifically comprises: if the number of the cluster distributed storage databases etcd passing the verification in the container cloud management system is more than half, the management Node Master sends the application deletion request to the corresponding working Node; otherwise, the management node Master refuses the application deletion request and prompts the user that the application deletion fails.
29. The system according to claim 28, wherein the corresponding work Node receives and processes the application deletion request, and the processing of the application deletion request specifically includes: the working Node uses the user public key to verify the user signature based on the application deletion request, if the user signature verification fails, the application deletion is refused, and the user application deletion failure is prompted; and if the user signature is successfully verified, deleting the application.
30. The system according to claim 29, wherein after an application is deleted, the corresponding work Node sends a deletion result to the Master.
31. The system according to claim 30, wherein the management node Master performs a write operation of writing the deletion result into the clustered distributed storage database etcd.
32. The system according to claim 31, wherein the write operation further includes verification of data consistency and signature data of the deletion result by all clustered distributed storage databases etcd in the system using a block chain consensus algorithm, the verification specifically comprising: if more than half of the clustered distributed storage databases etcd in the container cloud management system pass the verification, the deletion result is written into each clustered distributed storage database etcd in the container cloud management system and the user is prompted that the application deletion succeeded; otherwise, each clustered distributed storage database etcd in the container cloud management system refuses to write the deletion result and prompts the user that the application deletion failed.
33. The system according to any one of claims 1 to 32, wherein the block chain consensus algorithm is a Raft consensus algorithm.
CN201880097738.0A 2018-09-29 2018-09-29 Container cloud management system based on block chain technology Active CN113169952B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/108575 WO2020062131A1 (en) 2018-09-29 2018-09-29 Container cloud management system based on blockchain technology

Publications (2)

Publication Number Publication Date
CN113169952A CN113169952A (en) 2021-07-23
CN113169952B true CN113169952B (en) 2022-12-02

Family

ID=69952642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880097738.0A Active CN113169952B (en) 2018-09-29 2018-09-29 Container cloud management system based on block chain technology

Country Status (2)

Country Link
CN (1) CN113169952B (en)
WO (1) WO2020062131A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580930A (en) * 2020-05-09 2020-08-25 山东汇贸电子口岸有限公司 Native cloud application architecture supporting method and system for domestic platform
CN112333004A (en) * 2020-10-13 2021-02-05 北京京东尚科信息技术有限公司 Container cluster gene-based proprietary cloud streaming type reconstruction and verification method and device
US11575499B2 (en) 2020-12-02 2023-02-07 International Business Machines Corporation Self auditing blockchain
CN112634058A (en) * 2020-12-22 2021-04-09 无锡井通网络科技有限公司 Data mutual trust and mutual sharing and intercommunication platform based on block chain
CN112995335B (en) * 2021-04-07 2022-09-23 上海道客网络科技有限公司 Position-aware container scheduling optimization system and method
CN113296711B (en) * 2021-06-11 2022-10-28 中国科学技术大学 Method for optimizing distributed storage delay in database scene
CN113312429B (en) * 2021-06-22 2023-01-17 工银科技有限公司 Intelligent contract management system, method, medium, and article in a blockchain
CN113656148B (en) * 2021-08-20 2024-02-06 北京天融信网络安全技术有限公司 Container management method, device, electronic equipment and readable storage medium
CN114625320B (en) * 2022-03-15 2024-01-02 江苏太湖慧云数据系统有限公司 Hybrid cloud platform data management system based on characteristics
CN114968092B (en) * 2022-04-28 2023-10-17 安超云软件有限公司 Method for dynamically supplying storage space based on QCOW2 technology under container platform and application
CN115550375B (en) * 2022-08-31 2024-03-15 云南电网有限责任公司信息中心 System, method and equipment for realizing block chain light weight based on containerization technology
CN117714386A (en) * 2022-09-06 2024-03-15 中兴通讯股份有限公司 Distributed system deployment method, distributed system deployment configuration method, distributed system deployment system, distributed system deployment equipment and medium
CN115189995B (en) * 2022-09-07 2022-11-29 江苏博云科技股份有限公司 Multi-cluster network federal communication establishing method, equipment and storage medium in Kubernetes environment
CN115499442B (en) * 2022-11-15 2023-01-31 四川华西集采电子商务有限公司 Rapid deployment type cloud computing architecture based on container arrangement
CN115834595A (en) * 2022-11-17 2023-03-21 浪潮云信息技术股份公司 Management method and system of Kubernetes control assembly

Citations (2)

Publication number Priority date Publication date Assignee Title
CN106027643A (en) * 2016-05-18 2016-10-12 无锡华云数据技术服务有限公司 Resource scheduling method based on Kubernetes container cluster management system
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2016197069A1 (en) * 2015-06-05 2016-12-08 Nutanix, Inc. Architecture for managing i/o and storage for a virtualization environment using executable containers and virtual machines
US10394663B2 (en) * 2016-12-16 2019-08-27 Red Hat, Inc. Low impact snapshot database protection in a micro-service environment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN106027643A (en) * 2016-05-18 2016-10-12 无锡华云数据技术服务有限公司 Resource scheduling method based on Kubernetes container cluster management system
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters

Also Published As

Publication number Publication date
CN113169952A (en) 2021-07-23
WO2020062131A1 (en) 2020-04-02

Similar Documents

Publication Publication Date Title
CN113169952B (en) Container cloud management system based on block chain technology
US11709735B2 (en) Workflows for automated operations management
US11615195B2 (en) Systems and methods for providing multi-node resiliency for blockchain peers
EP3271819B1 (en) Executing commands within virtual machine instances
CN114787781B (en) System and method for enabling high availability managed failover services
US20180004503A1 (en) Automated upgradesystem for a service-based distributed computer system
WO2018133721A1 (en) Authentication system and method, and server
EP2893683A1 (en) Ldap-based multi-customer in-cloud identity management system
CN106911648B (en) Environment isolation method and equipment
US9264339B2 (en) Hosted network management
CN111737104A (en) Block chain network service platform, test case sharing method thereof and storage medium
CN111311254A (en) Service processing method, device and system based on block chain
CN112035062B (en) Migration method of local storage of cloud computing, computer equipment and storage medium
CN111683164A (en) IP address configuration method and VPN service system
CN116962260A (en) Cluster security inspection method, device, equipment and storage medium
CN117131493A (en) Authority management system construction method, device, equipment and storage medium
TW201828655A (en) Environment isolation method and device resolves the problem of high complexity and incomplete isolation carried at environmental isolation during the RPC call process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant