WO2020113670A1 - Anti-split-brain OpenStack virtual machine high-availability system - Google Patents

Anti-split-brain OpenStack virtual machine high-availability system

Info

Publication number
WO2020113670A1
Authority
WO
WIPO (PCT)
Prior art keywords
management
computing node
virtual machine
module
node device
Prior art date
Application number
PCT/CN2018/121655
Other languages
English (en)
French (fr)
Inventor
张傲
吴江
田松
Original Assignee
武汉烽火信息集成技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 武汉烽火信息集成技术有限公司
Priority to BR112020004407-5A (published as BR112020004407A2)
Priority to PH12020550045A (published as PH12020550045A1)
Publication of WO2020113670A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Definitions

  • The invention relates to the field of cloud computing, and in particular to an anti-split-brain OpenStack virtual machine high-availability system; it belongs to the field of computing.
  • HA (High Availability): virtual machine high availability.
  • The Nova module, which is responsible for compute management, provides only the Evacuate interface for evacuating virtual machines to other nodes when a host fails; the module itself lacks HA scheduling and management capability.
  • Masakari, the open-source sub-project dedicated to HA, has only just graduated from OpenStack incubation to official-project status. Its maturity is still low: it can complete HA recovery only in a few scenarios and cannot support commercial use.
  • The invention provides an anti-split-brain OpenStack virtual machine high-availability system, characterized by comprising management-end devices, a management network, computing node devices, and a shared storage device,
  • wherein at least two management-end devices communicate over the management network to form a management cluster,
  • the management-end devices and the computing node devices are communicatively connected through the management network,
  • and the computing node devices are connected to the shared storage device.
  • Each management-end device includes:
  • a Nova control module, comprising Nova's native virtual machine (VM) management processes, used to manage the VM life cycle;
  • a cluster management module, used to collect cluster health information; and
  • a high-availability module, used for the high-availability management of all computing node devices.
  • The high-availability module runs a high-availability management method that includes the following operations:
  • Operation A-1: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation A-2;
  • Operation A-2: check the status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation A-3;
  • Operation A-3: based on the abnormal status reported by each computing node device over the management network, decide node by node whether handling is needed; if not, end abnormality handling for that node and return to operation A-2; otherwise go to operation A-4;
  • Operation A-4: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud-computing VM programs on that node from running, then end; otherwise go to operation A-5;
  • Operation A-5: issue a Fencing (isolation) request to the computing node device whose connected shared storage device is in a normal state; Fencing means killing, i.e. isolating and shutting down, that node's cloud-computing VM programs;
  • Operation A-6: issue a command to the Nova control module to trigger the cloud-computing VM programs that ran on that computing node device to resume running.
  • In addition to the installed cloud-computing VM programs, each computing node device also has:
  • a Nova-compute module, used to respond directly to the management processes of the management-end device to control the running state of the VMs, and to communicate with the Hypervisor API;
  • a Libvirt management module, used to provide on KVM the management process of the standard Hypervisor API interface;
  • a Lock management module, cooperating with the Libvirt management module, used to update and monitor the lock heartbeat on the shared storage device; and
  • a high-availability computing-node module, used at least to report the lock heartbeat to the management-end device.
  • The high-availability computing-node module runs a method that includes the following operations:
  • Operation C-1: while a VM continuously updates and stores the lock heartbeat, no handling is needed as long as the writes succeed; once a lock-heartbeat write fails, go to operation C-2;
  • Operation C-2: the Lock management module reports the storage-abnormality event to the management-end device and waits for it to return a handling result;
  • Operation C-3: if the management-end device returns the handling result within the prescribed time, go to operation C-5; otherwise go to operation C-4;
  • Operation C-4: the Lock management module performs the Fencing (isolation) operation, that is, kills or isolates the cloud-computing VM programs of that computing node device;
  • Operation C-5: the Lock management module decides, according to the handling result returned by the management-end device, whether Fencing is needed.
  • After the management-end device issues a Fencing request to a computing node device whose connected shared storage device is in a normal state, the high-availability module also runs the following operations:
  • Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2;
  • Operation B-2: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation B-3;
  • Operation B-3: check the network status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation B-4;
  • Operation B-4: based on the abnormal status reported by each computing node device over the management network, decide whether handling is needed; if not, go to operation B-6; otherwise go to operation B-5;
  • Operation B-5: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed, go to operation B-6, then end; otherwise go to operation B-7;
  • Operation B-6: in scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device;
  • Operation B-7: in scenarios where Fencing is needed, issue a Fencing request to the corresponding computing node device.
  • The recovery process after the Lock management module's process restarts includes the following operations:
  • Operation D-1: when the Libvirt management module starts, it registers through the Lock management module and acquires the lock heartbeat; if registration fails, go to operation D-2;
  • Operation D-2: once lock-heartbeat registration fails, kill (shut down) the cloud-computing VM programs of that computing node device;
  • Operation D-3: the Libvirt management module records all the computing node devices whose cloud-computing VM programs were killed, writing them into an isolation log file;
  • Operation D-4: periodically check the isolation log file; if an update is found, go to operation D-5;
  • Operation D-5: report the isolation log files of all computing node devices to the management-end device; if the report fails, this round of handling ends and the report is left for next time; otherwise, after the report reaches the management-end device, the management-end device issues instructions for recovery.
  • After the report reaches the management-end device, the management-end device performs the following specific operations:
  • Operation D-6: the management-end device receives the isolation log file reported by the agent on the computing node device and decides whether to handle it automatically; for automatic handling go to operation D-8; if automatic handling is not needed, go to operation D-7;
  • Operation D-7: the management-end device raises an alarm and leaves the matter for manual handling;
  • Operation D-8: the management-end device automatically handles the fenced cloud-computing VM programs, calling the Nova interface to bring them back into operation.
  • The shared storage device is managed and run by a CephFS or NFS file-management program;
  • the VM management processes include Nova-api, Nova-conductor, or Nova-scheduler; and
  • the cluster management module includes Etcd or Consul.
  • The management network includes:
  • a management network plane, used to connect to the management-end devices and to provide management services;
  • a storage network plane, used to connect to the back-end shared storage device and to provide storage services; and
  • a service network plane, used to connect to the computing node devices and to provide access services for the cloud-computing VMs.
  • Only when the management, storage, and service network planes of the management network are all normal is the network status reported by a computing node device over the management network in operation A-2 judged normal; otherwise, handling proceeds according to which one or more of the management, storage, and service network planes the abnormal computing node device's specific interruption involves.
  • Likewise, the management network includes:
  • a management network plane, used to connect to the management-end devices and to provide management services;
  • a storage network plane, used to connect to the back-end shared storage device and to provide storage services; and
  • a service network plane, used to connect to the computing node devices and to provide VM access services.
  • Correspondingly, only when the management, storage, and service network planes are all normal is the network status reported by a computing node device over the management network in operation B-3 judged normal; otherwise, the corresponding Fencing handling proceeds according to which one or more of the management, storage, and service network planes the abnormal computing node device's specific interruption involves.
  • The cloud-computing VM program has a VM GuestOS operating system, which performs the following recovery operations after Fencing:
  • Operation E-1: the Qga inside the VM GuestOS and the high-availability computing-node module of the computing node device continuously maintain a lock heartbeat; when the cloud-computing VM program fails, go to operation E-2;
  • Operation E-2: when the high-availability computing-node module receives the report of the abnormal event, it reports it to the management-end device;
  • Operation E-3: after receiving the report of the abnormal event, the management-end device directly calls the Nova interface to bring the cloud-computing VM program back into operation.
  • Such failures include a blue screen, hang, or crash of the computing node device on which the cloud-computing VM program runs.
  • After the report reaches the management-end device, the management-end device performs the following specific operations:
  • Operation D-6: the management-end device receives the isolation log file reported by the agent on the computing node device and decides whether to handle it automatically; for automatic handling go to operation D-8; otherwise go to operation D-7;
  • Operation D-7: the management-end device raises an alarm and leaves the matter for manual handling;
  • Operation D-8: the management-end device automatically handles the fenced cloud-computing VM programs, calling the Nova interface to bring them back into operation.
  • Because the anti-split-brain OpenStack virtual machine high-availability system has a high-availability module that runs the high-availability management method, it detects in real time, through the series of operations A-1 to A-6, the status of the connected computing node devices and the shared storage device. Based on the type of abnormality learned (an abnormality of a computing node device or of the shared storage device, and specifically which of the management, storage, or service network planes of the management network is abnormal), it decides whether to perform a Fencing operation to shut down the cloud-computing VM programs of the abnormal computing node device, thereby ensuring the high availability of the cloud-computing VM programs of the computing node devices in the system.
  • Because the system has a high-availability computing-node module that runs the series of operations C-1 to C-5, it updates and stores the lock heartbeat of the Lock distributed read-write lock in real time, reports write failures during updates to the management-end device in real time, and acts on the management-end device's handling result (whether to fence, i.e. shut down, that node's cloud-computing VM programs). The protection granularity of the Lock distributed read-write lock is thus refined from the host level of the computing node device down to the VM level, enabling concurrent read-write protection for individual virtual machines.
  • FIG. 1 is a schematic structural diagram of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of the high-availability management method of the management-end device of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of Fencing performed by the high-availability module of the management-end device in an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of the high-availability management method of the computing node device in an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the recovery process after the process of the Lock management module of the computing node device restarts, in an embodiment of the present invention; and
  • FIG. 6 is a schematic diagram of the steps of the recovery operation performed after Fencing by the cloud-computing VM program of the computing node device, in an embodiment of the present invention.
  • VM (Virtual Machine): a complete computer system, simulated in software with full hardware-system functionality, that runs in a fully isolated environment.
  • OpenStack: an open-source cloud computing management platform project; a free-software and open-source project licensed under the Apache license, developed and launched by NASA (the National Aeronautics and Space Administration) and Rackspace.
  • Nova: the compute resource management component of the OpenStack project, comprising the nova-api, nova-scheduler, nova-conductor, nova-compute, and other processes.
  • As the core compute controller of the whole OpenStack project, it manages the life cycle of users' virtual machine instances to provide virtual services, such as VM creation, power-on, shutdown, suspend, pause, resize, migration, reboot, and destruction.
  • Nova-api: the externally facing interaction interface of Nova and its message-processing entry point. Administrators can manage the internal infrastructure through this interface and can also provide services to users through it. After a request is received and passes basic validation, it is forwarded through the message queue to the next module.
  • Nova-scheduler: mainly performs the scheduling of the virtual machine instances within Nova. Based on conditions such as CPU architecture, host memory, load, and specific hardware requirements, it schedules each instance onto a suitable node.
  • Nova-conductor: Nova's internal handler for long-running tasks. It mainly tracks and manages time-consuming tasks such as the creation and migration of VM instances, and is also responsible for database access control, preventing Nova-compute from accessing the database directly.
  • Nova-compute: located on the computing nodes; the actual executor of VM life-cycle management operations. It receives requests through the message queue, responds to the management processes of the control nodes, and is directly responsible for all communication with the Hypervisor.
  • Nova controller: a role definition or designation, generally referring to the Nova processes mainly responsible for VM management operations, including Nova-api, nova-conductor, and nova-scheduler. These are generally deployed on separate nodes called management nodes, not together with the computing nodes that host nova-compute.
  • HaStack: one of the two self-developed components that provide the HA function in a client-server (C-S) structure, located on the server side. As the brain of HA management, it manages global HA behavior; its functions are executed by the high-availability module.
  • HaStack-agent: the other of the two self-developed components that provide the HA function in a C-S structure, located on the agent side. It is mainly responsible for mounting the shared directory and reporting the node's heartbeat status and VM Fencing events, and it cooperates with HaStack to manage some HA actions; its functions are run by the high-availability computing-node module.
  • API (Application Programming Interface): components expose their core through an API for external access and invocation.
  • Hypervisor: an intermediate software layer running between the physical server and operating systems. It allows multiple operating systems and applications to share one set of underlying physical hardware and can therefore be regarded as a "meta" operating system in a virtualized environment. As an abstraction of the platform hardware and operating systems, it coordinates access to all physical devices and virtual machines on the server; it is also called a Virtual Machine Monitor. The Hypervisor is at the core of all virtualization technology, and non-disruptive migration of multiple workloads is one of its basic capabilities. When the server boots and runs the Hypervisor, it allocates the appropriate amount of memory, CPU, network, and disk to each virtual machine and loads the guest operating systems of all the virtual machines.
  • KVM (Kernel-based Virtual Machine): an open-source system-virtualization module offering hardware-based full virtualization, mainly providing kernel-based virtual machines.
  • Libvirt: the management process that provides the standard Hypervisor API interface on top of KVM.
  • Lock: run by the Lock management module 304 and deployed in the computing node device 300; it cooperates with the libvirt component and sits architecturally above the shared storage device 400, performing the updating and monitoring of the various lock heartbeats. It provides distributed read-write locks to control and manage concurrent writes to the same storage.
  • The innovative Lock module in this embodiment is a newly devised distributed read-write lock manager modeled on the native Lock function. The native Lock module may also be used as needed, or adapted through secondary development. A minimal sketch of such a per-VM lock heartbeat follows below.
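The sketch is illustrative only: the directory layout, heartbeat interval, and atomic-rename strategy are assumptions for exposition, not the patent's actual implementation.

```python
import os
import time

LOCK_DIR = "/mnt/shared/locks"   # assumed CephFS/NFS mount point
HEARTBEAT_INTERVAL = 5           # assumed seconds between heartbeat writes

def write_lock_heartbeat(vm_uuid: str) -> bool:
    """Update this VM's lock file on shared storage; False means the write failed."""
    path = os.path.join(LOCK_DIR, vm_uuid + ".lock")
    tmp = path + ".tmp"
    try:
        with open(tmp, "w") as f:
            f.write(str(time.time()))
            f.flush()
            os.fsync(f.fileno())   # push the write through to the shared backend
        os.replace(tmp, path)      # atomic rename keeps concurrent readers consistent
        return True
    except OSError:
        return False               # storage unreachable: the heartbeat is broken
```

A monitor can then treat a lock file whose timestamp has not advanced within a few intervals as a stale heartbeat for that single VM, which is what gives the lock its per-VM granularity.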
  • Etcd: a highly available distributed key-value database, implemented in Go, which guarantees strong consistency through a consensus algorithm.
  • As cluster software, it mainly provides two functions: first, forming the three-plane clusters that sense global health status for HA decisions; second, serving as the information bridge between HaStack and HaStack-agent.
  • Ceph: unified distributed storage software designed for excellent performance, reliability, and scalability.
  • CephFS: a distributed file system provided on top of Ceph storage; in this solution it is mainly used to store the lock files of the various Lock modules.
  • NFS (Network File System): allows computers on a network to share files or directories over TCP/IP. An NFS server can allow NFS clients to mount a shared directory on the remote NFS server locally; local NFS client applications can then transparently read and write files located on the remote NFS server, just as if accessing local disk partitions and directories.
  • GuestOS: in virtualization, "guest" refers to the virtualized system, i.e. a virtual machine instance running software such as an operating system; the GuestOS is the operating system used by the virtual machine.
  • QGA: short for Qemu (emulator) Guest Agent; an ordinary application running inside the virtual machine. A serial port is added to the virtual machine for socket communication with the host, providing a way for the host and the VM to interact.
  • The anti-split-brain OpenStack virtual machine high-availability system includes management-end devices 100, a management network 200, computing node devices 300, and a shared storage device 400.
  • At least two management-end devices communicate over the management network to form a management cluster 110.
  • The management-end devices and the computing node devices are communicatively connected through the management network.
  • The computing node devices are connected to the shared storage device.
  • As shown in FIG. 1, the description takes as an example three management-end devices 100 (control nodes A, B, and C in the figure), three computing node devices 300 (computing nodes A, B, and C in the figure), and one shared storage device 400.
  • All three computing node devices 300 are connected to the single shared storage device 400; that is, the three computing node devices 300 share one shared storage device 400.
  • Each management-end device 100 includes a Nova control module 101, a cluster management module 102, and a high-availability module 103.
  • The Nova control module 101 (the Nova controller in the figure) comprises Nova's native VM management processes and manages the life cycle of virtual machines.
  • The cluster management module 102 (Etcd in the figure) collects cluster health information.
  • The high-availability module 103 (FitOS HaStack in the figure) performs high-availability management of all computing node devices.
  • The management network 200 is divided into three network planes: a management network plane 201, a storage network plane 202, and a service network plane 203.
  • The management network plane 201 connects to the management-end devices and provides management services.
  • The storage network plane 202 connects to the back-end shared storage device and provides storage services.
  • The service network plane 203 connects to the computing node devices and provides access services for the cloud-computing VMs.
  • All nodes are connected to the three planes, and the cluster management module 102 (Etcd in the figure) forms a corresponding cluster on each plane. A hedged sketch of this per-plane liveness reporting follows below.
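As a hedged illustration of the three-plane health sensing described above, the sketch below uses the third-party `etcd3` Python client; the key layout (`/ha/<plane>/<node>`) and the TTL value are assumptions made for illustration.

```python
import etcd3

PLANES = ("management", "storage", "service")

def report_planes(node_name: str, ttl: int = 10) -> None:
    """Advertise this node's liveness on each network plane with a TTL'd key."""
    client = etcd3.client(host="127.0.0.1", port=2379)
    lease = client.lease(ttl)                  # key disappears if not refreshed
    for plane in PLANES:
        client.put(f"/ha/{plane}/{node_name}", "alive", lease=lease)

def planes_alive(client: "etcd3.Etcd3Client", node_name: str) -> set:
    """Management side: which planes is this node still reporting on?"""
    return {p for p in PLANES
            if client.get(f"/ha/{p}/{node_name}")[0] is not None}
```

With one such keyspace per plane, HaStack can distinguish, for example, a node that lost only its storage plane from one that went dark on all three.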
  • Besides the installed cloud-computing VM programs, each computing node device 300 also has a Nova-compute module 302, a Libvirt management module 303, a Lock management module 304, and a high-availability computing-node module 305.
  • The Nova-compute module 302 (Nova-compute in the figure) responds directly to the management processes of the management-end devices to control the running state of the cloud-computing VMs, and communicates with the Hypervisor API.
  • The Libvirt management module 303 (Libvirt in the figure) provides on KVM the management process of the standard Hypervisor API interface.
  • The Lock management module 304 (Lock in the figure) cooperates with the Libvirt management module to update and monitor the lock heartbeat on the shared storage device.
  • The high-availability computing-node module 305 (HaStack-agent in the figure) is used at least to report the lock heartbeat to the management-end devices.
  • Nova-controller: run by the Nova control module 101; comprises VM management processes such as Nova-api, Nova-conductor, and Nova-scheduler; deployed in the management-end device 100; mainly used to manage the life cycle of virtual machines.
  • HaStack: run by the high-availability module 103; deployed in the management-end device 100; used to manage global HA behavior.
  • Cluster software: run by the cluster management module 102; candidate software includes Etcd and Consul, and this embodiment uses Etcd. Used together with the HaStack component and deployed in the management-end device 100, it senses the health status of the whole cluster for HA decisions and serves as the information bridge between the high-availability module 103 and the high-availability computing-node module 305.
  • Nova-compute: a native Nova process,
  • run by the Nova-compute module 302 and deployed in the computing node device 300; it responds to the management processes of the control nodes, is the actual executor of VM life-cycle management operations, and is directly responsible for all communication with the Hypervisor.
  • HaStack-agent: used together with the nova-compute process; run by the high-availability computing-node module 305 and deployed in the computing node device 300. It is mainly responsible for mounting shared directories, reporting the node's lock-heartbeat status, and cooperating with the HaStack component to manage some HA actions.
  • Libvirt: deployed in the computing node device 300 and run by the Libvirt management module 303; provides the standard Hypervisor API management process on top of KVM.
  • Lock: run by the Lock management module 304 and deployed in the computing node device 300; it cooperates with the libvirt component and sits architecturally above the shared storage device 400, performing the updating and monitoring of the various lock heartbeats.
  • The innovative Lock module in this embodiment is a newly devised distributed read-write lock manager modeled on the native Lock function. The native Lock module may also be used as needed, or adapted through secondary development.
  • The shared storage system is run by the shared storage device 400; the software programs used include CephFS and NFS, which provide shared file-system storage.
  • The high-availability module 103 runs the high-availability management method, which includes the following operations:
  • Operation A-1: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation A-2.
  • That is, HaStack checks whether the cluster status is normal; if abnormal, it triggers a cluster-abnormality alarm and ends this inspection round; if normal, it proceeds to operation A-2.
  • Operation A-2: check the status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation A-3.
  • That is, HaStack checks the three-plane management-network status reported by each node through HaStack-agent; if all planes are normal, the inspection round ends; otherwise it goes to operation A-3.
  • Operation A-3: based on the abnormal status reported by each computing node device over the management network, decide node by node whether handling is needed; if not, end abnormality handling for that node and return to operation A-2; otherwise go to operation A-4.
  • That is, HaStack handles the abnormal nodes one by one, determining the follow-up handling strategy from which network plane each node has lost, against the HA strategy matrix; if no handling is needed, abnormality handling for that node ends and control returns to operation A-2; otherwise, if follow-up handling is needed, it goes to operation A-4.
  • Operation A-4: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud-computing VM programs on that node from running, then end; otherwise go to operation A-5.
  • That is, HaStack checks the working status of the shared storage device 400; if the shared storage device 400 is abnormal at this point, HA cannot be triggered (the cloud-computing VMs are not run) and this round of handling ends; otherwise, if storage is normal, it goes to operation A-5.
  • Operation A-5: issue a Fencing request to the computing node device whose connected shared storage device is in a normal state; Fencing kills, i.e. shuts down, that node's cloud-computing VM programs.
  • Operation A-6: issue a command to the Nova control module to trigger the cloud-computing VM programs that ran on that computing node device to resume running. A sketch of the whole inspection round follows below.
  • The high-availability computing-node module runs a method that includes the following operations:
  • Operation C-1: while the cloud-computing VM continuously updates and stores the lock heartbeat, no handling is needed as long as the writes succeed; once a lock-heartbeat write fails, go to operation C-2.
  • That is, the virtual machine continuously updates Lock's lock heartbeat and stores it; if the writes to storage succeed, no handling is needed; otherwise, once lock-heartbeat writes have failed for longer than a predetermined time, it proceeds to operation C-2.
  • Operation C-2: the Lock management module reports the storage-abnormality event to the management-end device and waits for it to return a handling result.
  • That is, Lock notifies HaStack-agent, which reports the underlying storage-abnormality event to HaStack and waits for HaStack to provide the handling result.
  • Operation C-3: if the management-end device returns the handling result within the prescribed time, go to operation C-5; otherwise go to operation C-4.
  • That is, if HaStack returns its handling opinion within the predetermined time, go to operation C-5; otherwise go to operation C-4.
  • Operation C-4: the Lock management module performs the Fencing operation, that is, kills the cloud-computing VM programs of that computing node device.
  • That is, Lock performs the Fencing isolation operation according to the default settings, killing (shutting down) all virtual machines running on that computing node.
  • Operation C-5: the Lock management module decides, according to the handling result returned by the management-end device, whether Fencing is needed. A sketch of this heartbeat loop follows below.
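A compact sketch of the compute-node loop C-1 to C-5 follows; the timeout value and the `lock`/`hastack_agent` helpers are assumptions made for illustration.

```python
import time

REPLY_TIMEOUT = 30        # assumed wait for HaStack's verdict, in seconds
HEARTBEAT_PERIOD = 5      # assumed heartbeat period, in seconds

def lock_heartbeat_loop(vm_uuid, lock, hastack_agent):
    while True:
        # C-1: keep writing the lock heartbeat; all is well while writes succeed
        if lock.write_heartbeat(vm_uuid):
            time.sleep(HEARTBEAT_PERIOD)
            continue
        # C-2: report the storage abnormality upward and wait for a verdict
        hastack_agent.report_storage_fault(vm_uuid)
        # C-3: did HaStack answer within the prescribed time?
        verdict = hastack_agent.wait_for_verdict(timeout=REPLY_TIMEOUT)
        if verdict is None:
            # C-4: no answer in time; fence by default to avoid split-brain
            lock.fence(vm_uuid)
        elif verdict.fencing_required:
            # C-5: obey the management side's decision
            lock.fence(vm_uuid)
```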
  • On the basis of Embodiment 1, as shown in FIG. 3, after the management-end device 100 issues a Fencing request to a computing node device whose connected shared storage device is in a normal state, HaStack must decide from the current state of the environment how to respond to the storage-interruption events reported by the underlying HaStack-agent. To this end, the high-availability module also runs the following operations:
  • Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2.
  • That is, HaStack continuously listens for Fencing events reported by HaStack-agent and, once a message is received, proceeds to operation B-2.
  • Operation B-2: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation B-3.
  • That is, HaStack checks whether the cluster status is normal; if abnormal, it triggers a cluster-abnormality alarm and ends this inspection round; if normal, it goes to operation B-3.
  • Operation B-3: check the network status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation B-4.
  • That is, HaStack checks the three-plane management-network status reported by each node through HaStack-agent.
  • Operation B-4: based on the abnormal status reported by each computing node device over the management network, decide whether handling is needed; if not, go to operation B-6; otherwise go to operation B-5.
  • That is, HaStack handles the abnormal nodes one by one, comparing each node's specific interruption type against the HA strategy matrix to determine the follow-up Fencing strategy; if no handling is needed, go to operation B-6; otherwise, if follow-up handling is needed, go to operation B-5.
  • Operation B-5: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed, go to operation B-6, then end; otherwise go to operation B-7.
  • That is, HaStack checks the storage status; if storage is abnormal, Fencing is not needed and it goes to operation B-6; otherwise it goes to operation B-7.
  • Operation B-6: in scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device.
  • That is, HaStack issues a stop-Fencing request to HaStack-agent.
  • Operation B-7: in scenarios where Fencing is needed, issue a Fencing request to the corresponding computing node device.
  • That is, HaStack issues a Fencing request to HaStack-agent. A sketch of this event handler follows below.
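The event-driven branch B-1 to B-7 could be sketched as below; again the helper objects are hypothetical and only mirror the decision structure described above.

```python
def on_fencing_event(event, cluster, nodes):
    # B-1: a Fencing event arrived from a HaStack-agent
    # B-2: cluster health gate, same as in operation A-1
    if not cluster.status_ok():
        cluster.raise_alarm("cluster abnormal")
        return
    node = nodes[event.node_id]
    # B-3/B-4: check the node's three planes against the HA strategy matrix
    if node.reported_status_ok() or not node.needs_handling():
        node.send_stop_fencing()        # B-6: Fencing not needed
        return
    # B-5: if shared storage itself is down, fencing would be pointless
    if not node.shared_storage_ok():
        node.send_stop_fencing()        # B-6
    else:
        node.send_fencing_request()     # B-7: confirm the Fencing
```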
  • The recovery process after the Lock management module's process restarts includes the following operations:
  • Operation D-1: when the Libvirt management module starts, it registers through the Lock management module and acquires the lock heartbeat; if registration fails, go to operation D-2.
  • That is, Libvirt registers with Lock and acquires the lock heartbeat at startup and, on failure, proceeds to operation D-2.
  • Operation D-2: once lock-heartbeat registration fails, kill (shut down) the cloud-computing VM programs of that computing node device.
  • Operation D-3: the Libvirt management module records all the computing node devices whose cloud-computing VM programs were killed, writing them into the Fencing log file.
  • Operation D-4: periodically check the Fencing (isolation) log file; if an update is found, go to operation D-5.
  • That is, HaStack-agent periodically checks the Fencing log on the node and, once it finds an update, moves to operation D-5.
  • Operation D-5: report the Fencing log files of all computing node devices to the management-end device; if the report fails, this round of handling ends and the report is left for next time; otherwise, after the report reaches the management-end device, the management-end device issues instructions for recovery.
  • That is, HaStack-agent reports all Fencing logs to HaStack; if the report fails, handling ends and the report is left for next time.
  • After the report reaches the management-end device, the management-end device performs the following specific operations:
  • Operation D-6: the management-end device receives the Fencing log file reported by the agent on the computing node device and decides whether to handle it automatically; for automatic handling go to operation D-8; if automatic handling is not needed, go to operation D-7.
  • That is, HaStack receives the Fencing log reported by the agent and decides, according to the preconfigured handling switch, whether to handle it automatically: for automatic handling go to D-8; if automatic handling is not needed, go to D-7.
  • Operation D-7: HaStack does not automatically restore the fenced virtual machines; it only raises an alarm, and an administrator restores them manually afterwards.
  • Operation D-8: the management-end device automatically handles the fenced cloud-computing VM programs, calling the Nova interface to bring them back into operation.
  • That is, when HaStack must handle the fenced virtual machines automatically, it calls the Nova interface for each one to trigger the HA recovery process. A sketch of this reporting path follows below.
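The log-driven recovery path D-4 to D-8 might look like the sketch below. The log location, the JSON format, and the `auto_recover` switch are assumptions; `nova.trigger_ha_recovery` is a hypothetical wrapper around the Nova HA call, not a real novaclient method.

```python
import json
import os

FENCING_LOG = "/var/log/hastack/fencing.json"   # assumed location and format

def report_fencing_log(hastack):
    """Agent side, D-4/D-5: ship new Fencing records; a failed upload is retried next round."""
    if not os.path.exists(FENCING_LOG):
        return
    with open(FENCING_LOG) as f:
        records = json.load(f)
    hastack.upload(records)              # raises on failure: left for the next report

def handle_fencing_report(records, auto_recover, hastack, nova):
    """Management side, D-6 to D-8."""
    if not auto_recover:
        hastack.raise_alarm(records)     # D-7: alarm only, manual recovery later
        return
    for rec in records:                  # D-8: restart each fenced VM via Nova
        nova.trigger_ha_recovery(rec["vm_uuid"])
```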
  • The cloud-computing VM program has a VM GuestOS operating system, which performs the following recovery operations after Fencing:
  • Operation E-1: the Qga inside the VM GuestOS and the HaStack-agent on the computing node continuously maintain a heartbeat; once the virtual machine blue-screens or hangs, go to operation E-2.
  • Operation E-2: when the high-availability computing-node module receives the report of the abnormal event, it reports it to the management-end device.
  • That is, when HaStack-agent receives an abnormal event, it immediately reports it to HaStack.
  • Operation E-3: after receiving the report of the abnormal event, the management-end device directly calls the Nova interface to bring the cloud-computing VM program back into operation.
  • That is, after receiving an abnormal event from inside the virtual machine, HaStack directly issues an HA command to Nova to trigger HA recovery. A sketch of the Qga probe follows below.
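As an illustration of the Qga heartbeat in E-1, the sketch below probes the guest agent with `virsh qemu-agent-command` (a real virsh subcommand); treating a failed `guest-ping` as the blue-screen/hang signal, and the reporting helper, are assumptions.

```python
import json
import subprocess

def guest_alive(domain: str) -> bool:
    """E-1: ping the Qemu guest agent (Qga) inside the VM."""
    cmd = ["virsh", "qemu-agent-command", domain,
           json.dumps({"execute": "guest-ping"})]
    return subprocess.run(cmd, capture_output=True).returncode == 0

def watch_guest(domain, hastack_agent):
    # E-2: on a missed guest heartbeat, report the abnormal event upward.
    # E-3 then happens on the management side, where HaStack calls the Nova
    # interface to bring the VM back into operation.
    if not guest_alive(domain):
        hastack_agent.report_guest_fault(domain)
```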
  • This embodiment provides a management method for the management-end device of the anti-split-brain OpenStack virtual machine high-availability system, which includes the following operations:
  • Operation A-1: check, from the collected health information, whether the cluster status is normal; if abnormal, trigger the cluster-abnormality alarm and end; if normal, go to operation A-2;
  • Operation A-2: check the status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation A-3;
  • Operation A-3: based on the abnormal status reported by each computing node device over the management network, decide node by node whether handling is needed; if not, end abnormality handling for that node and return to operation A-2; otherwise go to operation A-4;
  • Operation A-4: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud-computing VM programs on that node from running, then end; otherwise go to operation A-5;
  • Operation A-5: issue a Fencing request to the computing node device whose connected shared storage device is in a normal state;
  • Operation A-6: issue a command to the Nova control module to trigger the cloud-computing VM programs that ran on that computing node device to resume running.
  • Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2;
  • Operation B-2: check, from the collected health information, whether the cluster status is normal; if abnormal, trigger the cluster-abnormality alarm and end; if normal, go to operation B-3;
  • Operation B-3: check the network status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation B-4;
  • Operation B-4: based on the abnormal status reported by each computing node device over the management network, decide whether handling is needed; if not, go to operation B-6; otherwise go to operation B-5;
  • Operation B-5: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed, go to operation B-6, then end; otherwise go to operation B-7;
  • Operation B-6: in scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device;
  • Operation B-7: in scenarios where Fencing is needed, issue a Fencing request to the corresponding computing node device.
  • This embodiment provides a management method for the computing node device of the anti-split-brain OpenStack virtual machine high-availability system, which includes the following operations:
  • Operation C-1: while the VM continuously updates and stores the lock heartbeat, no handling is needed as long as the writes succeed; once a lock-heartbeat write fails, go to operation C-2;
  • Operation C-2: the Lock management module reports the storage-abnormality event to the management-end device and waits for it to return a handling result;
  • Operation C-3: if the management-end device returns the handling result within the prescribed time, go to operation C-5; otherwise go to operation C-4;
  • Operation C-4: if the management-end device does not return the handling result within the prescribed time, the Lock management module performs the Fencing operation, that is, kills or isolates the cloud-computing VM programs of that computing node device;
  • Operation C-5: the Lock management module decides, according to the handling result returned by the management-end device, whether Fencing is needed.
  • The recovery process after the Lock management module's process restarts includes the following operations:
  • Operation D-1: when the Libvirt management module starts, it registers through the Lock management module and acquires the lock heartbeat; if registration fails, go to operation D-2;
  • Operation D-2: once lock-heartbeat registration fails, kill (shut down) the cloud-computing VM programs of that computing node device;
  • Operation D-3: the Libvirt management module records all the computing node devices whose cloud-computing VM programs were killed, writing them into the Fencing log file;
  • Operation D-4: periodically check the Fencing log files; if an update is found, go to operation D-5;
  • Operation D-5: report the Fencing log files of all computing node devices to the management-end device; if the report fails, this round of handling ends and the report is left for next time; otherwise, after the report reaches the management-end device, the management-end device issues instructions for recovery.
  • Operation E-1: the Qga in the VM GuestOS and the high-availability computing-node module of the computing node device continuously maintain a heartbeat; when the cloud-computing VM program fails, go to operation E-2;
  • Operation E-2: when the high-availability computing-node module receives the report of the abnormal event, it reports it to the management-end device;
  • Operation E-3: after receiving the report of the abnormal event, the management-end device directly calls the Nova interface to bring the cloud-computing VM program back into operation.
  • Such failures include a blue screen, hang, or crash of the computing node device on which the cloud-computing VM program runs.
  • The invention is a secondary development based on the original OpenStack release:
  • a set of independent, anti-split-brain OpenStack virtual machine high-availability systems developed around the periphery of OpenStack. It removes the dependence on IPMI-plane detection and hardware watchdogs found in traditional HA solutions and realizes a complete virtual machine high-availability (HA) method of carrier-grade reliability. To this end, the present invention provides an improved anti-split-brain OpenStack virtual machine high-availability system.
  • Split-brain refers to the situation in a high-availability (HA) system in which two previously connected control or computing nodes lose contact and the system, originally one whole, splits into two independent nodes. The two nodes then begin to compete for shared resources, leaving the system in disorder and corrupting data.
  • The improved anti-split-brain OpenStack virtual machine high-availability management-end device and management method provided by the present invention solve this problem.
  • Because the anti-split-brain OpenStack virtual machine high-availability system has a high-availability module that runs the high-availability management method, it detects in real time, through the series of operations A-1 to A-6, the status of the connected computing node devices and the shared storage device. Based on the type of abnormality learned (an abnormality of a computing node device or of the shared storage device, and specifically which of the management, storage, or service network planes of the management network is abnormal), it decides whether to perform a Fencing operation to shut down the cloud-computing VM programs of the abnormal computing node device, thereby ensuring the high availability of the cloud-computing VM programs of the computing node devices in the system.
  • Because the system has a high-availability computing-node module that runs the series of operations C-1 to C-5, it updates and stores the lock heartbeat of the Lock distributed read-write lock in real time, reports write failures during updates to the management-end device in real time, and acts on the management-end device's handling result (whether to fence, i.e. shut down or isolate, that node's cloud-computing VM programs).
  • The protection granularity of the Lock distributed read-write lock is thus refined from the host level of the computing node device down to the VM level, enabling concurrent read-write protection for individual virtual machines.
  • The self-devised full-process VM Fencing protection mechanism prevents virtual machines from being terminated abnormally when failures of the shared storage device, or other faults, affect the underlying lock heartbeat.
  • An asynchronous notification mechanism solves the problem of HA losing track of VMs when Lock restarts, achieving automatic recovery.
  • By integrating Etcd and Qga, HaStack achieves health monitoring of the three management-network planes (management, service, and storage network planes) and precise awareness of the virtual machines' internal operating state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Computer And Data Communications (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Hardware Redundancy (AREA)

Abstract

An anti-split-brain OpenStack virtual machine high-availability system comprises management-end devices, a management network, computing node devices, and a shared storage device. At least two management-end devices communicate over the management network to form a management cluster; the management-end devices and the computing node devices are communicatively connected through the management network; and the computing node devices are connected to the shared storage device. Each management-end device comprises a Nova control module, a cluster management module, and a high-availability module for the high-availability management of all computing node devices. In addition to the installed cloud-computing virtual machine (VM) program, each computing node device further has: a Nova-compute module; a Libvirt management module, which provides on KVM the management process of the standard Hypervisor API interface; a Lock management module, which cooperates with the Libvirt management module to update and monitor the lock heartbeat on the shared storage device; and a high-availability computing-node module, used at least to report the lock heartbeat to the management-end devices.

Description

Anti-split-brain OpenStack virtual machine high-availability system

Technical Field

The present invention relates to the field of cloud computing, and in particular to an anti-split-brain OpenStack virtual machine high-availability system; it belongs to the field of computing.

Background

As cloud technology solutions have matured, OpenStack-based cloud computing platforms have been applied ever more widely, and large numbers of business systems have been migrated onto cloud platforms to provide services. Among platform features, virtual machine high availability (HA, High Availability), introduced into the cloud environment as an important virtualization-platform capability, has become increasingly important. It automatically recovers running virtual machines when a physical host fails, improving the reliability of the cloud platform and greatly improving the maintainability of the whole platform.
Native OpenStack, however, does not provide a complete HA solution:
On the one hand, the Nova module responsible for compute management provides only the Evacuate interface for evacuating virtual machines to other nodes when a host fails; the module itself lacks HA scheduling and management capability.
On the other hand, Masakari, the open-source sub-project dedicated to HA, has only just graduated from OpenStack incubation to official-project status; its maturity is still low, it can complete HA recovery only in a few scenarios, and it cannot yet support commercial use.
In addition, some vendors offer their own high-availability solutions. Red Hat's solution, for example, implements HA and Fencing (isolation) through the Pacemaker software. The whole solution depends on an IPMI plane and a hardware watchdog, and it can only handle simple scenarios such as failures of the host-monitoring network; it cannot handle or distinguish the complex scenarios in which other network planes on a computing node (such as the management, service, and storage network planes) fail.
Summary of the Invention

The present invention provides an anti-split-brain OpenStack virtual machine high-availability system, characterized by comprising management-end devices, a management network, computing node devices, and a shared storage device,
wherein at least two management-end devices communicate over the management network to form a management cluster,
the management-end devices and the computing node devices are communicatively connected through the management network,
the computing node devices are connected to the shared storage device,
and each management-end device comprises:
a Nova control module, comprising Nova's native virtual machine (VM) management processes, used to manage the life cycle of virtual machines;
a cluster management module, used to collect cluster health information; and
a high-availability module, used for the high-availability management of all computing node devices.
The high-availability module runs a high-availability management method that comprises the following operations:
Operation A-1: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation A-2;
Operation A-2: check the status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation A-3;
Operation A-3: based on the abnormal status reported by each computing node device over the management network, decide node by node whether handling is needed; if not, end abnormality handling for that node and return to operation A-2; otherwise go to operation A-4;
Operation A-4: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud-computing VM programs on that node from running, then end; otherwise go to operation A-5;
Operation A-5: issue a Fencing (isolation) request to the computing node device whose connected shared storage device is in a normal state; Fencing means killing, i.e. isolating and shutting down, that node's cloud-computing VM programs;
Operation A-6: issue a command to the Nova control module to trigger the cloud-computing VM programs that ran on that computing node device to resume running.
In addition to the installed cloud-computing VM programs, each computing node device further comprises:
a Nova-compute module, used to respond directly to the management processes of the management-end device to control the running state of the virtual machines, and to communicate with the Hypervisor API;
a Libvirt management module, used to provide on KVM the management process of the standard Hypervisor API interface;
a Lock management module, cooperating with the Libvirt management module, used to update and monitor the lock heartbeat on the shared storage device; and
a high-availability computing-node module, used at least to report the lock heartbeat to the management-end device,
wherein the high-availability computing-node module runs a method comprising the following operations:
Operation C-1: while a virtual machine continuously updates and stores the lock heartbeat, no handling is needed as long as the writes succeed; once a lock-heartbeat write fails, go to operation C-2;
Operation C-2: the Lock management module reports the storage-abnormality event to the management-end device and waits for the management-end device to return a handling result;
Operation C-3: if the management-end device returns the handling result within the prescribed time, go to operation C-5; otherwise go to operation C-4;
Operation C-4: if the management-end device does not return the handling result within the prescribed time, the Lock management module performs the Fencing (isolation) operation, that is, kills (shuts down) or isolates the cloud-computing VM programs of that computing node device;
Operation C-5: the Lock management module decides, according to the handling result returned by the management-end device, whether Fencing is needed.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
after the management-end device issues a Fencing request to a computing node device whose connected shared storage device is in a normal state, the high-availability module further runs the following operations:
Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2;
Operation B-2: check, from the health information collected by the cluster management module, whether the cluster status is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation B-3;
Operation B-3: check the network status reported by each computing node device over the management network; if normal, this inspection round ends; otherwise go to operation B-4;
Operation B-4: based on the abnormal status reported by each computing node device over the management network, decide whether handling is needed; if not, go to operation B-6; otherwise go to operation B-5;
Operation B-5: for a computing node device in an abnormal state that needs handling, check the status of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed, go to operation B-6, then end; otherwise go to operation B-7;
Operation B-6: in scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device;
Operation B-7: in scenarios where Fencing is needed, issue a Fencing request to the corresponding computing node device.
The recovery process after the Lock management module's process restarts comprises the following operations:
Operation D-1: when the Libvirt management module starts, it registers through the Lock management module and acquires the lock heartbeat; if registration fails, go to operation D-2;
Operation D-2: once lock-heartbeat registration fails, kill (shut down) the cloud-computing VM programs of that computing node device;
Operation D-3: the Libvirt management module records all the computing node devices whose cloud-computing VM programs were killed, writing them into an isolation log file;
Operation D-4: periodically check the isolation log file; if an update is found, go to operation D-5;
Operation D-5: report the isolation log files of all computing node devices to the management-end device; if the report fails, this round of handling ends and the report is left for next time; otherwise, after the report reaches the management-end device, the management-end device issues instructions for recovery.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
after the report reaches the management-end device, the management-end device performs the following specific operations:
Operation D-6: the management-end device receives the isolation log file reported by the agent on the computing node device and decides whether to handle it automatically; for automatic handling go to operation D-8; if automatic handling is not needed, go to operation D-7;
Operation D-7: the management-end device raises an alarm and leaves the matter for manual handling;
Operation D-8: the management-end device automatically handles the fenced cloud-computing VM programs, calling the Nova interface to bring them back into operation.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following features:
the shared storage device is managed and run by a CephFS or NFS file-management program,
the virtual machine VM management processes include Nova-api, Nova-conductor, or Nova-scheduler, and
the cluster management module includes Etcd or Consul.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
the management network comprises:
a management network plane, used to connect to the management-end devices and to provide management services;
a storage network plane, used to connect to the back-end shared storage device and to provide storage services; and
a service network plane, used to connect to the computing node devices and to provide access services for the cloud-computing VMs.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
only when the management network plane, storage network plane, and service network plane of the management network are all normal is the network status reported by a computing node device over the management network in operation A-2 judged normal; otherwise, handling proceeds according to which one or more of the management, storage, and service network planes the abnormal computing node device's specific interruption involves.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
the management network comprises:
a management network plane, used to connect to the management-end devices and to provide management services;
a storage network plane, used to connect to the back-end shared storage device and to provide storage services; and
a service network plane, used to connect to the computing node devices and to provide VM access services;
correspondingly, only when the management network plane, storage network plane, and service network plane of the management network are all normal is the network status reported by a computing node device over the management network in operation B-3 judged normal; otherwise, the corresponding Fencing handling proceeds according to which one or more of the management, storage, and service network planes the abnormal computing node device's specific interruption involves.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
the cloud-computing VM program has a VM GuestOS operating system, which performs the following recovery operations after Fencing:
Operation E-1: the Qga inside the VM GuestOS and the high-availability computing-node module of the computing node device continuously maintain a lock heartbeat; when the cloud-computing VM program fails, go to operation E-2;
Operation E-2: when the high-availability computing-node module receives the report of the abnormal event, it reports it to the management-end device;
Operation E-3: after receiving the report of the abnormal event, the management-end device directly calls the Nova interface to bring the cloud-computing VM program back into operation.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
such failures include a blue screen, hang, or crash of the computing node device on which the cloud-computing VM program runs.
The anti-split-brain OpenStack virtual machine high-availability system provided by the present invention may further have the following feature:
after the report reaches the management-end device, the management-end device performs the following specific operations:
Operation D-6: the management-end device receives the isolation log file reported by the agent on the computing node device and decides whether to handle it automatically; for automatic handling go to operation D-8; if automatic handling is not needed, go to operation D-7;
Operation D-7: the management-end device raises an alarm and leaves the matter for manual handling;
Operation D-8: the management-end device automatically handles the fenced cloud-computing VM programs, calling the Nova interface to bring them back into operation.
Effects of the Invention

According to the anti-split-brain OpenStack virtual machine high-availability system provided by the present invention, because the system has a high-availability module able to run the high-availability management method, it detects in real time, through the series of operations A-1 to A-6, the status of the connected computing node devices and the shared storage device. Based on the type of abnormality learned (an abnormality of a computing node device or of the shared storage device, and specifically which of the management, storage, or service network planes of the management network is abnormal), it decides whether to perform a Fencing operation to shut down the cloud-computing VM programs of the abnormal computing node device, thereby ensuring the high availability of the cloud-computing VM programs of the computing node devices in the system.
Because the system has a high-availability computing-node module able to run the series of operations C-1 to C-5, it updates and stores the lock heartbeat of the Lock distributed read-write lock in real time, reports write failures during updates to the management-end device in real time, and acts on the management-end device's handling result (whether to fence, i.e. shut down, that computing node device's cloud-computing VM programs). The protection granularity of the Lock distributed read-write lock is thus refined from the host level of the computing node device down to the VM level, enabling concurrent read-write protection for a single virtual machine.

Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
FIG. 2 is a schematic flowchart of the high-availability management method of the management-end device of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
FIG. 3 is a schematic flowchart of Fencing performed by the high-availability module of the management-end device of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
FIG. 4 is a schematic flowchart of the high-availability management method of the computing node device of the anti-split-brain OpenStack virtual machine high-availability system in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the recovery process after the process of the Lock management module of the computing node device restarts, in an embodiment of the present invention; and
FIG. 6 is a schematic diagram of the steps of the recovery operation performed after Fencing by the cloud-computing VM program of the computing node device, in an embodiment of the present invention.

Detailed Description of Embodiments

To make the technical means, inventive features, objectives, and effects of the present invention easy to understand, the following embodiments describe the anti-split-brain OpenStack virtual machine high-availability system of the present invention in detail with reference to the accompanying drawings.
Explanation of English Abbreviations and Technical Terms
VM: Virtual Machine, a complete computer system simulated by software, with full hardware-system functionality, running in a fully isolated environment.
OpenStack: an open-source cloud computing management platform project, initiated and jointly developed by NASA (the National Aeronautics and Space Administration) and Rackspace, released as free and open-source software under the Apache license.
Nova: the compute resource management component of the OpenStack project, comprising the nova-api, nova-scheduler, nova-conductor, and nova-compute processes. As the core compute controller of the whole OpenStack project, it manages the life cycle of user virtual machine instances to provide virtual services, supporting VM life-cycle operations such as creation, power-on, power-off, suspend, pause, resize, migration, reboot, and destruction, as well as CPU and memory sizing and cluster scheduling.
Nova-api: the externally facing interaction interface of Nova and its message-handling entry point. Administrators can manage the internal infrastructure through this interface, and services can be provided to users through it. After receiving and basically validating a request, it forwards the request through the message queue to the next module.
Nova-scheduler: performs the scheduling of the virtual machine instances within Nova. Based on conditions such as CPU architecture, host memory, load, and specific hardware requirements, it schedules each instance onto a suitable node.
Nova-conductor: the processor of long-running tasks inside Nova. It mainly tracks and manages time-consuming tasks such as instance creation and migration, and also controls database access permissions so that nova-compute does not access the database directly.
Nova-compute: located on the computing nodes, the actual executor of VM life-cycle management operations. It receives requests through the message queue, responds to the management processes of the control node, and is directly responsible for all communication with the Hypervisor.
Nova controller: a role definition. It generally refers to the Nova processes mainly responsible for handling VM management operations, including nova-api, nova-conductor, and nova-scheduler; these are usually deployed on a dedicated node called the management node, separate from the computing nodes that run nova-compute.
HaStack: one of the two self-developed components that provide the HA functionality in a client-server structure, located on the server side. As the brain of HA management, it manages global HA behavior; its functionality is executed by the high-availability module.
HaStack-agent: the other of the two self-developed components that provide the HA functionality in a client-server structure, located on the agent side. It is mainly responsible for mounting the shared directory, reporting the node's heartbeat state and VM Fencing events, and cooperating with HaStack to manage part of the HA actions; its functionality is run by the high-availability computing node module.
API: Application Programming Interface. A component exposes its core through an API for external access and invocation.
Hypervisor: an intermediate software layer running between the physical server and operating systems, which allows multiple operating systems and applications to share one set of underlying physical hardware; it can therefore be regarded as the "meta" operating system of a virtual environment. As an abstraction of the platform hardware and operating systems, it coordinates access to all physical devices and virtual machines on the server, and is also called the Virtual Machine Monitor. The Hypervisor is the core of all virtualization technology, and the ability to support non-disruptive migration of multiple workloads is among its basic capabilities. When the server starts and executes the Hypervisor, it allocates the appropriate amount of memory, CPU, network, and disk to each virtual machine and loads the guest operating systems of all virtual machines.
KVM: Kernel-based Virtual Machine, an open-source system virtualization module providing hardware-based full virtualization, mainly kernel-based virtual machines.
Libvirt: the management process that provides a standard Hypervisor API on top of KVM.
Lock: run by the Lock management module 304 and located in the computing node device 300; it cooperates with the libvirt component, sits above the shared storage device 400 in the architecture, and performs the updating and monitoring of the various lock heartbeats. It provides distributed read-write locks to control and manage concurrent writes to the same storage. The Lock module in this embodiment is a newly invented distributed read-write lock manager designed with reference to native Lock functionality; as needed, the native Lock module can also be used, or adaptively re-developed.
Etcd: a highly available distributed key-value database, implemented in Go, which guarantees strong consistency through a consensus algorithm. In this solution it serves as the cluster software and mainly provides two functions: first, forming the three-plane clusters and sensing the global health state for HA decisions; second, acting as the information bridge between HaStack and HaStack-agent.
Consul: an open-source tool from HashiCorp for service discovery and configuration in distributed systems. In this solution it can serve as the cluster software, performing three-plane detection and bridging information between HaStack and HaStack-agent.
Ceph: a unified distributed storage software designed for excellent performance, reliability, and scalability.
CephFS: the distributed file system provided on top of Ceph storage. In this solution it is mainly used to store the lock files of the various Lock modules.
NFS: the Network File System, which allows computers on a network to share files or directories over TCP/IP. An NFS server can allow NFS clients to mount the shared directories of the remote NFS server locally; local NFS client applications can then read and write files on the remote NFS server transparently, as if accessing local disk partitions and directories.
Fencing: in the distributed field, when the state of some resources is uncertain, the practice of isolating and shutting down the suspect resources for data protection and split-brain avoidance.
GuestOS: in virtualization, "guest" refers to the virtualized system, that is, the virtual machine instance running software such as an operating system; GuestOS is the operating system used by the virtual machine.
QGA: short for Qemu-Guest-Agent, an ordinary application running inside the virtual machine. It adds a serial port to the virtual machine for socket communication with the host, providing a way for the host and the virtual machine VM to interact.
Embodiment 1
As shown in Fig. 1, the split-brain-preventing OpenStack virtual machine high-availability system comprises management-side devices 100, a management network 200, computing node devices 300, and a shared storage device 400.
At least two management-side devices communicate with each other through the management network to form a management cluster 110.
The management-side devices and the computing node devices are communicatively connected through the management network.
The computing node devices are connected to the shared storage device.
Specifically, as shown in Fig. 1, the description takes three management-side devices 100 (control nodes A, B, and C in the figure), three computing node devices 300 (computing nodes A, B, and C in the figure), and one shared storage device 400 as an example.
In this embodiment, all three computing node devices 300 are connected to one shared storage device 400, that is, the three computing node devices 300 share one shared storage device 400.
Each management-side device 100 comprises a Nova control module 101, a cluster management module 102, and a high-availability module 103.
The Nova control module 101 (Nova controller in the figure) comprises Nova's native virtual machine VM management processes and performs life-cycle management of the virtual machines VM.
The cluster management module 102 (Etcd in the figure) collects the health information of the cluster.
The high-availability module 103 (FitOS HaStack in the figure) performs high-availability management of all computing node devices.
The management network 200 is divided into three network planes: a management network plane 201, a storage network plane 202, and a business network plane 203.
The management network plane 201 interfaces with the management-side devices and provides management services.
The storage network plane 202 interfaces with the back-end shared storage device and provides storage services.
The business network plane 203 interfaces with the computing node devices and provides access services for the cloud computing virtual machines VM.
All nodes are connected to the three planes, and the cluster management module 102 (Etcd in the figure) forms a corresponding cluster on each plane.
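As an illustration of how such per-plane clusters can feed HA decisions, the following minimal sketch has each node keep a liveness key alive on every plane, assuming the python-etcd3 client; the endpoint addresses and the key layout are hypothetical, not the product's actual schema.

```python
# Each node keeps a key alive under a TTL lease on every network plane.
# A plane whose lease lapses is exactly the interruption HaStack reasons about.
import socket
import time
import etcd3

PLANE_ENDPOINTS = {                  # assumed: one etcd cluster per plane
    "management": ("10.0.0.1", 2379),
    "storage":    ("10.1.0.1", 2379),
    "business":   ("10.2.0.1", 2379),
}

def report_liveness(ttl=10, period=3):
    node = socket.gethostname()
    clients = {p: etcd3.client(host=h, port=pt)
               for p, (h, pt) in PLANE_ENDPOINTS.items()}
    leases = {p: c.lease(ttl) for p, c in clients.items()}
    for plane, client in clients.items():
        client.put(f"/ha/{plane}/{node}", b"alive", lease=leases[plane])
    while True:
        for plane, lease in leases.items():
            try:
                lease.refresh()      # heartbeat on this plane
            except Exception:
                pass                 # unreachable plane: the key simply expires
        time.sleep(period)
```

On the server side, reading which keys have expired on which plane yields the per-plane interruption picture that the checks described below consume.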
In addition to the installed cloud computing virtual machine VM programs 301 (VM in the figure), each computing node device 300 has a Nova-compute module 302, a Libvirt management module 303, a Lock management module 304, and a high-availability computing node module 305.
The Nova-compute module 302 (Nova-compute in the figure) directly responds to the management processes of the management-side devices to control the running state of the cloud computing virtual machines VM, and communicates with the Hypervisor API.
The Libvirt management module 303 (Libvirt in the figure) is the management process providing a standard Hypervisor API on top of KVM.
The Lock management module 304 (Lock in the figure) cooperates with the Libvirt management module to update and monitor the lock heartbeats of the shared storage device.
The high-availability computing node module 305 (HaStack-agent in the figure) at least reports the lock heartbeat to the management-side devices.
The OpenStack Nova components and services involved in the management-side device 100 and the computing node device 300 are explained below.
Nova-controller, run by the Nova control module 101 and located in the management-side device 100, comprises virtual machine management processes such as Nova-api, Nova-conductor, and Nova-scheduler, and is mainly used for life-cycle management of the virtual machines VM.
HaStack, run by the high-availability module 103 and located in the management-side device 100, manages global HA behavior.
The cluster software, run by the cluster management module 102, can be Etcd, Consul, or similar; this embodiment uses Etcd. Used together with the HaStack component and located in the management-side device 100, it senses the health state of the whole cluster for HA decisions and acts as the information bridge between the high-availability module 103 and the high-availability computing node module 305.
Nova-compute, the native Nova process, run by the Nova-compute module 302 and located in the computing node device 300, responds to the management processes of the control node; it is the actual executor of VM life-cycle management operations and is directly responsible for all communication with the Hypervisor.
HaStack-agent, used together with the nova-compute process, run by the high-availability computing node module 305 and located in the computing node device 300, is mainly responsible for mounting the shared directory, reporting the node's lock-heartbeat state, and cooperating with the HaStack component to manage part of the HA actions.
Libvirt, located in the computing node device 300 and run by the Libvirt management module 303, is the management process providing a standard Hypervisor API on top of the virtual machines VM.
Lock, run by the Lock management module 304 and located in the computing node device 300, cooperates with the libvirt component, sits above the shared storage device 400 in the architecture, and performs the updating and monitoring of the various lock heartbeats. It provides distributed read-write locks to control and manage concurrent writes to the same storage. The Lock module in this embodiment is a newly invented distributed read-write lock manager designed with reference to native Lock functionality; as needed, the native Lock module can also be used, or adaptively re-developed. The shared storage system, run by the shared storage device 400, uses software such as CephFS or NFS to provide shared file-system storage.
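To illustrate the granularity the Lock module provides, the following minimal sketch holds one exclusive lock file per VM on the shared mount and refreshes a timestamp in it as the lock heartbeat. It shows the per-VM locking idea only, with an assumed lock directory path, and is not the patented Lock implementation.

```python
# One lock file per VM on the shared CephFS/NFS mount, held with flock and
# refreshed periodically so peers can detect a stale owner.
import fcntl
import os
import time

LOCK_DIR = "/var/lib/nova/vm-locks"          # assumed shared-storage mount

def acquire_vm_lock(vm_uuid):
    os.makedirs(LOCK_DIR, exist_ok=True)
    fd = os.open(os.path.join(LOCK_DIR, vm_uuid + ".lock"),
                 os.O_RDWR | os.O_CREAT, 0o600)
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)   # raises if already held
    return fd

def lock_heartbeat(fd, period=5):
    while True:
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, str(time.time()).encode())  # refresh the heartbeat stamp
        os.fsync(fd)                             # force it onto shared storage
        time.sleep(period)
```

Because the lock is taken per VM rather than per host, two hosts can never both write the same VM's disk, which is the split-brain case the system is built to exclude.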
As shown in Fig. 2, the high-availability module 103 runs the high-availability management method, which comprises the following operations:
Operation A-1: check, via the health information collected by the cluster management module, whether the cluster state is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation A-2.
Specifically, HaStack checks whether the cluster state is normal; if abnormal, it triggers a cluster-abnormality alarm and ends this round of checking; if normal, it goes to operation A-2.
Operation A-2: check the state reported by each computing node device over the management network; if normal, terminate this round of checking; otherwise go to operation A-3.
Specifically, HaStack checks the three-plane management network state reported by each node through HaStack-agent; if all states are normal, this round of checking terminates; otherwise it goes to operation A-3.
Operation A-3: based on the abnormal state reported by each computing node device over the management network, judge node by node whether handling is required; if not, exception handling for that computing node device ends and the flow returns to operation A-2; otherwise go to operation A-4.
Specifically, HaStack handles the abnormal nodes one by one, comparing each node's interrupted network plane against the HA policy matrix to determine the follow-up strategy; if no handling is needed, exception handling for that node ends and the flow returns to operation A-2; otherwise, if follow-up handling is needed, it goes to operation A-4.
Operation A-4: for a computing node device in an abnormal state that requires handling, check the state of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud computing virtual machine VM programs on that computing node device from running, and end; otherwise go to operation A-5.
Specifically, HaStack checks the working state of the shared storage device 400; if the shared storage device 400 is abnormal at this time, HA cannot be triggered, that is, the cloud computing virtual machines VM are not run, and this round of handling ends; otherwise, if storage is normal, it goes to operation A-5.
Operation A-5: issue a Fencing request to the computing node device whose connected shared storage device is normal; Fencing means killing the cloud computing virtual machine VM programs of that node.
Operation A-6: issue a command to the Nova control module to trigger the cloud computing virtual machine VM programs on that computing node device to run.
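To make the control flow concrete, here is a condensed, self-contained sketch of one A-1 to A-6 round. All callables are stand-ins for the module interactions named above (Etcd health, agent plane reports, Nova calls), not real APIs, and the inline policy is an illustrative default only.

```python
def ha_round(nodes, cluster_healthy, plane_faults, storage_ok,
             fence, restart_vms, keep_vms_down, alarm):
    if not cluster_healthy():            # A-1: Etcd-collected cluster state
        alarm("cluster abnormal")
        return
    for node in nodes:                   # A-2: walk the reported plane states
        faults = plane_faults(node)      # set of interrupted planes, e.g. set()
        if not faults:
            continue                     # node is healthy this round
        if not needs_handling(faults):   # A-3: consult the HA policy matrix
            continue
        if not storage_ok(node):         # A-4: sick storage forbids HA restart
            keep_vms_down(node)
            continue
        fence(node)                      # A-5: kill the node's VMs
        restart_vms(node)                # A-6: have Nova bring them back up

def needs_handling(faults):
    # Illustrative stand-in for the HA policy matrix: a lone management-plane
    # loss only alarms, while business or storage plane loss is acted on.
    return faults != {"management"}
```

Injecting the checks as callables keeps the round testable end to end with fakes, without touching a live cluster.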
As shown in Fig. 4, a storage fault of the underlying shared storage device 400 prevents the Lock heartbeat from being written on time; HaStack-agent and HaStack must then confirm between them whether Fencing should be executed, so the high-availability computing node module runs a method comprising the following operations:
Operation C-1: while the cloud computing virtual machines VM continuously update and store the lock heartbeat, if the writes are normal no handling is needed; otherwise, once a lock-heartbeat write is abnormal, go to operation C-2.
Specifically, on the computing node device, the virtual machines VM keep updating and storing the Lock heartbeat; if the writes to storage succeed, no handling is needed; once a heartbeat write has remained abnormal beyond a predetermined time, the flow goes to operation C-2.
Operation C-2: the Lock management module reports the storage-abnormality event to the management-side device and waits for the management-side device to return a handling result.
Specifically, Lock notifies HaStack-agent to report the underlying storage-abnormality event to HaStack and waits for HaStack to provide a handling result.
Operation C-3: if the management-side device returns a handling result within the specified time, go to operation C-5; otherwise go to operation C-4.
Specifically, if HaStack returns its decision within the predetermined time, the flow goes to operation C-5; otherwise it goes to operation C-4.
Operation C-4: if the management-side device does not return a handling result within the specified time, the Lock management module executes the Fencing operation, that is, it kills the cloud computing virtual machine VM programs of that computing node device.
Specifically, once HaStack fails to return a result on time, Lock executes the Fencing isolation according to its default setting, that is, it kills or isolates all virtual machines VM running on that computing node.
Operation C-5: the Lock management module judges, according to the handling result returned by the management-side device, whether Fencing is needed.
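The agent-side counterpart can be sketched the same way. The loop below follows C-1 to C-5: watch the heartbeat writes, escalate on failure, and fence by default if the server stays silent. The callables and the 30-second verdict window are assumptions, not the patented timing or interfaces.

```python
import time

def lock_heartbeat_watch(write_ok, report_fault, get_verdict, fence,
                         period=5, verdict_timeout=30):
    while True:
        if write_ok():                    # C-1: heartbeat landed on storage
            time.sleep(period)
            continue
        report_fault()                    # C-2: escalate to HaStack
        deadline = time.time() + verdict_timeout
        verdict = None
        while verdict is None and time.time() < deadline:
            verdict = get_verdict()       # C-3: poll for the server's ruling
            time.sleep(1)
        if verdict in (None, "fence"):    # C-4 default / C-5 explicit order
            fence()
        time.sleep(period)
```

Fencing by default on silence is the conservative choice: an unreachable server plus unwritable storage is precisely the uncertain state in which continuing to run the VMs risks split-brain.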
Embodiment 2
Building on embodiment 1, as shown in Fig. 3, after the management-side device 100 issues a Fencing request to a computing node device whose connected shared storage device is normal, HaStack must determine from the current state of the environment how to respond to the storage-interruption events reported by the underlying HaStack-agent side; to this end the high-availability module further runs the following operations:
Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2.
Specifically, HaStack continuously listens for Fencing events reported by HaStack-agent; once a message is received it goes to operation B-2.
Operation B-2: check, via the health information collected by the cluster management module, whether the cluster state is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation B-3.
Specifically, HaStack checks whether the cluster state is normal; if abnormal, it triggers a cluster-abnormality alarm and ends this round of checking; if normal, it goes to operation B-3.
Operation B-3: check the network state reported by each computing node device over the management network; if normal, terminate this round of checking; otherwise go to operation B-4.
Specifically, HaStack checks the three-plane management network state reported by each node through HaStack-agent.
Operation B-4: based on the abnormal state reported by each computing node device over the management network, judge whether handling is required; if not, go to operation B-6; otherwise go to operation B-5.
Specifically, HaStack handles the abnormal nodes one by one, comparing each node's specific interruption type against the HA policy matrix to determine the follow-up Fencing strategy; if no handling is needed it goes to operation B-6; otherwise, if follow-up handling is needed, it goes to operation B-5.
Operation B-5: for a computing node device in an abnormal state that requires handling, check the state of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed and the flow goes to operation B-6 and ends; otherwise go to operation B-7.
Specifically, HaStack checks the storage state; if the storage is abnormal, no Fencing is needed and it goes to operation B-6; otherwise it goes to operation B-7.
Operation B-6: for scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device.
Specifically, for scenarios where Fencing is not needed, HaStack issues a stop-Fencing request to HaStack-agent.
Operation B-7: for scenarios where Fencing is needed, issue an execute-Fencing request to the corresponding computing node device.
Specifically, for scenarios where Fencing is needed, HaStack issues an execute-Fencing request to HaStack-agent.
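A compact way to render this arbitration is a pure decision function that answers each reported event with a stop or execute verdict, mirroring B-2 through B-7. The check callables and the inline management-plane exemption below are placeholders for the HA policy matrix, not its shipped contents.

```python
def on_fencing_event(node, cluster_healthy, plane_faults, storage_ok, alarm):
    if not cluster_healthy():            # B-2: the cluster itself is unhealthy
        alarm("cluster abnormal")
        return "no-answer"
    faults = plane_faults(node)          # B-3: three-plane state of the node
    if not faults:
        return "stop"                    # false alarm, end this round (B-6)
    if faults == {"management"}:         # B-4: policy says no action needed
        return "stop"                    # B-6
    if not storage_ok(node):             # B-5: storage fault, fencing misfires
        return "stop"                    # B-6
    return "execute"                     # B-7: tell the agent to fence
```

Keeping the verdict a pure function of the three checks makes every stop/execute decision reproducible from the state HaStack saw at the time.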
Embodiment 3
Building on embodiments 1-2, as shown in Fig. 5: because a large amount of Lock data is kept in memory without persistence, an abnormal restart of the Lock module/process clears all resources previously mounted under the lock space, leaving all the original virtual machines VM unmanaged. Recovery after a restart of the Lock management module's process is then required, and the recovery process comprises the following operations:
Operation D-1: when the Libvirt management module starts, register and obtain the lock heartbeat through the Lock management module; if registration fails, go to operation D-2.
Specifically, Libvirt registers and obtains the lock heartbeat through Lock at startup; on failure it goes to operation D-2.
Operation D-2: once lock-heartbeat registration fails, kill the cloud computing virtual machine VM programs of that computing node device.
Operation D-3: the Libvirt management module records every computing node device whose cloud computing virtual machine VM programs were killed, recording them in the Fencing log file.
Operation D-4: periodically check the Fencing log file; if an update is found, go to operation D-5.
Specifically, HaStack-agent periodically checks the Fencing log on the node; once an update is found it goes to operation D-5.
Operation D-5: report the Fencing log files of all computing node devices to the management-side device; if the report fails, this round of handling ends and the report is retried next time; otherwise, after the report reaches the management-side device, the management-side device issues instructions for recovery.
Specifically, HaStack-agent reports all Fencing logs to HaStack; if the report fails, this round of handling ends and the report is retried next time.
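The log-scan-and-report cycle of operations D-3 to D-5 amounts to tailing a file with an offset that advances only on a successful upload, so a failed report is naturally retried on the next periodic pass. In this sketch the log path and upload() are assumptions rather than the agent's real format.

```python
import os

FENCING_LOG = "/var/log/hastack/fencing.log"   # assumed location

def report_new_entries(upload, state):
    """state is a dict like {"pos": 0} kept between periodic runs (D-4)."""
    if not os.path.exists(FENCING_LOG):
        return
    with open(FENCING_LOG) as logfile:
        logfile.seek(state["pos"])
        entries = logfile.read()
    if entries and upload(entries):            # D-5: report to HaStack
        state["pos"] += len(entries)           # advance only when it got through
```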
Embodiment 4
Building on embodiment 3, after the report reaches the management-side device, the management-side device performs the following specific operations:
Operation D-6: the management-side device receives the Fencing log file reported by the agent of a computing node device and judges whether automatic handling is required; if so, it goes to operation D-8; if not, to operation D-7.
Specifically, HaStack receives the Fencing log reported by the agent and, according to a pre-configured handling switch, determines whether automatic handling is required: if so it goes to operation D-8; if not, to operation D-7.
Operation D-7: the management-side device raises an alarm and leaves the case to manual handling.
Specifically, HaStack does not automatically recover the fenced virtual machines; it only raises an alarm and leaves recovery to the administrator.
Operation D-8: the management-side device automatically handles the fenced cloud computing virtual machine VM programs, calling the Nova interface to bring the cloud computing virtual machine VM programs back into operation.
Specifically, when HaStack is to handle the fenced virtual machines automatically, it calls the Nova interface for each of them to trigger the HA recovery flow.
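For the automatic branch, the recovery call can be pictured as a plain request against Nova's server-action API, as in the sketch below. The endpoint, token handling, and the bare evacuate body are simplified assumptions: a real deployment would authenticate through Keystone and pick whichever Nova action (evacuate, reboot, rebuild) its HA policy prescribes.

```python
import requests

NOVA = "http://controller:8774/v2.1"           # assumed Nova endpoint

def recover_fenced_vms(vm_uuids, token, auto, alarm=print):
    if not auto:                               # D-7: hand over to the operator
        alarm(f"fenced VMs awaiting manual recovery: {vm_uuids}")
        return
    for uuid in vm_uuids:                      # D-8: trigger HA one VM at a time
        requests.post(f"{NOVA}/servers/{uuid}/action",
                      headers={"X-Auth-Token": token},
                      json={"evacuate": {}},   # Nova server action
                      timeout=10)
```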
Embodiment 5
Further, building on embodiments 1-4, the cloud computing virtual machine VM program has a VM GuestOS operating system, which performs the following recovery operations after Fencing:
Operation E-1: the Qga inside the VM GuestOS keeps a continuous heartbeat with the high-availability computing node module of the computing node device; when the cloud computing virtual machine VM program fails, go to operation E-2.
Specifically, the Qga in the VM GuestOS keeps a continuous heartbeat with the node's HaStack-agent; once a blue screen or hang occurs inside the virtual machine, the flow goes to operation E-2.
Operation E-2: when the high-availability computing node module receives a report of the abnormal event, it reports it to the management-side device.
Specifically, when HaStack-agent receives the abnormal event, it immediately reports it to HaStack.
Operation E-3: after receiving the report of the abnormal event, the management-side device directly calls the Nova interface to bring the cloud computing virtual machine VM program back into operation.
Specifically, after HaStack receives the VM-internal abnormality event, it directly issues the HA command to Nova to trigger HA recovery.
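The guest-side heartbeat of operation E-1 can be probed with the QEMU guest agent's guest-ping command. The sketch below assumes the libvirt-python binding and a local qemu:///system connection; it is a generic probe of guest liveness, not the actual Qga protocol this system runs.

```python
import json
import libvirt        # libvirt-python binding
import libvirt_qemu

def guest_alive(domain_name, timeout=5):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        # guest-ping returns {"return": {}} while the agent inside the VM runs;
        # a blue-screened or hung guest stops answering and the call raises.
        reply = libvirt_qemu.qemuAgentCommand(
            dom, json.dumps({"execute": "guest-ping"}), timeout, 0)
        return json.loads(reply).get("return") == {}
    except libvirt.libvirtError:
        return False   # E-2 trigger: report upward; HaStack drives E-3 recovery
    finally:
        conn.close()
```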
Embodiment 6
As shown in Fig. 2, this embodiment provides a management method for a split-brain-preventing OpenStack virtual machine high-availability management-side device, comprising the following operations:
Operation A-1: check, via the collected health information, whether the cluster state is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation A-2;
Operation A-2: check the state reported by each computing node device over the management network; if normal, terminate this round of checking; otherwise go to operation A-3;
Operation A-3: based on the abnormal state reported by each computing node device over the management network, judge node by node whether handling is required; if not, exception handling for that computing node device ends and the flow returns to operation A-2; otherwise go to operation A-4;
Operation A-4: for a computing node device in an abnormal state that requires handling, check the state of the shared storage device connected to it; if the shared storage device is abnormal, use the Nova control module to keep the cloud computing virtual machine VM programs on that computing node device from running, and end; otherwise go to operation A-5;
Operation A-5: issue a Fencing request to the computing node device whose connected shared storage device is normal;
Operation A-6: issue a command to the Nova control module to trigger the cloud computing virtual machine VM programs on that computing node device to run.
Embodiment 7
Building on the method provided in embodiment 6, as shown in Fig. 3, after a Fencing request is issued to a computing node device whose connected shared storage device is normal, the following operations are further run:
Operation B-1: continuously listen for Fencing events reported by the computing node devices; once a message is received, go to operation B-2;
Operation B-2: check, via the collected health information, whether the cluster state is normal; if abnormal, trigger a cluster-abnormality alarm and end; if normal, go to operation B-3;
Operation B-3: check the network state reported by each computing node device over the management network; if normal, terminate this round of checking; otherwise go to operation B-4;
Operation B-4: based on the abnormal state reported by each computing node device over the management network, judge whether handling is required; if not, go to operation B-6; otherwise go to operation B-5;
Operation B-5: for a computing node device in an abnormal state that requires handling, check the state of the shared storage device connected to it; if the shared storage device is abnormal, no Fencing is needed and the flow goes to operation B-6 and ends; otherwise go to operation B-7;
Operation B-6: for scenarios where Fencing is not needed, issue a stop-Fencing request to the corresponding computing node device;
Operation B-7: for scenarios where Fencing is needed, issue an execute-Fencing request to the corresponding computing node device.
Embodiment 8
As shown in Fig. 4, this embodiment provides a management method for a split-brain-preventing OpenStack virtual machine high-availability computing node device, comprising the following operations:
Operation C-1: while the virtual machines VM continuously update and store the lock heartbeat, if the writes are normal no handling is needed; otherwise, once a lock-heartbeat write is abnormal, go to operation C-2;
Operation C-2: the Lock management module reports the storage-abnormality event to the management-side device and waits for the management-side device to return a handling result;
Operation C-3: if the management-side device returns a handling result within the specified time, go to operation C-5; otherwise go to operation C-4;
Operation C-4: if the management-side device does not return a handling result within the specified time, the Lock management module executes the Fencing operation, that is, it kills or isolates the cloud computing virtual machine VM programs of that computing node device;
Operation C-5: the Lock management module judges, according to the handling result returned by the management-side device, whether Fencing is needed.
Embodiment 9
Building on embodiment 8, the recovery process after a restart of the Lock management module's process comprises the following operations:
Operation D-1: when the Libvirt management module starts, register and obtain the lock heartbeat through the Lock management module; if registration fails, go to operation D-2;
Operation D-2: once lock-heartbeat registration fails, kill the cloud computing virtual machine VM programs of that computing node device;
Operation D-3: the Libvirt management module records every computing node device whose cloud computing virtual machine VM programs were killed, recording them in the Fencing log file;
Operation D-4: periodically check the Fencing log file; if an update is found, go to operation D-5;
Operation D-5: report the Fencing log files of all computing node devices to the management-side device; if the report fails, this round of handling ends and the report is retried next time; otherwise, after the report reaches the management-side device, the management-side device issues instructions for recovery.
Embodiment 10
Building on embodiments 8 and 9, the following recovery operations are performed after Fencing:
Operation E-1: the Qga inside the VM GuestOS keeps a continuous heartbeat with the high-availability computing node module of the computing node device; when the cloud computing virtual machine VM program fails, go to operation E-2;
Operation E-2: when the high-availability computing node module receives a report of the abnormal event, it reports it to the management-side device;
Operation E-3: after receiving the report of the abnormal event, the management-side device directly calls the Nova interface to bring the cloud computing virtual machine VM program back into operation.
The faults include a blue screen, hang, or crash of the computing node device on which the cloud computing virtual machine VM program runs.
Action and Effects of the Embodiments
The present invention is a secondary development based on a native OpenStack release. By integrating several key technologies, an independent split-brain-preventing OpenStack virtual machine high-availability system was developed around OpenStack. It removes the dependence of traditional HA solutions on IPMI-plane probing, hardware watchdogs, and the like, and achieves a complete, carrier-grade virtual machine high-availability (HA) technical solution; to this end the present invention provides an improved split-brain-preventing OpenStack virtual machine high-availability system.
In a cloud computing system, split-brain means that, in a high-availability (HA) system, when two connected control nodes or computing nodes lose contact with each other, a system that was one whole splits into two independent nodes, which then begin to contend for shared resources, leading to system confusion and data corruption. The improved split-brain-preventing OpenStack virtual machine high-availability management-side device and management method provided by the improvements of the present invention solve this problem.
According to the split-brain-preventing OpenStack virtual machine high-availability system provided by the embodiments, because the system has the high-availability module capable of running the high-availability management method, it detects in real time, through the series of operations A-1 to A-6, the state of the connected computing node devices and shared storage device. According to the type of abnormality learned (an abnormality of a computing node device or of the shared storage device, and specifically which of the management network plane, the storage network plane, and the business network plane of the management network is abnormal), it decides after judgment whether to perform a Fencing operation to shut down the cloud computing virtual machine VM programs of the abnormal computing node device, thereby guaranteeing the high availability of the cloud computing virtual machine VM programs of the computing node devices in the system.
Because the system has the high-availability computing node module capable of running the series of operations C-1 to C-5, it updates and stores the lock heartbeat of the Lock distributed read-write lock in real time, reports write failures encountered during updates to the management-side device in real time, and acts on the handling result returned by the management-side device, namely whether to fence (shut down or isolate) the cloud computing virtual machine VM programs of that computing node device. This refines the protection granularity of the Lock distributed read-write lock from the host level of the computing node device to the level of an individual virtual machine VM, enabling concurrent read-write protection for a single virtual machine.
Using the lock heartbeat to forbid multiple virtual machines from writing to the same disk at the same time fundamentally prevents split-brain from occurring.
The protection granularity of the Lock distributed read-write lock is refined from the host level of the computing node device to the level of an individual virtual machine VM, enabling concurrent read-write protection for a single virtual machine.
The self-developed, full-flow VM Fencing protection mechanism prevents virtual machines from being abnormally terminated when faults such as shared-storage abnormalities affect the underlying lock heartbeat.
In the process, an asynchronous notification mechanism solves the problem of HA VMs becoming unmanaged after a Lock restart, achieving automatic recovery.
Further, the HaStack service, self-developed and independent of native OpenStack, manages the whole HA scheduling. By integrating Etcd and Qga, HaStack achieves precise awareness of the health state of the three management network planes (the management, business, and storage network planes) of all underlying hosts, and of the internal running state of the virtual machines VM:
1. By tuning the heartbeat reporting period and messages, the fault points of a computing node device's physical planes are confirmed quickly, giving HaStack high-precision evidence for its decisions.
2. For the various abnormalities of the three management network planes of an individual computing node device, a configurable fault-to-action HA scheme lets users customize their own HA recovery policies (see the sketch after this list).
3. Integrated Qga health monitoring of the virtual machines VM triggers HA recovery immediately upon VM-internal faults such as a blue screen or hang, achieving self-healing.
4. Corresponding protection mechanisms are added for all kinds of cluster, storage, and network-link abnormalities.
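As a sketch of what such a configurable policy could look like (point 2 above), a matrix from interrupted-plane combinations to actions might be expressed as follows. The plane names come from the text; the chosen actions are illustrative defaults, not the shipped policy.

```python
POLICY_MATRIX = {
    frozenset(): "none",                              # all planes healthy
    frozenset({"management"}): "alarm-only",          # VMs still serving traffic
    frozenset({"business"}): "fence-and-restart",
    frozenset({"storage"}): "wait-for-storage",       # fencing would misfire
    frozenset({"management", "business"}): "fence-and-restart",
    frozenset({"business", "storage"}): "fence-and-restart",
    frozenset({"management", "business", "storage"}): "fence-and-restart",
}

def decide(interrupted_planes):
    # Unknown combinations fall back to an alarm for manual judgment.
    return POLICY_MATRIX.get(frozenset(interrupted_planes), "alarm-only")
```

Keeping the matrix as data rather than code is what makes the recovery policy user-configurable per deployment.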
The above embodiments are preferred cases of the present invention and are not intended to limit its scope of protection.

Claims (10)

  1. A split-brain-preventing OpenStack virtual machine high-availability system, characterized by comprising management-side devices, a management network, computing node devices, and a shared storage device,
    wherein at least two management-side devices communicate with each other through the management network to form a management cluster,
    the management-side devices and the computing node devices are communicatively connected through the management network,
    the computing node devices are connected to the shared storage device,
    each management-side device comprises:
    a Nova control module, comprising Nova's native virtual machine VM management processes, for performing life-cycle management of the virtual machines VM;
    a cluster management module for collecting health information of the cluster; and
    a high-availability module for performing high-availability management of all the computing node devices,
    the high-availability module running a high-availability management method comprising the following operations:
    operation A-1: checking, via the health information collected by the cluster management module, whether the cluster state is normal; if abnormal, triggering a cluster-abnormality alarm and ending; if normal, going to operation A-2;
    operation A-2: checking the state reported by each computing node device over the management network; if normal, terminating this round of checking; otherwise going to operation A-3;
    operation A-3: based on the abnormal state reported by each computing node device over the management network, judging node by node whether handling is required; if not, ending exception handling for that computing node device and returning to operation A-2; otherwise going to operation A-4;
    operation A-4: for a computing node device in an abnormal state that requires handling, checking the state of the shared storage device connected to it; if the shared storage device is abnormal, using the Nova control module to keep the cloud computing virtual machine VM programs running on that computing node device from running, and ending; otherwise going to operation A-5;
    operation A-5: issuing an isolation request to the computing node device whose connected shared storage device is normal;
    operation A-6: issuing a command to the Nova control module to trigger the cloud computing virtual machine VM programs on that computing node device to run,
    wherein, in addition to having cloud computing virtual machine VM programs installed, each computing node device further has:
    a Nova-compute module for directly responding to the management processes of the management-side devices to control the running state of the virtual machines VM, and for communicating with the Hypervisor API;
    a Libvirt management module for providing, on KVM, the management process of a standard Hypervisor API;
    a Lock management module, cooperating with the Libvirt management module, for updating and monitoring the lock heartbeats of the shared storage device; and
    a high-availability computing node module, at least for reporting the lock heartbeat to the management-side devices,
    wherein the high-availability computing node module runs a method comprising the following operations:
    operation C-1: while the virtual machines VM continuously update and store the lock heartbeat, if the writes are normal, no handling is needed; otherwise, once a lock-heartbeat write is abnormal, going to operation C-2;
    operation C-2: the Lock management module reporting the storage-abnormality event to the management-side device and waiting for the management-side device to return a handling result;
    operation C-3: if the management-side device returns a handling result within the specified time, going to operation C-5; otherwise going to operation C-4;
    operation C-4: if the management-side device does not return a handling result within the specified time, the Lock management module executing the isolation operation;
    operation C-5: the Lock management module judging, according to the handling result returned by the management-side device, whether isolation is needed.
  2. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 1, characterized in that:
    after the management-side device issues an isolation request to a computing node device whose connected shared storage device is in a normal state, the high-availability module further runs the following operations:
    operation B-1: continuously listening for isolation events reported by the computing node devices; once a message is received, going to operation B-2;
    operation B-2: checking, via the health information collected by the cluster management module, whether the cluster state is normal; if abnormal, triggering a cluster-abnormality alarm and ending; if normal, going to operation B-3;
    operation B-3: checking the network state reported by each computing node device over the management network; if normal, terminating this round of checking; otherwise going to operation B-4;
    operation B-4: based on the abnormal state reported by each computing node device over the management network, judging whether handling is required; if not, going to operation B-6; otherwise going to operation B-5;
    operation B-5: for a computing node device in an abnormal state that requires handling, checking the state of the shared storage device connected to it; if the shared storage device is abnormal, no isolation is needed and the flow goes to operation B-6 and ends; otherwise going to operation B-7;
    operation B-6: for scenarios where isolation is not needed, issuing a stop-isolation request to the corresponding computing node device;
    operation B-7: for scenarios where isolation is needed, issuing an execute-isolation request to the corresponding computing node device,
    and the recovery process after a restart of the Lock management module's process comprises the following operations:
    operation D-1: when the Libvirt management module starts, registering and obtaining the lock heartbeat through the Lock management module; if registration fails, going to operation D-2;
    operation D-2: once lock-heartbeat registration fails, shutting down or isolating the cloud computing virtual machine VM programs of that computing node device;
    operation D-3: the Libvirt management module recording every computing node device whose cloud computing virtual machine VM programs were shut down or isolated, recording them in an isolation log file;
    operation D-4: periodically checking the isolation log file; if an update is found, going to operation D-5;
    operation D-5: reporting the isolation log files of all computing node devices to the management-side device; if the report fails, ending this round of handling and retrying next time; otherwise, after the report reaches the management-side device, the management-side device issuing instructions for recovery.
  3. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 1, characterized in that:
    after the report reaches the management-side device, the management-side device performs the following specific operations:
    operation D-6: the management-side device receiving the isolation log file reported by a computing node device and judging whether automatic handling is required; if so, going to operation D-8; if not, going to operation D-7;
    operation D-7: the management-side device raising an alarm and leaving the case to manual handling;
    operation D-8: the management-side device automatically handling the isolated cloud computing virtual machine VM programs, calling the Nova interface to bring the cloud computing virtual machine VM programs back into operation.
  4. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 1, characterized in that:
    the shared storage device is managed and run by a CephFS or NFS file management program,
    the virtual machine VM management processes include Nova-api, Nova-conductor, or Nova-scheduler,
    and the cluster management module includes Etcd or Consul.
  5. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 1, characterized in that:
    the management network comprises:
    a management network plane for interfacing with the management-side devices and providing management services;
    a storage network plane for interfacing with the back-end shared storage device and providing storage services;
    and a business network plane for interfacing with the computing node devices and providing access services for the cloud computing virtual machines VM.
  6. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 5, characterized in that:
    the network state reported by a computing node device over the management network in operation A-2 is judged normal only when the management network plane, the storage network plane, and the business network plane of the management network are all normal; otherwise, the corresponding handling is performed according to which one or more of the management network plane, the storage network plane, and the business network plane of the abnormal computing node device is interrupted.
  7. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 2, characterized in that:
    the management network comprises:
    a management network plane for interfacing with the management-side devices and providing management services;
    a storage network plane for interfacing with the back-end shared storage device and providing storage services;
    and a business network plane for interfacing with the computing node devices and providing access services for the virtual machines VM,
    and correspondingly, the network state reported by a computing node device over the management network in operation B-3 is judged normal only when the management network plane, the storage network plane, and the business network plane of the management network are all normal; otherwise, the corresponding isolation handling is performed according to which one or more of the management network plane, the storage network plane, and the business network plane of the abnormal computing node device is interrupted.
  8. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 1, characterized in that:
    the cloud computing virtual machine VM program has a VM GuestOS operating system, which performs the following recovery operations after isolation:
    operation E-1: the Qga inside the VM GuestOS keeping a continuous heartbeat with the high-availability computing node module of the computing node device; when the cloud computing virtual machine VM program fails, going to operation E-2;
    operation E-2: when the high-availability computing node module receives a report of the abnormal event, reporting it to the management-side device;
    operation E-3: after receiving the report of the abnormal event, the management-side device directly calling the Nova interface to bring the cloud computing virtual machine VM program back into operation.
  9. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 8, characterized in that:
    the faults include a blue screen, hang, or crash of the computing node device on which the cloud computing virtual machine VM program runs.
  10. The split-brain-preventing OpenStack virtual machine high-availability system according to claim 2, characterized in that:
    after the report reaches the management-side device, the management-side device performs the following specific operations:
    operation D-6: the management-side device receiving the isolation log file reported by a computing node device and judging whether automatic handling is required; if so, going to operation D-8; if not, going to operation D-7;
    operation D-7: the management-side device raising an alarm and leaving the case to manual handling;
    operation D-8: the management-side device automatically handling the isolated cloud computing virtual machine VM programs, calling the Nova interface to bring the cloud computing virtual machine VM programs back into operation.