CN113726899A - Construction method of available micro data center for colleges and universities based on OpenStack

Construction method of available micro data center for colleges and universities based on OpenStack

Info

Publication number
CN113726899A
CN113726899A (application CN202111022871.XA; granted as CN113726899B)
Authority
CN
China
Prior art keywords
availability
openstack
management
data center
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111022871.XA
Other languages
Chinese (zh)
Other versions
CN113726899B (en)
Inventor
李雷孝
李杰
高昊昱
康泽锋
马志强
万剑雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Technology
Original Assignee
Inner Mongolia University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Technology filed Critical Inner Mongolia University of Technology
Priority to CN202111022871.XA
Publication of CN113726899A
Application granted
Publication of CN113726899B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1034 - Reaction to server failures by a load balancer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 - Hypervisors; Virtual machine monitors
    • G06F9/45558 - Hypervisor-specific management and integration aspects
    • G06F2009/45562 - Creating, deleting, cloning virtual machine instances
    • G06F2009/45575 - Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45587 - Isolation or security of virtual machine instances
    • G06F2009/45595 - Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of micro data center construction, and particularly relates to an OpenStack-based construction method of an available micro data center for colleges and universities, comprising the following steps: S1, constructing a cloud platform hyper-converged architecture, which comprises the hardware facility foundation, converged-architecture resource pooling, and automated resource scheduling management; S2, constructing high availability for the database persistence layer, the message queue, the storage layer, and the network layer, respectively; S3, constructing a reliability evaluation model; and S4, verifying high availability. The invention provides a scheme for constructing a decentralized, highly available micro data center cloud platform for colleges and universities based on OpenStack, so that the service availability of the OpenStack cloud platform reaches the high-availability standard and environmental faults of the cloud platform can be identified in real time. The reliability evaluation model and test cases further verify the fault tolerance, reliability, and high availability of the cluster; service availability reaches 99.99%.

Description

Construction method of available micro data center for colleges and universities based on OpenStack
Technical Field
The invention belongs to the technical field of micro data center construction, and particularly relates to a construction method of an available micro data center for colleges and universities based on OpenStack.
Background
Except for the HP Helion scheme, existing OpenStack HA schemes share the characteristic of centralized control or management nodes, which is clearly unsuitable for building a micro data center at small scale. The control nodes are the core through which the whole platform provides services; once a control node fails, the availability of the cloud platform drops sharply, so the reliability of a small number of control nodes becomes the high-availability weak point of the whole cloud platform. The root cause is that these schemes separate control nodes from compute and storage nodes: a physical server installs only control components (as a control node) or only compute or storage components (as a compute or storage node), so the cluster's resistance to single-point failure does not improve as nodes are added. Although HP's Helion scheme merges the control and compute roles, it is not combined with a distributed storage scheme and cannot provide redundant backup of image storage; that is, it cannot achieve high storage availability.
Disclosure of Invention
Aiming at the technical problem of insufficient cloud service reliability in college and university data centers, the invention provides an OpenStack-based construction method for an available micro data center for colleges and universities, with high fault tolerance, high availability, and strong reliability.
In order to solve the technical problems, the invention adopts the technical scheme that:
an OpenStack-based construction method for a college and university available micro data center comprises the following steps:
s1, constructing a cloud platform super-fusion framework, wherein the cloud platform super-fusion framework comprises a hardware facility foundation, fusion framework resource pooling and resource scheduling management automation;
s2, respectively constructing a database with high availability of a persistent layer, a message queue, a storage layer and a network layer;
s3, constructing a reliability evaluation model;
and S4, verifying that the product is available.
A switch in the hardware facility foundation adopts port stacking; network virtualization in the converged-architecture resource pooling adopts a hyper-converged node configuration on the network cards; and elastic scaling in the automated resource scheduling management increases the scalability of the OpenStack cloud platform.
The method for constructing database persistence layer high availability in step S2 is as follows: the state of the database on each node and the data it holds must be consistent with the databases of the other nodes in the cluster, and a Galera cluster is adopted to realize a MariaDB multi-master mode.
The Galera cluster includes three MariaDB nodes that are peers, each acting as a master to the others.
Message queue high availability in S2 uses the RabbitMQ component to implement the Advanced Message Queuing Protocol.
The method for constructing storage layer high availability in step S2 is as follows: back-end storage uses the OpenStack Cinder component interfacing with Ceph, an extensible, software-defined, open-source storage system; the back end adopts Ceph's FileStore storage mode, in which a journal is written before the data itself, turning a single write request into two write operations.
The method for constructing network layer high availability in step S2 is as follows: the network layer adopts Keepalived + Haproxy. Keepalived implements a TCP/IP-layer health check mechanism within the OSI seven-layer model, providing server health checking and fault node isolation; Keepalived is installed on each hyper-converged node in the cloud platform hyper-converged architecture, making it a Keepalived node, and the Keepalived nodes communicate using the VRRP protocol defined by Keepalived. Haproxy is free, open-source software written in C and is used for load balancing.
The reliability evaluation model in S3 comprises the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, identity authentication Keystone, volume management Cinder, network management Neutron, image management Glance, computation management Nova, and SDEP blocks representing the dependency relationships among components. The physical server hardware HW is connected with the operating system OS; the operating system OS is connected with the RabbitMQ component through the storage system Ceph and the MariaDB node, respectively; the RabbitMQ component is connected with network management Neutron through identity authentication Keystone and volume management Cinder, respectively; and network management Neutron is connected with computation management Nova through image management Glance. The physical server hardware HW points to the operating system OS through an SDEP block, the MariaDB node points to identity authentication Keystone through an SDEP block, identity authentication Keystone points to network management Neutron and image management Glance through SDEP blocks, respectively, and image management Glance points to computation management Nova through an SDEP block. An SDEP block indicates which component must be repaired first in the event of a failure: whether the component an SDEP block points to can operate depends on the state of the component it points from.
Verifying high availability in S4 includes testing software-level errors, testing hardware-level errors, and testing network-level errors;
the method for testing software-level errors comprises the following steps:
step one, simulating the occurrence of an error by stopping the Nova service;
step two, an administrator logs in to OpenStack;
step three, enumerating the Nova compute instances and measuring the response time;
step four, restoring the Nova service;
the method for testing hardware-level errors comprises the following steps:
step one, simulating the occurrence of an error by restarting any node server;
step two, measuring the response time;
the method for testing network-level errors comprises the following steps:
step one, simulating the occurrence of an error by unplugging the network cable of any node server;
step two, measuring the response time using the ping command.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a scheme suitable for constructing a micro data center decentralized high-availability cloud platform in colleges and universities based on OpenStack, so that the service availability of the OpenStack cloud platform reaches a high-availability standard, and the environmental fault of the cloud platform can be identified in real time. The reliability evaluation model and the test cases are utilized to further verify the fault tolerance, reliability and high availability of the cluster, and the availability of the service can reach 99.99%.
Drawings
FIG. 1 is a schematic diagram of the cloud platform hyper-converged architecture of the present invention;
FIG. 2 is a schematic diagram of a Galera cluster according to the present invention;
FIG. 3 is a schematic diagram of the response to a user request under Keepalived + Haproxy in accordance with the present invention;
fig. 4 is a schematic diagram of the reliability evaluation model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly: for example, as a fixed, detachable, or integral connection; as a mechanical or electrical connection; or as a direct connection, an indirect connection through an intervening medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
An OpenStack-based construction method for an available micro data center for colleges and universities comprises the following steps:
S1, constructing a cloud platform hyper-converged architecture, which, as shown in FIG. 1, comprises the hardware facility foundation, converged-architecture resource pooling, and automated resource scheduling management;
S2, constructing high availability for the database persistence layer, the message queue, the storage layer, and the network layer, respectively;
S3, constructing a reliability evaluation model;
and S4, verifying high availability.
Furthermore, a switch in the hardware facility foundation adopts port stacking; network virtualization in the converged-architecture resource pooling adopts a hyper-converged node configuration on the network cards; and elastic scaling in the automated resource scheduling management increases the scalability of the OpenStack cloud platform.
Further, the method for constructing database persistence layer high availability in S2 is as follows: the state of the database on each node and the data it holds must be consistent with the databases of the other nodes in the cluster, and a Galera cluster is adopted to realize a MariaDB multi-master mode.
Further, the Galera cluster includes three MariaDB nodes that are peers, each acting as a master to the others. A client can connect to any of the MariaDB nodes for reads and writes. On a read, the data returned by each node is the same; on a write, once data is written to one node, the cluster synchronizes it to the other nodes, as sketched below.
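A minimal sketch of this multi-master behavior (the node addresses, credentials, and table are hypothetical placeholders, not values from the patent): a client writes through one MariaDB node and reads the same row back from another.

```python
import pymysql

# Hypothetical addresses of the three peer MariaDB/Galera nodes.
NODES = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]

def connect(host):
    # Credentials and schema are placeholders for illustration.
    return pymysql.connect(host=host, user="openstack", password="secret",
                           database="demo", autocommit=True)

# Write through the first node.
conn = connect(NODES[0])
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS ha_check "
            "(id INT PRIMARY KEY, note VARCHAR(64))")
cur.execute("REPLACE INTO ha_check VALUES (1, 'written via node 1')")
conn.close()

# Read it back from a different peer: Galera replicates synchronously,
# so every node serves the same data.
conn = connect(NODES[1])
cur = conn.cursor()
cur.execute("SELECT note FROM ha_check WHERE id = 1")
print(cur.fetchone())
conn.close()
```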
Further, message queue high availability in S2 uses the RabbitMQ component to implement the Advanced Message Queuing Protocol. The main function of the RabbitMQ component is to synchronize the state information of operations and services on the cloud platform among components, and it is responsible for communication among the OpenStack components. RabbitMQ supports high availability and strong scalability: a node newly joining the cluster only needs to specify some cluster information (such as the cluster name, port, and master node IP address) to join dynamically, and existing nodes need not restart or modify their configuration files.
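The client-side effect of a RabbitMQ cluster can be sketched as follows (broker addresses, credentials, and the queue name are assumptions for illustration): pika accepts a list of connection parameters and tries the next broker if one node is unreachable.

```python
import pika

# Hypothetical addresses of the RabbitMQ cluster nodes.
credentials = pika.PlainCredentials("openstack", "secret")
params = [pika.ConnectionParameters(host=h, credentials=credentials)
          for h in ("192.168.1.11", "192.168.1.12", "192.168.1.13")]

# Passing a sequence lets the client fail over to another broker
# when the first one cannot be reached.
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Durable queue so messages survive a broker restart; the queue
# name is only illustrative.
channel.queue_declare(queue="ha_check", durable=True)
channel.basic_publish(exchange="", routing_key="ha_check", body=b"ping")
connection.close()
```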
Further, the method for constructing storage layer high availability in S2 is as follows: back-end storage uses the OpenStack Cinder component interfacing with Ceph, an extensible, software-defined, open-source storage system; the back end adopts Ceph's FileStore storage mode, in which a journal is written before the data itself, turning a single write request into two write operations.
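To make the storage layer concrete, the sketch below exercises the same Ceph back end through the librados Python bindings (the pool name "volumes", the conffile path, and the object key are assumptions, not values from the patent):

```python
import rados

# Connect using the cluster's configuration file (the conventional
# default path; adjust for the actual deployment).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
print("cluster stats:", cluster.get_cluster_stats())

# "volumes" is assumed here to be the pool backing Cinder.
ioctx = cluster.open_ioctx("volumes")
ioctx.write_full("ha_check", b"written through librados")
print(ioctx.read("ha_check"))
ioctx.close()
cluster.shutdown()
```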
Further, the method for constructing network layer high availability in S2 is as follows: the network layer adopts Keepalived + Haproxy. Keepalived implements a TCP/IP-layer health check mechanism within the OSI seven-layer model, providing server health checking and fault node isolation. Keepalived is installed on each hyper-converged node in the cloud platform hyper-converged architecture, making it a Keepalived node; the Keepalived nodes communicate using the VRRP protocol defined by Keepalived, and the cluster elects a master node from among them according to the election algorithm provided by Keepalived, at which point the virtual IP is placed on that node. The other nodes periodically send heartbeat check packets to the master node and wait for its response; if the master node does not acknowledge the heartbeat within a certain time, the other nodes consider it failed, and the virtual IP is migrated to a newly elected master node.
Keepalived supports multiple routing modes, and the choice of mode affects how the virtual IP relays client requests and how the real servers respond. Keepalived supports the following two routing modes: NAT routing and DR routing. NAT routing carries a single-point-of-failure risk: if the load balancer or forwarder fails, the cluster service may become unavailable, so it is unsuitable for a hyper-converged cluster. This scheme therefore adopts DR routing. Its difference from NAT is that the back-end server returns the result of the user's request directly to the user, no longer passing through the forwarder or load balancer. This weakens the role of the forwarder and can even eliminate a dedicated forwarder. Because no forwarding is performed, the response time to user requests is shortened, latency is reduced, and the user experience is better. Installing Keepalived on every hyper-converged server prevents single-point failures, guarantees the high-availability property, and shortens the network latency of responses.
Haproxy is free, open-source software written in C and is used for load balancing; combined with Keepalived, it achieves cluster high availability. Which back-end server a user request is sent to is decided by a load balancing algorithm in Haproxy, and Haproxy only needs to specify the virtual IP provided by Keepalived in its configuration file. Among the common Haproxy policies: Roundrobin has the servers take turns receiving requests in list order (server1, server2, server3), which can cause the instance address accessed to differ from the address the user requested; Leastconn selects the server with the fewest active connections, but since the cloud platform is highly available, it cannot choose among multiple servers with identical load. The Source policy hashes the source IP, so a fixed IP is always bound to one of the servers, and the binding is not migrated unless that server goes down. After comparison, the Source policy was finally selected to ensure that a user's requests to the cloud platform are always sent to the same server, avoiding the problem of the virtual machine instance's server address being inconsistent with the user's request address; a sketch of this hashing follows. The response to a user request when Keepalived is combined with Haproxy is shown in FIG. 3: three servers all run Keepalived + Haproxy but are exposed externally as a single virtual IP. In normal use, a virtual IP bound to one of the servers is created for the user, and the servers use VRRP for liveness detection among themselves. If that server fails to respond, another server takes over the virtual IP without the user noticing and continues to provide the corresponding services, ensuring high availability.
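The idea behind the Source policy can be illustrated in a few lines (a simplification for illustration, not HAProxy's exact hash function): the client address is hashed onto a stable server list, so the same client always reaches the same back end while different clients spread across the pool.

```python
import hashlib

BACKENDS = ["server1", "server2", "server3"]  # hypothetical back-end names

def pick_backend(source_ip: str) -> str:
    # Hash the client address and map it onto the server list, so a
    # given client is always routed to the same back end as long as
    # the pool is unchanged (the effect of the "source" policy).
    digest = hashlib.md5(source_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

for ip in ("10.0.0.5", "10.0.0.6", "10.0.0.5"):
    print(ip, "->", pick_backend(ip))  # the repeated client maps to the same server
```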
Further, the reliability evaluation model in S3 comprises the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, identity authentication Keystone, volume management Cinder, network management Neutron, image management Glance, computation management Nova, and SDEP blocks indicating the dependency relationships among components. The physical server hardware HW is connected with the operating system OS; the operating system OS is connected with the RabbitMQ component through the storage system Ceph and the MariaDB node, respectively; the RabbitMQ component is connected with network management Neutron through identity authentication Keystone and volume management Cinder, respectively; and network management Neutron is connected with computation management Nova through image management Glance. The physical server hardware HW points to the operating system OS through an SDEP block, the MariaDB node points to identity authentication Keystone through an SDEP block, identity authentication Keystone points to network management Neutron and image management Glance through SDEP blocks, respectively, and image management Glance points to computation management Nova through an SDEP block. An SDEP block indicates which component must be repaired first in the event of a failure: whether the component an SDEP block points to can operate depends on the state of the component it points from. As can be seen from FIG. 4, the operating system OS depends on the physical server hardware HW, and all the subsequent OpenStack components depend on the operating system OS. The MariaDB database component is the basis of identity authentication Keystone, volume management Cinder, and the network, image, and computation components; Ceph provides the underlying data storage service for the other components; and the image management component Glance provides the images that computation management Nova uses to create instances for end users.
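The repair ordering the SDEP edges encode can be reproduced with a topological sort over the dependency graph. The edge list below restates the dependencies described above (Cinder's predecessor follows the statement that MariaDB, via Keystone, underlies volume management; treat that edge as an assumption of this sketch):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each component maps to the set of components that must be working
# (and repaired) before it, read off the SDEP edges of Fig. 4.
deps = {
    "OS":       {"HW"},
    "Ceph":     {"OS"},
    "MariaDB":  {"OS"},
    "RabbitMQ": {"OS"},
    "Keystone": {"MariaDB"},
    "Cinder":   {"Keystone"},   # assumption, see lead-in
    "Neutron":  {"Keystone"},
    "Glance":   {"Keystone"},
    "Nova":     {"Glance"},
}

# A valid repair order: every component appears after its prerequisites.
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['HW', 'OS', 'Ceph', 'MariaDB', 'RabbitMQ', 'Keystone', ...]
```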
Further, the invention implements the high-availability scheme proposed herein on three machines, each with 16 CPUs, 251 GB of memory, and 64 TB of storage, and performs high-availability verification on the scheme. Test cases are designed from the following three angles: testing software-level errors, testing hardware-level errors, and testing network-level errors.
Further, the method for testing software-level errors, shown in Table 1, comprises the following steps (a sketch of step three follows the table):
step one, simulating the occurrence of an error by stopping the Nova service;
step two, an administrator logs in to OpenStack;
step three, enumerating the Nova compute instances and measuring the response time;
step four, restoring the Nova service.
TABLE 1 (provided as an image in the original publication)
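A sketch of this test using the OpenStack Python SDK (the cloud name in clouds.yaml is an assumption): it performs step three, enumerating the Nova instances and measuring the response time, the metric recorded in Table 1.

```python
import time
import openstack

# "uni-cloud" is a hypothetical entry in clouds.yaml.
conn = openstack.connect(cloud="uni-cloud")

start = time.monotonic()
servers = list(conn.compute.servers())   # step three: enumerate Nova instances
elapsed = time.monotonic() - start

print(f"{len(servers)} instances listed in {elapsed:.2f}s")
# With Nova stopped on one node (step one), Keepalived/Haproxy should
# route the request to a surviving node, so the call still succeeds.
```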
Further, the method for testing hardware-level errors, shown in Table 2, comprises the following steps:
step one, simulating the occurrence of an error by restarting any node server;
step two, measuring the response time.
TABLE 2 (provided as an image in the original publication)
Further, the method for testing network-level errors, shown in Table 3, comprises the following steps (a sketch follows the table):
step one, simulating the occurrence of an error by unplugging the network cable of any node server;
step two, measuring the response time using the ping command.
TABLE 3 (provided as an image in the original publication)
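A sketch of the ping-based response time measurement used in the hardware- and network-level cases (the probed address is a hypothetical virtual IP, not a value from the patent): it probes the cluster until the address answers again after the simulated failure.

```python
import subprocess
import time

VIP = "192.168.1.100"  # hypothetical virtual IP of the cluster

def ping_once(host: str) -> bool:
    # One ICMP echo with a 1-second timeout (Linux ping flags).
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

start = time.monotonic()
while not ping_once(VIP):
    pass  # keep probing until the virtual IP answers again
print(f"service reachable after {time.monotonic() - start:.1f}s")
```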
Testing with these cases shows that the Nova component recovers to normal within 5 seconds and switches seamlessly to other nodes when a fault occurs. A guaranteed Service Level Agreement (SLA) standard provides a measure of cloud platform availability; substituting the experimental test results into the SLA formula shows that the cloud platform's service availability reaches 99.99%, satisfying service level agreement grade II. This further verifies the high availability of the present solution.
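For reference, the availability arithmetic behind the 99.99% figure is straightforward:

```python
# Availability = uptime / (uptime + downtime).
# At 99.99% ("four nines") the permitted downtime per year is:
availability = 0.9999
minutes_per_year = 365 * 24 * 60
downtime = (1 - availability) * minutes_per_year
print(f"allowed downtime: {downtime:.1f} minutes/year")  # ~52.6 minutes
```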
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention, and all such changes fall within the protection scope of the present invention.

Claims (9)

1. A construction method of an OpenStack-based available micro data center for colleges and universities, characterized by comprising the following steps:
S1, constructing a cloud platform hyper-converged architecture, which comprises the hardware facility foundation, converged-architecture resource pooling, and automated resource scheduling management;
S2, constructing high availability for the database persistence layer, the message queue, the storage layer, and the network layer, respectively;
S3, constructing a reliability evaluation model;
and S4, verifying high availability.
2. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: a switch in the hardware facility foundation adopts port stacking; network virtualization in the converged-architecture resource pooling adopts a hyper-converged node configuration on the network cards; and elastic scaling in the automated resource scheduling management increases the scalability of the OpenStack cloud platform.
3. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: the method for constructing database persistence layer high availability in step S2 is as follows: the state of the database on each node and the data it holds must be consistent with the databases of the other nodes in the cluster, and a Galera cluster is adopted to realize a MariaDB multi-master mode.
4. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 3, characterized in that: the Galera cluster includes three MariaDB nodes that are peers, each acting as a master to the others.
5. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: message queue high availability in S2 uses the RabbitMQ component to implement the Advanced Message Queuing Protocol.
6. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: the method for constructing storage layer high availability in step S2 is as follows: back-end storage uses the OpenStack Cinder component interfacing with Ceph, an extensible, software-defined, open-source storage system; the back end adopts Ceph's FileStore storage mode, in which a journal is written before the data itself, turning a single write request into two write operations.
7. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: the method for constructing network layer high availability in step S2 is as follows: the network layer adopts Keepalived + Haproxy; Keepalived implements a TCP/IP-layer health check mechanism within the OSI seven-layer model, providing server health checking and fault node isolation; Keepalived is installed on each hyper-converged node in the cloud platform hyper-converged architecture, making it a Keepalived node, and the Keepalived nodes communicate using the VRRP protocol defined by Keepalived; Haproxy is free, open-source software written in C and is used for load balancing.
8. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: the reliability evaluation model in S3 comprises the physical server hardware HW, the operating system OS, the storage system Ceph, the MariaDB node, the RabbitMQ component, identity authentication Keystone, volume management Cinder, network management Neutron, image management Glance, computation management Nova, and SDEP blocks representing the dependency relationships among components; the physical server hardware HW is connected with the operating system OS; the operating system OS is connected with the RabbitMQ component through the storage system Ceph and the MariaDB node, respectively; the RabbitMQ component is connected with network management Neutron through identity authentication Keystone and volume management Cinder, respectively; network management Neutron is connected with computation management Nova through image management Glance; the physical server hardware HW points to the operating system OS through an SDEP block, the MariaDB node points to identity authentication Keystone through an SDEP block, identity authentication Keystone points to network management Neutron and image management Glance through SDEP blocks, respectively, and image management Glance points to computation management Nova through an SDEP block; an SDEP block indicates which component must be repaired first in the event of a failure, and whether the component an SDEP block points to can operate depends on the state of the component it points from.
9. The construction method of an OpenStack-based available micro data center for colleges and universities according to claim 1, characterized in that: verifying high availability in S4 includes testing software-level errors, testing hardware-level errors, and testing network-level errors;
the method for testing software-level errors comprises the following steps:
step one, simulating the occurrence of an error by stopping the Nova service;
step two, an administrator logs in to OpenStack;
step three, enumerating the Nova compute instances and measuring the response time;
step four, restoring the Nova service;
the method for testing hardware-level errors comprises the following steps:
step one, simulating the occurrence of an error by restarting any node server;
step two, measuring the response time;
the method for testing network-level errors comprises the following steps:
step one, simulating the occurrence of an error by unplugging the network cable of any node server;
step two, measuring the response time using the ping command.
CN202111022871.XA 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack Active CN113726899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111022871.XA CN113726899B (en) 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111022871.XA CN113726899B (en) 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack

Publications (2)

Publication Number Publication Date
CN113726899A (en) 2021-11-30
CN113726899B CN113726899B (en) 2022-10-04

Family

ID=78680754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111022871.XA Active CN113726899B (en) 2021-09-01 2021-09-01 Construction method of available micro data center for colleges and universities based on OpenStack

Country Status (1)

Country Link
CN (1) CN113726899B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827148A (en) * 2022-04-28 2022-07-29 北京交通大学 Cloud security computing method and device based on cloud fault-tolerant technology and storage medium
CN116049136A (en) * 2022-12-21 2023-05-02 广东天耘科技有限公司 Cloud computing platform-based MySQL cluster deployment method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150295844A1 (en) * 2012-12-03 2015-10-15 Hewlett-Packard Development Company, L.P. Asynchronous framework for management of iaas
CN106104460A (en) * 2014-03-06 2016-11-09 国际商业机器公司 Reliability in distributed memory system strengthens
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 A kind of container dispositions method and framework based on openstack
CN110750334A (en) * 2019-10-25 2020-02-04 北京计算机技术及应用研究所 Network target range rear-end storage system design method based on Ceph
CN111290839A (en) * 2020-05-09 2020-06-16 南京江北新区生物医药公共服务平台有限公司 IAAS cloud platform system based on openstack
CN111444020A (en) * 2020-03-31 2020-07-24 中国科学院计算机网络信息中心 Super-fusion computing system architecture and fusion service platform
CN112615666A (en) * 2020-12-19 2021-04-06 河南方达空间信息技术有限公司 Micro-service high-availability deployment method based on RabbitMQ and HAproxy

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150295844A1 (en) * 2012-12-03 2015-10-15 Hewlett-Packard Development Company, L.P. Asynchronous framework for management of iaas
CN106104460A (en) * 2014-03-06 2016-11-09 国际商业机器公司 Reliability in distributed memory system strengthens
CN108462746A (en) * 2018-03-14 2018-08-28 广州西麦科技股份有限公司 A kind of container dispositions method and framework based on openstack
CN110750334A (en) * 2019-10-25 2020-02-04 北京计算机技术及应用研究所 Network target range rear-end storage system design method based on Ceph
CN111444020A (en) * 2020-03-31 2020-07-24 中国科学院计算机网络信息中心 Super-fusion computing system architecture and fusion service platform
CN111290839A (en) * 2020-05-09 2020-06-16 南京江北新区生物医药公共服务平台有限公司 IAAS cloud platform system based on openstack
CN112615666A (en) * 2020-12-19 2021-04-06 河南方达空间信息技术有限公司 Micro-service high-availability deployment method based on RabbitMQ and HAproxy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
唐飞雄: "Implementation case of a high-availability private cloud based on OpenStack", Computer Systems & Applications (计算机系统应用) *
徐鹏: "Research on private cloud construction and high availability for small and medium-sized enterprises based on OpenStack", China Masters' Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 (信息科技辑)) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827148A (en) * 2022-04-28 2022-07-29 北京交通大学 Cloud security computing method and device based on cloud fault-tolerant technology and storage medium
CN114827148B (en) * 2022-04-28 2023-01-03 北京交通大学 Cloud security computing method and device based on cloud fault-tolerant technology and storage medium
CN116049136A (en) * 2022-12-21 2023-05-02 广东天耘科技有限公司 Cloud computing platform-based MySQL cluster deployment method and system
CN116049136B (en) * 2022-12-21 2023-07-28 广东天耘科技有限公司 Cloud computing platform-based MySQL cluster deployment method and system

Also Published As

Publication number Publication date
CN113726899B (en) 2022-10-04

Similar Documents

Publication Publication Date Title
US11237927B1 (en) Resolving disruptions between storage systems replicating a dataset
CN110750334B (en) Ceph-based network target range rear end storage system design method
AU2019203861B2 (en) System and method for ending view change protocol
US9128626B2 (en) Distributed virtual storage cloud architecture and a method thereof
CN107734026B (en) Method, device and equipment for designing network additional storage cluster
US20220091771A1 (en) Moving Data Between Tiers In A Multi-Tiered, Cloud-Based Storage System
EP3961365A1 (en) Synchronously replicating datasets and other managed objects to cloud-based storage systems
US20120079090A1 (en) Stateful subnet manager failover in a middleware machine environment
US8612561B2 (en) Virtual network storage system, network storage device and virtual method
CN108200124B (en) High-availability application program architecture and construction method
CN113726899B (en) Construction method of available micro data center for colleges and universities based on OpenStack
US8671218B2 (en) Method and system for a weak membership tie-break
US20160057009A1 (en) Configuration of peered cluster storage environment organized as disaster recovery group
CN102088490B (en) Data storage method, device and system
US20130111187A1 (en) Data read and write method and apparatus, and storage system
CN112000635A (en) Data request method, device and medium
CN111431980B (en) Distributed storage system and path switching method thereof
CN107463339B (en) NAS storage system
CN104811476A (en) Highly-available disposition method facing application service
US10929041B1 (en) Block-storage service supporting multi-attach
US10990464B1 (en) Block-storage service supporting multi-attach and health check failover mechanism
CN108512753B (en) Method and device for transmitting messages in cluster file system
CN110348826A (en) Strange land disaster recovery method, system, equipment and readable storage medium storing program for executing mostly living
CN110022333A (en) The communication means and device of distributed system
CN116781564B (en) Network detection method, system, medium and electronic equipment of container cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant