CN116107707A - Data base engine platform and use method - Google Patents

Data base engine platform and use method

Info

Publication number
CN116107707A
Authority
CN
China
Prior art keywords
component
network
virtual
virtualization
virtual machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310051167.XA
Other languages
Chinese (zh)
Inventor
马跃
周峰屹
赵梓行
李志强
龙俊伯
王子怡
李莫
涂武强
曹洁
刘健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siling Robot Technology Harbin Co ltd
Huaneng Jilin Power Generation Co ltd
Original Assignee
Siling Robot Technology Harbin Co ltd
Huaneng Jilin Power Generation Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siling Robot Technology Harbin Co ltd and Huaneng Jilin Power Generation Co ltd
Priority to CN202310051167.XA
Publication of CN116107707A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/465 Distributed object oriented systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a data base engine platform and a using method thereof, belonging to the technical field of computers, and comprising: a computer virtualization module used for building virtual computer hardware; a network virtualization module used for building a virtual network; a network isolation module used for separately interfacing the internal local area network and the external service network; a high-availability system module used for building a high-availability system; and a virtual machine operation and maintenance module used for maintaining the operation of the virtual machines. Further, the computer virtualization module comprises a CPU virtualization component, a memory virtualization component and a hard disk I/O virtualization component, wherein the CPU virtualization component integrates the CPUs of the physical servers into one large CPU pool and allocates portions of it to virtual machines; the memory virtualization component dynamically distributes the memory of the physical servers to a plurality of virtual machines; and the hard disk I/O virtualization component multiplexes limited peripheral resources.

Description

Data base engine platform and use method
Technical Field
The invention relates to a data base engine platform and a use method thereof, belonging to the technical field of computers.
Background
With the gradual development and adoption of emerging technologies such as cloud computing, virtualization and big data, these new approaches not only affect the business systems of various industries, but also quietly reshape the traditional IT infrastructure behind them. As the infrastructure carrying business systems, the IT architecture increasingly emphasizes rapid deployment, reduced investment and flexible expansion. Cloud computing provides available, convenient and on-demand resources and has become the mainstream form of current IT architecture construction; many newly built systems are constructed in the cloud mode, while a large number of existing service systems are being migrated to cloud computing environments. In cloud computing environments, virtualization is widely adopted and deployed and is almost the fundamental technical model. Server virtualization bears the brunt: virtual machines need to be transferred without restriction to target physical locations in the network, and rapid virtual machine growth and virtual machine migration have become routine operations.
There is currently a lack of a data base engine platform that is built on the cloud and offers high performance, scalability, high availability, sufficient reliability, portability and easy maintenance.
Disclosure of Invention
The invention aims to solve the problem that existing data centers do not adopt an engine platform built on a hyper-converged cloud, and accordingly provides a data base engine platform and a method for using it.
The technical scheme adopted by the invention for solving the problems is as follows: the present invention includes a data base engine platform comprising:
the computer virtualization module is used for building virtual computer hardware;
the network virtualization module is used for building a virtual network;
the network isolation module is used for separately interfacing the internal local area network and the external service network;
the high-availability system module is used for building a high-availability system;
and the virtual machine operation and maintenance module is used for maintaining the operation of the virtual machine.
Further, the computer virtualization module comprises a CPU virtualization component, a memory virtualization component and a hard disk I/O virtualization component, wherein the CPU virtualization component is used for integrating the CPUs of the physical servers into one large CPU pool and allocating portions of it to virtual machines; the memory virtualization component is used for dynamically distributing the memory of the physical servers to a plurality of virtual machines; the hard disk I/O virtualization component is used for multiplexing limited peripheral resources.
Furthermore, the hard disk I/O virtualization component adopts a simplified driver as the back end and a driver in the guest operating system as the front end; the front end sends communication requests directly to the back-end driver through a dedicated communication mechanism, and after the back-end driver finishes processing it returns the result directly to the corresponding front-end driver.
Further, the network virtualization module comprises a virtual switch component, a virtual router component, a virtual network firewall component, a network load balancer component and an SR-IOV network card component, wherein the virtual switch component supports network isolation technologies such as FLAT, VLAN, VxLAN and GRE; the virtual router component is used for solving network node overload and guaranteeing high availability; the virtual network firewall component is used for protecting the virtual network; the network load balancer component is used for providing load-balancing support in a pluggable mode; the SR-IOV network card component is used for providing dedicated resources that directly connect the virtual machine to the I/O device.
Further, the high-availability system module comprises a redundancy and failover component, a service switching component and a mode switching component, wherein the redundancy and failover component is used for switching an instance to run on non-failed hardware when the hardware running the service instance fails; the service switching component is used for judging, according to the request, whether a stateless service or a stateful service is run; the mode switching component is used for switching between the active/standby mode and the active/active mode according to the service state.
Further, the virtual machine operation and maintenance module comprises a virtual machine scaling component, a virtual machine availability guarantee component, a virtualization configuration tuning component and a cluster load balancing component, wherein the virtual machine scaling component is used for dynamically expanding memory, CPU, hard disk and network ports; the virtual machine availability guarantee component is used for redundant backup of data, distributed shared data storage, ensuring the independence of the virtualization technology from the platform, and ensuring high availability of virtual machines; the virtualization configuration tuning component is used for tuning the CPU and the memory; the cluster load balancing component is used for balancing computing-resource and storage-data loads, virtual machine online migration, and Dynamic Resource Scheduling (DRS).
Further, the data base engine platform and the using method are realized through the following steps:
step one, constructing a server cluster, and performing virtualization settings of the CPU, memory and hard disk I/O by using the computer virtualization module;
step two, the network virtualization module comprising a virtual switch component, a virtual router component, a virtual network firewall component, a network load balancer component and an SR-IOV network card component, configuring the distributed virtual switch, the distributed virtual router, the virtual network firewall, the network load balancer and the network card respectively;
step three, configuring a plurality of external networks for the virtual machines by using the network isolation module, and interfacing them respectively with the user's internal local area network and external service network;
step four, ensuring the high availability of the virtual network by using the high-availability system module;
step five, maintaining the operation of the platform by using the virtual machine operation and maintenance module.
The beneficial effects of the invention are as follows: the platform is built with hyper-converged cloud platform software, realizing pooled management of computing, storage and network resources and forming a cloud data center. This ensures that the platform has sufficient performance and horizontal expansion capability; high data availability can be provided through replicas, erasure codes and similar mechanisms; the distributed architecture, with its built-in data and service redundancy mechanisms, guarantees automatic recovery through software definition; and unified WEB interface management provides one-click management of computing, storage and network resources, reducing the complexity of maintenance.
Drawings
FIG. 1 is a general topology of the present invention;
FIG. 2 is a schematic diagram of the VMCS and vCPU structures of the present invention;
FIG. 3 is a schematic illustration of the vCPU deployment of the present invention;
FIG. 4 is a diagram illustrating memory virtualization according to the present invention;
FIG. 5 is a schematic illustration of east-west flow operation of the present invention;
FIG. 6 is a schematic diagram of the north-south flow operation of the present invention;
FIG. 7 is a schematic view of a virtual network firewall according to the present invention;
FIG. 8 is a schematic diagram of a load balancer structure of the present invention;
FIG. 9 is a schematic diagram of the independence of the present invention.
Detailed Description
The first embodiment is as follows: the present embodiment will be described with reference to figs. 1 to 9, in which a data base engine platform includes:
the computer virtualization module is used for building virtual computer hardware;
the network virtualization module is used for building a virtual network;
the network isolation module is used for separately interfacing the internal local area network and the external service network;
the high-availability system module is used for building a high-availability system;
and the virtual machine operation and maintenance module is used for maintaining the operation of the virtual machine.
The network virtualization module solves the management and operation problems of the traditional hardware network by providing a brand new network operation mode and a network scheme defined by pure software, and remarkably reduces the hardware cost.
The second embodiment is as follows: with reference to figs. 1-9, the computer virtualization module comprises a CPU virtualization component, a memory virtualization component and a hard disk I/O virtualization component. The CPU virtualization component is used for integrating the CPUs of the physical servers into one large CPU pool and allocating portions of it to virtual machines; the memory virtualization component is used for dynamically distributing the memory of the physical servers to a plurality of virtual machines; the hard disk I/O virtualization component is used for multiplexing limited peripheral resources.

As shown in figs. 2, 3 and 4, the CPU virtualization component maintains one VMCS for each vCPU. When a vCPU is switched off a physical CPU, its running context is saved into the corresponding VMCS structure; when a vCPU is switched onto a physical CPU, its running context is loaded from the corresponding VMCS structure onto the physical CPU. In this way the vCPUs run independently of one another. vCPU scheduling is carried out in two stages: the first-stage scheduling is implemented by the platform, which is responsible for dispatching vCPUs onto the physical processing units, i.e. physical CPU resources are allocated to the virtual machines according to a certain scheduling mechanism; the second-stage scheduling is implemented by the Guest OS, which maps its kernel threads onto the corresponding virtual CPUs.

The memory virtualization component is configured to translate a Guest Virtual Address (GVA) into a real Machine memory Address (MA). Even without virtualization there is a translation between virtual addresses and real memory addresses, and the MMU is the unit responsible for this translation. After virtualization is added, in order to let multiple virtual machines share the memory of the physical server, a memory virtualization layer must be added to realize the conversion from GVA to MA, which takes two steps: the first step translates the GVA into a Guest Physical Address (GPA) and is completed by the guest operating system; the second step translates the GPA into the MA. The guest operating system cannot directly access the actual machine memory, so the platform is responsible for mapping guest physical memory onto actual machine memory.

The hard disk I/O virtualization component works as follows: the platform provides a simplified driver as the back end, and the driver in the guest operating system acts as the front end; I/O requests are sent directly from the front-end driver to the back-end driver through a dedicated communication mechanism between them, and the back-end driver sends a notification back to the front end after the requests have been processed. Common hard disk I/O virtualization approaches fall into two types: the first accurately emulates, purely in software, an interface identical to that of the physical device, so that the guest driver can drive the virtual device without modification.
This mechanism requires no extra hardware cost, but performance is low because of the software emulation. The second type directly assigns a physical device to a particular guest, which then accesses the I/O device directly. Compared with these two approaches, the method used by the hard disk I/O virtualization component avoids both the poorer performance of the first mechanism and the inability of the second mechanism to share devices.
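The two-stage address translation described above (GVA to GPA by the guest operating system, GPA to MA by the platform) can be illustrated with the following non-limiting sketch; the 4 KiB page size, the dictionary-based page tables and all names are illustrative assumptions rather than the platform's actual data structures:

    # Non-limiting sketch of two-stage memory address translation (GVA -> GPA -> MA).
    # The 4 KiB page size, the dictionary-based "page tables" and all names below
    # are illustrative assumptions, not the platform's actual data structures.

    PAGE_SIZE = 4096

    guest_page_table = {0x10: 0x2A}    # guest virtual page -> guest physical page
    host_page_table = {0x2A: 0x7F3}    # guest physical page -> machine page

    def gva_to_gpa(gva: int) -> int:
        """First stage: performed by the guest operating system."""
        vpn, offset = divmod(gva, PAGE_SIZE)
        return guest_page_table[vpn] * PAGE_SIZE + offset

    def gpa_to_ma(gpa: int) -> int:
        """Second stage: performed by the virtualization platform."""
        gpn, offset = divmod(gpa, PAGE_SIZE)
        return host_page_table[gpn] * PAGE_SIZE + offset

    def gva_to_ma(gva: int) -> int:
        """Full translation as seen by the memory virtualization layer."""
        return gpa_to_ma(gva_to_gpa(gva))

    gva = 0x10 * PAGE_SIZE + 0x123
    print(hex(gva_to_ma(gva)))    # machine address backing the guest virtual address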
The third embodiment is as follows: with reference to figs. 1-9, the network virtualization module virtualizes the network facility, management, automation and network orchestration modules and is a software application providing these functions. The network virtualization module comprises a virtual switch component, a virtual router component, a virtual network firewall component, a network load balancer component and an SR-IOV network card component. The virtual switch component supports network isolation technologies such as FLAT, VLAN, VxLAN and GRE; the virtual router component is used for solving network node overload and guaranteeing high availability; the virtual network firewall component is used for protecting the virtual network; the network load balancer component is used for providing load-balancing support in a pluggable mode; the SR-IOV network card component is used for providing dedicated resources that directly connect the virtual machine to the I/O device.

The virtual switch component adopts a distributed virtual switch. The virtual machine switch supports network isolation technologies such as FLAT, VLAN, VxLAN and GRE, and by default the networks between tenants are isolated with VxLAN. A highly available switch management service is provided to manage the interconnection of the virtual machines on the computing nodes in the HyHive cluster: an independent virtual switch runs in each HyHive node and is configured appropriately, and the virtual switches of the nodes are then connected through a tunnel network technology, realizing cluster-level distributed interconnection and interworking of multiple virtual machine switches. Using VxLAN technology provides a larger-scale virtual network: because the VxLAN ID is 24 bits long, scalability in a virtualized cloud environment is improved and up to about 16 million (2^24) isolated networks can be created, overcoming the limitation of VLANs, whose 12-bit VLAN ID allows at most 4094 isolated networks. The forwarding table of the external switch does not grow with the number of virtual machines behind the physical ports of the server, and using VxLAN also narrows the scope of MAC address duplication to virtual machines located in the same VxLAN segment.

The virtual router component adopts Distributed Virtual Routing (DVR), a concept proposed to solve network node overload and network node high availability. Without DVR, all network services run on the network nodes, so a large amount of traffic flows to the network nodes and puts great pressure on them. In order to relieve the pressure on the network nodes, prevent the network nodes from becoming the bottleneck of the hyper-converged platform, and at the same time improve the high availability of the platform, the HyHive virtual network supports DVR, i.e. most functions of the network node are deployed in a distributed manner on the computing nodes, and the functions of the network node are realized jointly by multiple computing nodes. On the one hand this improves the expansion performance of the hyper-converged cluster, since network performance can be expanded synchronously by adding computing nodes; on the other hand it improves the high availability of the cluster.
Because the functions of the network nodes are spread over multiple computing nodes, the probability of service interruption caused by a single point of failure is greatly reduced. After DVR is introduced, the routing functions in the hyper-converged platform can be divided into the following scenarios:
as shown in fig. 5, in the east-west traffic, when the same machine is used, the router directly forwards on br-int without passing through an external network bridge; different machines, vm for two different subnets of tenant T1: VM1 and VM4 are on different machines, VM1 is to access VM4, and the request process is for IR1 on compute node 1 to function as a router. The packet of the return trip, router IR2 on compute node 2 is active.
The IDs, internal interfaces, functions, etc. of the two routers are virtually identical, i.e. it is the same router, yet it actually exists on multiple computing nodes. The same router would, however, cause conflicts if exposed to the external network. For example, when a request packet leaves compute node 1, its source MAC should be the MAC of the target subnet gateway, but this MAC also exists on compute node 2. Therefore the packet is intercepted on br-int and its source MAC is modified to that of the tunnel port. Similarly, compute node 2 intercepts on br-int packets whose source MAC is the MAC of the tunnel port, replaces it with the MAC of the normal subnet gateway, and delivers the packet directly to the target virtual machine.
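The following non-limiting sketch illustrates the source-MAC rewrite described above; the MAC values, function names and frame representation are illustrative assumptions standing in for the actual flow rules installed on br-int:

    # Non-limiting sketch of the DVR east-west source-MAC rewrite on br-int.
    # MAC values, function names and the frame dictionary are illustrative
    # assumptions standing in for the actual flow rules.

    GATEWAY_MAC = "fa:16:3e:00:00:01"       # same subnet-gateway MAC exists on every node
    NODE1_TUNNEL_MAC = "fa:16:3f:11:11:11"  # unique per-node tunnel-port MAC

    def on_egress_node1(frame: dict) -> dict:
        """Before the frame leaves compute node 1 through the tunnel."""
        if frame["src_mac"] == GATEWAY_MAC:
            frame = dict(frame, src_mac=NODE1_TUNNEL_MAC)   # hide the duplicated gateway MAC
        return frame

    def on_ingress_node2(frame: dict) -> dict:
        """When the frame arrives at compute node 2 from the tunnel."""
        if frame["src_mac"] == NODE1_TUNNEL_MAC:
            frame = dict(frame, src_mac=GATEWAY_MAC)        # restore the gateway MAC for the VM
        return frame

    frame = {"src_mac": GATEWAY_MAC, "dst_mac": "fa:16:3e:00:00:44", "payload": b"ping"}
    print(on_ingress_node2(on_egress_node1(frame)))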
As shown in fig. 6, north-south traffic without a floating IP is handled similarly to the conventional mode; with a floating IP, the dedicated external router on the compute node is responsible for forwarding, i.e. IR2 on compute node 1 and IR1 on compute node 2.
At the network nodes, the services basically do not change, except that the L3 service needs to be configured in dvr_snat mode. There is one additional dedicated snat-xxx namespace among the namespaces, handling north-south traffic of the compute nodes' non-floating IPs.
The compute nodes additionally need to enable the l3_agent (in dvr mode) and the metadata agent.
As shown in fig. 7, the virtual network firewall component implements the basic functions of a firewall by introducing a traditional Linux bridge (qbr) between the virtual network port of the virtual machine on the computing node and the virtual machine integration bridge, and configuring the basic firewall rules on that bridge. By default all rules are implemented in the filter table on the compute node; rules are checked on the INPUT, OUTPUT and FORWARD chains of the filter table respectively, and on the compute node, iptables --line-numbers -vnL [CHAIN] may be used to obtain the rules of the filter table (the rules of a specific chain can be specified).
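The following non-limiting sketch illustrates first-match rule evaluation on the filter-table chains; the rule format, default policy and all names are simplifying assumptions rather than the component's actual implementation:

    # Non-limiting sketch of first-match rule evaluation on filter-table chains.
    # The rule tuples, the default-drop policy and all names are simplifying
    # assumptions for illustration only.

    from typing import Tuple

    Rule = Tuple[str, int, str]   # (source prefix, destination port, action) -- grossly simplified

    filter_table = {
        "INPUT":   [("10.0.0.", 22, "ACCEPT"), ("0.0.0.0", 0, "DROP")],
        "FORWARD": [("10.0.1.", 80, "ACCEPT"), ("0.0.0.0", 0, "DROP")],
        "OUTPUT":  [("0.0.0.0", 0, "ACCEPT")],
    }

    def evaluate(chain: str, src_ip: str, dst_port: int) -> str:
        """Return the action of the first matching rule on the given chain."""
        for prefix, port, action in filter_table[chain]:
            if (prefix == "0.0.0.0" or src_ip.startswith(prefix)) and port in (0, dst_port):
                return action
        return "DROP"  # assumed default policy

    print(evaluate("FORWARD", "10.0.1.5", 80))      # ACCEPT
    print(evaluate("FORWARD", "192.168.1.9", 80))   # DROP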
As shown in fig. 8, the load balancer consists of the following objects. Load balancer: the load balancer uses a real IP address and a virtual port of a virtual network. Listener: the load balancer can listen for requests on multiple ports, and each listened port requires a dedicated listener. Pool: a load pool maintains a set of members that serve the balanced content behind the load balancer; in essence they are the IP addresses of a group of virtual machines. Member: the load members are the virtual machine servers behind the load balancer; each member specifies a specific IP address and port that receive the balanced traffic. Health monitor: the health monitor detects whether a particular member responds normally to the balanced traffic; the health monitor is bound to the load pool.
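The following non-limiting sketch illustrates these objects and their relationships; the class names, the health-check predicate and the member-selection rule are illustrative assumptions rather than the platform's actual implementation:

    # Non-limiting sketch of the load-balancer object model described above.
    # Class names, the health-check predicate and the member-selection rule
    # are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Member:               # a virtual machine server behind the balancer
        ip: str
        port: int

    @dataclass
    class HealthMonitor:        # bound to a pool; decides whether a member is usable
        is_healthy: Callable[[Member], bool]

    @dataclass
    class Pool:                 # group of members serving the balanced content
        members: List[Member]
        monitor: HealthMonitor

    @dataclass
    class Listener:             # one listener per listened port
        port: int
        pool: Pool

    @dataclass
    class LoadBalancer:         # real IP address plus virtual port(s) of a virtual network
        vip: str
        listeners: List[Listener]

        def pick(self, port: int) -> Member:
            listener = next(l for l in self.listeners if l.port == port)
            healthy = [m for m in listener.pool.members
                       if listener.pool.monitor.is_healthy(m)]
            # real balancers would rotate (round robin, least connections, ...);
            # this sketch simply takes the first healthy member
            return healthy[0]

    pool = Pool([Member("10.0.0.11", 80), Member("10.0.0.12", 80)],
                HealthMonitor(lambda m: True))
    lb = LoadBalancer("192.168.10.100", [Listener(80, pool)])
    print(lb.pick(80))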
The SR-IOV network card component is a hardware-based virtualization solution that can improve network performance and scalability. The SR-IOV network card allows virtual machines to be connected directly to I/O devices; the shared device provides dedicated resources while also using shared common resources, so each virtual machine is guaranteed access to unique resources. Each SR-IOV device can have a Physical Function (PF), and each PF can have up to 64,000 Virtual Functions (VFs) associated with it. Once SR-IOV is enabled in the PF, the PCI configuration space of each VF can be accessed through the PF's bus, device and function number (routing ID). Each VF has a PCI memory space used to map its register set; the VF device driver operates on this register set to enable its functions, and the VF appears as an actual PCI device. The SR-IOV network card integrates the SR-IOV function onto the physical network card, virtualizing a single physical network card into multiple VF interfaces. Each VF interface has an independent virtual PCIe channel, and these virtual channels share the PCIe channel of the physical network card. Each virtual machine can occupy one or more VF interfaces, so the virtual machine can access its VF interfaces directly without coordination by the platform, which greatly improves network throughput performance.
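The following non-limiting sketch illustrates how a VF routing ID is commonly derived from the PF's routing ID under the PCIe SR-IOV capability; the offset and stride values are made-up example numbers:

    # Non-limiting sketch: deriving the routing ID (bus/device/function number)
    # of a VF from its parent PF, as commonly described for the PCIe SR-IOV
    # capability. The offset and stride values below are made-up example numbers.

    def vf_routing_id(pf_routing_id: int, first_vf_offset: int,
                      vf_stride: int, n: int) -> int:
        """Routing ID of the n-th VF (n starts at 1) behind a PF."""
        return (pf_routing_id + first_vf_offset + (n - 1) * vf_stride) & 0xFFFF

    def bdf(routing_id: int) -> str:
        """Decode a 16-bit routing ID into bus:device.function notation."""
        bus = (routing_id >> 8) & 0xFF
        dev = (routing_id >> 3) & 0x1F
        fn = routing_id & 0x7
        return f"{bus:02x}:{dev:02x}.{fn}"

    pf = (0x03 << 8) | (0x00 << 3) | 0x0        # PF at 03:00.0 (example)
    for n in (1, 2, 3):
        print(n, bdf(vf_routing_id(pf, first_vf_offset=16, vf_stride=2, n=n)))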
The fourth embodiment is as follows: with reference to figs. 1-9, the high-availability system module comprises a redundancy and failover component, a service switching component and a mode switching component. The redundancy and failover component is used for switching an instance to run on non-failed hardware when the hardware running the service instance fails; the service switching component is used for judging, according to the request, whether a stateless service or a stateful service is run; the mode switching component is used for switching between the active/standby mode and the active/active mode according to the service state.
The redundancy and failover component achieves high availability through redundant hardware running redundant service instances: if the hardware running one service instance fails, the system can fail over and use another service instance running on hardware that has not failed. One key aspect of high availability is the elimination of single points of failure (SPOFs). A SPOF is a single device or piece of software whose failure causes system downtime or data loss. To eliminate SPOFs, redundancy is considered in the design, from the underlying hardware up to the upper-layer service instances, in the following categories:
Network components: switches are designed as stackable switch groups in this scheme, and NIC bonding on the physical machines ensures redundancy of the network channels;
Application and automatic service migration: redundancy is considered in the deployment design of all services of the platform;
Storage components: fault isolation and redundancy strategies at different levels, such as hard disk level, rack level, node level and machine room level, are provided according to the actual situation;
Facility services: for power, air conditioning and fire protection, the use of two sets of redundant equipment is recommended as far as possible in actual production applications.
The service switching component involves the following two types:
Stateless services: a stateless service provides a response to a request and then requires nothing further. To make stateless services highly available, redundancy and load balancing need to be provided.
Stateful services: subsequent requests to the service depend on the result of the first request. Stateful services are harder to manage because a single operation typically involves more than one request, so simply providing additional instances and load balancing does not solve the problem. For example, the Horizon user interface would be far less useful if it reset every time a new page was opened. Stateful HyHive services include the database and the message queue. Making stateful services highly available depends on whether an active/passive or an active/active configuration is chosen.
The mode switching component involves the following two types:
Active/Passive mode: a redundant standby service is maintained and brought online to take over when the primary service fails. The active/standby mode is generally suitable for stateful services, and external access to the active/standby service is switched through a Virtual IP (VIP).
Active/Active mode: each service also has a standby service, but the primary and standby services work simultaneously and stay synchronized, so that if a failure occurs the upper-layer user does not perceive it. The active/active mode is usually suitable for stateless services, and most management services in HyHive are designed in active/active mode.
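The following non-limiting sketch illustrates Active/Passive switchover through a VIP; the health-check function, VIP re-binding helper and node names are illustrative assumptions standing in for real cluster-manager primitives:

    # Non-limiting sketch of Active/Passive failover through a Virtual IP (VIP).
    # check_health(), move_vip() and the node names are illustrative assumptions
    # standing in for real cluster-manager primitives (e.g. probing a service port).

    import time

    VIP = "192.168.10.200"

    def check_health(node: str) -> bool:
        """Placeholder health check; here node-a is pretended to have failed."""
        return node != "node-a"

    def move_vip(vip: str, node: str) -> None:
        """Placeholder for re-binding the VIP to a node."""
        print(f"binding VIP {vip} to {node}")

    def failover_step(active: str, standby: str) -> tuple:
        """One monitoring cycle: promote the standby if the active node is unhealthy."""
        if not check_health(active):
            active, standby = standby, active
            move_vip(VIP, active)     # clients keep using the same VIP address
        return active, standby

    active, standby = "node-a", "node-b"
    move_vip(VIP, active)
    for _ in range(3):                # a real daemon would loop indefinitely
        active, standby = failover_step(active, standby)
        time.sleep(1)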
The fifth embodiment is as follows: with reference to figs. 1-9, the virtual machine operation and maintenance module comprises a virtual machine scaling component, a virtual machine availability guarantee component, a virtualization configuration tuning component and a cluster load balancing component. The virtual machine scaling component is used for dynamic expansion of memory, CPU, hard disk and network ports; the virtual machine availability guarantee component is used for redundant backup of data, distributed shared data storage, ensuring the independence of the virtualization technology from the platform, and ensuring high availability of virtual machines; the virtualization configuration tuning component is used for tuning the CPU and the memory; the cluster load balancing component is used for balancing computing-resource and storage-data loads, virtual machine online migration, and Dynamic Resource Scheduling (DRS).
The virtual machine scaling component is used for dynamically expanding the configuration of a virtual machine without shutting it down or restarting it, which reflects good manageability of the virtual machine and is a huge technical advantage over the hardware management of physical machines. It involves the following categories (a non-limiting code sketch follows these items):
Dynamic memory expansion: after a virtual machine has been online for a period of time, explosive service growth can make memory the primary resource bottleneck and seriously affect normal operation, while the traditional way of adjusting virtual machine configuration requires shutting the virtual machine down, which carries serious limitations and risks in an online production environment. Through the platform's memory pre-allocation and the corresponding virtualization technology, the memory capacity of the virtual machine can be adjusted online without stopping it.
Dynamic CPU expansion: when the CPU resources of a virtual machine reach a bottleneck, online expansion of the number of vCPUs is supported for virtual machines running some operating systems, by combining the virtualization technology with the related technology of the guest operating system.
Dynamic hard disk expansion: when the disk capacity of a virtual machine cannot meet the storage requirement, a new disk can be dynamically attached to the virtual machine without interrupting its operation. The bottom layer uses INFINITY, a storage cluster developed by DATATOM, which aggregates the resources of all nodes into one large block-device storage pool. When a new disk needs to be added to a virtual machine, it is only necessary to carve a new storage space out of the storage pool and attach it to the virtual machine. On the one hand, dynamically attaching a disk does not require shutting the virtual machine down, so uninterrupted service is guaranteed; on the other hand, the user operation is very simple: the user only needs to specify the size and drive letter of the disk to be attached, and the INFINITY cluster takes care of the rest.
Dynamic network port expansion: when the network of a virtual machine needs to be adjusted, HyHive uses a software-defined virtual network and, combined with virtualization technology, supports flexible online expansion of the virtual machine's network ports. A new port in a new subnet can be added to the virtual machine at any time: a new port device is added to the virtual machine, the network link is opened in the virtual network, the network firewall is configured, and so on.
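The following non-limiting sketch illustrates such online adjustments using the libvirt Python bindings; the domain name, target sizes and device XML are illustrative assumptions, and whether each hot-plug operation succeeds depends on the hypervisor, the guest operating system and how the domain was defined:

    # Non-limiting sketch: online resizing of a running VM via the libvirt Python API.
    # The domain name, target sizes and the device XML below are illustrative
    # assumptions; support for each hot-plug operation depends on the hypervisor,
    # the guest OS and how the domain was originally defined.

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("demo-vm")          # assumed existing domain

    # Dynamic memory expansion (value in KiB), applied to the live domain
    dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Dynamic CPU expansion (requires hot-pluggable vCPUs in the guest)
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Dynamic hard disk expansion: attach an additional virtio disk
    disk_xml = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-extra.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """
    dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Dynamic network port expansion: attach an additional virtio NIC
    nic_xml = """
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    """
    dom.attachDeviceFlags(nic_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    conn.close()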
The virtual machine availability guarantee component is used to improve the availability of cloud resources, especially virtual machines, minimize the risk and duration of system downtime, and guarantee the continuity and availability of upper-layer services. It involves the following categories:
redundancy backup of data: by means of the distributed storage Infinity, the data security assurance of multiple copies or erasure codes can be provided while the storage performance improvement brought by the distributed clusters is considered, multiple data redundancy strategies such as a hard disk level, a node level, a rack level, a cluster level and the like can be provided according to actual conditions, storage of a third party can be also performed, data of a virtual machine can be backed up to the storage of the third party according to a planning task set by a user so as to cope with a scheme required by disaster tolerance, meanwhile, the virtual machine also supports a snapshot function, a backup function of the data is provided while a relatively small storage space is used, and second-level snapshot rollback is supported at the rear end of partial storage.
Distributed shared storage: the underlying storage Infinity is designed in a fully symmetric mode, avoiding the possibility of single points of failure. When one or even several nodes fail, as long as the storage cluster still functions, the virtual machine can still read data in degraded mode and keeps running. The metadata-node-free design of Infinity also means that the virtual machines on a node do not need to access fixed metadata nodes before reading the actual data, as is the case with traditional storage, truly realizing efficient shared data access.
Virtualization technology and platform independence: the platform design makes extensive use of the model of cluster control services plus node agents, where the agent services distributed on each node actually control the underlying virtualization software. As shown in fig. 9, the advantage of this design is that the agent service on each node is functionally relatively independent and only loosely coupled to the cluster control service; when the cluster control service fails, the agent software on each node can, to a large extent, keep its own functions running normally, so the impact on the underlying virtualization functions is greatly reduced. In a software fault-tolerance test environment, the entire HyHive management platform was deliberately brought down; all virtual machines on the platform kept running, and the computing, storage and network functions remained normal, completely unaffected by the platform outage.
Virtual machine high availability: supported by the underlying distributed shared storage and by the well-designed, loosely coupled architecture of the control services, agent services and message-queue communication, system faults can be well isolated when they occur, and the redundant resources can be used to restore services quickly.
The virtualization configuration tuning component achieves tuning by optimizing the configuration parameters passed to QEMU-KVM when a virtual machine is started, and involves the following categories (a non-limiting configuration sketch follows these items):
CPU tuning: in order to improve the cache hit rate, vCPUs should share caches as much as possible, and vCPU pinning can bring roughly a 16% performance improvement for the virtual machine. At the same time, to prevent unpinned applications from preempting the CPU resources of pinned applications, the vCPU isolation function separates the CPU resources of pinned applications from the CPUs used by physical machine processes and ordinary virtual machines, guaranteeing that a virtual machine with pinned vCPUs exclusively occupies a physical core and avoiding performance degradation of such virtual machines.
Memory tuning: this comprises three measures: disabling memory sharing, locking memory, and enabling huge pages. Memory sharing merges identical memory pages of running programs into a single page to save system memory; disabling this function improves the memory access performance of the virtual machine. Locking memory prevents the physical machine from swapping out the memory pages of the virtual machine, which effectively avoids memory access latency. Enabling huge pages increases the page size handled by the memory management unit embedded in the CPU, reducing the time spent on system management and page access, lightening the burden of memory operations, and reducing the possibility of a performance bottleneck caused by page-table access, thereby improving system performance.
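The following non-limiting sketch illustrates vCPU pinning through the libvirt Python bindings, with the memory measures noted as the domain-XML elements usually used for them; the domain name, host CPU count and pinning layout are made-up example values:

    # Non-limiting sketch: vCPU pinning via the libvirt Python API, plus the
    # domain-XML elements usually used for the memory measures described above.
    # The domain name, host CPU count and pinning layout are made-up values.

    import libvirt

    HOST_CPUS = 8                                  # assumed number of host CPUs

    def cpumap(allowed):
        """Boolean tuple of length HOST_CPUS: True where the vCPU may run."""
        return tuple(i in allowed for i in range(HOST_CPUS))

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("demo-vm")             # assumed existing domain

    dom.pinVcpu(0, cpumap({2}))                    # pin vCPU 0 to physical core 2
    dom.pinVcpu(1, cpumap({3}))                    # pin vCPU 1 to physical core 3

    # The memory measures are typically declared in the domain XML, e.g.:
    #   <memoryBacking>
    #     <hugepages/>        <!-- enable huge pages -->
    #     <locked/>           <!-- lock guest memory, forbid swapping it out -->
    #     <nosharepages/>     <!-- disable memory sharing (KSM) for this guest -->
    #   </memoryBacking>

    conn.close()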
The cluster load balancing component involves the following categories:
Computing resource load: when virtual machines are created, the cluster scores the physical machines with a ranking algorithm according to the creation requirements, hardware configuration requirements and hardware capacity requirements of the virtual machines, together with the actual resource usage of the physical machines, and finally schedules each virtual machine preferentially onto the most suitable node for creation. With computing-resource load balancing enabled, the overall resources of the cluster reach a reasonable level of load balance, which basically satisfies most application scenarios.
Storage data load: virtual machine data uses the distributed storage Infinity. When the cluster changes, including disk failure, node downtime, node expansion and node shrinkage, Infinity rebalances the data automatically; the rebalancing process requires no human intervention and is efficient, stable and safe.
Virtual machine online migration: on the premise that the virtual machine is not stopped and its services do not perceive the change, virtual machines can be migrated between nodes at will, effectively guaranteeing the resource load balance of the cluster and the scalability of large-scale clusters.
Dynamic Resource Scheduling (DRS): the load of each node of the cluster is automatically balanced into a reasonable range through online virtual machine migration, physical machine monitoring, virtual machine monitoring and related technologies. In the architecture design, a dynamic resource scheduling process is started when a covariance-based balance evaluation algorithm, fed with the historical and real-time data of the physical machine and virtual machine monitoring modules, exceeds a predefined balance threshold. The main objective of the dynamic resource scheduling process is to select a suitable batch of virtual machines and physical machines and, through online virtual machine migration, bring the balance of the cluster back into a reasonable range. Naturally, the states of virtual machines with busy services and of special physical machines (for example, those that have entered maintenance mode) are fully considered, and such special cases are handled intelligently.
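The following non-limiting sketch illustrates a threshold-triggered balance check; the variance of normalized node loads is used here as a stand-in for the covariance-based evaluation named above, and the threshold, node loads and migration selection are illustrative assumptions:

    # Non-limiting sketch of a threshold-triggered dynamic resource scheduling check.
    # The variance-of-load score stands in for the covariance-based evaluation
    # named in the description; threshold, loads and the selection rule are
    # illustrative assumptions.

    from statistics import pvariance

    BALANCE_THRESHOLD = 0.02          # assumed predefined balance threshold

    def imbalance_score(node_loads):
        """Variance of normalized node loads (0..1); higher means less balanced."""
        return pvariance(node_loads)

    def plan_migration(node_loads):
        """Pick the most and least loaded nodes as the migration source/target."""
        src = max(range(len(node_loads)), key=lambda i: node_loads[i])
        dst = min(range(len(node_loads)), key=lambda i: node_loads[i])
        return src, dst

    loads = [0.82, 0.35, 0.40, 0.33]  # per-node utilization from the monitoring modules
    if imbalance_score(loads) > BALANCE_THRESHOLD:
        src, dst = plan_migration(loads)
        print(f"imbalance {imbalance_score(loads):.3f}: migrate a VM from node {src} to node {dst}")
    else:
        print("cluster load within the balance threshold")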
The present invention is not limited to the preferred embodiments described above; various changes and modifications in detail can be made, and other embodiments derived through such modifications and equivalent substitutions fall within the spirit and scope of the present invention.

Claims (8)

1. A data base engine platform, characterized by: the data base engine platform comprises:
the computer virtualization module is used for building virtual computer hardware;
the network virtualization module is used for building a virtual network;
the network isolation module is used for separately interfacing the internal local area network and the external service network;
the high-availability system module is used for building a high-availability system;
and the virtual machine operation and maintenance module is used for maintaining the operation of the virtual machine.
2. A data base engine platform as claimed in claim 1, wherein: the computer virtualization module comprises a CPU virtualization component, a memory virtualization component and a hard disk I/O virtualization component, wherein the CPU virtualization component is used for integrating the CPUs of the physical servers into one large CPU pool and allocating portions of it to virtual machines; the memory virtualization component is used for dynamically distributing the memory of the physical servers to a plurality of virtual machines; the hard disk I/O virtualization component is used for multiplexing limited peripheral resources.
3. A data base engine platform as claimed in claim 2, wherein: the hard disk I/O virtualization component adopts a simplified driver as the back end and a driver in the guest operating system as the front end; the front end sends communication requests directly to the back-end driver through a dedicated communication mechanism, and after the back-end driver finishes processing it returns the result directly to the corresponding front-end driver.
4. A data base engine platform as claimed in claim 1, wherein: the network virtualization module virtualizes the network facility, management, automation and network orchestration modules, and is a software application providing these functions.
5. A data base engine platform as claimed in claim 4, wherein: the network virtualization module comprises a virtual switch component, a virtual router component, a virtual network firewall component, a network load balancer component and an SR-IOV network card component, wherein the virtual switch component supports network isolation technologies such as FLAT, VLAN, VxLAN and GRE; the virtual router component is used for solving network node overload and guaranteeing high availability; the virtual network firewall component is used for protecting the virtual network; the network load balancer component is used for providing load-balancing support in a pluggable mode; the SR-IOV network card component is used for providing dedicated resources that directly connect the virtual machine to the I/O device.
6. A data base engine platform as claimed in claim 1, wherein: the high-availability system module comprises a redundancy and failover component, a service switching component and a mode switching component, wherein the redundancy and failover component is used for switching an instance to run on non-failed hardware when the hardware running the service instance fails; the service switching component is used for judging, according to the request, whether a stateless service or a stateful service is run; the mode switching component is used for switching between the active/standby mode and the active/active mode according to the service state.
7. A data base engine platform as claimed in claim 1, wherein: the virtual machine operation and maintenance module comprises a virtual machine scaling component, a virtual machine availability guarantee component, a virtualization configuration tuning component and a cluster load balancing component, wherein the virtual machine scaling component is used for dynamic expansion of memory, CPU, hard disk and network ports; the virtual machine availability guarantee component is used for redundant backup of data, distributed shared data storage, ensuring the independence of the virtualization technology from the platform, and ensuring high availability of virtual machines; the virtualization configuration tuning component is used for tuning the CPU and the memory; the cluster load balancing component is used for balancing computing-resource and storage-data loads, virtual machine online migration, and Dynamic Resource Scheduling (DRS).
8. The data base engine platform and method of use of claim 1, wherein: the data base engine platform and the using method are realized through the following steps:
step one, constructing a server cluster, and performing virtualization settings of the CPU, memory and hard disk I/O by using the computer virtualization module;
step two, the network virtualization module comprising a virtual switch component, a virtual router component, a virtual network firewall component, a network load balancer component and an SR-IOV network card component, configuring the distributed virtual switch, the distributed virtual router, the virtual network firewall, the network load balancer and the network card respectively;
step three, configuring a plurality of external networks for the virtual machines by using the network isolation module, and interfacing them respectively with the user's internal local area network and external service network;
step four, ensuring the high availability of the virtual network by using the high-availability system module;
step five, maintaining the operation of the platform by using the virtual machine operation and maintenance module.
CN202310051167.XA 2023-02-02 2023-02-02 Data base engine platform and use method Pending CN116107707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310051167.XA CN116107707A (en) 2023-02-02 2023-02-02 Data base engine platform and use method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310051167.XA CN116107707A (en) 2023-02-02 2023-02-02 Data base engine platform and use method

Publications (1)

Publication Number Publication Date
CN116107707A true CN116107707A (en) 2023-05-12

Family

ID=86257654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310051167.XA Pending CN116107707A (en) 2023-02-02 2023-02-02 Data base engine platform and use method

Country Status (1)

Country Link
CN (1) CN116107707A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination