CN113918174A - Bare metal server deployment method, deployment controller and server cluster - Google Patents


Info

Publication number: CN113918174A
Application number: CN202111253305.XA
Authority: CN (China)
Prior art keywords: bare metal, deployment, server, network
Legal status: Pending (status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 姚建华, 汤星, 刘慧青
Current/Original Assignee: Huayun Data Holding Group Co Ltd
Application filed by Huayun Data Holding Group Co Ltd
Priority to CN202111253305.XA
Publication of CN113918174A

Classifications

    • G06F8/60 Software deployment
    • G06F8/63 Image based installation; Cloning; Build to order
    • G06F8/71 Version control; Configuration management
    • G06F9/4406 Loading of operating system
    • G06F9/44505 Configuring for program initiating, e.g. using registry, configuration files
    • H04L12/4675 Dynamic sharing of VLAN information amongst network nodes
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L67/30 Profiles

Abstract

The invention provides a bare metal server deployment method, a deployment controller and a server cluster. The method comprises: uploading an initial configuration file corresponding to the central processing unit of a bare metal server to an image server, and obtaining and storing the mapping formed by the initial configuration file and the architecture information of the central processing unit; planning a deployment network for the bare metal server, and obtaining its configuration information to determine the node record information of the node to which the bare metal server belongs after boot; pre-configuring and storing a boot configuration file adapted to the central processing unit; and having the Ironic service call the boot configuration file and match it to the central processing unit according to the deployment network and the node record information, so as to boot the bare metal server. The method enables the deployment of multiple bare metal servers whose central processing units have heterogeneous architectures, and realizes rapid, batch deployment of bare metal servers.

Description

Bare metal server deployment method, deployment controller and server cluster
Technical Field
The invention relates to the technical field of server deployment, and in particular to a bare metal server deployment method, a deployment controller running the method, and a server cluster deployed based on the method.
Background
Cloud computing has significantly changed how information technology infrastructure is consumed. With the aid of virtualization technologies, workloads can be deployed on a variety of virtual infrastructures, from public cloud environments to on-premises data centers. New workloads are continuously created, deployed, and consumed by applications via the virtual infrastructure. A Bare Metal Server (BMS) is a physical server for computing services that combines the elasticity of a virtual machine with the performance of a physical machine, and can better carry such workloads.
In cloud computing scenarios with high security isolation and fast service response requirements, large numbers of bare metal servers need to be deployed, and the architecture of the Central Processing Unit (CPU) in each bare metal server may differ. At present, booting (deploying) and managing bare metal servers in a cloud computing platform generally combines an IPMI server and a PXE client with cloud network configuration. The applicant notes that in the prior art, when deploying bare metal servers with different CPU architectures to a cloud computing platform, deployment artifacts such as network configuration, storage configuration, and check scripts must be written separately for each CPU architecture in combination with the Ironic service. The deployment (boot) process for bare metal servers with different CPU architectures is therefore cumbersome, complex, and prone to deployment errors, and large-scale, automated deployment of bare metal servers with heterogeneous CPU architectures cannot be realized.
In view of the above, the prior-art method for deploying multiple bare metal servers whose central processing units have heterogeneous architectures needs improvement to solve these problems.
Disclosure of Invention
The invention aims to disclose a bare metal server deployment method, a deployment controller and a server cluster, in order to solve the technical problem that, in the prior art, deploying multiple bare metal servers whose central processing units have heterogeneous architectures is complex, and thereby to realize large-scale, automated deployment of bare metal servers in such scenarios.
In order to achieve one of the above objects, the present invention provides a bare metal server deployment method, comprising the following steps:
S1, uploading an initial configuration file corresponding to the central processing unit of the bare metal server to an image server, obtaining the mapping formed by the initial configuration file and the architecture information of the central processing unit, and storing the mapping;
S2, planning a deployment network for the bare metal server, and obtaining configuration information of the bare metal server to determine the node record information of the node to which the bare metal server belongs after boot;
S3, pre-configuring and storing a boot configuration file adapted to the central processing unit;
S4, calling the boot configuration file by the Ironic service, and matching the boot configuration file to the central processing unit according to the deployment network and the node record information, so as to boot the bare metal server.
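The steps above can be sketched as a short simulation. All class and function names below (ImageStore, deploy, and so on) are illustrative stand-ins, not actual OpenStack/Ironic APIs; only the S1-S4 flow they trace comes from the method itself.

```python
class ImageStore:
    """Stands in for the Glance image server of step S1."""
    def __init__(self):
        self._images = {}
        self._next_id = 0

    def upload(self, name):
        # Each uploaded file gets back a unique identification number,
        # mirroring the UUIDs the image server returns in step S1.
        self._next_id += 1
        uid = f"uuid-{self._next_id}"
        self._images[uid] = name
        return uid


def deploy(server, store, mappings, boot_configs):
    # S1: upload kernel/initrd/OS image, save the arch -> image-ID mapping.
    arch = server["cpu_arch"]
    mappings[arch] = {f: store.upload(f) for f in server["initial_config"]}
    # S2: plan the deployment network and record node information.
    node_record = {"name": server["name"], "vlan_id": server["vlan_id"]}
    # S3/S4: a boot config matching this CPU architecture must have been
    # pre-staged; match it by architecture and "boot" the node.
    boot_cfg = boot_configs[arch]
    return {"node": node_record, "boot_config": boot_cfg, "status": "active"}


store = ImageStore()
mappings = {}
boot_configs = {"x86_64": "undionly.kpxe", "aarch64": "grubaa64.efi"}
node = deploy(
    {"name": "bms-1", "cpu_arch": "x86_64", "vlan_id": 100,
     "initial_config": ["kernel", "initrd", "sysimg"]},
    store, mappings, boot_configs,
)
print(node["status"], node["boot_config"])  # active undionly.kpxe
```

The point of the sketch is that the only per-architecture inputs are the uploaded images and the pre-staged boot configuration; the flow itself is architecture-neutral, which is what makes batch deployment of heterogeneous CPUs possible.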
As a further improvement of the present invention, step S2 further comprises:
planning the deployment network and the service network of the bare metal server, and visually displaying the node record information, wherein the planning operation is performed on a visual interface or a Neutron client so as to determine the VLAN ID and the IP address segment to which the bare metal server belongs.
As a further refinement of the present invention, the initial configuration file comprises a kernel image (Kernel), an initial RAM disk (Initrd), and an operating system image;
the initial RAM disk carries the IPA and the operating system image;
the boot configuration file comprises the kernel image and the initial RAM disk.
As a further improvement of the invention, the mapping is saved to a configuration file of the Ironic service,
or
the mapping is saved to a storage device, so that a request to call the mapping from the storage device can be initiated through a visual interface;
wherein the configuration file of the Ironic service is saved to a database mounted to the Ironic service.
As a further improvement of the present invention, step S1 further comprises: writing into the configuration file of the Ironic service the first and second unique identification numbers returned when the kernel image and the initial RAM disk, respectively, are uploaded to the image server,
wherein the mapping comprises the user who uploaded the initial configuration file, the architecture information of the central processing unit corresponding to the initial configuration file, and the first and second unique identification numbers returned when the kernel image and the initial RAM disk, respectively, were uploaded to the image server.
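One mapping record as just described could take the following shape. The field names and placeholder values are assumptions for illustration, not the actual schema of the Ironic configuration file or database.

```python
# Illustrative shape of one mapping record saved to the Ironic
# configuration file / database (field names are hypothetical).
mapping_record = {
    "uploaded_by": "admin",                  # user who uploaded the files
    "cpu_arch": "aarch64",                   # CPU architecture information
    "kernel_uuid": "kernel-uuid-example",    # first UID, for the kernel image
    "initrd_uuid": "initrd-uuid-example",    # second UID, for the initial RAM disk
}
print(sorted(mapping_record))
```

Keyed by `cpu_arch`, a collection of such records is what lets step S4 pick the right images for each architecture without per-architecture scripts.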
As a further improvement of the present invention, obtaining the configuration information of the bare metal server comprises: judging whether the user has the architecture information of the central processing unit corresponding to the bare metal server;
if so, the user manually enters the architecture information of the central processing unit corresponding to the bare metal server in a visual interface or the Ironic service;
if not, the architecture information of the central processing unit corresponding to the bare metal server is obtained through a monitoring process in the Ironic service.
As a further improvement of the present invention, the configuration information comprises: IPMI IP setting information, a user name, a password, physical network card information, and the architecture information of the central processing unit corresponding to the bare metal server.
As a further improvement of the present invention, obtaining the architecture information of the central processing unit corresponding to the bare metal server through the monitoring process in the Ironic service comprises the following substeps:
the monitoring process monitors DHCP messages from the network card of the bare metal server;
the options of the DHCP message are parsed to obtain the architecture information of the central processing unit corresponding to the bare metal server;
the architecture information is saved to a database mounted to the Ironic service.
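The parsing substep can be illustrated with DHCP option 93 ("client system architecture", RFC 4578), which PXE clients send in their DISCOVER messages and which is a natural place for such a listener to read the CPU architecture from. The mapping below covers only a few common code values and is a sketch, not the patent's actual parser.

```python
# Map a few common RFC 4578 option-93 architecture codes to arch strings.
ARCH_BY_OPTION93 = {
    0: "x86_bios",    # Intel x86PC, legacy BIOS boot
    7: "x86_64",      # EFI BC (commonly x86-64 UEFI firmware)
    9: "x86_64",      # EFI x86-64
    11: "aarch64",    # ARM 64-bit UEFI
}

def arch_from_dhcp(option93_bytes: bytes) -> str:
    """Decode the 16-bit big-endian architecture code carried in option 93."""
    code = int.from_bytes(option93_bytes[:2], "big")
    return ARCH_BY_OPTION93.get(code, f"unknown({code})")

print(arch_from_dhcp(b"\x00\x0b"))  # aarch64
```

A real listener would first extract option 93 from the raw DHCP packet (e.g. via a packet-capture library) before applying a decode step like this one.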
As a further improvement of the present invention, the storage device comprises a shared storage space, linked through an address, for storing the boot configuration file; the shared storage space grants access to all bare metal servers, booted and/or not yet booted, and is deployed in the TFTP server.
As a further improvement of the present invention, the operation of pre-configuring the start-up configuration file adapted to the central processing unit in step S3 is performed on a visual interface or a Neutron client.
As a further improvement of the present invention, step S1 further comprises: storing in a database the third unique identification number returned when the operating system image is uploaded to the image server, so that after the bare metal server is booted, the user selects the third unique identification number in a visual interface and the operating system image matching it is pulled from the image server.
As a further improvement of the present invention, step S4 further comprises: after the bare metal server is booted, starting the IPA of the initial RAM disk and the kernel image's link for obtaining the operating system image, so as to write the operating system image to a local disk of the booted bare metal server.
As a further improvement of the present invention, after writing the operating system image to a local disk of the bare metal server, the method further comprises: the Ironic service sets, through an IPMI command, the bare metal server to boot from its local disk, restarts the bare metal server, and after the restart loads the operating system image into the system disk of the bare metal server.
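The restart-from-local-disk step could be driven, for example, with ipmitool, a common IPMI command-line client; the `chassis bootdev disk` and `power reset` subcommands used below are standard ipmitool commands, while the host and credentials are placeholders. This sketch only builds the command lines rather than executing them.

```python
# Build the ipmitool command lines for "boot from local disk, then reset",
# as a hypothetical illustration of the IPMI step described above.
def ipmi_restart_from_disk(host, user, password):
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
    return [
        base + ["chassis", "bootdev", "disk"],  # next boot: local disk
        base + ["power", "reset"],              # restart the bare metal server
    ]

cmds = ipmi_restart_from_disk("192.0.2.10", "admin", "secret")
print(cmds[0][-1])  # disk
```

In practice these would be run via `subprocess.run(cmd)` against the server's management (IPMI) interface.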
Meanwhile, based on the same inventive concept, the invention also discloses a deployment controller that runs the bare metal server deployment method of any of the above.
As a further improvement of the present invention, the deployment controller deploys a Trunk port; the bare metal server is provided with a management network physical interface, a deployment network physical interface and a service network physical interface; a management network data plane is formed between the deployment controller and the bare metal servers, and a deployment network data plane and a service network data plane are formed between the deployment controller and the bare metal servers.
Finally, based on the same inventive concept, the present invention also discloses a server cluster, comprising:
a deployment controller,
a data exchange device, and
at least one bare metal server booted by the deployment controller through the bare metal server deployment method of any of the above;
wherein the bare metal server accesses the data exchange device through a virtual network.
As a further improvement of the invention, the virtual network is a hybrid virtual network composed of one or any two of VXLAN virtual network, GRE virtual network, VLAN virtual network and GENEVE virtual network.
As a further improvement of the present invention, the deployment controller deploys a Trunk port; the bare metal server is provided with a management network physical interface, a deployment network physical interface and a service network physical interface; a management network data plane is formed between the deployment controller and the bare metal servers, and a deployment network data plane and a service network data plane are formed between the deployment controller and the bare metal servers.
Compared with the prior art, the invention has the beneficial effects that:
the bare metal server deployment method and the deployment controller disclosed by the invention can be well suitable for the deployment of a plurality of bare metal servers in a situation that a central processing unit is in an isomerization architecture, reduce the deployment difficulty of bare metal servers configured with central processing units of different architectures, and realize the batch and rapid deployment of the bare metal servers.
Drawings
FIG. 1 is an overall flow chart of a bare metal server deployment method of the present invention;
FIG. 2 is a schematic diagram of a bare metal server deployment process implemented by a deployment controller of the present invention through a bare metal server deployment method of the present invention;
FIG. 3 is a schematic diagram of the Ironic service;
FIG. 4 is a topology diagram of a server cluster according to the present invention;
FIG. 5 is a diagram illustrating a Database (DB) in FIG. 2 storing a plurality of mapping relationships formed by initial configuration files and architecture information of a central processing unit;
FIG. 6 is a diagram of the data plane of FIG. 4.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
It should be noted that when an element is referred to as being "connected to" another element, it may be directly connected to the other element or intervening elements may also be present. Before explaining the technical schemes and inventive ideas of the present application in detail, the technical meanings of some terms or abbreviations used herein are briefly described or defined. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The term "and/or" includes any and all combinations of one or more of the associated listed items.
Term "BMS": Bare Metal Server, a physical server for computing services that combines virtual machine elasticity with physical machine performance.
Term "Ironic": the bare metal server component of the OpenStack environment, responsible for provisioning and managing bare metal servers.
Term "Neutron" (i.e., the Neutron service component): the network component of the OpenStack environment, used to provide layer-2/layer-3 virtual networks and network connectivity for Virtual Machine (VM) instances. It specifically comprises a Neutron-Server (which receives API requests and routes them to the appropriate OpenStack network plug-in), OpenStack network plug-ins and agents (which create ports, networks and subnets and provide IP addresses), and a Messaging Queue (which routes information between the Neutron-Server and the agents and stores the network connection state of the plug-ins).
Term "Glance": the component providing the image service in the OpenStack environment, used to provide virtual machine images. It specifically covers: querying and obtaining image metadata and images, maintaining image information (including metadata and image data), and executing a create-snapshot command on a virtual machine instance to create a new image or back up the state of a virtual machine.
Term "PXE": Pre-boot Execution Environment. PXE enables a computer to boot over the network rather than from a local hard disk, optical drive, or similar device. Mainstream network cards now embed ROM chips supporting PXE. When the computer boots, the BIOS loads the PXE client into memory for execution and displays a command menu; after the user makes a selection, the PXE client downloads the operating system stored at the far end over the network and runs it locally.
Term "DHCP": Dynamic Host Configuration Protocol, used to centrally and dynamically allocate IP addresses to clients.
Term "TFTP": Trivial File Transfer Protocol, a protocol in the TCP/IP suite for simple file transfer between a client (C) and a server (S), commonly used for OS (operating system) and configuration updates of network devices.
The following exemplary embodiments are provided to illustrate the implementation of the present invention.
The first embodiment is as follows:
referring to fig. 1 to 3, the present embodiment shows a bare metal server deployment method (hereinafter referred to as "deployment method"), which includes the following steps S1 to S4. In the present application, the start-up means that the bare metal server is started for the first time, and is distinguished from the restart. The central processing unit in this embodiment is a CPU deployed in a bare metal server.
Step S1, uploading the initial configuration file corresponding to the central processing unit of the bare metal server to the image server 50, obtaining the mapping formed between the initial configuration file and the architecture information of the central processing unit, and storing the mapping. The initial configuration file includes: a kernel image (Kernel), an initial RAM disk (Initrd), and an operating system image (sysImg). The initial RAM disk carries the IPA 211 and the operating system image 212. The IPA (Ironic Python Agent) is a program built into the Initrd that executes automatically when the Initrd starts; according to the kernel parameters passed in, it communicates with the ironic-conductor process in the deployment controller 10 so as to pull the operating system image of the bare metal server from the opposite end (i.e., a device, system or computer logically independent of the bare metal server to be booted and of the deployment controller 10) and write it into the operating system of the bare metal server. The Initrd (Initrd file or initial RAM disk) is a small compressed root directory containing the driver modules, executable files and startup scripts the bare metal server needs in the boot phase (and the restart phase). Meanwhile, the image server 50 in this embodiment may be a Glance image server (see Glance 11 in fig. 2); it may be adapted according to the type of image file the bare metal server supports and need not be specially limited. In a practical environment, as shown in fig. 4, Glance 11 is deployed in the image server 50 and Neutron 12 is deployed in the network node 40.
The Ironic service provides a flow by which a user or administrator can initiate the bare metal server deployment method from the visual interface 60 through the Nova computing service and the Nova API (not shown).
The mapping is saved to the configuration file of the Ironic service,
or
is saved to storage, so that a request to call the mapping from storage can be initiated through the visual interface 60;
wherein the configuration file of the Ironic service is saved to a database (DB 16 in fig. 2) mounted to the Ironic service. As shown in fig. 5, the database 16 may store k mappings, where k is a positive integer greater than or equal to 1. Specifically, in this embodiment, the database 16 may store a mapping 161 based on the X86 architecture, a mapping 162 based on the MIPS architecture, a mapping 163 based on the ARM architecture, and mappings 16k based on other architectures, which are not exhaustively listed here.
In this embodiment, the storage device comprises a shared storage space, linked through an address (e.g., a URL), for saving the boot configuration file; the shared storage space grants access to all bare metal servers, booted and/or not yet booted, and is deployed in the TFTP server 101. Central processing units with different architecture information (e.g., bare metal servers with X86-based CPUs, bare metal servers with MIPS-based CPUs, etc.) each generate a boot configuration file dedicated to one architecture, stored in the shared storage space of the TFTP server 101 (the shared /tftpboot directory of the TFTP server in fig. 2), so that the deployment network physical interface 202 (i.e., PXE Eth0) can pull the Kernel and Initrd from that shared directory. The storage device also comprises a cache and a nonvolatile memory.
Step S1 further includes: writing a first unique identification number and a second unique identification number respectively returned by the uploading kernel mirror image and the initializing memory disc 21 to the mirror image server 50 into a configuration file of the ironic service, wherein the mapping relationship comprises a user uploading the initial configuration file, architecture information of a central processing unit corresponding to the initial configuration file, and the first unique identification number and the second unique identification number respectively returned by the uploading kernel mirror image and the initializing memory disc 21 to the mirror image server 50. Referring to fig. 4, during the deployment of the bare metal server BMS _1, both the first unique identification number and the second unique identification number are perceived by a user or an administrator in the visual interface 60, and are issued operation instructions or commands by the user or the administrator in the visual interface 60 and can be selected by the user or the administrator.
Step S1 further includes: and saving the third unique identification numbers respectively returned by uploading the operating system images to the image server 50 to the database 16, so that after the bare metal server BMS _1 is started, a user selects the third unique identification number in the visual interface 60, and pulling the operating system image (sysImg) matched with the third unique identification number from the image server 50. In the embodiment, the first unique identification number to the third unique identification number are UUDI.
Step S2, planning the deployment network of the bare metal server BMS_1, and obtaining the configuration information of the bare metal server BMS_1 to determine the node record information of the node to which BMS_1 belongs after boot. The deployment network may be determined according to the network planning settings of the server cluster. Network planning here refers to the network configuration of the whole server cluster after the bare metal server is deployed, including Network Load Balancing (NLB), the number of compute nodes and the topology (note: the bare metal server usually serves as a compute node in the cluster), the IP address segment, VPN settings, and the default gateway, so as to realize secure isolation of the bare metal server.
In this embodiment, step S2 further comprises: planning the deployment network and the service network of bare metal server BMS_1 and visually displaying the node record information. The planning operation is performed on the visual interface 60 or a Neutron client (see Neutron 12 in fig. 2) so as to determine the VLAN ID and the IP address segment to which bare metal server BMS_1 belongs.
Obtaining the configuration information of the bare metal server comprises: judging whether the user has the architecture information of the central processing unit corresponding to bare metal server BMS_1;
if so, the user manually enters the architecture information of the central processing unit corresponding to BMS_1 in the visual interface 60 (see fig. 4) or the Ironic service (see fig. 3);
if not, the architecture information of the central processing unit corresponding to BMS_1 is obtained through a monitoring process in the Ironic service.
Specifically, in this embodiment, obtaining the architecture information of the central processing unit corresponding to bare metal server BMS_1 through the monitoring process in the Ironic service comprises the following substeps:
substep S21, the monitoring process monitors the DHCP messages of the network card of bare metal server BMS_1. These DHCP messages are forwarded on the deployment network data plane 302 in fig. 2 and are monitored via the Trunk port 102;
substep S22, the options of the DHCP message are parsed to obtain the architecture information of the central processing unit corresponding to the bare metal server;
substep S23, the architecture information is saved to the database (DB 16) mounted to the Ironic service.
The configuration information comprises: IPMI IP setting information, a user name, a password, physical network card information, and the architecture information of the central processing unit corresponding to the bare metal server. The physical network card information covers the management network physical interface 201 (e.g., the IPMI interface), the deployment network physical interface 202 (e.g., PXE Eth0), and the service network physical interface 203 (e.g., Eth1) in fig. 2.
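Gathered together, the configuration information listed above amounts to one enrollment record per node. The sketch below shows such a record and a minimal completeness check; the field names are assumptions for illustration, not the Ironic node schema.

```python
# Fields the text requires before a bare metal node can be enrolled
# (hypothetical names, mirroring the configuration information above).
REQUIRED = {"ipmi_ip", "ipmi_user", "ipmi_password", "nics", "cpu_arch"}

def validate_node(info: dict) -> bool:
    """A node record is usable only when every required field is present."""
    return REQUIRED <= info.keys()

node = {
    "ipmi_ip": "192.0.2.10",
    "ipmi_user": "admin",
    "ipmi_password": "secret",
    "nics": ["ipmi0", "pxe_eth0", "eth1"],  # management / deployment / service
    "cpu_arch": "aarch64",
}
print(validate_node(node))  # True
```

The three NIC entries correspond to the three physical interfaces the embodiment distinguishes: management (IPMI), deployment (PXE), and service networks.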
Referring to figs. 3 and 4, the Ironic service comprises an Ironic-API 14 in communication with the Nova API, an Ironic-Conductor 13, and an Ironic-check 15; the monitoring process described below is performed by the Ironic-check 15. The Ironic-API 14 forwards requests to the Ironic-Conductor 13 via RPC (a remote procedure call protocol); the Ironic-Conductor 13 uses the MAC address of the bare metal server to call the interface of Neutron 12 to create a port, and the port-creation process calls the configuration command of the data exchange device 30 over RPC to perform the VLAN setting. The Ironic service shares access to the data plane with a bare metal server cluster 20 containing at least one bare metal server.
In particular, the applicant notes that in this embodiment the architecture information of the bare metal server to be deployed is obtained either by manual entry or out-of-band, by the monitoring process in the aforementioned Ironic service listening to DHCP messages. Manual entry means manually entering the architecture information of the central processing unit corresponding to the bare metal server in the visual interface 60 or the Ironic service. This technique clearly adapts to the boot and batch deployment of bare metal servers with heterogeneous central processing units (i.e., CPUs with different architecture information) and reduces the difficulty of deploying bare metal servers whose central processing units have different architectures, thereby realizing rapid, batch deployment of bare metal servers.
Step S3: pre-configure and save the startup configuration file adapted to the central processing unit. The startup configuration file includes the kernel image and the initialization memory disk (initrd). The operation of pre-configuring the startup configuration file adapted to the central processing unit in step S3 is performed in the visual interface 60 or the Neutron client 60. Because the startup configuration file is saved, the same bare metal server does not need to be prepared again the next time it is started: the Kernel image, the initrd image and the service image files needed for PXE boot are cached, which simplifies deployment of the bare metal server during repeated deployments.
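The cached, per-architecture startup configuration can be pictured as a small lookup table. A minimal Python sketch follows; the NBP file names match the examples given in step S4, while the kernel and initrd file names and the helper name are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BootProfile:
    kernel: str   # kernel image cached for PXE boot
    initrd: str   # initialization memory disk image
    nbp: str      # network bootstrap program served over TFTP

# Hypothetical pre-configured profiles keyed by CPU architecture.
BOOT_PROFILES = {
    "x86_64": BootProfile("vmlinuz-x86_64", "initrd-x86_64.img", "undionly.kpxe"),
    "aarch64": BootProfile("vmlinuz-aarch64", "initrd-aarch64.img", "grubaa64.efi"),
}

def profile_for(arch: str) -> BootProfile:
    """Look up the cached startup configuration for a CPU architecture."""
    try:
        return BOOT_PROFILES[arch]
    except KeyError:
        raise ValueError(f"no boot profile cached for architecture {arch!r}")
```

On a repeated deployment the profile is served straight from this cache instead of being re-uploaded.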
Step S4: the ironic service calls the startup configuration file and, according to the deployment network and the node record information, matches the startup configuration file adapted to the central processing unit to start the bare metal server. In an embodiment, the boot file is an NBP file (Network Bootstrap Program), used during deployment to load the configuration file, kernel file, initrd and other files that the bare metal server needs to boot the operating system. For example, for the x86_64 architecture (i.e., a 64-bit x86 central processing unit) the corresponding NBP file instance is undionly.kpxe, and for the aarch64 architecture (i.e., a 64-bit ARM central processing unit) the corresponding NBP file instance is grubaa64.efi. The Neutron DNSmasq 121 replies with the corresponding IP address and TFTP address, and determines the logical location of the adapted NBP file (namely the /tftpboot shared directory of the TFTP server 101 in fig. 2) according to the corresponding CPU architecture information; meanwhile, Neutron 12 sets the NBP file as the DHCP PXE option attribute of the corresponding port, so that the bare metal server can obtain the required PXE information during subsequent startup and perform its boot through that PXE information. The Kernel and Initrd files 21 are obtained from the TFTP server 101 through the NBP file, so as to start the kernel image and the initialization memory disk. When booting via PXE, the bare metal server loads the kernel image and the initrd file, which after startup provide the drivers (Drive) and the file system (File System) required for normal operation.
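Setting the NBP file as the port's DHCP PXE option amounts to attaching extra DHCP options to the Neutron port. A hedged Python sketch of the request body (the option names follow Neutron's `extra_dhcp_opts` convention; the NBP file names are the examples above, and the function name is invented):

```python
def pxe_port_update_body(arch: str, tftp_ip: str) -> dict:
    """Build a Neutron port-update body that pins the PXE boot options."""
    nbp_by_arch = {
        "x86_64": "undionly.kpxe",   # iPXE chainload image on x86_64
        "aarch64": "grubaa64.efi",   # GRUB EFI binary on 64-bit ARM
    }
    return {
        "port": {
            "extra_dhcp_opts": [
                # DHCP option 67: filename of the network bootstrap program
                {"opt_name": "bootfile-name", "opt_value": nbp_by_arch[arch]},
                # DHCP option 66: address of the TFTP server holding /tftpboot
                {"opt_name": "tftp-server", "opt_value": tftp_ip},
            ]
        }
    }
```

The body would be sent as a `PUT` to the Neutron port resource, after which the DNSmasq instance hands the matching NBP file name to the booting node.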
Step S4 further includes: after the bare metal server is started, the IPA 211 of the initialization memory disk (i.e., the Initrd file 21) and the Kernel image (i.e., Kernel in fig. 2) are started to obtain the link to the operating system image, so as to write the operating system image into a local disk of the bare metal server that has been started. The local disk is the only system disk of the bare metal server BMS_1. After the operating system image (sysImg) is written into the local disk of the bare metal server, the method further includes: the ironic service sets, by an IPMI command, the bare metal server to boot from the local disk, restarts the bare metal server, and after the restart loads the operating system image (sysImg) onto the system disk (SysDisk) of the bare metal server, thereby completing the deployment operation of the bare metal server. The bare metal server additionally deploys at least one data disk to store service data.
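The final IPMI step maps onto standard ipmitool invocations (`chassis bootdev`, `chassis power`). A minimal Python sketch that builds the command lines, assuming lanplus access to the node's BMC; the helper name is invented:

```python
from typing import List

def ipmi_boot_from_disk_cmds(host: str, user: str, password: str) -> List[List[str]]:
    """Command lines that set the local disk as the persistent boot device
    and then power-cycle the node so it reboots from the written image."""
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
    return [
        base + ["chassis", "bootdev", "disk", "options=persistent"],
        base + ["chassis", "power", "cycle"],
    ]
```

Each argv list would be handed to a process runner (e.g. `subprocess.run`) by the conductor once the image write completes.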
To sum up, the bare metal server deployment method disclosed in this embodiment adapts well to deploying multiple bare metal servers in scenarios where the central processing units are of heterogeneous architectures, reduces the deployment difficulty of bare metal servers configured with central processing units of different architectures, and realizes batch and rapid deployment of bare metal servers.
Example two:
Based on the bare metal server deployment method disclosed in the first embodiment, this embodiment also discloses a deployment controller 10.
Referring to fig. 2 to fig. 6, the deployment controller 10 disclosed in this embodiment runs the bare metal server deployment method of the first embodiment. The deployment controller 10 deploys a Trunk port 102. The bare metal server BMS_1 configures a management network physical interface 201 (e.g., the IPMI interface), a deployment network physical interface 202 (e.g., PXE Eth0), and a service network physical interface 203 (e.g., Eth1). The deployment controller 10 and the bare metal server BMS_1 form a management network data plane 301, a deployment network data plane 302 and a service network data plane 303.
In this example, the management network physical interface 201, the deployment network physical interface 202 and the service network physical interface 203 are each regarded as a physical network card, and respectively access the management network data plane 301, the deployment network data plane 302 and the service network data plane 303. The Trunk port 102 allows VLAN packets carrying multiple Tags to pass through; it can receive and send multiple kinds of packets and forwards them to terminal devices (e.g., the bare metal server BMS_1) conforming to the Tag rules.
In fig. 2, the messages generated by the DHCP request, by pulling the Kernel and Initrd, and by configuring the data exchange device 30 traverse the deployment network data plane 302 through the Trunk port 102 and communicate with the deployment network physical interface 202. The messages generated when the ironic service calls the IPA to write the operating system image, and when it sends IPMI requests, traverse the management network data plane 301 through the Trunk port 102 and communicate with the management network physical interface 201. Service messages generated by the bare metal server BMS_1 after starting and restarting traverse the service network data plane 303 through the Trunk port 102 and communicate with the service network physical interface 203. The data exchange device 30 establishes data planes that satisfy the forwarding requirements for service data generated by the bare metal server BMS_1 in response to user forwarding requests, both during the deployment stage and during actual operation after deployment. The data exchange device 30 is specifically a configuration switch: a user can connect a computer through its Console port (i.e., the control port of the switch) and configure the switch's host name, SSH/Telnet login password, device management address and other settings from the computer. The data exchange device 30 configures the IPMI port to access the management network data plane 301, the DEPLOY port to access the deployment network data plane 302, and the DATA port to access the service network data plane 303.
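For illustration only, the Trunk port and the three per-plane access ports might be configured on such a switch roughly as follows. This is a hypothetical Cisco-style CLI fragment; the interface names and VLAN IDs are invented and do not come from the patent:

```
! Trunk port 102 towards the deployment controller: carries all three VLANs
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 101,102,103

! IPMI port -> management network data plane 301
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 101

! DEPLOY port -> deployment network data plane 302
interface GigabitEthernet0/3
 switchport mode access
 switchport access vlan 102

! DATA port -> service network data plane 303
interface GigabitEthernet0/4
 switchport mode access
 switchport access vlan 103
```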
The deployment controller 10 disclosed in this embodiment and the deployment method of the bare metal server disclosed in the first embodiment have the same technical solutions, which are described in the first embodiment and are not described herein again.
Example three:
with reference to fig. 2 and fig. 4 to fig. 6, based on the technical solutions disclosed in the first and second embodiments, the present embodiment further discloses a server cluster, which includes:
a deployment controller 10,
a data exchange device 30, and
at least one bare metal server (BMS_1 to BMS_i, where i is a positive integer greater than or equal to 1) started by the deployment controller 10 through the bare metal server deployment method of the first embodiment. The bare metal servers BMS_1 to BMS_i access the data exchange device 30 through a virtual network and together form a bare metal server cluster 20, which, through one or more bare metal servers combined with a virtualization system (e.g., a Hypervisor), provides elastic, scalable, high-performance and physically isolated computing services to the entire server cluster.
In this embodiment, the virtual network is a hybrid virtual network composed of one or any two of a VXLAN virtual network, a GRE virtual network, a VLAN virtual network and a GENEVE virtual network. The deployment controller 10 deploys the Trunk port 102. The bare metal server configures a management network physical interface 201, a deployment network physical interface 202 and a service network physical interface 203. The deployment controller 10 and the bare metal servers form a management network data plane 301, a deployment network data plane 302 and a service network data plane 303. The type of virtual network is determined by what the data exchange device 30 supports, and this embodiment does not specifically limit it.
The server cluster disclosed in this embodiment has the same technical solutions as those in the first and second embodiments, and reference is made to the description of the first and second embodiments, which is not repeated herein.
The above-listed detailed description is only a specific description of possible embodiments of the present invention; it is not intended to limit the scope of the invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall be included in the scope of the invention.
Furthermore, it should be understood that although this description is organized by embodiments, each embodiment does not necessarily contain only a single technical solution; this manner of description is for clarity only, and those skilled in the art should take the description as a whole, since the embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (18)

1. A method for deploying bare metal servers is characterized by comprising the following steps:
s1, uploading an initial configuration file corresponding to a central processing unit configured by the bare metal server to a mirror image server, acquiring a mapping relation formed by the initial configuration file and architecture information of the central processing unit, and storing the mapping relation;
s2, planning a deployment network of the bare metal server, and acquiring configuration information of the bare metal server to determine node record information of a node to which the bare metal server belongs after being started;
s3, pre-configuring and storing a starting configuration file matched with the central processing unit;
and S4, calling the starting configuration file by an ironic service, and matching the starting configuration file matched with the central processing unit according to the deployment network and the node record information to start the bare metal server.
2. The deployment method according to claim 1, wherein the step S2 further comprises:
planning a deployment network and a service network of a bare metal server, and visually displaying the node record information, wherein the operation of planning the deployment network and the service network of the bare metal server is executed on a visual interface or a Neutron client, so as to determine the VLAN ID and IP address field to which the bare metal server belongs through the operation of planning the deployment network and the service network of the bare metal server.
3. The deployment method of claim 1 wherein the initial configuration file comprises: kernel mirroring, initializing a memory disk and operating system mirroring;
the initialization memory disk deploys IPA and an operating system mirror image;
the startup configuration file comprises: and the kernel mirrors and initializes the memory disk.
4. The deployment method of claim 3 wherein the mapping is saved to a configuration file of an ironic service,
or
The mapping relation is saved to a storage device so as to initiate a request for calling the mapping relation from the storage device through a visual interface;
wherein the configuration file of the ironic service is saved to a database mounted to the ironic service.
5. The deployment method according to claim 4, wherein the step S1 further comprises: writing a first unique identification number and a second unique identification number which are respectively returned from an uploading kernel mirror image and an initializing memory disc to a mirror image server into a configuration file of the ironic service, wherein,
the mapping relation comprises a user uploading the initial configuration file, architecture information of a central processing unit corresponding to the initial configuration file, a first unique identification number and a second unique identification number which are respectively returned from an uploading kernel mirror image and an initializing memory disc to a mirror image server.
6. The deployment method of claim 1, wherein the obtaining the configuration information of the bare metal server comprises: judging whether a user has acquired the architecture information of a central processing unit corresponding to the bare metal server;
if so, manually inputting architecture information of a central processing unit corresponding to the bare metal server in a visual interface or an ironic service by a user;
and if not, acquiring the architecture information of the central processing unit corresponding to the bare metal server through a monitoring process in the ironic service.
7. The deployment method of claim 6 wherein the configuration information comprises: IPMI IP setting information, a user name, a password, physical network card information and architecture information of the central processing unit corresponding to the bare metal server.
8. The deployment method according to claim 6, wherein the obtaining of the architecture information of the central processing unit corresponding to the bare metal server through the monitoring process in the ironic service includes the following sub-steps:
monitoring, by the monitoring process, DHCP messages of a network card of the bare metal server;
analyzing options of the DHCP messages to obtain the architecture information of the central processing unit corresponding to the bare metal server;
and saving the architecture information to a database mounted to the ironic service.
9. The deployment method of claim 4 wherein the storage device comprises: and the shared storage space is linked through an address to store the starting configuration file, the shared storage space opens the authority to all the started and/or un-started bare metal servers, and the shared storage space is deployed in the TFTP server.
10. The deployment method according to claim 1, wherein the step S3 of pre-configuring the start-up configuration file adapted to the central processor is performed on a visual interface or a Neutron client.
11. The deployment method according to any one of claims 3 to 10, wherein the step S1 further comprises: and storing third unique identification numbers respectively returned by uploading the operating system mirror images to the mirror image server in a database, so that a user selects the third unique identification numbers in a visual interface after the bare metal server is started, and pulling the operating system mirror images matched with the third unique identification numbers from the mirror image server.
12. The deployment method according to claim 11, wherein the step S4 further comprises: after the bare metal server is started, starting the IPA of the initialization memory disc and the link of the kernel mirror image for acquiring the operating system mirror image so as to write the operating system mirror image into a local disc of the started bare metal server.
13. The deployment method of claim 12, wherein after writing the operating system image to a local disk of a bare metal server, further comprising: the ironic service sets the bare metal server to start from its local disk through an IPMI command, restarts the bare metal server, and loads the operating system image to a system disk of the bare metal server after the bare metal server is restarted.
14. A deployment controller running the bare metal server deployment method of any of claims 1 to 13.
15. The deployment controller of claim 14 wherein the deployment controller deploys a Trunk port; the bare metal server is provided with a management network physical interface, a deployment network physical interface and a service network physical interface; a management network data plane is formed between the deployment controller and the bare metal servers, and a deployment network data plane and a service network data plane are formed between the deployment controller and the bare metal servers.
16. A server cluster, comprising:
a deployment controller for controlling the deployment of the deployment controller,
a data exchange device, and
at least one bare metal server started by the deployment controller by the bare metal server deployment method of any one of claims 1 to 13;
and the bare metal server is accessed to the data exchange equipment through a virtual network.
17. The server cluster according to claim 16, wherein the virtual network is a hybrid virtual network consisting of one or any two of a VXLAN virtual network, a GRE virtual network, a VLAN virtual network, a GENEVE virtual network.
18. The server cluster according to claim 16, wherein the deployment controller deploys a Trunk port; the bare metal server is provided with a management network physical interface, a deployment network physical interface and a service network physical interface; a management network data plane is formed between the deployment controller and the bare metal servers, and a deployment network data plane and a service network data plane are formed between the deployment controller and the bare metal servers.
CN202111253305.XA 2021-10-27 2021-10-27 Bare metal server deployment method, deployment controller and server cluster Pending CN113918174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111253305.XA CN113918174A (en) 2021-10-27 2021-10-27 Bare metal server deployment method, deployment controller and server cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111253305.XA CN113918174A (en) 2021-10-27 2021-10-27 Bare metal server deployment method, deployment controller and server cluster

Publications (1)

Publication Number Publication Date
CN113918174A true CN113918174A (en) 2022-01-11

Family

ID=79243222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111253305.XA Pending CN113918174A (en) 2021-10-27 2021-10-27 Bare metal server deployment method, deployment controller and server cluster

Country Status (1)

Country Link
CN (1) CN113918174A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114416434A (en) * 2022-03-30 2022-04-29 苏州浪潮智能科技有限公司 Bare metal disk backup method and device and computer readable storage medium
CN114416434B (en) * 2022-03-30 2022-07-08 苏州浪潮智能科技有限公司 Bare metal disk backup method and device and computer readable storage medium
WO2023184875A1 (en) * 2022-03-30 2023-10-05 苏州浪潮智能科技有限公司 Bare metal disk backup method and device, and computer-readable storage medium
CN115442316A (en) * 2022-09-06 2022-12-06 南京信易达计算技术有限公司 Full-stack type high-performance computing bare metal management service system and method
CN115442316B (en) * 2022-09-06 2024-02-23 南京信易达计算技术有限公司 Full stack type high-performance computing bare metal management service system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination