CN112114746A - Automatic deployment method of distributed storage cluster - Google Patents

Automatic deployment method of distributed storage cluster

Info

Publication number
CN112114746A
CN112114746A (application CN202010878400.8A)
Authority
CN
China
Prior art keywords
ceph
cluster
storage system
installation package
deployment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010878400.8A
Other languages
Chinese (zh)
Inventor
贾如瑞 (Jia Rurui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unicloud Nanjing Digital Technology Co Ltd
Original Assignee
Unicloud Nanjing Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unicloud Nanjing Digital Technology Co Ltd filed Critical Unicloud Nanjing Digital Technology Co Ltd
Priority to CN202010878400.8A priority Critical patent/CN112114746A/en
Publication of CN112114746A publication Critical patent/CN112114746A/en
Pending legal-status Critical Current

Classifications

    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/062 Securing storage systems
    • G06F3/0622 Securing storage systems in relation to access
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0637 Permissions
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F8/61 Installation
    • G06F9/45512 Command shells
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F11/3034 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a storage system, e.g. DASD based or network based
    • G06F11/3055 Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an automatic deployment method for a distributed storage cluster, which comprises the following steps: S1, obtaining the ceph offline installation package corresponding to the operating system; S2, building the ceph offline installation package, associating all deb files and establishing their dependency relationships; S3, configuring password-free login between the master node and the other nodes in the cluster, and disabling the firewall; S4, copying the ceph offline installation package to the machines where the ceph cluster is to be deployed; S5, installing and deploying an NTP server; S6, deploying the ceph cluster with the ceph-deploy tool; S7, checking the health state of the ceph cluster; S8, when the health state of the ceph cluster is normal, creating a common user and creating pools; and S9, automatically collecting the key ring ceph.keyring and the configuration file ceph.conf of the ceph cluster, and exporting the common user's configuration files for clients to use. Advantages: one-click deployment that is convenient and fast, effectively saves deployment time, and improves deployment efficiency.

Description

Automatic deployment method of distributed storage cluster
Technical Field
The invention relates to the technical field of computers, in particular to an automatic deployment method of a distributed storage cluster.
Background
A distributed system is a system architecture consisting of a group of computer nodes that communicate over a network and work in concert to accomplish a common task. Distributed systems emerged so that computing and storage tasks beyond the capacity of a single computer could be performed with inexpensive, commodity machines; the aim is to process more data with more machines.
At present, distributed storage is widely used in the field of cloud computing thanks to advantages such as convenient storage, low cost, and easy horizontal and vertical scaling. The ceph storage system integrates tightly with Linux and provides a reliable, multi-purpose storage back end for cloud platforms. However, a distributed storage cluster is traditionally deployed online, which is complex to configure and easily affected by network conditions.
An effective solution to the problems in the related art has not been proposed yet.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides an automated deployment method for a distributed storage cluster that effectively saves deployment time, improves deployment efficiency, and offers better stability and security, thereby overcoming the technical problems in the related art.
Therefore, the invention adopts the following specific technical scheme:
an automated deployment method of a distributed storage cluster, comprising the following steps:
S1, obtaining the offline installation package of the storage system ceph corresponding to a preset operating system according to a preset rule;
S2, building the offline installation package of the storage system ceph by a preset method, associating all deb files and establishing their dependency relationships;
S3, configuring password-free login between the master node and the other nodes in the cluster according to a preset rule, and disabling the firewall;
S4, copying the offline installation package of the storage system ceph to the machines where the storage system ceph cluster is to be deployed according to a preset principle;
S5, installing and deploying an NTP server according to a preset principle;
S6, deploying the storage system ceph cluster with the ceph-deploy tool;
S7, checking the health state of the storage system ceph cluster;
S8, when the health state of the storage system ceph cluster is normal, creating a common user and creating pools;
and S9, automatically collecting the key ring ceph.keyring and the configuration file ceph.conf of the storage system ceph cluster, and exporting the common user's configuration files for clients to use.
Further, obtaining the offline installation package of the storage system ceph corresponding to the preset operating system according to the preset rule in S1 specifically comprises the following steps:
S11, building an environment with the same operating system as the cluster to be deployed;
S12, obtaining the storage system ceph cluster installation packages by configuring an online source.
Further, S11, building an environment with the same operating system as the cluster to be deployed, further comprises the following steps: when building the operating system, first mount the corresponding preset image on the physical machine; then load the image following the installation steps; and finally configure the service IP and storage IP of the machine.
Further, S2, building the offline installation package of the storage system ceph by a preset method, associating all deb files and establishing their dependency relationships, specifically comprises the following steps:
S21, copying the pre-downloaded deb packages into a new folder;
S22, modifying the permissions of the new folder, and establishing the dependency relationships of the deb packages;
and S23, packing the new folder into an installation package with the suffix .tar.gz.
Further, S3, configuring password-free login between the master node and the other nodes in the cluster according to a preset rule, specifically comprises the following steps:
S31, generating a secure shell (ssh) key on the master node, leaving the passphrase empty;
S32, mapping the IP addresses, and adding the correspondence between IP address and hostname to the /etc/hosts file of the master node;
and S33, copying the ssh key to each cluster node.
Further, S4, copying the offline installation package of the storage system ceph to the machines where the storage system ceph cluster is to be deployed according to a preset principle, specifically comprises the following steps:
S41, obtaining the hostname of each machine;
S42, achieving copying and intercommunication among the machines by prompting for the number of machines;
and S43, adding the offline installation package of the storage system ceph and its source path to the system sources, and commenting out all other sources.
Further, S5, installing and deploying the NTP server according to the preset principle, specifically comprises the following steps:
S51, taking the first machine of the storage system ceph cluster as the master node and the other machines as child nodes, to achieve clock synchronization;
S52, setting the master node to synchronize with its local clock by modifying the /etc/ntp.conf file, with the other nodes taking the master node as their service node;
and S53, after the configuration file is modified, setting the NTP server to start automatically at boot.
Further, S6, deploying the storage system ceph cluster with the ceph-deploy tool, specifically comprises the following steps:
S61, after generating the configuration file of the storage system ceph, adding the service IP and storage IP to the configuration file;
S62, deploying the monitors in sequence, and copying the configuration file and the admin key to each node of the storage system ceph;
and S63, deploying the manager daemon.
Further, S7, checking the health state of the storage system ceph cluster, specifically comprises the following steps:
S71, checking whether the health state of the storage system ceph cluster is HEALTH_OK; if so, executing S72, otherwise printing an error log;
S72, judging the deployment progress of the storage system ceph cluster by comparing the cluster state against HEALTH_OK.
Further, the pools in S8 include an image pool and an rbd pool, where the image pool is used to store system images and the rbd pool is used to store hard disk block devices; the pg number (the number of placement groups, the logical storage units) of a pool (a logical partition) can be calculated from the number of osd daemons.
The invention has the beneficial effects that:
1) with the invention, one-click deployment can be achieved; operation is convenient and fast, on-line efficiency is effectively improved, and upgrade deployment is also covered;
2) the method is written as a shell script, is naturally compatible with Linux systems, and requires no additional installation packages or configuration to be downloaded;
3) by automatically deploying the NTP server, clock synchronization of the cluster is accurate and fast, avoiding the errors and missed deployments of other deployment methods;
4) the mon, mgr and osd components can be deployed directly with the ceph-deploy tool without excessive scripting, which shortens script deployment time and improves deployment efficiency;
5) automatically exporting the keyring and conf with common-user permissions is safer, avoiding the possibility of an admin user mistakenly deleting pools or hard disks;
6) the pg number and pgp number of a ceph pool are calculated automatically from the osd count, so no manual calculation or configuration is needed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a flowchart of an automated deployment method of a distributed storage cluster according to an embodiment of the present invention.
Detailed Description
For further explanation of the various embodiments, the accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate the embodiments and, together with the description, serve to explain their principles of operation and to enable others of ordinary skill in the art to understand the embodiments and advantages of the invention. The figures are not to scale, and like reference numerals generally refer to like elements.
According to an embodiment of the invention, an automated deployment method of a distributed storage cluster is provided.
The invention will now be further described with reference to the drawings and the detailed description. As shown in fig. 1, an automated deployment method of a distributed storage cluster according to an embodiment of the invention comprises the following steps:
S1, obtaining the offline installation package of the storage system ceph corresponding to a preset operating system according to a preset rule;
wherein S1 specifically comprises the following steps:
S11, building an environment with the same operating system as the cluster to be deployed; specifically, S11 further comprises the following steps: when building the operating system, first mount the corresponding preset image on the physical machine; then load the image following the installation steps; finally, configure the service IP and storage IP of the machine;
S12, obtaining the storage system ceph cluster installation packages by configuring an online source. For example, on an ubuntu system the packages are obtained under the /var/cache/apt/archives/ directory; they may also be downloaded into a designated directory.
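As a concrete illustration of S11-S12, the following sketch stages the downloaded packages into an offline bundle, assuming an Ubuntu build machine; the package names, paths, and the `stage_debs` helper are illustrative, not part of the patent.

```shell
#!/bin/sh
# Sketch of S11-S12 on an assumed Ubuntu build machine. The real package
# download (commented below) needs network access; this helper only stages
# the cached .debs into a bundle folder.
set -e

stage_debs() {
    # $1 = destination folder for the offline bundle
    dest=$1
    mkdir -p "$dest"
    # On the build machine you would first download without installing:
    #   apt-get install --download-only -y ceph ceph-deploy ntp
    # and then copy the cached packages out of apt's cache:
    #   cp /var/cache/apt/archives/*.deb "$dest"/
    echo "staged offline packages into $dest"
}

stage_debs ./ceph-offline
```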
S2, building the offline installation package of the storage system ceph by a preset method, associating all deb files and establishing their dependency relationships;
wherein S2 specifically comprises the following steps:
S21, copying the pre-downloaded deb packages into a new folder;
S22, modifying the permissions of the new folder, and establishing the dependency relationships of the deb packages;
and S23, packing the new folder into an installation package with the suffix .tar.gz.
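A minimal sketch of S21-S23, assuming a Debian/Ubuntu system: the folder of pre-downloaded .deb packages is indexed, so apt can later resolve the dependency relationships locally, and packed into a .tar.gz bundle. The folder names are examples, and dpkg-scanpackages is only invoked if it is installed.

```shell
#!/bin/sh
# Sketch of S21-S23 (example folder names; a real bundle would contain the
# actual ceph .debs rather than the placeholder file created here).
set -e

PKG_DIR=./ceph-debs
mkdir -p "$PKG_DIR"
touch "$PKG_DIR/placeholder.deb"     # stands in for the pre-downloaded debs

chmod -R 755 "$PKG_DIR"              # S22: fix the folder permissions
if command -v dpkg-scanpackages >/dev/null 2>&1; then
    # S22: build the Packages index that records the dependency relations
    (cd "$PKG_DIR" && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz)
fi
tar czf ceph-offline.tar.gz "$PKG_DIR"   # S23: the .tar.gz install bundle
echo "bundle ready"
```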
S3, configuring password-free login between the master node and the other nodes in the cluster according to a preset rule, and disabling the firewall;
the purpose of S3 is to allow the management node and the storage nodes to communicate during cluster deployment, where S3 specifically comprises the following steps:
S31, generating a secure shell (ssh) key on the master node, leaving the passphrase empty;
S32, mapping the IP addresses, and adding the correspondence between IP address and hostname to the /etc/hosts file of the master node;
and S33, copying the ssh key to each cluster node.
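S31-S33 can be sketched as follows; the node names and addresses are invented for illustration, and the commands that would touch real nodes are shown as comments.

```shell
#!/bin/sh
# Sketch of S31-S33. Only the /etc/hosts entry generation actually runs
# here; key generation and distribution are shown as the commands one
# would run on a real master node.
set -e

# S31: generate an ssh key with an empty passphrase (run on the master node):
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# S32: build the ip -> hostname lines to append to /etc/hosts
make_hosts_entries() {
    # $1 = hostname prefix, $2 = node count, $3 = first three octets of the ip
    i=1
    while [ "$i" -le "$2" ]; do
        echo "$3.$i $1$i"
        i=$((i + 1))
    done
}
make_hosts_entries node 3 192.168.1

# S33: copy the key to each cluster node, e.g.:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
```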
S4, copying the offline installation package of the storage system ceph to the machines where the storage system ceph cluster is to be deployed according to a preset principle;
wherein S4 specifically comprises the following steps:
S41, obtaining the hostname of each machine;
S42, since cluster hostnames generally consist of a common prefix plus a serial number, achieving copying and intercommunication among the machines by prompting for the number of machines;
and S43, adding the offline installation package of the storage system ceph and its source path to the system sources, and commenting out all other sources to prevent interference.
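S43 can be sketched as below, assuming an apt-based system: every online source is commented out and the unpacked offline bundle is added as a local repository. A demo file stands in for /etc/apt/sources.list, and the repo path is an example.

```shell
#!/bin/sh
# Sketch of S43. The demo file stands in for /etc/apt/sources.list and the
# repo path is an example.
set -e

SOURCES=./demo-sources.list
REPO_DIR=/opt/ceph-debs

# a stand-in for the existing online source list
echo "deb http://archive.ubuntu.com/ubuntu focal main" > "$SOURCES"

sed -i 's/^deb /# deb /' "$SOURCES"                          # comment out other sources
echo "deb [trusted=yes] file://$REPO_DIR ./" >> "$SOURCES"   # add the local repo
cat "$SOURCES"
# afterwards the packages install offline:
#   apt-get update && apt-get install -y ceph
```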
S5, installing and deploying the NTP server according to a preset principle; specifically, the implementation uses broadcast mode with one-to-many connections: the server actively sends out time information, and the clients adjust their own time according to that information;
wherein S5 specifically comprises the following steps:
S51, taking the first machine of the storage system ceph cluster as the master node and the other machines as child nodes, to achieve clock synchronization;
S52, setting the master node to synchronize with its local clock by modifying the /etc/ntp.conf file, with the other nodes taking the master node as their service node; specifically, the local clock address is 127.127.1.0;
and S53, after the configuration file is modified, setting the NTP server to start automatically at boot.
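The NTP configuration of S51-S53 can be sketched as follows; `write_ntp_conf` emits the relevant ntp.conf lines, the file path is a stand-in for /etc/ntp.conf, and the child-node address is an example.

```shell
#!/bin/sh
# Sketch of S51-S53. The master node syncs to its local clock (127.127.1.0)
# and a child node points at the master's ip.
set -e

NTP_CONF=./demo-ntp.conf

write_ntp_conf() {
    # $1 = "master" for the master node, or the master's ip for a child node
    if [ "$1" = master ]; then
        # S52: the master synchronizes with its local clock, 127.127.1.0
        printf 'server 127.127.1.0\nfudge 127.127.1.0 stratum 10\n'
    else
        printf 'server %s iburst\n' "$1"
    fi
}

write_ntp_conf master > "$NTP_CONF"
cat "$NTP_CONF"
# S53: then enable the service at boot, e.g.:
#   systemctl enable --now ntp
```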
S6, deploying the storage system ceph cluster with the ceph-deploy tool; in this embodiment, ceph-deploy is a tool for deploying a ceph cluster, which can, via ssh to remote hosts, install the ceph software packages, create a cluster, add monitors, gather (or destroy) keys, add osd daemons and metadata servers, configure management hosts, and even tear down a cluster. The first three machines of the cluster are the daemon machines for the monitor (mon) and manager (mgr), and osd groups are arranged on the physical disks of every machine;
wherein S6 specifically comprises the following steps:
S61, after generating the configuration file of the storage system ceph, adding the service IP (public_network) and storage IP (cluster_network) to the configuration file;
S62, deploying the monitors in sequence, and copying the configuration file and the admin key to each node of the storage system ceph;
and S63, deploying the manager daemon.
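The S61-S63 sequence can be sketched with ceph-deploy roughly as below. The node names are examples, and the script is a dry run that only prints the commands; on a real master node the `run` helper would execute them instead.

```shell
#!/bin/sh
# Dry-run sketch of S61-S63 (example node names). Replace the echo in run()
# with "$@" to actually execute on a real master node.
run() { echo "+ $*"; }

run ceph-deploy new node1 node2 node3     # S61: generates ceph.conf
# S61: then append public_network (service ip) and cluster_network
# (storage ip) lines to the generated ceph.conf.
run ceph-deploy mon create-initial        # S62: deploy the monitors
run ceph-deploy admin node1 node2 node3   # S62: push ceph.conf + admin key
run ceph-deploy mgr create node1          # S63: deploy the manager daemon
```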
S7, checking the health state of the storage system ceph cluster;
wherein S7 specifically comprises the following steps:
S71, checking whether the health state of the storage system ceph cluster is HEALTH_OK; if so, executing S72, otherwise printing an error log;
S72, judging the deployment progress of the storage system ceph cluster by comparing the cluster state against HEALTH_OK.
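The S71-S72 check can be sketched as a polling loop. The health command is passed in as a parameter: a real deployment would pass a wrapper around `ceph health`, while the demo below uses a stub, so the function itself is only an illustration of the check logic.

```shell
#!/bin/sh
# Sketch of S71-S72: poll the cluster state until HEALTH_OK, or give up
# after a number of attempts and print an error log.
wait_for_health_ok() {
    # $1 = command printing the cluster health, $2 = max attempts
    n=0
    while [ "$n" -lt "$2" ]; do
        state=$("$1")
        if [ "$state" = "HEALTH_OK" ]; then
            echo "cluster healthy"
            return 0
        fi
        n=$((n + 1))
    done
    echo "error: cluster state '$state' is not HEALTH_OK" >&2
    return 1
}

fake_ceph_health() { echo HEALTH_OK; }   # stand-in for: ceph health
wait_for_health_ok fake_ceph_health 5
```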
S8, when the health state of the storage system ceph cluster is normal, creating a common user and building a pool;
the pools in S8 include an image pool and an rbd pool, where the image pool is used to store system images and the rbd pool is used to store hard disk block devices; the pg number (the number of placement groups, the logical storage units) of a pool (a logical partition) can be calculated from the number of osd daemons.
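The patent only states that the pg number is derived from the osd count; a common rule of thumb, which is an assumption here rather than part of the patent, is (osd_count * 100) / replica_count rounded down to a power of two:

```shell
#!/bin/sh
# Assumed pg-count rule of thumb: (osds * 100 / replicas), rounded down to
# the nearest power of two. The patent itself only says the pg number is
# calculated from the osd count.
calc_pg_num() {
    # $1 = number of osd daemons, $2 = replica count
    target=$(( $1 * 100 / $2 ))
    pg=1
    while [ $((pg * 2)) -le "$target" ]; do
        pg=$((pg * 2))
    done
    echo "$pg"
}

calc_pg_num 9 3    # 9 osds, 3 replicas -> prints 256
# a pool would then be created with, e.g.:
#   ceph osd pool create rbd $(calc_pg_num "$(ceph osd ls | wc -l)" 3)
```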
And S9, automatically collecting the key ring ceph.keyring and the configuration file ceph.conf of the storage system ceph cluster, and exporting the common user's configuration files for clients to use.
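S9 can be sketched as below. The user name, capabilities, and client host are illustrative, and the script is a dry run that only prints the commands; on a real master node the `run` helper would execute them.

```shell
#!/bin/sh
# Dry-run sketch of S9 (illustrative user name and capabilities). Replace
# the echo in run() with "$@" to execute on a real master node.
run() { echo "+ $*"; }

# create a restricted common user and export its keyring
run ceph auth get-or-create client.appuser \
    mon "allow r" osd "allow rw pool=rbd" \
    -o /etc/ceph/ceph.client.appuser.keyring
# hand ceph.conf and the user keyring to the client machine
run scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.appuser.keyring \
    client-host:/etc/ceph/
```

Using a non-admin keyring on the client side is what prevents the accidental pool or disk deletion mentioned in the advantages above.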
For a better understanding of the technical solution of the invention, the working principle of the invention in practice is described in detail below.
After password-free login within the cluster is completed, a check is performed: if interconnection between the master node and any other node in the cluster fails, a prompt is output and the password-free configuration is redone. After ceph-deploy completes the installation of ceph, the cluster health state is checked via ceph health detail; if the cluster is OK, creation of the pools and the common user is completed automatically.
In conclusion, with the above technical scheme, one-click deployment can be achieved conveniently and quickly, on-line efficiency is effectively improved, and upgrade deployment is also covered. In addition, the invention is written as a shell script, which is naturally compatible with Linux systems and requires no additional installation packages or configuration to be downloaded. By automatically deploying the NTP server, clock synchronization of the cluster is accurate and fast, avoiding the errors and missed deployments of other deployment methods. The mon, mgr and osd components can be deployed directly with the ceph-deploy tool without excessive scripting, which reduces script deployment time and improves deployment efficiency. The method can also automatically export the keyring and conf with common-user permissions, which is safer and avoids the possibility of mistakenly deleting pools or hard disks when using the admin user. Finally, the pg number and pgp number of a ceph pool are calculated automatically from the osd count, so no manual calculation or configuration is required.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An automated deployment method of a distributed storage cluster is characterized by comprising the following steps:
s1, acquiring an offline installation package of a storage system ceph corresponding to a preset operating system by adopting a preset rule;
s2, completing the production of the off-line installation package of the storage system ceph through a preset method, and associating all ded files and establishing a dependency relationship;
s3, configuring the secret-free login of the main node and other nodes in the cluster according to a preset rule, and closing a firewall;
s4, copying the offline installation package of the storage system ceph to a machine where a storage system ceph cluster needs to be deployed by adopting a preset principle;
s5, installing and deploying an ntp server according to a preset principle;
s6, deploying the storage system ceph cluster by using the ceph-deploy tool;
s7, checking the health state of the storage system ceph cluster;
s8, when the health state of the storage system ceph cluster is normal, creating a common user and building a pool;
and S9, automatically collecting the keyring ceph.keyring and the configuration file ceph.conf of the storage system ceph cluster, exporting the configuration file of the common user, and providing the configuration file for a client to use.
2. The automated deployment method of the distributed storage cluster according to claim 1, wherein the step of obtaining the offline installation package of the storage system ceph corresponding to the preset operating system by using the preset rule in S1 specifically includes the following steps:
s11, building an environment with the same operating system as the cluster to be deployed;
s12, obtaining the storage system ceph cluster installation package by configuring an online source.
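The procedure in S11–S12 can be sketched as follows on an apt-based system matching the target OS. This is a hypothetical illustration, not part of the claims; the package names and download directory are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of S12: on a machine running the same OS release as
# the cluster to be deployed, download ceph and its full dependency
# closure as .deb files without installing anything.
mkdir -p /root/debs && cd /root/debs

# apt-cache enumerates the recursive dependency set of ceph; apt-get
# download fetches each package from the configured online source.
apt-get download ceph ceph-deploy \
    $(apt-cache depends --recurse --no-recommends --no-suggests \
      --no-conflicts --no-breaks --no-replaces --no-enhances ceph \
      | grep '^\w' | sort -u)
```

The same machine image and configured source are then reused in S2 to turn the downloaded files into a self-contained offline package.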
3. The method according to claim 2, wherein the step of S11 constructing an environment having the same operating system as the cluster to be deployed further comprises the steps of: when the operating system is built, a preset corresponding image is first mounted on a physical machine; then the image is loaded according to the installation steps; and finally the service ip and the storage ip of the machine are configured.
4. The automated deployment method of the distributed storage cluster according to claim 1, wherein the S2 completes, by a preset method, the production of the offline installation package of the storage system ceph, and associates all the deb files and establishes the dependency relationship specifically includes the following steps:
s21, copying the pre-downloaded deb file packages into a new folder;
s22, modifying the permissions of the new folder, and establishing the dependency relationship of the deb file packages;
and S23, packaging the new folder into a file installation package with the suffix tar.gz.
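Steps S21–S23 can be sketched as below. This is a hypothetical example: the paths are illustrative, and the use of dpkg-scanpackages to build the dependency index is an assumption about how the "dependency relationship" is established, since the patent does not name a tool:

```shell
#!/bin/sh
# Hypothetical sketch of S21-S23: gather the pre-downloaded .deb files,
# generate a Packages index so apt can resolve their dependencies, and
# pack the result into a tar.gz installation package.
mkdir -p /opt/ceph-offline
cp /root/debs/*.deb /opt/ceph-offline/            # S21: copy deb packages
chmod -R 755 /opt/ceph-offline                    # S22: adjust permissions

cd /opt/ceph-offline
# Build the package index that records the dependency relationships.
dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

cd /opt
tar czf ceph-offline.tar.gz ceph-offline          # S23: tar.gz package
```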
5. The automated deployment method of the distributed storage cluster according to claim 1, wherein the step of S3 configuring the password-free login of the master node and other nodes inside the cluster according to the preset rule specifically includes the following steps:
s31, generating a secure shell protocol ssh key at the main node, with an empty passphrase;
s32, mapping the ip address, and adding the corresponding information of the ip address and the hostname into the/etc/hosts file of the main node;
and S33, copying the ssh key to each cluster node.
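A minimal sketch of S31–S33 (plus the firewall step of S3) is shown below; the node names and ip addresses are examples, not from the patent:

```shell
#!/bin/sh
# Hypothetical sketch of S31-S33, run on the main node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # S31: empty passphrase

cat >> /etc/hosts <<'EOF'                          # S32: ip-to-hostname mapping
10.0.0.1 node1
10.0.0.2 node2
10.0.0.3 node3
EOF

for host in node1 node2 node3; do                  # S33: push key to each node
    ssh-copy-id "root@$host"
done

# Closing the firewall, as required by S3 of claim 1 (assumes firewalld).
systemctl stop firewalld && systemctl disable firewalld
```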
6. The automated deployment method of the distributed storage cluster according to claim 1, wherein the step S4 of copying the offline installation package of the storage system ceph to the machine where the storage system ceph cluster needs to be deployed by using a preset principle specifically includes the following steps:
s41, acquiring a hostname of the machine;
s42, realizing copying and communication among the machines by prompting the user to input the number of machines;
and S43, adding the offline installation package of the storage system ceph and its source path to the system source, and commenting out all other source entries.
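Step S43 can be sketched as follows, assuming an apt-based system and reading "all other ded files" as the other deb source entries being commented out; the unpack path is illustrative:

```shell
#!/bin/sh
# Hypothetical sketch of S43: register the unpacked offline package as a
# local apt source and comment out every other source entry, so the
# install draws only on the offline package.
sed -i 's/^deb/# deb/' /etc/apt/sources.list       # disable other sources
echo "deb [trusted=yes] file:///opt/ceph-offline ./" \
    > /etc/apt/sources.list.d/ceph-offline.list
apt-get update
```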
7. The automated deployment method of the distributed storage cluster according to claim 1, wherein the S5 installing and deploying the ntp server according to a preset rule specifically includes the following steps:
s51, taking the first machine of the storage system ceph cluster as the main node and the other machines as child nodes to complete clock synchronization;
s52, by modifying the /etc/ntp.conf file, the main node is set to synchronize with its local ip, and the other nodes take the main node as their service node;
and S53, after the configuration file is modified, setting the ntp server to start automatically at boot.
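The configuration in S51–S53 can be sketched as a config fragment; the subnet and ip addresses are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of S51-S53, run on the main node: serve time from
# the local clock and allow the cluster subnet to synchronize.
cat > /etc/ntp.conf <<'EOF'
# local clock driver, used as the time source for the cluster
server 127.127.1.0 iburst
fudge  127.127.1.0 stratum 10
# allow the cluster subnet to synchronize (example subnet)
restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap
EOF

# On every child node the file instead points at the main node, e.g.:
#   echo "server 10.0.0.1 iburst" > /etc/ntp.conf

systemctl enable ntp      # S53: start automatically at boot
systemctl restart ntp
```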
8. The method according to claim 1, wherein the S6 deployment of the storage system ceph cluster using the ceph-deploy tool specifically includes the following steps:
s61, after generating a configuration file of the storage system ceph, adding a service ip and a storage ip into the configuration file;
s62, deploying monitors in sequence, and copying the configuration file and the management key to each node of the storage system ceph;
and S63, deploying the manager daemon.
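Steps S61–S63 map onto a short ceph-deploy command sequence, sketched below; the node names, network CIDRs, and osd device path are examples, not from the patent:

```shell
#!/bin/sh
# Hypothetical ceph-deploy sequence for S61-S63.
ceph-deploy new node1 node2 node3            # S61: generates ceph.conf

cat >> ceph.conf <<'EOF'
public network  = 10.0.0.0/24
cluster network = 10.0.1.0/24
EOF
# public = service ip network, cluster = storage ip network (examples)

ceph-deploy mon create-initial               # S62: deploy monitors
ceph-deploy admin node1 node2 node3          # push ceph.conf + admin keyring
ceph-deploy mgr create node1                 # S63: manager daemon
ceph-deploy osd create --data /dev/sdb node1 # one osd per data disk
```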
9. The method according to claim 1, wherein the step of S7 checking the health status of the storage system ceph cluster specifically includes the following steps:
s71, checking whether the health state of the storage system ceph cluster is HEALTH_OK; if so, executing S72, otherwise printing an error log;
s72, judging the deployment progress of the storage system ceph cluster by comparing whether the cluster state is consistent with the health state HEALTH_OK.
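The check in S71 can be sketched as a small shell function. The status is passed as an argument (normally the output of `ceph health`) so the logic can be shown without a live cluster:

```shell
#!/bin/sh
# Hypothetical sketch of S71: compare the cluster health string against
# HEALTH_OK; print an error log line otherwise.
check_health() {
    status="$1"
    if [ "$status" = "HEALTH_OK" ]; then
        echo "cluster healthy"
    else
        echo "error: cluster state is $status" >&2
        return 1
    fi
}

check_health "HEALTH_OK"    # prints "cluster healthy"
# on a real cluster: check_health "$(ceph health)"
```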
10. The method according to claim 1, wherein the pools in S8 include an image pool and an rbd pool, the image pool is used to store a system image, and the rbd pool is used to store a hard disk block device, wherein the number of logical storage units pg corresponding to a logical partition pool can be calculated from the number of osd daemons.
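The pg calculation of claim 10 can be sketched with a common sizing rule of thumb: roughly 100 placement groups per osd, divided by the replica count, rounded down to a power of two. The patent does not disclose its exact formula, so this is an illustrative assumption:

```shell
#!/bin/sh
# Hypothetical sketch of the pg/pgp calculation from the osd count.
calc_pg_num() {
    osd_count="$1"
    replicas="$2"
    target=$(( osd_count * 100 / replicas ))
    # round down to the nearest power of two
    pg=1
    while [ $(( pg * 2 )) -le "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

calc_pg_num 12 3    # 12 osds, 3 replicas -> prints 256
```

The same value would then be applied as both pg_num and pgp_num when the pool is created.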
CN202010878400.8A 2020-08-27 2020-08-27 Automatic deployment method of distributed storage cluster Pending CN112114746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010878400.8A CN112114746A (en) 2020-08-27 2020-08-27 Automatic deployment method of distributed storage cluster

Publications (1)

Publication Number Publication Date
CN112114746A true CN112114746A (en) 2020-12-22

Family

ID=73804223

Country Status (1)

Country Link
CN (1) CN112114746A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860276A (en) * 2021-01-28 2021-05-28 北京北信源软件股份有限公司 Distributed file system deployment method, device, server and storage medium
CN113810373A (en) * 2021-08-11 2021-12-17 长沙证通云计算有限公司 Ceph visual one-key deployment method based on national cryptographic algorithm
CN115987772A (en) * 2022-12-09 2023-04-18 深圳安巽科技有限公司 Operation and maintenance management method, operation and maintenance management platform and storage medium
CN118394611A (en) * 2024-06-25 2024-07-26 广州合明软件科技有限公司 Out-of-band installation operating system progress identification method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201222