Method for managing compute-node backend storage in an OpenStack environment
Technical Field
The invention belongs to the field of PaaS platform resource management, and particularly relates to a method for managing the backend storage of compute nodes in an OpenStack environment.
Background
OpenStack is an open-source distributed virtualization platform that has emerged in recent years. VMware, a legacy virtualization platform available only as a commercial product, held a significant market share in early cloud-computing projects. Xen, a virtualization platform under the Citrix banner that is likewise sold only commercially, is a competitor to VMware. A virtual machine is a complete computer system, simulated in software, that has full hardware functionality and runs in a completely isolated environment. A compute node is a physical server in the virtualization platform used to run virtual machines; it mainly provides CPU and memory resources to them. Backend storage is the storage device that holds a virtual machine's disk data; it is either exclusive (one backend can be used by only one compute node) or shared (one backend can be used by multiple compute nodes).
In the virtualized environments common on the market (such as VMware and Xen), compute nodes and backend storage are usually managed separately as two resources: the compute node provides only CPU and memory, while a virtual machine's system disk and any additional expansion disks are kept on backend storage. In OpenStack, however, backend storage is divided into two types: one is managed together with the compute node and stores the virtual machine's system disk (hereinafter the compute backend); the other is managed independently and stores the virtual machine's expansion disks (hereinafter the storage backend).
In OpenStack, information about a compute backend can only be obtained through the API that returns compute-node information. The records returned by that interface are per compute node, so a compute backend appears once for every compute node attached to it. As a result, a shared compute backend is counted multiple times. Such data causes resource-scheduling errors when virtual machines are created, so that virtual machines and their system disks land on the wrong compute nodes and backend storage.
This is a shortcoming of the prior art, and it is therefore very necessary to provide a method for managing compute-node backend storage in an OpenStack environment that addresses it.
Disclosure of Invention
The invention aims to provide a method for managing compute-node backend storage in an OpenStack environment, addressing the defect that shared compute backends in an OpenStack environment cannot be managed comprehensively, so as to solve the technical problem above.
To achieve this purpose, the invention provides the following technical scheme:
a method for managing compute-node backend storage in an OpenStack environment comprises the following steps:
step 1, establishing a data model, which comprises creating a storage information table, a storage tag table, and a storage merge table;
step 2, compute-backend synchronization: pulling detailed compute-node information from OpenStack through its API and saving it in the storage information table;
step 3, shared-storage merging: merging the shared compute backends that were synchronized repeatedly, to eliminate data redundancy;
step 4, merged-storage synchronization: updating the merged storage's capacity information through the OpenStack API;
and step 5, merged-storage eviction: evicting from the merged shared storage those compute backends whose capacity has changed.
Further, in the data model established in step 1,
the fields of the storage information table comprise an ID, a storage name, a total storage capacity, a used storage capacity, and a storage state;
the fields of the storage tag table comprise an ID, an associated storage ID, a tag name, and a tag value;
the fields of the storage merge table comprise a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity, and a storage state.
Further, the IDs of the storage information table and the storage tag table both use 32-character universally unique identifiers (UUIDs);
the total and used storage capacities in both the storage information table and the storage merge table are expressed in GB;
the storage state in the storage information table and the storage merge table refers to the operating state of the storage device and is either available or unavailable.
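The three tables above can be sketched as plain Python dataclasses — a minimal illustration of the schema, not the patented implementation; the field and class names here are the author's own choices:

```python
import uuid
from dataclasses import dataclass, field

def gen_id() -> str:
    # a UUID in 32-character hex form, as used for record IDs
    return uuid.uuid4().hex

@dataclass
class StorageInfo:
    """One row of the storage information table."""
    name: str
    total_gb: int   # total storage capacity, in GB
    used_gb: int    # used storage capacity, in GB
    state: str      # "available" or "unavailable"
    id: str = field(default_factory=gen_id)

@dataclass
class StorageTag:
    """One row of the storage tag table."""
    storage_id: str  # ID of the associated StorageInfo record
    tag_name: str    # e.g. "MERGED" or "EXPELLED"
    tag_value: str
    id: str = field(default_factory=gen_id)

@dataclass
class StorageMergeEntry:
    """One row of the storage merge table (a member of a merged backend)."""
    storage_id: str         # the member backend's original ID
    merged_storage_id: str  # ID of the post-merge record in the info table
    name: str
    total_gb: int
    used_gb: int
    state: str
```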
Further, the compute-backend synchronization of step 2 specifically comprises the following steps:
step 2-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 2-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name;
step 2-3, saving the information obtained in step 2-2 into the storage information table;
and step 2-4, end.
Further, the rule for generating the compute backend's name is "<compute node name>_backend".
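Steps 2-1 through 2-3 amount to flattening the hypervisor list into storage-information rows. The sketch below assumes the response field names of Nova's os-hypervisors/detail API (`hypervisor_hostname`, `local_gb`, `local_gb_used`, `state`); they may differ across OpenStack releases:

```python
def parse_hypervisors(payload: dict) -> list:
    """Convert an os-hypervisors/detail response into storage-information rows.

    Assumed Nova field names: hypervisor_hostname, local_gb,
    local_gb_used, state ("up"/"down").
    """
    rows = []
    for hv in payload.get("hypervisors", []):
        rows.append({
            # naming rule from the method: "<compute node name>_backend"
            "name": hv["hypervisor_hostname"] + "_backend",
            "total_gb": hv["local_gb"],
            "used_gb": hv["local_gb_used"],
            "state": "available" if hv["state"] == "up" else "unavailable",
        })
    return rows
```

Each returned row maps directly onto one storage-information-table record.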
Further, the shared-storage merging of step 3 specifically comprises the following steps:
step 3-1, selecting the compute backends to be merged from the storage information table;
step 3-2, checking whether the selected compute backends' total storage capacities are consistent; if so, proceeding to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, and the method proceeds to step 3-6;
step 3-3, generating a new merged compute-backend record in the storage information table;
step 3-4, moving the selected member compute backends from the storage information table to the storage merge table and associating them with the newly generated merged compute backend in the storage information table;
step 3-5, adding a "MERGED" tag to the newly generated merged compute backend, marking the record as a merged record; and step 3-6, end.
Further, generating a new merged compute-backend record in the storage information table in step 3-3 specifically comprises:
step 3-3-1, customizing the storage name;
step 3-3-2, setting the used capacity to the maximum used capacity among the merged compute backends;
and step 3-3-3, if at least one of the merged compute backends' operating states is available, setting the merged compute backend's state to available; otherwise, setting it to unavailable.
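The rules of steps 3-2 through 3-3-3 can be sketched as follows; this is an illustrative implementation, with record layout and function name chosen here for the example:

```python
def make_merged_record(name: str, members: list) -> dict:
    """Build the post-merge record (steps 3-2, 3-3-1 to 3-3-3).

    members are storage-information rows with total_gb, used_gb, state.
    Raises if the totals differ, since backends with different total
    capacities cannot belong to the same shared storage (step 3-2).
    """
    totals = {m["total_gb"] for m in members}
    if len(totals) != 1:
        raise ValueError("members do not belong to the same shared storage")
    return {
        "name": name,                                   # step 3-3-1: custom name
        "total_gb": totals.pop(),
        # step 3-3-2: used capacity is the maximum across members
        "used_gb": max(m["used_gb"] for m in members),
        # step 3-3-3: available if any member is available
        "state": ("available"
                  if any(m["state"] == "available" for m in members)
                  else "unavailable"),
    }
```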
Further, the merged-storage synchronization of step 4 specifically comprises the following steps:
step 4-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 4-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name;
step 4-3, updating the total storage capacity, used capacity, and state of the corresponding records in the storage merge table according to the compute backends' names;
step 4-4, updating the merged compute backends' information in the storage information table according to the updated records in the storage merge table;
step 4-5, checking the storage merge table to determine whether any member's total storage capacity within a merged compute backend has changed; if not, proceeding to step 4-6; if so, proceeding to step 5;
and step 4-6, end.
Further, updating the merged compute backends' information in the storage information table according to the updated records in the storage merge table in step 4-4 specifically comprises:
step 4-4-1, setting the used capacity to the maximum used capacity among the member compute backends;
and step 4-4-2, if at least one member compute backend's operating state is available, setting the merged compute backend's state to available; otherwise, setting it to unavailable.
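Steps 4-3 through 4-5 can be sketched as one refresh pass over a merged backend's members; data shapes and names are again illustrative, not the patented code:

```python
def refresh_merged(merged: dict, members: list, fresh: dict) -> bool:
    """Apply freshly pulled per-node data to the merge-table members
    (step 4-3), recompute the merged record (step 4-4), and return True
    if any member's total capacity changed, which triggers the eviction
    of step 5 (step 4-5).

    fresh maps a backend name to its new
    {"total_gb", "used_gb", "state"} values.
    """
    total_changed = False
    for member in members:
        new = fresh[member["name"]]
        if new["total_gb"] != member["total_gb"]:
            total_changed = True
        member.update(new)
    # step 4-4-1: merged used capacity is the maximum across members
    merged["used_gb"] = max(m["used_gb"] for m in members)
    # step 4-4-2: merged state is available if any member is available
    merged["state"] = ("available"
                       if any(m["state"] == "available" for m in members)
                       else "unavailable")
    return total_changed
```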
Further, the merged-storage eviction of step 5 specifically comprises the following steps:
step 5-1, judging whether every member's total storage capacity within the merged compute backend has changed;
if so, proceeding to step 5-2; otherwise, proceeding to step 5-3;
step 5-2, judging whether the changed members' total storage capacities are still consistent;
if so, the shared compute backend's own capacity has changed, the merged compute backend's total storage capacity in the storage information table is updated, and the method proceeds to step 5-5;
if not, the capacity changes are not uniform, and the method proceeds to step 5-3;
step 5-3, moving the changed member compute backends from the storage merge table back into the storage information table;
step 5-4, adding an "EXPELLED" tag to the compute backends moved back into the storage information table, marking those records as evicted;
and step 5-5, end.
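The eviction decision of steps 5-1 through 5-4 can be sketched as a single function per merged backend; the interface is an assumption made for this example:

```python
def evict_changed(merged: dict, members: list, old_totals: dict):
    """Decide eviction for one merged compute backend (step 5).

    old_totals maps a member name to the total capacity recorded at the
    previous sync. Returns (members to move back to the storage
    information table with an "EXPELLED" tag, whether the merged
    record's total capacity was updated in place).
    """
    changed = [m for m in members if m["total_gb"] != old_totals[m["name"]]]
    if changed and len(changed) == len(members):
        # step 5-2: all members changed; if the new totals agree, the
        # shared storage itself was resized, so update the merged record
        new_totals = {m["total_gb"] for m in members}
        if len(new_totals) == 1:
            merged["total_gb"] = new_totals.pop()
            return [], True
    # steps 5-3 and 5-4: only the diverging members are evicted
    return changed, False
```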
The invention's beneficial effects are as follows: by managing OpenStack's compute backends in a coordinated way and providing a merge-management function for shared compute backends, the invention shields redundant data, prevents virtual machines from being created on the wrong compute nodes and backend storage, and provides accurate data support for scheduling virtualization-platform resources.
In addition, the invention has a reliable design principle, a simple structure, and very broad application prospects.
Therefore, compared with the prior art, the invention has prominent substantive features and represents remarkable progress, and the beneficial effects of its implementation are also evident.
Drawings
FIG. 1 is a flow chart of compute-backend synchronization;
FIG. 2 is a flow chart of shared-storage merging;
FIG. 3 is a flow chart of merged-storage synchronization;
FIG. 4 is a flow chart of merged-storage eviction;
FIG. 5 is schematic diagram 1 of the compute nodes' shared storage in embodiment 2;
FIG. 6 is storage information table 1 of embodiment 2;
FIG. 7 is storage information table 2 of embodiment 2;
FIG. 8 is storage information table 3 of embodiment 2;
FIG. 9 is storage merge table 1 of embodiment 2;
FIG. 10 is storage tag table 1 of embodiment 2;
FIG. 11 is storage merge table 2 of embodiment 2;
FIG. 12 is storage information table 4 of embodiment 2;
FIG. 13 is storage tag table 2 of embodiment 2;
FIG. 14 is schematic diagram 2 of the compute nodes' shared storage in embodiment 2.
Detailed description of the embodiments:
To make the objects, features, and advantages of the present invention clearer and easier to understand, the technical solutions of the invention are described clearly and completely below with reference to the accompanying drawings of its embodiments.
Embodiment 1. The present invention provides a method for managing compute-node backend storage in an OpenStack environment, comprising the following steps:
step 1, establishing a data model, which comprises creating a storage information table, a storage tag table, and a storage merge table;
in establishing the data model,
the fields of the storage information table comprise an ID, a storage name, a total storage capacity, a used storage capacity, and a storage state;
the fields of the storage tag table comprise an ID, an associated storage ID, a tag name, and a tag value;
the fields of the storage merge table comprise a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity, and a storage state;
the IDs of the storage information table and the storage tag table both use 32-character universally unique identifiers (UUIDs);
the total and used storage capacities in both the storage information table and the storage merge table are expressed in GB;
the storage state in the storage information table and the storage merge table refers to the operating state of the storage device and is either available or unavailable;
step 2, compute-backend synchronization: pulling detailed compute-node information from OpenStack through its API and saving it in the storage information table; as shown in fig. 1, the specific steps are as follows:
step 2-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 2-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name by the rule "<compute node name>_backend";
step 2-3, saving the information obtained in step 2-2 into the storage information table;
step 2-4, end;
step 3, shared-storage merging: merging the shared compute backends that were synchronized repeatedly, to eliminate data redundancy; as shown in fig. 2, the specific steps are as follows:
step 3-1, selecting the compute backends to be merged from the storage information table;
step 3-2, checking whether the selected compute backends' total storage capacities are consistent; if so, proceeding to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, and the method proceeds to step 3-6;
step 3-3, generating a new merged compute-backend record in the storage information table;
step 3-3-1, customizing the storage name;
step 3-3-2, setting the used capacity to the maximum used capacity among the merged compute backends;
step 3-3-3, if at least one of the merged compute backends' operating states is available, setting the merged compute backend's state to available, otherwise setting it to unavailable;
step 3-4, moving the selected member compute backends from the storage information table to the storage merge table and associating them with the newly generated merged compute backend in the storage information table;
step 3-5, adding a "MERGED" tag to the newly generated merged compute backend, marking the record as a merged record;
step 3-6, end;
step 4, merged-storage synchronization: updating the merged storage's capacity information through the OpenStack API; as shown in fig. 3, the specific steps are as follows:
step 4-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 4-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name;
step 4-3, updating the total storage capacity, used capacity, and state of the corresponding records in the storage merge table according to the compute backends' names;
step 4-4, updating the merged compute backends' information in the storage information table according to the updated records in the storage merge table;
step 4-4-1, setting the used capacity to the maximum used capacity among the member compute backends;
step 4-4-2, if at least one member compute backend's operating state is available, setting the merged compute backend's state to available, otherwise setting it to unavailable;
step 4-5, checking the storage merge table to determine whether any member's total storage capacity within a merged compute backend has changed; if not, proceeding to step 4-6; if so, proceeding to step 5;
step 4-6, end;
step 5, merged-storage eviction: evicting from the merged shared storage those compute backends whose capacity has changed; as shown in fig. 4, the specific steps are as follows:
step 5-1, judging whether every member's total storage capacity within the merged compute backend has changed;
if so, proceeding to step 5-2; otherwise, proceeding to step 5-3;
step 5-2, judging whether the changed members' total storage capacities are still consistent;
if so, the shared compute backend's own capacity has changed, the merged compute backend's total storage capacity in the storage information table is updated, and the method proceeds to step 5-5;
if not, the capacity changes are not uniform, and the method proceeds to step 5-3;
step 5-3, moving the changed member compute backends from the storage merge table back into the storage information table;
step 5-4, adding an "EXPELLED" tag to the compute backends moved back into the storage information table, marking those records as evicted;
and step 5-5, end.
Embodiment 2. This embodiment applies the method for managing compute-node backend storage in an OpenStack environment to compute nodes that share compute backends, as shown in fig. 5, and comprises the following steps:
step 1, establishing a data model, which comprises creating a storage information table, a storage tag table, and a storage merge table;
in establishing the data model,
the fields of the storage information table comprise an ID, a storage name, a total storage capacity, a used storage capacity, and a storage state;
the fields of the storage tag table comprise an ID, an associated storage ID, a tag name, and a tag value;
the fields of the storage merge table comprise a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity, and a storage state;
the IDs of the storage information table and the storage tag table both use 32-character universally unique identifiers (UUIDs);
the total and used storage capacities in both the storage information table and the storage merge table are expressed in GB;
the storage state in the storage information table and the storage merge table refers to the operating state of the storage device and is either available or unavailable;
step 2, compute-backend synchronization: pulling detailed compute-node information from OpenStack through its API and saving it in the storage information table; the specific steps are as follows:
step 2-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 2-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name by the rule "<compute node name>_backend"; the synchronized data are: compute node 1_backend, total storage capacity 40 GB, available capacity 30 GB, state available; compute node 2_backend, total 40 GB, available 30 GB, available; compute node 3_backend, total 40 GB, available 30 GB, available; compute node 4_backend, total 50 GB, available 45 GB, available; compute node 5_backend, total 50 GB, available 45 GB, available; compute node 6_backend, total 50 GB, available 45 GB, available;
step 2-3, saving the information obtained in step 2-2 into the storage information table, generating the storage information table shown in fig. 6;
step 2-4, end;
step 3, shared-storage merging: merging the shared compute backends that were synchronized repeatedly, to eliminate data redundancy; the specific steps are as follows:
step 3-1, selecting the compute backends to be merged from the storage information table;
step 3-2, checking whether the selected compute backends' total storage capacities are consistent; if so, proceeding to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, and the method proceeds to step 3-6; here the total storage capacities of compute node 1_backend, compute node 2_backend, and compute node 3_backend are all 40 GB, and those of compute node 4_backend, compute node 5_backend, and compute node 6_backend are all 50 GB;
step 3-3, generating new merged compute-backend records in the storage information table: compute node 1_backend, compute node 2_backend, and compute node 3_backend are merged into compute nodes 1-3 shared_backend, and compute node 4_backend, compute node 5_backend, and compute node 6_backend are merged into compute nodes 4-6 shared_backend, generating the storage information table shown in fig. 7;
step 3-4, moving the selected member compute backends from the storage information table to the storage merge table and associating them with the newly generated merged compute backends in the storage information table: compute node 1_backend, compute node 2_backend, and compute node 3_backend, together with compute node 4_backend, compute node 5_backend, and compute node 6_backend, are moved from the storage information table to the storage merge table, generating the storage information table shown in fig. 8 and the storage merge table shown in fig. 9;
step 3-5, adding a "MERGED" tag to each newly generated merged compute backend, marking the record as a merged record: "MERGED" tags are added to compute nodes 1-3 shared_backend and compute nodes 4-6 shared_backend, generating the storage tag table shown in fig. 10;
step 3-6, end;
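The candidate grouping of this embodiment can be reproduced by bucketing backends on total capacity. This is a simplification for illustration only: equal totals alone do not prove two backends share storage, which is why step 3-1 still requires an operator to select the candidates.

```python
from collections import defaultdict

# total capacities pulled in step 2 of this embodiment (GB)
totals = {
    "compute node 1_backend": 40, "compute node 2_backend": 40,
    "compute node 3_backend": 40, "compute node 4_backend": 50,
    "compute node 5_backend": 50, "compute node 6_backend": 50,
}

# bucket backends by total capacity to find merge candidates
groups = defaultdict(list)
for name, total_gb in totals.items():
    groups[total_gb].append(name)

# nodes 1-3 (40 GB) form one candidate group, nodes 4-6 (50 GB) another
```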
step 4, merged-storage synchronization: updating the merged storage's capacity information through the OpenStack API; the specific steps are as follows:
step 4-1, calling the OpenStack API os-hypervisors/detail to pull detailed compute-node information;
step 4-2, parsing the compute-node information to obtain the total storage capacity, available capacity, and operating state of the compute backend attached to each compute node, and generating the compute backend's name; at this time compute node 1_backend has a total storage capacity of 35 GB, an available capacity of 20 GB, and an available state; compute node 2_backend, total 40 GB, available 30 GB, available; compute node 3_backend, total 40 GB, available 30 GB, available; compute node 4_backend, total 60 GB, available 45 GB, available; compute node 5_backend, total 60 GB, available 45 GB, available; compute node 6_backend, total 60 GB, available 45 GB, available;
step 4-3, updating the total storage capacity, used capacity, and state of the corresponding records in the storage merge table according to the compute backends' names, generating the storage merge table shown in fig. 11;
step 4-4, updating the merged compute backends' information in the storage information table according to the updated records in the storage merge table; the total storage capacity of compute nodes 4-6 shared_backend becomes 60 GB;
step 4-5, checking the storage merge table to determine whether any member's total storage capacity within a merged compute backend has changed; if not, proceeding to step 4-6; if so, proceeding to step 5; here the total storage capacity of compute node 1_backend, a member of compute nodes 1-3 shared_backend, has changed, and the total storage capacities of compute node 4_backend, compute node 5_backend, and compute node 6_backend, the members of compute nodes 4-6 shared_backend, have all changed, so the method proceeds to step 5;
step 4-6, end;
step 5, merged-storage eviction: evicting from the merged shared storage those compute backends whose capacity has changed; the specific steps are as follows:
step 5-1, judging whether every member's total storage capacity within each merged compute backend has changed;
if so, proceeding to step 5-2; otherwise, proceeding to step 5-3; for compute nodes 1-3 shared_backend only compute node 1_backend's total storage capacity changed, so the method proceeds to step 5-3; for compute nodes 4-6 shared_backend the total storage capacities of compute node 4_backend, compute node 5_backend, and compute node 6_backend all changed, so the method proceeds to step 5-2;
step 5-2, judging whether the changed members' total storage capacities are still consistent;
if so, the shared compute backend's own capacity has changed, the merged compute backend's total storage capacity in the storage information table is updated, and the method proceeds to step 5-5;
if not, the capacity changes are not uniform, and the method proceeds to step 5-3; here the total storage capacities of compute node 4_backend, compute node 5_backend, and compute node 6_backend all changed from 50 GB to 60 GB, consistently, so the total storage capacity of compute nodes 4-6 shared_backend in the storage information table is updated and the method proceeds to step 5-5;
step 5-3, moving the changed member compute backends from the storage merge table back into the storage information table: compute node 1_backend is moved back, generating the storage information table shown in fig. 12;
step 5-4, adding an "EXPELLED" tag to the compute backends moved back into the storage information table, marking those records as evicted: an "EXPELLED" tag is added to compute node 1_backend, generating the storage tag table shown in fig. 13; the compute nodes' sharing of compute backends is now as shown in fig. 14;
step 5-5, end.
The embodiments of the present invention are illustrative rather than restrictive and are provided only to help in understanding the invention. The invention is therefore not limited to the embodiments described above, and other embodiments derived by those skilled in the art from its technical solutions also fall within its scope of protection.