CN107423111B - Openstack environment computing node back-end storage management method - Google Patents

Openstack environment computing node back-end storage management method Download PDF

Info

Publication number
CN107423111B
Authority
CN
China
Prior art keywords: storage, computing, back end, capacity, merged
Prior art date
Legal status: Active
Application number
CN201710486725.XA
Other languages
Chinese (zh)
Other versions
CN107423111A
Inventor
Zhao Shan (赵山)
Current Assignee
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN201710486725.XA
Publication of CN107423111A
Application granted
Publication of CN107423111B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0667 Virtualisation aspects at data level, e.g. file, record or object virtualisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/547 Remote procedure calls [RPC]; Web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a method for managing the back-end storage of compute nodes in an Openstack environment, comprising the following steps: 1) establish a data model comprising a storage information table, a storage tag table and a storage merge table; 2) compute-backend synchronization: obtain the compute node information through an API and save it in the storage information table; 3) shared-storage merging: merge the shared compute backends that have been synchronized repeatedly, eliminating data redundancy; 4) merged-storage synchronization: update the capacity information of the merged storage through the API; 5) merged-storage eviction: evict from the shared storage any compute backend whose connection has changed. By managing the Openstack compute backends as a whole, the invention provides a merge-management function for shared compute backends, shields redundant data, prevents virtual machines from being created on the wrong compute nodes and back-end storage, and provides accurate data support for scheduling virtualization platform resources.

Description

Openstack environment computing node back-end storage management method
Technical Field
The invention belongs to the field of PaaS platform resource management, and in particular relates to a method for managing the back-end storage of compute nodes in an Openstack environment.
Background
Openstack is an open-source distributed virtualization platform that has emerged in recent years. VMware is a traditional, commercial-only virtualization platform that held a significant market share in early cloud computing projects. XEN, a virtualization platform under the Citrix banner and likewise commercial-only, is a competitor to VMware. A virtual machine is a complete computer system, simulated in software, that has full hardware functionality and runs in a completely isolated environment. A compute node is a physical server in the virtualization platform used to run virtual machines; it mainly provides CPU and memory resources to virtual machines. Back-end storage is the storage device used to hold a virtual machine's disk data; it is divided into exclusive storage (one back-end storage usable by only one compute node) and shared storage (one back-end storage usable by several compute nodes).
In the virtualized environments common on the market (such as VMware and XEN), the compute node and the back-end storage are usually managed separately as two resources: the compute node provides only CPU and memory, while a virtual machine's system disk and any extension disks live in back-end storage. In Openstack, however, back-end storage is divided into two types: one is managed together with the compute node and stores the virtual machine's system disk (hereinafter the compute backend); the other is managed independently and stores the virtual machine's other extension disks (hereinafter the storage backend).
In Openstack, information about a compute backend can only be obtained through the API that returns compute node information. The records returned by that interface are keyed by compute node, so a compute backend attached to N compute nodes appears in N records. As a result, a shared compute backend is counted multiple times. Such data causes resource scheduling errors when creating virtual machines, so that virtual machines and their system disks end up on the wrong compute nodes and back-end storage.
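The double-counting problem above can be illustrated with a small sketch. The record shape below is an assumption modelled on the per-node fields of Nova's hypervisor listing (`hypervisor_hostname`, `local_gb`, `local_gb_used`); the three records all describe one 40 GB shared device:

```python
# Hypothetical illustration of the double-counting problem: three compute
# nodes are all attached to the same 40 GB shared backend. The hypervisor
# API returns one record per node, so naive summation triples the capacity.
records = [
    {"hypervisor_hostname": "node1", "local_gb": 40, "local_gb_used": 10},
    {"hypervisor_hostname": "node2", "local_gb": 40, "local_gb_used": 10},
    {"hypervisor_hostname": "node3", "local_gb": 40, "local_gb_used": 10},
]

naive_total = sum(r["local_gb"] for r in records)  # 120 GB, wrong
actual_total = records[0]["local_gb"]              # 40 GB, the one shared device
print(naive_total, actual_total)
```

A scheduler fed the naive total believes three times the real capacity exists, which is exactly the error the method below prevents.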
This is a shortcoming of the prior art; it is therefore necessary to provide a method for managing the back-end storage of compute nodes in an Openstack environment that addresses it.
Disclosure of Invention
The object of the invention is to provide a method for managing the back-end storage of compute nodes in an Openstack environment, addressing the defect that shared compute backends in an Openstack environment cannot be managed as a whole, thereby solving the technical problem above.
To achieve this object, the invention provides the following technical solution:
A method for managing the back-end storage of compute nodes in an Openstack environment comprises the following steps:
step 1, establish a data model, comprising a storage information table, a storage tag table and a storage merge table;
step 2, compute-backend synchronization: pull the compute node details from Openstack through the API and save them in the storage information table;
step 3, shared-storage merging: merge the shared compute backends that have been synchronized repeatedly, eliminating data redundancy;
step 4, merged-storage synchronization: update the capacity information of the merged storage through the Openstack API;
step 5, merged-storage eviction: evict from the shared storage any compute backend whose connected compute node has changed.
Further, in the data model established in step 1,
the field information of the storage information table comprises an ID, a storage name, a total storage capacity, a used storage capacity and a storage status;
the field information of the storage tag table comprises an ID, an associated storage ID, a tag name and a tag value;
the field information of the storage merge table comprises a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity and a storage status.
Further, the IDs of the storage information table and the storage tag table both use a 32-character universal unique identifier (UUID);
the total storage capacity and used storage capacity in both the storage information table and the storage merge table are expressed in GB;
the storage status in the storage information table and the storage merge table refers to the operating state of the storage device, and is either available or unavailable.
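The three tables above can be sketched as plain records. This is a minimal illustration using Python dataclasses in place of actual database DDL; the field names are assumptions derived from the field lists in the text:

```python
# Minimal sketch of the data model's three tables (illustrative field names).
from dataclasses import dataclass

@dataclass
class StorageInfo:           # storage information table
    id: str                  # 32-character UUID
    name: str
    total_gb: int            # total storage capacity, in GB
    used_gb: int             # used storage capacity, in GB
    status: str              # "available" or "unavailable"

@dataclass
class StorageTag:            # storage tag table
    id: str
    storage_id: str          # associated storage ID
    tag_name: str            # e.g. "merge" or "EXPELLED"
    tag_value: str

@dataclass
class StorageMerge:          # storage merge table
    storage_id: str          # member backend's original ID
    merged_storage_id: str   # post-merge storage ID in the info table
    name: str
    total_gb: int
    used_gb: int
    status: str
```

The merge table keeps both the member's original ID and the post-merge ID, which is what lets an evicted member be moved back into the storage information table later.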
Further, the compute-backend synchronization of step 2 comprises the following specific steps:
step 2-1, call the Openstack API os-hypervisors/detail to pull the compute node details;
step 2-2, parse the compute node information, obtain the total storage capacity, available capacity and operating state of the compute backend connected to each compute node, and generate the compute backend's name;
step 2-3, save the information obtained in step 2-2 into the storage information table;
step 2-4, end.
Further, the rule for generating the compute backend's name is "compute node name_backup".
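Steps 2-1 to 2-3 can be sketched as follows. The mapping function uses field names from Nova's hypervisor listing (`hypervisor_hostname`, `local_gb`, `local_gb_used`, `state`); the URL layout and the absence of error handling are simplifying assumptions:

```python
# Sketch of compute-backend synchronization: fetch hypervisor details and
# map each record to a storage-information-table row.
import json
import urllib.request

def backend_row(hv: dict) -> dict:
    """Map one hypervisor record to a storage-information-table row."""
    return {
        "name": hv["hypervisor_hostname"] + "_backup",  # naming rule from the text
        "total_gb": hv["local_gb"],
        "available_gb": hv["local_gb"] - hv["local_gb_used"],
        "status": "available" if hv.get("state") == "up" else "unavailable",
    }

def sync_compute_backends(nova_url: str, token: str) -> list:
    """Step 2-1: pull details; step 2-2: parse into rows (step 2-3 would persist them)."""
    req = urllib.request.Request(nova_url + "/os-hypervisors/detail",
                                 headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [backend_row(hv) for hv in data["hypervisors"]]
```

Keeping the parsing in a separate pure function makes the capacity arithmetic testable without a live Nova endpoint.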
Further, the specific steps of the shared-storage merging of step 3 are as follows:
step 3-1, select the compute backends to be merged from the storage information table;
step 3-2, check whether the selected compute backends have the same total storage capacity; if so, go to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, so go to step 3-6;
step 3-3, generate a new, merged compute-backend record in the storage information table;
step 3-4, move the selected merged compute backends from the storage information table to the storage merge table, and associate them with the newly generated merged compute backend in the storage information table;
step 3-5, add a "merge" tag to the newly generated merged compute backend, marking the record as a MERGED record;
step 3-6, end.
Further, the specific steps for generating a new merged compute-backend record in the storage information table in step 3-3 are as follows:
step 3-3-1, assign a user-defined storage name;
step 3-3-2, set the used capacity to the maximum used capacity among the merged compute backends;
step 3-3-3, if at least one of the merged compute backends has an available operating state, set the merged compute backend's state to available; otherwise set it to unavailable.
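The merge rules above can be sketched in a few lines. The dict shapes and the in-memory handling are illustrative, not the patent's actual schema:

```python
# Sketch of shared-storage merging (steps 3-2 to 3-5): members must share one
# total capacity; the merged record takes the maximum used capacity and is
# available if any member is available.
def merge_backends(members: list, merged_name: str):
    totals = {m["total_gb"] for m in members}
    if len(totals) != 1:
        return None  # step 3-2: capacities differ, not the same shared storage
    return {
        "name": merged_name,                                    # step 3-3-1
        "total_gb": totals.pop(),
        "used_gb": max(m["used_gb"] for m in members),          # step 3-3-2
        "status": ("available"                                  # step 3-3-3
                   if any(m["status"] == "available" for m in members)
                   else "unavailable"),
        "tags": {"merge": "MERGED"},                            # step 3-5
    }
```

Taking the maximum used capacity is safe for a shared device: every member sees the same physical storage, so the largest reported usage is the true usage.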
Further, the merged-storage synchronization of step 4 comprises the following specific steps:
step 4-1, call the Openstack API os-hypervisors/detail to pull the compute node details;
step 4-2, parse the compute node information, obtain the total storage capacity, available capacity and operating state of the compute backend connected to each compute node, and generate the compute backend's name;
step 4-3, update the total storage capacity, used capacity and state of the corresponding records in the storage merge table according to the compute backends' names;
step 4-4, update the merged compute backends' information in the storage information table according to the updated records in the storage merge table;
step 4-5, examine the storage merge table and check whether the total storage capacity of any member of a merged compute backend has changed; if not, go to step 4-6; if so, go to step 5;
step 4-6, end.
Further, in step 4-4 the merged compute backends' information in the storage information table is updated according to the updated records in the storage merge table, as follows:
step 4-4-1, take the used capacity to be the maximum used capacity among the selected merged compute backends;
step 4-4-2, if at least one of the selected compute backends has an available operating state, set the merged compute backend's state to available; otherwise set it to unavailable.
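The step 4-4 update rule reuses the same aggregation as the original merge, applied after each re-synchronization. A minimal sketch, with the same illustrative dict shapes as above:

```python
# Sketch of step 4-4: recompute a merged record's used capacity and status
# from its already re-synchronized members in the storage merge table.
def refresh_merged(merged: dict, members: list) -> dict:
    merged["used_gb"] = max(m["used_gb"] for m in members)      # step 4-4-1
    merged["status"] = ("available"                             # step 4-4-2
                        if any(m["status"] == "available" for m in members)
                        else "unavailable")
    return merged
```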
Further, the specific steps of the merged-storage eviction of step 5 are as follows:
step 5-1, determine whether the total storage capacity of every member of the merged compute backend has changed;
if so, go to step 5-2; otherwise go to step 5-3;
step 5-2, determine whether the changed total storage capacities of all members of the merged compute backend are consistent;
if so, the capacity of the shared compute backend itself has changed, so update the merged compute backend's total storage capacity in the storage information table and go to step 5-5;
if not, only some members' total storage capacity has changed, so go to step 5-3;
step 5-3, move the merged compute backends from the storage merge table back into the storage information table;
step 5-4, add an "EXPELLED" tag to the compute-backend storage moved back into the storage information table, marking the record as an evicted record;
step 5-5, end.
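The branching in steps 5-1 and 5-2 reduces to a small decision function. This sketch takes each member's old and new total capacity and returns which branch applies; the return strings are illustrative labels, not values from the patent:

```python
# Sketch of the step-5 decision: if every member's capacity changed and the
# new capacities agree, the shared backend was simply resized (update in
# place); otherwise the changed members no longer match and are evicted.
def eviction_action(old_totals: list, new_totals: list) -> str:
    changed = [n for o, n in zip(old_totals, new_totals) if o != n]
    if len(changed) == len(old_totals):        # step 5-1: all members changed
        if len(set(new_totals)) == 1:          # step 5-2: still consistent
            return "update merged total"       # shared backend was resized
    return "evict changed members"             # steps 5-3 and 5-4
```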
The beneficial effects of the invention are as follows: by managing the Openstack compute backends as a whole, the invention provides a merge-management function for shared compute backends, shields redundant data, prevents virtual machines from being created on the wrong compute nodes and back-end storage, and provides accurate data support for scheduling virtualization platform resources.
In addition, the invention has a reliable design principle and a simple structure, and has very broad application prospects.
Therefore, compared with the prior art, the invention has prominent substantive features and remarkable progress, and the beneficial effects of the implementation are also obvious.
Drawings
FIG. 1 is a flow chart of computing backend synchronization;
FIG. 2 is a shared memory merge flow diagram;
FIG. 3 is a merged store synchronization flow diagram;
FIG. 4 is a merge store eviction flow diagram;
FIG. 5 is schematic diagram 1 of the compute node shared storage of embodiment 2;
FIG. 6 is storage information table 1 of embodiment 2;
FIG. 7 is storage information table 2 of embodiment 2;
FIG. 8 is storage information table 3 of embodiment 2;
FIG. 9 is storage merge table 1 of embodiment 2;
FIG. 10 is storage tag table 1 of embodiment 2;
FIG. 11 is storage merge table 2 of embodiment 2;
FIG. 12 is storage information table 4 of embodiment 2;
FIG. 13 is storage tag table 2 of embodiment 2;
FIG. 14 is schematic diagram 2 of the compute node shared storage of embodiment 2.
The specific embodiments are as follows:
in order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
Embodiment 1: the invention provides a method for managing the back-end storage of compute nodes in an Openstack environment, comprising the following steps:
step 1, establish a data model, comprising a storage information table, a storage tag table and a storage merge table;
in establishing the data model,
the field information of the storage information table comprises an ID, a storage name, a total storage capacity, a used storage capacity and a storage status;
the field information of the storage tag table comprises an ID, an associated storage ID, a tag name and a tag value;
the field information of the storage merge table comprises a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity and a storage status;
the IDs of the storage information table and the storage tag table both use a 32-character universal unique identifier (UUID);
the total storage capacity and used storage capacity in both the storage information table and the storage merge table are expressed in GB;
the storage status in the storage information table and the storage merge table refers to the operating state of the storage device, and is either available or unavailable;
step 2, compute-backend synchronization: pull the compute node details from Openstack through the API and save them in the storage information table; as shown in fig. 1, the specific steps are as follows:
step 2-1, call the Openstack API os-hypervisors/detail to pull the compute node details;
step 2-2, parse the compute node information, obtain the total storage capacity, available capacity and operating state of the compute backend connected to each compute node, and generate the compute backend's name; the rule for generating the compute backend's name is "compute node name_backup";
step 2-3, save the information obtained in step 2-2 into the storage information table;
step 2-4, end;
step 3, shared-storage merging: merge the shared compute backends that have been synchronized repeatedly, eliminating data redundancy; as shown in fig. 2, the specific steps are as follows:
step 3-1, select the compute backends to be merged from the storage information table;
step 3-2, check whether the selected compute backends have the same total storage capacity; if so, go to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, so go to step 3-6;
step 3-3, generate a new, merged compute-backend record in the storage information table;
step 3-3-1, assign a user-defined storage name;
step 3-3-2, set the used capacity to the maximum used capacity among the merged compute backends;
step 3-3-3, if at least one of the merged compute backends has an available operating state, set the merged compute backend's state to available; otherwise set it to unavailable;
step 3-4, move the selected merged compute backends from the storage information table to the storage merge table, and associate them with the newly generated merged compute backend in the storage information table;
step 3-5, add a "merge" tag to the newly generated merged compute backend, marking the record as a MERGED record;
step 3-6, end;
step 4, merged-storage synchronization: update the capacity information of the merged storage through the Openstack API; as shown in fig. 3, the specific steps are as follows:
step 4-1, call the Openstack API os-hypervisors/detail to pull the compute node details;
step 4-2, parse the compute node information, obtain the total storage capacity, available capacity and operating state of the compute backend connected to each compute node, and generate the compute backend's name;
step 4-3, update the total storage capacity, used capacity and state of the corresponding records in the storage merge table according to the compute backends' names;
step 4-4, update the merged compute backends' information in the storage information table according to the updated records in the storage merge table;
step 4-4-1, take the used capacity to be the maximum used capacity among the selected merged compute backends;
step 4-4-2, if at least one of the selected compute backends has an available operating state, set the merged compute backend's state to available; otherwise set it to unavailable;
step 4-5, examine the storage merge table and check whether the total storage capacity of any member of a merged compute backend has changed; if not, go to step 4-6; if so, go to step 5;
step 4-6, end;
step 5, merged-storage eviction: evict from the shared storage any compute backend whose connected compute node has changed; as shown in fig. 4, the specific steps are as follows:
step 5-1, determine whether the total storage capacity of every member of the merged compute backend has changed;
if so, go to step 5-2; otherwise go to step 5-3;
step 5-2, determine whether the changed total storage capacities of all members of the merged compute backend are consistent;
if so, the capacity of the shared compute backend itself has changed, so update the merged compute backend's total storage capacity in the storage information table and go to step 5-5;
if not, only some members' total storage capacity has changed, so go to step 5-3;
step 5-3, move the merged compute backends from the storage merge table back into the storage information table;
step 5-4, add an "EXPELLED" tag to the compute-backend storage moved back into the storage information table, marking the record as an evicted record;
step 5-5, end.
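The whole flow of embodiment 1 can be exercised end to end on toy data that mirrors embodiment 2 below: six nodes backed by two shared devices (40GB and 50GB). Grouping by equal total capacity stands in for the selection of step 3-1, and all names and structures are illustrative, not the patent's actual implementation:

```python
# End-to-end sketch: six per-node records collapse into two merged records,
# so the naive 270 GB total becomes the true 90 GB of shared capacity.
from operator import itemgetter
from itertools import groupby

nodes = [("node%d" % i, 40 if i <= 3 else 50, 10) for i in range(1, 7)]
rows = [{"name": n + "_backup", "total_gb": t, "used_gb": u}
        for n, t, u in nodes]

merged = []
key = itemgetter("total_gb")
for total, group in groupby(sorted(rows, key=key), key=key):
    members = list(group)
    merged.append({
        "name": "shared_%dgb_backup" % total,            # user-defined name
        "total_gb": total,
        "used_gb": max(m["used_gb"] for m in members),   # step 3-3-2 rule
        "members": [m["name"] for m in members],
    })

print(sum(m["total_gb"] for m in merged))  # 90
```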
Embodiment 2: a method for managing the back-end storage of compute nodes in an Openstack environment in which the compute nodes share compute backends, as shown in fig. 5, comprising the following steps:
step 1, establish a data model, comprising a storage information table, a storage tag table and a storage merge table;
in establishing the data model,
the field information of the storage information table comprises an ID, a storage name, a total storage capacity, a used storage capacity and a storage status;
the field information of the storage tag table comprises an ID, an associated storage ID, a tag name and a tag value;
the field information of the storage merge table comprises a storage ID, a post-merge storage ID, a storage name, a total storage capacity, a used storage capacity and a storage status;
the IDs of the storage information table and the storage tag table both use a 32-character universal unique identifier (UUID);
the total storage capacity and used storage capacity in both the storage information table and the storage merge table are expressed in GB;
the storage status in the storage information table and the storage merge table refers to the operating state of the storage device, and is either available or unavailable;
step 2, compute-backend synchronization: pull the compute node details from Openstack through the API and save them in the storage information table; the specific steps are as follows:
step 2-1, call the Openstack API os-hypervisors/detail to pull the compute node details;
step 2-2, parse the compute node information, obtain the total storage capacity, available capacity and operating state of the compute backend connected to each compute node, and generate the compute backend's name; the rule for generating the compute backend's name is "compute node name_backup"; compute node 1_backup has a total storage capacity of 40GB, an available capacity of 30GB and an available operating state; compute node 2_backup: total 40GB, available 30GB, available; compute node 3_backup: total 40GB, available 30GB, available; compute node 4_backup: total 50GB, available 45GB, available; compute node 5_backup: total 50GB, available 45GB, available; compute node 6_backup: total 50GB, available 45GB, available;
step 2-3, save the information obtained in step 2-2 into the storage information table, generating the storage information table shown in fig. 6;
step 2-4, end;
step 3, shared-storage merging: merge the shared compute backends that have been synchronized repeatedly, eliminating data redundancy; the specific steps are as follows:
step 3-1, select the compute backends to be merged from the storage information table;
step 3-2, check whether the selected compute backends have the same total storage capacity; if so, go to step 3-3; if not, the selected compute backends do not belong to the same shared storage and cannot be merged, so go to step 3-6; here the total storage capacity of compute node 1_backup, compute node 2_backup and compute node 3_backup is 40GB, and that of compute node 4_backup, compute node 5_backup and compute node 6_backup is 50GB;
step 3-3, generate a new, merged compute-backend record in the storage information table; merge compute node 1_backup, compute node 2_backup and compute node 3_backup into compute node 1-3 shared_backup; merge compute node 4_backup, compute node 5_backup and compute node 6_backup into compute node 4-6 shared_backup; this generates the storage information table shown in fig. 7;
step 3-4, move the selected merged compute backends from the storage information table to the storage merge table, and associate them with the newly generated merged compute backends in the storage information table; move compute node 1_backup, compute node 2_backup and compute node 3_backup from the storage information table to the storage merge table; move compute node 4_backup, compute node 5_backup and compute node 6_backup from the storage information table to the storage merge table, generating the storage information table shown in fig. 8 and the storage merge table shown in fig. 9;
step 3-5, add a "merge" tag to the newly generated merged compute backends, marking the records as MERGED records; add a "merge" tag to compute node 1-3 shared_backup, marking it as a MERGED record, and add a "merge" tag to compute node 4-6 shared_backup, marking it as a MERGED record, generating the storage tag table shown in fig. 10;
step 3-6, end;
step 4, merged storage synchronization, wherein the capacity information of the merged storage is updated through the Openstack API; the specific steps are as follows:
step 4-1, calling the Openstack API os-hypervisor to pull the detailed information of the compute nodes;
step 4-2, parsing the compute node information, obtaining the total storage capacity, available capacity and running state of the backend attached to each compute node, and generating the backend name; at this time, compute-node-1_backend has a total storage capacity of 35GB, an available capacity of 20GB and an available running state; compute-node-2_backend has a total storage capacity of 40GB, an available capacity of 30GB and an available running state; compute-node-3_backend has a total storage capacity of 40GB, an available capacity of 30GB and an available running state; compute-node-4_backend, compute-node-5_backend and compute-node-6_backend each have a total storage capacity of 60GB, an available capacity of 45GB and an available running state;
step 4-3, updating the total storage capacity, used capacity and state of the corresponding records in the storage merge table according to the backend names, producing the storage merge table shown in fig. 11;
step 4-4, updating the merged backend records in the storage information table according to the updated records in the storage merge table; the total storage capacity of compute-node-4-6_shared_backend becomes 60GB;
step 4-5, checking the storage merge table to determine whether the total storage capacity of any member of a merged backend has changed; if not, entering step 4-6; if so, entering step 5; at this time, the total storage capacity of member compute-node-1_backend of compute-node-1-3_shared_backend has changed, and the total storage capacities of members compute-node-4_backend, compute-node-5_backend and compute-node-6_backend of compute-node-4-6_shared_backend have all changed, so step 5 is entered;
step 4-6, finishing;
step 5, merged storage eviction, wherein backends attached to compute nodes whose storage has changed are evicted from the shared storage; the specific steps are as follows:
step 5-1, judging whether the total storage capacity of every member of the merged backend has changed; if so, entering step 5-2; otherwise, entering step 5-3; in compute-node-1-3_shared_backend only the total storage capacity of compute-node-1_backend has changed, so step 5-3 is entered; in compute-node-4-6_shared_backend the total storage capacities of compute-node-4_backend, compute-node-5_backend and compute-node-6_backend have all changed, so step 5-2 is entered;
step 5-2, judging whether the changed total storage capacities of all members of the merged backend are consistent; if so, the capacity of the shared storage itself has changed, and the total storage capacity of the merged backend in the storage information table is updated, then entering step 5-5; if not, only part of the members have changed, and step 5-3 is entered; the total storage capacities of compute-node-4_backend, compute-node-5_backend and compute-node-6_backend all changed from 50GB to 60GB, and the changes are consistent, so the total is updated and step 5-5 is entered;
step 5-3, moving the changed members of the merged backend from the storage merge table back to the storage information table; the changed member compute-node-1_backend is moved back to the storage information table, producing the storage information table shown in fig. 12;
step 5-4, adding an "EXPELLED" label to each backend moved back into the storage information table, marking the record as an evicted record; an "EXPELLED" label is added to compute-node-1_backend, producing the storage tag table shown in fig. 13; at this time, the relationship between the compute nodes and the shared backends is as shown in fig. 14;
step 5-5, finishing.
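The eviction decision of steps 5-1 through 5-5 can be sketched as follows, using the same dictionary-based tables as the earlier sketches; the helper name and field names are illustrative assumptions. If every member changed consistently, the shared storage itself was resized and only the merged total is updated; otherwise the changed members no longer belong to the shared storage and are moved back and tagged.

```python
def evict_changed_members(storage_merge, storage_info, storage_tags,
                          merged_id, changed_names):
    """Steps 5-1..5-5: handle capacity changes inside one merged backend."""
    members = [r for r in storage_merge if r.get("merged_id") == merged_id]
    merged = next(r for r in storage_info if r["id"] == merged_id)
    changed = [r for r in members if r["name"] in changed_names]
    if not changed:
        return                                          # nothing to do
    new_totals = {r["total_gb"] for r in changed}
    if len(changed) == len(members) and len(new_totals) == 1:
        merged["total_gb"] = new_totals.pop()           # step 5-2: shared storage resized
        return
    for r in changed:                                   # steps 5-3 and 5-4: evict
        storage_merge.remove(r)
        storage_info.append({k: v for k, v in r.items() if k != "merged_id"})
        storage_tags.append({"storage_id": r["id"],
                             "tag": "EXPELLED", "value": "evicted"})
```

In the worked example, compute-node-1_backend takes the eviction branch, while compute-node-4-6_shared_backend takes the consistent-resize branch.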
The embodiments described above are illustrative rather than restrictive and are provided only to aid understanding of the present invention; the invention is therefore not limited to the embodiments given in the detailed description, and other embodiments derived by those skilled in the art from the technical solution of the present invention also fall within the protection scope of the present invention.

Claims (7)

1. A method for managing compute node backend storage in an Openstack environment, characterized by comprising the following steps:
step 1, establishing a data model, comprising establishing a storage information table, a storage tag table and a storage merge table;
step 2, compute backend synchronization, wherein the detailed information of the compute nodes is pulled from Openstack through the API and stored in the storage information table;
step 3, shared storage merging, wherein shared compute backends that are repeatedly synchronized are merged to eliminate data redundancy; the specific steps of shared storage merging in step 3 are as follows:
step 3-1, selecting the compute backends to be merged from the storage information table;
step 3-2, checking whether the total storage capacities of the selected backends are consistent; if so, entering step 3-3; if not, the selected backends do not belong to the same shared storage and cannot be merged, and step 3-6 is entered;
step 3-3, generating a new merged backend record in the storage information table;
step 3-4, moving the selected backends to be merged from the storage information table to the storage merge table, and associating them with the newly generated merged backend in the storage information table;
step 3-5, adding a "merge" label to the newly generated merged backend, marking the record as a MERGED record;
step 3-6, finishing;
step 4, merged storage synchronization, wherein the capacity information of the merged storage is updated through the Openstack API; the specific steps of merged storage synchronization in step 4 are as follows:
step 4-1, calling the Openstack API os-hypervisor to pull the detailed information of the compute nodes;
step 4-2, parsing the compute node information, obtaining the total storage capacity, available capacity and running state of the backend attached to each compute node, and generating the backend name;
step 4-3, updating the total storage capacity, used capacity and state of the corresponding records in the storage merge table according to the backend names;
step 4-4, updating the merged backend records in the storage information table according to the updated records in the storage merge table;
step 4-5, checking the storage merge table to determine whether the total storage capacity of any member of a merged backend has changed; if not, entering step 4-6; if so, entering step 5;
step 4-6, finishing;
step 5, merged storage eviction, wherein backends attached to compute nodes whose storage has changed are evicted from the shared storage; the specific steps of merged storage eviction in step 5 are as follows:
step 5-1, judging whether the total storage capacity of every member of the merged backend has changed; if so, entering step 5-2; otherwise, entering step 5-3;
step 5-2, judging whether the changed total storage capacities of all members of the merged backend are consistent; if so, the capacity of the shared storage itself has changed, and the total storage capacity of the merged backend in the storage information table is updated, then entering step 5-5; if not, only part of the members have changed, and step 5-3 is entered;
step 5-3, moving the changed members of the merged backend from the storage merge table back to the storage information table;
step 5-4, adding an "EXPELLED" label to each backend moved back into the storage information table, marking the record as an evicted record;
step 5-5, finishing.
2. The Openstack environment compute node backend storage management method according to claim 1, wherein in the data model established in step 1,
the fields of the storage information table comprise an ID, a storage name, a total storage capacity, a used storage capacity and a storage state;
the fields of the storage tag table comprise an ID, an associated storage ID, a tag name and a tag value;
the fields of the storage merge table comprise a storage ID, a merged storage ID, a storage name, a total storage capacity, a used storage capacity and a storage state.
3. The Openstack environment compute node backend storage management method according to claim 2, wherein the IDs of the storage information table and the storage tag table both use 32-character universally unique identifiers;
the total and used storage capacities in the storage information table and the storage merge table are expressed in GB;
the storage states in the storage information table and the storage merge table refer to the running state of the storage device, comprising two states: available and unavailable.
4. The Openstack environment compute node backend storage management method according to claim 2, wherein the specific steps of compute backend synchronization in step 2 are as follows:
step 2-1, calling the Openstack API os-hypervisor/detail to pull the detailed information of the compute nodes;
step 2-2, parsing the compute node information, obtaining the total storage capacity, available capacity and running state of the backend attached to each compute node, and generating the backend name;
step 2-3, storing the information obtained in step 2-2 into the storage information table;
step 2-4, finishing.
5. The Openstack environment compute node backend storage management method according to claim 4, wherein the rule for generating the backend name is "compute node name_backend".
6. The Openstack environment compute node backend storage management method according to claim 4, wherein the specific steps of generating a new merged backend record in the storage information table in step 3-3 are as follows:
step 3-3-1, customizing a storage name;
step 3-3-2, taking the used capacity as the maximum used capacity among the backends being merged;
step 3-3-3, if the running state of at least one of the backends being merged is available, setting the state of the merged backend to available, otherwise setting it to unavailable.
7. The Openstack environment compute node backend storage management method according to claim 4, wherein the specific steps of updating the merged backend records in the storage information table according to the updated records in the storage merge table in step 4-4 are as follows:
step 4-4-1, taking the used capacity as the maximum used capacity among the selected merged members;
step 4-4-2, if the running state of at least one of the selected members is available, setting the state of the merged backend to available, otherwise setting it to unavailable.
CN201710486725.XA 2017-06-23 2017-06-23 Openstack environment computing node back-end storage management method Active CN107423111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710486725.XA CN107423111B (en) 2017-06-23 2017-06-23 Openstack environment computing node back-end storage management method


Publications (2)

Publication Number Publication Date
CN107423111A CN107423111A (en) 2017-12-01
CN107423111B true CN107423111B (en) 2020-06-26

Family

ID=60427303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710486725.XA Active CN107423111B (en) 2017-06-23 2017-06-23 Openstack environment computing node back-end storage management method

Country Status (1)

Country Link
CN (1) CN107423111B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515535B (en) * 2018-05-22 2021-01-01 杭州海康威视数字技术股份有限公司 Hard disk read-write control method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101311930A (en) * 2007-05-21 2008-11-26 Sap股份公司 Block compression of tables with repeated values
CN103853599A (en) * 2014-03-17 2014-06-11 北京京东尚科信息技术有限公司 Extension method of node calculating ability
CN106462498A (en) * 2014-06-23 2017-02-22 利奇德股份有限公司 Modular switched fabric for data storage systems

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9411628B2 (en) * 2014-11-13 2016-08-09 Microsoft Technology Licensing, Llc Virtual machine cluster backup in a multi-node environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OpenStack Nova: configuring multiple Ceph backends in one hypervisor; Qi Sun; 《http://ceph.org.cn/2016/05/02/openstack-nova-%E5%9C%A8%E4%B8%80%E4%B8%AAhypervisor%E4%B8%AD%E9%85%8D%E7%BD%AE%E5%A4%9A%E4%B8%AAceph%E5%90%8E%E7%AB%AF/》; 2016-05-02; pages 1-6 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200520

Address after: Building S01, Inspur Science Park, No. 1036, Inspur Road, high tech Zone, Jinan City, Shandong Province, 250000

Applicant after: Tidal Cloud Information Technology Co.,Ltd.

Address before: 450000 Henan province Zheng Dong New District of Zhengzhou City Xinyi Road No. 278 16 floor room 1601

Applicant before: ZHENGZHOU YUNHAI INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Building S01, Inspur Science Park, No. 1036, Inspur Road, high tech Zone, Jinan City, Shandong Province, 250000

Patentee after: Inspur cloud Information Technology Co., Ltd

Address before: Building S01, Inspur Science Park, No. 1036, Inspur Road, high tech Zone, Jinan City, Shandong Province, 250000

Patentee before: Tidal Cloud Information Technology Co.,Ltd.