CN111552439A - Data storage method, device, system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111552439A
CN111552439A
Authority
CN
China
Prior art keywords
power-saving group, data object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010334382.7A
Other languages
Chinese (zh)
Other versions
CN111552439B (en)
Inventor
高华龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunkuanzhiye Network Technology Co ltd
Original Assignee
Beijing Yunkuanzhiye Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunkuanzhiye Network Technology Co ltd filed Critical Beijing Yunkuanzhiye Network Technology Co ltd
Priority to CN202010334382.7A priority Critical patent/CN111552439B/en
Publication of CN111552439A publication Critical patent/CN111552439A/en
Application granted granted Critical
Publication of CN111552439B publication Critical patent/CN111552439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0652 Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Power Sources (AREA)

Abstract

The application relates to a data storage method, device, system, electronic device, and storage medium. The method includes the following steps: when a data object needs to be stored, sequentially accessing a plurality of power-saving groups in a least-recently-used (LRU) queue and determining whether an allocable position exists in each power-saving group; when an allocable position exists in the current power-saving group: generating an ID corresponding to the data object, the ID containing the number of the allocable position within the current power-saving group; storing bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable position within the current power-saving group; and storing the data object in a second area of the current power-saving group. According to the embodiments provided by the application, metadata storage cost can be reduced while the security of metadata storage is guaranteed.

Description

Data storage method, device, system, electronic equipment and storage medium
Technical Field
The present application relates to the field of storage, and more particularly, to a data storage method, apparatus, system, electronic device, and storage medium.
Background
With the widespread use of computer applications, the amount of data to be stored, archived, and backed up keeps growing, so the cost of storing data needs to be reduced. At present, there are mainly the following approaches to reducing storage cost: the first is to reduce the cost of storage hardware; for example, Redundant Array of Independent Disks (RAID) technology can make low-speed media achieve performance close to that of high-speed media. The second is to reduce operation and maintenance costs; for example, Massive Array of Idle Disks (MAID) technology can reduce the power consumption of disks. The third is to reduce the actual storage space occupied by data, for example by deleting duplicate data, compressing data, and so on. However, because these technologies target different scenarios, there are still certain difficulties in combining them.
Among the above approaches for reducing storage cost, MAID technology was proposed and developed based on the tape-library mode of operation. When data is stored using MAID technology, only the disk being accessed is powered on, while the other disks are normally powered off; a disk storage system applying MAID technology may therefore also be called a disk library. MAID technology largely has the green, environmentally friendly, energy-saving characteristics demanded by the currently popular "green storage": most disks in such a system are in a powered-off state, and only when the system needs to access a disk is the corresponding disk powered on. After a powered-on disk has been idle for a certain time (which can be preset based on the access characteristics, access frequency, and so on of the system), it is powered off again, thereby saving power, protecting the environment, and prolonging the service life of the disk.
However, in existing MAID technology, metadata is stored centrally, generally on a different medium from the target access data to which it corresponds, and the medium storing the metadata requires a higher storage cost to guarantee the security of the metadata. Once unrecoverable corruption of the medium storing the metadata occurs, the target access data is lost even if the medium storing it is intact. Moreover, as data storage space grows, the pressure on metadata storage also increases.
Disclosure of Invention
In order to solve the technical problem, the application provides a data storage method and system.
According to an aspect of the present disclosure, there is provided a data storage method, the method including: when a data object needs to be stored, sequentially accessing a plurality of power-saving groups in a least-recently-used (LRU) queue and determining whether an allocable position exists in each power-saving group; generating an ID corresponding to the data object when an allocable position exists in the current power-saving group, the ID including the number of the allocable position within the current power-saving group; storing bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable position within the current power-saving group; and storing the data object in a second area of the current power-saving group.
Further, the method further comprises: when there is no unaccessed power-saving group in the LRU queue and the LRU queue is full, a tail power-saving group in the LRU queue is evicted and the evicted power-saving group is put to sleep.
Further, the method further comprises: when no unaccessed power-saving group exists in the LRU queue and the LRU queue is not full, waking up a currently unused power-saving group and inserting it at the head of the LRU queue; and sequentially accessing the power-saving groups in the LRU queue starting from the head of the queue and determining whether an allocable position exists in each.
Further, the method further comprises: when a plurality of data objects are stored, sequentially and respectively acquiring the numbers of allocable positions in a power-saving group corresponding to the data objects in the power-saving group for each data object; storing bitmap information of the power-saving group in a first area of the power-saving group, the bitmap information containing numbers of assignable positions within the power-saving group corresponding to the plurality of data objects, respectively; storing the plurality of data objects in a second region of the power-saving group based on the number of assignable locations within the power-saving group corresponding to each data object.
Further, the method further comprises: when the last position of the first area of the power saving group has been allocated, the allocable positions of the first area are sequentially searched starting from the start position of the first area.
Further, when the data object needs to be removed, the method further comprises: reading the bitmap information of the power-saving group containing the data object into memory; acquiring the number of the storage position of the data object within the power-saving group based on the ID corresponding to the data object; and clearing the bit corresponding to the number in the bitmap information.
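The removal step just described can be sketched as follows, assuming (as an illustration, not as the application's fixed layout) that the bitmap is read into memory as a bytearray with one bit per storage position:

```python
def remove_object(bitmap: bytearray, obj_id: int) -> None:
    """Clear the bitmap bit for a data object being removed.

    The object's slot number is carried in the low 32 bits of its 64-bit
    ID; clearing the corresponding bit marks the slot as allocatable again.
    """
    number = obj_id & 0xFFFFFFFF            # slot number within the power-saving group
    bitmap[number // 8] &= ~(1 << (number % 8)) & 0xFF
```

After the bit is cleared, the in-memory bitmap would be written back to the metadata area of the power-saving group.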
Further, the method further comprises: acquiring bitmap information corresponding to the power saving group in a memory; and acquiring a data object from the power-saving group based on the bitmap information.
Further, the ID further includes a cage number and a number of the power saving group in the cage.
Further, the ID also includes an upper placeholder having a particular value.
Further, the power saving group includes a single disk, a RAID composed of several disks, or several volume groups.
According to another aspect of the present application, there is provided a data storage device comprising: an access module configured to, when a data object needs to be stored, sequentially access a plurality of power-saving groups in a least-recently-used (LRU) queue and determine whether an allocable position exists in each power-saving group; a generation module configured to generate an ID corresponding to the data object when an allocable position exists in the current power-saving group, the ID including the number of the allocable position within the current power-saving group; a first saving module configured to store bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable position within the current power-saving group; and a second saving module configured to store the data object in a second area of the current power-saving group.
According to another aspect of the present application, there is provided a data storage system comprising: a processor; and a memory in which an LRU queue is stored; wherein the processor is configured to: when a data object needs to be stored, sequentially access a plurality of power-saving groups in the least-recently-used (LRU) queue and determine whether an allocable position exists in each power-saving group; generate an ID corresponding to the data object when an allocable position exists in the current power-saving group, the ID including the number of the allocable position within the current power-saving group; store bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable position within the current power-saving group; and store the data object in a second area of the current power-saving group.
Further, the processor is further configured to: when there is no unaccessed power-saving group in the LRU queue and the LRU queue is full, a tail power-saving group in the LRU queue is evicted and the evicted power-saving group is put to sleep.
Further, the processor is further configured to: when no unaccessed power-saving group exists in the LRU queue and the LRU queue is not full, wake up a currently unused power-saving group and insert it at the head of the LRU queue; and sequentially access the power-saving groups in the LRU queue starting from the head of the queue and determine whether an allocable position exists in each.
Further, the processor is further configured to: when a plurality of data objects are stored, sequentially and respectively acquiring the numbers of allocable positions in a power-saving group corresponding to the data objects in the power-saving group for each data object; storing bitmap information of the power-saving group in a first area of the power-saving group, the bitmap information containing numbers of assignable positions within the power-saving group corresponding to the plurality of data objects, respectively; storing the plurality of data objects in a second region of the power-saving group based on the number of assignable locations within the power-saving group corresponding to each data object.
Further, the processor is further configured to: when the last position of the first area of the power saving group has been allocated, the allocable positions of the first area are sequentially searched starting from the start position of the first area.
Further, when a data object needs to be removed, the processor is further configured to: read the bitmap information of the power-saving group containing the data object into memory; acquire the number of the storage position of the data object within the power-saving group based on the ID corresponding to the data object; and clear the bit corresponding to the number in the bitmap information.
Further, the processor is further configured to: acquiring bitmap information corresponding to the power saving group in the memory; and acquiring a data object from the power-saving group based on the bitmap information.
Furthermore, the data storage system also comprises a plurality of cages, each cage comprises a plurality of power saving groups, and the ID also comprises a cage number and the number of the power saving group in the cage.
Further, the ID also includes an upper placeholder having a particular value.
Further, the power saving group includes a single disk, a RAID composed of several disks, or several volume groups.
According to another aspect of the present application, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.
In at least one embodiment of the present application, by dividing the bitmap information serving as metadata into a plurality of portions and storing them respectively in a plurality of power-saving groups, the metadata does not need to be stored separately in a relatively expensive storage medium, achieving the technical effect of reducing storage cost while ensuring the security of metadata storage.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of a data storage system according to an embodiment of the present application;
FIG. 2 is a flow diagram of a method of assigning IDs to data objects that need to be stored according to an embodiment of the present application;
FIG. 3 is a flow diagram of a method of assigning IDs to data objects that need to be stored according to another embodiment of the present application;
FIG. 4 is a flow diagram of a method of recovering IDs for stored data objects according to an embodiment of the present application;
FIG. 5 is a block diagram of a structure of a data storage device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a control device of a data storage system according to another embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In existing MAID technology, metadata is stored centrally. In order to ensure the security of the metadata and improve the efficiency of metadata access, a storage medium (e.g., an SSD) with fast access speed, stable properties, and high cost is usually used; meanwhile, in order to prevent accidental loss of metadata, a redundancy technique is generally adopted to store it. This results in a higher storage cost for the metadata.
In the technology proposed in the present application, in order to improve the security of metadata storage, a manner may be adopted in which metadata is divided into a plurality of portions and stored on different storage media, respectively, wherein each portion of metadata and a data object corresponding thereto are stored on the same storage medium. For example, metadata may be stored in a storage medium at a lower address portion, and a data object to be stored may be stored in the storage medium at a higher address portion, with a one-to-one mapping between metadata storage units and data object storage units. The proportion of the storage area occupied by the metadata and the data object in the storage medium can be determined according to the size of the data object corresponding to each metadata unit.
Based on the storage mode, even if partial metadata are lost due to storage medium failure, the storage of the rest metadata cannot be influenced; on the other hand, the security of metadata storage and the security of data object storage both depend on the security of the same storage medium. Therefore, the storage mode does not cause the reduction of the storage safety of the data object.
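The proportioning between the metadata area and the data area described above can be illustrated with a small helper. The unit sizes used below are illustrative assumptions, not values from the application:

```python
def partition_medium(total_bytes: int, meta_unit: int, obj_unit: int):
    """Split a storage medium between a metadata area and a data area.

    Each metadata unit of `meta_unit` bytes maps one-to-one to a data-object
    slot of `obj_unit` bytes, so each slot consumes meta_unit + obj_unit
    bytes of the medium overall.
    Returns (number of slots, metadata-area bytes, data-area bytes).
    """
    slots = total_bytes // (meta_unit + obj_unit)
    return slots, slots * meta_unit, slots * obj_unit
```

For instance, with 4-byte metadata units and 96-byte object slots, a 1000-byte medium yields 10 slots, a 40-byte metadata area at the lower addresses, and a 960-byte data area at the higher addresses.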
As described previously, the present application employs MAID technology to store data. In particular, the disk storage system of the present application may comprise several physical cages (i.e., storage expansion cabinets), each of which comprises several power-saving groups. Here, each power-saving group may be a single disk, a RAID composed of several disks (or disk partitions), or a series of volume groups established by an algorithm. When a power-saving group needs to be accessed, the disk storage system powers it on; when a powered-on power-saving group has been idle for a preset period, it can be powered off again, thereby saving power and prolonging service life. Powering a power-saving group up or down may be implemented by the processor invoking an interface registered by the power control module.
Table 1 shows the structure of an ID corresponding to a stored data object in the disk storage technique of the present application. The table is merely an example and is not intended to limit the scope of the present application.
TABLE 1
High-order placeholder (2 bits) | Cage number (6 bits) | Power-saving group number within cage (8 bits) | Reserved field (16 bits) | Number within power-saving group (32 bits)
As shown in Table 1, the ID is composed of 64 bits. The first two bits are a high-order placeholder that distinguishes the ID from commonly used addresses or long integer variables; for example, the high-order placeholder may be a fixed "01" value. The 6 bits after the high-order placeholder identify the number of the cage to which the power-saving group storing the data object corresponding to the ID belongs. The next 8 bits record the number or slot position, within the cage, of the power-saving group or disk storing the data object. The next 16 bits are a reserved field kept for later expansion, in which information about the software version can be stored, or which can be used to record the state of the object. The last 32 bits record the number within the power-saving group or disk; numbers are allocated consecutively starting from 0 and represent the physical address at which the data object to be accessed is stored in the power-saving group.
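As a sketch, the 64-bit layout described in Table 1 can be packed and unpacked with bitwise operations as follows (the function and parameter names are illustrative):

```python
PLACEHOLDER = 0b01  # fixed high-order value distinguishing IDs from plain addresses

def pack_id(cage: int, group: int, number: int, reserved: int = 0) -> int:
    """Pack the Table 1 fields into one 64-bit object ID.

    Layout, high to low: 2-bit placeholder, 6-bit cage number, 8-bit
    power-saving-group number, 16-bit reserved field, 32-bit object number.
    """
    assert 0 <= cage < 2**6 and 0 <= group < 2**8
    assert 0 <= reserved < 2**16 and 0 <= number < 2**32
    return (PLACEHOLDER << 62) | (cage << 56) | (group << 48) | (reserved << 32) | number

def unpack_id(obj_id: int):
    """Recover (cage, group, reserved, number) from a 64-bit ID."""
    return ((obj_id >> 56) & 0x3F,        # cage number
            (obj_id >> 48) & 0xFF,        # power-saving group number
            (obj_id >> 32) & 0xFFFF,      # reserved field
            obj_id & 0xFFFFFFFF)          # number within the group
```

Any ID produced this way has its top two bits equal to "01", so it cannot be confused with a small long-integer value.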
Here, when allocating the storage units (including the metadata storage unit portion and the data object storage unit portion, the allocations of which are independent of each other) within each power saving group, it is possible to adopt a manner similar to a ring structure, that is, sequentially allocate storage units for data objects to be stored, starting from the starting storage unit of the metadata storage unit portion; when the last storage unit has been allocated, the search for a storage unit that can be allocated is resumed starting from the metadata storage unit partial starting storage unit. After the storage location for storing the metadata is determined, the storage location in the data area for storing the data object is also determined accordingly.
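The ring-style search just described might look like the following sketch, with the bitmap held as a list of bits and `last` the most recently allocated slot number:

```python
def find_allocatable(bitmap, last):
    """Ring-style search for a free slot (0 bit) in the metadata area.

    Starts just after `last`, the most recently allocated slot, and wraps
    around to the start of the area when the end is reached. Returns the
    slot number, or None if the power-saving group is full.
    """
    n = len(bitmap)
    for step in range(1, n + 1):
        slot = (last + step) % n
        if bitmap[slot] == 0:
            return slot
    return None
```

Because the data-object area mirrors the metadata area one-to-one, the returned slot number also determines where the object itself is stored.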
During operation of the disk storage system, a least-recently-used (LRU) queue may be used to manage the power-saving groups. The LRU queue stores the power-saving groups most recently in the "active" state, i.e., the power-saving groups currently powered on.
A schematic structural diagram of a data storage system according to an embodiment of the present application is shown in fig. 1. As shown in fig. 1, a plurality of pointers are recorded in the LRU queue, each pointing to an in-memory information triplet about an active power-saving group; the triplet includes the number of the cage to which the active power-saving group belongs, the number of the active power-saving group within the cage, and the bitmap information (bitmap) of the active power-saving group. The bitmap information records, in 32-bit units, the last 32 bits of each ID, i.e., the numbers of the data objects within the power-saving group. That is, the bitmap information is the metadata of all data objects stored in the power-saving group. As can be seen from fig. 1, the bitmap information is also recorded in the metadata area of the power-saving group, and the data area records the corresponding data objects.
Depending on the access status of each power-saving group pointed to by the pointers in the LRU queue, the processor may move the most recently accessed power-saving group to the head of the LRU queue (i.e., move the corresponding pointer to the head of the queue). When the LRU queue is full and a previously dormant power-saving group needs to be added to it, the tail power-saving group in the LRU queue may be cleared (i.e., its pointer deleted), the newly added power-saving group inserted at the head of the queue, and the cage number of the newly added power-saving group, its number within the cage, and its bitmap information read into memory.
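The LRU management just described (move the accessed group to the head; evict and power down the tail when full) can be sketched as follows. The `power_on` and `power_off` callables are hypothetical stand-ins for the interface registered by the power control module:

```python
from collections import OrderedDict

class PowerSavingGroupLRU:
    """Minimal sketch of the LRU queue of active power-saving groups.

    Each entry maps a (cage, group) key to its in-memory triplet: cage
    number, group number within the cage, and the group's bitmap.
    """
    def __init__(self, capacity, power_on, power_off):
        self.capacity = capacity
        self.queue = OrderedDict()   # first key = most recently accessed
        self.power_on, self.power_off = power_on, power_off

    def touch(self, cage, group):
        """Move an already-active group to the head of the queue."""
        self.queue.move_to_end((cage, group), last=False)

    def admit(self, cage, group, bitmap):
        """Wake a dormant group and insert it at the head, first evicting
        and powering down the tail group if the queue is full."""
        if len(self.queue) >= self.capacity:
            (tc, tg), _ = self.queue.popitem(last=True)   # tail = least recent
            self.power_off(tc, tg)
        self.power_on(cage, group)
        self.queue[(cage, group)] = (cage, group, bitmap)
        self.queue.move_to_end((cage, group), last=False)
```

`OrderedDict` is used here only because it gives head/tail moves in O(1); any queue with pointer entries, as in fig. 1, would serve.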
When a data object in a certain power saving group needs to be accessed, the number of the object in the power saving group recorded in the bitmap information of the power saving group can be acquired, and the storage address of the data object can be acquired according to the information.
Fig. 2 is a flowchart illustrating a method for assigning an ID to a data object to be stored according to an embodiment of the present application. Referring to fig. 2, the method may include the steps of:
s210, when the data object needs to be stored, sequentially accessing a plurality of power-saving groups in the recently accessed LRU queue, and determining whether an allocable position exists in the power-saving groups;
s220, under the condition that an assignable position exists in the current power saving group, generating an ID corresponding to the data object, wherein the ID comprises a number of the assignable position in the current power saving group;
s230, storing bitmap information of the current power saving group in a first area of the current power saving group, the bitmap information including the number of the assignable position in the current power saving group;
s240, storing the data object in a second area of the current power saving group.
In step S210, when the power-saving groups in the LRU queue are accessed sequentially, the step terminates as soon as an allocable position is found in the current power-saving group, and the next step is performed. Moreover, because storage units within each power-saving group are allocated using the ring structure described above, each power-saving group can record the number of the most recently allocated storage unit. When determining whether an allocatable storage unit exists in a power-saving group, the search proceeds sequentially starting from the most recently allocated storage unit number recorded in the metadata area of the power-saving group; when the last unit of the metadata area is reached, the search for an allocatable storage unit continues from the starting unit of the metadata area.
In step S220, the generated ID corresponding to the data object has the structure shown in Table 1, with the number of the allocable position within the current power-saving group stored in the last 32 bits of the ID.
In steps S230 and S240, the first area is the metadata storage unit portion described previously, and the second area is the data object storage unit portion described previously.
FIG. 3 is a flowchart illustrating a method for assigning an ID to a data object to be stored according to another embodiment of the present application. Referring to fig. 3, the method may include the steps of:
s310, judging whether a power saving group exists in the LRU queue, if so, entering a step S330, otherwise, entering a step S320;
s320, calling an interface registered by the power control module to wake up a new power-saving set, inserting the LRU sequence, reading information containing a bitmap of the LRU sequence into a memory, and entering step S330;
s330, accessing a first power saving set in the LRU queue, and entering the step S340;
s340, judging whether the distributable area exists in the current power saving group, if so, entering a step S350, otherwise, entering a step S360;
s350, determining the number of the allocable area in the power-saving group, generating the ID of the data object together with the cage number and the number of the power-saving group in the cage, and then ending the whole method flow;
s360, judging whether an unaccessed power-saving group exists in the LRU queue, if so, entering a step S370, otherwise, entering a step S380;
s370, accessing the next power saving group in the LRU queue, and entering the step S340;
s380, checking whether the LRU queue is full, if so, entering a step S390, otherwise, entering a step S320;
s390, try to eliminate the power saving set accessed earlier in the LRU queue, call the interface registered by the power control module to make the corresponding power saving set sleep, and then go to step S320.
Steps S310 and S320 together initialize the LRU queue. The queue is empty at the start, i.e. it contains zero power-saving groups; waking up a new power-saving group, inserting it into the LRU queue, and reading information including its bitmap into memory then raises the number of power-saving groups in the queue to one.
In step S340, because storage units within a power-saving group are allocated in a ring structure as described above, each power-saving group may record the number of its most recently allocated storage unit. When determining whether the group contains an allocatable storage unit, the search proceeds sequentially from that recorded number in the group's metadata area; once the last unit of the metadata area is reached, the search continues from the starting unit of the metadata area. If an allocatable storage unit number is found, the flow proceeds to step S350; if the search returns to the most recently allocated storage unit number without finding an allocatable unit, the flow proceeds to step S360.
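The ring search described in step S340 can be sketched as follows. This is a simplified model: the group's bitmap is an in-memory list, `last_allocated` is the unit number recorded in the metadata area, and the convention that 0 marks a free unit is an assumption:

```python
def find_allocatable(bitmap, last_allocated):
    """Ring search of a power-saving group's bitmap: start just after the
    most recently allocated unit number (recorded in the metadata area),
    wrap around past the last unit to the start, and return the first free
    unit, or None if a full pass finds nothing. 0 = free, 1 = allocated
    (assumed convention)."""
    n = len(bitmap)
    for offset in range(1, n + 1):
        idx = (last_allocated + offset) % n
        if bitmap[idx] == 0:
            return idx
    return None  # a complete wrap found no free unit -> step S360
```

Starting from the last allocated number rather than from position 0 spreads allocations evenly around the ring, so the scan usually terminates after a few probes instead of re-examining the densely allocated front of the bitmap.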
In step S350, the ID of the data object to be stored is generated from the number of the allocatable storage unit found within the power-saving group, the number of the cage containing that power-saving group, and the number of the power-saving group within the cage.
When the flow reaches step S320 from step S380 or S390, no allocatable storage unit has been found in any power-saving group in the queue; a new power-saving group is therefore awakened and inserted at the head of the LRU queue, and its bitmap information is read into memory. The next time a data object needs a storage unit, the search again starts from the power-saving group at the head of the LRU queue.
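Taken together, steps S310 to S390 amount to the following allocation loop. This is an illustrative sketch, not the patented implementation: `wake_group` and `sleep_group` stand in for the interfaces registered by the power control module, group bitmaps are modeled as in-memory lists, the head of an `OrderedDict` plays the role of the head of the LRU queue, and a freshly woken group is assumed to contain free units:

```python
from collections import OrderedDict

class Allocator:
    """Sketch of the S310-S390 allocation flow over an LRU queue of
    power-saving groups (illustrative names and data model)."""

    def __init__(self, max_active, wake_group, sleep_group):
        self.lru = OrderedDict()      # group id -> bitmap; head = most recent
        self.max_active = max_active  # capacity of the LRU queue
        self.wake_group = wake_group  # power-control interface: wake a group
        self.sleep_group = sleep_group  # power-control interface: sleep a group

    def allocate(self):
        # S330-S370: scan the active groups for a free unit (0 = free).
        for gid, bitmap in self.lru.items():
            for unit, bit in enumerate(bitmap):
                if bit == 0:
                    bitmap[unit] = 1
                    self.lru.move_to_end(gid, last=False)  # refresh recency
                    return gid, unit   # S350: feeds into the object ID
        # S380/S390: every active group is full; evict the tail if the
        # queue is at capacity and put the evicted group to sleep.
        if len(self.lru) >= self.max_active:
            old_gid, _ = self.lru.popitem(last=True)
            self.sleep_group(old_gid)
        # S320: wake a fresh group, insert it at the head, and retry.
        gid, bitmap = self.wake_group()
        self.lru[gid] = bitmap
        self.lru.move_to_end(gid, last=False)
        return self.allocate()
```

Because groups that recently yielded an allocation stay near the head, successive stores cluster into the few groups that are awake, which is what lets the remaining groups sleep.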
FIG. 4 shows a flow diagram of a method of recycling IDs for stored data objects according to an embodiment of the application. Referring to fig. 4, the method may include the steps of:
in step S410, determine whether the power-saving group to be searched (the group corresponding to the ID being recycled) is present in the LRU queue; if so, proceed to step S450, otherwise proceed to step S420;
in step S420, determine whether the LRU queue is full; if so, proceed to step S430, otherwise proceed to step S440;
in step S430, evict the tail element of the LRU queue and call the interface registered by the power control module to put the corresponding power-saving group to sleep, then proceed to step S440;
in step S440, wake up the power-saving group corresponding to the recycled ID and read its bitmap information into memory, then proceed to step S450;
in step S450, clear the bit corresponding to the recycled ID from the bitmap information, write the updated bitmap information to the corresponding area of the disk, and end the flow.
In step S450, clearing the bits corresponding to the recycled ID in the bitmap information held in memory and in the metadata area amounts to deleting the index record of the data object associated with that ID. Although the data object is still recorded in some storage unit of the data area at this point, that storage unit may be reallocated, and the original data object is overwritten when a new data object is stored there.
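The recycle flow S410 to S450 can be sketched as follows. Names are illustrative: `lru` maps group numbers to in-memory bitmaps, and `wake_group`, `sleep_group`, and `write_back` stand in for the power-control interface and the disk write of the metadata area:

```python
def recycle(gid, unit, lru, max_active, wake_group, sleep_group, write_back):
    """S410-S450: clear the bitmap bit of a recycled ID, waking the owning
    power-saving group (and first evicting the queue tail if the LRU queue
    is full) when that group is not currently active."""
    if gid not in lru:                    # S410: group not in the queue
        if len(lru) >= max_active:        # S420/S430: evict the tail group
            tail_gid = next(reversed(lru))
            del lru[tail_gid]
            sleep_group(tail_gid)
        lru[gid] = wake_group(gid)        # S440: read bitmap into memory
    bitmap = lru[gid]
    bitmap[unit] = 0                      # S450: delete the index record
    write_back(gid, bitmap)               # persist to the metadata area
```

Note that only the bitmap is touched: the data area is left as-is, matching the observation above that the freed unit is simply overwritten on its next allocation.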
The present application also relates to a data storage device 500, the data storage device 500 comprising:
an access module 510 configured to, when a data object needs to be stored, sequentially access a plurality of power-saving groups in the least-recently-used (LRU) queue and determine whether an allocatable position exists in each power-saving group;
a generating module 520 configured to generate an ID corresponding to the data object when an allocable position exists in a current power saving group, the ID including a number of the allocable position in the current power saving group;
a first saving module 530 configured to store bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocatable position within the current power-saving group;
a second saving module 540 configured to store the data object in a second region of the current power saving group.
Corresponding to the above data storage method, the present application further provides a data storage system. The data storage system includes a plurality of cages, each containing a plurality of power-saving groups, as well as a processor and a memory; the memory stores an LRU queue recording the currently active power-saving groups, and the processor is adapted to execute the method steps described above, which are not repeated here.
Furthermore, the present application provides a control device for a data storage system, comprising at least one processor adapted to perform the method steps as described above.
The application also provides an electronic device and a readable storage medium for managing a data storage system according to the embodiment of the application.
FIG. 6 is a block diagram of an electronic device for the data storage method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, and mainframes. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in FIG. 6, the electronic apparatus includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In FIG. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the data storage methods provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the data storage method provided herein.
The memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the data storage methods in the embodiments of the present application. The processor 601 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 602, that is, implementing the data storage method in the above method embodiment.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the data storage method, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 may optionally include memory located remotely from the processor 601, which may be connected to the electronic devices via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, and the present application is not limited in this respect.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method of storing data, comprising:
when a data object needs to be stored, sequentially accessing a plurality of power-saving groups in a least-recently-used (LRU) queue, and determining whether an allocatable position exists in each power-saving group;
generating an ID corresponding to the data object when an allocable position exists in a current power saving group, wherein the ID comprises a number of the allocable position in the current power saving group;
storing bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable positions within the current power-saving group;
storing the data object in a second region of the current power saving group.
2. The method of claim 1, further comprising:
when there is no unaccessed power-saving group in the LRU queue and the LRU queue is full, a tail power-saving group in the LRU queue is evicted and the evicted power-saving group is put to sleep.
3. The method of claim 2, further comprising:
when the power saving set which is not accessed does not exist in the LRU queue and the LRU queue is not full, waking up the power saving set which is not used currently and inserting the power saving set into the head position of the LRU queue;
and sequentially accessing a plurality of power saving groups in the LRU queue from the head position of the LRU queue and judging whether an allocable position exists in the power saving groups.
4. The method of claim 1, further comprising:
when a plurality of data objects are stored, sequentially and respectively acquiring the numbers of allocable positions in a power-saving group corresponding to the data objects in the power-saving group for each data object;
storing bitmap information of the power-saving group in a first area of the power-saving group, the bitmap information containing numbers of assignable positions within the power-saving group corresponding to the plurality of data objects, respectively;
storing the plurality of data objects in a second region of the power-saving group based on the number of assignable locations within the power-saving group corresponding to each data object.
5. The method of claim 4, further comprising:
when the last position of the first area of the power saving group has been allocated, the allocable positions of the first area are sequentially searched starting from the start position of the first area.
6. The method of claim 1, wherein when a data object needs to be removed, the method further comprises:
reading bitmap information of a power saving group containing the data object into a memory;
acquiring the number of the storage position of the data object in the power-saving group based on the ID corresponding to the data object;
and removing the bit corresponding to the number from the bitmap information.
7. The method of claim 1, further comprising:
acquiring bitmap information corresponding to the power saving group in a memory;
and acquiring a data object from the power-saving group based on the bitmap information.
8. The method of claim 1, wherein the ID further comprises a cage number and a number of the power saving group within a cage.
9. The method of claim 1, wherein the ID further comprises a high-order placeholder having a specific value.
10. The method of claim 1, wherein the power-saving group comprises a single disk, a RAID comprising a plurality of disks, or a plurality of volume groups.
11. A data storage device, comprising:
an access module configured to, when a data object needs to be stored, sequentially access a plurality of power-saving groups in a least-recently-used (LRU) queue and determine whether an allocatable position exists in each power-saving group;
a generation module configured to generate an ID corresponding to the data object when an allocable position exists in a current power saving group, the ID including a number of the allocable position within the current power saving group;
a first saving module configured to store bitmap information of the current power saving group in a first area of the current power saving group, the bitmap information containing the number of the allocable position within the current power saving group;
a second save module configured to store the data object in a second region of the current power-saving group.
12. A data storage system, comprising:
a processor;
a memory having stored therein an LRU queue;
wherein the processor is configured to:
when the data object needs to be stored, sequentially accessing a plurality of power-saving groups in the recently-accessed LRU queue, and determining whether an allocable position exists in each power-saving group;
generating an ID corresponding to the data object when an allocable position exists in a current power saving group, wherein the ID comprises a number of the allocable position in the current power saving group;
storing bitmap information of the current power-saving group in a first area of the current power-saving group, the bitmap information containing the number of the allocable positions within the current power-saving group;
storing the data object in a second region of the current power saving group.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
CN202010334382.7A 2020-04-24 2020-04-24 Data storage method, device, system, electronic equipment and storage medium Active CN111552439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334382.7A CN111552439B (en) 2020-04-24 2020-04-24 Data storage method, device, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111552439A true CN111552439A (en) 2020-08-18
CN111552439B CN111552439B (en) 2022-01-21

Family

ID=72003940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334382.7A Active CN111552439B (en) 2020-04-24 2020-04-24 Data storage method, device, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111552439B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101295310A (en) * 2008-06-12 2008-10-29 上海交通大学 Method for storing data and metadata on magnetic disk
CN101501656A (en) * 2005-09-29 2009-08-05 科潘系统公司 System for archival storage of data
US20110035605A1 (en) * 2009-08-04 2011-02-10 Mckean Brian Method for optimizing performance and power usage in an archival storage system by utilizing massive array of independent disks (MAID) techniques and controlled replication under scalable hashing (CRUSH)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112540728A (en) * 2020-12-07 2021-03-23 北京云宽志业网络技术有限公司 Power-saving storage method, device, equipment and storage medium
CN112540728B (en) * 2020-12-07 2022-04-01 北京云宽志业网络技术有限公司 Power-saving storage method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111552439B (en) 2022-01-21

Similar Documents

Publication Publication Date Title
US9417794B2 (en) Including performance-related hints in requests to composite memory
US8645750B2 (en) Computer system and control method for allocation of logical resources to virtual storage areas
CN103336849B (en) A kind of database retrieval system improves the method and device of retrieval rate
KR101739213B1 (en) Methods and apparatus for compressed and compacted virtual memory
JP5593577B2 (en) Storage system and control method thereof
US8595451B2 (en) Managing a storage cache utilizing externally assigned cache priority tags
CN103019888B (en) Backup method and device
US7966470B2 (en) Apparatus and method for managing logical volume in distributed storage systems
CN107481762B (en) Trim processing method and device of solid state disk
CN105574141B (en) Method and device for carrying out data migration on database
CN103500164A (en) Data storage space recovery system and method
US10268501B2 (en) Memory optimization by phase-dependent data residency
CN101644996A (en) Storage method of index data and storage control device
US9489404B2 (en) De-duplicating data in a network with power management
CN103761190A (en) Data processing method and apparatus
CN105897859B (en) Storage system
US20190057032A1 (en) Cache Coherence Management Method and Node Controller
CN114138193A (en) Data writing method, device and equipment for solid state disk with partitioned name space
CN110704334B (en) Method, system and equipment for important product data management
CN111552439B (en) Data storage method, device, system, electronic equipment and storage medium
CN109189739B (en) Cache space recovery method and device
CN111858393B (en) Memory page management method, memory page management device, medium and electronic equipment
CN112764662B (en) Method, apparatus and computer program product for storage management
US11803469B2 (en) Storing data in a log-structured format in a two-tier storage system
US11662916B2 (en) Method, electronic device and computer program product for managing a storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant