CN109388346A - Data flushing method and related apparatus - Google Patents

Data flushing method and related apparatus

Info

Publication number
CN109388346A
CN109388346A
Authority
CN
China
Prior art keywords
flushing
data
node
disk
flush buffer queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811198130.5A
Other languages
Chinese (zh)
Other versions
CN109388346B (en)
Inventor
甄凤远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201811198130.5A priority Critical patent/CN109388346B/en
Publication of CN109388346A publication Critical patent/CN109388346A/en
Application granted granted Critical
Publication of CN109388346B publication Critical patent/CN109388346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614: Improving the reliability of storage systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638: Organizing or formatting or addressing of data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0673: Single storage device
    • G06F 3/0674: Disk device
    • G06F 3/0676: Magnetic disk device

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data flushing method and related apparatus. The method determines the node metadata in memory that needs to be flushed to disk; obtains, from a flush buffer queue, the node data arranged at the front, the flush buffer queue containing node data from the disk arranged in a preset order; merges the node metadata with the node data arranged at the front; and flushes the merged data to the disk. Because the flush buffer queue is prepared in advance, before the metadata is flushed, when a merge is needed only the front node data has to be taken from the flush buffer queue, with no need to wait for disk node data to be read before merging it with the metadata. Compared with the prior art, the technical solution of the application therefore significantly improves the flushing efficiency of metadata.

Description

Data flushing method and related apparatus
Technical field
This application relates to the field of storage software, and in particular to a data flushing method and related apparatus.
Background art
When a host issues an input/output (I/O) request, the metadata is first stored in memory in a B+ tree or a similar data structure. However, memory in a storage system is limited, and when a power failure occurs in the storage system, metadata that resides only in memory may be lost. For these reasons, when the metadata in memory reaches a certain threshold, a flush is triggered and the data in memory is flushed to disk, that is, the in-memory data is written to disk in persistent form. In turn, when the host performs a data query, the data can be located quickly.
The traditional metadata flushing method is illustrated below, taking the B+ tree as the example data structure.
In the traditional metadata flushing method, when the B+ tree in memory reaches the flush threshold, the B+ tree on disk is read and merged with the B+ tree in memory, and after the merge completes the new B+ tree is flushed to disk; this completes one flush operation, i.e., one metadata flush. During the merge, specifically, the smallest leaf node in memory is chosen, then the smallest leaf node is read from disk, and a new B+ tree is reorganized according to the size of the Key value that represents the logical address of each node. As can be seen, this mode of operation forces the merge to wait for disk node data to be read. When the B+ tree on disk is especially large (i.e., the volume of data on disk is huge) and the host I/O request volume spikes within a short time, the flushing efficiency is very low and storage system performance drops sharply.
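To make the drawback concrete, the following minimal Python sketch (an illustration only, not the patented method; read_node_from_disk and write_tree_to_disk are assumed helper callables, and nodes are simplified to (logical_address, value) pairs) shows a traditional flush in which each disk leaf is read synchronously inside the merge loop, so the merge stalls on every read:
    # Illustrative sketch of a traditional flush: the merge blocks on each disk read.
    def traditional_flush(memory_leaves, disk_leaf_addresses,
                          read_node_from_disk, write_tree_to_disk):
        merged = []
        mem = sorted(memory_leaves)              # in-memory leaves, key = logical address
        i = 0
        for address in sorted(disk_leaf_addresses):
            disk_node = read_node_from_disk(address)   # merge waits here for disk I/O
            while i < len(mem) and mem[i][0] <= disk_node[0]:
                merged.append(mem[i])
                i += 1
            merged.append(disk_node)
        merged.extend(mem[i:])                   # remaining in-memory leaves
        write_tree_to_disk(merged)               # rebuilt tree is flushed back to disk
        return merged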
Therefore, how to improve the metadata flushing efficiency of a storage system has become a technical problem that urgently needs to be solved in this field.
Summary of the invention
In view of the above problems, this application provides a data flushing method and related apparatus to improve the flushing efficiency of metadata.
The embodiments of the present application disclose the following technical solutions:
In a first aspect, the application provides a data flushing method, the method comprising:
determining the node metadata in memory that needs to be flushed to disk;
obtaining, from a flush buffer queue, the node data arranged at the front, wherein the flush buffer queue contains node data from the disk arranged in a preset order;
merging the node metadata with the node data arranged at the front; and
flushing the merged data to the disk.
In a possible implementation, before the obtaining, from the flush buffer queue, the node data arranged at the front, the method may further comprise:
creating the flush buffer queue; and
loading the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
In a possible implementation, after the obtaining, from the flush buffer queue, the node data arranged at the front, the method may further comprise:
when it is determined that the flush buffer queue is not full, loading node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
In a possible implementation, the preset order is the ascending order of the logical addresses of the node data on the disk;
the obtaining, from the flush buffer queue, the node data arranged at the front is specifically:
obtaining the node data with the smallest logical address from the flush buffer queue;
the merging the node metadata with the node data arranged at the front is specifically:
merging the node metadata with the node data having the smallest logical address.
In a possible implementation, the data structure stored on the disk is a first B+ tree, and the data structure stored in the memory is a second B+ tree;
the merging the node metadata with the node data arranged at the front is specifically:
obtaining a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue;
the flushing the merged data to the disk is specifically:
flushing the third B+ tree to the disk.
In a second aspect, the application provides a data flushing device, the device comprising:
a to-be-flushed metadata determination unit, configured to determine the node metadata in memory that needs to be flushed to disk;
a disk node data acquisition unit, configured to obtain, from a flush buffer queue, the node data arranged at the front, wherein the flush buffer queue contains node data from the disk arranged in a preset order;
a data merging unit, configured to merge the node metadata with the node data arranged at the front; and
a data flushing unit, configured to flush the merged data to the disk.
In a possible implementation, the device may further comprise:
a queue creation unit, configured to create the flush buffer queue; and
a first data loading unit, configured to load the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
In a possible implementation, the device may further comprise:
a second data loading unit, configured to, when it is determined that the flush buffer queue is not full, load node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
In a possible implementation, the preset order is the ascending order of the logical addresses of the node data on the disk;
the disk node data acquisition unit specifically comprises:
a first acquisition subunit, configured to obtain the node data with the smallest logical address from the flush buffer queue;
the data merging unit specifically comprises:
a first merging subunit, configured to merge the node metadata with the node data having the smallest logical address.
In a possible implementation, the data structure stored on the disk is a first B+ tree, and the data structure stored in the memory is a second B+ tree;
the data merging unit specifically comprises:
a second merging subunit, configured to obtain a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue;
the data flushing unit specifically comprises:
a first flushing subunit, configured to flush the third B+ tree to the disk.
Compared with the prior art, the application has the following advantages:
The data flushing method provided by the application determines the node metadata in memory that needs to be flushed to disk; obtains, from a flush buffer queue, the node data arranged at the front, the flush buffer queue containing node data from the disk arranged in a preset order; merges the node metadata with the node data arranged at the front; and flushes the merged data to the disk.
Because the flush buffer queue is prepared in advance, before the metadata is flushed, when a merge is needed only the front node data has to be taken from the flush buffer queue, with no need to wait for disk node data to be read before merging it with the metadata. Compared with the prior art, the method therefore significantly improves the flushing efficiency of metadata.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of this application or in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; a person of ordinary skill in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a flowchart of a data flushing method provided by an embodiment of this application;
Fig. 2 is a flowchart of another data flushing method provided by an embodiment of this application;
Fig. 3 is a schematic structural diagram of a data flushing device provided by an embodiment of this application.
Specific embodiments
In the prior art, when data flushing is performed, the reading of disk node data must be waited for, which affects the execution efficiency of data flushing. When the data structure on disk is huge and the host I/O request volume spikes within a short time, the drawbacks of this data flushing method are even more significant, and in severe cases storage system performance is affected. On this basis, this application provides a solution and proposes a data flushing method and related apparatus. The method and the device are described below with reference to the accompanying drawings and embodiments.
First embodiment
Referring to Fig. 1, which is a flowchart of a data flushing method provided by an embodiment of this application.
As shown in Fig. 1, the data flushing method provided by this embodiment comprises:
Step 101: determine the node metadata in memory that needs to be flushed to disk.
In a possible implementation, a flush threshold may be preset; when the data structure in memory (for example, a B+ tree) reaches the preset flush threshold, the metadata in memory needs to be written to disk so that the metadata is persisted. At this point, the node metadata with the smaller logical addresses in the in-memory B+ tree may be determined as the node metadata that needs to be flushed to disk.
Of course, the node metadata in memory that needs to be flushed to disk may also be determined according to I/O requests. In this embodiment, the specific way of determining the node metadata that needs to be flushed to disk is not limited.
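As a rough illustration of the threshold check (a minimal Python sketch; FLUSH_THRESHOLD, batch_size and the "smallest logical addresses first" selection rule are assumptions made for this example, not values fixed by the method):
    FLUSH_THRESHOLD = 4096            # metadata nodes allowed to reside in memory

    def select_nodes_to_flush(memory_tree_nodes, batch_size=256):
        """memory_tree_nodes: dict mapping logical address -> node metadata."""
        if len(memory_tree_nodes) < FLUSH_THRESHOLD:
            return []                 # threshold not reached, no flush triggered
        # Choose the nodes with the smaller logical addresses for this flush round.
        addresses = sorted(memory_tree_nodes)[:batch_size]
        return [(addr, memory_tree_nodes[addr]) for addr in addresses]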
Step 102: obtain, from a flush buffer queue, the node data arranged at the front; the flush buffer queue contains node data from the disk arranged in a preset order.
On the disk, data may be stored in a data structure such as a B+ tree. Each node of the B+ tree has a corresponding logical address. Here, the logical address corresponding to a node may be the specific logical address itself, or an encoded form of the specific logical address; for example, the first node corresponds to logical address 1, the second node to logical address 2, the third node to logical address 3, and so on.
Because different nodes have logical addresses of different sizes, the node data on the disk can be sorted in order of ascending or descending logical address.
Taking as an example a preset order in which the logical addresses of the node data on the disk ascend, if the node data in the flush buffer queue is arranged in ascending order of logical address, then the element at the front of the queue is the node data with the smallest logical address. Obtaining the node data arranged at the front from the flush buffer queue in this step therefore means obtaining the node data with the smallest logical address in the flush buffer queue.
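For example (a minimal sketch; the (logical_address, node_data) element format is an assumption), with the queue held in ascending address order, taking the front element yields the node with the smallest logical address:
    from collections import deque

    # Flush buffer queue holding (logical_address, node_data) pairs in ascending
    # address order; the leftmost element always has the smallest logical address.
    flush_buffer_queue = deque([(1, "node-A"), (2, "node-B"), (3, "node-C")])

    front = flush_buffer_queue.popleft()
    print(front)    # (1, 'node-A'): the node data with the smallest logical address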
It should be noted that in this embodiment step 101 may be executed before step 102, simultaneously with step 102, or indeed after step 102. The execution order of step 101 and step 102 is therefore not limited in this embodiment.
Step 103: merge the node metadata with the node data arranged at the front.
The node data arranged earlier in the flush buffer queue is preferentially merged with the node metadata in memory that needs to be flushed to disk. In this step, therefore, the node metadata determined in step 101 is merged with the front node data obtained in step 102.
If the node data on the disk is arranged in the flush buffer queue in ascending order of logical address, then this step specifically merges the node metadata with the node data in the flush buffer queue that has the smallest logical address.
It can be understood that after the node data arranged at the front of the flush buffer queue has been used for a merge, it no longer remains in the flush buffer queue. That is, once it has been consumed by a merge, the node data arranged at the front of the flush buffer queue becomes the node data that was adjacent to (immediately behind) the previously front node data. For example, in this embodiment, before a data merge the node data in the flush buffer queue is A1, A2, A3, ..., where A1 is the front node data; after one data merge, the node data in the flush buffer queue is A2, A3, ..., where A2 is the front node data.
Step 104: flush the merged data to the disk.
Flushing the merged data to the disk completes the flush of the node metadata. When the node metadata originally held in the in-memory data structure needs to be queried, the query is automatically directed to the address of the merged data on the disk.
It can be understood that a flush may be performed once per merge, or the data from several merges may be flushed to disk in one unified flush operation.
The above is the data flushing method provided by the embodiments of this application. In this method, the node metadata in memory that needs to be flushed to disk is determined; the node data arranged at the front is obtained from a flush buffer queue, the flush buffer queue containing node data from the disk arranged in a preset order; the node metadata is merged with the node data arranged at the front; and the merged data is flushed to the disk. Because the flush buffer queue is prepared in advance before the metadata is flushed, when a merge is needed only the front node data has to be taken from the flush buffer queue, with no need to wait for disk node data to be read before merging it with the metadata. Compared with the prior art, the method therefore significantly improves the flushing efficiency of metadata.
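Putting steps 101 to 104 together, a possible end-to-end sketch looks as follows (illustrative only: merge_nodes and flush_to_disk are assumed callables, flush_buffer_queue is expected to behave like the deque in the earlier sketch, and the batching reflects the remark above that several merged results may be flushed in one unified operation):
    def flush_rounds(metadata_batches, flush_buffer_queue, merge_nodes, flush_to_disk):
        """One flush round per batch of in-memory node metadata (steps 101-103),
        followed by a single unified flush of all merged results (step 104)."""
        merged_results = []
        for node_metadata in metadata_batches:          # step 101: nodes chosen for flushing
            if not flush_buffer_queue:
                break
            front_node = flush_buffer_queue.popleft()   # step 102: front of the queue, no disk read
            merged_results.append(merge_nodes(node_metadata, front_node))  # step 103
        flush_to_disk(merged_results)                   # step 104: flushes may be batched
        return merged_results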
On the basis of the data flushing method provided in the previous embodiment, in order to meet the demand on the flush buffer queue when I/O requests surge and to guarantee efficient flushing of metadata, this application further provides another data flushing method. This method is described and illustrated in detail below with reference to the embodiments and the accompanying drawings.
Second embodiment
Referring to Fig. 2, which is a flowchart of another data flushing method provided by this application.
As shown in Fig. 2, the data flushing method provided by this embodiment comprises:
Step 201: create the flush buffer queue.
In this embodiment, the flush buffer queue is used to stock the node data on the disk that is to be merged with node metadata. The node metadata here specifically refers to node metadata that is to be flushed to disk.
Step 202: load the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
As an optional implementation, this step may be realized by a Load thread. The Load thread loads the nodes of the data structure stored on the disk (for example, the first B+ tree) into the flush buffer queue one by one in the preset order (for example, ascending order of logical address). When the flush buffer queue reaches its maximum depth, it is filled with node data, and loading stops.
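One possible shape for such a Load thread is sketched below (an assumption-laden illustration: queue.Queue stands in for the flush buffer queue because its blocking put() naturally stops loading once the maximum depth is reached, and iter_disk_nodes is a hypothetical helper yielding disk nodes in ascending logical-address order):
    import queue
    import threading

    QUEUE_DEPTH = 1024                      # maximum depth of the flush buffer queue
    flush_buffer_queue = queue.Queue(maxsize=QUEUE_DEPTH)

    def load_thread(disk_nodes_in_ascending_order):
        """Load disk node data into the flush buffer queue in the preset order.
        put() blocks once the queue reaches its maximum depth, so loading stops
        whenever the queue is full."""
        for node in disk_nodes_in_ascending_order:
            flush_buffer_queue.put(node)

    # Hypothetical start-up (iter_disk_nodes is an assumed helper):
    # threading.Thread(target=load_thread, args=(iter_disk_nodes(),), daemon=True).start()
With a blocking queue of this kind the top-up of step 207 below happens implicitly; the explicit monitoring variant described in that step is sketched separately there.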
Step 203: determine the node metadata in memory that needs to be flushed to disk.
Step 204: obtain, from the flush buffer queue, the node data arranged at the front; the flush buffer queue contains node data from the disk arranged in the preset order.
Step 205: merge the node metadata with the node data arranged at the front.
For example, if the data structure stored on the disk is a first B+ tree and the data structure stored in memory is a second B+ tree, then this step may specifically be: obtain a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue.
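Treating each B+ tree's leaf level as (logical address, value) pairs, the merge of step 205 can be sketched as follows (illustration only: node splitting and internal-node rebuilding of a real B+ tree are omitted, and the rule that in-memory metadata overrides on-disk data at the same address is an assumption):
    def merge_to_third_tree(second_tree_nodes, first_tree_front_nodes):
        """Build the leaf level of the merged (third) B+ tree from the in-memory
        (second) tree's node metadata and the on-disk (first) tree's node data
        taken from the front of the flush buffer queue."""
        combined = {}
        for address, value in first_tree_front_nodes:   # older on-disk node data
            combined[address] = value
        for address, value in second_tree_nodes:        # newer in-memory metadata
            combined[address] = value
        return sorted(combined.items())                 # ordered by logical address (the Key)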
Step 206: flush the merged data to the disk.
Continuing the previous example, this step may specifically be: flush the third B+ tree obtained after the merge to the disk, completing the flush of the node metadata.
After the node metadata has been flushed, when the node metadata that was originally in the in-memory B+ tree needs to be queried, the query is automatically directed to the position of that node metadata in the third B+ tree on the disk.
In this embodiment, steps 203 to 206 are implemented in the same way as steps 101 to 104 in the previous embodiment; for the related description of steps 203 to 206, reference may be made to the previous embodiment, and details are not repeated here.
It can be understood that after each data merge, a vacancy is created in the flush buffer queue. Once vacancies appear, if the queue is not topped up in time, the demand for efficient metadata flushing may not be met when I/O requests surge; that is, the node data in the flush buffer queue may fall short of demand. To avoid this problem and guarantee efficient flushing of metadata, this embodiment solves it through step 207.
Step 207: when it is determined that the flush buffer queue is not full, load node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
In a possible implementation of this embodiment, whether the flush buffer queue is full or not can be determined by real-time monitoring. When it is determined that the flush buffer queue is not full, node data on the disk continues to be loaded into the not-full flush buffer queue in the same preset order used for loading node data, until the flush buffer queue is full again. In this way the flush buffer queue can be topped up, guaranteeing in real time that the flush buffer queue meets the metadata flush's demand for node data.
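A sketch of that monitor-and-top-up behaviour (illustrative; next_unloaded_disk_node is a hypothetical helper returning the next not-yet-loaded disk node in the preset order, or None when the disk is exhausted, and the queue is assumed to behave like a collections.deque):
    def top_up_queue(flush_buffer_queue, max_depth, next_unloaded_disk_node):
        """Refill the flush buffer queue with not-yet-loaded disk nodes until it
        is full again, preserving the preset (ascending-address) order."""
        while len(flush_buffer_queue) < max_depth:      # queue is not full
            node = next_unloaded_disk_node()
            if node is None:                            # nothing left on disk to load
                break
            flush_buffer_queue.append(node)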
The above is the data flushing method provided by the embodiments of this application. This method creates the flush buffer queue before the metadata is flushed and loads it with node data from the disk in the preset order, so that when the metadata in memory needs to be flushed, only the front node data has to be taken from the flush buffer queue, with no need to wait for disk node data to be read before merging it with the metadata. Compared with the prior art, the method therefore significantly improves the flushing efficiency of metadata. In addition, after each data merge, the method tops up the flush buffer queue with node data in the preset order in a timely manner, so it can also meet the demand on the flush buffer queue when I/O requests surge, guaranteeing efficient flushing of metadata.
On the basis of the data flushing method provided by the previous embodiments, this application correspondingly also provides a data flushing device. The device is described in detail below with reference to the embodiments and the accompanying drawings.
Third embodiment
Referring to Fig. 3, which is a schematic structural diagram of a data flushing device provided by this embodiment.
As shown in Fig. 3, the data flushing device provided by this embodiment comprises: a to-be-flushed metadata determination unit 301, a disk node data acquisition unit 302, a data merging unit 303 and a data flushing unit 304.
The to-be-flushed metadata determination unit 301 is configured to determine the node metadata in memory that needs to be flushed to disk;
the disk node data acquisition unit 302 is configured to obtain, from a flush buffer queue, the node data arranged at the front, wherein the flush buffer queue contains node data from the disk arranged in a preset order;
the data merging unit 303 is configured to merge the node metadata with the node data arranged at the front;
the data flushing unit 304 is configured to flush the merged data to the disk.
The above is the data flushing device provided by the embodiments of this application. Because the flush buffer queue is prepared in advance before the metadata is flushed, when a merge is needed only the front node data has to be taken from the flush buffer queue, with no need to wait for disk node data to be read before merging it with the metadata. Compared with the prior art, the device therefore significantly improves the flushing efficiency of metadata.
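As a structural sketch only (names and helper callables are assumptions, not the patent's implementation), the four units can be pictured as four methods of one flushing component:
    class DataFlushingDevice:
        """Sketch mirroring units 301-304; helper callables are assumptions."""

        def __init__(self, flush_buffer_queue, merge_nodes, flush_to_disk):
            self.flush_buffer_queue = flush_buffer_queue   # pre-filled with disk node data
            self.merge_nodes = merge_nodes
            self.flush_to_disk = flush_to_disk

        def determine_metadata_to_flush(self, memory_nodes, threshold):   # unit 301
            if len(memory_nodes) < threshold:
                return []
            return sorted(memory_nodes.items())

        def get_front_disk_node(self):                                    # unit 302
            return self.flush_buffer_queue.popleft()

        def merge(self, node_metadata, front_node):                       # unit 303
            return self.merge_nodes(node_metadata, front_node)

        def flush(self, merged_data):                                     # unit 304
            self.flush_to_disk(merged_data)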
In a possible implementation, the above device may further comprise:
a queue creation unit, configured to create the flush buffer queue; and
a first data loading unit, configured to load the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
In a possible implementation, the device may further comprise:
a second data loading unit, configured to, when it is determined that the flush buffer queue is not full, load node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
After each data merge, this device tops up the flush buffer queue with node data in the preset order in a timely manner, so it can also meet the demand on the flush buffer queue when I/O requests surge, guaranteeing efficient flushing of metadata.
In a possible implementation, the preset order may be the ascending order of the logical addresses of the node data on the disk;
the disk node data acquisition unit 302 specifically comprises:
a first acquisition subunit, configured to obtain the node data with the smallest logical address from the flush buffer queue;
the data merging unit 303 specifically comprises:
a first merging subunit, configured to merge the node metadata with the node data having the smallest logical address.
In a possible implementation, the data structure stored on the disk is a first B+ tree, and the data structure stored in the memory is a second B+ tree;
the data merging unit 303 specifically comprises:
a second merging subunit, configured to obtain a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue;
the data flushing unit 304 specifically comprises:
a first flushing subunit, configured to flush the third B+ tree to the disk.
It should be noted that the embodiments in this specification are described in a progressive manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on the differences from the other embodiments. In particular, since the device and system embodiments are basically similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the partial explanations of the method embodiments. The device and system embodiments described above are only schematic: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
The above is only a specific implementation of this application, but the protection scope of this application is not limited thereto. Any change or substitution that can readily be thought of by a person skilled in the art within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (10)

1. A data flushing method, characterized by comprising:
determining the node metadata in memory that needs to be flushed to disk;
obtaining, from a flush buffer queue, the node data arranged at the front, wherein the flush buffer queue contains node data from the disk arranged in a preset order;
merging the node metadata with the node data arranged at the front; and
flushing the merged data to the disk.
2. The method according to claim 1, characterized in that before the obtaining, from the flush buffer queue, the node data arranged at the front, the method further comprises:
creating the flush buffer queue; and
loading the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
3. The method according to claim 1 or 2, characterized in that after the obtaining, from the flush buffer queue, the node data arranged at the front, the method further comprises:
when it is determined that the flush buffer queue is not full, loading node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
4. The method according to claim 1 or 2, characterized in that the preset order is the ascending order of the logical addresses of the node data on the disk;
the obtaining, from the flush buffer queue, the node data arranged at the front is specifically:
obtaining the node data with the smallest logical address from the flush buffer queue;
the merging the node metadata with the node data arranged at the front is specifically:
merging the node metadata with the node data having the smallest logical address.
5. The method according to claim 1 or 2, characterized in that the data structure stored on the disk is a first B+ tree and the data structure stored in the memory is a second B+ tree;
the merging the node metadata with the node data arranged at the front is specifically:
obtaining a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue;
the flushing the merged data to the disk is specifically:
flushing the third B+ tree to the disk.
6. A data flushing device, characterized by comprising:
a to-be-flushed metadata determination unit, configured to determine the node metadata in memory that needs to be flushed to disk;
a disk node data acquisition unit, configured to obtain, from a flush buffer queue, the node data arranged at the front, wherein the flush buffer queue contains node data from the disk arranged in a preset order;
a data merging unit, configured to merge the node metadata with the node data arranged at the front; and
a data flushing unit, configured to flush the merged data to the disk.
7. The device according to claim 6, characterized in that the device further comprises:
a queue creation unit, configured to create the flush buffer queue; and
a first data loading unit, configured to load the node data on the disk into the flush buffer queue in the preset order, until the flush buffer queue is filled with node data.
8. The device according to claim 6 or 7, characterized in that the device further comprises:
a second data loading unit, configured to, when it is determined that the flush buffer queue is not full, load node data on the disk that has not yet been loaded into the flush buffer queue in the preset order, so as to top up the flush buffer queue.
9. The device according to claim 6 or 7, characterized in that the preset order is the ascending order of the logical addresses of the node data on the disk;
the disk node data acquisition unit specifically comprises:
a first acquisition subunit, configured to obtain the node data with the smallest logical address from the flush buffer queue;
the data merging unit specifically comprises:
a first merging subunit, configured to merge the node metadata with the node data having the smallest logical address.
10. The device according to claim 6 or 7, characterized in that the data structure stored on the disk is a first B+ tree and the data structure stored in the memory is a second B+ tree;
the data merging unit specifically comprises:
a second merging subunit, configured to obtain a merged third B+ tree from the node metadata of the second B+ tree and the node data of the first B+ tree arranged at the front of the flush buffer queue;
the data flushing unit specifically comprises:
a first flushing subunit, configured to flush the third B+ tree to the disk.
CN201811198130.5A 2018-10-15 2018-10-15 Data dropping method and related device Active CN109388346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811198130.5A CN109388346B (en) 2018-10-15 2018-10-15 Data dropping method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811198130.5A CN109388346B (en) 2018-10-15 2018-10-15 Data dropping method and related device

Publications (2)

Publication Number Publication Date
CN109388346A true CN109388346A (en) 2019-02-26
CN109388346B CN109388346B (en) 2022-02-18

Family

ID=65427629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811198130.5A Active CN109388346B (en) 2018-10-15 2018-10-15 Data dropping method and related device

Country Status (1)

Country Link
CN (1) CN109388346B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147204A (en) * 2019-05-22 2019-08-20 苏州浪潮智能科技有限公司 Metadata flushing method, apparatus and system, and computer-readable storage medium
CN110673798A (en) * 2019-09-20 2020-01-10 苏州浪潮智能科技有限公司 Storage system and IO (input/output) disk dropping method and device thereof
CN110673791A (en) * 2019-09-06 2020-01-10 苏州浪潮智能科技有限公司 Metadata refreshing method, device, equipment and readable storage medium
CN111858095A (en) * 2020-07-17 2020-10-30 山东云海国创云计算装备产业创新中心有限公司 Hardware queue multithreading sharing method, device, equipment and storage medium
CN116521090A (en) * 2023-06-25 2023-08-01 苏州浪潮智能科技有限公司 Data disc-dropping method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8874680B1 (en) * 2011-11-03 2014-10-28 Netapp, Inc. Interconnect delivery process
CN105094711A (en) * 2015-09-22 2015-11-25 浪潮(北京)电子信息产业有限公司 Method and device for achieving copy-on-write file system
CN108647151A (en) * 2018-04-26 2018-10-12 郑州云海信息技术有限公司 Metadata flushing method, apparatus, device and storage medium for an all-flash system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8874680B1 (en) * 2011-11-03 2014-10-28 Netapp, Inc. Interconnect delivery process
CN105094711A (en) * 2015-09-22 2015-11-25 浪潮(北京)电子信息产业有限公司 Method and device for achieving copy-on-write file system
CN108647151A (en) * 2018-04-26 2018-10-12 郑州云海信息技术有限公司 Metadata flushing method, apparatus, device and storage medium for an all-flash system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张俊兰: 《计算机操作系统》 [Computer Operating System], 30 September 2003 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147204A (en) * 2019-05-22 2019-08-20 苏州浪潮智能科技有限公司 Metadata flushing method, apparatus and system, and computer-readable storage medium
CN110673791A (en) * 2019-09-06 2020-01-10 苏州浪潮智能科技有限公司 Metadata refreshing method, device, equipment and readable storage medium
CN110673791B (en) * 2019-09-06 2022-07-22 苏州浪潮智能科技有限公司 Metadata refreshing method, device and equipment and readable storage medium
CN110673798A (en) * 2019-09-20 2020-01-10 苏州浪潮智能科技有限公司 Storage system and IO (input/output) disk dropping method and device thereof
CN111858095A (en) * 2020-07-17 2020-10-30 山东云海国创云计算装备产业创新中心有限公司 Hardware queue multithreading sharing method, device, equipment and storage medium
CN111858095B (en) * 2020-07-17 2022-06-10 山东云海国创云计算装备产业创新中心有限公司 Hardware queue multithreading sharing method, device, equipment and storage medium
CN116521090A (en) * 2023-06-25 2023-08-01 苏州浪潮智能科技有限公司 Data disc-dropping method and device, electronic equipment and storage medium
CN116521090B (en) * 2023-06-25 2023-09-12 苏州浪潮智能科技有限公司 Data disc-dropping method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109388346B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN109388346A (en) A kind of data rule method and relevant apparatus
CN103970520B (en) Method for managing resource, device and architecture system in MapReduce frameworks
US9632826B2 (en) Prioritizing deferred tasks in pending task queue based on creation timestamp
CN101694626B (en) Script execution system and method
EP3176693A1 (en) Multicore processor chip-based data processing method, device, and system
US8621285B2 (en) Request based logging
US9176847B2 (en) Managing diagnostic information
CN103631940A (en) Data writing method and data writing system applied to HBASE database
CN111324427B (en) Task scheduling method and device based on DSP
US20190004959A1 (en) Methods and devices for managing cache
TW201112254A (en) Memory device and data access method for a memory device
US20090006520A1 (en) Multiple Thread Pools for Processing Requests
TW201737111A (en) Method and device for detecting and processing hard disk hanging fault in distributed storage system
CN108984104B (en) Method and apparatus for cache management
US20170123975A1 (en) Centralized distributed systems and methods for managing operations
CN104035925A (en) Data storage method and device and storage system
JP6245700B2 (en) Computer system, data inspection method and computer
US10901982B2 (en) Managing a data set
CN103514140B (en) For realizing the reconfigurable controller of configuration information multi-emitting in reconfigurable system
CN106874343B (en) Data deletion method and system for time sequence database
CN114063883A (en) Method for storing data, electronic device and computer program product
CN117156172B (en) Video slice reporting method, system, storage medium and computer
US9671958B2 (en) Data set management
CN104426965B (en) Self management storage method and system
CN105045891A (en) Method and system for improving performance of sequence list, architecture, optimization method and storage apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant