CN106648442A - Metadata node internal memory mirroring method and device - Google Patents
- Publication number: CN106648442A
- Application number: CN201510718506.0A
- Authority: CN (China)
- Prior art keywords: data, internal storage, data block, threshold, storage data
- Prior art date: 2015-10-29
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Information Retrieval, DB Structures and FS Structures Therefor (AREA)
Abstract
The invention discloses a memory mirroring method and device for a metadata node. The method comprises the following steps: when the size of the in-memory data is detected to be greater than a preset first threshold, splitting the in-memory data into more than one data block, the size of each data block not exceeding the first threshold; and compressing each data block and writing the compressed data blocks to disk. The disclosed method and device can solve the problem of high disk space pressure during existing memory-image dumps.
Description
Technical field
The present application relates to distributed storage systems, and in particular to a memory mirroring method and device for a metadata node.
Background technology
In a large-scale distributed storage system, in order to implement centralized permission authentication and quota control, most systems adopt centralized metadata management: the metadata of all data in the entire storage system is stored centrally on a small number of metadata nodes (NameNodes). In such an architecture, the availability of the metadata nodes directly determines the availability of the whole storage system. When a metadata node is upgraded or its process restarts, quickly recovering the in-memory data of the metadata node becomes the main requirement. Therefore, the storage system may periodically write the in-memory data of the metadata nodes to disk (i.e. dump the memory image) and record a data operation log, so that the in-memory data of the metadata node can be recovered as soon as possible: the memory image saved on disk is read and reloaded into memory, and the data operation log is the record of modifications (such as additions or deletions) made to the actually stored data. For example, in the Hadoop Distributed File System, before each data-write operation received from a data node (the node that actually stores the data) is acknowledged as successful, the metadata node updates its data operation log and synchronizes it to the data node's file system.
The inventor found in the course of implementing the present invention that, with the exponential growth of metadata, the disk becomes the bottleneck of the existing memory-image dump: a metadata node holding a large amount of metadata needs to occupy a great deal of disk space when dumping its memory image, which puts considerable space pressure on the disk.
Summary of the invention
In view of this, the present application provides a memory mirroring method and device for a metadata node, which can solve the problem of high disk space pressure during existing memory-image dumps.
To solve the above technical problem, a first aspect of the present application provides a memory mirroring method for a metadata node, comprising:
when the size of the in-memory data is detected to be greater than a preset first threshold, splitting the in-memory data into more than one data block, the size of each data block not exceeding the first threshold;
compressing each data block, and writing the compressed data blocks to disk.
Optionally, the method further comprises:
when the size of the in-memory data is detected to be equal to the preset first threshold, compressing the in-memory data and writing the compressed in-memory data to disk.
Optionally, the method further comprises:
when the size of the in-memory data is detected to be smaller than a preset second threshold, writing the in-memory data smaller than the second threshold to disk.
Optionally, when the size of the in-memory data is detected to be greater than the preset first threshold and the in-memory data has been split into more than one data block, if there are data blocks smaller than the second threshold, the method further comprises:
writing the data blocks smaller than the second threshold to disk.
Optionally, writing the in-memory data or data blocks smaller than the second threshold to disk comprises:
writing the in-memory data or data blocks smaller than the second threshold to disk contiguously, and recording the start position and data length of each piece of in-memory data or data block smaller than the second threshold; that is, no data block or in-memory data greater than or equal to the second threshold lies between the in-memory data or data blocks smaller than the second threshold.
Optionally, compressing each data block or the in-memory data comprises:
compressing each data block or piece of in-memory data that needs compression using multiple threads.
Optionally, compressing each data block or piece of in-memory data that needs compression using multiple threads comprises:
while compressing each data block or piece of in-memory data that needs compression, generating a check value corresponding to each data block or piece of in-memory data that needs compression.
The present invention also provides a memory mirroring device for a metadata node, comprising:
a data splitting module, configured to split the in-memory data into more than one data block when a detection module detects that the size of the in-memory data is greater than a preset first threshold, the size of each data block not exceeding the first threshold;
a compression module, configured to compress each data block obtained by the data splitting module and to write the compressed data blocks to disk through a writing module.
Optionally, the compression module is further configured to compress the in-memory data when the detection module detects that the size of the in-memory data is equal to the preset first threshold, and to write the compressed in-memory data to disk through the writing module.
Optionally, the writing module is further configured to write the in-memory data smaller than a preset second threshold to disk when the detection module detects that the size of the in-memory data is smaller than the second threshold.
Optionally, the writing module is further configured to, when the detection module detects that the size of the in-memory data is greater than the preset first threshold and the in-memory data has been split into more than one data block, write any data blocks smaller than the second threshold to disk.
Optionally, the writing module is specifically configured to write the in-memory data or data blocks smaller than the second threshold to disk contiguously and to record the start position and data length of each piece of in-memory data or data block smaller than the second threshold; that is, no data block or in-memory data greater than or equal to the second threshold lies between them.
Optionally, the compression module is specifically configured to compress each data block or piece of in-memory data that needs compression using multiple threads.
Optionally, the compression module is specifically configured to generate, while compressing each data block or piece of in-memory data that needs compression, a check value corresponding to that data block or piece of in-memory data.
The present invention also provides a metadata node, comprising the above memory mirroring device.
In the embodiments of the present invention, when the memory image is dumped and the in-memory data is large, the in-memory data is split into data blocks and each resulting block is compressed by multiple threads. This not only speeds up the memory-image dump but also improves disk space utilization.
Description of the drawings
The accompanying drawings described here are provided for a further understanding of the present application and constitute a part of it; the schematic embodiments and their description are used to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is an architecture diagram of a distributed storage system;
Fig. 2 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention;
Fig. 4 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention;
Fig. 5 is a structural diagram of a memory mirroring device for a metadata node provided by an embodiment of the present invention.
Specific embodiments
The embodiments of the present application are described in detail below with reference to the drawings and examples, so that the process by which the application applies technical means to solve the technical problem and achieve the technical effect can be fully understood and implemented.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
Certain terms are used throughout the specification and claims to refer to particular components. Those skilled in the art should understand that hardware manufacturers may refer to the same component by different names. This specification and the claims do not distinguish components by differences in name, but by differences in function. The term "comprising" used throughout the specification and claims is an open term and should therefore be interpreted as "including but not limited to". "Substantially" means within an acceptable error range, within which a person skilled in the art can solve the technical problem and basically achieve the technical effect. In addition, the term "coupled" here covers any direct or indirect means of electrical coupling. Therefore, if the text states that a first device is coupled to a second device, it means that the first device may be electrically coupled to the second device directly, or electrically coupled to the second device indirectly through other devices or coupling means. The subsequent description of the specification presents preferred embodiments of the application; the description is intended to illustrate the general principles of the application and is not intended to limit its scope. The scope of protection of the application shall be defined by the appended claims.
It should also be noted that the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a product or system that includes a series of elements includes not only those elements but also other elements that are not expressly listed, or elements inherent to such a product or system. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the product or system that includes that element.
Fig. 1 is an architecture diagram of a distributed storage system. As shown in Fig. 1, the DataNodes are the data nodes that actually store the data; the data nodes of the present invention are not limited to those shown in Fig. 1, and there may be more of them. The NameNodes are the nodes that store the metadata of the data (referred to as metadata nodes); the metadata nodes of the present invention are likewise not limited to those shown in Fig. 1 and may also be multiple.
Based on the system architecture shown in Fig. 1, for example, when a terminal wants to write data to a data node, the data node first initiates a write request (for example, the DFSClient in Fig. 1, a kind of request message) to the metadata node. After receiving the write request, the metadata node saves in its memory the metadata carried in the request for the data to be written, such as its size, storage location, name, or identifier, records the log of this write, and then returns to the data node a response to the write request carrying the log of this write, so that the data node actually saves the data to be written according to the response returned by the metadata node.
As another example, when the terminal reads data from a data node, the data node first initiates a read request to the metadata node. After the metadata node receives the read request, it obtains the information carried in the request, such as the name or identifier of the data to be read, queries its metadata information base, obtains the metadata matching that name or identifier, and returns the obtained metadata to the data node. The data node then uses the metadata returned by the metadata node, which includes information such as the storage location of the data to be read, to read the data out and present it to the terminal user.
Thus, every time a data node performs a data operation, the metadata node saves the metadata and the operation log of that operation, so a large amount of metadata accumulates on the metadata node. The availability of the metadata node directly determines the availability of the whole storage system. When a metadata node is upgraded or its process restarts, quickly recovering the in-memory data of the metadata node becomes the main requirement.
The inventor found in the course of implementing the present invention that, in the prior art, the memory image is dumped by copying the in-memory data directly to disk. With the exponential growth of metadata, the disk becomes the bottleneck of the memory-image dump: a metadata node holding a large amount of metadata needs to occupy a great deal of disk space when dumping its memory image, which puts considerable space pressure on the disk.
The scheme adopted by the embodiments of the present invention is as follows: when dumping the memory image, different compression strategies are applied according to the size of the in-memory data. When the in-memory data is smaller than a certain threshold, it is not compressed; when the in-memory data is larger than a certain threshold, it is compressed.
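As an illustration only, this size-based dispatch can be sketched as follows. This is a minimal sketch and not the patent's implementation; the 512 KB and 4 KB values are taken from the embodiments described below, and the class, enum, and method names are assumptions introduced here.

```java
// Minimal sketch (not the patent's code) of choosing a compression strategy by
// in-memory data size, assuming a first threshold of 512 KB and a second
// threshold of 4 KB as in the embodiments below.
public final class StrategySelector {
    static final long FIRST_THRESHOLD = 512 * 1024;  // split and compress above this
    static final long SECOND_THRESHOLD = 4 * 1024;   // write uncompressed below this

    enum CompressionStrategy { SPLIT_AND_COMPRESS, COMPRESS_WHOLE, WRITE_UNCOMPRESSED }

    static CompressionStrategy select(long inMemoryDataSize) {
        if (inMemoryDataSize > FIRST_THRESHOLD) {
            return CompressionStrategy.SPLIT_AND_COMPRESS;   // step 202/203 of Fig. 2
        }
        if (inMemoryDataSize < SECOND_THRESHOLD) {
            return CompressionStrategy.WRITE_UNCOMPRESSED;   // step 302 of Fig. 3
        }
        // equal to the first threshold (and, by assumption, sizes in between)
        return CompressionStrategy.COMPRESS_WHOLE;
    }
}
```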
The inventor also found in the course of implementing the present invention:
For data smaller than 4 KB, compressing a whole 512 KB cache (buffer) at a time does not reduce the data size by much; instead, it wastes processor resources and increases the time spent on the memory copy, so the memory-image dump slows down. Therefore, the data smaller than 4 KB is placed into an uncompressed buffer, and the sub-4 KB data stored in this buffer must be contiguous, that is, no data greater than or equal to 4 KB may appear between adjacent data blocks smaller than 4 KB.
For example, with a 512 KB buffer, the pieces placed into the buffer may be 1 KB, 2 KB, 3 KB, but not 1 KB, 5 KB, 2 KB, because when the 5 KB piece arrives a new buffer has to be written to and this buffer ends up holding only the 1 KB of data.
Furthermore, the start position and length of each piece of sub-4 KB data can be recorded in the buffer; usually each buffer also has an accompanying record variable that records the start position of its data.
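A minimal sketch of such an uncompressed buffer follows, assuming a 512 KB capacity and a 4 KB small-data limit as described above; the class name, the offset/length index, and the append API are illustrative assumptions rather than the patent's code.

```java
// Sketch of an uncompressed buffer that packs sub-4 KB pieces contiguously and
// records each piece's start position and length.
import java.util.ArrayList;
import java.util.List;

final class UncompressedBuffer {
    static final int CAPACITY = 512 * 1024;   // one 512 KB buffer
    static final int SMALL_LIMIT = 4 * 1024;  // only pieces smaller than 4 KB are accepted

    private final byte[] data = new byte[CAPACITY];
    private int used = 0;
    // accompanying record of (start position, length) for every small piece
    private final List<int[]> index = new ArrayList<>();

    /** Appends a sub-4 KB piece; returns false if the piece does not belong here or does not fit. */
    boolean append(byte[] piece) {
        if (piece.length >= SMALL_LIMIT || used + piece.length > CAPACITY) {
            return false;   // a piece of 4 KB or more must not interrupt the contiguous small data
        }
        System.arraycopy(piece, 0, data, used, piece.length);
        index.add(new int[] { used, piece.length });
        used += piece.length;
        return true;
    }

    int bytesUsed() { return used; }
    List<int[]> pieceIndex() { return index; }
}
```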
For data greater than or equal to 4 KB: such data is first unpacked, with each piece being at most 512 KB. When the unpacked data is compressed, the size of the data is identified automatically; the buffer used for data smaller than 4 KB is released from compression, i.e. it is written directly to disk, and the data smaller than 4 KB is placed into that buffer. The data greater than or equal to 4 KB is compressed using multiple threads: each thread continually takes data greater than or equal to 4 KB out of the queue of data that needs compression and compresses it. The compressed data is then placed into the queue of data that needs to be written, and the write thread continually takes the compressed data out of the write queue and writes it into the data buffer.
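The queue-based, multi-threaded compression described above might look like the following sketch; the fixed pool of four worker threads, the blocking-queue hand-off, and the use of java.util.zip.Deflater are assumptions made for illustration, not the patent's implementation.

```java
// Sketch of a compression pipeline: worker threads take blocks (>= 4 KB) from a
// compression queue, compress them, and pass them to the write thread via a
// write queue.
import java.io.ByteArrayOutputStream;
import java.util.concurrent.*;
import java.util.zip.Deflater;

final class CompressionPipeline {
    private final BlockingQueue<byte[]> compressQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<byte[]> writeQueue = new LinkedBlockingQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    void start() {
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        byte[] block = compressQueue.take();  // block that needs compression
                        writeQueue.put(compress(block));      // hand off to the write thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    void submit(byte[] block) throws InterruptedException { compressQueue.put(block); }

    byte[] takeCompressed() throws InterruptedException { return writeQueue.take(); }

    private static byte[] compress(byte[] block) {
        Deflater deflater = new Deflater();
        deflater.setInput(block);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(block.length);
        byte[] tmp = new byte[8192];
        while (!deflater.finished()) {
            out.write(tmp, 0, deflater.deflate(tmp));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```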
It should be noted that, in order to reduce the input/output operations of the disk when writing data, one disk write is performed for every 2 MB of data. In this way the input and output resources of the disk are not wasted and the operating pressure on the disk is reduced.
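A minimal sketch of this 2 MB write batching, assuming a plain FileOutputStream as the disk target; the class name and its API are illustrative only.

```java
// Sketch of batching disk writes so that one write is issued per 2 MB of
// accumulated data.
import java.io.FileOutputStream;
import java.io.IOException;

final class BatchedDiskWriter implements AutoCloseable {
    static final int BATCH_SIZE = 2 * 1024 * 1024;  // flush every 2 MB
    private final FileOutputStream out;
    private final byte[] batch = new byte[BATCH_SIZE];
    private int filled = 0;

    BatchedDiskWriter(String path) throws IOException {
        this.out = new FileOutputStream(path);
    }

    void write(byte[] data) throws IOException {
        int offset = 0;
        while (offset < data.length) {
            int n = Math.min(BATCH_SIZE - filled, data.length - offset);
            System.arraycopy(data, offset, batch, filled, n);
            filled += n;
            offset += n;
            if (filled == BATCH_SIZE) {   // one disk write per full 2 MB batch
                out.write(batch, 0, filled);
                filled = 0;
            }
        }
    }

    @Override
    public void close() throws IOException {
        if (filled > 0) out.write(batch, 0, filled);  // flush the remainder on close
        out.close();
    }
}
```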
Therefore, in the technical scheme of the present application, the in-memory data is compressed before being written to disk, which relieves the disk space pressure during memory-image dumps and improves disk space utilization; and the in-memory data is compressed using multiple threads, which improves the speed and efficiency of the memory-image dump.
Fig. 2 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention. As shown in Fig. 2, the method includes:
201. Detect the size of the in-memory data.
A concrete application scenario is that the metadata node checks the in-memory data (for example the fsimage file, which is in fact the file of metadata information) at regular intervals and writes the in-memory data (e.g. the fsimage file) at each checkpoint. According to the technical scheme of the present invention, before the in-memory data (e.g. the fsimage file) is written at each checkpoint, its size needs to be detected, and a different compression strategy is applied according to the size of the in-memory data (e.g. the fsimage file).
202. When the size of the in-memory data is detected to be greater than a preset first threshold, split the in-memory data into more than one data block.
The size of each data block does not exceed the first threshold. In the embodiment of the present invention, the first threshold may be set to 512 KB, for example.
For example, when the size of the fsimage file is detected to be greater than 512 KB, the fsimage file is split into multiple sub-files, the size of each sub-file not exceeding 512 KB.
It should be noted that, in order to recognize the sub-files, each sub-file obtained by splitting carries the identifier of the parent file before splitting.
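For illustration, splitting the in-memory data into blocks of at most 512 KB that carry the parent file's identifier could be sketched as below; the SubBlock record and the sequence field are assumptions introduced here, not part of the patent.

```java
// Sketch of splitting in-memory data into sub-blocks of at most 512 KB, each
// tagged with the identifier of the parent file so it can be recognized later.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class FsImageSplitter {
    static final int FIRST_THRESHOLD = 512 * 1024;

    record SubBlock(String parentId, int sequence, byte[] payload) {}

    static List<SubBlock> split(String parentId, byte[] inMemoryData) {
        List<SubBlock> blocks = new ArrayList<>();
        int sequence = 0;
        for (int offset = 0; offset < inMemoryData.length; offset += FIRST_THRESHOLD) {
            int end = Math.min(offset + FIRST_THRESHOLD, inMemoryData.length);
            // each block does not exceed the first threshold and carries the parent identifier
            blocks.add(new SubBlock(parentId, sequence++, Arrays.copyOfRange(inMemoryData, offset, end)));
        }
        return blocks;
    }
}
```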
203. Compress each data block, and write the compressed data blocks to disk.
Specifically, in order to reduce the space pressure of writing to disk, the embodiment of the present invention compresses each data block and writes the compressed data blocks to disk.
Further, in order to increase the write speed and save write time, the embodiment of the present invention may use multiple threads to compress each data block or piece of in-memory data that needs compression.
Further, while each data block that needs compression is compressed, a check value corresponding to that data block is generated. The check value here may be a string of characters generated from the content of each data block to be compressed; each data block's content is verified against this check value, which ensures the correctness of the data and prevents the generation of erroneous data.
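As a sketch only, such a check value could be a CRC32 of the block content; the patent does not name a particular algorithm, so CRC32 and the class below are illustrative assumptions.

```java
// Sketch of generating and verifying a check value derived from a block's content.
import java.util.zip.CRC32;

final class BlockChecksum {
    /** Returns a check value computed from the block content. */
    static long checkValue(byte[] blockContent) {
        CRC32 crc = new CRC32();
        crc.update(blockContent);
        return crc.getValue();
    }

    /** Verifies a block against the check value recorded when it was compressed. */
    static boolean verify(byte[] blockContent, long recordedCheckValue) {
        return checkValue(blockContent) == recordedCheckValue;
    }
}
```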
In another optional embodiment of the invention, when the size of the in-memory data is detected to be equal to the preset first threshold (for example, when the size of the fsimage file is exactly 512 KB), the in-memory data is compressed using multiple threads, a check value of the in-memory data is generated, and the compressed in-memory data is written to disk.
In the embodiment of the present invention, when the memory image is dumped and the in-memory data is large, the in-memory data is split into data blocks and each resulting block is compressed by multiple threads, which not only speeds up the memory-image dump but also improves disk space utilization.
Fig. 3 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention. As shown in Fig. 3, the method includes:
301. Detect the size of the in-memory data.
Refer to step 201 of the embodiment shown in Fig. 2.
302. When the size of the in-memory data is detected to be smaller than a preset second threshold, write the in-memory data smaller than the second threshold to disk.
The inventor found in the course of implementing the present invention that, for data smaller than 4 KB, compressing a whole 512 KB cache at a time does not reduce the in-memory data by much; instead, the processing resources consumed by compression are wasted and the memory-copy time increases, so writing the checkpoint becomes slow, i.e. the memory-image dump speed drops. Therefore, in the embodiment of the present invention, 4 KB is set as the second threshold, and in-memory data smaller than 4 KB is not compressed.
Further, in step 202 of the embodiment shown in Fig. 2, when the in-memory data is split into more than one data block and there are data blocks smaller than the second threshold, the data blocks smaller than the second threshold are written to disk.
Further, in the embodiment of the present invention, the in-memory data or data blocks smaller than the second threshold are written to disk contiguously, and the start position and data length of each piece of in-memory data or data block smaller than the second threshold are recorded; that is, no data block or in-memory data greater than or equal to the second threshold lies between the in-memory data or data blocks smaller than the second threshold. In a concrete implementation, for example, the data smaller than 4 KB may be placed into an uncompressed cache (buffer), and the data stored in this uncompressed buffer must be contiguous, that is, no data block greater than or equal to 4 KB may appear between adjacent data blocks smaller than 4 KB; the start position and length of each segment of data smaller than 4 KB can be recorded in this uncompressed buffer.
In the embodiment of the present invention, when the memory image is dumped and the in-memory data or data blocks are small, i.e. smaller than 4 KB, the blocks smaller than 4 KB are not compressed but are stored contiguously in an uncompressed cache. No extra processing resources are consumed, the speed of the memory-image dump is guaranteed, and a balance between dump speed and processing resources is achieved.
The method described in the embodiments of the present invention is illustrated below with a concrete application:
Fig. 4 is a flowchart of a memory mirroring method for a metadata node provided by an embodiment of the present invention. As shown in Fig. 4, the method includes:
401. Detect the size of the in-memory data.
When the size of the in-memory data is greater than 512 KB, go to step 402; when the in-memory data is smaller than 4 KB, go to step 405.
402. Split the in-memory data into multiple data blocks.
Assume the in-memory data is 10 MB (i.e. 10000 KB); with each data block at most 512 KB, it can be split into 19 data blocks of 512 KB and one data block of 272 KB.
Assume the in-memory data is 1025 KB; with each data block at most 512 KB, it can be split into 2 data blocks of 512 KB and one data block of 1 KB.
When the size of a data block is greater than 4 KB, go to step 403; when the size of a data block is less than or equal to 4 KB, go to step 405.
403. Compress the data blocks greater than 4 KB using multiple threads.
For example, in the embodiment of the present invention a 512 KB compression cache (buffer) is set up, i.e. the cache is given a label indicating that it needs compression. The data blocks greater than 4 KB are put into this 512 KB compression cache, which is then placed into the compression queue to wait for compression. To speed up compression, multiple threads are used.
Further, when each data block is compressed, a check value of the compressed data block may be generated from the content of that data block, in order to ensure the correctness of the compressed data block.
404. Write the compressed data blocks to disk.
Each compressed data block is put into the write-disk queue. To reduce the input/output operations of the disk when writing, one disk write is performed for every 2 MB of data, that is, a disk write is performed only once the compressed data blocks reach 2 MB. In this way the input and output resources of the disk are not wasted, and the operating pressure on the disk is also reduced.
405. Write the data blocks smaller than 4 KB to disk without compression.
For example, the data blocks smaller than 4 KB are put into a 512 KB uncompressed cache; for an uncompressed cache the compression label must be released, i.e. the cache is not put into the compression queue but is placed directly into the write-disk queue.
It should be noted that when data blocks smaller than 4 KB are put into the uncompressed buffer, the sub-4 KB data blocks stored in the uncompressed buffer must be contiguous, that is, no data block greater than or equal to 4 KB may appear between adjacent data blocks smaller than 4 KB; the start position and length of each segment of sub-4 KB data blocks can be recorded in this uncompressed buffer.
To reduce the input/output operations of the disk when writing, one disk write is performed for every 2 MB of data, that is, a disk write is performed only once the sub-4 KB data blocks accumulated in the uncompressed buffer reach 2 MB. In this way the input and output resources of the disk are not wasted, and the operating pressure on the disk is also reduced.
If there is no data to compress, the compression threads stay in a waiting state (condition wait); when a write thread has new data that needs compression, it signals the compression threads.
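A minimal sketch of this condition-wait/signal handshake between the write threads and the compression threads, using an explicit lock and condition so the wait and signal steps are visible; the class and method names are assumptions rather than the patent's code.

```java
// Sketch of a work queue where compression threads wait (condition wait) while
// there is nothing to compress and a writer thread signals them when new data
// that needs compression arrives.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

final class CompressionWorkQueue {
    private final Deque<byte[]> pending = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    /** Called by a write thread when it has new data that needs compression. */
    void offer(byte[] block) {
        lock.lock();
        try {
            pending.addLast(block);
            notEmpty.signal();   // wake one waiting compression thread
        } finally {
            lock.unlock();
        }
    }

    /** Called by a compression thread; blocks until data is available. */
    byte[] take() throws InterruptedException {
        lock.lock();
        try {
            while (pending.isEmpty()) {
                notEmpty.await();   // compression thread parks until signaled
            }
            return pending.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}
```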
In the technical scheme of the present application, the in-memory data is compressed before being written to disk, which relieves the disk space pressure during memory-image dumps and improves disk space utilization; and the in-memory data is compressed using multiple threads, which improves the speed and efficiency of the memory-image dump.
Fig. 5 is a structural diagram of a memory mirroring device for a metadata node provided by an embodiment of the present invention. As shown in Fig. 5, the device includes:
a detection module 51, configured to detect the size of the in-memory data;
a data splitting module 52, configured to split the in-memory data into more than one data block when the detection module detects that the size of the in-memory data is greater than a preset first threshold, the size of each data block not exceeding the first threshold;
a compression module 53, configured to compress each data block obtained by the data splitting module and to write the compressed data blocks to disk through a writing module 54.
Optionally, the compression module 53 is further configured to compress the in-memory data when the detection module 51 detects that the size of the in-memory data is equal to the preset first threshold, and to write the compressed in-memory data to disk through the writing module 54.
Optionally, the writing module 54 is further configured to write the in-memory data smaller than a preset second threshold to disk when the detection module 51 detects that the size of the in-memory data is smaller than the second threshold.
Optionally, the writing module 54 is further configured to, when the detection module 51 detects that the size of the in-memory data is greater than the preset first threshold and the in-memory data has been split into more than one data block, write any data blocks smaller than the second threshold to disk.
Optionally, the writing module 54 is specifically configured to write the in-memory data or data blocks smaller than the second threshold to disk contiguously and to record the start position and data length of each piece of in-memory data or data block smaller than the second threshold; that is, no data block or in-memory data greater than or equal to the second threshold lies between the in-memory data or data blocks smaller than the second threshold.
Optionally, the compression module 53 is specifically configured to compress each data block or piece of in-memory data that needs compression using multiple threads.
Optionally, the compression module 53 is specifically configured to generate, while compressing each data block or piece of in-memory data that needs compression, a check value corresponding to that data block or piece of in-memory data.
The device shown in Fig. 5 can perform the methods described in any of the embodiments of Fig. 2 and Fig. 3; its implementation principle and technical effects are not repeated here.
The embodiment of the present invention also provides a metadata node, comprising the memory mirroring device described in the embodiment shown in Fig. 5.
The foregoing illustrates and describes some preferred embodiments of the present invention. However, as stated above, it should be understood that the invention is not limited to the forms disclosed herein, is not to be regarded as excluding other embodiments, and can be used in various other combinations, modifications, and environments; it can be changed, within the scope of the inventive concept described herein, through the above teachings or through the skill or knowledge of the related art. All changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall fall within the protection scope of the appended claims of the present invention.
Claims (15)
1. A memory mirroring method for a metadata node, characterized by comprising:
when the size of in-memory data is detected to be greater than a preset first threshold, splitting the in-memory data into more than one data block, the size of each data block not exceeding the first threshold;
compressing each data block, and writing the compressed data blocks to disk.
2. The method according to claim 1, characterized by further comprising:
when the size of the in-memory data is detected to be equal to the preset first threshold, compressing the in-memory data and writing the compressed in-memory data to disk.
3. The method according to claim 1, characterized by further comprising:
when the size of the in-memory data is detected to be smaller than a preset second threshold, writing the in-memory data smaller than the second threshold to disk.
4. The method according to claim 1, characterized in that, when the size of the in-memory data is detected to be greater than the preset first threshold and the in-memory data has been split into more than one data block, if there are data blocks smaller than the second threshold, the method further comprises:
writing the data blocks smaller than the second threshold to disk.
5. The method according to claim 3 or 4, characterized in that writing the in-memory data or data blocks smaller than the second threshold to disk comprises:
writing the in-memory data or data blocks smaller than the second threshold to disk contiguously, and recording the start position and data length of each piece of in-memory data or data block smaller than the second threshold, that is, no data block or in-memory data greater than or equal to the second threshold lies between the in-memory data or data blocks smaller than the second threshold.
6. The method according to claim 1 or 2, characterized in that compressing each data block or the in-memory data comprises:
compressing each data block or piece of in-memory data that needs compression using multiple threads.
7. The method according to claim 6, characterized in that compressing each data block or piece of in-memory data that needs compression using multiple threads comprises:
while compressing each data block or piece of in-memory data that needs compression, generating a check value corresponding to each data block or piece of in-memory data that needs compression.
8. A memory mirroring device for a metadata node, characterized by comprising:
a data splitting module, configured to split in-memory data into more than one data block when a detection module detects that the size of the in-memory data is greater than a preset first threshold, the size of each data block not exceeding the first threshold;
a compression module, configured to compress each data block obtained by the data splitting module and to write the compressed data blocks to disk through a writing module.
9. The device according to claim 8, characterized in that:
the compression module is further configured to compress the in-memory data when the detection module detects that the size of the in-memory data is equal to the preset first threshold, and to write the compressed in-memory data to disk through the writing module.
10. The device according to claim 8, characterized by further comprising:
the writing module, which is further configured to write the in-memory data smaller than a preset second threshold to disk when the detection module detects that the size of the in-memory data is smaller than the second threshold.
11. The device according to claim 8, characterized in that:
the writing module is further configured to, when the detection module detects that the size of the in-memory data is greater than the preset first threshold and the in-memory data has been split into more than one data block, write any data blocks smaller than the second threshold to disk.
12. The device according to claim 10 or 11, characterized in that:
the writing module is specifically configured to write the in-memory data or data blocks smaller than the second threshold to disk contiguously and to record the start position and data length of each piece of in-memory data or data block smaller than the second threshold, that is, no data block or in-memory data greater than or equal to the second threshold lies between the in-memory data or data blocks smaller than the second threshold.
13. The device according to claim 8 or 10, characterized in that:
the compression module is specifically configured to compress each data block or piece of in-memory data that needs compression using multiple threads.
14. The device according to claim 13, characterized in that:
the compression module is specifically configured to generate, while compressing each data block or piece of in-memory data that needs compression, a check value corresponding to that data block or piece of in-memory data.
15. A metadata node, characterized by comprising:
the memory mirroring device according to any one of claims 8 to 14.