CN105242884A - Automatically-tiered storage system - Google Patents

- Publication number: CN105242884A (application CN201510696499.9A)
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention discloses an automatically-tiered storage system. The storage system comprises a high-performance storage tier Tier0, a normal-performance storage tier Tier1, a monitor, a scheduler, and an agent node. Tier0 holds high-performance storage copies; Tier1 holds normal-performance storage copies. The monitor collects data-object access information and system performance information from the storage system. The scheduler maintains the data objects stored on Tier0 and, based on a scheduling strategy, sends push commands to Tier1 to transfer data objects from Tier1 to Tier0. The agent node provides a push interface in its external agent service. Between Tier0 and Tier1 there is a one-way data path from Tier1 to Tier0, which is opened only after Tier1 receives a push command from the scheduler, allowing one-way transfer of data objects from Tier1 to Tier0. The storage system effectively solves the lack of real-time responsiveness in current automatically-tiered storage systems, improves read access to hot data, and reduces needless wear on solid state drives (SSDs).
Description
Technical field
The present invention relates to the field of storage system technology, and in particular to an automatically-tiered storage system.
Background art
Automatic storage tiering is designed to fully exploit the performance and cost differences between hard disks of different rotational speeds. In recent years, as flash-based solid state drives (SSD, Solid State Drives) have matured and become widespread within storage systems, their input/output operations per second (IOPS, Input/Output Operations Per Second) have improved considerably over hard disk drives (HDD, Hard Disk Drive), making them an ideal choice for automatic tiering.
Automatic storage tiering analyzes indicators such as data access frequency, creation time, last access time, or response time, and places data with different characteristics on different tiers. It is an important technology in today's high-end storage systems.
However, automatic storage tiering still faces three challenges:

First, automatic tiering is a reactive technique: its migration strategy is derived from historical trends rather than the real-time state of the system.

Second, wear shortens the lifetime of an SSD, so the wear frequency of the SSD should be reduced, and data protection must be handled when an SSD fails.

Third, access-behavior monitoring, statistical analysis, and data migration all consume computational resources. The traditional solution is to restrict statistical analysis and data migration to designated time windows in order to avoid access peaks, which makes the lack of real-time responsiveness even worse.

These challenges make automatic storage tiering highly complex to engineer in high-performance distributed storage systems, and seriously affect the timeliness and effectiveness of tiered storage.
Summary of the invention
To solve the above technical problems, the invention provides an automatically-tiered storage system that effectively addresses the lack of real-time responsiveness in current automatic tiering systems, improves read access performance for hot data, and reduces needless wear on SSDs.
To achieve the object of the invention, the invention provides an automatically-tiered storage system comprising: a high-performance storage tier Tier0, for holding high-performance storage copies; a normal-performance storage tier Tier1, for holding normal-performance storage copies; a monitor, responsible for collecting data-object access information and system performance information from the storage system; a scheduler, for maintaining the data objects stored on Tier0 and for sending push commands to Tier1, based on a scheduling strategy, to transfer data objects from Tier1 to Tier0; and an agent node, for providing a push interface in its external agent service. Between Tier0 and Tier1 there is a one-way data path from Tier1 to Tier0; after Tier1 receives a push command from the scheduler, this one-way data path is opened and data objects are transferred one way from Tier1 to Tier0.
Further, the storage copy parameters of Tier0 comprise: a maximum redundancy M, the maximum number of copies that Tier0 can hold; a configured redundancy m, the number of copies pushed into Tier0, with m ≤ M; and copy slots, the positions where copies are placed in Tier0. The storage copy parameter of Tier1 comprises: a redundancy N, the number of copies kept on Tier1.

Further, Tier0 operates through a REST API comprising: a CREATE operation, for creating a data object; a GET operation, for reading a data object; a REMOVE operation, for deleting a data object; and a CLEAN operation, for clearing data objects from the storage system. REMOVE is invoked either when the scheduler actively deletes data from Tier0, or as a callback issued to the scheduler after a data object on Tier1 has been removed by garbage collection. CLEAN clears data objects in the storage system within a scope specified by a uniform resource locator (URL).

Further, the agent node queries the storage copies on Tier0 and Tier1 for the requested data object; after selecting a storage copy on Tier0 or Tier1, it sends the object access information to the monitor through the push interface.

Further, selecting a storage copy on Tier0 or Tier1 is done as follows: query whether a storage copy on Tier0 is available; if it is, select the copy on Tier0; if not, select a copy on Tier1.

Further, the scheduling strategy comprises hot-data identification, data temperature maintenance, and a data replacement strategy.

Further, the monitor collects the data-object access information and system performance information in the storage system through the push interface of the agent node; the scheduler pushes data objects whose access count exceeds a set threshold into Tier0, and increases the number of storage copies of hot data according to the hot-data identification.

Further, the monitor and the scheduler reside on an independent node outside the storage tiers and the agent node, and the monitor is implemented by sharing the monitoring system's database, docking with the storage system's monitoring system.

Further, implementing the monitor by sharing the monitoring system's database works as follows: the storage system exposes a monitoring interface implemented with statsd; while the storage system is running, when an object receives a hypertext transfer protocol request, a probe function is inserted that sends monitoring data to the monitoring interface of the monitoring system over the user datagram protocol, and the monitor obtains the monitoring data from the monitoring system's database.

Further, the scheduler and the monitor run as single instances and are protected for high availability.
Compared with the prior art, the present invention fully exploits the real-time nature of data pushing: it collects storage system performance data and access information to perform dynamic copy scheduling, effectively solving the lack of real-time responsiveness in current automatic tiering systems, improving hot-data access performance while reducing needless SSD wear, and thereby advancing the architecture of mass data storage systems.

Other features and advantages of the present invention will be set forth in the following description; in part they become apparent from the description, or can be understood by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided to further the understanding of the technical solution of the invention and form a part of the description; together with the embodiments of the application they serve to explain the technical solution of the invention, and do not limit it.
Fig. 1 is an architecture diagram of the automatically-tiered storage system in one embodiment of the present invention.

Fig. 2 is a schematic diagram of the Tier0 access interface in one embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the accompanying drawings. Note that, where no conflict arises, the embodiments of the application and the features within them may be combined arbitrarily.

The steps shown in the flow charts of the drawings may be executed in a computer system running a set of computer-executable instructions. Moreover, although logical orders are shown in the flow charts, in some cases the steps may be executed in an order different from that shown or described herein.
Fig. 1 is an architecture diagram of the automatically-tiered storage system in one embodiment of the present invention. As shown in Fig. 1, the system comprises a high-performance storage tier Tier0, a normal-performance storage tier Tier1, a monitor, a scheduler, and an agent node; the monitor and the scheduler may run on the same device.
Tier0 holds high-performance storage copies.

Specifically, the storage copy parameters of Tier0 comprise:

a maximum redundancy M, the maximum number of copies that Tier0 can hold;

a configured redundancy m, the number of copies pushed into Tier0, with m ≤ M;

copy slots, the positions where copies are placed in Tier0. All copies in Tier0 are placed in copy slots; a slot may hold one of the m copies or remain empty.

For example, the storage copy parameters of Tier0 may be set to M = 1 and m = 1, i.e. Tier0 has one copy slot.
Tier1 holds normal-performance storage copies.

Specifically, the storage copy parameter of Tier1 comprises:

a redundancy N, the number of copies kept on Tier1.

For example, the storage copy parameter of Tier1 may be set to N = 3, i.e. three copies are placed on Tier1.
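Taken together, the replica parameters of the two tiers (M, m ≤ M, copy slots, and N) can be sketched as a small configuration object. This is only an illustration of how the parameters relate; the class and method names are not from the patent:

```python
class ReplicaConfig:
    """Replica placement parameters for the two tiers (illustrative sketch)."""

    def __init__(self, M: int, m: int, N: int):
        # M: maximum number of copies Tier0 can hold (number of copy slots)
        # m: number of copies actually pushed into Tier0, m <= M
        # N: fixed number of copies kept on Tier1
        if m > M:
            raise ValueError("configured redundancy m must not exceed M")
        self.M = M
        self.m = m
        self.N = N

    def tier0_slots(self):
        # Copy slots: M positions in total; m are filled, the rest stay empty.
        return ["copy"] * self.m + [None] * (self.M - self.m)

# The example in the text: M = 1, m = 1 (one copy slot in Tier0), N = 3 on Tier1.
cfg = ReplicaConfig(M=1, m=1, N=3)
```

With M = 1 and m = 1, `cfg.tier0_slots()` yields a single filled slot; raising M while keeping m fixed would simply add empty slots available for future pushes.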
The monitor is responsible for collecting data-object access information and system performance information from the storage system.

Specifically, the object access information comes from the agent node and mainly concerns accesses directed at the storage service.
The scheduler is responsible for maintaining the storage state of objects on Tier0 and for issuing push commands to Tier1 based on the scheduling strategy.

Specifically, this includes requesting the storage service to push data onto Tier0, clearing data from Tier0, and maintaining the object list on the Tier0 node. The scheduling strategy mainly comprises hot-data identification, data temperature maintenance, and a data replacement strategy.
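A minimal sketch of the data-temperature maintenance and hot-data identification just described: per-object access counting with periodic decay, plus a hotness threshold that selects push candidates. The decay factor and threshold are assumptions for illustration, not values given by the patent:

```python
class TemperatureTable:
    """Per-object access 'temperature' with exponential decay (sketch)."""

    def __init__(self, hot_threshold: float = 10.0, decay: float = 0.5):
        self.temp = {}                    # object id -> current temperature
        self.hot_threshold = hot_threshold
        self.decay = decay                # applied once per maintenance round

    def record_access(self, obj_id: str):
        # Each access raises the object's temperature by one.
        self.temp[obj_id] = self.temp.get(obj_id, 0.0) + 1.0

    def decay_round(self):
        # Data temperature maintenance: old accesses fade over time.
        for k in self.temp:
            self.temp[k] *= self.decay

    def hot_objects(self):
        # Hot-data identification: candidates for a push into Tier0.
        return [k for k, t in self.temp.items() if t >= self.hot_threshold]

table = TemperatureTable(hot_threshold=3.0)
for _ in range(4):
    table.record_access("obj-a")          # frequently accessed
table.record_access("obj-b")              # accessed once
```

After the four accesses, only `obj-a` crosses the threshold; a decay round then cools both objects, so an object must keep being accessed to remain hot.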
The agent node provides a push (PUSH) interface in its external agent service.

In this automatically-tiered storage system, the only data path between Tier0 and Tier1 is a one-way path from Tier1 to Tier0. It is opened after Tier1 receives a PUSH command; in all other situations Tier0 and Tier1 do not communicate.
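The one-way PUSH path can be illustrated with in-memory stand-ins for the two tiers: data enters Tier0 only when a PUSH command is issued, and there is no reverse path. All names below are hypothetical:

```python
# In-memory stand-ins for the two storage tiers (illustrative only).
tier1 = {"obj-1": b"payload-1", "obj-2": b"payload-2"}  # normal-performance tier
tier0 = {}                                              # high-performance tier

def push(obj_id: str):
    """Scheduler-issued PUSH: Tier1 opens the one-way path and copies the
    object into Tier0. In this sketch it is the only way data enters Tier0."""
    if obj_id not in tier1:
        raise KeyError(f"{obj_id} not stored on Tier1")
    tier0[obj_id] = tier1[obj_id]       # one-way transfer: Tier1 -> Tier0

push("obj-1")
```

Only the pushed object appears on Tier0; nothing written elsewhere can reach it, which mirrors the text's claim that the tiers otherwise do not communicate.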
Tier0 operates through a REST application programming interface (REST API), as shown in Fig. 2, comprising:

a CREATE operation, for creating an object;

a GET operation, for reading an object;

a REMOVE operation, for deleting an object;

Note that the delete semantics here differ from DELETE on Tier1: REMOVE deletes the object from the system rather than merely dereferencing it. REMOVE is invoked either when the scheduler actively deletes data from Tier0, or as a callback issued to the scheduler after an object on Tier1 has been removed by garbage collection (GC, Garbage Collection);

a CLEAN operation, for clearing data from the system, with the scope of the clearance specified by a uniform resource locator (URL, Uniform Resource Locator).

As these four REST API operations show, Tier0 is used only to optimize read performance and is unrelated to write operations. All data in Tier0 originates from the scheduler sending PUSH requests to Tier1.
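The four Tier0 operations can be summarized as a mapping from operation name to an HTTP method and path. The patent specifies only the operation names; the method choices and URL layout below are assumptions for illustration:

```python
def tier0_request(op: str, obj_id: str = "", scope: str = ""):
    """Map a Tier0 operation to an assumed (HTTP method, path) pair (sketch)."""
    if op == "CREATE":        # create an object on Tier0
        return ("PUT", f"/tier0/objects/{obj_id}")
    if op == "GET":           # read an object
        return ("GET", f"/tier0/objects/{obj_id}")
    if op == "REMOVE":        # delete one object (not just a dereference)
        return ("DELETE", f"/tier0/objects/{obj_id}")
    if op == "CLEAN":         # clear all objects under a URL-specified scope
        return ("DELETE", f"/tier0/objects/{scope}/")
    raise ValueError(f"unknown Tier0 operation: {op}")
```

The distinction the text draws survives in the mapping: REMOVE targets a single object path, while CLEAN targets a whole scope prefix.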
Before serving the data of a requested object, the agent node first queries all copies of the object and transfers data only after selecting a readable copy. If a copy on Tier0 is available, it is selected first; if the requested object's data is not present on Tier0, the agent node reads a copy from Tier1. Because Tier0 exists only to optimize reads, it does not need to keep any metadata, and there are no complex metadata consistency maintenance operations.
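The agent's copy-selection logic described above — prefer a usable Tier0 copy, otherwise fall back to Tier1 — might be sketched as follows, with dicts standing in for the per-tier copy lookups:

```python
def select_copy(obj_id, tier0, tier1):
    """Agent read path (sketch): query all copies, prefer Tier0.

    tier0 / tier1 are stand-in dicts mapping object id -> copy data;
    a missing key means no usable copy on that tier.
    """
    if obj_id in tier0:       # a usable Tier0 copy is selected first
        return ("tier0", tier0[obj_id])
    if obj_id in tier1:       # otherwise fall back to a Tier1 copy
        return ("tier1", tier1[obj_id])
    raise KeyError(f"no readable copy of {obj_id} on either tier")
```

After selection, the agent would report the access through the push interface so the monitor can update the object's temperature.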
Access-behavior monitoring, statistical analysis, and data migration all consume computational resources. So that monitoring and scheduling do not interfere with normal system access, the monitor and the scheduler are placed on an independent node. At deployment time the monitor can dock with the storage monitoring system, for example by sharing the monitoring system's database. The storage system exposes a monitoring interface implemented with statsd: at key points of system operation — for example, when an object receives a hypertext transfer protocol (HTTP, HyperText Transfer Protocol) request — a probe function is inserted that sends monitoring data to the monitoring system or the monitor over the user datagram protocol (UDP, User Datagram Protocol); using UDP keeps the monitoring network overhead very small. The scheduler accesses Tier0 and Tier1 through the REST API; these are control-type accesses whose load has very little impact on production traffic.
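The statsd-style instrumentation described above — a probe that emits a counter datagram over UDP — might look like the following sketch. The metric name and the loopback receiver (standing in for the monitoring system) are illustrative assumptions:

```python
import socket

def emit_counter(sock: socket.socket, addr, metric: str, value: int = 1):
    """Send one statsd-format counter datagram, e.g. b'proxy.http.get:1|c'.
    Fire-and-forget UDP keeps the monitoring overhead negligible."""
    sock.sendto(f"{metric}:{value}|c".encode(), addr)

# Stand-in for the monitoring system's UDP listening socket (loopback only).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))                  # ephemeral port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
emit_counter(send, addr, "proxy.http.get")   # probe fired on an HTTP request
packet, _ = recv.recvfrom(1024)
```

Because the probe is a single unacknowledged datagram, a slow or absent monitoring system never blocks the storage request path, which is the point the text makes about UDP.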
The scheduler and the monitor run as single instances and keep state, so they need high availability (HA, High Availability) protection. If the state in the scheduler is lost, it suffices to locate the smallest range for which data correctness can no longer be guaranteed and send a CLEAN command to empty the corresponding part of Tier0. By the nature of a content-addressable storage system, the data in Tier0 has almost no consistency maintenance problem: losing it only costs some read performance and cannot affect data correctness. If the node hosting the monitor and scheduler goes down, Tier0 merely retains partial historical data and cannot obtain the latest access data; the storage system itself remains available.
Compared with traditional cache management, the automatically-tiered storage system of the present invention has at least the following advantages:

First, a traditional cache system is "best-effort", so data access is "synchronous" with updating the data in the high-speed device. In automatic storage tiering, by contrast, data access does not directly affect data placement in the high-speed device; the data to put into the high-speed device is determined only after access-behavior analysis and computation, so the process is "asynchronous".

Second, automatic storage tiering optimizes access at a global level: once hot data has been identified, it is written to the high-speed storage device by "pushing". Compared with a traditional cache system that "pulls" data after a cache miss, this makes the access optimization more targeted, and it also reduces the erase/write frequency of the high-speed device, extending the SSD's service life.
The present invention designs an automatically-tiered storage system. In the automatic tiering storage architecture, storage is divided into tiers by performance characteristics, and a monitor, a scheduler, and a PUSH interface in the external service cooperate to schedule data across tiers globally. At run time, performance collection and tier scheduling are performed: the monitor collects the storage system's runtime performance data and object access statistics, and the scheduler pushes frequently accessed objects into the high-performance storage tier. Dynamic replica management is implemented: based on access information collected at run time, the number of copies of hot data is increased to improve concurrent read access performance.

The present invention fully exploits the real-time nature of data pushing, dynamic copy scheduling based on runtime data analysis, and the collection of storage system performance data and access information. With these advantages over traditional cache-based optimization of data access, the invention improves hot-data access performance while effectively solving the lack of real-time responsiveness in current automatic tiering systems, and also reduces needless SSD wear to some extent. The methods proposed in this system apply equally to other distributed storage systems, so the invention has high technical and practical value for large-scale distributed object storage practice.
Although embodiments of the present invention are disclosed above, the content described is provided only to aid understanding of the invention and is not intended to limit it. Any person skilled in the art to which the present invention belongs may make modifications and changes in form and detail without departing from the spirit and scope disclosed by the invention, but the scope of patent protection of the invention remains as defined by the appended claims.
Claims (10)
1. An automatically-tiered storage system, characterized by comprising:

a high-performance storage tier Tier0, for holding high-performance storage copies;

a normal-performance storage tier Tier1, for holding normal-performance storage copies;

a monitor, responsible for collecting data-object access information and system performance information from the storage system;

a scheduler, for maintaining the data objects stored on Tier0 and for sending push commands to Tier1, based on a scheduling strategy, to transfer data objects from Tier1 to Tier0;

an agent node, for providing a push interface in its external agent service;

wherein a one-way data path from Tier1 to Tier0 exists between Tier0 and Tier1; after Tier1 receives a push command from the scheduler, the one-way data path is opened and data objects are transferred one way from Tier1 to Tier0.
2. The automatically-tiered storage system of claim 1, characterized in that the storage copy parameters of Tier0 comprise: a maximum redundancy M, the maximum number of copies that Tier0 can hold; a configured redundancy m, the number of copies pushed into Tier0, with m ≤ M; and copy slots, the positions where copies are placed in Tier0;

and the storage copy parameter of Tier1 comprises: a redundancy N, the number of copies kept on Tier1.
3. The automatically-tiered storage system of claim 1, characterized in that Tier0 operates through a REST API comprising: a CREATE operation, for creating a data object; a GET operation, for reading a data object; a REMOVE operation, for deleting a data object; and a CLEAN operation, for clearing data objects from the storage system; wherein

the REMOVE operation is invoked either when the scheduler actively deletes data from Tier0, or as a callback issued to the scheduler after a data object on Tier1 has been removed by garbage collection;

the CLEAN operation clears data objects in the storage system within a scope specified by a uniform resource locator (URL).
4. The automatically-tiered storage system of claim 1, characterized in that the agent node queries the storage copies on Tier0 and Tier1 for the requested data object, and after selecting a storage copy on Tier0 or Tier1, sends the object access information to the monitor through the push interface.
5. The automatically-tiered storage system of claim 4, characterized in that selecting a storage copy on Tier0 or Tier1 is specifically:

querying whether a storage copy on Tier0 is available; if it is, selecting the copy on Tier0; if not, selecting a copy on Tier1.
6. The automatically-tiered storage system of claim 1, characterized in that the scheduling strategy comprises: hot-data identification, data temperature maintenance, and a data replacement strategy.
7. The automatically-tiered storage system of claim 6, characterized in that the monitor collects the data-object access information and system performance information in the storage system through the push interface of the agent node; the scheduler pushes data objects whose access count exceeds a set threshold into Tier0, and increases the number of storage copies of hot data according to the hot-data identification.
8. The automatically-tiered storage system of claim 1, characterized in that the monitor and the scheduler reside on an independent node outside the storage tiers and the agent node, and the monitor is implemented by sharing the monitoring system's database, docking with the storage system's monitoring system.
9. The automatically-tiered storage system of claim 8, characterized in that implementing the monitor by sharing the monitoring system's database and docking with the storage system's monitoring system is specifically:

the storage system has a monitoring interface, implemented with statsd;

while the storage system is running, when an object receives a hypertext transfer protocol request, a probe function is inserted that sends monitoring data to the monitoring interface of the monitoring system over the user datagram protocol, and the monitor obtains the monitoring data from the monitoring system's database.
10. The automatically-tiered storage system of claim 9, characterized in that the scheduler and the monitor run as single instances and are protected for high availability.
Priority and publication data

- Application CN201510696499.9A, filed 2015-10-23
- Published as CN105242884A on 2016-01-13
- Granted as CN105242884B on 2018-10-16