CN104133642A - SSD Cache filling method and device - Google Patents

SSD Cache filling method and device

Info

Publication number
CN104133642A
CN104133642A (application CN201410367728.8A)
Authority
CN
China
Prior art keywords
block
ssd cache
size
filled
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410367728.8A
Other languages
Chinese (zh)
Other versions
CN104133642B (en)
Inventor
吴会堂
石岩
姚婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201410367728.8A priority Critical patent/CN104133642B/en
Publication of CN104133642A publication Critical patent/CN104133642A/en
Application granted granted Critical
Publication of CN104133642B publication Critical patent/CN104133642B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides an SSD Cache filling method and device applicable to storage devices. The method comprises: dividing an SSD Cache into a plurality of Blocks of different sizes; and, according to the size of a data block to be filled, selecting the Block whose size is closest to that of the data block to be filled and filling the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block. By selecting the size of the Block to fill according to the size of the service command, the method minimizes the number of disk read-backs and improves the SSD Cache filling speed.

Description

SSD Cache filling method and device
Technical field
The present invention relates to the technical field of storage, and in particular to an SSD Cache filling method and device.
Background technology
In current mainstream storage products, an SSD (Solid State Disk) hard disk is used as a second-level read cache for the system to improve the random read performance of the array; such an SSD hard disk is conventionally called an SSD Cache (cache memory). The storage device's own read cache is usually small, generally only hundreds of MB to 1 GB, while the space of an SSD Cache can reach 1 TB. Therefore, if data is read in advance and filled into the SSD Cache, then when that data is accessed it can be read directly from the SSD Cache, which noticeably shortens the response time and improves read performance.
In the prior art scheme shown in Figure 1, filling of the SSD Cache starts only after data has entered the read cache and the read cache has been used up, which raises the filling threshold of the SSD Cache. As shown in Figure 2, when filling the SSD Cache, the Block (the smallest allocation unit of the Cache) size of the SSD Cache is conventionally set to the same size as the Blocks of the read cache. Data is written from the disk into the read cache and then filled from the read cache into the SSD Cache (solid lines in the figure indicate the filling direction). Because the data blocks read by upper-layer services vary in size, the Blocks of the read cache often contain holes (the amount actually filled into some Blocks of the read cache is less than N), while filling the SSD Cache succeeds only when the data size is exactly N. Therefore, when data is filled from the read cache into the SSD Cache, data smaller than N must be topped up with data read back from the disk (dotted lines in the figure indicate the read-back direction) before it can be filled into the SSD Cache. Because the Block size of the SSD Cache is fixed and uniform while service commands vary in size, the number of read-backs during SSD Cache filling inevitably increases, which increases the number of random disk commands and harms service performance and filling speed.
Summary of the invention
In view of this, the present invention provides an SSD Cache filling method applied to a storage device, the method comprising:
dividing the SSD Cache into Blocks of several different sizes;
according to the size of a data block to be filled, selecting the Block whose size is closest to that of the data block to be filled and filling the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block.
The present invention also provides an SSD Cache filling device applied to a storage device, the device comprising:
a Block configuration unit, configured to divide the SSD Cache into Blocks of several different sizes;
a Block filling unit, configured to select, according to the size of a data block to be filled, the Block whose size is closest to that of the data block to be filled and fill the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block.
The present invention selects the size of the Block to fill according to the size of the service command, minimizing the number of disk read-backs and improving the SSD Cache filling speed. Moreover, the space allocated to the different Blocks of the SSD Cache is dynamically adjusted according to the distribution trend of service command sizes, which further reduces random command requests to the disk, further improves the SSD Cache filling speed, and achieves the goal of improving random read performance.
Brief description of the drawings
Fig. 1 is a schematic diagram of starting SSD Cache filling in the prior art.
Fig. 2 is a schematic diagram of SSD Cache filling in the prior art.
Fig. 3 is a schematic diagram of the logical structure of an SSD Cache filling device and its underlying hardware environment in one embodiment of the present invention.
Fig. 4 is a flowchart of an SSD Cache filling method in one embodiment of the present invention.
Fig. 5 is a schematic diagram of starting SSD Cache filling in one embodiment of the present invention.
Fig. 6 is a schematic diagram of first-time SSD Cache filling in one embodiment of the present invention.
Fig. 7 is a schematic diagram of second-time SSD Cache filling in one embodiment of the present invention.
Fig. 8 is a schematic diagram of dynamic space allocation of the SSD Cache in one embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings.
The present invention provides an SSD Cache filling device. The following description takes a software implementation running on a computer as an example, but the present invention does not exclude other implementations such as hardware or logic devices. As shown in Figure 3, the computer hardware environment in which the device runs comprises a CPU, memory, non-volatile storage, and other hardware. The device is a virtual apparatus at the logical level on the computer and is run by the computer's CPU. The device comprises a Block configuration unit and a Block filling unit. Referring to Figure 4, the operation of the device comprises the following steps:
Step 101: the Block configuration unit divides the SSD Cache into Blocks of several different sizes.
Step 102: the Block filling unit selects, according to the size of a data block to be filled, the Block whose size is closest to that of the data block to be filled and fills the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block.
By dividing the SSD Cache into a plurality of Blocks of different sizes and then selecting, for each upper-layer service command, the Block whose size most closely matches it, the present invention reduces random command requests to the disk, improves the SSD Cache filling speed, and thereby improves random read performance. The specific implementation process is as follows.
As shown in Figure 5, when an upper-layer service issues a read request and the data must be read from disk and written into the read cache, the SSD Cache filling process is started and the data is filled into the SSD Cache. Because the capacity of the read cache is limited, the read cache always holds the latest data that the upper-layer service needs. Whenever data is written into the read cache, it is also written into the SSD Cache, which can be regarded as a backup of the data in the read cache. Because the capacity of the SSD Cache is relatively large, it holds the data recently used by the upper-layer service (the amount held depends on the capacity of the SSD Cache), including the data currently in the read cache. When the upper-layer service needs to read recently used data that is no longer in the read cache, the SSD Cache is searched; if the SSD Cache holds the data, it is read directly from the SSD Cache. As can be seen from the above process, data written into the read cache is also written into the SSD Cache, and the SSD Cache is slower to write than the read cache. The present invention therefore advances the SSD Cache filling process: filling starts as soon as writing into the read cache starts. This avoids the prior-art problem that SSD Cache filling could begin only after the read cache was used up and data then had to be filled from the read cache into the SSD Cache; it saves time and improves the filling speed to a certain extent.
Because SSD Cache filling requires the size of the data block to be filled to match the Block size, the present invention divides the SSD Cache into Blocks of different sizes (for example 4K, 8K, 16K, 32K, and 64K). This increases the probability that a data block to be filled matches a Block, which in turn reduces the number of read-backs from the disk and improves the filling speed.
When filling the SSD Cache with data, the Block whose size is closest to that of the data block is selected according to the size of the data block to be filled. The SSD Cache filling process is described with reference to Figure 6. In this embodiment, the Block sizes into which the SSD Cache is divided are 4K, 8K, 16K, 32K, and 64K. For a data block larger than 64K (the largest Block into which the SSD Cache is divided), the data block is split into 64K pieces and a plurality of 64K Blocks are selected for filling; when the last Block is not completely filled, data is read back from the disk. For a data block smaller than 64K, the Block closest to its size is determined step by step: for example, if the data block size is smaller than or equal to 4K, the data block is filled into a 4K Block; if the data block size is larger than 4K and smaller than or equal to 8K, it is filled into an 8K Block; and so on, finding the best-matching Block for each data block size. During filling, if the data block size equals the Block size, no data needs to be read back from the disk; if the data block size is smaller than the Block size, the remaining space in the Block must be filled with data read back from the disk. For example, a 12K data block is filled into a 16K Block, and the remaining 4K of space is filled with data read back from the disk.
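The selection and read-back rules above can be sketched as follows. This is a minimal illustration, not the patented implementation; the tier tuple and function name are assumptions introduced here, and sizes are in KB for readability.

```python
def plan_fill(data_kb, tiers_kb=(4, 8, 16, 32, 64)):
    """Plan the Blocks for one data block to be filled.

    Returns (block_sizes, readback_kb): the Block sizes to allocate and the
    amount of data that must be read back from disk to top up the last Block.
    """
    largest = tiers_kb[-1]
    if data_kb > largest:
        # Data larger than the biggest tier is split into 64K pieces; only a
        # partially filled final piece needs a disk read-back.
        full, rest = divmod(data_kb, largest)
        blocks = [largest] * full
        readback = 0
        if rest:
            blocks.append(largest)
            readback = largest - rest
        return blocks, readback
    for tier in tiers_kb:
        if data_kb <= tier:
            # Smallest sufficient tier; the shortfall is read back from disk.
            return [tier], tier - data_kb
```

For example, `plan_fill(12)` yields one 16K Block with a 4K read-back, matching the 12K example in the description.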
In a specific implementation, read-back flag bits can be set to record whether each Block needs a read-back, with each binary bit representing one Block. A bit equal to 0 indicates that the corresponding Block is unfilled, or has been filled and needs no read-back; a bit equal to 1 indicates that the corresponding Block has been filled and is awaiting a read-back. Taking a 1 TB SSD Cache and calculating the limiting case in which 8K Blocks occupy the whole SSD Cache space and all need a read-back, the memory occupied by the read-back flags is ((1*1024*1024*1024)/8)/(8*1024*1024) = 16 MB. Therefore, 16 MB of space needs to be reserved for the read-back flags when the SSD Cache is initialized; these flag bits do not occupy a significant amount of memory.
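The 16 MB figure can be checked, and the per-Block flag operations sketched, as below. The bit layout and function names are assumptions; the description only specifies one bit per Block and the 8K limiting case.

```python
CACHE_BYTES = 1024**4        # 1 TB SSD Cache, the description's example
BLOCK_BYTES = 8 * 1024       # limiting case: 8K Blocks fill the whole cache
NUM_BLOCKS = CACHE_BYTES // BLOCK_BYTES
BITMAP_BYTES = NUM_BLOCKS // 8          # one flag bit per Block
assert BITMAP_BYTES == 16 * 1024**2     # 16 MB, matching the text

bitmap = bytearray(BITMAP_BYTES)

def set_readback(idx, pending):
    """Mark Block idx as filled-and-awaiting-read-back (1) or clear it (0)."""
    if pending:
        bitmap[idx // 8] |= 1 << (idx % 8)
    else:
        bitmap[idx // 8] &= ~(1 << (idx % 8))

def needs_readback(idx):
    return bool(bitmap[idx // 8] & (1 << (idx % 8)))
```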
The above process describes the first filling of the SSD Cache. Like the read cache, the SSD Cache also faces second-time filling: its space is, after all, limited and will eventually be exhausted, or a data block being read may hit an existing Block, requiring a second filling. As shown in Figure 7, suppose the upper-layer service reads 20K of data, part of which already resides in a 16K Block. The 16K Block is then released, the 20K of data is written into a 32K Block, the 32K Block is recorded as needing a read-back, and 12K of data is read back from the disk to fill the 32K Block. If the 32K Blocks in the SSD Cache are used up, LRU (Least Recently Used) replacement is applied: the least recently used data block is removed from its Block and the new data block is filled in.
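The LRU replacement within one Block size can be sketched with an ordered map. This is a hypothetical bookkeeping structure with assumed names; the patent only names LRU as the replacement policy.

```python
from collections import OrderedDict

class TierLRU:
    """LRU bookkeeping for the Blocks of one size tier."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, oldest first

    def fill(self, block_id, data):
        """Fill (or re-fill) a Block; evict the least recently used if full.

        Returns the evicted block_id, or None if nothing was evicted."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            evicted, _ = self.blocks.popitem(last=False)
            return evicted
        return None
```

Re-filling an existing Block moves it to the most recently used position, so only the genuinely coldest Block is released when the tier runs out of space.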
To further improve the filling speed, the present invention can dynamically allocate storage space according to the usage of the differently sized Blocks in the SSD Cache. For example, at initialization the same spatial capacity is allocated by default to the 4K, 8K, 16K, 32K, and 64K Blocks on an equal-division principle: if the total storage capacity of the SSD Cache is Q, the capacity for each Block size is Q/5. A timer is set to count the number of filled Blocks at regular intervals (for example every 1 s). Suppose that at the current statistics point the numbers of filled 4K, 8K, 16K, 32K, and 64K Blocks are D_x, E_x, F_x, G_x, and H_x respectively. Taking the 8K Block as an example, its filling ratio is Z_x = E_x / (D_x + E_x + F_x + G_x + H_x), i.e. the percentage of all filled Blocks that are currently filled 8K Blocks. Because the upper-layer applications change continuously, the lengths of the data they read also differ, so the filling ratio of each Block differs at each statistics point. The present invention averages the filling ratios over m statistics points as the basis for calculating the current Block storage space. Suppose that among the m statistics for the 8K Block, the filling ratio Z_1 is obtained n1 times, the filling ratio Z_2 is obtained n2 times, ..., and the filling ratio Z_i is obtained ni times, where m = n1 + n2 + ... + ni; the actual filling ratio is then Z_y = (Z_1*n1 + Z_2*n2 + ... + Z_i*ni) / m. Storage space is then reallocated to the 8K Blocks according to this filling ratio, the allocated capacity being Q_y = Q*Z_y. The filling ratios of the 4K, 16K, 32K, and 64K Blocks are calculated in the same way, and the corresponding storage spaces are derived. Thus, when a certain type of upper-layer service command increases, the number of occupied corresponding Blocks also increases, the measured filling ratio rises, and more Block space is allocated for that type of service command.
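The averaging formula Z_y = (Z_1*n1 + Z_2*n2 + ... + Z_i*ni) / m and the reallocation formula Q_y = Q*Z_y translate directly into code. This is a sketch; the representation of the samples as (ratio, count) pairs is an assumption.

```python
def weighted_fill_ratio(samples):
    """Z_y: average of the per-interval filling ratios, weighted by how many
    of the m statistics points produced each ratio. samples: [(Z_i, n_i), ...]."""
    m = sum(n for _, n in samples)
    return sum(z * n for z, n in samples) / m

def tier_capacity(total_q, z_y):
    """Q_y = Q * Z_y: storage capacity reallocated to one Block size."""
    return total_q * z_y
```

With a ratio of 0.5 seen twice and 0.25 seen twice over m = 4 intervals, Z_y = 0.375; a cache of Q = 1000 units then gives that tier Q_y = 375.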
In the above filling-ratio statistics, a certain Block size may never be used, or be used only rarely; a certain proportion of space (for example 1%) is still reserved for it as standby.
The specific allocation strategy is shown in Figure 8. Suppose the filling ratios of the Blocks counted at a certain moment are: 4K 30%, 8K 30%, 16K 30%, 32K 5%, 64K 5%; and the ratios counted 10 seconds later are: 4K 40%, 8K 50%, 16K 10%, 32K 0%, 64K 0%. The space for the 4K and 8K Blocks should then be increased, and the storage space for the 16K, 32K, and 64K Blocks reduced. Although the 32K and 64K Blocks were not used during this period, a portion of space (for example 1% each) is still reserved for them, and the remaining space is distributed among the 4K, 8K, and 16K Blocks in proportion to their ratios. Meanwhile, before reducing the space of a given Block size, it is necessary to check whether that Block space has been fully filled; if the space has run out, the space to be released is selected in LRU fashion, the data inside it is marked invalid, and the space is then given to the Block sizes that need more space.
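The allocation strategy of Figure 8 (reserve a small slice for idle Block sizes, split the rest proportionally) can be sketched as below. The function name and the default reserve fraction beyond the 1% example in the text are assumptions.

```python
def reallocate(total_q, fill_ratios, reserve=0.01):
    """Reallocate SSD Cache space among Block sizes.

    fill_ratios: {block_size_kb: averaged filling ratio Z_y}. Sizes with a
    zero ratio keep a small reserved slice (1% of total by default); the
    remainder is divided among the active sizes in proportion to their ratios.
    """
    idle = [t for t, z in fill_ratios.items() if z == 0]
    alloc = {t: reserve * total_q for t in idle}
    remaining = total_q - sum(alloc.values())
    active_sum = sum(z for z in fill_ratios.values() if z > 0)
    for t, z in fill_ratios.items():
        if z > 0:
            alloc[t] = remaining * z / active_sum
    return alloc
```

With the Figure 8 ratios (4K: 40%, 8K: 50%, 16K: 10%, 32K and 64K: 0%) and Q = 1000 units, the idle 32K and 64K tiers each keep 10 units and the remaining 980 splits roughly as 392 / 490 / 98.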
The present invention selects the size of the Block to fill according to the size of the service command, minimizing the number of disk read-backs and improving the SSD Cache filling speed. Moreover, the space allocated to the different Blocks of the SSD Cache is dynamically adjusted according to the distribution trend of service command sizes, which further reduces random command requests to the disk, further improves the SSD Cache filling speed, and achieves the goal of improving random read performance.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. An SSD Cache filling method, applied to a storage device, characterized in that the method comprises:
dividing the SSD Cache into Blocks of several different sizes;
according to the size of a data block to be filled, selecting the Block whose size is closest to that of the data block to be filled and filling the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block.
2. The method according to claim 1, characterized in that:
storage space is dynamically allocated for the Blocks of said different sizes.
3. The method according to claim 2, characterized in that:
the storage space of each type of Block is calculated in the same way, specifically:
Q_y = Q * Z_y
wherein,
Q is the total storage capacity of the SSD Cache;
Z_y is the mean value of the Block filling-number percentage over m statistics;
Q_y is the storage capacity allocated to the Block;
Blocks of the same size are Blocks of the same type;
said Block filling-number percentage is the percentage that the filling count of the current type of Block makes up of the filling counts of all types of Blocks within a preset time period.
4. The method according to claim 3, characterized in that:
when the Q_y calculated for a certain type of Block is 0, a space capacity of a preset size is allocated to the Block of that type.
5. The method according to claim 1, characterized in that, before Block filling is performed, the method further comprises:
upon receiving a read request of an upper-layer service, starting SSD Cache filling when data on the disk is written into the read cache.
6. An SSD Cache filling device, applied to a storage device, characterized in that the device comprises:
a Block configuration unit, configured to divide the SSD Cache into Blocks of several different sizes;
a Block filling unit, configured to select, according to the size of a data block to be filled, the Block whose size is closest to that of the data block to be filled and fill the data block into it, wherein the size of the data block to be filled is smaller than or equal to the size of the selected Block.
7. The device according to claim 6, characterized in that:
the Block configuration unit is further configured to dynamically allocate storage space for the Blocks of said different sizes.
8. The device according to claim 7, characterized in that:
the storage space of each type of Block is calculated in the same way, specifically:
Q_y = Q * Z_y
wherein,
Q is the total storage capacity of the SSD Cache;
Z_y is the mean value of the Block filling-number percentage over m statistics;
Q_y is the storage capacity allocated to the Block;
Blocks of the same size are Blocks of the same type;
said Block filling-number percentage is the percentage that the filling count of the current type of Block makes up of the filling counts of all types of Blocks within a preset time period.
9. The device according to claim 8, characterized in that:
when the Q_y calculated for a certain type of Block is 0, a space capacity of a preset size is allocated to the Block of that type.
10. The device according to claim 6, characterized in that the device further comprises, before the Block filling unit:
a filling start unit, configured to, upon receiving a read request of an upper-layer service, start SSD Cache filling when data on the disk is written into the read cache.
CN201410367728.8A 2014-07-29 2014-07-29 SSD Cache filling method and device Active CN104133642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410367728.8A CN104133642B (en) 2014-07-29 2014-07-29 SSD Cache filling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410367728.8A CN104133642B (en) 2014-07-29 2014-07-29 SSD Cache filling method and device

Publications (2)

Publication Number Publication Date
CN104133642A true CN104133642A (en) 2014-11-05
CN104133642B CN104133642B (en) 2018-07-13

Family

ID=51806333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410367728.8A Active CN104133642B (en) SSD Cache filling method and device

Country Status (1)

Country Link
CN (1) CN104133642B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106406756A (en) * 2016-09-05 2017-02-15 华为技术有限公司 Space allocation method of file system, and apparatuses
CN109542346A (en) * 2018-11-19 2019-03-29 深圳忆联信息系统有限公司 Dynamic data cache allocation method, device, computer equipment and storage medium
CN111158578A (en) * 2018-11-08 2020-05-15 浙江宇视科技有限公司 Storage space management method and device
CN111880733A (en) * 2020-07-24 2020-11-03 长江存储科技有限责任公司 Three-dimensional memory device, three-dimensional memory, operating method thereof and three-dimensional memory system
CN112162703A (en) * 2020-09-25 2021-01-01 杭州宏杉科技股份有限公司 Cache implementation method and cache management module
CN112486918A (en) * 2019-09-11 2021-03-12 浙江宇视科技有限公司 File processing method, device, equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5394531A (en) * 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
US6327673B1 (en) * 1991-01-31 2001-12-04 Hitachi, Ltd. Storage unit subsystem
CN1527206A (en) * 2003-03-03 2004-09-08 华为技术有限公司 Memory pool managing method
CN1704910A (en) * 2004-06-03 2005-12-07 华为技术有限公司 Write handling method for disc array arrangement
CN1991792A (en) * 2005-09-30 2007-07-04 英特尔公司 Instruction-assisted cache management for efficient use of cache and memory
CN101135994A (en) * 2007-09-07 2008-03-05 杭州华三通信技术有限公司 Method and apparatus for dividing cache space and cache controller thereof
CN102063385A (en) * 2010-12-23 2011-05-18 深圳市金宏威实业发展有限公司 Memory management method and system
CN103778071A (en) * 2014-01-20 2014-05-07 华为技术有限公司 Cache space distribution method and device


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106406756A (en) * 2016-09-05 2017-02-15 华为技术有限公司 Space allocation method of file system, and apparatuses
CN106406756B (en) * 2016-09-05 2019-07-09 华为技术有限公司 A kind of space allocation method and device of file system
CN111158578A (en) * 2018-11-08 2020-05-15 浙江宇视科技有限公司 Storage space management method and device
CN109542346A (en) * 2018-11-19 2019-03-29 深圳忆联信息系统有限公司 Dynamic data cache allocation method, device, computer equipment and storage medium
CN112486918A (en) * 2019-09-11 2021-03-12 浙江宇视科技有限公司 File processing method, device, equipment and medium
CN112486918B (en) * 2019-09-11 2022-09-06 浙江宇视科技有限公司 File processing method, device, equipment and medium
CN111880733A (en) * 2020-07-24 2020-11-03 长江存储科技有限责任公司 Three-dimensional memory device, three-dimensional memory, operating method thereof and three-dimensional memory system
CN112162703A (en) * 2020-09-25 2021-01-01 杭州宏杉科技股份有限公司 Cache implementation method and cache management module

Also Published As

Publication number Publication date
CN104133642B (en) 2018-07-13

Similar Documents

Publication Publication Date Title
US20220138103A1 (en) Method and apparatus for controlling cache line storage in cache memory
CN104133642A (en) SSD Cache filling method and device
CN103608782B (en) Selective data storage in LSB page face and the MSB page
US9542306B2 (en) Dynamic storage device provisioning
US9239785B2 (en) Stochastic block allocation for improved wear leveling
CN102650931B (en) Method and system for writing data
US9158695B2 (en) System for dynamically adaptive caching
CN108897642B (en) Method and device for optimizing log mechanism in persistent transactional memory system
US10572171B2 (en) Storage system
CN107783812B (en) Virtual machine memory management method and device
CN103942159A (en) Data read-write method and device based on mixed storage device
US20180150219A1 (en) Data accessing system, data accessing apparatus and method for accessing data
CN107025066A (en) The method and apparatus that data storage is write in the storage medium based on flash memory
CN109753361A (en) A kind of EMS memory management process, electronic equipment and storage device
US10838628B2 (en) Storage system and control method of maintaining reliability of a mounted flash storage
CN109324979B (en) Data cache dividing method and data distribution method of 3D flash memory solid-state disk system
US20210056030A1 (en) Multi-level system memory with near memory capable of storing compressed cache lines
CN102999441A (en) Fine granularity memory access method
US10073851B2 (en) Fast new file creation cache
KR20130108117A (en) System for dynamically adaptive caching
US10509742B2 (en) Logical memory buffers for a media controller
CN105630413B (en) A kind of synchronization write-back method of data in magnetic disk
WO2011048400A1 (en) Memory interface compression
CN108139983A (en) For the method and apparatus of the fixed memory page in multilevel system memory
CN109783000B (en) Data processing method and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant