Background technology
Cache (high-speed cache) is an important component of a computer system, widely present in parts such as the central processing unit (CPU), the disk controller, and the bus adapter. A Cache is in essence a concept, a kind of logic: it can exist in the form of hardware, or take effect in software with the cooperation of hardware.
The core function of a Cache is to speed up external access to the target component; for a storage system, the significance of the Cache lies mainly in improving the access speed of the disk system. The function of a Cache divides into two aspects: serving read requests and serving write requests. For a write request, the effect of the Cache is to temporarily store the data to be written in a high-speed device (memory), so that the requester need not wait for the data to be written to the slower target device (a disk, etc.). For a read request, the significance of the Cache is to return the copy of the data held in the high-speed device to the user as early as possible, without accessing the actual low-speed device.
In the software architecture of the whole storage system, the software stack comprises iSCSI (Internet Small Computer System Interface), Cache, LVM (Logical Volume Manager), RAID (Redundant Array of Independent Disks), and HDD (Hard Disk Drive).
As shown in Figure 1, the Cache sits below the iSCSI service layer and above the LVM layer; it is an interface component and has a material impact on the performance of the storage system.
From the perspective of data flow, all requests issued by the iSCSI service layer converge at the Cache. To guarantee the integrity of the data in the Cache, accesses to the same region of the Cache must be carried out in a mutually exclusive fashion: while a region is being accessed, other requests to that region must wait. While this strategy guarantees logical correctness, it reduces the processing speed of requests and becomes a bottleneck of system efficiency.
For a zero-copy Cache in particular, a region of the Cache must be locked while it is in use by one user, to prevent the confusion that would arise if two users used the same memory jointly. While a memory chunk is in use by one user, any other user that needs this memory chunk must wait, which reduces concurrency and lowers efficiency.
Summary of the invention
The object of the present invention is to solve at least one of the aforementioned problems in the prior art.
To this end, embodiments of the present invention propose a method and a system for request access to a zero-copy Cache, so as to improve the processing speed of requests while guaranteeing logical correctness.
According to one aspect of the present invention, an embodiment proposes a method for request access to a zero-copy high-speed cache (Cache), the method comprising the following steps: creating a request data structure for each request that accesses the Cache, wherein the request data structure records the addresses of the Cache regions the corresponding access request needs to access; placing the request data structures into a request queue in order of arrival; identifying, according to each request data structure in the request queue, whether the Cache region the corresponding access request needs to access is occupied; when the Cache region a write request needs to access is identified as occupied, allocating a temporary space to the write request and writing the corresponding data into the temporary space; and, when the occupied Cache region is released, writing the data in the temporary space into that Cache region.
According to a further embodiment of the present invention, when the Cache region a write request needs to access is identified as unoccupied, the data of the write request is written into that Cache region.
According to a further embodiment of the present invention, when the Cache region a read request needs to access is identified as occupied, the access of the read request to the occupied Cache region is postponed, and that Cache region is accessed when the read request returns.
According to a further embodiment of the present invention, when the Cache region a read request needs to access is identified as unoccupied, that Cache region is read.
According to a further embodiment of the present invention, the capacity of the temporary space is the same as the capacity of the occupied Cache region.
According to another aspect of the present invention, embodiments propose a system for request access to a zero-copy high-speed cache (Cache), the system comprising: a creation module, which creates a request data structure for each request that accesses the Cache, wherein the request data structure records the addresses of the Cache regions the corresponding access request needs to access; a queue module, which places the request data structures into a request queue in order of arrival; an identification module, which identifies, according to each request data structure in the request queue, whether the Cache region the corresponding access request needs to access is occupied; a space allocation module, which allocates a temporary space to a write request when the Cache region the write request needs to access is identified as occupied; and a writing module, which writes the data of the write request into the temporary space and, when the occupied Cache region is released, writes the data in the temporary space into that Cache region.
According to a further embodiment of the present invention, when the identification module identifies that the Cache region a write request needs to access is unoccupied, the writing module writes the data of the write request into that Cache region.
According to a further embodiment of the present invention, the system further comprises a reading module; when the identification module identifies that the Cache region a read request needs to access is occupied, the reading module postpones the access of the read request to the occupied Cache region, and accesses that Cache region when the read request returns.
According to yet a further embodiment of the present invention, when the identification module identifies that the Cache region a read request needs to access is unoccupied, the reading module reads that Cache region.
The present invention adopts request queue mechanism, when write request clashes, conflict region allocation is held to the temporarily providing room of data to be written; After conflict is removed, the content in temporarily providing room is written to former conflict area.When read request clashes, postpone the access to conflict area, the access in non conflicting region is carried out as usual.
Thus, on the premise of guaranteeing logical correctness, a conflicting request can return as early as possible without waiting for the completion of the related requests, improving the processing speed of requests.
Additional aspects and advantages of the present invention will be set forth in part in the description that follows, will in part become apparent from that description, or may be learned through practice of the invention.
The present invention is described below in conjunction with the drawings and specific embodiments, which are not to be taken as limiting the invention.
Embodiment
Embodiments of the present invention are described in detail below; examples of these embodiments are shown in the drawings, in which the same or similar reference numerals denote, throughout, the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present invention, and are not to be construed as limiting it.
Referring to Figure 2, which is a block diagram of the system architecture for request access to a zero-copy Cache according to an embodiment of the present invention.
As shown in the figure, the system comprises a creation module 12, a queue module 14, an identification module 16, a space allocation module 18, and a writing module 20.
The creation module 12 creates a request data structure for each request that accesses the Cache, and each request is represented by this structure. The request data structure contains a sub-structure that records the addresses (pointers) of the Cache regions (Cache blocks) the corresponding access request needs to access; the user accesses the content in the Cache through these pointers.
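As a minimal sketch, the request data structure described above could look like the following; the class and field names are illustrative assumptions, not taken from the embodiment.

```python
# Minimal sketch of a request data structure: each access request records
# the Cache blocks (here, by symbolic address) that it needs to access.
class CacheRequest:
    def __init__(self, op, block_addrs):
        self.op = op                          # "read" or "write"
        self.block_addrs = list(block_addrs)  # addresses of the needed Cache blocks
        self.block_copy = {}                  # temporary space used on write conflicts

# Request i needs block c and block d; request j needs block d and block e,
# mirroring the Figure 3 example.
req_i = CacheRequest("write", ["c", "d"])
req_j = CacheRequest("write", ["d", "e"])
```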
The queue module 14 then places the request data structures into the request queue in order of arrival, where they await processing.
A schematic diagram of the request data structures in the request queue is given in Figure 3. For example, the arrows of request i point to the regions block c and block d in the Cache, indicating the regions that request i needs to access.
Similarly, the pointers of the subsequent request j indicate that request j needs to access block d and block e in the Cache.
The identification module 16, according to each request data structure in the request queue, specifically according to the block addresses pointed to, identifies whether the Cache region the corresponding access request needs to access is occupied, that is, whether a conflict exists. In the embodiment of Figure 3, for example, the identification module 16 can identify that request j conflicts with request i over the access to block d.
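The conflict check the identification module performs can be reduced to an intersection over block addresses. A simplified sketch, with assumed names:

```python
def find_conflicts(occupied_blocks, needed_blocks):
    """Return the needed blocks that are already occupied by earlier requests."""
    return sorted(set(needed_blocks) & set(occupied_blocks))

# Request i holds block c and block d; request j needs block d and block e,
# so the two requests conflict over block d, as in the Figure 3 example.
conflicts = find_conflicts(["c", "d"], ["d", "e"])
```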
Consider first an access request that is a write request. After a write request arrives, in order to ensure that a Cache block is written by only one request at any given moment, the write request must apply for exclusive access to its target blocks. If a target block is identified as fully or partially occupied by another request, it is a conflicting block; the space allocation module 18 then allocates a temporary space (a block copy) to the write request, and the writing module 20 writes the corresponding data into this temporary space, that is, the data destined for the conflicting block is written into the block copy.
In one embodiment, the capacity of the temporary space is the same as the capacity of the occupied Cache block.
When the identification module 16 identifies that a Cache block a write request needs to access is unoccupied, a non-conflicting block, the writing module 20 writes the data of the write request directly into that Cache block.
When a conflicting block is released, the writing module 20 writes the data in the temporary space into that Cache block.
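Putting the write path together, the staging of a conflicting write in a block copy and the write-back on release can be sketched as follows; dictionaries stand in for the Cache, the locks, and the temporary space, and all names are assumptions:

```python
def handle_write(cache, locks, copies, block, data):
    """If the target block is occupied, stage the data in a temporary
    block copy; otherwise write it directly into the Cache block."""
    if locks.get(block):
        copies[block] = data      # conflicting block: write into the block copy
    else:
        cache[block] = data       # non-conflicting block: write directly

def on_release(cache, locks, copies, block):
    """When a conflicting block is released, flush the staged copy into it."""
    if block in copies:
        cache[block] = copies.pop(block)
    locks[block] = False

cache, locks, copies = {"d": "old"}, {"d": True}, {}
handle_write(cache, locks, copies, "d", "new")   # block d occupied: data staged
on_release(cache, locks, copies, "d")            # conflict removed: data flushed
```

Note that the write request can return as soon as its data is staged; only the flush waits for the conflict to clear.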
In one embodiment, besides the temporary space allocated for write requests that conflict on a block, the present invention can further comprise a reading module 22. When the identification module 16 identifies that a Cache block (fully or partially) needed by a read request is occupied, the reading module 22 postpones the access of the read request to the occupied Cache block: it does not perform the read operation on the conflicting block at first, and reads only the non-conflicting blocks (if any). When the read request returns (a disk access may have taken place in between), the previously conflicting block is visited again: the identification module 16 checks whether that block is still in conflict; if so, the request waits; otherwise the reading module 22 performs the read operation on the block.
The above modules are implemented in software; at system initialization they are loaded into the operating system (Linux) as a kernel module (Kernel Module).
In one embodiment of the present invention, the hardware environment is a storage server based on an Intel Jasper Forest architecture Xeon-series central processing unit (CPU). The Cache logic described in the present invention can also be implemented with programmable logic devices (PLDs) such as field-programmable gate arrays (FPGAs) or complex programmable logic devices (CPLDs). The storage space managed by the Cache is allocated in the system main memory, or it can be implemented with random-access memory (RAM) devices independent of the system main memory.
Referring now to Figure 4, which is a flow chart of the method steps for request access to a zero-copy Cache according to the present invention.
First, a request data structure is created for each request that accesses the Cache (step 102), wherein the request data structure records the addresses of the Cache regions the corresponding access request needs to access.
Then, the request data structures are placed into the request queue in order of arrival (step 104).
According to each request data structure in the request queue, it is identified whether the Cache region the corresponding access request needs to access is occupied (step 106), and, according to the identification result, the corresponding access operation is performed on the Cache region (step 108).
The present invention can perform the corresponding Cache access handling for write-operation and/or read-operation access requests; these are described in detail below in conjunction with Figures 5 and 6.
Figures 5a and 5b are flow charts of the steps of a write request accessing the zero-copy Cache according to an embodiment of the present invention; Figures 6a and 6b are flow charts of the steps of a read request accessing the zero-copy Cache according to an embodiment of the present invention.
Referring to Figure 5a, which illustrates the write request construction process.
After a write request arrives, a data structure is first created for it (step 202), and exclusive access to the target blocks is applied for (step 204). It is then judged whether a target block is occupied (step 206). If a target block is fully or partially occupied by another request, a temporary space is allocated for the write request to create a block copy, and the data destined for the conflicting block is written into the block copy (step 208), where the capacity of the temporary space is the same as that of the conflicting block.
If a block is unoccupied, a non-conflicting block, the data is written directly into the Cache (step 210); when a conflicting block is released, the content of the block copy is written into that block (step 210).
Referring to Figure 5b, which illustrates the block release process.
When releasing a block, it is first judged whether a block copy exists (step 302); if not, the block is released directly (step 310). If a copy exists, it is further judged whether the corresponding block is still occupied (step 304); if occupied, the process waits for the block to unlock (step 306); otherwise the content of the block copy is copied into the block (step 308), and the block is then released (step 310).
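The release flow of Figure 5b can be sketched as a single decision function; the step numbers follow the text above, and the names are illustrative assumptions:

```python
def release_block(block, cache, occupied, copies):
    """One pass of the block release flow (steps 302-310)."""
    if block not in copies:               # step 302: does a block copy exist?
        occupied.discard(block)
        return "released"                 # step 310: no copy, release directly
    if block in occupied:                 # step 304: block still occupied?
        return "waiting"                  # step 306: wait for the block to unlock
    cache[block] = copies.pop(block)      # step 308: copy the content into the block
    return "released"                     # step 310: release the block

cache, occupied, copies = {"d": "old"}, {"d"}, {"d": "new"}
first = release_block("d", cache, occupied, copies)   # still occupied: wait
occupied.discard("d")                                 # the block unlocks
second = release_block("d", cache, occupied, copies)  # copy flushed, block released
```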
Referring to Figure 6a, which illustrates the read request construction process.
After a read request arrives, a data structure is first created for it (step 402), and exclusive access to the target blocks is applied for (step 404). It is then judged whether a target block is occupied (step 406). If a target block is fully or partially occupied by another request, only the data in the non-conflicting blocks is read (step 410); otherwise, the whole set of blocks is read (step 408).
Referring to Figure 6b, which illustrates the read request return process.
It is first judged whether any unread conflicting blocks exist (step 502); if not, the request returns directly (step 510). If such blocks exist, it is further judged whether the corresponding block still needs to be read, that is, whether its content is already up to date (step 504). The read request may have conflicted with a previous write request; if that write request has completed, the content in the block is the newest, and the data can be read directly from the block (step 506) without reading from disk. Otherwise, the data is read from disk (step 508).
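The read-return decision of Figure 6b can be sketched as follows; read_from_disk is a stand-in for the actual disk access, and all names are assumptions:

```python
def read_from_disk(block):
    # Stand-in for the real disk read path (step 508).
    return "disk:" + block

def on_read_return(conflict_blocks, cache, up_to_date):
    """For each previously conflicting block (step 502), read from the Cache
    if a completed write left its content up to date (steps 504/506);
    otherwise fall back to reading from disk (step 508)."""
    results = {}
    for block in conflict_blocks:
        if up_to_date.get(block):
            results[block] = cache[block]
        else:
            results[block] = read_from_disk(block)
    return results

# Block d was refreshed by a completed write; block e must come from disk.
data = on_read_return(["d", "e"], {"d": "cached-d"}, {"d": True})
```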
The present invention adopts a request queue mechanism: when a write request conflicts, a temporary space is allocated for the conflicting region to hold the data to be written; after the conflict is removed, the content of the temporary space is written back to the original conflicting region. When a read request conflicts, access to the conflicting region is postponed, while access to non-conflicting regions proceeds as usual.
Considering cost, performance, implementation difficulty, and similar factors, the present invention uses a Cache software module combined with physical memory to realize the Cache function. On the premise of guaranteeing logical correctness, a conflicting request can return as early as possible without waiting for the completion of the related requests, improving the processing speed of requests and, accordingly, the access efficiency of the Cache.
Furthermore, the logic is concise and clear, easy to maintain, and places little demand on hardware performance.
Of course, the present invention may also have various other embodiments. Without departing from the spirit and essence of the present invention, those of ordinary skill in the art can make various corresponding changes and modifications according to the present invention, and all such changes and modifications shall fall within the protection scope of the claims appended to the present invention.