CN108984132A - I/O scheduling method and device based on a persistent memory file system - Google Patents

I/O scheduling method and device based on a persistent memory file system

Info

Publication number
CN108984132A
CN108984132A (application number CN201810974379.4A)
Authority
CN
China
Prior art keywords
queue
request
operations queue
similar operations
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810974379.4A
Other languages
Chinese (zh)
Inventor
苏楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201810974379.4A
Publication of CN108984132A
Legal status: Pending

Links

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
        • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
                • G06F3/0601 Interfaces specially adapted for storage systems
                    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
                        • G06F3/0614 Improving the reliability of storage systems
                        • G06F3/061 Improving I/O performance
                    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
                        • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
                    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
                        • G06F3/0671 In-line storage system
                            • G06F3/0673 Single storage device
                                • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
        • G06F9/00 Arrangements for program control, e.g. control units
            • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
                • G06F9/46 Multiprogramming arrangements
                    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
                        • G06F9/4806 Task transfer initiation or dispatching
                            • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
                                • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
                    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
                        • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
                            • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
                                • G06F9/5016 Allocation of resources to service a request, the resource being the memory

Abstract

The application discloses an I/O scheduling method and device based on a persistent memory file system. The method comprises: after at least one I/O request is received, first inserting the I/O request into a discrete-operations queue, and then judging whether the I/O request is related to an I/O subqueue in a similar-operations queue, where the discrete-operations queue and the similar-operations queue both belong to the pending queues; if the I/O request is related to an I/O subqueue in the similar-operations queue, inserting it into the similar-operations queue as a subqueue; then executing I/O scheduling according to the similar-operations queue found by lookup, and afterwards executing I/O scheduling according to the discrete-operations queue found by lookup. It can be seen that after receiving an I/O request the application no longer performs I/O scheduling immediately; instead, related I/O requests are consolidated and inserted into the similar-operations queue as subqueues, and I/O scheduling is then executed by looking up the similar-operations queue, thereby realizing intelligent load-aware I/O scheduling, reducing the number of network communications and improving performance.

Description

I/O scheduling method and device based on a persistent memory file system
Technical field
This application relates to the technical field of data storage, and in particular to an I/O scheduling method and device based on a persistent memory file system.
Background art
With the development of science and technology, the performance gap between processors and memory keeps widening; in particular, the huge gap between processors and external storage makes the I/O bottleneck problem increasingly prominent.
The current way of alleviating the I/O bottleneck is to set up a disk file cache in memory so as to reduce the number of accesses to external storage. However, with the arrival of the big-data era, the explosive growth of data places ever greater demands on memory capacity; the benefit brought by simply enlarging the cache diminishes, and frequently exchanging data between memory and external storage incurs considerable overhead. Against this background, novel non-volatile storage media have gradually come into view. Their byte addressability and non-volatility allow data to be stored persistently at memory level, with fast access speed and little risk of data loss. A novel non-volatile storage medium can therefore be organized into a persistent memory file system and used as memory, replacing the traditional separation of memory and external storage and realizing persistent data storage with a single storage medium.
Therefore, how to replace the traditional scheduling mode with a more advanced I/O scheduling method and realize intelligent I/O scheduling based on a persistent memory file system, so as to effectively relieve the I/O bottleneck, has become an urgent problem to be solved.
Summary of the invention
To solve the above problems, this application provides an I/O scheduling method and device based on a persistent memory file system. The specific technical solution is as follows:
In a first aspect, this application provides an I/O scheduling method based on a persistent memory file system, the method comprising:
receiving at least one I/O request;
after the I/O request is inserted into a discrete-operations queue, judging whether the I/O request is related to an I/O subqueue in a similar-operations queue, the discrete-operations queue and the similar-operations queue both belonging to the pending queues;
if so, inserting the I/O request into the similar-operations queue as a subqueue;
executing I/O scheduling according to the similar-operations queue found by lookup;
executing I/O scheduling according to the discrete-operations queue found by lookup.
In an optional implementation, the method further comprises:
presetting the pending queues, where the pending queues include the similar-operations queue and the discrete-operations queue;
wherein the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
In an optional implementation, executing I/O scheduling according to the similar-operations queue found by lookup comprises:
looking up the similar-operations queue among the pending queues;
judging whether the similar-operations queue contains at least one subqueue packet, a subqueue packet being composed of at least two related subqueues;
if so, executing I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue.
In an optional implementation, if the I/O request is unrelated to the I/O subqueues in the similar-operations queue, the method further comprises:
judging whether the I/O request is related to other I/O requests in the discrete-operations queue;
if so, creating an I/O subqueue packet from the I/O request and the other related I/O requests in the discrete-operations queue and inserting it into the similar-operations queue.
In an optional implementation, the memory is a novel non-volatile storage medium.
In a second aspect, this application provides an I/O scheduling device based on a persistent memory file system, the device comprising:
a receiving unit, configured to receive at least one I/O request;
a first judging unit, configured to judge, after the I/O request is inserted into a discrete-operations queue, whether the I/O request is related to an I/O subqueue in a similar-operations queue, the discrete-operations queue and the similar-operations queue both belonging to the pending queues;
an inserting unit, configured to insert the I/O request into the similar-operations queue as a subqueue if the I/O request is related to an I/O subqueue in the similar-operations queue;
a first scheduling unit, configured to execute I/O scheduling according to the similar-operations queue found by lookup;
a second scheduling unit, configured to execute I/O scheduling according to the discrete-operations queue found by lookup.
In an optional implementation, the device further comprises:
a setting unit, configured to preset the pending queues, where the pending queues include the similar-operations queue and the discrete-operations queue;
wherein the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
In an optional implementation, the first scheduling unit comprises:
a lookup subunit, configured to look up the similar-operations queue among the pending queues;
a first judging subunit, configured to judge whether the similar-operations queue contains at least one subqueue packet, a subqueue packet being composed of at least two related subqueues;
a first scheduling subunit, configured to execute I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue if the similar-operations queue contains at least one subqueue packet.
In an optional implementation, if the I/O request is unrelated to the I/O subqueues in the similar-operations queue, the device further comprises:
a second judging unit, configured to judge whether the I/O request is related to other I/O requests in the discrete-operations queue;
a creating unit, configured to, if the I/O request is related to other I/O requests in the discrete-operations queue, create an I/O subqueue packet from the I/O request and the other related I/O requests in the discrete-operations queue and insert it into the similar-operations queue.
In an optional implementation, the memory is a novel non-volatile storage medium.
In the I/O scheduling method based on a persistent memory file system provided by this application, after at least one I/O request is received, the I/O request is first inserted into the discrete-operations queue, and it is then judged whether the I/O request is related to an I/O subqueue in the similar-operations queue, where the discrete-operations queue and the similar-operations queue both belong to the pending queues. If the I/O request is related to an I/O subqueue in the similar-operations queue, it is inserted into the similar-operations queue as a subqueue; I/O scheduling can then be executed according to the similar-operations queue found by lookup, and afterwards according to the discrete-operations queue found by lookup. It can be seen that after receiving an I/O request the application no longer performs I/O scheduling immediately; instead, related I/O requests are consolidated and inserted into the similar-operations queue as subqueues, and I/O scheduling is then executed by looking up the similar-operations queue, thereby realizing intelligent load-aware I/O scheduling, reducing the number of network communications and the number of device erase/write cycles, lowering latency, and improving performance.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of an I/O scheduling method based on a persistent memory file system provided by an embodiment of this application;
Fig. 2 is a flowchart, provided by an embodiment of this application, of executing I/O scheduling according to the similar-operations queue found by lookup;
Fig. 3 is a schematic structural diagram of an I/O scheduling device based on a persistent memory file system provided by an embodiment of this application.
Detailed description of the embodiments
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
To facilitate understanding of the technical solution provided by this application, the research background of the technical solution is first briefly explained below.
As described in the background, with the development of science and technology the performance gap between processors and memory keeps widening; in particular, the huge gap between processors and external storage makes the I/O bottleneck problem increasingly prominent. If the I/O bottleneck is still relieved only by the traditional approach of setting up a disk file cache in memory, the demands that the explosive growth of data places on memory capacity cannot be met. Moreover, with the progress of industrial technology, novel non-volatile storage media have gradually come into view; their byte addressability and non-volatility allow data to be stored persistently at memory level. The high density and read/write speeds of novel non-volatile storage devices, which come ever closer to those of traditional dynamic random access memory (Dynamic Random Access Memory, DRAM), together with the fact that data are not lost on power failure, make them a first choice as a memory substitute or even as next-generation storage devices. On this basis, how to replace the traditional scheduling mode with a more advanced I/O scheduling method and realize intelligent I/O scheduling based on a persistent memory file system, so as to balance the computing overhead of the central processing unit (Central Processing Unit, CPU) against the overhead of the novel non-volatile storage medium used as memory and thereby effectively relieve the I/O bottleneck, has become an urgent problem to be solved.
On this basis, this application proposes an I/O scheduling method and device based on a persistent memory file system, used to realize intelligent I/O scheduling based on a persistent memory file system.
The I/O scheduling method based on a persistent memory file system provided by the embodiments of this application is described in detail below with reference to the drawings. Referring to Fig. 1, which shows a flowchart of an I/O scheduling method based on a persistent memory file system provided by an embodiment of this application, the present embodiment may comprise the following steps.
S101: receive at least one I/O request.
In this embodiment, novel non-volatile storage media are increasingly used as memory and organized into persistent memory file systems. To balance the computing overhead of the CPU against the overhead of the novel non-volatile storage medium used as memory, and thereby effectively relieve the I/O bottleneck, intelligent scheduling of the received I/O requests can be realized through the subsequent steps after at least one I/O request is received.
S102: after the I/O request is inserted into the discrete-operations queue, judge whether the I/O request is related to an I/O subqueue in the similar-operations queue, where the discrete-operations queue and the similar-operations queue both belong to the pending queues.
In this embodiment, in order to realize intelligent I/O scheduling, one optional implementation is to preset pending queues, where the pending queues include a similar-operations queue and a discrete-operations queue; the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
In this implementation, in order to realize intelligent I/O scheduling based on semantic analysis and achieve aggregated processing of operations, multiple pending queues need to be preset before the I/O requests are processed. The pending queues may include a similar-operations queue composed of I/O subqueues and a discrete-operations queue composed of I/O requests. Initially the similar-operations queue is empty. After at least one I/O request is received through step S101, it can be inserted into the discrete-operations queue; then, based on semantic analysis, related I/O requests are identified and aggregated, so that during multi-threaded operation similar I/O operation requests, such as operations on the same file or multiple operations under the same directory, can be aggregated into one large I/O subqueue packet and inserted into the similar-operations queue. For example, assume that the three I/O requests received are 'read the first 50 bytes of A', 'read the last 20 bytes of A' and 'modify the 80th byte of A'; semantic analysis shows that these three I/O operation requests are operations on the same file, so they can be aggregated into one large I/O subqueue packet and inserted into the similar-operations queue.
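To make the structure of the two pending queues and the aggregation into subqueue packets concrete, the following is a minimal Python sketch. All names here (IORequest, SubqueuePacket, is_related, similar_ops_queue, discrete_ops_queue) are illustrative assumptions rather than identifiers from the disclosure, and the relatedness test is simplified to "same target file", whereas the embodiment leaves the exact semantic-analysis method open.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IORequest:
    op: str        # e.g. "read" or "write"
    path: str      # target file
    offset: int
    length: int

@dataclass
class SubqueuePacket:
    # A packet groups semantically related I/O requests (its subqueues).
    subqueues: List[IORequest] = field(default_factory=list)

# The two pending queues: the similar-operations queue starts empty,
# newly received requests land in the discrete-operations queue first.
similar_ops_queue: List[SubqueuePacket] = []
discrete_ops_queue: List[IORequest] = []

def is_related(a: IORequest, b: IORequest) -> bool:
    """Placeholder for the semantic-analysis check: here two requests are
    'related' simply when they target the same file."""
    return a.path == b.path
```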
Subsequently, after a further I/O request is received through step S101, it can be judged whether this I/O request is related to the I/O subqueues in the similar-operations queue. Specifically, an existing semantic-analysis method can be used to determine whether the I/O request is related to each subqueue of each I/O subqueue packet in the similar-operations queue. If so, step S103 can be executed.
S103: if so, insert the I/O request into the similar-operations queue as a subqueue.
In this embodiment, if it is judged through step S102 that the received I/O request is related to an I/O subqueue in the similar-operations queue, the I/O request is inserted as a subqueue into the corresponding I/O subqueue packet in the similar-operations queue.
For example, continuing the above example, assume that the similar-operations queue contains an I/O subqueue packet composed of the three I/O requests 'read the first 50 bytes of A', 'read the last 20 bytes of A' and 'modify the 80th byte of A'. If an I/O request 'read the 60th to 70th bytes of A' is now received, semantic analysis in step S102 shows that this I/O request is related to the subqueues 'read the first 50 bytes of A', 'read the last 20 bytes of A' and 'modify the 80th byte of A' in the similar-operations queue, so the I/O request can be inserted as a subqueue into the above I/O subqueue packet in the similar-operations queue, which then contains four I/O subqueues (that is, the above four I/O requests).
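Continuing the sketch introduced above (same assumed names), steps S102 and S103 — checking a newly received request against the existing subqueue packets and, on a match, appending it as a further subqueue — could look roughly as follows; the concrete offsets and lengths are purely illustrative.

```python
def try_insert_into_similar(req: IORequest) -> bool:
    """Steps S102/S103 (sketch): if the request is related to any subqueue of
    an existing packet in the similar-operations queue, append it to that packet."""
    for packet in similar_ops_queue:
        if any(is_related(req, sub) for sub in packet.subqueues):
            packet.subqueues.append(req)
            return True
    return False

# Packet already holding the three earlier requests on file A from the example
# above (offsets and lengths are only illustrative placeholders).
packet_a = SubqueuePacket(subqueues=[
    IORequest("read", "A", offset=0, length=50),    # "read the first 50 bytes of A"
    IORequest("read", "A", offset=80, length=20),   # "read the last 20 bytes of A"
    IORequest("write", "A", offset=79, length=1),   # "modify the 80th byte of A"
])
similar_ops_queue.append(packet_a)

req = IORequest("read", "A", offset=59, length=11)  # "read the 60th to 70th bytes of A"
discrete_ops_queue.append(req)                      # first inserted into the discrete queue
if try_insert_into_similar(req):                    # related to packet_a, so it becomes its fourth subqueue
    discrete_ops_queue.remove(req)
```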
It should be noted that, in some possible implementations of this application, if it is judged in step S102 that the received I/O request is unrelated to the I/O subqueues in the similar-operations queue, it can further be judged whether the I/O request is related to other I/O requests in the discrete-operations queue; if related, an I/O subqueue packet can be created from the I/O request and the other related I/O requests in the discrete-operations queue and inserted into the similar-operations queue.
In this implementation, if it is judged that the received I/O request is unrelated to the I/O subqueues in the similar-operations queue, it cannot be inserted into the similar-operations queue directly as a subqueue; instead, it must be judged, again for example by semantic analysis, whether the I/O request is related to other I/O requests in the discrete-operations queue. Suppose the I/O request is semantically related to an I/O request B in the discrete-operations queue; the two can then be combined into an I/O subqueue packet, each becoming a subqueue of that packet, and the I/O subqueue packet is inserted into the similar-operations queue so that scheduling of these I/O requests can be realized in the subsequent steps.
It should be noted that if the I/O request is judged to be unrelated to the other I/O requests in the discrete-operations queue as well, it remains in the discrete-operations queue and is scheduled later through step S105.
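This fallback path — no match in the similar-operations queue, so the request is compared against the other requests still waiting in the discrete-operations queue — can be sketched the same way; as before, the function name and the use of is_related are assumptions of the sketch, not part of the disclosure.

```python
def aggregate_from_discrete(req: IORequest) -> None:
    """If the request is unrelated to every existing packet, look for related
    requests in the discrete-operations queue; when at least one is found,
    build a new subqueue packet from them and move it to the similar-operations
    queue. Otherwise the request simply stays in the discrete queue for S105."""
    related = [r for r in discrete_ops_queue if r is not req and is_related(req, r)]
    if related:
        packet = SubqueuePacket(subqueues=[req] + related)
        similar_ops_queue.append(packet)
        for r in packet.subqueues:
            discrete_ops_queue.remove(r)
```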
S104: execute I/O scheduling according to the similar-operations queue found by lookup.
In practical applications, after the received I/O requests have been inserted into the similar-operations queue as subqueues through step S103, all similar-operations queues can be looked up and the corresponding I/O scheduling can be executed when multi-threaded operation is performed.
In some possible implementations of this application, as shown in Fig. 2, step S104 may specifically comprise steps S201-S203.
Step S201: look up the similar-operations queue among the pending queues.
In this implementation, after all I/O subqueues in the similar-operations queue have been formed through step S103, all similar-operations queues among the pending queues can be looked up and the subsequent step S202 can be carried out.
Step S202: judge whether the similar-operations queue contains at least one subqueue packet, where a subqueue packet is composed of at least two related subqueues.
After all similar-operations queues among the pending queues have been found through step S201, it can further be judged whether the similar-operations queue contains at least one subqueue packet; if it does, step S203 can be executed.
Step S203: if so, execute I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue.
In this implementation, if it is judged through step S202 that the similar-operations queue contains at least one subqueue packet, I/O scheduling can be carried out on these subqueue packets packet by packet, for example by transmitting the I/O requests packet by packet. For instance, after receiving multiple I/O requests, a client can form multiple subqueue packets through the above steps and insert them into the similar-operations queue; these subqueue packets are then sent to the server packet by packet, one subqueue packet (containing multiple I/O requests) per transmission, until all subqueue packets in the similar-operations queue have been sent to the server. This realizes intelligent load-aware I/O scheduling, reduces the number of network communications and the number of device erase/write cycles, lowers latency, and improves performance. Similarly, after receiving the multiple I/O requests sent by the client, the server can likewise re-consolidate the I/O requests through the above steps, so that, with a novel non-volatile storage medium as memory, faster centralized processing can be realized; the detailed process can be seen in the above steps and is not repeated here.
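The packet-by-packet dispatch of steps S201-S203 can be sketched as follows, continuing the same assumed names; send_packet stands in for whatever transport actually carries a packet of requests to the server (one network message per packet) and is an assumption of this sketch rather than part of the disclosure.

```python
def schedule_similar(send_packet) -> None:
    """Steps S201-S203 (sketch): walk the similar-operations queue and dispatch
    each subqueue packet as one unit, so several I/O requests share a single
    round trip instead of each causing its own network communication."""
    for packet in list(similar_ops_queue):
        if len(packet.subqueues) >= 2:   # a subqueue packet holds at least two related subqueues
            send_packet(packet)          # e.g. one message carrying all requests in the packet
            similar_ops_queue.remove(packet)
```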
S105: execute I/O scheduling according to the discrete-operations queue found by lookup.
In this embodiment, if, after intelligent I/O scheduling of the similar-operations queue has been realized through the above steps, there remain I/O requests that are unrelated to the other I/O requests in the discrete-operations queue, they are retained in the discrete-operations queue, and scheduling of these I/O requests can then be executed by looking up the discrete-operations queue.
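Finally, step S105 — dispatching whatever could not be aggregated — reduces to draining the discrete-operations queue request by request; as before, send_request is an assumed stand-in for the actual dispatch path, not an identifier from the disclosure.

```python
def schedule_discrete(send_request) -> None:
    """Step S105 (sketch): requests unrelated to everything else remain in the
    discrete-operations queue and are dispatched one at a time."""
    for req in list(discrete_ops_queue):
        send_request(req)
        discrete_ops_queue.remove(req)
```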
It should be noted that in some possible implementations of this application, the system memory is a novel non-volatile storage medium.
Specifically, the novel non-volatile storage medium can be at least one of storage class memory (Storage Class Memory, SCM) and non-volatile memory (Non-Volatile Memory, NVM).
In this way, in the I/O scheduling method based on a persistent memory file system provided by this application, after at least one I/O request is received, the I/O request is first inserted into the discrete-operations queue, and it is then judged whether the I/O request is related to an I/O subqueue in the similar-operations queue, where the discrete-operations queue and the similar-operations queue both belong to the pending queues. If the I/O request is related to an I/O subqueue in the similar-operations queue, it is inserted into the similar-operations queue as a subqueue; I/O scheduling can then be executed according to the similar-operations queue found by lookup, and afterwards according to the discrete-operations queue found by lookup. It can be seen that after receiving an I/O request this application no longer performs I/O scheduling immediately; instead, related I/O requests are consolidated and inserted into the similar-operations queue as subqueues, and I/O scheduling is then executed by looking up the similar-operations queue, thereby realizing intelligent load-aware I/O scheduling, reducing the number of network communications and the number of device erase/write cycles, lowering latency, and improving performance.
Based on the above I/O scheduling method based on a persistent memory file system, this application further provides an I/O scheduling device based on a persistent memory file system, the device comprising:
a receiving unit 301, configured to receive at least one I/O request;
a first judging unit 302, configured to judge, after the I/O request is inserted into a discrete-operations queue, whether the I/O request is related to an I/O subqueue in a similar-operations queue, the discrete-operations queue and the similar-operations queue both belonging to the pending queues;
an inserting unit 303, configured to insert the I/O request into the similar-operations queue as a subqueue if the I/O request is related to an I/O subqueue in the similar-operations queue;
a first scheduling unit 304, configured to execute I/O scheduling according to the similar-operations queue found by lookup;
a second scheduling unit 305, configured to execute I/O scheduling according to the discrete-operations queue found by lookup.
Optionally, the device further comprises:
a setting unit, configured to preset the pending queues, where the pending queues include the similar-operations queue and the discrete-operations queue;
wherein the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
Optionally, the first scheduling unit 304 comprises:
a lookup subunit, configured to look up the similar-operations queue among the pending queues;
a first judging subunit, configured to judge whether the similar-operations queue contains at least one subqueue packet, a subqueue packet being composed of at least two related subqueues;
a first scheduling subunit, configured to execute I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue if the similar-operations queue contains at least one subqueue packet.
Optionally, if the I/O request is unrelated to the I/O subqueues in the similar-operations queue, the device further comprises:
a second judging unit, configured to judge whether the I/O request is related to other I/O requests in the discrete-operations queue;
a creating unit, configured to, if the I/O request is related to other I/O requests in the discrete-operations queue, create an I/O subqueue packet from the I/O request and the other related I/O requests in the discrete-operations queue and insert it into the similar-operations queue.
Optionally, the memory is a novel non-volatile storage medium.
In this way, in the I/O scheduling device based on a persistent memory file system provided by this application, after at least one I/O request is received, the I/O request is first inserted into the discrete-operations queue, and it is then judged whether the I/O request is related to an I/O subqueue in the similar-operations queue, where the discrete-operations queue and the similar-operations queue both belong to the pending queues. If the I/O request is related to an I/O subqueue in the similar-operations queue, it is inserted into the similar-operations queue as a subqueue; I/O scheduling can then be executed according to the similar-operations queue found by lookup, and afterwards according to the discrete-operations queue found by lookup. It can be seen that after receiving an I/O request this application no longer performs I/O scheduling immediately; instead, related I/O requests are consolidated and inserted into the similar-operations queue as subqueues, and I/O scheduling is then executed by looking up the similar-operations queue, thereby realizing intelligent load-aware I/O scheduling, reducing the number of network communications and the number of device erase/write cycles, lowering latency, and improving performance.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the systems or devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief, and the relevant parts may refer to the description of the method.
It should also be noted that, herein, relational terms such as 'first' and 'second' are merely used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms 'include', 'comprise' or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase 'including a ...' does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The steps of the method or algorithm described in conjunction with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the art.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An I/O scheduling method based on a persistent memory file system, characterized in that the method comprises:
receiving at least one I/O request;
after the I/O request is inserted into a discrete-operations queue, judging whether the I/O request is related to an I/O subqueue in a similar-operations queue, the discrete-operations queue and the similar-operations queue both belonging to the pending queues;
if so, inserting the I/O request into the similar-operations queue as a subqueue;
executing I/O scheduling according to the similar-operations queue found by lookup;
executing I/O scheduling according to the discrete-operations queue found by lookup.
2. The I/O scheduling method based on a persistent memory file system according to claim 1, characterized in that the method further comprises:
presetting the pending queues, where the pending queues include the similar-operations queue and the discrete-operations queue;
wherein the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
3. The I/O scheduling method based on a persistent memory file system according to claim 1 or 2, characterized in that executing I/O scheduling according to the similar-operations queue found by lookup comprises:
looking up the similar-operations queue among the pending queues;
judging whether the similar-operations queue contains at least one subqueue packet, a subqueue packet being composed of at least two related subqueues;
if so, executing I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue.
4. The I/O scheduling method based on a persistent memory file system according to claim 1, characterized in that, if the I/O request is unrelated to the I/O subqueues in the similar-operations queue, the method further comprises:
judging whether the I/O request is related to other I/O requests in the discrete-operations queue;
if so, creating an I/O subqueue packet from the I/O request and the other related I/O requests in the discrete-operations queue and inserting it into the similar-operations queue.
5. The I/O scheduling method based on a persistent memory file system according to claim 1, characterized in that the memory is a novel non-volatile storage medium.
6. An I/O scheduling device based on a persistent memory file system, characterized in that the device comprises:
a receiving unit, configured to receive at least one I/O request;
a first judging unit, configured to judge, after the I/O request is inserted into a discrete-operations queue, whether the I/O request is related to an I/O subqueue in a similar-operations queue, the discrete-operations queue and the similar-operations queue both belonging to the pending queues;
an inserting unit, configured to insert the I/O request into the similar-operations queue as a subqueue if the I/O request is related to an I/O subqueue in the similar-operations queue;
a first scheduling unit, configured to execute I/O scheduling according to the similar-operations queue found by lookup;
a second scheduling unit, configured to execute I/O scheduling according to the discrete-operations queue found by lookup.
7. The device according to claim 6, characterized in that the device further comprises:
a setting unit, configured to preset the pending queues, where the pending queues include the similar-operations queue and the discrete-operations queue;
wherein the similar-operations queue is composed of I/O subqueues, and the discrete-operations queue is composed of I/O requests.
8. The device according to claim 6 or 7, characterized in that the first scheduling unit comprises:
a lookup subunit, configured to look up the similar-operations queue among the pending queues;
a first judging subunit, configured to judge whether the similar-operations queue contains at least one subqueue packet, a subqueue packet being composed of at least two related subqueues;
a first scheduling subunit, configured to execute I/O scheduling packet by packet according to the subqueue packets contained in the similar-operations queue if the similar-operations queue contains at least one subqueue packet.
9. The device according to claim 6, characterized in that, if the I/O request is unrelated to the I/O subqueues in the similar-operations queue, the device further comprises:
a second judging unit, configured to judge whether the I/O request is related to other I/O requests in the discrete-operations queue;
a creating unit, configured to, if the I/O request is related to other I/O requests in the discrete-operations queue, create an I/O subqueue packet from the I/O request and the other related I/O requests in the discrete-operations queue and insert it into the similar-operations queue.
10. The device according to claim 6, characterized in that the memory is a novel non-volatile storage medium.
CN201810974379.4A 2018-08-24 2018-08-24 I/O scheduling method and device based on a persistent memory file system Pending CN108984132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810974379.4A CN108984132A (en) I/O scheduling method and device based on a persistent memory file system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810974379.4A CN108984132A (en) I/O scheduling method and device based on a persistent memory file system

Publications (1)

Publication Number Publication Date
CN108984132A true CN108984132A (en) 2018-12-11

Family

ID=64546716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810974379.4A Pending CN108984132A (en) I/O scheduling method and device based on a persistent memory file system

Country Status (1)

Country Link
CN (1) CN108984132A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150081839A1 (en) * 2008-06-18 2015-03-19 Amazon Technologies, Inc. Fast sequential message store
CN104346285A (en) * 2013-08-06 2015-02-11 华为技术有限公司 Memory access processing method, device and system
CN104424105A (en) * 2013-08-26 2015-03-18 华为技术有限公司 Memory data reading and writing processing method and device
EP2955660A1 (en) * 2014-06-12 2015-12-16 Nagravision S.A. System and method for secure loading data in a cache memory
CN106909522A (en) * 2015-12-22 2017-06-30 中国电信股份有限公司 The delay control method of GPU write request data, device and cloud computing system
US20180060235A1 (en) * 2016-08-30 2018-03-01 Intel Corporation Non-volatile memory compression devices and associated methods and systems
CN107943413A (en) * 2017-10-12 2018-04-20 记忆科技(深圳)有限公司 A kind of method of solid state hard disc lifting reading performance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
鄢磊: "Research on Optimization of Persistent Memory File Systems" (持久性内存文件系统优化研究), China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989232A (en) * 2019-12-17 2021-06-18 北京搜狗科技发展有限公司 Search result ordering method and device

Similar Documents

Publication Publication Date Title
US7158964B2 (en) Queue management
US8762534B1 (en) Server load balancing using a fair weighted hashing technique
US8276154B2 (en) Hash partitioning streamed data
US5706461A (en) Method and apparatus for implementing virtual memory having multiple selected page sizes
US9667754B2 (en) Data structure and associated management routines for TCP control block (TCB) table in network stacks
US20120079213A1 (en) Managing concurrent accesses to a cache
US20060062144A1 (en) Tokens in token buckets maintained among primary and secondary storages
Pandit et al. Resource allocation in cloud using simulated annealing
CN106570113B (en) Mass vector slice data cloud storage method and system
WO2010027609A2 (en) Load balancing for services
US7111289B2 (en) Method for implementing dual link list structure to enable fast link-list pointer updates
Gill et al. Dynamic cost-aware re-replication and rebalancing strategy in cloud system
CN101577705A (en) Multi-core paralleled network traffic load balancing method and system
CN105577806B (en) A kind of distributed caching method and system
CN108874688A (en) A kind of message data caching method and device
US11385900B2 (en) Accessing queue data
CN104899161A (en) Cache method based on continuous data protection of cloud storage environment
CN104965793B (en) A kind of cloud storage data node device
CN108984132A (en) I/O scheduling method and device based on a persistent memory file system
US8156289B2 (en) Hardware support for work queue management
CN102970349B (en) A kind of memory load equalization methods of DHT network
CN111061652B (en) Nonvolatile memory management method and system based on MPI-IO middleware
CN107451070A (en) The processing method and server of a kind of data
Hines et al. Distributed anemone: Transparent low-latency access to remote memory
CN106775450B (en) A kind of data distribution method in mixing storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181211)