CN108984121A - Method, apparatus and computer equipment for guaranteeing request priority - Google Patents

Method, apparatus and computer equipment for guaranteeing request priority

Info

Publication number
CN108984121A
Authority
CN
China
Prior art keywords
request
chained list
priority
node
length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810716970.XA
Other languages
Chinese (zh)
Other versions
CN108984121B (en)
Inventor
吴娴
付东松
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Union Memory Information System Co Ltd
Original Assignee
Shenzhen Union Memory Information System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Union Memory Information System Co Ltd filed Critical Shenzhen Union Memory Information System Co Ltd
Priority to CN201810716970.XA priority Critical patent/CN108984121B/en
Publication of CN108984121A publication Critical patent/CN108984121A/en
Application granted granted Critical
Publication of CN108984121B publication Critical patent/CN108984121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a method, apparatus and computer equipment for guaranteeing request priority. The method includes: setting a node request linked-list length; initializing an I/O request linked list and an I/O request count; acquiring a request; determining whether the request is an I/O request; if so, updating the I/O request count and processing the I/O request linked list according to the relationship between the count and the node request linked-list length; if not, performing priority processing on the I/O requests and the internal request; and returning to the step of initializing the I/O request linked list and the I/O request count. By setting the node request linked-list length, the present invention bounds the maximum number of I/O requests in an I/O request chain, so that even at CPU1's lowest processing efficiency an internal request can still be taken out of the high-priority FIFO and processed before it times out, thereby avoiding internal-request processing timeouts.

Description

Method, apparatus and computer equipment for guaranteeing request priority
Technical field
The present invention relates to solid state drives, and more specifically to a method, apparatus and computer device for guaranteeing request priority.
Background art
Mainstream solid state drive (SSD) controllers adopt a multi-core (CPU) design, and communication between cores can use a first-in-first-out (FIFO) mechanism: one core is responsible for generating requests and putting them into a FIFO, and another core takes requests out of the FIFO and processes them, as shown in Fig. 1. In practice, requests have different priorities. To ensure that high-priority requests are processed first in time, the prior art uses multiple FIFOs, as shown in Fig. 2: CPU0 places high-priority requests into a high-priority FIFO and low-priority requests into a low-priority FIFO, and CPU1 always fetches requests from the high-priority FIFO first, thereby guaranteeing that high-priority requests are executed preferentially.
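As an illustration of this dual-FIFO scheme, the following is a minimal C sketch of the producer/consumer pattern described above; the structure and function names (fifo_t, fifo_push, fifo_pop, cpu1_fetch_next_request) are assumptions for illustration and are not taken from the patent, and real firmware would add the locking or inter-core signalling appropriate to its hardware.

    #include <stdbool.h>
    #include <stddef.h>

    #define FIFO_DEPTH 64

    typedef struct {
        void    *slot[FIFO_DEPTH];   /* each slot carries one request (or request chain) */
        size_t   head, tail;
    } fifo_t;

    static bool fifo_push(fifo_t *f, void *req)
    {
        size_t next = (f->tail + 1) % FIFO_DEPTH;
        if (next == f->head)
            return false;            /* FIFO full */
        f->slot[f->tail] = req;
        f->tail = next;
        return true;
    }

    static void *fifo_pop(fifo_t *f)
    {
        if (f->head == f->tail)
            return NULL;             /* FIFO empty */
        void *req = f->slot[f->head];
        f->head = (f->head + 1) % FIFO_DEPTH;
        return req;
    }

    static fifo_t high_prio_fifo, low_prio_fifo;

    /* CPU1 side: always try the high-priority FIFO before the low-priority one. */
    static void *cpu1_fetch_next_request(void)
    {
        void *req = fifo_pop(&high_prio_fifo);
        if (req == NULL)
            req = fifo_pop(&low_prio_fifo);
        return req;
    }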
In existing firmware designs, besides the I/O commands issued by the host, the firmware must also handle the SSD's internal commands. The design limits the number of internal requests processed concurrently to at most 8, and they are processed mixed together with I/O commands. Internal requests have higher priority than I/O commands, and the SSD requires them to complete within a short period, say 20 milliseconds, whereas an I/O command generally takes longer, say 1 second.
In existing firmware designs, the FTL (flash translation layer) uses a 4 KB mapping unit, so CPU0 cuts each command into several node requests in units of 4 KB and issues these node requests for CPU1 to process, as shown in Fig. 3. To improve the communication efficiency between the CPUs, the prior art puts an entire chain of node requests into the FIFO at once whenever possible. Suppose CPU0 handles three commands: the first is an I/O command cut into 4 node requests, the second is an internal request cut into 1 node request, and the third is an I/O command cut into 3 node requests. The interaction between CPU0 and CPU1 is then as follows. CPU0 puts the node linked list of the first I/O command into the low-priority FIFO, as shown in Fig. 4. Since CPU0 and CPU1 work asynchronously in parallel, CPU1, when idle, keeps trying to fetch a request from the high-priority FIFO first and, if it is empty, tries the low-priority FIFO; once the low-priority FIFO has been filled with a request, CPU1 takes the first chain of node requests out of the low-priority FIFO and processes it. CPU0 works on: because the internal command has higher priority, its node is placed into the high-priority FIFO, and the node requests of the second I/O command are put into the low-priority FIFO, as shown in Fig. 5. Only after CPU1 has finished the first I/O command's chain of nodes does it go back to preferentially fetching requests from the high-priority FIFO before processing the requests in the low-priority FIFO.
The prior art thus lets CPU1 respond preferentially to internal requests by providing a high-priority FIFO. However, the SSD's deadline for internal requests is very strict, and CPU1 can only take an internal request out of the high-priority FIFO after it has finished the node request linked list it is currently processing. If that linked list is very long and takes more than 20 milliseconds to process, the internal request will still time out.
Therefore, it is necessary to design a method that avoids internal-request processing timeouts.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and to provide a method, apparatus and computer device for guaranteeing request priority.
To achieve the above object, the invention adopts the following technical scheme: a method for guaranteeing request priority, comprising:
setting a node request linked-list length;
initializing an I/O request linked list and an I/O request count;
acquiring a request;
determining whether the request is an I/O request;
if so, updating the I/O request count, and processing the I/O request linked list according to the relationship between the count and the node request linked-list length;
if not, performing priority processing on the I/O requests and the internal request;
returning to the step of initializing the I/O request linked list and the I/O request count.
In a further technical solution, the step of setting the node request linked-list length comprises the following specific steps:
obtaining the worst-case processing speed of the chip;
obtaining the timeout threshold of internal requests;
obtaining the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
setting the length of the node request linked list carried in each slot of the FIFO queue to the node request linked-list length.
In a further technical solution, the step of updating the I/O request count and processing the I/O request linked list according to the relationship between the count and the node request linked-list length comprises the following specific steps:
adding one to the I/O request count to form a new count;
determining whether the new count equals the node request linked-list length;
if not, returning to the step of acquiring a request;
if so, putting the I/O request linked list into the low-priority FIFO queue, and returning to the step of initializing the I/O request linked list and the I/O request count.
In a further technical solution, the step of performing priority processing on the I/O requests and the internal request comprises the following specific steps:
putting the I/O request linked list into the low-priority FIFO queue;
putting the internal request into the high-priority FIFO queue.
In a further technical solution, before the step of acquiring a request, the method further comprises:
cutting node requests to form a node pool.
The present invention also provides an apparatus for guaranteeing request priority, comprising:
a length setting unit, configured to set a node request linked-list length;
an initialization unit, configured to initialize an I/O request linked list and an I/O request count;
a request unit, configured to acquire a request;
a request judging unit, configured to determine whether the request is an I/O request;
an I/O request processing unit, configured to, if so, update the I/O request count and process the I/O request linked list according to the relationship between the count and the node request linked-list length;
a priority processing unit, configured to, if not, perform priority processing on the I/O requests and the internal request.
In a further technical solution, the length setting unit comprises:
a speed acquiring module, configured to obtain the worst-case processing speed of the chip;
a threshold acquiring module, configured to obtain the timeout threshold of internal requests;
a length acquiring module, configured to obtain the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
a setting module, configured to set the length of the node request linked list carried in each slot of the FIFO queue to the node request linked-list length.
In a further technical solution, the I/O request processing unit comprises:
a count processing module, configured to add one to the I/O request count to form a new count;
a count judging module, configured to determine whether the new count equals the node request linked-list length;
a list processing module, configured to, if so, put the I/O request linked list into the low-priority FIFO queue.
In a further technical solution, the priority processing unit comprises:
an I/O request placement module, configured to put the I/O request linked list into the low-priority FIFO queue;
an internal request placement module, configured to put the internal request into the high-priority FIFO queue.
The present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above method for guaranteeing request priority.
Compared with the prior art, the invention has the following advantages: in the method for guaranteeing request priority of the invention, a node request linked-list length is set to bound the maximum number of I/O requests in an I/O request chain. The node request linked-list length is set according to the worst-case processing speed and the timeout threshold of internal requests; when the number of I/O requests in the I/O request linked list equals the node request linked-list length, the I/O request linked list is put into the low-priority FIFO queue. In this way, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out, thereby avoiding internal-request processing timeouts.
The invention will be further described below with reference to the drawings and specific embodiments.
Description of the drawings
Fig. 1 is a schematic flow diagram of the FIFO communication mechanism between CPUs in the prior art;
Fig. 2 is a schematic block diagram of the high- and low-priority FIFOs in the prior art;
Fig. 3 is a schematic flow diagram of cutting a command by 4 KB in the prior art;
Fig. 4 is a schematic flow diagram of CPU0 submitting a request linked list to CPU1 in the prior art;
Fig. 5 is a schematic flow diagram of CPU1 taking out a request for processing while CPU0 continues to submit requests in the prior art;
Fig. 6 is a schematic flow diagram of the method for guaranteeing request priority provided by a specific embodiment of the invention;
Fig. 7 is a schematic flow diagram of sub-steps of the method for guaranteeing request priority provided by a specific embodiment of the invention;
Fig. 8 is a schematic flow diagram of sub-steps of the method for guaranteeing request priority provided by a specific embodiment of the invention;
Fig. 9 is a schematic flow diagram of sub-steps of the method for guaranteeing request priority provided by a specific embodiment of the invention;
Fig. 10 is a schematic flow diagram of cutting node requests provided by a specific embodiment of the invention;
Fig. 11 is a schematic flow diagram of limiting the node linked-list length in the FIFO provided by a specific embodiment of the invention;
Fig. 12 is a schematic block diagram of the apparatus for guaranteeing request priority provided by a specific embodiment of the invention;
Fig. 13 is a schematic block diagram of the length setting unit provided by a specific embodiment of the invention;
Fig. 14 is a schematic block diagram of the I/O request processing unit provided by a specific embodiment of the invention;
Fig. 15 is a schematic block diagram of the priority processing unit provided by a specific embodiment of the invention;
Fig. 16 is a schematic block diagram of a computer device provided by a specific embodiment of the invention.
Specific embodiment
In order to better understand the technical content of the invention, the technical solution of the invention is further described and illustrated below in combination with specific embodiments, but the invention is not limited thereto.
It should be understood that the terms "includes" and "comprising", when used in this specification and the appended claims, indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As shown in the specific embodiments of Figs. 6 to 16, the method, apparatus and computer device for guaranteeing request priority provided by this embodiment can be applied to solid-state storage media such as solid state drives, so as to avoid internal-request processing timeouts.
Referring to Fig. 6, Fig. 6 is a schematic flow diagram of the method for guaranteeing request priority provided by this embodiment. As shown in Fig. 6, the method for guaranteeing request priority comprises steps S101 to S106.
S101: setting a node request linked-list length.
The length of the node request linked list submitted to the low-priority FIFO each time, i.e. the node request linked-list length carried in each slot of the FIFO, is limited, so that the node request linked list that CPU1 takes out of the low-priority FIFO meets the requirement and internal-request processing timeouts are avoided.
In an embodiment, as shown in Fig. 7, the above S101 may include steps S1011 to S1014.
S1011: obtaining the worst-case processing speed of the chip;
S1012: obtaining the timeout threshold of internal requests;
S1013: obtaining the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
S1014: setting the length of the node request linked list carried in each slot of the FIFO queue to the node request linked-list length.
Referring to Fig. 11, the linked-list length L is determined from CPU1's worst-case processing speed and the timeout threshold of internal requests. Assume the timeout threshold of an internal request is 20 milliseconds (TO_TH), each internal request is cut into 1 node request, and the firmware design limits concurrent internal requests to at most 8. CPU1 may trigger internal tasks (such as garbage collection, wear leveling, etc.), in which case CPU1's efficiency in processing host requests is lowest; assume that in this worst case CPU1 needs 1 millisecond (T_CPU1) to process one host node request. Then (L + 8) * T_CPU1 <= TO_TH must be satisfied; substituting the figures gives L <= 12, i.e. the linked-list length is at most 12, which effectively ensures that high-priority requests are processed by the SSD within the preset time.
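To make the bound concrete, the following is a minimal C sketch of the length calculation described in the previous paragraph; the constant names follow the symbols above and are illustrative assumptions rather than the patent's own code.

    /* Sketch of deriving the node request linked-list length L from
     * (L + MAX_INTERNAL) * T_CPU1 <= TO_TH, using the example figures above. */
    #define TO_TH_MS        20   /* timeout threshold of an internal request, ms      */
    #define T_CPU1_MS        1   /* worst-case time for CPU1 to process one node, ms  */
    #define MAX_INTERNAL     8   /* at most 8 internal requests in flight             */

    static unsigned int node_list_length(void)
    {
        /* With the example values this evaluates to 20 / 1 - 8 = 12. */
        return TO_TH_MS / T_CPU1_MS - MAX_INTERNAL;
    }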
S102: initializing an I/O request linked list and an I/O request count.
Specifically, in this embodiment, CPU0 may initialize an I/O request linked list and an internal counter C, where C records the number of I/O requests hung on the linked list, i.e. the initialized I/O request count. This guarantees that every time I/O requests are processed, the list is a fresh I/O request linked list and its count starts from zero.
S103: acquiring a request.
S104: determining whether the request is an I/O request.
S105: if so, updating the I/O request count, and processing the I/O request linked list according to the relationship between the count and the node request linked-list length.
When processing I/O requests, the number of I/O requests can be limited according to the node request linked-list length, ensuring that the I/O requests stored in the low-priority FIFO queue do not exceed that length. Thus, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out.
In an embodiment, referring to Fig. 8, the above step S105 may include S1051 to S1053.
S1051: adding one to the I/O request count to form a new count;
S1052: determining whether the new count equals the node request linked-list length;
if not, returning to step S103;
S1053: if so, putting the I/O request linked list into the low-priority FIFO queue, and returning to step S102.
Concretely, an I/O request is processed and counted as follows: the acquired I/O request is first hung on the I/O request linked list and the count on the list is updated as C = C + 1; it is then checked whether C equals L. If C = L, the linked list is put into the low-priority FIFO; if C < L, the next request is acquired and, if it is an I/O request, hung on the I/O request linked list.
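A minimal C sketch of this counting-and-flush logic on the CPU0 side is given below, reusing the illustrative fifo_t helpers sketched in the background section above; the io_chain_t and node_req_t types and the function name are assumptions for illustration, with L obtained as in the calculation above.

    /* Sketch of CPU0 handling an I/O request: append it to the current chain
     * and flush the chain to the low-priority FIFO once it holds L requests. */
    typedef struct node_req {
        struct node_req *next;
        /* ... payload fields describing the 4 KB node request ... */
    } node_req_t;

    typedef struct {
        node_req_t  *head, *tail;    /* singly linked chain of node requests */
        unsigned int count;          /* C: number of requests on the chain   */
    } io_chain_t;

    static void cpu0_handle_io_request(io_chain_t *chain, node_req_t *req,
                                       fifo_t *low_prio, unsigned int L)
    {
        req->next = NULL;                    /* hang the request on the chain */
        if (chain->tail)
            chain->tail->next = req;
        else
            chain->head = req;
        chain->tail = req;
        chain->count++;                      /* C = C + 1                     */

        if (chain->count == L) {             /* chain is full: submit it      */
            fifo_push(low_prio, chain->head);
            chain->head = chain->tail = NULL;
            chain->count = 0;                /* re-initialize chain and count */
        }
        /* if C < L, simply return and wait for the next request */
    }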
S106: if not, performing priority processing on the I/O requests and the internal request.
Then the flow returns to step S102.
When the acquired request is an internal request, the I/O requests acquired so far (already hung on the I/O request linked list) are first submitted for processing, and then the internal request is submitted, so that the two kinds of request are handled separately by priority.
In an embodiment, referring to Fig. 9, the above S106 may include S1061 to S1062.
S1061: putting the I/O request linked list into the low-priority FIFO queue;
S1062: putting the internal request into the high-priority FIFO queue.
The I/O request linked list is put directly into the low-priority FIFO queue, and the internal request is then put into the high-priority FIFO queue; when processing requests, the internal request in the high-priority FIFO queue is handled first. Since CPU0 reasonably limits the length of the request linked list put into the FIFO, the prior-art problem of CPU1 handling internal requests only after they have timed out is solved.
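Continuing the sketch above, the internal-request branch might look as follows; again the names are illustrative assumptions that reuse the io_chain_t, node_req_t and fifo_t types from the earlier sketches, not the patent's own code.

    /* Sketch of CPU0 handling an internal request: flush whatever I/O chain
     * has accumulated (regardless of its count) to the low-priority FIFO,
     * then put the internal request into the high-priority FIFO. */
    static void cpu0_handle_internal_request(io_chain_t *chain, node_req_t *req,
                                             fifo_t *low_prio, fifo_t *high_prio)
    {
        if (chain->count > 0) {
            fifo_push(low_prio, chain->head);   /* submit the partial I/O chain */
            chain->head = chain->tail = NULL;
            chain->count = 0;
        }
        req->next = NULL;
        fifo_push(high_prio, req);              /* internal request goes to the
                                                   high-priority FIFO           */
    }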
In an embodiment, before the step of acquiring a request, the method further includes:
cutting node requests to form a node pool.
Referring to Fig. 10, CPU0 cuts out many node requests to form a node pool, with some internal requests interspersed among them. When acquiring a request, the request is fetched from the node pool, and each acquired I/O request is then gathered onto an I/O request linked list; that is, the maximum number of I/O requests on each I/O request linked list is limited according to the node request linked-list length. In this way, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out. The inter-CPU communication scheme in the SSD firmware design is thereby improved, and high-priority requests are processed by the SSD within the preset time.
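As context for the node pool, a minimal C sketch of cutting a command into 4 KB node requests (matching the 4 KB FTL mapping unit mentioned in the background) is shown below; the command and node structures are illustrative assumptions, separate from the earlier sketches, and error handling is omitted.

    /* Sketch of cutting a host or internal command into 4 KB node requests. */
    #include <stdint.h>
    #include <stdlib.h>

    #define MAP_UNIT 4096u               /* FTL mapping unit: 4 KB */

    typedef struct {
        uint64_t start_offset;           /* start offset of the command, bytes */
        uint32_t length_bytes;           /* command length, bytes              */
        int      is_internal;            /* internal command vs. host I/O      */
    } command_t;

    typedef struct cut_node {
        uint64_t offset;                 /* offset covered by this node request */
        uint32_t length;                 /* at most MAP_UNIT bytes              */
        int      is_internal;
        struct cut_node *next;
    } cut_node_t;

    /* Returns a singly linked list of node requests covering the command. */
    static cut_node_t *cut_command(const command_t *cmd)
    {
        cut_node_t *head = NULL, **link = &head;
        uint32_t done = 0;

        while (done < cmd->length_bytes) {
            uint32_t chunk = cmd->length_bytes - done;
            if (chunk > MAP_UNIT)
                chunk = MAP_UNIT;        /* each node covers at most one 4 KB unit */

            cut_node_t *n = calloc(1, sizeof(*n));
            n->offset      = cmd->start_offset + done;
            n->length      = chunk;
            n->is_internal = cmd->is_internal;

            *link = n;                   /* append to the list */
            link  = &n->next;
            done += chunk;
        }
        return head;
    }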
For example, suppose the node request linked-list length is 12 I/O requests. Each time an I/O request is acquired, it is hung on the current I/O request linked list and the counter associated with that list is incremented by one; the count value is then checked. If all acquired requests are I/O requests (i.e. no internal request or other high-priority request appears in between), then when the count reaches 12 the I/O request linked list is placed into the low-priority FIFO queue, and the I/O request linked list and the counter value are re-initialized. If the acquired request is not an I/O request, i.e. it is an internal request, then regardless of the current counter value the I/O request linked list is placed into the low-priority FIFO queue and the internal request is placed into the high-priority FIFO queue, so that the internal request is processed in time.
In the above method for guaranteeing request priority, a node request linked-list length is set to bound the maximum number of I/O requests in an I/O request chain. The node request linked-list length is set according to the worst-case processing speed and the timeout threshold of internal requests; when the number of I/O requests in the I/O request linked list equals the node request linked-list length, the list is put into the low-priority FIFO queue. Thus, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out, avoiding internal-request processing timeouts.
Referring to Fig. 12, Fig. 12 is a schematic block diagram of the apparatus for guaranteeing request priority provided by a specific embodiment. As shown in Fig. 12, the apparatus for guaranteeing request priority comprises:
a length setting unit 1, configured to set a node request linked-list length;
an initialization unit 2, configured to initialize an I/O request linked list and an I/O request count;
a request unit 3, configured to acquire a request;
a request judging unit 4, configured to determine whether the request is an I/O request;
an I/O request processing unit 5, configured to, if so, update the I/O request count and process the I/O request linked list according to the relationship between the count and the node request linked-list length;
a priority processing unit 6, configured to, if not, perform priority processing on the I/O requests and the internal request.
Specifically, as shown in Fig. 13, the length setting unit 1 comprises:
a speed acquiring module 11, configured to obtain the worst-case processing speed of the chip;
a threshold acquiring module 12, configured to obtain the timeout threshold of internal requests;
a length acquiring module 13, configured to obtain the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
a setting module 14, configured to set the length of the node request linked list carried in each slot of the FIFO queue to the node request linked-list length.
In an embodiment, as shown in Fig. 14, the I/O request processing unit 5 comprises:
a count processing module 51, configured to add one to the I/O request count to form a new count;
a count judging module 52, configured to determine whether the new count equals the node request linked-list length;
a list processing module 53, configured to, if so, put the I/O request linked list into the low-priority FIFO queue.
In addition, as shown in Fig. 15, the priority processing unit 6 comprises:
an I/O request placement module 61, configured to put the I/O request linked list into the low-priority FIFO queue;
an internal request placement module 62, configured to put the internal request into the high-priority FIFO queue.
In an embodiment, the apparatus further comprises a request cutting unit, configured to cut node requests to form a node pool.
In the above apparatus for guaranteeing request priority, a node request linked-list length is set to bound the maximum number of I/O requests in an I/O request chain. The node request linked-list length is set according to the worst-case processing speed and the timeout threshold of internal requests; when the number of I/O requests in the I/O request linked list equals the node request linked-list length, the list is put into the low-priority FIFO queue. Thus, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out, avoiding internal-request processing timeouts.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus for guaranteeing request priority and of its units described above may refer to the corresponding process in the foregoing method embodiment, and is not repeated here.
The above apparatus for guaranteeing request priority may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in Fig. 16.
Referring to Fig. 16, Fig. 16 is a schematic block diagram of a computer device provided by an embodiment of the application. The computer device 700 may be a terminal or a server.
Referring to Fig. 16, the computer device 700 includes a processor 720, a memory and a network interface 750 connected by a system bus 710, where the memory may include a non-volatile storage medium 730 and an internal memory 740.
The non-volatile storage medium 730 can store an operating system 731 and a computer program 732. When the computer program 732 is executed, the processor 720 can be caused to perform any one of the methods for guaranteeing request priority.
The processor 720 is configured to provide computing and control capabilities and supports the operation of the entire computer device 700.
The internal memory 740 provides an environment for running the computer program 732 in the non-volatile storage medium 730; when the computer program 732 is executed by the processor 720, the processor 720 can be caused to perform any one of the methods for guaranteeing request priority.
The network interface 750 is used for network communication, such as sending assigned tasks. Those skilled in the art can understand that the structure shown in Fig. 16 is only a block diagram of part of the structure relevant to the solution of the application and does not constitute a limitation on the computer device 700 to which the solution of the application is applied; a specific computer device 700 may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components. The processor 720 is configured to run the program code stored in the memory to implement the following steps:
setting a node request linked-list length;
initializing an I/O request linked list and an I/O request count;
acquiring a request;
determining whether the request is an I/O request;
if so, updating the I/O request count, and processing the I/O request linked list according to the relationship between the count and the node request linked-list length;
if not, performing priority processing on the I/O requests and the internal request;
returning to the step of initializing the I/O request linked list and the I/O request count.
In an embodiment, when running the program code stored in the memory to implement the step of setting the node request linked-list length, the processor 720 specifically implements the following steps:
obtaining the worst-case processing speed of the chip;
obtaining the timeout threshold of internal requests;
obtaining the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
setting the length of the node request linked list carried in each slot of the FIFO queue to the node request linked-list length.
In an embodiment, when running the program code stored in the memory to implement the step of updating the I/O request count and processing the I/O request linked list according to the relationship between the count and the node request linked-list length, the processor 720 specifically implements the following steps:
adding one to the I/O request count to form a new count;
determining whether the new count equals the node request linked-list length;
if not, returning to the step of acquiring a request;
if so, putting the I/O request linked list into the low-priority FIFO queue, and returning to the step of initializing the I/O request linked list and the I/O request count.
In an embodiment, when running the program code stored in the memory to implement the step of performing priority processing on the I/O requests and the internal request, the processor 720 specifically implements the following steps:
putting the I/O request linked list into the low-priority FIFO queue;
putting the internal request into the high-priority FIFO queue.
In an embodiment, before running the program code stored in the memory to implement the step of acquiring a request, the processor 720 also implements the following step:
cutting node requests to form a node pool.
In the above computer device, a node request linked-list length is set to bound the maximum number of I/O requests in an I/O request chain. The node request linked-list length is set according to the worst-case processing speed and the timeout threshold of internal requests; when the number of I/O requests in the I/O request linked list equals the node request linked-list length, the list is put into the low-priority FIFO queue. Thus, even at CPU1's lowest processing efficiency, an internal request can still be taken out of the high-priority FIFO and processed before it times out, avoiding internal-request processing timeouts.
It should be understood that, in the embodiments of the application, the processor 720 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.
Those skilled in the art will understand that the structure of the computer device 700 shown in Fig. 16 does not constitute a limitation on the computer device 700, which may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a storage medium, which is a computer-readable storage medium. In the embodiments of the invention, the program can be stored in a storage medium of a computer system and executed by at least one processor in the computer system, so as to implement the process steps of the embodiments of each of the above methods for guaranteeing request priority.
The computer-readable storage medium may be a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), or any other medium that can store program code.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described generally in terms of function in the above description. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the application.
In the several embodiments provided in the application, it should be understood that the disclosed apparatus and method for guaranteeing request priority may be implemented in other ways. For example, the apparatus embodiments for guaranteeing request priority described above are merely illustrative. For example, the division of the units is only a logical functional division, and there may be other division modes in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
The steps in the methods of the embodiments of the application can be adjusted in order, combined and deleted according to actual needs. The units in the devices of the embodiments of the application can be combined, divided and deleted according to actual needs.
In addition, the functional units in the embodiments of the application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the application, in essence or as regards the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the application.
The above is only a further illustration of the technical content of the invention by means of embodiments, so that readers can understand it more easily, but it does not mean that the embodiments of the invention are limited thereto; any technical extension or re-creation made according to the invention falls within the protection of the invention. The protection scope of the invention is subject to the claims.

Claims (10)

1. A method for guaranteeing request priority, characterized by comprising:
setting a node request linked-list length;
initializing an I/O request linked list and an I/O request count;
acquiring a request;
determining whether the request is an I/O request;
if so, updating the I/O request count, and processing the I/O request linked list according to the relationship between the count and the node request linked-list length;
if not, performing priority processing on the I/O requests and the internal request;
returning to the step of initializing the I/O request linked list and the I/O request count.
2. The method for guaranteeing request priority according to claim 1, characterized in that the step of setting the node request linked-list length comprises the following specific steps:
obtaining the worst-case processing speed of the chip;
obtaining the timeout threshold of internal requests;
obtaining the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
setting the length of the node request linked list carried in each slot of the first-in-first-out queue to the node request linked-list length.
3. The method for guaranteeing request priority according to claim 1, characterized in that the step of updating the I/O request count and processing the I/O request linked list according to the relationship between the count and the node request linked-list length comprises the following specific steps:
adding one to the I/O request count to form a new count;
determining whether the new count equals the node request linked-list length;
if not, returning to the step of acquiring a request;
if so, putting the I/O request linked list into the low-priority first-in-first-out queue, and returning to the step of initializing the I/O request linked list and the I/O request count.
4. The method for guaranteeing request priority according to claim 1, characterized in that the step of performing priority processing on the I/O requests and the internal request comprises the following specific steps:
putting the I/O request linked list into the low-priority first-in-first-out queue;
putting the internal request into the high-priority first-in-first-out queue.
5. The method for guaranteeing request priority according to any one of claims 1 to 4, characterized in that, before the step of acquiring a request, the method further comprises:
cutting node requests to form a node pool.
6. An apparatus for guaranteeing request priority, characterized by comprising:
a length setting unit, configured to set a node request linked-list length;
an initialization unit, configured to initialize an I/O request linked list and an I/O request count;
a request unit, configured to acquire a request;
a request judging unit, configured to determine whether the request is an I/O request;
an I/O request processing unit, configured to, if so, update the I/O request count and process the I/O request linked list according to the relationship between the count and the node request linked-list length;
a priority processing unit, configured to, if not, perform priority processing on the I/O requests and the internal request.
7. The apparatus for guaranteeing request priority according to claim 6, characterized in that the length setting unit comprises:
a speed acquiring module, configured to obtain the worst-case processing speed of the chip;
a threshold acquiring module, configured to obtain the timeout threshold of internal requests;
a length acquiring module, configured to obtain the node request linked-list length according to the worst-case processing speed of the chip and the timeout threshold;
a setting module, configured to set the length of the node request linked list carried in each slot of the first-in-first-out queue to the node request linked-list length.
8. The apparatus for guaranteeing request priority according to claim 6, characterized in that the I/O request processing unit comprises:
a count processing module, configured to add one to the I/O request count to form a new count;
a count judging module, configured to determine whether the new count equals the node request linked-list length;
a list processing module, configured to, if so, put the I/O request linked list into the low-priority first-in-first-out queue.
9. The apparatus for guaranteeing request priority according to claim 6, characterized in that the priority processing unit comprises:
an I/O request placement module, configured to put the I/O request linked list into the low-priority first-in-first-out queue;
an internal request placement module, configured to put the internal request into the high-priority first-in-first-out queue.
10. A computer device, characterized by comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for guaranteeing request priority according to any one of claims 1 to 5.
CN201810716970.XA 2018-07-03 2018-07-03 Method and device for guaranteeing request priority and computer equipment Active CN108984121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810716970.XA CN108984121B (en) 2018-07-03 2018-07-03 Method and device for guaranteeing request priority and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810716970.XA CN108984121B (en) 2018-07-03 2018-07-03 Method and device for guaranteeing request priority and computer equipment

Publications (2)

Publication Number Publication Date
CN108984121A true CN108984121A (en) 2018-12-11
CN108984121B CN108984121B (en) 2021-04-20

Family

ID=64536504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810716970.XA Active CN108984121B (en) 2018-07-03 2018-07-03 Method and device for guaranteeing request priority and computer equipment

Country Status (1)

Country Link
CN (1) CN108984121B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918317A (en) * 2019-03-01 2019-06-21 重庆大学 It is a kind of based on abrasion perception NVM item between abrasion equilibrium method
CN112328178A (en) * 2020-11-05 2021-02-05 苏州浪潮智能科技有限公司 Method and device for processing IO queue full state of solid state disk

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739215A (en) * 2008-11-19 2010-06-16 成都市华为赛门铁克科技有限公司 Method and device for determining input-output scheduling algorithm
CN101944066A (en) * 2009-07-10 2011-01-12 成都市华为赛门铁克科技有限公司 Solid state disk, interface processing method thereof and storage system
US20150220278A1 (en) * 2014-02-05 2015-08-06 Apple Inc. Dynamic io operation timeout assignment for a solid state drive
CN106101022A (en) * 2016-06-15 2016-11-09 珠海迈科智能科技股份有限公司 A kind of data request processing method and system
CN106933495A (en) * 2015-12-30 2017-07-07 华为技术有限公司 A kind of method for reading data, RAID controller and storage device
CN107305473A (en) * 2016-04-21 2017-10-31 华为技术有限公司 The dispatching method and device of a kind of I/O Request
CN107589911A (en) * 2017-09-05 2018-01-16 郑州云海信息技术有限公司 A kind of I O process method and device of SSD cachings

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739215A (en) * 2008-11-19 2010-06-16 成都市华为赛门铁克科技有限公司 Method and device for determining input-output scheduling algorithm
CN101944066A (en) * 2009-07-10 2011-01-12 成都市华为赛门铁克科技有限公司 Solid state disk, interface processing method thereof and storage system
US20150220278A1 (en) * 2014-02-05 2015-08-06 Apple Inc. Dynamic io operation timeout assignment for a solid state drive
CN106933495A (en) * 2015-12-30 2017-07-07 华为技术有限公司 A kind of method for reading data, RAID controller and storage device
CN107305473A (en) * 2016-04-21 2017-10-31 华为技术有限公司 The dispatching method and device of a kind of I/O Request
CN106101022A (en) * 2016-06-15 2016-11-09 珠海迈科智能科技股份有限公司 A kind of data request processing method and system
CN107589911A (en) * 2017-09-05 2018-01-16 郑州云海信息技术有限公司 A kind of I O process method and device of SSD cachings

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918317A (en) * 2019-03-01 2019-06-21 重庆大学 It is a kind of based on abrasion perception NVM item between abrasion equilibrium method
CN112328178A (en) * 2020-11-05 2021-02-05 苏州浪潮智能科技有限公司 Method and device for processing IO queue full state of solid state disk

Also Published As

Publication number Publication date
CN108984121B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN102779075B Method, device and system for scheduling in a multi-processor core system
US10552222B2 (en) Task scheduling method and apparatus on heterogeneous multi-core reconfigurable computing platform
CN103098014B (en) Storage system
CN105528330B Load balancing method and apparatus, cluster and many-core processor
Seelam et al. Virtual I/O scheduler: a scheduler of schedulers for performance virtualization
Guo et al. A framework for providing quality of service in chip multi-processors
CN107656813A (en) The method, apparatus and terminal of a kind of load dispatch
CN104461707B (en) a kind of lock request processing method and device
CN107851039A (en) System and method for resource management
US9858120B2 (en) Modifying memory space allocation for inactive tasks
CN106547612A (en) A kind of multi-task processing method and device
CN104142860A (en) Resource adjusting method and device of application service system
JPH02249055A (en) Multiprocessor system, multiprocessing method and work allocation method
TW200928774A (en) Multicore interface with dynamic task management capability and task loading/offloading method thereof
EP3537281B1 (en) Storage controller and io request processing method
CN108984121A Method, apparatus and computer equipment for guaranteeing request priority
CN109491788A (en) A kind of virtual platform implementation of load balancing and device
CN107368367A (en) Processing method, device and the electronic equipment of resource allocation
CN105701029B Heterogeneous storage optimization method and device
CN111104219A (en) Binding method, device, equipment and storage medium of virtual core and physical core
US20160239421A1 (en) Memory nest efficiency with cache demand generation
JP2015022504A (en) Information processing device, method, and program
CN112783652B (en) Method, device, equipment and storage medium for acquiring running state of current task
CN108255595A Data task scheduling method, device, equipment and readable storage medium
KR101892273B1 (en) Apparatus and method for thread progress tracking

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant