CN105677253A - Optimization method and device for IO instruction processing queue - Google Patents

Optimization method and device for IO instruction processing queue

Info

Publication number
CN105677253A
CN105677253A (application CN201610009605.6A)
Authority
CN
China
Prior art keywords
instruction
thread
processing
processes
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610009605.6A
Other languages
Chinese (zh)
Other versions
CN105677253B (en)
Inventor
殷雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201610009605.6A priority Critical patent/CN105677253B/en
Publication of CN105677253A publication Critical patent/CN105677253A/en
Application granted granted Critical
Publication of CN105677253B publication Critical patent/CN105677253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

The invention discloses a method and device for optimizing an IO instruction processing queue. The method comprises: segmenting the IO instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain IO instruction processing flow segments for the different stages; assigning a different processing thread to each IO instruction processing flow segment; and, for each processing thread, releasing the resources occupied by the thread once the corresponding IO instruction processing flow segment has been executed on it. In this way the method optimizes the IO instruction processing flow and improves IO instruction processing efficiency.

Description

Method and device for optimizing an I/O instruction processing queue
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for optimizing an I/O instruction processing queue.
Background technology
At present, as the IT industry as a whole develops, each newly emerging technology is in most cases built around performance gains, from hardware through software. Moore's Law states that CPU processing speed doubles roughly every 18 months, but in the second decade of the 21st century that pace appears to have slowed. IT industry leaders nevertheless continue to seek performance breakthroughs in computing and to challenge physical limits. In the storage industry specifically, every newly released storage product likewise revolves around serving the I/O requests of front-end servers faster and better.
Traditional I/O instruction processing queues execute linearly: the first-arriving I/O instruction occupies all threads until the entire processing flow has completed, and a later-arriving I/O instruction must wait for the preceding I/O instruction to finish before it can enter the flow. This increases I/O instruction latency, makes I/O instruction processing efficiency too low, and reduces storage performance. Furthermore, in the traditional linear processing mode, when a single IO blocks, the blocked I/O instruction occupies all threads and resources, affecting the overall operation of the storage system.
Summary of the invention
It is an object of the present invention to provide a method and device for optimizing an I/O instruction processing queue, so as to optimize the I/O instruction processing flow and improve I/O instruction processing efficiency.
To solve the above technical problem, the present invention provides a method for optimizing an I/O instruction processing queue, the method comprising:
segmenting the I/O instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain I/O instruction processing flow segments for the different stages;
assigning a different processing thread to each I/O instruction processing flow segment;
for each processing thread, releasing the resources occupied by the thread after the corresponding I/O instruction processing flow segment has been executed on it.
Preferably, the different stages include an instruction stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage, and a result return stage.
Preferably, the method further comprises:
monitoring all allocated processing threads in real time, and adjusting the number of processing threads according to their utilization.
Preferably, the method further comprises:
when an I/O instruction becomes abnormal, having the abnormal I/O instruction occupy only the processing thread corresponding to the flow segment being executed, while subsequent I/O instructions are executed normally.
Preferably, the method further comprises:
executing the first-arriving I/O instruction, releasing the corresponding processing thread immediately after each stage completes, and using the processing threads released by the first-arriving I/O instruction to execute the processing of later-arriving I/O instructions.
The present invention also provides a device for optimizing an I/O instruction processing queue, the device comprising:
a segmentation module, configured to segment the I/O instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain I/O instruction processing flow segments for the different stages;
a distribution module, configured to assign a different processing thread to each I/O instruction processing flow segment;
a release module, configured to release, for each processing thread, the resources occupied by the thread after the corresponding I/O instruction processing flow segment has been executed on it.
Preferably, the different stages include an instruction stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage, and a result return stage.
Preferably, the device further comprises:
a monitoring module, configured to monitor all allocated processing threads in real time and adjust the number of processing threads according to their utilization.
Preferably, the device further comprises:
an exception handling module, configured so that when an I/O instruction becomes abnormal, the abnormal I/O instruction occupies only the processing thread corresponding to the flow segment being executed, while subsequent I/O instructions are executed normally.
Preferably, the device further comprises:
an instruction execution module, configured to execute the first-arriving I/O instruction, release the corresponding processing thread immediately after each stage completes, and use the processing threads released by the first-arriving I/O instruction to execute the processing of later-arriving I/O instructions.
In the method and device for optimizing an I/O instruction processing queue provided by the present invention, the I/O instruction processing flow of the SCSI Target in a storage device is segmented according to its different stages to obtain flow segments for each stage; a different processing thread is assigned to each flow segment; and each processing thread releases the resources it occupies after executing its corresponding flow segment. Thus, in contrast to the traditional linear execution of an I/O instruction processing queue, the present invention segments the I/O instruction processing flow and hands the segments to different threads; each thread releases its resources once its flow segment completes, at which point the thread is available again. A later-arriving I/O instruction therefore need not wait for a preceding instruction to finish the entire flow: as soon as a processing thread releases its resources, the later instruction can use the freed thread to carry out its processing. This improves I/O instruction processing efficiency and in turn storage performance, thereby optimizing the I/O instruction processing flow.
Brief description of the drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for optimizing an I/O instruction processing queue provided by the present invention;
Fig. 2 is a structural diagram of a device for optimizing an I/O instruction processing queue provided by the present invention.
Detailed description of the invention
The core of the present invention is to provide a method and device for optimizing an I/O instruction processing queue, so as to optimize the I/O instruction processing flow and improve I/O instruction processing efficiency.
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, which is a flowchart of a method for optimizing an I/O instruction processing queue provided by the present invention, the method comprises:
S11: segmenting the I/O instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain I/O instruction processing flow segments for the different stages.
Here, the different stages include an instruction stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage, and a result return stage.
The I/O instruction processing flow of the SCSI Target needs to be segmented, and the segmentation follows the different stages of SCSI command execution: instruction intake, parsing, memory allocation, data reception, local execution, data return, and result return. Dividing the processing flow refines the management granularity of SCSI commands, makes I/O instructions easier to manage, and improves I/O instruction processing efficiency.
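The seven stages above can be written down as a simple pipeline definition. A minimal sketch in Python (the patent specifies no implementation language, and the `IOStage` names are illustrative, not from the source):

```python
from enum import Enum, auto

class IOStage(Enum):
    """The seven execution stages of a SCSI command named in the patent."""
    INSTRUCTION   = auto()  # instruction intake / queueing
    PARSE         = auto()  # parse the SCSI command
    MEM_ALLOC     = auto()  # allocate buffers for the IO
    DATA_RECV     = auto()  # receive data from the initiator
    LOCAL_EXEC    = auto()  # execute against the backing store
    RETURN_DATA   = auto()  # send data back to the initiator
    RETURN_RESULT = auto()  # send the final status back

# The segmented processing flow is the ordered list of stages.
PIPELINE = list(IOStage)
```

Each flow segment in S11 then corresponds to one member of `PIPELINE`, which S12 can map to its own thread group.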
S12: assigning a different processing thread to each I/O instruction processing flow segment.
After the I/O instruction processing flow has been segmented, a certain number of processing threads can be allocated to each of these stages, with each group of threads organizing its own stage-specific processing function.
S13: for each processing thread, releasing the resources occupied by the thread after the corresponding I/O instruction processing flow segment has been executed on it.
The threads apply for and use resources independently of one another, and each thread releases its resources after completing its own stage.
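One way to realize "a thread group per stage, with the thread released after its segment" is to give each stage its own queue and worker threads; a worker runs only its stage's handler and then hands the command downstream. This structure is an assumption for illustration (the patent describes the behavior, not code), and `StageWorkerPool` is a hypothetical name:

```python
import queue
import threading

class StageWorkerPool:
    """One pool of worker threads per pipeline stage (hypothetical sketch).

    Each worker takes an IO command from its own stage queue, runs only
    that stage's handler, then hands the command to the next stage's
    queue. The worker is free again immediately, so a later-arriving
    command never waits for an earlier one to clear the whole flow."""

    def __init__(self, name, handler, next_pool=None, num_threads=2):
        self.name = name
        self.handler = handler        # the work done in this stage only
        self.next_pool = next_pool    # downstream stage, or None if last
        self.inbox = queue.Queue()    # this stage's private IO queue
        self.threads = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(num_threads)]
        for t in self.threads:
            t.start()

    def submit(self, io_cmd):
        self.inbox.put(io_cmd)

    def _run(self):
        while True:
            io_cmd = self.inbox.get()
            self.handler(io_cmd)               # execute this stage's segment
            if self.next_pool is not None:
                self.next_pool.submit(io_cmd)  # hand off downstream
            self.inbox.task_done()             # this thread is free again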
It can be seen that, in contrast to the traditional linear execution of an I/O instruction processing queue, the method segments the I/O instruction processing flow and hands the segments to different threads; each thread releases its resources once its flow segment completes, at which point the thread is available again. A later-arriving I/O instruction therefore need not wait for a preceding instruction to finish the entire flow: as soon as a processing thread releases its resources, the later instruction can use the freed thread to carry out its processing. This improves I/O instruction processing efficiency and in turn storage performance, thereby optimizing the I/O instruction processing flow.
Moreover, dividing the processing flow refines the management granularity of SCSI commands, makes I/O instructions easier to manage, and improves I/O instruction processing efficiency. At the same time, segmented processing of the I/O instruction flow also helps improve small-block read/write performance and balance large-block read/write performance.
It should be noted that the present invention is mainly directed at the IO interaction between a server and a storage device and at the I/O processing flow within the storage device. The I/O instruction processing flow referred to herein specifically means the processing flow of I/O instructions when the SCSI protocol is used for data interaction between the server and the storage; the objects being optimized are the I/O processing queue and the I/O instruction processing flow of the SCSI Target in the storage device. The present invention is an optimization of the SCSI Target made to improve storage performance.
Preferably, the method further comprises the following step:
S21: monitoring all allocated processing threads in real time, and adjusting the number of processing threads according to their utilization.
The utilization of the various threads in the system can be monitored continuously, and the number of threads adjusted according to whether thread resources are strained: threads with a high idle rate are reduced in number, while thread types with high utilization are increased. Dynamic allocation of threads can effectively improve system resource utilization and thereby improve storage system performance.
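The adjustment policy itself is left open by the patent ("reduce high-idle threads, increase high-utilization threads"). One plausible per-stage policy uses queue backlog as a utilization proxy; the thresholds here are invented for illustration:

```python
def target_thread_count(backlog, current, min_threads=1, max_threads=8):
    """Pick the next thread count for one stage from its queue backlog.

    Hypothetical policy for the patent's 'reduce high-idle threads,
    increase high-utilization threads'; both the thresholds and the use
    of backlog as the utilization signal are assumptions."""
    if backlog > 2 * current and current < max_threads:
        return current + 1   # stage is falling behind: grow its pool
    if backlog == 0 and current > min_threads:
        return current - 1   # stage is idle: shrink its pool
    return current           # utilization looks balanced: no change
```

A monitor thread would run this periodically over every stage pool and add or retire workers accordingly.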
By segmenting the I/O instruction processing flow, handing the segments to different threads, and at the same time dynamically adjusting the number of processing threads and the priority of I/O instructions at the different stages, the method optimizes the I/O instruction processing flow, improves I/O instruction processing efficiency, and in turn improves storage performance.
The method further comprises the following step:
S31: when an I/O instruction becomes abnormal, the abnormal I/O instruction occupies only the processing thread corresponding to the flow segment being executed, while subsequent I/O instructions are executed normally.
Segmented processing of I/O instructions effectively limits the impact of an abnormal IO on the system as a whole. In the traditional linear processing mode, if an abnormal IO hangs a thread, the system can neither contain the scope of the abnormal I/O instruction's influence nor respond to subsequent I/O instructions, so it exhibits a "deadlock"-like symptom. In the present invention, an abnormal I/O instruction occupies only the thread of its corresponding stage; the system can still respond to subsequent IOs, and, because the number of threads can be adjusted dynamically, subsequent IOs can continue to execute.
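Confining an abnormal IO to its own stage can be sketched as a wrapper around each stage handler, so a failing command gives up its worker instead of hanging the queue. This is a hypothetical mechanism; the patent states the isolation property but not how it is achieved:

```python
def safe_stage_handler(handler):
    """Wrap a stage handler so that an abnormal IO fails in place.

    On an exception the worker reports the failure and returns False;
    the thread is immediately free for the next command, so subsequent
    IOs keep flowing instead of stalling behind the bad one."""
    def wrapped(io_cmd):
        try:
            handler(io_cmd)
            return True
        except Exception as exc:
            print(f"IO {io_cmd!r} aborted at this stage: {exc}")
            return False
    return wrapped
```

In the pipeline, each stage's handler would be wrapped this way before its worker threads start.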
The method further comprises the following step:
S41: executing the first-arriving I/O instruction, releasing the corresponding processing thread immediately after each stage completes, and using the processing threads released by the first-arriving I/O instruction to execute the processing of later-arriving I/O instructions.
Because I/O instructions of different block sizes differ considerably in the time and system resources they require, in the traditional linear processing mode the first-arriving I/O instruction occupies all threads until the entire flow completes, and later-arriving I/O instructions must wait for it to finish before entering the flow, which increases I/O instruction latency. In the present invention, the first-arriving I/O instruction releases the corresponding thread as soon as it completes each stage, so a later-arriving I/O instruction can begin its processing as early as possible. At the same time, a small-block I/O instruction passes through some processing stages far faster than a large-block one, so once a small-block instruction finishes an earlier stage it can start the following stage ahead of the large-block IO, reducing time lost to waiting and preventing the timeout of an individual large-block I/O instruction from causing a large number of ordinary I/O instructions to time out.
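The latency claim can be checked with a toy timing model: each IO costs `size` time units per stage; a linear queue makes every IO wait for all earlier IOs to finish the whole flow, while one worker per stage overlaps them. The per-stage FIFO here is a simplification; the patent additionally allows small IOs to overtake large ones within a stage:

```python
def linear_finish_times(sizes, stages=3):
    """Traditional linear queue: an IO holds every thread through all
    stages, so IO i finishes only after all earlier IOs fully finish."""
    t, finishes = 0, []
    for size in sizes:
        t += size * stages
        finishes.append(t)
    return finishes

def pipelined_finish_times(sizes, stages=3):
    """One worker per stage: stage k of IO i starts once IO i has left
    stage k-1 AND IO i-1 has left stage k (classic pipeline recurrence)."""
    n = len(sizes)
    finish = [[0] * stages for _ in range(n)]
    for i in range(n):
        for k in range(stages):
            own_ready = finish[i][k - 1] if k > 0 else 0
            stage_free = finish[i - 1][k] if i > 0 else 0
            finish[i][k] = max(own_ready, stage_free) + sizes[i]
    return [row[-1] for row in finish]

# A large IO (size 4) followed by a small one (size 1), over 3 stages:
# linear queue -> [12, 15]; pipelined -> [12, 13].
```

The small IO's latency drops from 15 to 13 with no cost to the large IO, illustrating the reduced waiting the text describes.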
On the basis of the above preferred steps, the method segments the I/O instruction processing flow and hands the segments to different threads, with each thread releasing its resources once its flow segment completes; a later-arriving I/O instruction therefore need not wait for a preceding one to finish the entire flow but can use a freed thread as soon as its resources are released. In this way the method improves I/O instruction processing efficiency, and in turn storage performance, relative to the traditional linearly executing I/O instruction processing queue.
The method also, by segmenting the I/O instruction processing flow, handing the segments to different threads, and dynamically adjusting the number of processing threads and the priority of I/O instructions at the different stages, achieves the purpose of optimizing the I/O instruction processing flow and improving I/O instruction processing efficiency, and hence storage performance. Dividing the processing flow refines the management granularity of SCSI commands, eases I/O instruction management, and improves processing efficiency. Dynamic thread allocation effectively improves system resource utilization and thus storage system performance. Segmented I/O instruction processing also helps improve small-block read/write performance and balance large-block read/write performance, and it effectively limits the impact of abnormal IOs on the system as a whole.
Furthermore, once I/O instructions are segmented by the method, it becomes convenient to classify, manage, and process I/O instructions at their different execution stages, and different optimizations can be applied to different I/O processing stages. After the method is implemented, when a single IO blocks, the blocked I/O instruction occupies only the threads and resources of the corresponding stage, without affecting the overall operation of the storage. In addition, the method improves small-block read/write performance and balances large-block read/write performance, thereby improving storage performance. Further effects include: effectively reducing the number of interrupt requests used; avoiding risks such as system hangs or deadlocks that may arise from interrupt-request handling; and, through the release of interrupt resources, effectively increasing system read/write speed under heavy load. In short, the method effectively improves the efficiency with which the storage system processes I/O instructions, optimizes the system's IO management function, improves system read/write performance, and effectively isolates the impact of abnormal I/O instructions on the storage system as a whole.
In addition, to implement the method, the storage vendor modifies the processing flow of the SCSI Target. The main implementation steps are: defining the stages of I/O instruction execution; creating different processing threads for the different execution stages; and creating a thread utilization monitoring mechanism that dynamically adjusts the numbers of the different threads according to the monitoring results. In the traditional linear processing mode, I/O instruction management is coarse, processing efficiency is low, and an abnormal I/O instruction drags down the efficiency of the whole system. The approach taken by the present invention refines the management granularity of I/O instructions, increasing their manageability; improves resource utilization, boosting storage system read/write performance; and effectively isolates abnormal I/O instructions.
Referring to Fig. 2, which is a structural diagram of a device for optimizing an I/O instruction processing queue provided by the present invention, the device comprises:
a segmentation module 101, configured to segment the I/O instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain I/O instruction processing flow segments for the different stages;
a distribution module 102, configured to assign a different processing thread to each I/O instruction processing flow segment;
a release module 103, configured to release, for each processing thread, the resources occupied by the thread after the corresponding I/O instruction processing flow segment has been executed on it.
Here, the different stages include an instruction stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage, and a result return stage.
Preferably, the device further comprises:
a monitoring module, configured to monitor all allocated processing threads in real time and adjust the number of processing threads according to their utilization.
The device further comprises:
an exception handling module, configured so that when an I/O instruction becomes abnormal, the abnormal I/O instruction occupies only the processing thread corresponding to the flow segment being executed, while subsequent I/O instructions are executed normally.
The device further comprises:
an instruction execution module, configured to execute the first-arriving I/O instruction, release the corresponding processing thread immediately after each stage completes, and use the processing threads released by the first-arriving I/O instruction to execute the processing of later-arriving I/O instructions.
In the device for optimizing an I/O instruction processing queue provided by the present invention, the segmentation module segments the I/O instruction processing flow of the SCSI Target in a storage device according to its different stages to obtain flow segments for each stage; the distribution module assigns a different processing thread to each flow segment; and the release module, for each processing thread, releases the resources occupied by the thread after the corresponding flow segment has been executed on it. Thus, in contrast to the traditional linear execution of an I/O instruction processing queue, the device segments the I/O instruction processing flow and hands the segments to different threads; each thread releases its resources once its flow segment completes, at which point the thread is available again. A later-arriving I/O instruction therefore need not wait for a preceding instruction to finish the entire flow: as soon as a processing thread releases its resources, the later instruction can use the freed thread to carry out its processing. This improves I/O instruction processing efficiency and in turn storage performance, thereby optimizing the I/O instruction processing flow.
The method and device for optimizing an I/O instruction processing queue provided by the present invention have been described in detail above. Specific examples are used herein to set forth the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core idea. It should be pointed out that those skilled in the art can make certain improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A method for optimizing an I/O instruction processing queue, characterized by comprising:
segmenting the I/O instruction processing flow of the SCSI Target in a storage device according to the different stages of that flow, to obtain I/O instruction processing flow segments for the different stages;
assigning a different processing thread to each I/O instruction processing flow segment;
for each processing thread, releasing the resources occupied by the thread after the corresponding I/O instruction processing flow segment has been executed on it.
2. The method according to claim 1, characterized in that the different stages include an instruction stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage, and a result return stage.
3. The method according to claim 1, characterized by further comprising:
monitoring all allocated processing threads in real time, and adjusting the number of processing threads according to their utilization.
4. The method according to claim 1, characterized by further comprising:
when an I/O instruction becomes abnormal, having the abnormal I/O instruction occupy only the processing thread corresponding to the flow segment being executed, while subsequent I/O instructions are executed normally.
5. The method according to any one of claims 1 to 4, characterized by further comprising:
executing the first-arriving I/O instruction, releasing the corresponding processing thread immediately after each stage completes, and using the processing threads released by the first-arriving I/O instruction to execute the processing of later-arriving I/O instructions.
6. the optimization device of an I/O instruction process queue, it is characterised in that including:;
Segmentation module, for according to the different phase of the I/O instruction handling process of SCSITarget in storage device, I/O instruction handling process being carried out segmentation, the I/O instruction obtaining different phase processes segmentation flow process;
Distribution module, for processing, for each I/O instruction, the process thread that the distribution of segmentation flow process is different;
Release module, for processing thread for each, after processing the I/O instruction having performed correspondence on thread and processing segmentation flow process, release processes the resource shared by thread.
7. The device of claim 6, characterized in that the different stages include an instruction processing stage, a parsing stage, a memory allocation stage, a data reception stage, a local execution stage, a data return stage and a result return stage.
8. The device of claim 6, characterized by further comprising:
a monitoring module, configured to monitor all allocated processing threads in real time and adjust the number of processing threads according to their utilization.
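The monitoring of claims 3 and 8 amounts to sampling pool utilization and growing or shrinking the thread count accordingly. The thresholds (80% and 20%) and the pool mechanics below are illustrative assumptions, not taken from the patent:

```python
import queue
import threading

# Thresholds (80% / 20%) and the pool mechanics are illustrative
# assumptions; the sketch only tracks thread counts and does not pin
# which specific worker retires on shrink.
class AdaptivePool:
    def __init__(self, tasks, min_workers=1, max_workers=8):
        self.tasks = tasks
        self.min_workers, self.max_workers = min_workers, max_workers
        self.busy = 0                  # workers currently running a task
        self.lock = threading.Lock()
        self.workers = []
        for _ in range(min_workers):
            self._spawn()

    def _spawn(self):
        t = threading.Thread(target=self._run, daemon=True)
        self.workers.append(t)
        t.start()

    def _run(self):
        while True:
            fn = self.tasks.get()
            if fn is None:             # retire sentinel from adjust()
                break
            with self.lock:
                self.busy += 1
            fn()
            with self.lock:
                self.busy -= 1

    def adjust(self):
        # Utilization = busy workers / total workers; grow when the pool
        # is saturated, shrink when it is mostly idle.
        with self.lock:
            util = self.busy / max(len(self.workers), 1)
        if util > 0.8 and len(self.workers) < self.max_workers:
            self._spawn()
        elif util < 0.2 and len(self.workers) > self.min_workers:
            self.tasks.put(None)       # ask one idle worker to exit
            self.workers.pop()

pool = AdaptivePool(queue.Queue(), min_workers=2, max_workers=4)
pool.busy = 2              # simulate a fully busy pool for the demo
pool.adjust()              # utilization 100% -> grow
print(len(pool.workers))   # 3
pool.busy = 0
pool.adjust()              # utilization 0% -> shrink
print(len(pool.workers))   # 2
```

In a real deployment `adjust()` would be called periodically by the monitoring thread rather than driven by hand as in this demo.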
9. The device of claim 6, characterized by further comprising:
an exception handling module, configured to, when an exception occurs in an I/O instruction, have the processing thread corresponding to the processing segment in which the exception occurred continue to execute subsequent I/O instructions normally.
10. The device of any one of claims 6 to 9, characterized by further comprising:
an instruction execution module, configured to execute the I/O instruction that arrives first, release the corresponding processing thread immediately after each stage completes, and use the threads released by the first-arriving I/O instruction to execute the processing of I/O instructions that arrive later.
CN201610009605.6A 2016-01-07 2016-01-07 A kind of optimization method and device of I/O instruction processing queue Active CN105677253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610009605.6A CN105677253B (en) 2016-01-07 2016-01-07 A kind of optimization method and device of I/O instruction processing queue

Publications (2)

Publication Number Publication Date
CN105677253A true CN105677253A (en) 2016-06-15
CN105677253B CN105677253B (en) 2018-09-18

Family

ID=56299373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610009605.6A Active CN105677253B (en) 2016-01-07 2016-01-07 A kind of optimization method and device of I/O instruction processing queue

Country Status (1)

Country Link
CN (1) CN105677253B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992433A (en) * 2019-04-11 2019-07-09 苏州浪潮智能科技有限公司 A kind of distribution tgt communication optimization method, apparatus, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404469A (en) * 1992-02-25 1995-04-04 Industrial Technology Research Institute Multi-threaded microprocessor architecture utilizing static interleaving
US20080005525A1 (en) * 2006-06-29 2008-01-03 Rosenbluth Mark B Partitioning program memory
US7324520B2 (en) * 2002-07-03 2008-01-29 Intel Corporation Method and apparatus to process switch traffic
CN103810048A (en) * 2014-03-11 2014-05-21 国家电网公司 Automatic adjusting method and device for thread number aiming to realizing optimization of resource utilization
CN103984523A (en) * 2013-02-08 2014-08-13 上海芯豪微电子有限公司 Multi-transmission-command processing system and method

Similar Documents

Publication Publication Date Title
US10331492B2 (en) Techniques to dynamically allocate resources of configurable computing resources
CN106371894B (en) Configuration method and device and data processing server
EP3073373B1 (en) Method for interruption affinity binding of virtual network interface card, and computer device
US10176014B2 (en) System and method for multithreaded processing
US20120166498A1 (en) Garbage collection in a multiple virtual machine environment
US9513953B2 (en) Reducing virtual machine suspension time in checkpoint system
CN112000455B (en) Multithreading task processing method and device and electronic equipment
US20160124762A1 (en) Guest idle based vm request completion processing
US20150113252A1 (en) Thread control and calling method of multi-thread virtual pipeline (mvp) processor, and processor thereof
US9632958B2 (en) System for migrating stash transactions
US10831539B2 (en) Hardware thread switching for scheduling policy in a processor
CN104156255A (en) Virtual machine migration method, virtual machine migration device and source physical host
US8352702B2 (en) Data processing system memory allocation
Zeng et al. An improved xen credit scheduler for i/o latency-sensitive applications on multicores
US20120144389A1 (en) Optimizing virtual image deployment for hardware architecture and resources
US8671232B1 (en) System and method for dynamically migrating stash transactions
US20200272526A1 (en) Methods and systems for automated scaling of computing clusters
WO2016202154A1 (en) Gpu resource allocation method and system
US10387178B2 (en) Idle based latency reduction for coalesced interrupts
US8977752B2 (en) Event-based dynamic resource provisioning
WO2017180032A1 (en) Process scheduling in a processing system having at least one processor and shared hardware resources
CN103955397A (en) Virtual machine scheduling multi-strategy selection method based on micro-architecture perception
CN107402807A (en) Method, system and the processor of multitask execution efficiency are effectively lifted in computer systems
CN105677253A (en) Optimization method and device for IO instruction processing queue
EP4163795A1 (en) Techniques for core-specific metrics collection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant